
Antagonistic AI

source link: https://arxiv.org/abs/2402.07350

Computer Science > Artificial Intelligence

[Submitted on 12 Feb 2024]

The vast majority of discourse around AI development assumes that subservient, "moral" models aligned with "human values" are universally beneficial -- in short, that good AI is sycophantic AI. We explore the shadow of the sycophantic paradigm, a design space we term antagonistic AI: AI systems that are disagreeable, rude, interrupting, confrontational, challenging, etc. -- embedding opposite behaviors or values. Far from being "bad" or "immoral," we consider whether antagonistic AI systems may sometimes have benefits to users, such as forcing users to confront their assumptions, build resilience, or develop healthier relational boundaries. Drawing from formative explorations and a speculative design workshop where participants designed fictional AI technologies that employ antagonism, we lay out a design space for antagonistic AI, articulating potential benefits, design techniques, and methods of embedding antagonistic elements into user experience. Finally, we discuss the many ethical challenges of this space and identify three dimensions for the responsible design of antagonistic AI -- consent, context, and framing.
Comments: 17 pages, 1 figure, 5 tables
Subjects: Artificial Intelligence (cs.AI); Human-Computer Interaction (cs.HC)
ACM classes: I.2.0; J.0; K.4.0
Cite as: arXiv:2402.07350 [cs.AI]
  (or arXiv:2402.07350v1 [cs.AI] for this version)
  https://doi.org/10.48550/arXiv.2402.07350

Submission history

From: Ian Arawjo [view email]
[v1] Mon, 12 Feb 2024 00:44:37 UTC (492 KB)
