
IUI '26 Workshop

MIRAGE

Misleading Impacts Resulting from AI Generated Explanations

Workshop @ IUI 2026

March 23, 2026



Explanations from AI systems can illuminate, yet they can also misguide. This half-day MIRAGE workshop at IUI confronts the Explainability Pitfalls and Dark Patterns embedded in AI-generated explanations. Evidence now shows that explanations may inflate unwarranted trust, warp mental models, and obscure power asymmetries, even when designers intend no harm.

We divide these negative effects of XAI into Dark Patterns (DPs), where AI explanations are intentionally designed to steer users toward states that benefit the AI system (e.g., placebo explanations that inflate trust), and Explainability Pitfalls (EPs), where unanticipated effects of AI explanations manifest even when there is no intention to manipulate users. The negative effects of explanations extend to the propagation of errors across models (model risks), over-reliance on AI (human-interaction risks), and a false sense of security (systemic risks).

We convene an interdisciplinary group of researchers and practitioners to define, detect, and defuse these hazards. By shifting the focus from making explanations to making explanations safe, MIRAGE propels IUI toward an accountable, human-centered AI future.


Workshop Schedule

14:00 Welcome and introductions
14:30 Highlight talks by accepted authors + panel Q&A
  • Eda Ismail-Tsaous, Celine Spannagl and Ute Schmid: Examples of Null Effects and Explainability Pitfalls in XAI User Studies
  • Rui Pedro Porfírio, Rui Neves Madeira and Pedro Albuquerque Santos: Situational Vulnerability and Social Firewalls: Uncovering Potential Explainability Pitfalls in Precision Viticulture
  • Natalie Friedman, Lifei Wang, Chengchao Zhu, Zeshu Zhu, Adelaide Nyanyo, Kevin Weatherwax and Joy Mountford: Not Too Short, Not Too Long: How LLM Response Length Shapes People’s Critical Thinking in Error Detection
  • Ilya Ilyankou, Stefano Cavazzi and James Haworth: The Scenic Route to Deception: Dark Patterns and Explainability Pitfalls in Conversational Navigation
15:30 Coffee break
16:00 Highlight talks by accepted authors + panel Q&A
  • Luca Deck, Anton Hummel, Paula Ziethmann and Niklas Kühl: 5(0) Shades of Wrong: Disentangling the Wrongness of AI Explanations
  • Ariful Islam Anik and Andrea Bunt: Advancing Pitfall-Aware Explanation Design for Human-Centered AI
16:30 Activity to map the research area – breakout groups
  • What dark patterns and pitfalls could emerge from Generative AI explanations?
  • What pressing investigations into dark patterns and pitfalls are necessary in the next 12 months?
  • Your own!
17:15 Next Steps
17:30 Close

Important Dates

October 10, 2025: 1st call for papers
December 19, 2025: Paper submission deadline
February 2, 2026: Acceptance notification
February 6, 2026: Registration deadline for authors
February 16, 2026: Camera-ready CEUR papers due to workshop chairs
March 23, 2026: Anticipated workshop date at IUI 2026

Scope and Topics

Our workshop will bring together interdisciplinary researchers and practitioners in HCI and AI to survey the state of the art in investigating and measuring Dark Patterns (DPs) and Explainability Pitfalls (EPs), and to develop solutions for avoiding or mitigating them.

Topics covered in this workshop include (but are not limited to):

  • Theoretical work defining DPs and/or EPs
  • Case studies of DPs and EPs
  • Operationalizing the EP taxonomy: turning the four fault lines into designer "pitfall prompt-cards"
  • Investigations of AI explanation user interface (UI) designs associated with EPs or DPs
  • XAI strategies for discriminative or generative AI which encourage (or discourage) DPs and/or EPs
  • Research into possible negative effects of explanations, including downstream effects
  • Research and possible design solutions into mitigations of EPs
  • Evaluation strategies for explanations that uncover possible DPs and EPs
  • Evidence of Null Effects of providing explanations

Submission Information

Submission types include:

  • position papers summarizing the authors' existing research in the area and how it relates to the workshop theme,
  • papers offering an industrial perspective or real-world approach to the workshop theme,
  • papers that review the related literature and offer a new perspective, and
  • papers that describe work-in-progress research projects.

We invite submissions of 2-8 pages (excluding references) to this workshop. Prepare your submission using the latest ACM templates: the Word Submission Template, or the ACM LaTeX template with \documentclass[manuscript,review]{acmart}. Please note that your submission does not need to be anonymized.
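
For LaTeX authors, a minimal starting point might look like the sketch below. The title, author, affiliation, and bibliography file are placeholders, and the official ACM template documentation covers the full metadata requirements (e.g., CCS concepts and keywords):

    % Minimal acmart skeleton for a MIRAGE submission (illustrative only;
    % all names below are placeholders -- replace with your own details).
    \documentclass[manuscript,review]{acmart}

    \begin{document}

    \title{Your Paper Title}

    \author{First Author}
    \affiliation{%
      \institution{Institution Name}
      \city{City}
      \country{Country}}
    \email{author@example.org}

    % In acmart, the abstract must appear before \maketitle.
    \begin{abstract}
      A short summary of the submission.
    \end{abstract}

    \maketitle

    \section{Introduction}
    Body text begins here.

    % References do not count toward the 2-8 page limit.
    \bibliographystyle{ACM-Reference-Format}
    \bibliography{references}

    \end{document}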

All submissions will be peer-reviewed by our program committee. Accepted submissions must be presented at the workshop; please note that the workshop is in-person only. Submit through EasyChair: MIRAGE2026.

Organizers


Program Committee

We are grateful to the following people for helping make the MIRAGE workshop a success:

  • Krzysztof Gajos (Harvard University, USA)
  • Jasmina Gajcin (IBM Research)
  • Marios Constantinides (CYENS Centre of Excellence, Cyprus)
  • Margaret Burnett (Oregon State University, USA)
  • Patrick Song (Harvard University, USA)
  • Eoin Delaney (Trinity College Dublin, Ireland)
  • Susanne Hindennach (University of Stuttgart, Germany)