MIRAGE

Misleading Impacts Resulting from AI Generated Explanations

Workshop @ IUI 2026

March 23, 2026 (Anticipated)


Explanations from AI systems can illuminate, yet they can misguide. This full-day MIRAGE workshop at IUI confronts the Explainability Pitfalls and Dark Patterns embedded in AI-generated explanations. Evidence now shows that explanations may inflate unwarranted trust, warp mental models, and obscure power asymmetries—even when designers intend no harm.

We distinguish two kinds of negative effects of XAI: Dark Patterns (DPs), where AI explanations are intentionally designed to achieve states desirable for the AI system, e.g., placebo explanations that increase trust in the system, and Explainability Pitfalls (EPs), where unanticipated effects of AI explanations manifest even when there is no intention to manipulate users. The negative effects of explanations extend to the propagation of errors across models (model risks), over-reliance on AI (human-interaction risks), and a false sense of security (systemic risks).

We convene an interdisciplinary group of researchers and practitioners to define, detect, and defuse these hazards. By shifting the focus from making explanations to making explanations safe, MIRAGE propels IUI toward an accountable, human-centered AI future.


Important Dates

  • October 10, 2025: 1st call for papers
  • December 19, 2025: Paper submission deadline
  • February 2, 2026: Acceptance notification
  • February 16, 2026: Camera-ready CEUR papers to workshop chairs
  • March 23, 2026: Anticipated workshop date at IUI 2026

Scope and Topics

Our workshop will bring together interdisciplinary researchers and practitioners in HCI and AI to gather the state of the art in investigating and measuring Dark Patterns (DPs) and Explainability Pitfalls (EPs), and to offer solutions for avoiding or mitigating DPs and EPs.

Topics covered in this workshop include (but are not limited to):

  • Theoretical work defining DPs and/or EPs
  • Case studies of DPs and EPs
  • Operationalizing the EP taxonomy—turning the four fault-lines into designer "pitfall prompt-cards"
  • Investigations of AI explanation user interface (UI) designs associated with EPs or DPs
  • XAI strategies for discriminative or generative AI which encourage (or discourage) DPs and/or EPs
  • Research into possible negative effects of explanations, including downstream effects
  • Research into mitigations of EPs, including possible design solutions
  • Evaluation strategies for explanations that uncover possible DPs and EPs
  • Evidence of Null Effects of providing explanations

Submission Information

Submission types include:

  • Position papers summarizing authors' existing research in the area and how it relates to the workshop theme
  • Papers offering an industrial perspective or real-world approach to the workshop theme
  • Papers that review the related literature and offer a new perspective
  • Papers that describe work-in-progress research projects

We invite submissions of 2-8 pages (excluding references). Prepare your submission using the latest ACM templates: the Word Submission Template, or the ACM LaTeX template with \documentclass[manuscript,review]{acmart}. Please note that your submission does not need to be anonymized.
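For LaTeX authors, a minimal skeleton might look like the following. This is only a sketch: the \documentclass line comes from this call, while the title, author, affiliation, and bibliography details are illustrative placeholders to be replaced with your own; consult the current acmart documentation for full details.

    \documentclass[manuscript,review]{acmart}

    % Basic metadata (placeholders; replace with your own details)
    \title{Your MIRAGE 2026 Submission Title}
    \author{First Author}
    \affiliation{%
      \institution{Your Institution}
      \country{Your Country}}
    \email{first.author@example.org}

    \begin{document}
    \maketitle

    % Body of your 2-8 page submission (references are excluded from the page count)

    \bibliographystyle{ACM-Reference-Format}
    \bibliography{references} % assumes a references.bib file alongside the source

    \end{document}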

All submissions will be peer-reviewed by our program committee. Accepted submissions must be presented at the workshop; please note that the workshop is in-person only. Submissions are through EasyChair: MIRAGE2026.

Organizers


Program Committee

We are grateful to the following people for helping make the MIRAGE workshop a success:

  • Krzysztof Gajos (Harvard University, USA)
  • Jasmina Gajcin (IBM Research)
  • Marios Constantinides (CYENS Centre of Excellence, Cyprus)
  • Margaret Burnett (Oregon State University, USA)
  • Patrick Song (Harvard University, USA)
  • Eoin Delaney (Trinity College Dublin, Ireland)
  • Susanne Hindennach (University of Stuttgart, Germany)

Workshop Schedule

  • 09:00 Welcome and introductions
  • 09:30 Keynote talk & Q&A
  • 10:15 Coffee break
  • 10:30 Highlight talks by accepted authors
  • 11:30 Lightning talks by accepted authors
  • 12:15 Lunch
  • 13:30 Activity to map the research area
  • 15:00 Coffee break
  • 15:30 Panel and Q&A
  • 16:30 Wrap-up
  • 17:00 Close