Publications on Explainable AI

Assessing Satisfaction in and Understanding of a Collaborative Explainable AI (CXAI) System through User Studies

Explaining Explanation for “Explainable AI”

Explaining Explanation, Part 1: Theoretical Foundations

Explaining Explanation, Part 2: Empirical Foundations

Modeling the Process by Which People Try to Explain Complex Things to Others

Measures for explainable AI: Explanation goodness, user satisfaction, mental models, curiosity, trust, and human-AI performance

Psychology and AI at a Crossroads: How Might Complex Systems Explain Themselves?

“Minimum Necessary Rigor” in empirically evaluating human–AI work systems

Evaluating machine-generated explanations: a “Scorecard” method for XAI measurement science

Explainable AI: roles and stakeholders, desirements and challenges

Increasing the Value of XAI for Users: A Psychological Perspective

Methods and standards for research on explainable artificial intelligence: Lessons from intelligent tutoring systems
