CHIWork 2026 Workshop

Trust and Transparency in XAI for Workplace Automation

Guiding Industry Decisions on Process Automation

Monday, June 22, 2026
9:00 AM – 1:00 PM (Half-Day)
In-Person Workshop

Organized by Nina Hubig (IT:U, Austria), Daniel Kolb (Leibniz SCC, Germany) & Romeo Kienzler (IBM Research, Switzerland)

Workshop Overview

Abstract


As organizations increasingly adopt artificial intelligence for workplace automation, critical questions emerge about where and how to automate processes responsibly. This half-day workshop addresses the intersection of Explainable AI (XAI), trust, and transparency in workplace automation contexts. We bring together researchers and industry practitioners to explore decision-making frameworks that guide automation adoption while maintaining human oversight, regulatory compliance, and worker dignity.

The workshop features presentations of cutting-edge research papers, keynote insights from industry leaders, and interactive sessions in which participants collaboratively develop practical frameworks for assessing automation readiness. Expected outcomes include published proceedings of the research contributions, actionable decision frameworks for industry professionals, and the establishment of a community of practice bridging academic XAI research and real-world workplace automation challenges.

This initiative directly addresses CHIWork 2026's theme by examining the friction points and flow opportunities in AI-augmented work.

The Decision Gap

Lack of structured frameworks for determining automation readiness

The Trust Deficit

Workers distrust opaque AI systems lacking recourse mechanisms

The Implementation Divide

XAI research advances faster than real-world workplace adoption

📚

Published via CEUR-WS.org

Open Access

๐Ÿ›๏ธ

ACM Submission Format

2–4 pages

🎓

CHIWork 2026

Workshop Paper #107

Research Areas

Key Topics

The workshop welcomes submissions across these interconnected research and practice areas.

XAI Techniques & Methods

Explainability methods applicable to workplace automation contexts, from LIME and SHAP to emerging interpretable architectures.

Trust & Transparency

Building and maintaining trust in AI-driven automation through transparency and accountability mechanisms.

Human-in-the-Loop (HITL)

Mixed-initiative systems preserving meaningful human engagement and oversight in automated workflows.

Bias Detection & Fairness

Mechanisms to detect and mitigate bias in AI systems, ensuring ethical and equitable workplace automation.

Regulatory Compliance

Navigating EU AI Act, GDPR, and sector-specific regulations for responsible AI deployment in workplaces.

Industry Case Studies

Real-world XAI implementations in manufacturing, healthcare, services, and other sectors with actionable insights.

Decision Frameworks

Structured approaches to assess automation readiness, appropriate oversight levels, and explainability requirements.

Worker Perspectives

Understanding worker trust, acceptance, and experience with AI transparency in automated workplace systems.

Evaluation Methods

Metrics and methodologies for assessing XAI effectiveness in real-world workplace settings.

Organizing Committee

Workshop Organizers


Nina Hubig

Main Contact & Workshop Organizer

Interdisciplinary Transformation University Austria (IT:U)
Linz, Austria

Assistant Professor of Explainable Artificial Intelligence at IT:U. Her research focuses on trust and transparency in XAI for high-stakes decisions spanning industry, medicine, and geo-environmental domains. She develops transparent and interpretable AI systems using deep learning, network science, and human-centric methods. Ph.D. from TU Munich; former Data Scientist at BMW working on process mining and manufacturing optimization.

XAI · Trust & Transparency · Deep Learning · Process Mining
[email protected]

Daniel Kolb

Workshop Organizer

Leibniz Supercomputing Centre, Bavarian Academy of Sciences
Munich, Germany

Postdoctoral researcher specializing in human-computer interaction, immersive technologies, and thanatosensitive systems. His research addresses how users form, calibrate, and sustain trust in AI systems, particularly in socially and ethically sensitive contexts. Expert in Interactive Digital Testimonies and Embodied Conversational Agents.

HCI · Trust Calibration · Immersive Technologies · Conversational AI
[email protected]

Romeo Kienzler

Workshop Organizer

IBM Research
Zurich, Switzerland

AI Research Engineer at IBM Research Zurich. Expert in deep learning, high-performance computing, scalable systems, and open-source AI ecosystems. Leads projects such as TerraTorch and IBM Geospatial Studio, transforming black-box AI into interactive visual experiences through embedding-space analysis that makes AI's internal logic understandable to scientists and practitioners.

Deep Learning · HPC · Geospatial AI · IBM Research
[email protected]
June 22, 2026

Workshop Schedule

Half-day workshop running 9:00 AM – 1:00 PM

09:00 – 09:15
🎙️

Welcome & Introduction

Plenary

Organizers introduce workshop goals, themes, expected outcomes, and schedule. Participants briefly introduce themselves.

09:15 – 10:00
🔑

Keynote: From Black Boxes to Glass Boxes

Keynote

Industry Perspectives on XAI Implementation. An invited speaker discusses real-world challenges and successes in implementing XAI in workplace automation, followed by a 15-minute Q&A.

10:00 – 10:30
☕

Coffee Break & Networking

Break

Informal networking opportunity with display of posters from selected submissions for asynchronous viewing.

10:30 – 11:30
📄

Research Paper Presentations

Paper Session

Presentations of accepted research papers, followed by moderated discussion synthesizing themes across presentations.

11:30 – 12:15
🧩

Interactive Activity: The Automation Decision Matrix

Interactive

Participants in small mixed groups (researchers + practitioners) collaboratively assess realistic workplace automation scenarios using a shared decision matrix framework.

12:15 – 12:45
🔍

Case Study Presentations & Framework Discussion

Plenary

Groups present case study analyses, identify common patterns, and synthesize a generalizable "XAI Automation Readiness Framework."

12:45 – 13:00
🚀

Synthesis & Next Steps

Closing

Organizers summarize key themes, discuss publication plans for workshop proceedings, and establish an ongoing community of practice.

Get Involved

Call for Participation

Join Us at CHIWork 2026

As organizations navigate increasing automation of workplace processes, critical questions emerge: Which processes should be automated? How do we maintain trust and transparency? What human oversight is appropriate? This workshop brings together researchers, industry practitioners, policy stakeholders, and students to explore XAI as a foundation for responsible workplace automation decisions.

Who Should Participate

  • Academic Researchers in XAI, HCI & AI ethics
  • Industry Practitioners working with AI automation
  • Policy Stakeholders & Regulatory experts
  • Graduate Students exploring XAI topics

Submission Types

  • Position Papers

    Theoretical stances and perspectives

  • Research Papers

    Empirical findings and experiments

  • Industry Case Studies

    Real-world XAI implementation stories

📝

Length

2–4 pages

Excluding references

📋

Format

ACM Single-Column

Standard conference format

📅

Camera-Ready Due

June 15, 2026

One week before the workshop

📚 Publication via CEUR-WS.org

All accepted submissions will be published in open-access workshop proceedings, indexed by DBLP, Google Scholar, and Scopus. Expected publication: August 2026.

Submit Now →
Reach Out

Contact

Have questions about the workshop, submissions, or collaboration opportunities? Reach out to our organizing team.

📄

Read the Full Paper

Download the complete workshop proposal paper (CHIWORK26 Paper #107)

Download Paper (PDF)