Trust and Transparency in XAI for Workplace Automation
Guiding Industry Decisions on Process Automation
Organized by Nina Hubig (IT:U, Austria), Daniel Kolb (Leibniz SCC, Germany) & Romeo Kienzler (IBM Research, Switzerland)
Abstract
As organizations increasingly adopt artificial intelligence for workplace automation, critical questions emerge about where and how to automate processes responsibly. This half-day workshop addresses the intersection of Explainable AI (XAI), trust, and transparency in workplace automation contexts. We bring together researchers and industry practitioners to explore decision-making frameworks that guide automation adoption while maintaining human oversight, regulatory compliance, and worker dignity.
The workshop features presentations of cutting-edge research papers, keynote insights from industry leaders, and interactive sessions in which participants collaboratively develop practical frameworks for assessing automation readiness. Expected outcomes include published proceedings of the research contributions, actionable decision frameworks for industry professionals, and the establishment of a community of practice bridging academic XAI research with real-world workplace automation challenges.
This initiative directly addresses CHIWork 2026's theme by examining the friction points and flow opportunities in AI-augmented work.
The Decision Gap
Lack of structured frameworks for determining automation readiness
The Trust Deficit
Workers distrust opaque AI systems lacking recourse mechanisms
The Implementation Divide
XAI research advances faster than real-world workplace adoption
Published via CEUR-WS.org
Open Access
ACM Submission Format
2–4 pages
CHIWork 2026
Workshop Paper #107
Key Topics
The workshop welcomes submissions across these interconnected research and practice areas.
XAI Techniques & Methods
Explainability methods applicable to workplace automation contexts, from LIME and SHAP to emerging interpretable architectures.
Trust & Transparency
Building and maintaining trust in AI-driven automation through transparency and accountability mechanisms.
Human-in-the-Loop (HITL)
Mixed-initiative systems preserving meaningful human engagement and oversight in automated workflows.
Bias Detection & Fairness
Mechanisms to detect and mitigate bias in AI systems, ensuring ethical and equitable workplace automation.
Regulatory Compliance
Navigating EU AI Act, GDPR, and sector-specific regulations for responsible AI deployment in workplaces.
Industry Case Studies
Real-world XAI implementations in manufacturing, healthcare, services, and other sectors with actionable insights.
Decision Frameworks
Structured approaches to assess automation readiness, appropriate oversight levels, and explainability requirements.
Worker Perspectives
Understanding worker trust, acceptance, and experience with AI transparency in automated workplace systems.
Evaluation Methods
Metrics and methodologies for assessing XAI effectiveness in real applied workplace settings.
Workshop Organizers
Workshop Schedule
Half-day workshop running 9:00 AM – 1:00 PM
Welcome & Introduction
Plenary: Organizers introduce workshop goals, themes, expected outcomes, and the schedule. Participants briefly introduce themselves.
Keynote: From Black Boxes to Glass Boxes
Keynote: Industry Perspectives on XAI Implementation. An invited speaker discusses real-world challenges and successes in implementing XAI in workplace automation, followed by a 15-minute Q&A.
Coffee Break & Networking
Break: Informal networking opportunity, with a display of posters from selected submissions for asynchronous viewing.
Research Paper Presentations
Paper Session: Presentations of accepted research papers, followed by a moderated discussion synthesizing themes across presentations.
Interactive Activity: The Automation Decision Matrix
Interactive: Participants in small mixed groups (researchers and practitioners) collaboratively assess realistic workplace automation scenarios using a shared decision-matrix framework.
Case Study Presentations & Framework Discussion
Plenary: Groups present their case-study analyses, identify common patterns, and synthesize a generalizable "XAI Automation Readiness Framework."
Synthesis & Next Steps
Closing: Organizers summarize key themes, discuss publication plans for the workshop proceedings, and establish an ongoing community of practice.
Call for Participation
Join Us at CHIWork 2026
As organizations navigate increasing automation of workplace processes, critical questions emerge: Which processes should be automated? How do we maintain trust and transparency? What human oversight is appropriate? This workshop brings together researchers, industry practitioners, policy stakeholders, and students to explore XAI as a foundation for responsible workplace automation decisions.
Who Should Participate
- Academic Researchers in XAI, HCI & AI ethics
- Industry Practitioners working with AI automation
- Policy Stakeholders & Regulatory experts
- Graduate Students exploring XAI topics
Submission Types
Position Papers
Theoretical stances and perspectives
Research Papers
Empirical findings and experiments
Industry Case Studies
Real-world XAI implementation stories
Length
2–4 pages
Excluding references
Format
ACM Single-Column
Standard conference format
Camera-Ready Due
June 15, 2026
One week before the workshop
Publication via CEUR-WS.org
All accepted submissions will be published in open-access workshop proceedings, indexed by DBLP, Google Scholar, and Scopus. Expected publication: August 2026.
Contact
Have questions about the workshop, submissions, or collaboration opportunities? Reach out to our organizing team.
Nina Hubig
Main Contact
IT:U, Linz, Austria
[email protected]
Daniel Kolb
Workshop Organizer
Leibniz SCC, Munich, Germany
[email protected]
Romeo Kienzler
Workshop Organizer
IBM Research, Zurich, Switzerland
[email protected]
Read the Full Paper
Download the complete workshop proposal paper (CHIWORK26 Paper #107)
Download Paper (PDF)