Explainable AI: Building Trust and Transparency in Artificial Intelligence Systems

Source: ualberta.ca

Published on October 8, 2025 at 09:16 AM

As artificial intelligence increasingly influences critical decisions, trust in these systems becomes paramount. Imagine algorithms dictating hiring, medical diagnoses, or parole decisions. This raises a crucial question: how can we truly understand the reasoning behind AI's choices?

The Promise of Explainable AI

Explainable AI (XAI) offers a potential solution. This growing research field aims to make machine learning more transparent, fair, and accountable. Mi-Young Kim, an associate professor and AI researcher, explores the complexities of interpreting AI models.

Unveiling the Black Box

Kim highlights the inherent risks of “black box” decisions, where the rationale remains hidden. She also discusses methods designed to help people understand why AI makes specific predictions. The goal is to build systems that inspire genuine confidence.
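The article does not name specific techniques, but one widely used family of explanation methods is feature attribution: measuring how much each input feature contributes to a model's predictions. The sketch below is purely illustrative and not drawn from Kim's work; it uses scikit-learn's permutation importance on a placeholder dataset and model to show what a "why did the model predict this?" answer can look like in practice.

# Illustrative sketch of one common explanation technique: permutation
# feature importance. Dataset and model are placeholders, not Kim's systems.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much the model's accuracy drops;
# large drops indicate features the model relies on when predicting.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True)[:5]:
    print(f"{name}: {score:.3f}")

Attribution scores like these are one concrete way of addressing the question Kim raises: rather than accepting a black-box output, a user can see which factors drove a particular prediction.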

Kim's Expertise in AI

Mi-Young Kim brings extensive expertise to this discussion. She is an associate professor of Computing Science at the University of Alberta's Augustana Faculty. Her research encompasses Natural Language Processing (NLP), Artificial Intelligence (AI), and Explainable/Trustworthy AI.

Focus on Medical and Legal Fields

Kim's work particularly emphasizes information extraction within the medical and legal domains. Since 2014, she has co-organized the International Competition on Legal Information Extraction and Entailment (COLIEE).

AI in Legal and Health Contexts

Her team's AI legal assistant excelled in answering Yes/No legal bar exam questions from 2014 to 2019, and again in 2022. Kim is also developing AI systems for automated health assessments within Alberta’s 811 HealthLink telehealth service.

AI-Driven Health Insights

These systems are designed to provide explanatory rationales alongside their predictions. Kim also analyzes Alberta radiology reports to extract data related to inflammatory bowel disease, using AI-based methods that deliver interpretable explanations.