Interpretable Deep Learning for Enhanced AI Trust and Clarity
DOI: https://doi.org/10.37965/jait.2025.0748

Keywords: AI trust, artificial intelligence, deep learning, interpretability, LIME, SHAP, transparency

Abstract
This research explores the role of interpretability in increasing user trust in artificial intelligence (AI) systems through tools such as Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME). The study compares the effectiveness of these two widely used interpretability techniques in bringing transparency to deep learning models, particularly in high-risk applications such as health and finance. The method applies both tools to AI models and evaluates user confidence and perceived transparency using feature visualization. Results show that interpretability significantly increases user confidence, with SHAP excelling at global interpretation and LIME providing clarity on individual predictions. Visualizations proved effective in helping nontechnical users understand model decisions, although computational efficiency remains a challenge, especially for SHAP. In conclusion, interpretability supports the ethical use of AI by increasing accountability and accessibility, and the findings underscore the importance of selecting interpretability tools based on context and user needs. The results offer practical guidance for AI developers on integrating interpretability from the design stage to ensure transparency and reliability.
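As a rough illustration of the workflow the abstract describes, the Python sketch below shows how SHAP is typically used for a global feature-importance view and LIME for explaining a single prediction of a tabular classifier. The dataset, model, and parameter choices are illustrative assumptions, not taken from the study.

# Minimal sketch: SHAP for a global view, LIME for a local explanation.
# Dataset, model, and parameters are illustrative, not from the study.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Illustrative health-style tabular task and model.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0
)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# SHAP (global): mean absolute Shapley value per feature across the test set.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # (n_samples, n_features) for a binary GBM
global_importance = np.abs(shap_values).mean(axis=0)
top_features = sorted(zip(data.feature_names, global_importance), key=lambda t: -t[1])[:5]
for name, score in top_features:
    print(f"SHAP global importance  {name}: {score:.4f}")

# LIME (local): feature contributions for one individual prediction.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)
print("LIME explanation for the first test case:")
for rule, weight in explanation.as_list():
    print(f"  {rule}: {weight:+.4f}")

The global SHAP summary supports the kind of model-level transparency the abstract attributes to SHAP, while the LIME output explains one decision at a time; SHAP's exact Shapley computation is also where the computational-cost concern noted above typically arises.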
License
Copyright (c) 2025 Authors

This work is licensed under a Creative Commons Attribution 4.0 International License.