Using Explainable Artificial Intelligence (X-AI) To Address The “Black-Box” Dilemma in Autonomous Driving
DOI: https://doi.org/10.54097/fphdfy81
Keywords: Artificial intelligence, autonomous driving, explainable AI, “Black-Box”
Abstract
Deep learning-based autonomous vehicles (AVs) demonstrate significant potential for reducing traffic accident rates and enhancing transportation efficiency. However, the “black-box” nature of artificial intelligence systems such as deep neural networks (DNNs) has raised widespread concerns regarding the explainability, transparency, and safety of their decision-making processes. This paper focuses on the application of explainable artificial intelligence (XAI) in the autonomous driving domain as an effective approach to addressing these challenges. The paper first reviews two fundamental architectures of autonomous driving systems, provides a basic overview of XAI techniques, and explains their significance within these systems. It then systematically outlines the classification framework of XAI and its critical importance in high-risk domains. Through analysis of several cutting-edge research frameworks—including the SafeX framework for modular architectures, the XAI integration framework for end-to-end systems, and the XAI-ADS system for cybersecurity—the paper examines how XAI enhances the safety, regulatory compliance, and user trust of autonomous driving systems. Finally, it outlines future development directions for XAI in autonomous driving technology and proposes research recommendations for building explainable, safe, and responsible autonomous driving systems.
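To make the “black-box” issue concrete, the sketch below illustrates one common family of XAI techniques the abstract alludes to: model-agnostic, post-hoc feature attribution, where each input of an opaque model is perturbed to measure its influence on the output. The model, feature names, and coefficients here are purely illustrative assumptions, not taken from the paper or from any of the frameworks it surveys.

```python
# Hypothetical stand-in for an opaque learned model: maps scene features
# to a braking-urgency score in [0, 1]. In practice this would be a DNN.
def black_box_brake_score(features):
    distance, speed, pedestrian = features
    raw = 0.6 * pedestrian + 0.3 * (speed / 30.0) - 0.4 * (distance / 50.0)
    return max(0.0, min(1.0, raw))

def sensitivity_attribution(model, x, eps=1e-3):
    """Finite-difference sensitivity of the model output to each feature.

    A positive score means increasing that feature raises braking urgency;
    a negative score means it lowers it.
    """
    base = model(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] += eps  # nudge one feature, hold the rest fixed
        scores.append((model(perturbed) - base) / eps)
    return scores

if __name__ == "__main__":
    x = [20.0, 25.0, 1.0]  # distance to obstacle (m), speed (m/s), pedestrian flag
    for name, a in zip(["distance", "speed", "pedestrian"],
                       sensitivity_attribution(black_box_brake_score, x)):
        print(f"{name}: {a:+.4f}")
```

For this example the attribution correctly surfaces that the pedestrian flag dominates the braking decision, which is exactly the kind of human-readable justification XAI aims to provide; production systems would use more robust attribution methods (e.g., SHAP- or gradient-based) rather than a one-sided finite difference.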
References
[1] Zhang Q, Zada M, Khan S, et al. Exploring the role of tourist pro-environmental behavior in autonomous vehicle adoption: A TPB and PLS-SEM approach. Sustainability, 2024, 16(20): 9021.
[2] SAE International. SAE Standards News: J3016 automated-driving graphic update. 2025.
[3] Moye B. AAA: Fear of self-driving cars on the rise. AAA Newsroom, 2023.
[4] Atakishiyev S, Salameh M, Yao H, et al. Explainable artificial intelligence for autonomous driving: A comprehensive overview and field guide for future research directions. IEEE Access, 2024.
[5] Zhao J, Zhao W, Deng B, et al. Autonomous driving system: A comprehensive survey. Expert Systems with Applications, 2024, 242: 122836.
[6] Chamola V, Hassija V, Sulthana A R, et al. A review of trustworthy and explainable artificial intelligence (XAI). IEEE Access, 2023, 11: 78994–79015.
[7] Speith T. A review of taxonomies of explainable artificial intelligence (XAI) methods. In: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022: 2239–2250.
[8] Kuznietsov A, Gyevnar B, Wang C, et al. Explainable AI for safe and trustworthy autonomous driving: A systematic review. IEEE Transactions on Intelligent Transportation Systems, 2024.
[9] Koo J, Kwac J, Ju W, et al. Why did my car just do that? Explaining semi-autonomous driving actions to improve driver understanding, trust, and performance. International Journal on Interactive Design and Manufacturing, 2015, 9(4): 269–275.
[10] Huang G, Pitts B J. Takeover requests for automated driving: The effects of signal direction, lead time, and modality on takeover performance. Accident Analysis & Prevention, 2022, 165: 106534.
[11] Mok B, Johns M, Lee K J, et al. Emergency, automation off: Unstructured transition timing for distracted drivers of automated vehicles. In: 2015 IEEE 18th International Conference on Intelligent Transportation Systems, 2015: 2458–2464.
[12] Wan J, Wu C. The effects of lead time of take-over request and nondriving tasks on taking-over control of automated vehicles. IEEE Transactions on Human-Machine Systems, 2018, 48(6): 582–591.
[13] Nazat S, Li L, Abdallah M. XAI-ADS: An explainable artificial intelligence framework for enhancing anomaly detection in autonomous driving systems. IEEE Access, 2024, 12: 48583–48607.
[14] Van Der Heijden R W, Lukaseder T, Kargl F. VeReMi: A dataset for comparable evaluation of misbehavior detection in VANETs. In: International Conference on Security and Privacy in Communication Systems, 2018: 318–337.
[15] Pedregosa F, Varoquaux G, Gramfort A, et al. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 2011, 12: 2825–2830.
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.