From Voxels to Victory: Enhancing Racing AI with Fuzzy DQN and Imitation Learning
DOI: https://doi.org/10.54097/8bsgzc59

Keywords: Car racing game, Deep reinforcement learning, Imitation learning, Fuzzy DQN

Abstract
With the rapid development of the esports industry, racing games have become a key focus for both academic research and industrial applications. However, traditional racing game artificial intelligence (AI) struggles to adapt to increasingly complex track environments and to meet evolving player demands. Deep Reinforcement Learning (DRL) has shown great promise in enhancing AI's environmental perception and strategy optimization, yet its application in racing games remains exploratory, hindered by high data demands, slow convergence, and poor generalization. To address these challenges, this paper proposes a novel racing AI training method that combines voxel-based track feature extraction, a fuzzy Deep Q-Network (DQN), and imitation learning. Voxelization enables the AI to automatically extract key track features, while the fuzzy DQN refines speed-adjustment strategies for better performance in complex environments. The inclusion of imitation learning reduces reliance on expert data, significantly accelerating training convergence. Experimental results demonstrate that the proposed method outperforms traditional algorithms such as DQN, Q-Learning, and SARSA. Specifically, the proposed algorithm achieved a 25% faster convergence rate, with a final total reward of around 400, compared to approximately 300 for DQN and 250 for SARSA. Additionally, loss values were reduced by 60% in the final stages of training, indicating improved stability and learning efficiency. These results confirm that the proposed approach not only enhances AI training efficiency in racing games but also holds potential for real-world autonomous driving applications.
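The fuzzy speed-adjustment idea mentioned above can be illustrated with a minimal sketch. The paper's actual membership functions and rule base are not given on this page, so every breakpoint and output weight below is a hypothetical placeholder; the sketch only shows the general shape of a Mamdani-style fuzzy rule whose defuzzified output could scale the throttle action chosen by a DQN.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)


def fuzzy_speed_adjustment(dist):
    """Map a distance-to-obstacle reading (meters) to a throttle scale in [0, 1].

    All breakpoints (0/10/15/25/40 m) and output weights (0.2/0.6/1.0)
    are illustrative assumptions, not values from the paper.
    """
    near = tri(dist, -1.0, 0.0, 10.0)    # close to a wall -> brake hard
    mid = tri(dist, 5.0, 15.0, 25.0)     # moderate clearance -> hold speed
    far = tri(dist, 20.0, 40.0, 1e9)     # track is clear -> full throttle
    # Weighted-average (centroid-style) defuzzification of the three rules.
    total = near + mid + far
    if total == 0.0:
        return 0.6  # fallback when no rule fires
    return (0.2 * near + 0.6 * mid + 1.0 * far) / total
```

In a full fuzzy-DQN agent, a value like this would modulate the magnitude of the discrete acceleration action selected by the Q-network, giving smoother speed control than the raw discrete action alone.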
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.