Finite-Time Bounds for AMSGrad-Enhanced Neural TD

Authors

  • Tiange Fu
  • Qingtao Wu

DOI:

https://doi.org/10.54097/jceim.v10i3.8758

Keywords:

Reinforcement learning, Temporal difference learning, Neural networks, Adaptive methods

Abstract

Although the combination of adaptive methods and deep reinforcement learning has achieved tremendous success in practical applications, its theoretical convergence properties remain poorly understood. To address this gap, we propose a neural-network-based adaptive TD algorithm, called NTD-AMSGrad, which is a variant of temporal difference learning. Moreover, we rigorously analyze the convergence behavior of the proposed algorithm and establish a finite-time bound for NTD-AMSGrad under the Markov observation model. Specifically, when the neural network is sufficiently wide, the proposed algorithm converges to the optimal action-value function at a sublinear rate in the number of iterations.
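The abstract combines an AMSGrad-style adaptive step with a TD semi-gradient. As an illustrative sketch only (not the paper's exact algorithm), the core update can be written as below; the linear one-hot value model, the toy two-state chain, and all hyperparameter values are assumptions made for the demo, not taken from the paper:

```python
import numpy as np

def amsgrad_td_step(theta, m, v, v_hat, grad,
                    lr=1e-2, beta1=0.9, beta2=0.999, eps=1e-8):
    """One AMSGrad update applied to a TD semi-gradient `grad`."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * grad**2       # second-moment estimate
    v_hat = np.maximum(v_hat, v)                # AMSGrad: non-decreasing max of v
    theta = theta - lr * m / (np.sqrt(v_hat) + eps)
    return theta, m, v, v_hat

# Tiny demo: policy evaluation on a 2-state chain with uniform transitions,
# reward 1 for entering state 1, and discount 0.9. Both states then have
# true value 0.5 / (1 - 0.9) = 5.
rng = np.random.default_rng(0)
phi = np.eye(2)                                 # one-hot state features
theta = np.zeros(2)
m = np.zeros(2); v = np.zeros(2); v_hat = np.zeros(2)
gamma, s = 0.9, 0
for _ in range(5000):
    s_next = int(rng.integers(2))               # uniform Markov transition
    r = 1.0 if s_next == 1 else 0.0
    td_error = r + gamma * (theta @ phi[s_next]) - theta @ phi[s]
    grad = -td_error * phi[s]                   # semi-gradient of 0.5 * td_error**2
    theta, m, v, v_hat = amsgrad_td_step(theta, m, v, v_hat, grad)
    s = s_next
```

The `np.maximum(v_hat, v)` line is what distinguishes AMSGrad from plain Adam: keeping a non-decreasing second-moment estimate guarantees non-increasing effective step sizes, which is the property convergence analyses of AMSGrad-type methods typically rely on.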

References

N. Salpea, P. Tzouveli, D. Kollias. Medical image segmentation: A review of modern architectures. Computer Vision–ECCV 2022 Workshops. Expo Tel Aviv, 2022, pp. 691-708.

N. Lang, N. Shlezinger. Joint privacy enhancement and quantization in federated learning. IEEE Transactions on Signal Processing. Vol. 71 (2023), pp. 295-310.

T. Brown, B. Mann, N. Ryder, et al. Language models are few-shot learners. Proceedings of the 34th International Conference on Neural Information Processing Systems. Virtual, 2020, pp. 1877-1901.

A. Kumari, S. Tanwar. A reinforcement-learning-based secure demand response scheme for smart grid system. IEEE Internet of Things Journal. Vol. 9 (2022) No. 3, pp. 2180-2191.

R. S. Sutton, A. G. Barto. Reinforcement learning: An introduction. 2nd ed. Cambridge, MA: MIT Press, 2018, pp. 1-552.

R. S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning. Vol. 3 (1988) No. 1, pp. 9-44.

D. Ye, Z. Liu, M. Sun, et al. Mastering complex control in MOBA games with deep reinforcement learning. Proceedings of the 34th AAAI Conference on Artificial Intelligence. NY, USA, 2020, pp. 6672-6679.

M. Dalal, D. Pathak, R. Salakhutdinov. Accelerating robotic reinforcement learning via parameterized action primitives. Proceedings of the 35th International Conference on Neural Information Processing Systems. Virtual, 2021, pp. 21847-21859.

R. S. Sutton. Generalization in reinforcement learning: Successful examples using sparse coarse coding. Proceedings of the 9th International Conference on Neural Information Processing Systems. CO, USA, 1995, pp. 1038-1044.

J. Tsitsiklis, B. Van Roy. Analysis of temporal-difference learning with function approximation. Proceedings of the 10th International Conference on Neural Information Processing Systems. CO, USA, 1996, pp. 1075-1081.

H. Maei, C. Szepesvári, S. Bhatnagar, et al. Convergent temporal-difference learning with arbitrary smooth function approximation. Proceedings of the 23rd International Conference on Neural Information Processing Systems. BC, Canada, 2009, pp. 1204-1212.

V. Mnih, K. Kavukcuoglu, D. Silver, et al. Human-level control through deep reinforcement learning. Nature. Vol. 518 (2015) No. 7540, pp. 529-533.

J. Zou, T. Hao, C. Yu, et al. A3C-DO: A regional resource scheduling framework based on deep reinforcement learning in edge scenario. IEEE Transactions on Computers. Vol. 70 (2020) No. 2, pp. 228-239.

Q. Cai, Z. Yang, J. D. Lee, et al. Neural temporal-difference learning converges to global optima. Proceedings of the 33rd International Conference on Neural Information Processing Systems. BC, Canada, 2019, pp. 11312-11322.

J. Fan, Z. Wang, Y. Xie, et al. A theoretical analysis of deep Q-learning. Proceedings of the 2nd Conference on Learning for Dynamics and Control. Berkeley, USA, 2020, pp. 486-489.

P. Xu, Q. Gu. A finite-time analysis of Q-learning with neural network function approximation. Proceedings of the 37th International Conference on Machine Learning. Virtual, 2020, pp. 10555-10565.

V. Mnih, A. P. Badia, M. Mirza, et al. Asynchronous methods for deep reinforcement learning. Proceedings of the 33rd International Conference on Machine Learning. NY, USA, 2016, pp. 1928-1937.

W. Fedus, P. Ramachandran, R. Agarwal, et al. Revisiting fundamentals of experience replay. Proceedings of the 37th International Conference on Machine Learning. Virtual, 2020, pp. 3061-3071.

S. J. Reddi, S. Kale, S. Kumar. On the convergence of Adam and beyond. Proceedings of the 6th International Conference on Learning Representations. BC, Canada, 2018.

Published

24-05-2023

How to Cite

Fu, T., & Wu, Q. (2023). Finite-Time Bounds for AMSGrad-Enhanced Neural TD. Journal of Computing and Electronic Information Management, 10(3), 132-136. https://doi.org/10.54097/jceim.v10i3.8758
