Application of Large Language Models in Power System Operation and Control

Authors

  • Yiqian Zhang

DOI:

https://doi.org/10.54097/sb9qdz28

Keywords:

Large language model, Power system, Artificial intelligence

Abstract

With the introduction of the "carbon peak" and "carbon neutrality" targets and the vigorous advancement of national energy market construction, renewable energy sources such as wind and solar power are developing rapidly. However, this growth also makes it harder to account for uncertainty in the power system's operational scheduling and optimization control processes, particularly for theory-based control methods. Fortunately, the rapidly developing large language model (LLM) has shown promising prospects in the power sector. This review summarizes the application of LLM technology in power system operation and control, outlining the new power system's demand for AI technology, the impact of LLMs on system management, and the underlying technological foundation, including network architecture, training methods, and data configuration. Finally, it examines LLM applications in power system operation and control from the perspectives of generation, transmission, distribution, consumption, and equipment.

Published

26-12-2024

Issue

Vol. 15 No. 3 (2024)

Section

Articles

How to Cite

Zhang, Y. (2024). Application of Large Language Models in Power System Operation and Control. Journal of Computing and Electronic Information Management, 15(3), 79-83. https://doi.org/10.54097/sb9qdz28