Social Network Analysis Using Graph Neural Networks and Large Language Models
DOI: https://doi.org/10.54097/2pvna108
Keywords: Graph neural networks, large language models, social network
Abstract
Social network analysis has become essential for understanding interaction patterns, information diffusion, and community behavior in the digital era. With the rapid expansion of online platforms such as Twitter, Facebook, and Weibo, vast amounts of data are generated daily, reflecting not only individual communication but also broader societal trends. This explosion of data requires advanced computational techniques to process, analyze, and derive meaningful insights. Graph Neural Networks (GNNs) and Large Language Models (LLMs) have emerged as two of the most promising approaches for addressing the complexity inherent in social network data. GNNs excel at modeling relational structures within networks, enabling accurate prediction of community structure, influence propagation, and links. LLMs, built on transformer architectures, process large-scale unstructured textual data, enabling tasks such as sentiment analysis, misinformation detection, and influencer identification. This paper offers an in-depth overview of GNNs, LLMs, and their combined application in social network analysis. It provides a structured assessment of the advantages and shortcomings of these methodologies, highlights existing obstacles, and proposes potential avenues for future research to overcome them. Major suggestions involve improving computational efficiency, reducing biases in both data and model outputs, advancing the modeling of time-dependent behaviors, and developing consistent and reliable evaluation protocols for integrated models. By consolidating recent advancements in this fast-growing domain, the study emphasizes the significant potential of merging GNNs and LLMs to achieve more precise, scalable, and fair analysis of social networks.
References
[1] Zhang Y, Wang H, Li X, et al. ChatGPT for social network analysis: A case study on influencer identification. arXiv preprint arXiv:2305.12651. 2023.
[2] Park JS, Kim J, Lee S, et al. Social simulacra: Creating populated prototypes for social computing systems. In: Proc 35th Annu ACM Symp User Interface Softw Technol (UIST). 2022. p. 1-15.
[3] Veličković P, Cucurull G, Casanova A, Romero A, Liò P, Bengio Y. Graph attention networks. In: Proc Int Conf Learn Represent (ICLR). 2018.
[4] Rossi E, Chamberlain B, Frasca F, Eynard D, Monti F, Bronstein M. Temporal graph networks for deep learning on dynamic graphs. arXiv preprint arXiv:2006.10637. 2020.
[5] Zhu D, Zhang Z, Cui P, Zhu W. Robust graph neural networks against adversarial attacks. In: Proc 25th ACM SIGKDD Int Conf Knowl Discov Data Min (KDD). 2019. p. 1399-407.
[6] Zhu Y, Xu Y, Chen J, et al. GNN-LM: A graph neural network enhanced language model. arXiv preprint arXiv:2110.14127. 2021.
[7] Sun X, Liu Q, Wang J, et al. Graph neural prompting. arXiv preprint arXiv:2310.00001. 2023.
[8] Liu Z, Sun M, Zhou T, Huang G, Darrell T. Rethinking the value of network pruning. arXiv preprint arXiv:1810.05270. 2018.
[9] Gou J, Yu B, Maybank SJ, Tao D. Knowledge distillation: A survey. Int J Comput Vis. 2021;129(6):1789-819.
[10] Han X, Huang Z, An B, Bai J. Adaptive transfer learning on graph neural networks. In: Proc 27th ACM SIGKDD Conf Knowl Discov Data Min (KDD). 2021. p. 565-74.
License
Copyright (c) 2026 Academic Journal of Science and Technology

This work is licensed under a Creative Commons Attribution 4.0 International License.