Challenges and Breakthroughs of LLMs in Detecting False Information on Social Media: From DELL to Human-Machine Collaboration
DOI: https://doi.org/10.54097/qzn9b238
Keywords: large language model, false information, social media
Abstract
With the rise of large language models (LLMs), the spread of false information on social media has become even more rampant, and its detection and intervention remain challenging. This paper proposes using LLMs to detect and intervene in false information on social media. Specifically, the LLM-based DELL framework provides a systematic, multi-stage governance solution: it maintains strong generalization performance, offers a clearly structured, operable, and interpretable technical path for applying LLMs to false-information governance, and copes with complex scenarios by integrating multi-modal detection. We also explore the role of graph neural networks (GNNs) in rumor detection and combine GNNs with LLMs into a joint framework for collaborative detection and intervention. This paper provides a comprehensive review of LLM-based detection and intervention of false information, which can help other researchers better understand the current mainstream methods in this area.
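The joint GNN + LLM framework described above can be illustrated with a minimal late-fusion sketch. This is purely an assumption-laden toy, not the paper's actual method: `llm_text_score` stands in for an LLM judging textual credibility, `propagation_score` stands in for a GNN over the retweet cascade, and both function names, the heuristic cues, and the fusion weight are hypothetical.

```python
# Hypothetical late-fusion sketch of the joint GNN + LLM idea.
# llm_text_score and propagation_score are illustrative stand-ins,
# not real components of the DELL framework.

def llm_text_score(text: str) -> float:
    """Stand-in for an LLM credibility judgment (0 = likely false, 1 = likely true).
    Here it just penalizes a few sensationalist cue phrases."""
    cues = ("breaking", "shocking", "share before deleted")
    hits = sum(cue in text.lower() for cue in cues)
    return max(0.0, 1.0 - 0.4 * hits)

def propagation_score(edges: list[tuple[str, str]]) -> float:
    """Stand-in for a GNN over the repost cascade: star-shaped bursts
    (one source fanning out to many accounts) are scored as more rumor-like."""
    if not edges:
        return 0.5  # no propagation evidence either way
    out_degree: dict[str, int] = {}
    for src, _dst in edges:
        out_degree[src] = out_degree.get(src, 0) + 1
    burstiness = max(out_degree.values()) / len(edges)  # 1.0 for a pure star
    return 1.0 - 0.5 * burstiness

def joint_detect(text: str, edges: list[tuple[str, str]], w: float = 0.6) -> float:
    """Weighted late fusion of the text and propagation detectors."""
    return w * llm_text_score(text) + (1 - w) * propagation_score(edges)

post = "BREAKING: shocking cure, share before deleted!"
cascade = [("u0", "u1"), ("u0", "u2"), ("u0", "u3")]  # one source fanning out
print(round(joint_detect(post, cascade), 2))  # prints 0.2
```

In a real system the text branch would be an LLM prompt or fine-tuned classifier and the propagation branch a trained GNN over the cascade graph; the point of the sketch is only the collaborative (fused) decision the abstract describes.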
License
Copyright (c) 2026 Academic Journal of Science and Technology

This work is licensed under a Creative Commons Attribution 4.0 International License.