Large Language Models for Misinformation Detection and Intervention in Media Networks
DOI: https://doi.org/10.54097/1m57nb32
Keywords: Large language model, false information, social media
Abstract
The rapid dissemination of information in media networks such as social media, news platforms, and video-sharing applications has reshaped public communication and knowledge acquisition. However, the same environment has accelerated the spread of misinformation and fake news, which can undermine trust, distort perceptions, and create severe consequences in sensitive domains including politics, healthcare, and finance. Early detection methods, relying on keyword-based heuristics and small-scale classifiers, proved inadequate in addressing the scale, diversity, and multimodality of modern misinformation. The emergence of Large Language Models (LLMs) provides new opportunities, as these models demonstrate strong semantic understanding, contextual reasoning, and few-shot adaptability. This paper reviews methodological advances in LLM-based misinformation detection, including direct classification, retrieval-augmented verification, network-aware detection, and generative intervention strategies. We also discuss major challenges such as hallucination, computational costs, multimodal complexity, data limitations, and privacy concerns. Finally, potential solutions are proposed, including continual dataset updates, hierarchical detection pipelines, multimodal fusion, and privacy-preserving personalization. These findings highlight both the opportunities and limitations of LLMs, underscoring the need for robust, scalable, and ethical frameworks to combat misinformation in media networks.
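Among the approaches surveyed, retrieval-augmented verification pairs a claim with externally retrieved evidence before asking an LLM for a verdict. The following is a minimal illustrative sketch, not the paper's implementation: the tiny in-memory corpus, word-overlap retriever, and prompt template are all hypothetical stand-ins for a real search index and LLM call.

```python
# Minimal sketch of retrieval-augmented claim verification.
# The corpus, scoring, and prompt wording are illustrative stand-ins;
# a production system would use a search index and an actual LLM.

def retrieve(claim, corpus, k=2):
    """Rank corpus passages by naive word overlap with the claim."""
    claim_words = set(claim.lower().split())
    scored = [(len(claim_words & set(p.lower().split())), p) for p in corpus]
    scored.sort(key=lambda sp: sp[0], reverse=True)
    return [p for score, p in scored[:k] if score > 0]

def build_prompt(claim, evidence):
    """Assemble the verification prompt an LLM would receive."""
    lines = [f"Claim: {claim}", "Evidence:"]
    lines += [f"- {e}" for e in evidence]
    lines.append("Answer SUPPORTED, REFUTED, or NOT ENOUGH INFO.")
    return "\n".join(lines)

corpus = [
    "The WHO declared COVID-19 a pandemic in March 2020.",
    "Vitamin C does not cure viral infections.",
]
claim = "Vitamin C can cure COVID-19 infections."
evidence = retrieve(claim, corpus)
print(build_prompt(claim, evidence))
```

Grounding the verdict in retrieved evidence, rather than the model's parametric memory alone, is what mitigates the hallucination risk the abstract highlights.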
Copyright (c) 2026 Academic Journal of Science and Technology

This work is licensed under a Creative Commons Attribution 4.0 International License.








