Combating False Information in Emergencies: Leveraging LLMs for Targeted False Information and Clarification Generation

Authors

  • Jiahong Lin

DOI:

https://doi.org/10.54097/113d2p18

Keywords:

Emergency Events; Large Language Models; Information Generation; AI-Generated Content (AIGC)

Abstract

The rapid dissemination of false information during emergencies has become a significant challenge in managing online public opinion. While large language models (LLMs) have enhanced the speed and efficiency of information generation, they have also exacerbated the complexity of public sentiment by facilitating the spread of both false and clarifying information. This paper addresses the critical need for targeted clarification information to counteract false narratives during emergencies. We propose a novel approach by fine-tuning open-source LLMs to generate both false information and corresponding clarification texts, tailored to specific emergency scenarios and public opinion dynamics. By constructing a high-quality dataset of 1,715 paired false and clarification information samples from authoritative platforms, we employ a task-separated fine-tuning strategy using LoRA (Low-Rank Adaptation) to optimize model performance. Our evaluation metrics, including text fluency (BLEU), novelty (NOV), and diversity (DIV), demonstrate that fine-tuned models, particularly LLaMA3.1, excel in generating coherent and relevant texts. The results highlight the potential of LLMs in both generating and debunking false information, offering a robust framework for improving public opinion management during emergencies. This research contributes to the growing body of work on false information mitigation and provides practical insights for leveraging LLMs in crisis communication.
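The exact formulas behind the evaluation metrics named above (BLEU for fluency, NOV for novelty, DIV for diversity) are not spelled out on this page. A minimal sketch of common definitions — simplified sentence-level BLEU, distinct-n diversity, and training-corpus n-gram novelty — might look like the following; the function names and the distinct-n/novelty readings are assumptions, not the paper's exact formulation:

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: uniform n-gram weights plus brevity penalty."""
    log_precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        overlap = sum((cand & ref).values())   # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        if overlap == 0:
            return 0.0                          # no smoothing in this sketch
        log_precisions.append(math.log(overlap / total))
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(log_precisions) / max_n)

def diversity(texts, n=2):
    """Distinct-n: unique n-grams over total n-grams across all generated texts."""
    grams = [g for t in texts for g in ngrams(t, n)]
    return len(set(grams)) / max(len(grams), 1)

def novelty(generated, training_corpus, n=2):
    """Fraction of generated n-grams never seen in the training corpus."""
    seen = {g for t in training_corpus for g in ngrams(t, n)}
    grams = [g for t in generated for g in ngrams(t, n)]
    return sum(g not in seen for g in grams) / max(len(grams), 1)
```

Under these definitions a generation identical to its reference scores BLEU 1.0, while higher NOV and DIV indicate text that departs from the training data and avoids repetitive phrasing.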


References

[1] Sutskever I, Vinyals O, Le Q V. Sequence to sequence learning with neural networks[C]. Advances in Neural Information Processing Systems, 2014: 3104-3112.

[2] Qiu X, Sun T, Xu Y, et al. Pre-trained models for natural language processing: A survey[J]. Science China Technological Sciences, 2020, 63(10): 1872-1897.

[3] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[C]. Advances in Neural Information Processing Systems, 2017: 5998-6008.

[4] Devlin J, Chang M W, Lee K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding[C]. Proceedings of NAACL-HLT, 2019: 4171-4186.

[5] Radford A, Wu J, Child R, et al. Language models are unsupervised multitask learners[J]. OpenAI blog, 2019, 1(8): 9.

[6] Raffel C, Shazeer N, Roberts A, et al. Exploring the limits of transfer learning with a unified text-to-text transformer[J]. Journal of Machine Learning Research, 2020, 21(140): 1-67.

[7] Crothers E N, Japkowicz N, Viktor H L. Machine-generated text: A comprehensive survey of threat models and detection methods[J]. IEEE Access, 2023, 11: 70977-71002.

[8] Goldstein J A, Sastry G, Musser M, et al. Generative language models and automated influence operations: Emerging threats and potential mitigations[EB/OL]. arXiv preprint arXiv:2301.04246, 2023.

[9] Huang Y, Sun L. FakeGPT: Fake news generation, explanation and detection of large language models[EB/OL]. arXiv preprint arXiv:2310.05046, 2024.

[10] Huang Y, Sun L. FakeGPT: Fake news generation, explanation and detection of large language models[J].

[11] Sun Y, He J, Cui L, et al. Exploring the Deceptive Power of LLM-Generated Fake News: A Study of Real-World Detection Challenges[J]. arXiv preprint arXiv:2403.18249, 2024.

[12] Lai J, Yang X, Luo W, et al. RumorLLM: A Rumor Large Language Model-Based Fake-News-Detection Data-Augmentation Approach[J]. Applied Sciences, 2024, 14(8): 3532.

[13] Yang A, Yang B, Hui B, et al. Qwen2 technical report[EB/OL]. arXiv preprint arXiv:2407.10671, 2024.

[14] Yang A, Xiao B, Wang B, et al. Baichuan 2: Open large-scale language models[EB/OL]. arXiv preprint arXiv:2309.10305, 2023.

[15] GLM Team, Zeng A, Xu B, et al. ChatGLM: A family of large language models from GLM-130B to GLM-4 all tools[EB/OL]. arXiv preprint arXiv:2406.12793, 2024.

[16] Dubey A, Jauhri A, Pandey A, et al. The Llama 3 herd of models[EB/OL]. arXiv preprint arXiv:2407.21783, 2024.

[17] Ma J, Gao W, Mitra P, et al. Detecting rumors from microblogs with recurrent neural networks[C]. Proceedings of IJCAI, 2016.

[18] Wang Y, Yang W, Ma F, et al. Weak supervision for fake news detection via reinforcement learning[C]. Proceedings of the AAAI Conference on Artificial Intelligence, 2020: 516-523.

[19] Zhang X, Cao J, Li X, et al. Mining dual emotion for fake news detection[C]. Proceedings of the Web Conference 2021, 2021: 3465-3476.

[20] Nan Q, Cao J, Zhu Y, et al. MDFEND: Multi-domain fake news detection[C]. Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 2021: 3343-3347.

Published

18-02-2025

How to Cite

Lin, J. (2025). Combating False Information in Emergencies: Leveraging LLMs for Targeted False Information and Clarification Generation. Highlights in Science, Engineering and Technology, 124, 405-415. https://doi.org/10.54097/113d2p18