Leveraging LLMs for COVID-19 Fake News Generation and Detection: A Comparative Analysis on Twitter Data

Bibliographic Details
Main Authors: Hong N. Dao, Yasuhiro Hashimoto, Incheon Paik, Truong Cong Thang
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/11097282/
Description
Summary: The rapid spread of rumors on social media, especially during crises like the COVID-19 pandemic, highlights an urgent need for advanced tools to detect fake news. Large Language Models (LLMs), with their vast knowledge and emergent abilities, show great promise in tackling this challenge. This study investigates five state-of-the-art LLMs (DeepSeek, GPT-3.5, GPT-4, Gemini, and Claude) in both creating and detecting fake news. In particular, we consider not only the performance of the models but also the similarity among them. Regarding tweet generation, we identify interesting patterns of similarity among the models' outputs: DeepSeek's outputs are most similar to those of the GPT models, while Gemini's outputs are the most distinct from the others. Regarding detection, no model demonstrates strong performance, even on its own generated dataset. Still, DeepSeek's decisions are most similar to those of GPT-3.5, while Gemini's decisions are the least similar to those of the other models. This study provides valuable insight into the dual role of LLMs as both detectors and potential sources of rumors, contributing to the development of more robust and reliable fake news detection systems.
ISSN:2169-3536