AI-Generated Disinformation: The New Challenge of Credibility

Source: uio.no

Published on January 17, 2026

Updated on January 17, 2026

The Rise of AI-Generated Disinformation

Artificial intelligence is increasingly being used to create disinformation that appears more credible than human-written falsehoods. Researchers have found that AI-generated texts are often perceived as more informative and trustworthy, raising concerns about the ease with which misinformation can now spread. This trend highlights the evolving landscape of digital deception and the need for advanced detection tools.

A recent study conducted by the NxtGenFake project found that AI-generated disinformation was rated as more credible and informative than its human-written counterparts. Participants in the study were unaware of the source of the texts they evaluated, yet they consistently preferred the AI-generated content. This preference underscores the effectiveness of AI in crafting persuasive narratives that mimic trustworthy sources.

The Linguistic Features of AI-Generated Disinformation

One of the key findings of the research is the use of specific linguistic techniques in AI-generated disinformation. These techniques include generic references to authority, such as ‘according to researchers’ or ‘experts believe,’ which make the claims difficult to verify. Additionally, AI-generated texts often end with appeals to values, urging action to achieve goals like increased growth or public trust. This strategy enhances the perceived legitimacy of the content.

The study also noted that AI-generated propaganda exhibits less variation in persuasive techniques compared to human-written propaganda. This consistency may contribute to the effectiveness of AI-generated disinformation, as it aligns with formats that people instinctively trust. The researchers emphasized the importance of raising awareness about these linguistic cues to help the public recognize and guard against AI-generated misinformation.
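To make these cues concrete, the following is a minimal, purely illustrative sketch of how a detector might flag the two families of cues described above: generic references to authority and closing appeals to values. The phrase patterns are hypothetical examples chosen for illustration, not the actual features or tooling developed by the NxtGenFake project.

```python
import re

# Hypothetical example phrases for the two cue families described in the
# study: generic authority references and appeals to shared values.
AUTHORITY_PATTERNS = [
    r"\baccording to (?:researchers|experts|scientists)\b",
    r"\b(?:researchers|experts|scientists) (?:believe|agree|say)\b",
    r"\bstudies (?:show|suggest)\b",
]
VALUE_APPEAL_PATTERNS = [
    r"\bpublic trust\b",
    r"\b(?:increased|economic) growth\b",
    r"\bwe must act\b",
]

def flag_cues(text: str) -> dict:
    """Count occurrences of each cue family in the text (case-insensitive)."""
    lower = text.lower()
    return {
        "generic_authority": sum(len(re.findall(p, lower))
                                 for p in AUTHORITY_PATTERNS),
        "value_appeal": sum(len(re.findall(p, lower))
                            for p in VALUE_APPEAL_PATTERNS),
    }

sample = ("According to researchers, the policy has failed. "
          "We must act now to restore public trust.")
print(flag_cues(sample))  # → {'generic_authority': 1, 'value_appeal': 2}
```

A real detector would rely on trained language models rather than fixed phrase lists, but the sketch shows why such cues are machine-detectable: they are short, formulaic, and recur across texts.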

Implications and Future Outlook

The implications of AI-generated disinformation are far-reaching. As AI tools become more sophisticated, the ability to detect and counteract disinformation becomes increasingly challenging. The development of robust fact-checking tools is essential to combat this threat. Researchers have already made progress in this area, creating tools that can identify linguistic features unique to AI-generated content.

However, the battle against disinformation is ongoing. The NxtGenFake project, which runs until 2029, continues to explore the nuances of AI-generated misinformation. By understanding the techniques used by AI and developing advanced detection methods, researchers hope to stay ahead of the curve and mitigate the risks associated with this emerging form of deception.