
Could artificial intelligence automatically detect fake news?

Never before has it been possible to spread misinformation in as targeted and widespread a manner as in the era of social media. To counteract this development, scientists from the Cyber-Defence (CYD) Campus of armasuisse Science and Technology have launched research projects exploring tools which automatically detect misinformation. One of these projects aims to use artificial intelligence methods to identify image-with-text memes which potentially spread misinformation.

06.10.2021 | Dr. Raphael Meier, scientific project manager, Cyber-Defence Campus

Image: a keyboard on which only the letters F, A, K and E are visible, spelling «FAKE».

Disinformation campaigns and opinion formation

Disinformation campaigns, whether state-run or not, can target entire societies or specific sections of a society in order to deceive or confuse them with regard to a particular topic. Various stakeholders, such as foreign intelligence services, political parties or lobbies, can use disinformation to steer social discourse in a desired direction and assert their own interests. Disinformation campaigns are not a new phenomenon: during the Cold War, for example, the states involved spread misleading information, falsifications and propaganda material.

Fake news in social media

The concept of «fake news» has heavily influenced the current media and political landscape. Social media enables news posted on the Internet to be distributed and amplified on a massive scale. Social bots and cyborgs can share misinformation in targeted disinformation campaigns and thus reach a growing network of real people. Social bots are automated programmes which, for example, react to certain hashtags with a pre-programmed answer and share certain content on social media. They mimic the normal user accounts of people or companies, complete with a profile photo, posts and an interactive network. Cyborgs are semi-automated user accounts partly operated by humans and thus appear more authentic than bots. This user interaction creates an illusion of truth around the propagated misinformation.

As a result of technological progress, misinformation appears on the Internet in a wide range of manifestations. Well-known examples of visual disinformation are the so-called «deepfakes» and «shallowfakes», which differ significantly in the way they are created. Deepfakes are altered or entirely fabricated media content, such as videos, images or audio recordings, which have been manipulated using artificial intelligence or deep learning methods. Shallowfakes, on the other hand, are created using standard image processing programmes such as Photoshop. In view of the density of misinformation in social media and the quality of the manipulated content, countermeasures are required that can recognise misleading information through automated processes wherever possible.

Image: a comic figure with a sceptical facial expression, captioned «What is an IWT meme?».
An example of an image-with-text (IWT) meme.

CYD research projects for identifying visual disinformation

With the goal of protecting public discourse, scientists working on research projects at the CYD Campus are developing various tools which could help to automatically detect fake media content on the Internet. Raphael Meier, scientific project manager at the CYD Campus, is researching the detection of image-with-text (IWT) memes. These Internet memes spread ideas through a combination of image and text and are distinguished by their characteristically viral distribution. They are particularly interesting as an object of research because they are an effective means of influencing online narratives and are thus frequently used in disinformation campaigns.

The first step in countering meme-driven disinformation is to distinguish memes from the abundance of other image data in social media. To this end, an algorithm has been developed which automatically separates IWT memes from non-meme images (such as holiday photos, screenshots, etc.). Specifically, the algorithm is based on convolutional neural networks, a method from the broader field of deep learning. The network is trained on data sets of IWT memes and non-meme images and can then perform a binary classification of newly received images into the categories IWT meme and non-meme image. By subsequently characterising the content and the users of the IWT memes detected in this way, the research project aims in future to filter out precisely those memes which may be spreading visual disinformation.
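The article does not disclose the network architecture, so the following is only a minimal sketch, in PyTorch, of what such a binary meme/non-meme classifier could look like. The layer sizes, image resolution and random stand-in data are illustrative assumptions, not details of the CYD Campus project.

```python
import torch
import torch.nn as nn

class MemeClassifier(nn.Module):
    """Minimal convolutional network for binary IWT-meme vs. non-meme classification."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # Three convolution/pooling stages extract increasingly abstract visual features
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),   # collapse the spatial dimensions
            nn.Flatten(),
            nn.Linear(64, 1),          # single logit: IWT meme vs. non-meme
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Usage sketch: a batch of 224x224 RGB images, labels 1 = IWT meme, 0 = non-meme
model = MemeClassifier()
images = torch.randn(8, 3, 224, 224)          # stand-in for real image tensors
labels = torch.randint(0, 2, (8, 1)).float()  # stand-in for real annotations
logits = model(images)
probs = torch.sigmoid(logits)                 # probability of being an IWT meme
loss = nn.BCEWithLogitsLoss()(logits, labels)
```

In practice, such a classifier would be trained over many batches of labelled memes and non-meme images before it can reliably sort incoming image data into the two categories.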

For this purpose, the content of the IWT memes first needs to be analysed. Once the content is determined, the topic and its emotional character are identified in order to draw conclusions as to whether the meme could constitute disinformation. Disinformation frequently addresses socially divisive topics and can intensify the associated negative emotions (such as anger) in the observer. In addition, an attempt is made to characterise the users of the meme: who is distributing it? Is the IWT meme being distributed in a suspicious manner by social bot-like accounts? And how can the intention behind the spread of the meme be assessed? If, for example, a probable intention to discredit a political figure or to sow division between groups of the population can be detected, this may indicate the presence of disinformation. These automatic analysis methods can drastically reduce the amount of image data to be evaluated in open-source intelligence missions and thus enable analysts to process security-relevant disinformation campaigns more rapidly.
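To make the triage idea concrete, the sketch below shows one way the three signals named above (divisive topic, negative emotion, bot-like distribution) could be combined into a priority score for analysts. All field names, weights and the threshold are hypothetical; the project's actual interfaces and scoring method are not described in the article, and a real system would more likely learn the combination from labelled campaign data.

```python
from dataclasses import dataclass

@dataclass
class MemeSignals:
    """Signals for one detected IWT meme, assumed to come from upstream
    components (topic model, emotion classifier, bot detector). The names
    and 0..1 scales are illustrative, not the project's actual interface."""
    divisive_topic_score: float    # how socially divisive the topic is
    negative_emotion_score: float  # intensity of anger/outrage cues
    bot_likeness_score: float      # how bot-like the sharing accounts are

def disinformation_priority(signals: MemeSignals,
                            weights=(0.4, 0.3, 0.3)) -> float:
    """Combine content and distribution signals into a single triage score."""
    w_topic, w_emotion, w_bot = weights
    return (w_topic * signals.divisive_topic_score
            + w_emotion * signals.negative_emotion_score
            + w_bot * signals.bot_likeness_score)

# Example: a meme on a polarising topic, angry in tone, spread by bot-like accounts
meme = MemeSignals(divisive_topic_score=0.9,
                   negative_emotion_score=0.8,
                   bot_likeness_score=0.7)
if disinformation_priority(meme) > 0.6:  # threshold chosen purely for illustration
    print("flag for analyst review")
```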

In disinformation campaigns, the image and text content of an IWT meme is tailored to support a strategic narrative and to address specific preferences of the target audience. Socially polarising topics are often used. For a selection of IWT memes used in disinformation campaigns, see the report by DiResta et al. from Stanford University.

Report by Stanford University