
Communication
Published on 29 January 2026

Real or fake? What research teaches us.

The Cyber-Defence Campus of armasuisse Science and Technology (S+T) is publishing new studies on the increasing difficulty of identifying images generated by artificial intelligence (AI) as such. The joint research project of the Cyber-Defence Campus and the University of Applied Sciences of Northwestern Switzerland (FHNW) examines the growing use of AI-generated images and the subtle artefacts that distinguish them from real images.

Andrea Thäler, Specialist area Cyberdefence, Competence Centre armasuisse Science and Technology

Synthetic portrait of a woman with small irregularities on the left eye, the hair, the teeth and the shape of the left ear.

Artificial intelligence has revolutionised the creation of digital images. As a result, synthetic, photorealistic images are becoming increasingly widespread and are used in various areas, for example in entertainment or advertising. However, the technology also invites misuse, for example to spread false or misleading information through realistic-looking but fabricated images.

AI-generated images are becoming ever more realistic, so it is increasingly difficult to distinguish between real and synthetic content. For this reason, the Cyber-Defence Campus and the University of Applied Sciences of Northwestern Switzerland (FHNW) conducted a joint research project on the capabilities of modern generative diffusion models, focusing on their weaknesses. One of the studies focuses on the difficulties of identifying synthetic content. It emphasises how important it is to pay attention to small but significant errors, such as irregularities in human anatomy, in the incidence of light and in object symmetry. These subtle errors, which the untrained eye can easily overlook, provide important clues when distinguishing AI-generated images from real images.

The studies examine the use of generative deep learning models, including diffusion models, to create synthetic images for potential deception, manipulation and infiltration in cyber operations. Although AI can generate high-quality illustrations and non-realistic image content, achieving photorealistic results remains extremely difficult, owing to limited processing power and the need for human post-processing. The accessibility and practical applicability of such tools raise concerns about their misuse, in particular for spreading disinformation and for digital deception. The findings of the second study, concerning the problems of recognising synthetic photographs, are presented below.

Hidden clues in AI-generated images

Even though AI models keep improving, they still have difficulty generating certain details. The study identified typical errors in synthetic images, termed artefacts, which can provide important clues to a synthetic origin, and classified them in a comprehensive taxonomy. These errors frequently occur because generative models cannot reproduce complex visual structures with complete accuracy. Common problems fall into the following categories (a minimal checklist sketch for reviewing them follows the list):

Errors in human anatomy

AI frequently generates hands with too many or too few fingers, unnaturally positioned fingers or asymmetric facial features. Ears, eyes and teeth can also be distorted or incorrectly placed.

Irregularities in lighting

Shadows and reflections can appear unnatural because light sources behave differently than in a real environment. Highlights on faces or objects can be misplaced and thus create an artificial impression.

Lack of symmetry

Objects can be slightly distorted or misplaced, so that symmetrical objects appear asymmetrical (for example, mismatched rear-view mirrors on a vehicle), or repetitive structures such as railings or fences can be reproduced with irregular spacing.
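The sketch below shows one possible way to turn these categories into a simple review checklist. It is purely illustrative: the category labels follow the error classes above, but the data structure, field names and example findings are our own assumptions and are not part of the published taxonomy.

    # Illustrative sketch only: a minimal checklist for manual artefact review.
    from dataclasses import dataclass

    ARTEFACT_CATEGORIES = {
        "anatomy":  ["hands/fingers", "facial symmetry", "ears", "eyes", "teeth"],
        "lighting": ["shadow direction", "reflections", "highlight placement"],
        "symmetry": ["mirrored parts", "repetitive structures"],
    }

    @dataclass
    class ArtefactFinding:
        category: str        # one of ARTEFACT_CATEGORIES
        cue: str             # which visual cue was checked
        suspicious: bool     # did the cue look inconsistent?
        note: str = ""       # free-text observation

    def review_summary(findings: list[ArtefactFinding]) -> dict[str, int]:
        """Count suspicious cues per category for a single image."""
        counts = {category: 0 for category in ARTEFACT_CATEGORIES}
        for finding in findings:
            if finding.suspicious:
                counts[finding.category] += 1
        return counts

    # Example review of one image with two suspicious cues.
    findings = [
        ArtefactFinding("anatomy", "hands/fingers", True, "six fingers on the left hand"),
        ArtefactFinding("lighting", "shadow direction", False),
        ArtefactFinding("symmetry", "mirrored parts", True, "rear-view mirrors differ"),
    ]
    print(review_summary(findings))  # {'anatomy': 1, 'lighting': 0, 'symmetry': 1}

In practice, such a checklist simply helps an analyst record which cues were examined and which looked inconsistent before reaching a verdict on an image.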

Even though these errors are subtle, they often become obvious on closer inspection. As AI-generated images continue to improve, professionals and the public alike must be made aware of how to recognise these fine details so that they can distinguish between real and synthetic content.

Why training and raising awareness are important

Training and raising awareness are essential for counteracting the risks posed by synthetic imagery. The study recommends that specialists in journalism, news analysis and digital forensics be specifically trained to better recognise AI-generated content. Raising awareness of synthetic image generation among the general public can also help to reduce the spread of misinformation and manipulation.

Avoiding bias in image analysis

Two basic types of error are possible when analysing images: “false positives”, where real images are mistakenly identified as artificial, and “false negatives”, where AI-generated images are incorrectly considered real. Both types of error can lead to misinformation, mistrust and serious consequences in areas such as journalism, law and research. Cognitive biases, particularly confirmation bias, can further distort the analysis and lead to artefacts being incorrectly identified or clear signs of manipulation being overlooked. The frequency of such errors can be reduced through a systematic, impartial approach and the use of modern detection tools. In addition, it is important to make image analysts aware of how cognitive biases influence their examination of images.
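To make the two error types concrete, the following minimal sketch computes false-positive and false-negative rates for a hypothetical evaluation of 200 labelled images; the function and all numbers are invented purely for illustration and do not come from the study.

    # Illustrative only: error rates for a hypothetical synthetic-image detector.
    def error_rates(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
        """Return (false-positive rate, false-negative rate)."""
        false_positive_rate = fp / (fp + tn)  # real images wrongly flagged as synthetic
        false_negative_rate = fn / (fn + tp)  # synthetic images wrongly accepted as real
        return false_positive_rate, false_negative_rate

    # Hypothetical review of 200 images: 100 real, 100 AI-generated.
    fpr, fnr = error_rates(tp=85, fp=8, tn=92, fn=15)
    print(f"False positives: {fpr:.0%} of real images")       # 8%
    print(f"False negatives: {fnr:.0%} of synthetic images")  # 15%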

Practical application of the results of the study

AI-generated images are already used today to manipulate public opinion, stock markets and even political events. Falsified images can spread false information and cause confusion and mistrust. Countermeasures include novel detection technologies and better media literacy. The documents and findings of the study can be used to:

  • support education and training measures in the areas of digital forensics, news analysis and journalism,
  • check images systematically for clues to AI generation,
  • support media literacy campaigns,
  • develop new guidelines and instructions to prevent the improper use of synthetic imagery,
  • encourage new research into the automatic detection of AI-generated images.

FHNW perspective

At the FHNW, we examine AI in contexts where rigorous research meets practical challenges. As researchers, we are both fascinated by and concerned about the latest advances in AI and their potential impact on our society. The CYD Campus supports research on these important questions with financial resources and brings together specialists from various fields to carry out projects on topics of practical relevance. We are proud that we were able to examine the weaknesses of generative AI and, together, develop a workflow that helps practitioners identify synthetic imagery as such. Our thanks go to everyone involved who led this project to success, and particularly to Raphael Meier, who supported us with his extensive expertise and the efficient use of his network to bring science and practice together.

Synthetic images of HIMARS vehicles. Both images were generated using ControlNet, where the edge structures of real images served as templates. Image (A) was generated by the prompt «Photo of a HIMARS vehicle on a gravel road in good weather with blue sky», image (B) by the instruction «Photo of a HIMARS in a large military bunker with clear structures and little light».
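The edge-conditioned generation described in the caption can be approximated with publicly available tools. The sketch below uses the Hugging Face diffusers library with a Canny-edge ControlNet; the model identifiers, prompt wording and parameters are illustrative assumptions and do not reproduce the study's exact setup.

    # Sketch only: edge-conditioned image generation with a Canny ControlNet.
    # Model names, prompt and parameters are illustrative assumptions; a CUDA GPU is assumed.
    import cv2
    import numpy as np
    import torch
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    # 1. Extract edge structures from a real reference photograph.
    reference = np.array(Image.open("reference_vehicle.jpg").convert("RGB"))
    edges = cv2.Canny(reference, 100, 200)
    edge_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

    # 2. Load a ControlNet conditioned on Canny edges plus a Stable Diffusion backbone.
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")

    # 3. Generate a synthetic image that follows the extracted edge template.
    result = pipe(
        prompt="Photo of a HIMARS vehicle on a gravel road in good weather with blue sky",
        image=edge_image,
        num_inference_steps=30,
    ).images[0]
    result.save("synthetic_vehicle.png")

Because the edge map of a real photograph constrains the layout, the generated image inherits a plausible overall structure while its textures and details are entirely synthetic, which is one reason such images can be hard to recognise.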

Further information: