
Communication
Published on 14 November 2023

Cyber threats of generative artificial intelligence

Public interest in generative artificial intelligence (AI) has grown markedly since the start of this year. Commercial products such as ChatGPT and the integration of OpenAI's technology into Microsoft Bing have evoked both enthusiasm about their capabilities and concerns about their misuse.

Michiel Lüchinger, Andrea Thäler, specialist area Cyber Security and Data Science, competence sector armasuisse Science and Technology

[Image: A fusion of Swiss alpine scenery with advanced digital technology symbols]

As part of a technology monitoring project at the Cyber-Defence Campus of armasuisse Science and Technology, a study has been conducted that provides an overview of the developments, the state of the art and the implications of generative language models, also known as large language models (LLMs), for Switzerland.

Generative language models use complex mathematical models to represent text. The technology has a wide range of applications, such as correcting and generating texts or interacting with an AI chatbot. The recently published study by the Cyber-Defence Campus, conducted in cooperation with Effixis SA, École Polytechnique Fédérale de Lausanne (EPFL) (the Swiss Federal Institute of Technology in Lausanne), HEC Lausanne (the Faculty of Business and Economics of the University of Lausanne) and HES-SO Valais-Wallis (the University of Applied Sciences and Arts of Western Switzerland), provides a detailed insight into the development and risks of LLMs for industry, public administration and science in Switzerland.

LLMs process language in a way that is fundamentally different from human understanding. The AI does not distinguish between letters, words or sentences as people interpret them. Instead, LLMs use probability calculations and neural networks to determine how text modules combine with one another. By training on large quantities of text, a model can estimate the probability with which one text module follows another. However, LLMs have no logic of their own, which is why their results must always be critically evaluated. Although LLMs have the potential to revolutionise information and knowledge sharing, they also harbour substantial risks.
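To illustrate this principle, the following minimal sketch shows a toy bigram model in Python: it counts, in a small hypothetical sample text, how often each word follows another and turns these counts into next-word probabilities. Production LLMs such as ChatGPT use neural networks over subword tokens rather than raw counts, but the underlying idea of predicting the next text module from probabilities learned on training text is the same.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; real training data comprises vast quantities of text.
corpus = (
    "the model predicts the next word . "
    "the model learns probabilities from text . "
    "the next word follows the current word ."
).split()

# Count how often each word follows each other word.
bigram_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    bigram_counts[current][nxt] += 1

def next_word_probabilities(word):
    """Estimate the probability of each word following `word`."""
    counts = bigram_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))
# Prints {'model': 0.4, 'next': 0.4, 'current': 0.2} for this corpus:
# the model's entire "knowledge" is these learned frequencies.
```

Nothing in such a model checks whether its output is true; it only reproduces statistical patterns of its training text, which is precisely why the results of far larger models must also be critically evaluated.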

The risks of large language models

On the one hand, LLMs can generate misinformation based on biased training data; on the other hand, the technology can also be misused for active disinformation campaigns. Until now, using social platforms for information operations has required a deep understanding of the respective language, culture and local events. LLMs circumvent these requirements, as high-quality, context-related texts can be generated in any language within a short space of time. This makes it easier for malicious parties not only to influence opinions on public platforms but also, for example, to disseminate authentic-looking phishing messages. It is not out of the question that LLMs will, in future, have the training data necessary to understand and generate texts even in Swiss German. This could result in a further potential hazard for Switzerland.

Apart from disinformation, the disclosure of private information also represents a potential threat. LLMs are trained not only on public data from the Internet, but also indirectly on user inputs to the system. For this reason, no private data should be entered when using LLMs. In addition, LLM-extended search engines enable the Internet to be searched more efficiently and more deeply. This can expose «hidden» information such as databases and program code in the Deep Web – the part of the Internet which is poorly indexed and therefore cannot be found with a normal search engine.

LLMs require continuous monitoring

In summary, many of the known threats in cyberspace have become more accessible and more scalable through the use of LLMs. To protect this technology from misuse, it is necessary to continuously monitor the use of LLMs, to educate the population about their risks and to avoid supplying them with private information. The detailed analysis of the LLM landscape, as well as its limitations and risks for Swiss cyber defence, can be found in the complete study.

The Cyber-Defence Campus has been monitoring LLMs and the associated risks for many years. Various projects are underway to develop solutions which could reduce the risks of this technological development.

Further information