
Communication – Published on 10 March 2026

What is AI? – Milestones of artificial intelligence

“Can machines think?” This is the question that British mathematician Alan Turing posed back in 1950. To this day, there is no universally accepted definition of artificial intelligence. However, AI is generally understood as an attempt to replicate human intelligence, for example by processing large amounts of data, recognising patterns or learning adaptively. This report traces how AI has evolved since then.

Dr Michael Rüeggsegger, Head of Competence Centre for AI and Simulation

What is AI?

“Can machines think?” With this question, posed in 1950, Alan Turing, a British mathematician and pioneer in the field of computer science, laid the groundwork for what would later be called “artificial intelligence”. Despite this, no universally accepted definition has been established to date; a large number of definitions and typologies exist around the concept of artificial intelligence (AI). Nonetheless, the most common definitions describe AI as an attempt to recreate human intelligence. This means that AI processes large quantities of information in order to fulfil specific tasks, including processing natural language, recognising patterns, learning adaptively and developing strategies. Accordingly, AI draws on ideas and methods from various disciplines such as mathematics, neuroscience, linguistics and psychology.

Three levels of AI are commonly distinguished. Weak AI (Artificial Narrow Intelligence) is specialised in executing a single task, such as a chatbot on a website. General or strong AI (Artificial General Intelligence) aims to replicate human intelligence: a strong AI would be able to acquire broad knowledge and perform a wide variety of tasks. The third level, artificial super intelligence (Artificial Super Intelligence), denotes capabilities that exceed human intelligence, with intellectual faculties far beyond human cognitive function.

Current research and AI technologies concern, in particular, weak AI. Here it is clear that AI applications do not yet match human skills across the board; only in certain specialised domains have individual AI technologies succeeded in exceeding human performance.

Milestones of AI

1943 – McCulloch-Pitts neuron

Back in the early 1940s, Warren McCulloch and Walter Pitts presented the first mathematical model of a biological neuron. Based on a binary approach, the model treats neurons as either inactive or active elements, assigned the value 0 or 1 respectively. To this day, the model is considered the first work in the research field of artificial intelligence.
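The binary logic of this model can be sketched in a few lines of Python. The sketch below is illustrative only: the threshold values are assumptions chosen to reproduce simple logic gates, not parameters taken from the original 1943 paper.

```python
def mp_neuron(inputs, threshold):
    """McCulloch-Pitts neuron: all inputs and the output are binary (0 or 1).

    The neuron 'fires' (returns 1) when the number of active inputs
    reaches the fixed threshold; otherwise it stays inactive (returns 0).
    """
    return 1 if sum(inputs) >= threshold else 0


# With two inputs and a threshold of 2, the neuron acts as a logical AND:
print(mp_neuron([1, 1], threshold=2))  # 1
print(mp_neuron([1, 0], threshold=2))  # 0

# With a threshold of 1, the same neuron acts as a logical OR:
print(mp_neuron([0, 1], threshold=1))  # 1
print(mp_neuron([0, 0], threshold=1))  # 0
```

Because the model has no learnable weights, its behaviour is fixed entirely by the threshold, which is what distinguishes it from later trainable models such as Rosenblatt's Perceptron.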

1950 – Alan Turing

[Image: Portrait of Alan Turing]
In 1950, the renowned British mathematician Alan Turing devised the Turing Test, which is still used today. The test is a recognised indicator for assessing the independent reasoning ability and intelligence of machines. Also known as the Imitation Game, the test is considered passed as soon as a human being can no longer distinguish whether they are interacting with a human being or with a machine.

1956 – Dartmouth College Conference

[Image: Group photo of the Dartmouth Conference of 1956]
In summer 1956, leading computer scientists, mathematicians and linguists met at Dartmouth College in the US state of New Hampshire for a workshop dedicated to the topic of artificial intelligence. This meeting is formally regarded as the birth of the term “artificial intelligence”. During the meeting, the first AI programme, named Logic Theorist, was presented.

1970-1990 – AI winter

Despite several intermediate milestones in the history of AI, such as the simple neural network Perceptron developed by Frank Rosenblatt in 1958 and the psychotherapeutic dialogue system ELIZA in 1966, progress fell far short of expectations. The term “AI winter” refers, in particular, to the period between 1970 and 1990, when the limits of AI at that time became apparent. Small quantities of data, a limited pool of specialised knowledge and weak capabilities in speech recognition and interpretation led to the widespread discontinuation of financial support. As a result, activities in the area of AI were largely scaled back.

1990-2010 – AI upswing

At the beginning of the 1990s, the introduction of the publicly accessible Internet in particular represented a considerable breakthrough for the research field of AI. Owing to the rapid spread of the Internet, globalisation and advancing digitalisation, interest in AI technologies flourished once again. In particular, the rapid increase in freely accessible data led to an exponential development of AI systems. This positive trajectory was reinforced by the continuous increase in the processing power of computers and by improved AI methods. At the end of the 1990s, AI returned to the media spotlight following a number of high-profile victories against human opponents in chess and computer games. In the 2000s, private companies such as Amazon, Google and IBM also began financing their own AI projects to an increasing extent. By this time, AI had become a fixed component of the business models of several private companies.

2020 – First guidelines of the Federal Administration on AI

The tasks and activities of the Federal Administration are also increasingly affected by advancing digitalisation and thus by AI. AI has long been an important technological component in many areas of the Federal Administration. In response to the growing impact of artificial intelligence and the challenges that accompany it, the Federal Council approved the first guidelines on dealing with AI in the Federal Administration in November 2020. These guidelines primarily offered a reference framework for all responsible bodies of the Federal Administration. The goal was to reach a joint understanding of AI and thus to pursue a uniform policy when dealing with AI.

From 2021 – generative AI boom

From the early 2020s, major leaps in development led to generative AI. Generative AI is based on what are known as Large Language Models (LLMs), which enable various functions such as processing and editing texts, creating content and translating between languages. The term first became known to a wider public with the release of the tool ChatGPT by the US company OpenAI in 2022. ChatGPT was the first service of its kind and is available free of charge to its users. Only a short time after OpenAI's market entry, numerous services of the same type from other companies followed.

2022 – Foundation of the Competence Network CNAI

[Image: CNAI logo]
In 2022, the Federal Council assigned the Federal Statistical Office (FSO) the task of setting up a Competence Network for Artificial Intelligence. The tasks of the Competence Network include, for example, supporting the exchange of knowledge and networking in the area of AI, both within the Federal Administration and beyond.

2024 – Foundation of the Competence Centre AI and Simulation (AISI)

As part of the armasuisse 4.0 development, armasuisse S+T was assigned the task at the beginning of 2024 of drawing up a development plan for the Competence Centre for Artificial Intelligence and Simulation (AISI). The goal of the AISI Competence Centre is to develop and transfer innovative solutions for institutions of national security. To this end, specialists work closely with end users from all federal offices within the DDPS.

One of the main services is to develop demonstrators and test them in experiments together with end users. In addition, the Competence Centre is the central point of contact within the DDPS: it guides and coordinates all practical activities in the area of AI and simulation for security applications. These activities include, for example, technical advice to the Armed Forces when initiating new projects, or conveying technical expertise to partners and industry in order to develop demonstrators for products. Thanks to the high degree of specialisation across the various specialist areas of armasuisse S+T, the AISI can draw on internal support from a range of experts when carrying out its activities. The Competence Centre also conducts technology and market monitoring in order to identify new developments early on and incorporate them into projects in good time.

2025 – Focal points of the AISI Competence Centre

The Competence Centre is currently focused on closing priority capability gaps in defence. For operational capabilities such as shared situational awareness, joint management and robust as well as secure data processing, the Competence Centre implements and tests initial demonstrators in the operational environment with the troops. In this context, the learning method reinforcement learning (RL) is used in addition to generative AI.

2025+ – Neuromorphic computing

[Image: Symbolic image of neuromorphic computing]
Although the beginnings of neuromorphic computing lie in the 1980s, the importance of this technology is steadily increasing today. The computing approach replicates functions of the human brain in order to develop efficient and adaptive computer systems, orienting itself primarily towards the neurological and biological structures of the brain. Neuromorphic computing is therefore also regarded as a future key technology for optimising the energy efficiency of resource-intensive AI tasks.