Technically speaking, the field of Artificial Intelligence has only been official since 1956. In fact, though, the history of artificial intelligence stretches back far more than 67 years. It is the product of countless milestones, or building blocks, that accumulated over time. The Greek myths told of Talos, and Descartes theorized that human thought could be mechanized, long before Alan Turing proposed the “imitation game” in 1950. AI is a 3,000-year-old dream that is far more than the ChatGPT you came to know about last year. This article will navigate the history of artificial intelligence in 8 major epochs and highlight the major achievements in each.
History of Artificial Intelligence: Quick Summary
Of course, you can break down history however you want, but this article stretches AI history over 8 different milestones and treats them as building blocks. If AI history had a visual representation showcasing each era’s work, the cumulative efforts would look like a stack of blocks, each era building on the last. Keep in mind that the future holds endless building blocks and milestones for AI.
History Overview
The quest to create machines that can automate human tasks is rooted deep in antiquity. These thoughts and philosophies were not realized until the big strides in computer science, when Charles Babbage designed the first programmable computer in the 1840s and Ada Lovelace described how to program it. World War II (1939-1945) then pushed Alan Turing to develop a code-breaking machine and accelerate the advance of computer science.
Turing went on to question the intelligence of machines in 1950 and propose the Turing Test. John McCarthy then hosted a summer camp-like conference and coined the field and term “Artificial Intelligence” in 1956. This led to a 20-year boom in AI research and funding, until researchers hit a wall with the limited capabilities of the computers of that time. Then came the AI winters, between the 1970s and early 1990s, split into two periods by a burst of new research on expert systems and neural networks in between.
After the hype of the 1960s, AI came back (a renaissance) with a bang in 1997, when Deep Blue beat the reigning world chess champion. This set off a chain of AI successes until 2011, when new big data and machine learning methods proved revolutionary. AI entered every household with Siri, Alexa, and robot vacuums. From that era until now we have been in the AI boom era – where, surprisingly, even your grandma uses ChatGPT.
History of Artificial Intelligence Milestones
1- Myths, Philosophy & Automata (Antiquity – 18th Century):
The concept of artificial beings with human-like intelligence dates back to ancient mythology and folklore. During this period, AI was primarily a subject of philosophical speculation and mechanical engineering. There were no real computational machines capable of exhibiting human-like intelligence.
3,000 years ago – Greece
Greek myths told of giant automatons. Talos was a bronze giant designed to defend the island of Crete. He would throw boulders at the ships of invaders and slowly circle the island every day. The myth says that Hephaestus forged Talos as a gift to the son of Zeus. This is arguably the first mention of a robot in human history.
Philosophers like Aristotle pondered the possibility of automata. In particular, his development of syllogism and its use of deductive reasoning was a key moment in humanity’s quest to understand its own intelligence.
8th to 13th century – Islamic Golden Age
Muslim scholars like Al-Kindi and Al-Jazari explored automata and mechanical devices. Al-Jazari’s “Book of Knowledge of Ingenious Mechanical Devices” described various automated machines that could, for example, play melodies and row boats, suggesting an early programmable device.
15th to 18th Century – Renaissance and Early Modern Europe
- Blaise Pascal developed the Pascaline, a mechanical calculator designed to perform arithmetic operations automatically. While not AI in the modern sense, it represented early efforts to mechanize intellectual tasks.
- Leibniz proposed a universal symbolic language and calculus ratiocinator, which laid the foundation for symbolic logic and computation.
- Thomas Bayes introduced Bayesian probability theory, which later became a fundamental concept in AI, especially in probabilistic reasoning (a small worked illustration follows after this list).
- Philosophers like Thomas Hobbes and Rene Descartes reasoned that the process of human thought could be mechanized. They explored the possibility that all rational thought could be made as systematic as simple algebra.
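To make the Bayes mention concrete, here is a small illustrative Python sketch of Bayes’ rule, P(H|E) = P(E|H) × P(H) / P(E), applied to a made-up diagnostic-test scenario. This is not from any historical source, and all numbers are hypothetical.

```python
# Illustrative only: Bayes' rule applied to a made-up diagnostic-test scenario.

def posterior(prior: float, likelihood: float, false_positive_rate: float) -> float:
    """Return P(hypothesis | evidence) via Bayes' rule."""
    # P(E) = P(E|H) * P(H) + P(E|not H) * P(not H)
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical numbers: 1% prior, 95% true-positive rate, 5% false-positive rate.
print(round(posterior(prior=0.01, likelihood=0.95, false_positive_rate=0.05), 3))  # ~0.161
```

Even with a 95% accurate test, the posterior is only about 16% here, because the prior is so low – the kind of reasoning that later became central to probabilistic AI.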
2- Emergence of Computer Science (1900s – 1940s):
The theoretical underpinnings of modern computing progressed steadily, with George Boole inventing Boolean algebra in 1854. Even though simple calculating machines had existed for centuries, it wasn’t until the 1840s that Charles Babbage designed a truly programmable computer, the Analytical Engine, for which Ada Lovelace wrote what is widely considered the first program. It was never built, but it remains a remarkable turning point in the development of computer science.
Initial AI attempts
- 1818 → Mary Shelley publishes “Frankenstein,” the earliest sci-fi novel to bring the idea of an artificial being into pop culture.
- 1913 → Bertrand Russell and Alfred North Whitehead complete their formal treatment of logic in “Principia Mathematica.”
- 1920 → Czech writer Karel Capek’s sci-fi play “R.U.R.” popularizes the word “robot,” which in time replaces the word “automaton.”
- 1927 → Fritz Lang’s “Metropolis,” the first sci-fi movie to address AI, is released.
- 1942 → Isaac Asimov publishes the “Three Laws of Robotics,” an idea now common in science fiction about how artificial intelligence should not bring harm to humans.
- 1943 → Warren McCulloch and Walter Pitts publish the paper “A Logical Calculus of the Ideas Immanent in Nervous Activity,” which proposes the first mathematical model for building a neural network (a tiny sketch of their threshold neuron appears after this list).
- (1939-1945) World War II → Alan Turing helps build a decrypting machine (the Bombe) that breaks the Nazis’ Enigma code.
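For readers curious what the 1943 McCulloch-Pitts model actually looks like, here is a minimal Python sketch of a threshold neuron in their spirit. It is not code from the paper; the weights and threshold are hypothetical values chosen to make the neuron behave like a logical AND gate.

```python
# A minimal sketch of a McCulloch-Pitts-style threshold neuron (1943 model).
# Inputs and output are binary; the neuron "fires" when the weighted sum
# of its inputs reaches a threshold. Weights and threshold are hypothetical,
# chosen here to implement a logical AND gate.

def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs meets the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# AND gate: both inputs must be on for the neuron to fire.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mcculloch_pitts_neuron([a, b], weights=[1, 1], threshold=2))
```

Chaining such simple units together is what McCulloch and Pitts argued could, in principle, compute any logical function – the seed of the neural networks that appear later in this timeline.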
3- Birth of Artificial Intelligence (1950-1960s)
If the field must have fathers, in the sense of who made the biggest strides in the history of artificial intelligence to bring the discipline to the masses, they would be Alan Turing and John McCarthy.
Birth of the Concept With Alan Turing (1950s):

Alan Turing was many things, but at heart he was a mathematician and computer scientist best known for his successful code-breaking work during World War II. After the war, he focused on the possibilities of computer intelligence.
At the start of the 1950s, John von Neumann and Alan Turing did not create the term AI, but they were the founding fathers of the technology behind it. The two researchers formalized the architecture of our contemporary computers.
Turing raised the question of the possible intelligence of a machine for the first time in his famous 1950 article “Computing Machinery and Intelligence,” in which he described a “game of imitation” that gave birth to what we now call the Turing Test.
Birth of AI Field at the Dartmouth Conference (1956):
In the early 1950s, the field of “thinking machines” (what we now know as artificial intelligence) was given many names, from cybernetics to automata theory to complex information processing.
John McCarthy was a young Assistant Professor of Mathematics at Dartmouth College. He wanted to focus on the potential for computers to possess intelligence beyond simple behaviors. So, he decided to organize a group to clarify and develop ideas about thinking machines.
In 1955, McCarthy, along with friends and colleagues Marvin Minsky, Nathaniel Rochester, and Claude Shannon, approached the Rockefeller Foundation to request funding for a summer seminar at Dartmouth. The conference took place in 1956 and lasted approximately six to eight weeks. It was essentially an extended brainstorming session with a mission to show the world that thinking machines were the future.
Early AI Achievements in the 1950s:
- 1950
→ Harvard undergraduates Marvin Minsky and Dean Edmonds build SNARC, the first neural network computer.
→ Claude Shannon publishes the paper “Programming a Computer for Playing Chess.”
- 1952 → Arthur Samuel develops a self-learning program to play checkers.
- 1954 → The Georgetown-IBM machine translation experiment automatically translates 60 carefully selected Russian sentences into English.
- 1956
→ John McCarthy coins the term, and the field of, ‘artificial intelligence’ at the first-ever AI conference at Dartmouth College.
→ Allen Newell and Herbert Simon demonstrate Logic Theorist (LT). It was designed to mimic the problem solving of a human and often called the first AI program in history.
- 1958 → John McCarthy develops the AI programming language Lisp and publishes “Programs with Common Sense.” The paper proposes the hypothetical Advice Taker, a complete AI system with the ability to learn from experience as effectively as humans.
- 1959
→ Allen Newell, Herbert Simon and J.C. Shaw develop the General Problem Solver (GPS), a program designed to imitate human problem-solving.
→ John McCarthy and Marvin Minsky found the MIT Artificial Intelligence Project.
→ Arthur Samuel coins the term “machine learning” while working at IBM.
Early AI Achievements in the 1960s and Early 1970s:
- 1963 → John McCarthy started an Artificial Intelligence Lab at Stanford.
- 1966 → Joseph Weizenbaum created the first ever chatbot named ELIZA.
- 1969
→ Marvin Minsky and Seymour Papert publish a book titled Perceptrons. It became both the landmark work on neural networks and an argument against future neural network research projects.
→ The first successful expert systems, DENDRAL and MYCIN, are created at Stanford.
- 1967 → Frank Rosenblatt builds the Mark 1 Perceptron, the first computer based on a neural network that ‘learned’ through trial and error.
- 1972
→ WABOT-1, the first full-scale humanoid robot, is built in Japan.
→ The logic programming language PROLOG is created.
4- First AI Winter (1970s – 1980s):
Interest and funding in AI dried up at the dawn of the 1970s. This signaled the first winter in artificial intelligence history, and it can be summed up in three reasons:
- AI research was undirected. At that time researchers would try lots of different ideas with no clear applications in mind – basically, wasting money. The Mansfield Amendment of 1969 therefore put pressure on DARPA to fund “mission-oriented directed research,” which meant funding only projects with clear objectives, such as autonomous tanks and battle management systems.
- Unrealistic AI expectations. The British government released the 1973 Lighthill Report, which detailed the disappointments in AI research and led to severe cuts in funding for AI projects. Frustration with the slow progress of AI development then led to major DARPA cutbacks in academic grants.
- Computers were weak and expensive. The machines of the era simply lacked the memory and processing power to run the ambitious programs researchers envisioned, and computing time was extremely expensive – leasing a computer could cost up to $200,000 a month.
5- Rise of New Research in AI: Expert Systems and Neural Networks (1980s)
A glimmer of hope broke the AI winter when new research in expert systems and neural networks came to the surface. Companies were soon spending more than a billion dollars a year on expert systems. In August 1980, the first-ever national conference of the American Association for Artificial Intelligence (AAAI) was held at Stanford University.
AI hype also revived in pop and movie culture with the release of the first Star Wars movie in 1977, which introduced the human-friendly robots C-3PO and R2-D2.
Expert Systems and Rule-Based AI
Digital Equipment Corporation developed R1 (also known as XCON), the first successful commercial expert system, designed to configure orders for new computer systems.
Neural Networks and Connectionism
In 1982, John Hopfield showed that a neural network could learn and process information in a completely new way. Geoffrey Hinton and David Rumelhart then popularized a method for training neural networks called “backpropagation.” This research paid off by the 1990s, when neural networks became successful at speech recognition and character recognition.
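To give a sense of what backpropagation does, here is a minimal, illustrative NumPy sketch – not Rumelhart and Hinton’s original implementation – of a tiny two-layer network learning XOR. The network size, learning rate, and iteration count are arbitrary choices for the demo.

```python
# A minimal sketch of backpropagation: a tiny two-layer network learning XOR
# with plain NumPy and gradient descent. Hyperparameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10_000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)        # hidden activations
    out = sigmoid(h @ W2 + b2)      # network predictions

    # Backward pass: the chain rule applied layer by layer (squared-error loss)
    d_out = (out - y) * out * (1 - out)     # gradient at the output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)      # gradient at the hidden pre-activation

    # Gradient-descent parameter updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```

The key idea is that the error is propagated backwards through the layers, so every weight in the network gets a usable gradient – exactly the capability that single-layer perceptrons lacked.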
6- Second AI Winter (1987 – 1993)
The renewed momentum died out as general-purpose computing technology improved and cheaper alternatives to specialized AI hardware emerged. During this period, expert systems proved too expensive to maintain and update, and they eventually fell out of favor.
The second AI winter hit from 1987 to 1993. This time, artificial intelligence quietly advanced despite the lack of government funding and public attention. Researchers kept working, because computers were actually getting more powerful.
How? → Moore’s Law
In 1965, Gordon Moore, who would go on to co-found Intel, made a bold prediction: the number of transistors on a chip would roughly double every two years. This simple observation stood the test of time for over 50 years. AI takes full advantage of the continued exponential growth in computing power predicted by Moore’s law, because AI requires massive amounts of data and computing power to train its algorithms. As chips continue to get smaller and more powerful, AI will become more capable and influential.
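As a back-of-the-envelope illustration of why that matters, the short Python sketch below compounds a clean two-year doubling (an idealized assumption, not real chip data) over a few time spans.

```python
# A back-of-the-envelope illustration of Moore's law:
# doubling every two years compounds into enormous growth.
# Assumes an idealized, uninterrupted doubling period.

def transistor_growth(years: int, doubling_period_years: int = 2) -> float:
    """Return the multiplicative growth factor after the given number of years."""
    return 2 ** (years / doubling_period_years)

for years in (10, 20, 50):
    print(f"After {years} years: ~{transistor_growth(years):,.0f}x more transistors")
```

After 50 years the factor is roughly 33 million, which is a big part of why neural-network ideas from the 1980s only became practical decades later.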
7- AI Renaissance (1990s – 2010)
Despite the lack of funding during the AI Winter, the early 90s showed some impressive strides forward in the history of Artificial Intelligence research. Advances in machine learning algorithms, such as neural networks and support vector machines, rekindled interest in AI. Applications included handwriting recognition and spam email filtering.
The biggest accelerator came in 1997, when an AI system beat a reigning world chess champion. This era also introduced AI into everyday life through innovations such as the first household robot vacuum (the Roomba) and the first commercially available speech recognition software for Windows computers (Dragon NaturallySpeaking).
Major AI Achievements between the 1990s and 2000s
- 1997 → IBM’s Deep Blue beats world chess champion Garry Kasparov.
- 1999 → The first Matrix movie is released, fueling major public hype around AI.
- 2002 → iRobot’s Roomba, the first mass-market robot vacuum cleaner, brings AI into homes.
- 2005
→ STANLEY, a self-driving car, wins the DARPA Grand Challenge.
→ The U.S. military begins investing in autonomous robots like Boston Dynamics’ “Big Dog” and iRobot’s “PackBot.”
- 2006 → Companies like Facebook (now Meta), Google, Twitter, and Netflix start using AI algorithms in their platforms.
- 2008 → Google makes breakthroughs in speech recognition and introduces the feature in its iPhone app.
8- AI BOOM: Big Data & Machine Learning (2010s-2020s)
After Deep Blue’s success, and as households grew familiar with AI, a major boom happened in the history of artificial intelligence. In the 2010s, the availability of massive datasets and powerful GPUs led to breakthroughs in deep learning. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) revolutionized computer vision, natural language processing, and speech recognition.
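For readers unfamiliar with what a CNN actually computes, here is an illustrative NumPy sketch of its core operation: sliding a small filter over an image. The tiny image and the edge-detecting filter are made up for the demo; real CNNs learn their filters from data.

```python
# An illustrative sketch of the core CNN operation: sliding a small filter
# over an image. The image and filter here are made up for the demo.
import numpy as np

def convolve2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid (no padding) 2D cross-correlation, as used inside CNN layers."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 6x6 image that is dark on the left half and bright on the right half.
image = np.hstack([np.zeros((6, 3)), np.ones((6, 3))])

# A Sobel-like vertical edge detector: responds strongly at the boundary.
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]])

print(convolve2d(image, kernel))  # large values mark the vertical edge
```

Stacking many such learned filters, plus nonlinearities and pooling, is what let 2010s-era CNNs recognize cats, faces, and street signs at scale.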
The boom accelerated in 2011, after IBM’s Watson beat Jeopardy! champion Ken Jennings, proving that AI systems could understand and answer trivia questions in a remarkably effective way. In that same year, Apple introduced the virtual assistant Siri, which helped people become more familiar with AI technology. After that, there was no stopping it.
Major AI Achievements in the 2010s
- 2011
→ Watson (an IBM computer) wins the game show Jeopardy!, demonstrating that it could comprehend plain language and solve complex problems fast.
→ Apple releases Siri, an AI-powered virtual assistant, through its iOS operating system.
- 2012 → Andrew Ng, founder of the Google Brain Deep Learning project, feeds a neural network trained with deep learning algorithms 10 million images taken from YouTube videos as a training set. The neural network learns to recognize a cat without being told what a cat is, starting a breakthrough era for neural networks and deep learning funding.
- 2014
→ Google’s first self-driving car passes a state driving test.
→ Amazon’s Alexa, a virtual smart-home assistant, is released.
- 2015 → Baidu’s supercomputer Minwa makes strides in image recognition. It uses a special kind of deep neural network, a convolutional neural network, to identify and categorize images with a higher rate of accuracy than the average human.
- 2016
→ Sophia, a humanoid robot created by Hanson Robotics, is introduced. She is capable of facial recognition, verbal communication, and facial expressions, and later becomes the first robot to be granted citizenship.
→ Google DeepMind’s AlphaGo defeats world champion Go player Lee Sedol.
Why Was This Event An Important Turning Point in AI History?
The complexity of this ancient Chinese game was one of the major challenges left to clear in AI. DeepMind’s AlphaGo program, powered by a deep neural network, beat Lee Sedol, the world champion Go player, in a five-game match. The victory is significant because of the huge number of possible moves as the game progresses – over 14.5 trillion possible moves after just four moves. Google had acquired DeepMind in 2014 for a reported USD 400 million.
- 2018 → Google releases natural language processing engine BERT, reducing barriers in translation and understanding by ML applications.
Major AI Achievements in the 2020s
- 2020
→ During the early phases of the SARS-CoV-2 pandemic, Baidu made its LinearFold AI algorithm available to scientific and medical teams working on a vaccine. The system could predict the secondary structure of the virus’s RNA sequence in just 27 seconds, 120 times faster than prior methods.
→ OpenAI releases the natural language processing model GPT-3, which is able to produce text modeled after the way people speak and write.
- 2021 → OpenAI builds on GPT-3 to develop DALL-E, which is able to create images from text prompts.
- 2022
→ The National Institute of Standards and Technology releases the first draft of its AI Risk Management Framework “to better manage risks to individuals, organizations, and society associated with artificial intelligence.”
→ DeepMind unveils Gato, an AI system trained to perform hundreds of tasks, including playing Atari games, captioning images, and using a robotic arm to stack blocks.
→ OpenAI launches ChatGPT, a chatbot powered by a large language model. It gained more than 100 million users in just a few months – but you already know that.
- 2023
→ Microsoft launches an AI-powered version of Bing, its search engine, built on the same technology that powers ChatGPT.
→ Google announces Bard, a competing conversational AI.
→ OpenAI launches GPT-4, its most sophisticated language model yet.
- March 2023
→ An open letter signed by tech leaders and AI researchers calls for a pause on training AI systems more advanced than GPT-4 – an “AI summer” in which to reflect on the risks and consequences of AI for the future.