Neural networks are the core of deep learning algorithms. They’re the reason we have self-driving cars, AI-assisted cancer detection, and chatbots. But how did it all start? When was the first neural network built, and who invented it? The history of neural networks began in the 1940s with neuropsychologist Warren McCulloch and mathematician Walter Pitts, and the field has grown into a continuing success story.
“I confidently expect that within a matter of 10 or 15 years, something will emerge from the laboratories which is not too far from a robot or a science fiction thing.”
–Claude Shannon, 1961
1940s | Who Invented Artificial Neural Networks?
- In 1943, neuropsychologist Warren S. McCulloch and mathematician Walter Pitts published “A Logical Calculus of the Ideas Immanent in Nervous Activity”.
- They wanted to explain how neurons in the brain might work so they modeled a simple artificial neural network using electrical circuits. The research demonstrated how the brain produces complex patterns that can be simplified down to a binary logic structure with only true/false connections.
- The history of neural networks started with the publication of McCulloch and Pitts’ paper.
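The McCulloch-Pitts model reduces a neuron to a threshold unit over binary inputs: the unit fires (outputs 1) only when enough of its inputs are active. A minimal sketch of the idea in Python (the function names and threshold values below are illustrative, not their original notation):

```python
def mp_neuron(inputs, threshold):
    """McCulloch-Pitts unit: fire (1) iff enough binary inputs are active."""
    return 1 if sum(inputs) >= threshold else 0

# Boolean logic falls out of the same unit with different thresholds,
# which is how true/false connections build up complex patterns.
def AND(a, b):
    return mp_neuron([a, b], threshold=2)

def OR(a, b):
    return mp_neuron([a, b], threshold=1)
```

Choosing a threshold equal to the number of inputs yields AND; a threshold of one yields OR, showing how binary logic emerges from a single simple unit.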
1950 – 1960 | What Was The First Neural Network?
- Psychologist Frank Rosenblatt developed the first neural network, the perceptron, in 1958, and used it to enable a computer to learn to distinguish between cards marked on the left and cards marked on the right.
- In 1959, Bernard Widrow and Marcian Hoff developed models called “Adaline” and “Madaline”. Adaline is short for “ADAptive LINear Element”; Madaline stands for “Multiple ADAptive LINear Elements”.
- Adaline was able to recognize binary patterns so that if it was reading streaming bits from a phone line, it could predict the next bit.
- Madaline was the first neural network applied to a real-world problem: an adaptive filter that eliminates echoes on phone lines, still in use today.
- This fueled the hype around AI technology, then known as “thinking machines”: in 1958, The New York Times published an article on the potential of neural networks.
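Rosenblatt's perceptron learns by nudging its weights toward each misclassified example until the two classes separate. A minimal sketch of the learning rule, using an invented two-pixel "card" encoding (`[1, 0]` = marked on the left, `[0, 1]` = marked on the right) rather than Rosenblatt's actual hardware setup:

```python
def predict(w, b, x):
    """Threshold unit: fire if the weighted sum exceeds zero."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Rosenblatt's rule: adjust weights toward each misclassified example.
    samples: list of (feature_vector, label) pairs with label in {0, 1}."""
    w, b = [0.0] * len(samples[0][0]), 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = y - predict(w, b, x)  # 0 when correct, +/-1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy data: cards marked on the left (label 0) vs. on the right (label 1).
cards = [([1, 0], 0), ([0, 1], 1)]
w, b = train_perceptron(cards)
```

After training, `predict(w, b, [0, 1])` classifies a right-marked card as class 1. The update rule only changes weights on mistakes, which is what made the perceptron a genuine learning machine rather than a fixed logic circuit.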
1970 – 1980 | When Was the First AI Winter?
- 1973: The British government cut funding for AI research in all of its universities except for three.
- 1974: In his PhD thesis, Paul Werbos proposed applying backpropagation to train neural networks.
- 1974: DARPA cut funding for AI research as interest had started to fade.
- The AI winter made it extremely hard for neural network projects to find funding. Many worried that this period of pessimism would spread across the community and halt innovative projects.
1980 – 1990 | How Did The AI Winter End?
- In 1981, the Japanese Ministry of International Trade and Industry dedicated $850 million to the Fifth Generation computer project. The project’s goal was to build machines that could think and have conversations like human beings.
- 1982: John Hopfield developed the Hopfield Network, a recurrent neural network.
- In 1983, in response to the fifth-generation project, DARPA began to fund AI research through the Strategic Computing Initiative. This is when the history of neural networks picked up again.
- 1985: Geoffrey Hinton and Terrence J. Sejnowski developed the Boltzmann Machine, a stochastic recurrent neural network.
- 1986: Rumelhart, Hinton, and Williams popularized backpropagation for training multilayer perceptrons.
- In 1987, the International Neural Network Society (INNS) was formed; its journal, Neural Networks, followed in 1988.
- 1989: Yann LeCun published a paper demonstrating how backpropagation, combined with architectural constraints such as weight sharing, could train a network to recognize handwritten zip code digits. This is when deep learning became a practical reality.
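The Hopfield Network mentioned above stores patterns as attractors and recovers them from corrupted inputs. A compact sketch, assuming ±1-valued patterns and synchronous updates (Hopfield's original formulation updates neurons asynchronously):

```python
import numpy as np

def store(patterns):
    """Hebbian learning: W accumulates outer products of the +/-1 patterns."""
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        p = np.asarray(p, dtype=float)
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, state, steps=10):
    """Repeatedly threshold W @ s until the state settles into an attractor."""
    s = np.asarray(state, dtype=float)
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return s

pattern = [1, 1, -1, -1, 1]
W = store([pattern])
noisy = [1, 1, -1, -1, -1]   # last bit flipped
recovered = recall(W, noisy)
```

With one bit corrupted, the update pulls the state back to the stored pattern; this content-addressable memory behavior is what made Hopfield's 1982 paper influential in reviving interest in neural networks.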
1990s – Early 2000s | Are Neural Networks Too Good to Be True?
- In the late 1990s and early 2000s, artificial intelligence came to be seen as a field that had failed to live up to its promises. In 2005, John Markoff wrote in The New York Times: “At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers.”
- Regardless, new innovative projects and advancements were happening in the space of neural networks.
- 1997: Hochreiter & Schmidhuber proposed the long short-term memory (LSTM) network, a recurrent neural network architecture.
- 1998: Yann LeCun published “Gradient-Based Learning Applied to Document Recognition”.
- 2006: Geoffrey Hinton introduced deep belief networks in his paper.
- In 2009, Ruslan Salakhutdinov and Geoffrey Hinton introduced deep Boltzmann machines.
- 2009: Google began its self-driving car project, which relied on neural networks.
2010s – Present (as of 2023) | What’s the Future of Neural Networks?
- Between 2009 and 2012, Jürgen Schmidhuber’s research group achieved competition-winning results with recurrent neural networks and deep feedforward neural networks.
- Between 2011 and 2014, personal assistants like Siri, Google Now, and Cortana used speech recognition to answer questions and perform simple tasks.
- 2014: Ian Goodfellow and his colleagues introduced the Generative Adversarial Network (GAN).
- 2018: Jacob Devlin and his colleagues from Google published BERT, a transformer-based model for Natural Language Processing.
- 2020: OpenAI released GPT-3, a deep learning model that produces human-like text.
- 2022: OpenAI released ChatGPT, an advanced chatbot.
The history of neural networks is full of ups and downs. Over the years, many stopped believing in the potential of AI, but we are now experiencing its hottest summer yet. We’re at a point where no one can deny the power of AI. As Edsger W. Dijkstra put it, “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” One can only wonder what AI has in store for us.