The idea of a “thinking machine” dates back to ancient Greece. But since the advent of electronic computing (and relevant to some of the topics discussed in this article), the major events and milestones in the evolution of AI include:
1950
Alan Turing publishes Computing Machinery and Intelligence (link is outside ibm.com). In this paper, Turing, famous for cracking the German ENIGMA code during World War II and often referred to as the “father of computer science,” asks the question: “Can machines think?”
He then proposes a test, now known as the “Turing test,” in which a human interrogator attempts to distinguish between a computer’s written response and a human’s. Although this test has undergone much scrutiny since its publication, it remains an important part of the history of AI and an ongoing concept in philosophy, as it draws on ideas from linguistics.
1956
John McCarthy coins the term “artificial intelligence” at the first-ever AI conference at Dartmouth College. (McCarthy later invented the Lisp language.) Later that year, Allen Newell, J.C. Shaw, and Herbert Simon create Logic Theorist, the first working AI computer program.
1958
Frank Rosenblatt builds the Mark 1 Perceptron, the first computer based on a neural network that “learns” through trial and error. In 1969, Marvin Minsky and Seymour Papert publish the book Perceptrons, which becomes both a landmark work on neural networks and, at least for a time, an argument against future neural network research initiatives.
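To make the perceptron’s “trial and error” learning concrete, here is a minimal sketch of the classic perceptron update rule in Python. It is a modern illustration only; the AND-gate data, learning rate, and epoch count are assumptions chosen for brevity, not details of Rosenblatt’s hardware.

```python
import numpy as np

# Toy training set for a logical AND gate (illustrative assumption).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)  # weights, initially zero
b = 0.0          # bias
lr = 0.1         # learning rate (arbitrary choice)

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0  # step-function activation
        error = target - pred              # the "trial and error" signal
        w += lr * error * xi               # nudge weights toward the target
        b += lr * error

print(w, b)  # learned weights now separate the two classes
```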
1980s
Neural networks that use a backpropagation algorithm to train themselves become widely used in AI applications.
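As an aside, backpropagation itself fits in a few lines: the sketch below trains a tiny two-layer network on the XOR problem with plain NumPy. The network size, learning rate, and iteration count are illustrative assumptions, not a claim about how 1980s systems were built.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

# A 2-4-1 network: one hidden layer of 4 sigmoid units.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(10_000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error gradient layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates (learning rate 0.5, arbitrary choice)
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should approach [0, 1, 1, 0]
```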
1995
Stuart Russell and Peter Norvig publish Artificial Intelligence: A Modern Approach (link is outside ibm.com), which becomes one of the leading textbooks in the study of AI. In it, they explore four potential goals or definitions of AI, which differentiate computer systems based on rationality and on thinking versus acting.
1997
IBM’s Deep Blue defeats then-world chess champion Garry Kasparov, winning the 1997 rematch after losing their first match in 1996.
2004
John McCarthy writes the article What is artificial intelligence? (link is outside ibm.com), offering an oft-cited definition of AI. By this time, the era of big data and cloud computing is underway, enabling organizations to manage ever-larger data estates, which will one day be used to train AI models.
2011
IBM Watson® beats champions Ken Jennings and Brad Rutter on Jeopardy! Around the same time, data science begins to emerge as a popular discipline.
2015
Baidu’s Minwa supercomputer uses a special deep neural network called a convolutional neural network to identify and categorize images with a higher accuracy rate than the average human.
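For orientation, the sketch below shows the basic shape of a convolutional neural network in PyTorch (assumed installed). The tiny two-layer architecture and 32x32 input size are illustrative assumptions and bear no resemblance to Minwa’s actual, far larger network.

```python
import torch
from torch import nn

class TinyCNN(nn.Module):
    """A deliberately small CNN for 32x32 RGB images."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learn local filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample 32 -> 16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample 16 -> 8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)                 # shape: (N, 32, 8, 8)
        return self.classifier(x.flatten(1))

logits = TinyCNN()(torch.randn(1, 3, 32, 32))  # one random "image"
print(logits.shape)  # torch.Size([1, 10]) -- one score per class
```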
2016
DeepMind’s AlphaGo program, powered by a deep neural network, beats Lee Sedol, the world champion Go player, in a five-game match. The victory is significant given the enormous number of possible moves as the game progresses (over 14.5 trillion after just four moves). Google had acquired DeepMind in 2014 for a reported USD 400 million.
2022
A rise in large language models (LLMs), such as OpenAI’s ChatGPT, creates an enormous shift in AI performance and its potential to generate business value. With these new generative AI practices, deep learning models can be pretrained on large amounts of data.
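To show what “pretrained” means in practice, the sketch below loads a publicly available pretrained generative model through the Hugging Face transformers library (assumed installed along with PyTorch; the gpt2 model and the prompt are illustrative choices, not a reference to ChatGPT itself).

```python
from transformers import pipeline

# Download a small pretrained language model and wrap it for text generation.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Artificial intelligence will change business by",
    max_new_tokens=30,       # cap the length of the generated continuation
    num_return_sequences=1,  # ask for a single completion
)
print(result[0]["generated_text"])
```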
2024
The latest AI trends point to a continuing AI renaissance. Multimodal models that can take multiple types of data as input provide richer, more robust experiences. These models bring together the image recognition capabilities of computer vision and the speech recognition capabilities of NLP. Smaller models are also making strides in an era of diminishing returns from massive models with large parameter counts.