The history of Artificial Intelligence is as fascinating as the future it promises. For many, AI is a recent phenomenon that rose with the rise of the connected world. But research and speculation about AI are much older than most believe. Automata have been described since antiquity. Formal logic and epistemology, essential pillars of AI, have been studied since at least 400 BC in various civilizations. Ever since then, there have been philosophical discussions and practical experiments aimed at building AI. But the history of AI in the modern sense starts around the time of John von Neumann and Alan Turing (i.e. the late 1940s). The first definition of AI came around this time from Turing, who proposed that a machine is intelligent if a human cannot distinguish between conversations with the machine and conversations with a human. This changed the paradigm from “can machines think?” to “can machines think like humans?” This is the now-famous Turing Test.

Soon, significant research in AI was started by academia and by corporations like IBM and Bell Labs. Research focused specifically on machines beating humans at the games of chess and checkers. John McCarthy, Marvin Minsky and others gave a significant push to AI and cognitive science in the late 1950s. Because of their influence, AI around this time, and for a couple of subsequent decades, focused on symbolic logic. ALGOL, Lisp and other functional languages were developed around this time. The 1960s saw the first research in machine learning, with Bayesian methods being the first models used. Initial research on neural networks also started around this time. The first neural networks had only one layer and were called perceptrons. Towards the end of the 1960s, Marvin Minsky and Seymour Papert published Perceptrons, a book delving into the limitations of perceptrons. This book, amongst other things, led to a lull in AI research in the 1970s, often called the AI winter.
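To make that limitation concrete, here is a minimal sketch of a single-layer perceptron (my own illustrative code, not taken from any of the original systems). It learns the linearly separable AND function easily, but, as Minsky and Papert showed, no single layer can ever represent XOR:

```python
import numpy as np

# A minimal single-layer perceptron, trained with the classic
# perceptron update rule (illustrative sketch only).
def train_perceptron(X, y, epochs=50, lr=0.1):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            err = target - pred
            w += lr * err * xi
            b += lr * err
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Linearly separable: AND is learned perfectly.
w, b = train_perceptron(X, np.array([0, 0, 0, 1]))
print([int(x @ w + b > 0) for x in X])  # [0, 0, 0, 1]

# Not linearly separable: a single layer can never represent XOR,
# the central limitation highlighted in Minsky and Papert's book.
w, b = train_perceptron(X, np.array([0, 1, 1, 0]))
print([int(x @ w + b > 0) for x in X])  # never matches [0, 1, 1, 0]
```

The XOR failure is not a matter of training longer: a single layer draws one line through the input space, and no line separates XOR's classes. Stacking layers solves this, which is exactly the direction research later took.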

In the 1980s, a new form of AI system started gaining attention. Called expert systems, these were machines built to run niche programs, written in Lisp, that solved specific problems using rule-based systems. But very soon the market for these specialised AI machines collapsed as generic computers became more powerful. Also around this time, C++ and object-oriented programming gained more popularity than Lisp and other list-based programming languages.
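For a flavour of how such rule-based systems worked, here is a toy forward-chaining inference sketch (illustrative Python with made-up facts and rules; real expert systems were far richer and, as noted above, typically written in Lisp):

```python
# Each rule maps a set of required facts to a new conclusion.
rules = [
    ({"has_fever", "has_cough"}, "has_flu"),
    ({"has_flu"}, "needs_rest"),
]

def infer(facts, rules):
    facts = set(facts)
    changed = True
    while changed:  # keep firing rules until no new facts appear
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_fever", "has_cough"}, rules))
# {'has_fever', 'has_cough', 'has_flu', 'needs_rest'}
```

The appeal was that domain experts could, in principle, encode their knowledge as such rules without programming the solution procedure itself; the brittleness of hand-written rules was part of why the approach faded.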

This led to another, very long winter in AI, one that lasted all the way until the mid-2000s. There are several reasons for the winters in AI, the primary one being hype-and-bust cycles: companies and investors spend significant amounts of money on technologies that promise ground-breaking results in AI but eventually do not live up to those promises. Another reason is that, until the mid-2000s, different approaches to AI were pursued in silos. For example, symbolic AI research groups rarely interacted with neural network research groups. That changed in the 2000s, when the focus shifted from siloed research to solving real-world problems.

Today, AI consists of several related fields: linguistics, statistical modelling, machine learning, deep learning, cognitive science, signal processing and the often-ignored field of symbolic logic. With computing power having grown tremendously over the last few years, and the engineering aspects of AI showing a high level of maturity, we could be seeing a new era of AI. My sincere hope is that we don’t enter another winter because of over-hype and false promises.