The History of Artificial Intelligence
Embark on a Journey Through the History of Artificial Intelligence
But first, what is Artificial Intelligence? What has Artificial Intelligence accomplished in the past? And what will it look like in 5, 20, 50 or even 100 years?
There are many questions surrounding the concept of artificial intelligence. We can answer many of them by traveling back to the past and examining how it all began.
Definition of Artificial Intelligence
Artificial Intelligence is composed of two words.
Artificial describes something that is not real: something fake or, more precisely, simulated. It can stand in for something real, and when it does, there is usually a good reason for it.
Intelligence encapsulates a lot of meanings. These include logic, understanding, self-awareness, learning, knowledge, planning and creativity.
Intelligence is what allows us to make choices and decisions. The same applies to animals; interestingly, intelligence in animals can even be compared across species. In both cases (human intelligence and animal intelligence) we’re talking about natural intelligence (NI).
To understand more about Artificial Intelligence, we have to look at its history. As humans, we perceive our environment, learn from it and take action based on what we discover.
History of Artificial Intelligence
Artificial Intelligence began over 100 years ago.
Rossum’s Universal Robots
In 1920, the Czech writer Karel Čapek published a work of science fiction called Rossumovi Univerzální Roboti (Rossum’s Universal Robots). This is better known as R.U.R., and it introduced the word ‘robot’ to the world.
R.U.R. is about a factory that creates artificial people called robots. In R.U.R., the robots are living creatures that more closely resemble clones. At the beginning of the story, they work for the humans that created them. But in the end, they rebel, and ultimately, the human race is wiped out.
Alan Turing
Alan Mathison Turing was an English computer scientist, mathematician, logician, and theoretical biologist.
Turing was highly influential in the development of theoretical computer science. With the Turing machine, he formalized the concepts of algorithm and computation; it is considered the first model of a general-purpose computer. For this, Turing is regarded as the father of theoretical computer science and artificial intelligence.
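To make the idea concrete, here is a minimal sketch of the Turing-machine concept in Python. This is a toy illustration, not Turing's original formulation: a finite table of rules reads and writes one tape cell at a time, which is enough to show how purely mechanical steps can carry out a computation.

```python
# Toy Turing machine: a finite control reading and writing one tape
# cell at a time. This machine flips every bit on the tape and halts
# at the first blank. (Illustrative sketch only.)
def run_bit_flipper(tape):
    tape = list(tape) + ["_"]          # "_" marks the blank end of the tape
    head, state = 0, "scan"
    # transition table: (state, symbol) -> (symbol to write, head move, next state)
    rules = {
        ("scan", "0"): ("1", +1, "scan"),
        ("scan", "1"): ("0", +1, "scan"),
        ("scan", "_"): ("_", 0, "halt"),
    }
    while state != "halt":
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).rstrip("_")

print(run_bit_flipper("1011"))   # -> 0100
```

Despite its simplicity, this read-write-move loop is the same mechanism Turing showed could, in principle, express any algorithm.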
The Dartmouth Workshop
In the early 1950s, there were various names for the field of “thinking machines.” These included cybernetics, automata theory, and complex information processing.
In 1955, John McCarthy, then a young Assistant Professor of Mathematics at Dartmouth College, decided to organize a group to clarify and develop ideas about thinking machines. He picked the name ‘Artificial Intelligence’ for the new field. He chose the name partly for its neutrality: it avoided a narrow focus on automata theory, and it steered clear of cybernetics, which was heavily focused on analog feedback.
In early 1955, McCarthy approached the Rockefeller Foundation to request funding for a summer seminar at Dartmouth for about 10 participants. In June, he and Claude Shannon, a founder of Information Theory at Bell Labs, met with Robert Morison, Director of Biological and Medical Research, to discuss the idea and possible funding. Morison was unsure whether money would be made available for such a visionary project.
The proposal stated:
We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find out how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.
Deep Blue
Deep Blue was a chess-playing computer developed by IBM. It is known for being the first computer chess-playing system to win both a chess game and a chess match against a reigning world champion under regular time controls.
Deep Blue won its first game against a world champion on 10 February 1996, when it defeated Garry Kasparov in game one of a six-game match. However, Kasparov won three and drew two of the following five games, defeating Deep Blue by a score of 4–2. Deep Blue was then heavily upgraded and played Kasparov again in May 1997. Deep Blue won game six, thereby winning the six-game rematch 3½–2½ and becoming the first computer system to defeat a reigning world champion in a match under standard chess tournament time controls. Kasparov accused IBM of cheating and demanded a rematch. IBM refused and retired Deep Blue.
Development for Deep Blue began in 1985 with the ChipTest project at Carnegie Mellon University. This project eventually evolved into Deep Thought, at which point the development team was hired by IBM.
DeepMind and the Atari Games
In 2013, DeepMind, one of the most important artificial intelligence research groups in the world, presented an A.I. that could play several Atari games at the level of top human players. They used reinforcement learning and neural networks to let the system learn these games. The agent received only the raw screen pixels and the game score as input; no hand-crafted rewards were attached to the specific moves it made.
In 2015, they introduced a more capable agent that successfully played 49 classic Atari games.
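For a rough sense of the reinforcement-learning idea behind these agents, here is a minimal sketch: tabular Q-learning on a toy five-state corridor, not DeepMind's actual pixel-based deep network. The agent is never told which moves are good; it only sees states and a reward at the goal, and learns values by trial and error.

```python
import random

# Toy reinforcement learning: tabular Q-learning on a 5-state corridor.
# The agent starts at state 0 and earns a reward of 1.0 only when it
# reaches state 4; every other step yields 0.
N_STATES = 5
ACTIONS = [+1, -1]          # move right, move left
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

random.seed(0)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; the episode ends with reward 1.0 at the goal state."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for _ in range(500):                      # training episodes
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)   # the learned policy moves right, toward the goal
```

DeepMind's Atari agents replaced the lookup table with a deep neural network that estimates these same action values directly from screen pixels, but the underlying learning rule is the same trial-and-error value update.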
There are already Artificial Intelligence systems that can surpass humans in specific areas, such as playing Go or analyzing data. Today, when we talk about Artificial Intelligence systems in production, we are referring to specialists. But there is still no Artificial General Intelligence (AGI) that can act as broadly as a human, and there is no superintelligence that exceeds human intelligence altogether.
Here at Intelygenz, we utilise the power of A.I. to help businesses achieve more. Through Intelligent Process Automation, businesses can automate the most complex aspects of their processes – including making decisions – using A.I. technologies. A.I. can even be applied to custom software projects, turning them into Intelligent Products that can help businesses overcome their biggest challenges and deliver capabilities that may have been unimaginable.
Find out how A.I. can take your business further by exploring our free resources.