What does your mind jump to when you think of AI? Is it something that already exists? Or something that has not yet come to be?
Artificial intelligence (AI) may be the most popular tech trend in the world today: experimentation and planning around it are feverish, and new use cases for its application are being thought up constantly.
It is expected to bring innovation to all aspects of human life, from how you live in your home and interact with devices, to how major organisations optimise processes and strategise. This technology holds truly disruptive potential, yet it is already ingrained in society: it is now commonplace to ask Siri or Alexa for help.
Considerable developments mean that the technology is ready to be integrated into critical business processes, marking the point at which it has become a tool rather than a gimmick. While the business application of AI is undoubtedly a landmark, it was undeniably impressive, if not slightly sad, when Google's AlphaGo AI defeated the world's best human Go player.
As we await the tidal wave of disruption that AI is bound to bring, it is important to gain perspective on our current circumstances and on what could happen in the future. As with most things, the best way to understand the present is to learn about the past, and AI has had a long history despite its recent rise to fame.
Humans have been thinking about artificial beings since ancient times, as evidenced in myths and legends in which creatures with their own volition are crafted by great creators. This perhaps stems from a wish to understand whether we ourselves were designed to be as we are.
My research includes examples of mythical ancient Greek and Egyptian statues capable of moving and thinking freely; these are thought to be the oldest known examples of humans imagining a non-biological being with intelligence.
Theory and philosophy are all well and good, but the birth of the computer allowed these ancient human ideas to be researched and explored in reality. Computers were refined from Second World War-era machines such as the Z3 and the code-breaking Colossus, and a new era commenced when computers like the IBM 702 came to be.
The IBM 702 was the first computer used for AI research, and artificial intelligence became formally recognised as a field of academic research in 1956. Scientific thinking was symbiotic with AI research conducted using computers, and neurology had developed to the point of understanding that the brain is an electrical network of neurons.
Building on pioneering theories from Alan Turing, researchers questioned whether an artificial brain could be created to mirror this newly discovered electrical structure and process.
The learning process
Today, when we think of AI, our immediate association is with conversation, thanks to the string of familiar names and characters created by the likes of Apple and Amazon. The road to this point also began soon after the birth of AI, as natural language processing quickly became a focus of research.
Unlike humans, machines learned to read before they learned to speak, and work from American computer scientist Daniel Bobrow began this process. Bobrow's AI program, called STUDENT, could recognise and solve algebra problems posed in words.
This progress was accelerated by ELIZA, the AI created by German-American computer scientist Joseph Weizenbaum. The platform was so comparatively advanced that some people are said to have believed they were communicating with a human being.
The ELIZA platform, right down to its human name and its sophistication, is a trace of what exists in our world today. It is easy, therefore, to understand why Weizenbaum is considered by many to be a founding father of artificial intelligence.
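ELIZA's conversational trick was surprisingly simple: it matched the user's sentence against a list of patterns and echoed fragments back inside canned templates. The sketch below illustrates that idea in miniature; the rules, responses, and pronoun swaps are invented for illustration and are not Weizenbaum's original script.

```python
import re

# Illustrative pattern/response pairs in the spirit of ELIZA.
# These rules are a simplified sketch, not Weizenbaum's originals.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"because (.*)", "Is that the real reason?"),
    (r"(.*)", "Please tell me more."),  # fallback keeps the conversation going
]

def reflect(fragment):
    """Swap first- and second-person words so echoed fragments read naturally."""
    swaps = {"i": "you", "my": "your", "me": "you", "am": "are"}
    return " ".join(swaps.get(word, word) for word in fragment.lower().split())

def respond(sentence):
    """Return the response template of the first rule that matches."""
    text = sentence.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I need a holiday"))    # Why do you need a holiday?
print(respond("I am feeling tired"))  # How long have you been feeling tired?
```

Even this toy version shows why early users were fooled: reflecting the speaker's own words back as a question creates a convincing illusion of understanding without any real comprehension.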
The dawn of the robot
The idea of speaking to an artificial intelligence platform is exciting, but as we have established, what humans are really drawn to is the thought of a moving, physical being, something tangible that shares the world with us.
In fact, truly physical applications of artificial intelligence, in the form of robots, have not yet reached the level of practicality of software AI, which has already found use in business.
However, huge advancements have been made in robotics, beginning in Japan in the late 1960s and early 1970s, when the WABOT-1 was completed. This design is credited as the first full-scale intelligent humanoid robot.
Not only could the robot move in a formidable way, with limb control right through to the hands, it could also communicate through a conversation system.
The 1980s was a period of massive momentum for AI, largely due to the arrival of the 'expert system'. This was a program capable of carrying out human-like decision-making processes, and it made achievements in solving problems previously unmanageable for artificial intelligence.
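At their core, expert systems encoded a human specialist's judgement as if-then rules and chained those rules together until a conclusion was reached. The sketch below shows that forward-chaining idea under invented rules; the facts and conclusions are illustrative only and not drawn from any real system.

```python
# A minimal forward-chaining sketch of how an 'expert system' encodes
# decisions as if-then rules. Each rule says: if all the condition facts
# hold, conclude a new fact. The rules below are invented for illustration.
RULES = [
    ({"has_fever", "has_cough"}, "has_flu"),
    ({"has_flu"}, "recommend_rest"),
    ({"recommend_rest"}, "recommend_fluids"),
]

def infer(initial_facts):
    """Repeatedly apply rules until no new conclusions can be drawn."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(infer({"has_fever", "has_cough"})))
```

Real systems of the era, with thousands of hand-written rules, worked on the same principle; the difficulty of authoring and maintaining those rules is what made the common-sense problem described next so pressing.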
It was also realised in this period that AI required the ordinary common sense possessed by a human being in order to function with human-like intelligence and the ability to reason.
Cyc was the 1980s answer to this: an AI project designed to hold a substantial knowledge base. The project brought the realisation that such a platform would have to be taught a vast amount of information, potentially requiring many years.
The age of artificial intelligence
Clearly unimaginable quantities of data were needed to build something even vaguely resembling the complexity of human thought, memory and activity; therefore innovation and progress in other areas became necessary.
Big data is crucial here: the term covers not only data sets containing vast amounts of information but also the capture, storage and sharing of that information. Innovation in machine learning has also been essential in paving the way to where we are now.
As mentioned in the introduction to this exploration of the history of AI, enterprises across the world are looking to AI platforms to streamline business processes and open the doors to the future. For example, Infor recently released its own cloud-based AI platform, named Coleman.