OpenAI and Google DeepMind tackle safety concerns with the development of a new machine learning algorithm.
OpenAI and Google DeepMind are working together to make artificial intelligence safer.
The two companies have produced an algorithm that learns from human feedback, providing a more reliable machine learning process.
The move comes after concerns over the rise of AI, with tech luminaries such as Bill Gates, Elon Musk, and even Stephen Hawking weighing in on the need for safer machine learning platforms.
OpenAI and DeepMind have sought to develop a process that will help to make AI safer to use and more easily trainable.
They have achieved this through ‘reinforcement learning’ from human feedback, improving the algorithm through several stages of human engagement. The process involves the algorithm completing tasks within a particular environment whilst participants provide responses that are fed back to the machine.
This allows the algorithm to learn and alter its behavior towards the actions humans prefer. During the tasks, the algorithm keeps adapting its next moves according to a ‘reward predictor’, a model built from the human feedback that estimates how well it is doing.
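The idea of an agent guided by a reward predictor rather than an environment reward can be sketched as follows. This is a minimal toy illustration, not the companies' actual system: the 1-D walk, the Q-learning agent, and the stand-in predictor (which simply rewards states near a target position, mimicking a model trained on human feedback) are all assumptions made for the example.

```python
import random

def reward_predictor(state):
    # Stand-in for a model trained on human comparisons: the closer
    # the state is to position 5, the higher the predicted reward.
    return -abs(state - 5)

def run_episode(q_values, epsilon=0.2, steps=30):
    """One episode of tabular Q-learning on a 1-D walk over states 0..9.
    The agent never sees an environment reward; the learning signal
    comes entirely from the reward predictor."""
    state = 0
    for _ in range(steps):
        # Epsilon-greedy choice between moving left (-1) and right (+1).
        if random.random() < epsilon:
            action = random.choice([-1, 1])
        else:
            action = max([-1, 1], key=lambda a: q_values[(state, a)])
        next_state = min(9, max(0, state + action))
        reward = reward_predictor(next_state)  # predicted, not environmental
        # Standard Q-learning update using the predicted reward.
        best_next = max(q_values[(next_state, a)] for a in (-1, 1))
        q_values[(state, action)] += 0.5 * (
            reward + 0.9 * best_next - q_values[(state, action)])
        state = next_state
    return state

random.seed(0)
q = {(s, a): 0.0 for s in range(10) for a in (-1, 1)}
for _ in range(200):
    run_episode(q)
```

After training, the learned values steer the agent towards the states the predictor favours, even though the environment itself never supplied a reward.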
Demonstrations show the algorithm successfully completing tasks as human participants train it to recognise which steps it should take to improve its future judgement.
One example involved people ‘training’ a graphic of a lamp to do back flips. Participants would watch two clips and select the one in which they felt the AI graphic performed best. This choice was fed back to the algorithm, which adapted its following sequence by gaining an awareness of the preferred course of action.
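The "pick the better of two clips" feedback can be turned into a reward predictor by modelling the probability that one clip beats another. The sketch below is illustrative only and assumes a linear reward model over hand-made clip features; none of these names come from the actual research code.

```python
import math

def predict_reward(weights, features):
    """Linear reward model over hand-crafted clip features (an assumption)."""
    return sum(w * f for w, f in zip(weights, features))

def preference_prob(weights, clip_a, clip_b):
    """Modelled probability that a human prefers clip A over clip B:
    exp(r(A)) / (exp(r(A)) + exp(r(B)))."""
    ra = predict_reward(weights, clip_a)
    rb = predict_reward(weights, clip_b)
    return math.exp(ra) / (math.exp(ra) + math.exp(rb))

def train(comparisons, n_features, lr=0.5, epochs=200):
    """Fit the reward weights by gradient ascent on the log-likelihood
    of the recorded human choices."""
    weights = [0.0] * n_features
    for _ in range(epochs):
        for preferred, rejected in comparisons:
            p = preference_prob(weights, preferred, rejected)
            # Gradient of log P(preferred beats rejected) w.r.t. weights.
            for i in range(n_features):
                weights[i] += lr * (1 - p) * (preferred[i] - rejected[i])
    return weights

# Toy data: each clip is a feature vector, and the human consistently
# prefers higher values of the first feature (say, "height of the flip").
comparisons = [([1.0, 0.2], [0.1, 0.9]),
               ([0.8, 0.5], [0.3, 0.4])]
w = train(comparisons, n_features=2)
```

Once fitted, the predictor assigns higher reward to the kind of clip humans kept choosing, and that score can then drive ordinary reinforcement learning.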
Despite the progress that has been made, there are several concerns with this method: training algorithms this way is limited by the skill of the person supplying the feedback. Weak or inconsistent feedback can have adverse effects, undermining the efficiency of machine learning training.
The process as a whole provides a way for machine learning systems to develop and extend their intelligence whilst completing complex tasks. This is especially useful in industry sectors such as autonomous driving, which rely on AI methods to monitor vehicle performance; it allows machines to predict how to overcome common challenges.
Training algorithms to become more advanced through authentic human interactions, rather than pre-programmed predictions, proves highly beneficial. Machine learning systems process selected information about tasks people frequently carry out, to better understand future behaviors.
Future methods that would benefit AI training include reducing the amount of feedback humans need to provide, which would allow machines to become more sophisticated whilst processing information at a faster rate. Another factor that would improve productivity is handling natural language, which would allow AI systems to generate and apply methods tailored to specific requests and carry out tasks more efficiently.
In a world where cognitive machines are leading the way for businesses to raise technical standards, improving machine intelligence is a crucial factor that companies are rigorously considering.