The encryption method allows AIs to communicate securely with each other
A research paper titled 'Learning to Protect Communications with Adversarial Neural Cryptography', written by Google Brain team members Martin Abadi and David G. Andersen, explains that neural networks, given the names Alice, Bob and Eve, were each trained to master their own role in the communication.
Alice was responsible for converting her original plain-text message into an encrypted form to communicate with Bob, while Eve had the task of attempting to eavesdrop without being given access to the key.
The experiment showed that machines are able to learn how to protect their messages from others, without being taught any specific cryptographic algorithm.
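The division of roles (though not the learning itself, which the paper leaves to adversarially trained networks) can be sketched with a simple stand-in: Alice and Bob share a secret key and use a one-time-pad-style XOR as a placeholder for whatever transformation the networks actually learned, while Eve, lacking the key, can only guess. The 16-bit message size follows the reported experiment; everything else here is illustrative.

```python
import random

N_BITS = 16  # message length reported for the experiment

def alice_encrypt(plaintext, key):
    # Alice transforms the plaintext using the secret she shares with Bob.
    # XOR is a stand-in for the learned transformation, not the paper's method.
    return [p ^ k for p, k in zip(plaintext, key)]

def bob_decrypt(ciphertext, key):
    # Bob, holding the same key, inverts Alice's transformation exactly.
    return [c ^ k for c, k in zip(ciphertext, key)]

def eve_guess(ciphertext):
    # Eve sees only the ciphertext; without the key she can do no better
    # than flipping a coin for each bit.
    return [random.randint(0, 1) for _ in ciphertext]

random.seed(0)
key = [random.randint(0, 1) for _ in range(N_BITS)]
plaintext = [random.randint(0, 1) for _ in range(N_BITS)]

ciphertext = alice_encrypt(plaintext, key)
recovered = bob_decrypt(ciphertext, key)
eve_correct = sum(g == p for g, p in zip(eve_guess(ciphertext), plaintext))

print(recovered == plaintext)  # Bob recovers the message exactly
print(eve_correct)             # Eve gets only around half the bits right
```

In the actual experiment the three parties are neural networks and the "cipher" emerges from training rather than being specified, but the information asymmetry is the same: Bob holds the key, Eve does not.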
As reported by New Scientist, the scenario played out over some 15,000 exchanges before Bob was able to convert Alice's ciphertext back into plain text, with Eve only able to guess 8 of the message's 16 bits, which is no better than chance.
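Guessing 8 of 16 bits correctly is exactly what chance predicts: with no information about the message, each bit is a coin flip, so the expected number of correct bits is 16 × 0.5 = 8. A quick simulation (the 16-bit size comes from the report; the rest is illustrative) confirms this:

```python
import random

random.seed(42)
N_BITS, TRIALS = 16, 10_000

total_correct = 0
for _ in range(TRIALS):
    message = [random.randint(0, 1) for _ in range(N_BITS)]
    guess = [random.randint(0, 1) for _ in range(N_BITS)]  # blind guessing
    total_correct += sum(g == m for g, m in zip(guess, message))

mean_correct = total_correct / TRIALS
print(mean_correct)  # averages out close to 8.0, i.e. chance level
```

In other words, Eve's eavesdropping attempts conveyed no information at all about the protected messages.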
Exactly how the learned encryption method works has not been revealed, and several elements of its operation remain to be understood.
Encrypted messaging has been around for a long while, but this goes to show that Google is putting the old technique to good use in its artificial intelligence systems.
The research paper highlights how Google Brain researchers aimed to demonstrate that neural networks can learn to perform forms of encryption and decryption.
The search engine giant recently acquired API.AI to develop machine learning technologies that enable natural language interfaces.
In a blog post, Scott Huffman, Google's vice president of engineering, wrote: "API.AI offers one of the leading conversational user interface platforms and they'll help Google empower developers to continue building great natural language interfaces."
Developers use API.AI's tools to build their own machine learning-driven conversational interfaces for products including chatbots, connected cars, smart home devices, mobile applications, wearables, robots and more.
Given the purpose those tools serve, it is possible that API.AI's technology contributed to the Google Brain team's experiment with AI encryption methods.
As the research paper notes, encryption serves security and privacy, and since only the intended AI has access to understanding another AI's message, a strong layer of security is embedded that could help guard against hacking.