The lion uses a long short-term memory (LSTM) recurrent neural network (RNN) to predict linear sequences. It was inspired by Google’s efforts to train deep learning models to write
Visitors to Trafalgar Square may have noticed an extra lion sculpture at the foot of Nelson's Column this week: one loaded with a neural network, painted bright red, and roaring algorithm-inspired poetry. The party partly responsible for the latest member of the pack? Google.
The tech giant’s interactive Arts & Culture platform has partnered with London Design Project and artist Es Devlin for the public installation in the square. Dubbed “Please Feed the Lions”, the machine learning-powered lion uses public submissions and AI to create “a collective work of art”.
Individuals can input a single word through a Pixelbook in the square, Google said in a blog post. The installation uses a neural network trained on 25 million words of nineteenth-century poetry to generate a line of verse, lit up inside the lion's mouth.
A collective poem using all submissions is also being projected at night onto both the lion and Nelson’s Column, as well as being published online. Participants outside of London are also invited to submit words for the installation on the London Design Festival site.
Computer Business Review visited the site on Monday evening to "feed" the lion. After patiently waiting for a fellow visitor to find a variation of "Brexit" that the lion would digest – including Brexnational, Brexcrash, and numerous others – the sculpture produced the following line of poetry from our particular choice of proper noun:
“My Swindon steers the sun and sea
And the stream shakes the clouds and shakes the sky”
The project runs from 18-23 September.
Recurrent Neural Network
Creative technologist for Google, Ross Goodwin, worked on the London Design Project installation with artist and designer Es Devlin.
He said the long short-term memory recurrent neural network (RNN) predicts linear sequences; in this case letters and text characters. The company worked on the AI as far back as 2016 in partnership with Stanford University and University of Massachusetts.
“The algorithm is essentially predicting the next text character over and over again, and always taking into account what came before to generate text,” Goodwin told Google Arts & Culture.
“When I started training deep learning models to write, it struck me that I could take prose and poetry, and even non-fiction and mash them together in interesting ways. The material that would come out, regardless of the combination of things used, would always feel very poetic or have those characteristics.”
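Goodwin's description of "predicting the next text character over and over again, always taking into account what came before" can be illustrated with a minimal sketch. This is not Google's actual model: it is a single LSTM cell with random, untrained weights, a tiny hypothetical vocabulary, and a simple sampling loop, so its output is gibberish rather than poetry. A real system would train the weights on a corpus such as the 25 million words of nineteenth-century verse mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = list("abcdefghijklmnopqrstuvwxyz ")  # illustrative character set
V, H = len(vocab), 32                        # vocabulary size, hidden size

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One weight matrix and bias per LSTM gate:
# input (i), forget (f), output (o), and candidate cell state (c).
W = {g: rng.normal(0, 0.1, (H, H + V)) for g in "ifoc"}
b = {g: np.zeros(H) for g in "ifoc"}
W_out = rng.normal(0, 0.1, (V, H))  # hidden state -> next-character logits

def lstm_step(x, h, c):
    """Standard LSTM cell update for one input character."""
    z = np.concatenate([h, x])          # previous hidden state + current input
    i = sigmoid(W["i"] @ z + b["i"])    # input gate
    f = sigmoid(W["f"] @ z + b["f"])    # forget gate
    o = sigmoid(W["o"] @ z + b["o"])    # output gate
    g = np.tanh(W["c"] @ z + b["c"])    # candidate cell state
    c = f * c + i * g                   # memory carries context forward
    h = o * np.tanh(c)
    return h, c

def generate(seed_char, length):
    """Predict the next character over and over, feeding each
    prediction back in as context for the following step."""
    h, c = np.zeros(H), np.zeros(H)
    text = seed_char
    for _ in range(length):
        x = np.zeros(V)
        x[vocab.index(text[-1])] = 1.0              # one-hot encode last char
        h, c = lstm_step(x, h, c)
        logits = W_out @ h
        probs = np.exp(logits) / np.exp(logits).sum()  # softmax over vocab
        text += rng.choice(vocab, p=probs)             # sample next character
    return text

print(generate("t", 20))
```

Because the cell state carries a summary of everything generated so far, each sampled character is conditioned on the whole sequence before it, which is what lets a trained model of this kind produce lines that hang together as verse.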
London Design Project: Putting the “Art” in Artificial Intelligence
Technologies such as machine learning and neural networks have already been used in an artistic context. Tom White at the University of Wellington made artwork resembling objects seen through a machine's "algorithmic gaze".
White developed a series of prints called “The Treachery of ImageNet”, which appear as random shapes to humans but specific objects to an AI.
Earlier this year, meanwhile, robots competed for prizes at the 2018 Robotart competition by creating works of fine art, as reported by Futurism. The winning team reportedly used a machine learning system to generate "outlandish" portraits and landscapes.
Kyoto University researchers previously partnered with Microsoft on a similar poetry-writing project, using the tech company's chatbot, XiaoIce.