“We look forward to continuing the great work by Lobe in putting AI development into the hands of non-engineers and non-experts”
Microsoft has acquired San Francisco-based startup Lobe for an undisclosed sum, the company announced today.
Lobe, founded in 2015 by Mike Matas, Adam Menges, and Markus Beissinger, lets users build, train, and ship custom deep learning models through a simple visual interface built on top of the deep learning frameworks TensorFlow and Keras.
The company had released a beta of its AI app-building tool only in May of this year. The acquisition comes amid a major push by companies to make machine learning capabilities more accessible.
In a blog published late Thursday, Microsoft CTO Kevin Scott said: “We’re only just beginning to tap into the full potential AI can provide. This in large part is because AI development and building deep learning models are slow and complex processes even for experienced data scientists and developers. To date, many people have been at a disadvantage when it comes to accessing AI, and we’re committed to changing that.”
Microsoft Lobe Buyout Comes Amid AI Acquisition Spree
He added: “Over the last few months, we’ve made multiple investments in companies to further this goal. The acquisition of Semantic Machines in May brought in a revolutionary new approach to conversational AI, and the acquisition of Bonsai in July will help reduce the barriers to AI development through the Bonsai team’s unique work combining machine teaching, reinforcement learning and simulation. These are just two recent examples of investments we have made to help us accelerate the current state of AI development.”
In one recent application, architect Kyle Steinfeld used Lobe to examine 3D models of houses in different styles, along with measurements and metadata, and trained it to identify each style of architecture. His aim was to enhance future CAD software to better assist architects during design.
In another application, Lobe was given accelerometer data from a phone as a person performed various gestures, such as flicking up or down, and learned to interpret the movements. This was built so that apps and games could respond to a person's movement, creating a more immersive experience.