At the beginning of every year, the world’s press descend on the small mountain town of Davos in Switzerland. They’re there, of course, to listen in on the conversations of the world’s elite in attendance at the annual gathering of the World Economic Forum.
More than 3,000 delegates spend the week moving between talks, workshops, dinners and networking sessions, discussing the pressing issues of the day. Climate change, diversity, and blockchain all received plenty of airtime this year, but one topic in particular was an agenda mainstay throughout: artificial intelligence.
This year’s conference seemed to present something of a tipping point for the technology. Both the UK’s Theresa May and France’s Emmanuel Macron announced new funding for innovation within, and ethical regulation of, AI technologies. Elsewhere at the event, Alibaba founder Jack Ma suggested that AI – and the disruption it will bring to the job market – could start a third world war.
But the race to the headlines was arguably won by Google’s CEO, Sundar Pichai, who declared that AI’s impact will be comparable to humanity’s harnessing of fire and electricity. Given the bearing that Google – and indeed many of the products that Pichai himself has helped develop – has had on society over the past fifteen years, these words, despite their bluster, come with an ample dose of authority. And he does have a point.
While I won’t commit myself to a comparison as outlandish as Pichai’s, I believe strongly that AI will have a huge influence on the future of society (and not just because its intangible form makes for an invitingly bombastic headline). In fact, I’d argue that it’s already having an impact – being used, as it is, for everything from personalised product recommendations to cancer detection. The difference, for now, is how that impact is being characterised.
That AI was on so many Davos attendees’ lips is evidence of at least one thing: everyone – companies and whole countries alike – is vying to come out as the pack leader when it comes to this core component of the so-called Fourth Industrial Revolution.
The cynic in me would suggest that this is driven at least in part by a desire to take a share in the AI money pot. PwC predicts that the technology will contribute as much as $15.7 trillion – that’s trillion – to the global economy by 2030, and Deloitte has reported that by 2020 almost 9 in 10 businesses will be making their own AI investments. And when you add in the various governmental finance offerings – including UK Chancellor Philip Hammond’s £75m fund for AI start-ups and PhD students – things start to look even more attractive.
The technologist in me, meanwhile, can see beyond the bank notes. And, if anything, enjoys a more attractive view.
Once you accept the fact that AI is here to stay, you can begin to consider how it should work for us. As with any powerful new technology, it’s important to recognise (and mitigate) the risks that will inevitably emerge. But to focus only on these risks is short-sighted. To take Sundar Pichai’s analogy as an example: if we’d been attentive only to the bite of an electric shock or the burn of an uncontrolled flame, then we might never have learned how to light a bulb or boil a kettle. Understanding the negative side-effects of these elements is key to safely harnessing their power – but we’ll only reap the benefits of that power by taking control and aiming it in a positive direction.
And steadying that aim, it’s clear to me, requires a human hand. That is to say that we’ll only see the full potential of AI if we employ it in combination with an existing human workforce. Far from being locked in irreconcilable competition, the two should be viewed as complementary. Deloitte’s Digital Disruption Index reported that a mere 8% of business leaders believe AI solutions will present a like-for-like replacement for human intelligence. Its real value comes when the technology is used to improve the human decision-making process, making it faster and more accurate.
When framed in this way, we can start to develop AI solutions with the foremost purpose of augmenting – and thereby expanding – a person’s role in their job. Why have an employee spend hours trawling through and categorising datasets when AI can take minutes over the same task and, as a result, free up those hours for the person to spend on complex analysis instead? It’s a basic example, but apply the principle to pretty much any sector and the benefits become clear: your retail staff, freed from manual stock taking, now have more time to spend making meaningful shopper interactions and securing invaluable return custom; the brain surgeon can operate sooner, thanks to a faster diagnosis based on years of accumulated, anonymised patient data; and so on.
The question, then, surely mustn’t be ‘how can a machine do this job for me?’ but rather ‘how can this machine help me do this job better?’
We’ll have to wait and see who has the best answers to that in Switzerland next year.