CBR looks at the ‘Artificial intelligence: an overview for policy-makers’ report from the Government Office for Science.
The report makes the key point that for AI to succeed and be used productively, it must be trusted. It argues that public dialogue is an essential approach to instilling trust in AI, pointing to methods already deployed to start the AI conversation with the public.
“Work is already underway to engage with the public and understand public attitudes to some of these issues, though further work will build on this. Ipsos MORI have conducted two public engagement pieces in this field – one on machine learning and another on data science more generally, conducted in partnership with government and Sciencewise.”
The report sets out a number of key issues which must be debated with the public, all with the aim of building trust in AI. The issues public debate needs to explore were listed as:
• how to treat different mistakes made through the use of artificial intelligence,
• how best to understand probabilistic decision-making, and
• the extent to which we should trust decisions made without artificial intelligence, or against the advice of artificial intelligence systems.
Trust, however, will rely on AI being able to demonstrate benefits to the public, alongside working safeguards. The report said:
“In the end, public trust will be maintained through demonstrating that the technology is beneficial and that safeguards work. This will require, at a minimum:
• Correctly identifying any harmful impacts of artificial intelligence.
• Formal structures and processes that enable citizen recourse to function as intended.
• Appropriate means of redress.
• Clear accountability.
• Clearly communicating the substantial benefits for society offered by artificial intelligence.”