“Not a luxury feature”
The European Union this morning published seven guidelines for the development and implementation of AI ethics.
They form part of the EU’s 2018 AI strategy, which targets AI investment of €20 billion annually over the next decade.
That push comes amid growing concern that Europe is being left behind by China’s huge drive on neural networks, and that AI is being used by Beijing to build a powerful agent of social control.
Could Ethical AI be a “Competitive Advantage” for Europe?
Andrus Ansip, the European Commission’s Vice-President for the Digital Single Market, said: “The ethical dimension of AI is not a luxury feature or an add-on.”
He added: “Ethical AI… can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust.”
The EU is now launching a pilot phase with industry and academia this summer to ensure that its ethical guidelines can be implemented in practice.
“Europe needs to define what normative vision of an AI-immersed future it wants to realise, and understand which notion of AI should be studied, developed, deployed and used in Europe to achieve this vision,” the report notes.
A supplementary PDF notes that regulation can be guided in part by human rights law; e.g. “in an AI context, freedom of the individual for instance requires mitigation of (in)direct illegitimate coercion, threats to mental autonomy and mental health, unjustified surveillance, deception and unfair manipulation.”
In brief, the guidelines comprise the following seven principles:
AI Ethics: Europe’s New Guidelines
1) Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
2) Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
3) Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
4) Transparency: The traceability of AI systems should be ensured.
5) Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
6) Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
7) Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.
IBM is one of the companies that played an integral role in shaping the guidelines. IBM’s global AI Ethics Leader, Francesca Rossi, led that engagement.
Martin Jetter, Senior Vice President and Chairman of IBM Europe, said in an emailed statement: “The EU’s new Ethics Guidelines for Trustworthy AI set a global standard for efforts to advance AI that is ethical and responsible, and IBM is pleased to endorse them… we believe the thoughtful approach to creating them provides a strong example that other countries and regions should follow.”
“We look forward to contributing actively to their implementation.”
The report is just the latest effort to set a benchmark for baking ethics into AI design: late last month the IEEE, the world’s largest technical professional organisation, set out its own eight general principles for the design of ethical AI.