The EU’s AI ethics guidelines require trustworthy data


Last week, the European Union published a new set of AI Ethics Guidelines. These guidelines are very interesting, and what they have to say about the ethical relationship and balance between artificial intelligence (AI), data, and humanity is particularly thought-provoking. Here are a few things that stood out to me from the guidelines, including how they pertain to data management.

Building Trustworthy AI

The new Ethics Guidelines for Trustworthy AI are part of a much larger focus on AI within the EU. In April 2018, the European Commission published Artificial Intelligence for Europe, which sets out a desire to develop a distinctively European approach to AI, including an appropriate ethical and legal framework.

Within that overall initiative, the EU created a High-Level Expert Group on Artificial Intelligence in June 2018, composed of 52 experts drawn from a wide array of backgrounds. It is this group that authored the new AI Ethics Guidelines. The group presented a draft of the guidelines in December 2018, and the consultation on it ended in February 2019, with more than 500 comments received.

As part of the distinctively European approach to AI in the final document, the EU anchors its ethical and legal framework for AI in the Charter of Fundamental Rights of the European Union. The AI Ethics Guidelines talk about the need for AI systems to be “human-centric, resting on a commitment to their use in the service of humanity and the common good, with the goal of improving human welfare and freedom.”

Flowing from that is the concept that “trustworthiness is a prerequisite for people and societies to develop, deploy, and use AI systems. Without AI systems – and the human beings behind them – being demonstrably worthy of trust, unwanted consequences may ensue and their uptake may be hindered, preventing the realization of the potentially vast social and economic benefits that they can bring. To help Europe realize those benefits, our vision is to ensure and scale Trustworthy AI.” According to the guidelines, Trustworthy AI has three components:

  1. It should be lawful, complying with all applicable laws and regulations
  2. It should be ethical, ensuring adherence to ethical principles and values
  3. It should be robust, both from a technical and social perspective

The document then goes on to discuss how elements of the Charter of Fundamental Rights can be viewed within the context of AI, drawing broadly on many different elements of the Charter to develop its ethical framework. By comparison, the GDPR refers only to Article 8. For the AI Guidelines, the outcome is four key ethical principles:

  1. Respect for human autonomy – AI systems should not “unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans.”
  2. Prevention of harm – AI systems should not “cause or exacerbate harm or otherwise adversely affect human beings.”
  3. Fairness – The development, deployment, and use of AI systems must be fair.
  4. Explicability – Processes need to be transparent, the capabilities and purpose of AI systems openly communicated, and decisions explainable to those directly and indirectly affected.

The group views these as “ethical imperatives,” in that those who work with AI “should always strive to adhere to them.”

Requiring privacy and data governance

In the second section of the document, the EU’s team of experts translates these principles into concrete requirements for AI systems. Explicitly, this includes making privacy and data governance a specific requirement of Trustworthy AI. The group has identified three key issues here:

  • Privacy and data protection – A line in the sand is the statement that “AI systems must guarantee privacy and data protection throughout a system’s entire lifecycle.” Both the General Data Protection Regulation (GDPR) and the forthcoming ePrivacy Regulation are cited in a footnote.
  • Quality and integrity of data – The group asks that data be free of biases, inaccuracies, errors, and mistakes, and that the integrity of the data be ensured (a minimal sketch of such checks follows this list).
  • Access to data – Protocols governing who has access to the data should be in place.
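
To make the quality-and-integrity requirement concrete, here is a minimal sketch of the kind of automated checks a data team might run before a dataset feeds an AI system. It assumes a tabular dataset handled with pandas; the sample columns, values, and the idea of fingerprinting the dataset for later integrity verification are illustrative assumptions, not anything the guidelines prescribe.

```python
import hashlib

import pandas as pd


def quality_report(df: pd.DataFrame) -> dict:
    """Summarize basic quality and integrity signals for a dataset."""
    return {
        # Inaccuracies and errors: count missing values per column.
        "missing_values": df.isna().sum().to_dict(),
        # Duplicate records can silently skew model training.
        "duplicate_rows": int(df.duplicated().sum()),
        # A content fingerprint so later audits can verify the data
        # has not been altered since this report was produced.
        "sha256": hashlib.sha256(
            pd.util.hash_pandas_object(df).values.tobytes()
        ).hexdigest(),
    }


# Hypothetical sample data standing in for a real training set.
df = pd.DataFrame({"age": [34, None, 29], "income": [52000, 61000, 61000]})
print(quality_report(df))
```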

Data is also addressed implicitly across the whole second section of the document. Examples include:

  • Transparency – Data sets and the processes that yield the AI’s decisions, including those of data gathering and data labeling, should be documented to allow for traceability and transparency (see the audit-record sketch after this list).
  • Auditability – The algorithms, data, and design processes should all be open to being assessed. AI systems should be independently audited.
  • Technical robustness and safety – Sufficient security should be in place to prevent data from being damaged, corrupted, or stolen, and the AI should be able to make correct decisions based on the data it uses.
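
To illustrate what the transparency and auditability items could look like in practice, here is a minimal sketch of an append-only audit record that ties each AI decision back to a model version, a training dataset, and a labeling process. Every field name and value below is hypothetical; the guidelines call for traceability but do not prescribe any particular format.

```python
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    model_version: str     # which model produced the decision
    dataset_id: str        # identifier of the training data behind the model
    labeling_process: str  # how that training data was labeled
    decision: str          # the outcome communicated to the affected person
    timestamp: str


def log_decision(record: DecisionRecord, path: str = "audit_log.jsonl") -> None:
    """Append one decision record to an append-only audit log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


# All identifiers below are hypothetical.
log_decision(DecisionRecord(
    model_version="credit-scoring-2.3",
    dataset_id="loan-applications-2018-v7",
    labeling_process="manual review by two annotators",
    decision="approved",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```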

The document also includes a pilot version of a Trustworthy AI Assessment List, with questions that business teams, as well as risk, compliance, and audit functions, can ask about AI systems.

In short, the EU’s Ethics Guidelines for Trustworthy AI are just the latest in a long line of frameworks that put data management at the heart of their approach. The message is clear: organizations that want to be perceived as trustworthy and ethical when it comes to AI need to ensure that they have the data policies, processes, resources, and culture in place to deliver on that objective.

 
