The EU AI Act is here. Are you ready?


We’ve now entered a new era of AI: not some new hyper-powerful model or use case, but the enforcement of the EU AI Act. Organizations (and countries) around the world have been talking about this for years, and now it is a reality. Organizations will need to comply with the Act or, as we’ve seen in the past with laws like GDPR, face stiff penalties.

Why was the EU AI Act created in the first place?

AI can be a powerful tool, but the citizens of the EU need to be able to trust that AI is being used in their best interests, which is why the AI Act was developed. AI, like many other technologies, carries inherent risks, and those risks should be mitigated as much as possible; in some cases, where the risk is simply too high, specific AI systems are banned altogether. Bias, discrimination, misinformation, data privacy, confidentiality, data security and intellectual property rights are all real concerns when AI is not properly deployed and governed.

Citizens are not the only ones who face risks with AI. The companies, organizations and governments that build and deploy AI do as well. Catastrophic monetary, reputational, security and compliance risks are all possibilities with poorly governed AI. And let us not forget that while the EU AI Act is new and just going into effect, other data laws, specifically GDPR, remain very relevant, because AI is fed by data. In theory, an organization could be out of compliance with both the EU AI Act and GDPR at the same time, facing combined monetary penalties.

Who does the EU AI Act apply to?

To understand exactly who the EU AI Act applies to, read Chapter 1, Article 2. But the short answer is that the Act applies to companies and organizations far beyond those located in the EU. If you are creating or deploying AI models that citizens of the EU use, the EU AI Act applies to you. Are you a US company whose AI-generated output EU citizens rely on? You’re on the hook. Are you an automobile manufacturer based in Japan deploying AI systems in vehicles sold to citizens of the EU? You’re on the hook. Just as GDPR applies to the vast majority of enterprises, so does the EU AI Act.

What does the AI Act cover? 

The Act generally focuses on:

  • Prohibiting certain AI systems that pose too high a risk to society
  • Identifying a large number of material requirements, including the creation of formal assessments in some cases, plus detailed documentation, reporting and monitoring of both specified high-risk systems and general-purpose AI systems
  • Placing disclosure and transparency requirements on the deployment of certain AI systems where individuals may not otherwise understand they are interacting with AI

The EU AI Act is a dense document, and your organization’s legal team should help you and other AI stakeholders interpret it to ensure you are in compliance. The full text of the EU AI Act can be found here.

What are general-purpose AI (GPAI) models?

General-purpose AI is another term that you may not be familiar with (yet), but you’ve certainly come across and used these models. A GPAI is simply an AI model that can perform a variety of tasks, versus a model built for one specific purpose or task. Great examples of GPAIs are large language models (LLMs), like OpenAI’s GPT-3 and GPT-4. You can ask them general questions, like “What is AI governance?” or “Write me a three-thousand-word paper on the impact of barbed wire on the American West.” They can perform a variety of tasks just like a human would and, just like a human, can also be error-prone and provide inaccurate information.

Thinking about creating a virtual assistant for your service or product? It could fall under the definition of a GPAI. Further, it could fall under the definition of a GPAI with systemic risk, subject to significant scrutiny under the Act: “If the AI model has high impact capabilities, determined by technical tools and benchmarks, or if it has similar capabilities or impact as decided by the Commission, it is considered to have systemic risk. If the AI model uses a large amount of computation for its training, it is assumed to have high impact capabilities.” Read more here. Again, please consult with your legal team about your specific AI use case and model to see whether it falls under a heavily regulated category and what obligations you may be under if you deploy these types of models.
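To give a sense of scale for that compute presumption: Article 51 of the Act sets the threshold at 10^25 floating point operations of cumulative training compute. Below is a minimal back-of-the-envelope sketch of how you might sanity-check a model against it. The 6 × parameters × tokens estimate is a common industry heuristic, not part of the Act, and the model sizes in the examples are purely hypothetical.

```python
# Back-of-the-envelope check against the Act's systemic-risk compute presumption.
# The 10**25 FLOP threshold comes from Article 51 of the EU AI Act; the
# 6 * parameters * tokens training-compute estimate is a common industry
# heuristic, NOT part of the Act, and the example model sizes are hypothetical.

SYSTEMIC_RISK_FLOPS = 10**25  # Article 51 presumption threshold

def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * parameters * training_tokens

def presumed_high_impact(parameters: float, training_tokens: float) -> bool:
    """True if estimated training compute meets or exceeds the threshold."""
    return estimate_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_FLOPS

# Hypothetical examples:
print(presumed_high_impact(7e9, 2e12))      # ~8.4e22 FLOPs -> False
print(presumed_high_impact(1.8e12, 15e12))  # ~1.6e26 FLOPs -> True
```

Keep in mind that the compute figure is only a presumption: as the quoted passage notes, the Commission can also designate a model as having systemic risk based on its capabilities or impact.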

Helping keep your organization out of trouble: AI literacy 

Not everyone in your organization needs to be an AI expert. However, the EU AI Act does require that, as of February 2025, “companies that create and use AI systems must make sure their employees and anyone else who operates or uses these systems on their behalf are well-educated about AI.”

While the Act doesn’t define what “well-educated” means, here are a few best practices that will help promote AI literacy within your organization.

  1. Communicate often about AI: AI projects, both internal and externally available, are likely happening across your organization. Invite users from across the organization to learning and training events. Gather ideas from stakeholders about AI projects that would make them more productive or create new opportunities for customers. Make AI as much a part of the conversation as topline metrics and KPIs.
  2. Continuously document AI use cases: All AI, regardless of scale, should be well documented, with model cards that capture the key information about each model (see the sketch after this list). Communicate about new models being created and ensure any and all information about AI within your organization is easy to access and understand.
  3. Collaborate: Data scientists aren’t the only ones who should be involved in AI. Stakeholders from across the organization can provide different perspectives and innovative ideas on AI use cases, as well as help decide whether or not an AI use case should move forward.
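To make the documentation practice in item 2 concrete, here is a minimal sketch of what a machine-readable model card might look like. The fields and example values are illustrative assumptions, not a prescribed schema; adapt them to your own documentation standards and governance tooling.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal model card. Fields are illustrative, not a prescribed schema."""
    name: str
    version: str
    purpose: str                         # what the model is for
    owner: str                           # accountable team or person
    training_data: str                   # description or lineage reference
    known_limitations: list[str] = field(default_factory=list)
    risk_category: str = "unclassified"  # e.g. per your EU AI Act assessment

# Hypothetical example entry:
card = ModelCard(
    name="invoice-classifier",
    version="1.2.0",
    purpose="Route incoming invoices to the correct approval queue",
    owner="finance-data-team",
    training_data="2022-2024 internal invoices, PII removed",
    known_limitations=["Underperforms on non-English invoices"],
    risk_category="limited risk",
)
print(card.name, card.risk_category)
```

Keeping cards in a structured form like this, rather than in free-text documents, makes them easy to validate, search and surface in a catalog.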

The next step: Staying in compliance

Hopefully, you’ve already taken your first steps toward compliance with the EU AI Act. If not, don’t worry. Now is the time to take the appropriate actions to ensure your organization doesn’t become the first headline for non-compliance.

Collibra AI Governance, along with the rest of the Collibra platform, has the critical capabilities required to help you get compliant and stay there. Collibra AI Governance helps organizations catalog, assess and monitor any AI use case to improve AI model performance, reduce data risk and demonstrate compliance with AI laws and regulations.

You can learn more about Collibra AI Governance here.
