EU AI Act: Six Insights into Its Global Implications
February 2024
By Axiom Law
The European Union’s Artificial Intelligence (AI) Act is the world’s first comprehensive legal framework governing the use of AI.
It’s meant to clarify the legal and ethical issues surrounding the use of generative AI, imposing limitations and safeguards to protect fundamental rights. It also aims to provide a legal framework for businesses and organizations using these tools, with provisions designed to encourage innovation among small and medium-sized organizations in the EU.
The legislation applies across all 27 member states, with global implications for every industry: it reaches companies that do business in the EU regardless of where they are based. It sets the stage for ongoing AI legislative efforts and could influence legal practices and considerations in the U.S.
In this month’s Higher Bar webinar, Axiom's own Daniel Hayter joined Dasein Privacy Managing Director Lucy McGrath, Axiom lawyer Stefania Quintaje, and Stealth Startup-AI Contract Intelligence General Counsel Jason Mark Anderman to discuss these takeaways and more.
Here are six major takeaways:
1. Engage with Business Units to Identify Risks
The legal statutes governing artificial intelligence are slowly falling into place, and this landmark piece of legislation from the European Parliament is only the beginning. Innovation always outpaces the legislative process, with new technologies coming online before businesses have legal roadmaps for using them responsibly.
This confusion can prevent some companies from experimenting with AI out of an abundance of caution, which could put them at a disadvantage. Other organizations may hold off on allowing legal departments and employees to use these tools until there are clear laws in place in their target market. Using generative AI without established legal protocols inevitably increases a corporation’s risk profile, but hesitance can also reduce innovation and efficiency as the company’s competitors pull ahead.
Legal experts recognize this dilemma, but the absence of legal clarity doesn’t warrant inaction. Companies should start assessing their potential risk exposure using the restrictions and guidelines outlined in the EU AI Act, even though equivalent legislation has yet to pass in the U.S.
“There are still some open questions about copyright and intellectual property, about what we can really implement inside the company without the risk of opening big challenges over what was previously owned by someone else, or of delivering something that is not completely free,” noted Quintaje. “We need to understand which types of data we can use, which things are still valid, and which processes are okay or need to be rediscussed by an internal panel inside your company, to understand what you can do to build a good foundation for the future.”
Regardless of a company’s policies on AI use, chances are someone within the organization is already using generative tools and general-purpose AI models like ChatGPT.
Identifying the potential risks keeps the company abreast of how its assets and reputation may be affected now that concrete regulations are on the books in one of the largest markets in the world. This risk analysis also gives the company more time to prepare a response once new legislation is passed: the legal team will understand how future laws will affect the company as they work their way through Congress and other legislatures.
The EU AI Act may not be the be-all and end-all of AI regulation, but it sets clear restrictions on the use of “high-risk AI systems” in specific industries and prohibits certain applications outright. Banned applications include:
- Biometric categorization systems that classify people based on sensitive characteristics, such as political, religious, or philosophical beliefs, sexual orientation, or race
- Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases
- Emotion recognition targeting employees in the workplace and students in educational institutions
- Social scoring based on behavioral or personal characteristics
- Manipulation of users designed to circumvent their free will
- Exploitation of people’s vulnerabilities related to age, disability, or social or economic status
- Law enforcement uses, such as real-time remote biometric identification in public spaces, that may interfere with people’s fundamental rights
Companies should start incorporating these provisions into their operations and continue to do so as new regulations come into force. Some uses, such as obvious violations, may warrant immediate legal action, while others can simply be noted for future deliberation. Some firms may want to categorize flagged instances by potential threat level, since some uses and applications will fall somewhere between outright approval and prohibition.
The size and focus of the company will determine the scope of the compliance assessment. Larger entities, and those operating in the higher-risk areas identified by the EU Artificial Intelligence Act, such as employment, credit scoring, or critical infrastructure, will need more extensive analysis and legal expertise.
2. Use Past Examples to Predict Future Legal Outcomes
Without a crystal ball, predicting the future of AI legislation can feel like a fool’s errand, but law firms and companies can look to the data privacy regulations that have come into effect over the last few years to get a sense of where things might be headed. The General Data Protection Regulation (GDPR) is perhaps the closest parallel.
Delegating adoption to individual EU member states could delay the rollout, with implementation happening in waves, as it did with the GDPR, which was adopted in 2016 but not enforced until 2018. Competing regulations and policies could also unearth unforeseen privacy issues in the years to come.
Buy-in from industry groups may also help predict the success of new regulations. Government-appointed committees of technology experts helped lay the groundwork for the GDPR’s success. We may see similar initiatives in the U.S. and EU regarding generative AI legislation as governments run up against their lack of expertise on the subject.
3. Put AI Regulation into Perspective as It Relates to Existing Laws
Several AI legal case studies from Europe show that the new EU AI Act will intersect with established legal precedents. Companies should begin assessing how these regulations could spill over into other areas of law. For example, generative AI could produce information about a person that violates the GDPR. These kinds of issues could expose firms to additional liabilities.
In one case study, an algorithm created risk profiles targeting specific groups, and the organization used its recommendations to penalize low-income households in specific ethnic groups on suspicion of fraud, resulting in further economic hardship for thousands of families. This use case would violate both the Act’s banned applications and established privacy laws.
AI tools could, in theory, be used to reveal legally protected financial and medical data. Firms should consider how the new regulations fit into the existing legal ecosystem, including the GDPR, the U.S. Health Insurance Portability and Accountability Act (HIPAA), and other privacy regulations.
McGrath notes how many responsibilities providers must consider and suggests they start by asking: “What is your role with AI, in the same way that you thought about it with GDPR or HIPAA or the California Consumer Privacy Act (CCPA), and how do you deploy it?” She explains that “this is going to build on and complement a lot of your current supply chain requirements. Look at where transparency is involved and how you deploy a chatbot on your website. Make sure that you’re telling people that they're not speaking to a human.”
Compiling the company’s risk profile under these considerations requires a multi-disciplinary approach, with legal teams from diverse backgrounds meeting to discuss the client’s potential vulnerabilities.
4. Anticipate Demand for Certification Requirements
To foster innovation and quality assurance, the new framework carves out regulatory “sandboxes” where small and medium-sized enterprises can develop and test artificial intelligence programs without undue pressure from the dominant suppliers.
The current market is divided into two groups: suppliers, like Microsoft and OpenAI, that license generative AI tools, and users, i.e., the corporations and businesses that deploy these algorithms. Lesser-known developers should, and likely will, be able to offer AI services that rival those of the major players, but users will need assurance that these products won’t subject them to additional risk.
We will likely see more dispute prevention and resolution (DPR) laws appear in various jurisdictions regarding the use of AI, and developers will adapt their products to these regulations to guarantee their customers’ compliance.
As with ISO 27001 for cloud infrastructure, Anderman believes the industry will “start seeing companies insisting on having shared certification requirements” to provide clarity for developers and users, spanning data governance, risk management, and transparency requirements.
However, obtaining an ISO certification is often expensive and time-consuming, which could put small and medium-sized enterprises at a disadvantage. This requirement may consolidate the number of software providers in the market, as happened with cloud infrastructure, now dominated by Amazon Web Services, Microsoft Azure, and Google Cloud. The leading players will give their customers, particularly startups, some much-needed comfort and peace of mind by building compliance into their services.
5. Plan for a Lack of Consensus
The EU AI regulations reach industries worldwide, but other jurisdictions will inevitably enact their own rules, creating a hodgepodge of different requirements. Even within the EU, it remains to be seen who will enforce and implement these requirements in the individual member states. Spain, for example, says it wants to establish its own national AI agency in addition to the European Commission’s AI Office, the central EU body designated to guide governing bodies across the 27 member states.
The U.S. has been particularly slow in passing federal AI regulations due to a lack of consensus in Congress, but 11 states have passed DPR-type laws thus far.
On the other hand, Chinese leaders have recently come out against comparable AI regulation, leaving fewer safeguards for anyone doing business there. Some worry AI software developers may flock to jurisdictions with fewer regulations to reduce the risk of violations. But that would put these players in a precarious position in today’s increasingly globalized economy, where marketing campaigns and SaaS tools transcend national borders.
Major AI software companies like OpenAI will likely cater to an international business clientele by complying with regulations across multiple jurisdictions. As two of the most lucrative markets in the world, the U.S. and EU will inevitably help shape the standards and requirements for these products.
6. Put the Right Legal Talent in Place
In-house legal teams may not have the right background to deal with the complexity of these issues. Most professionals focus on compliance with specific legal requirements, but AI cuts across many subjects and concerns. Companies should consider hiring outside legal counsel across multiple practice areas, including lawyers with backgrounds in AI and privacy compliance, to support their existing operations.
💡 Prepare for the EU AI Act and future regulations on Artificial Intelligence.