In-house counsel should pay attention to a significant step in European efforts to regulate Artificial Intelligence:
- On June 14, 2023, the EU Parliament adopted a "negotiating position," its draft version of an EU Artificial Intelligence regulation (the “AI Act”), with amendments to the 2021 version proposed by the European Commission.
- This brings the European Union closer to adopting the regulation. Discussions will now take place between the EU Parliament, the EU Council of Ministers, and the European Commission regarding the content of the Act. The final version is expected by the end of the year.
- In-house counsel must help their organization understand this development and navigate the rapidly evolving AI landscape, amid regional pushes to be at the forefront of AI innovation and regulatory efforts to establish safeguards.
Learn below about key features of the draft AI Act and how in-house counsel can help their organization prepare.
Definition of AI
Under the amended definition adopted by the EU Parliament, an “artificial intelligence system” means a “machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments.”
Different rules for different risk levels
The draft AI Act introduces rules for AI systems based on their risk level: unacceptable risk (prohibited by the AI Act), high risk (tightly regulated), limited risk (fewer requirements), and minimal or no risk. Below are key features under the EU Parliament’s negotiating position:
1. "Unacceptable risk" AI Systems
Systems considered a threat to people are deemed an “unacceptable risk” and will be banned. This would include systems such as:
- Systems that deploy subliminal, manipulative, or deceptive techniques that distort people’s behavior in a way that is likely to cause them or others significant harm;
- Systems that exploit people’s vulnerabilities with the effect of distorting their behavior in a way that is likely to cause them or others significant harm;
- Biometric categorization systems that categorize natural persons according to sensitive or protected attributes or characteristics (or based on the inference of those attributes or characteristics);
- Systems for social scoring;
- Real-time remote biometric identification systems in publicly accessible spaces;
- Systems that predict the risk that a person will commit a criminal or administrative offense;
- Systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.
2. "High risk” AI systems
This category includes AI systems that are intended to be used as a safety component of a product (or that are themselves a product) covered by the EU product safety legislation listed in Annex II of the regulation, where the AI system or the product must undergo a third-party conformity assessment under that legislation before being placed on the market or put into service.
The category also includes AI systems in eight areas listed in Annex III of the regulation, if the system poses a significant risk of harm to the health and safety or the fundamental rights of natural persons and, where the AI system is used as a safety component of critical infrastructure, to the environment. Such systems will need to be registered in an EU database before being placed on the market or put into service (art. 51). The eight areas listed in Annex III are:
- biometric identification and categorization of natural persons;
- management and operation of critical infrastructure;
- education and vocational training;
- employment, workers management and access to self-employment;
- access to and enjoyment of essential private services and public services and benefits;
- law enforcement;
- migration, asylum and border control management; and
- administration of justice and democratic processes (assisting judicial authority in researching and interpreting facts and applying the law to specific facts).
High-risk systems will be subject to various requirements, such as:
- establishing, implementing, maintaining, and documenting a risk management system (art. 9);
- meeting certain data governance criteria (art. 10);
- preparing technical documentation (art. 11);
- ensuring record-keeping/event logging capabilities (art. 12);
- complying with transparency requirements and providing information to users (art. 13);
- ensuring adequate human oversight (art. 14); and
- complying with accuracy, robustness, and cybersecurity requirements (art. 15).
3. “Limited risk” AI systems and generative AI
These systems will be subject to transparency requirements, such as informing users that they are interacting with an AI system. Under the EU Parliament’s position, generative AI systems (such as ChatGPT) would additionally have to disclose that content was AI-generated, be designed to prevent the generation of illegal content, and publish summaries of copyrighted data used for training.
Who will be subject to the regulation?
Whether or not your organization is based in the EU, it may fall within the scope of the AI Act. Under its Article 2 as amended by the EU Parliament, the proposed regulation would apply to:
- Providers who place or put into service AI systems in the EU, regardless of the provider’s location;
- Deployers of AI systems located or established within the EU; and
- Providers and deployers located or established outside the EU, where either Member State law applies by virtue of public international law or where the output produced by the system is intended to be used in the EU.
Substantial fines and rights to lodge complaints
The EU AI Act would impose substantial administrative penalties for non-compliance with the Act’s requirements. Under the draft wording adopted by the EU Parliament (see the illustrative calculation after this list):
- For non-compliance with prohibitions of “unacceptable risk” AI systems, the offender would be subject to administrative fines of up to 40 million Euros or, if the offender is a company, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
- For non-compliance with the Act’s requirements pertaining to data and data governance, or to transparency and the provision of information to users, the offender would be subject to administrative fines of up to 20 million Euros or, if the offender is a company, up to 4% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
- For non-compliance with other requirements of the Act, the offender would be subject to administrative fines of up to 10 million Euros or, if the offender is a company, up to 2% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
- For the supply of incorrect, incomplete, or misleading information to notified bodies and national competent authorities in reply to a request, the offender would be subject to administrative fines of up to 5 million Euros or, if the offender is a company, up to 1% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
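To make the “whichever is higher” structure concrete, below is a minimal sketch in Python that computes the maximum possible fine for each tier described above. The tier amounts and percentages are taken from the Parliament’s draft as summarized in this article; the company turnover figure is purely hypothetical, and the final Act may set different numbers.

```python
# Illustrative sketch only: maximum administrative fines under the EU
# Parliament's draft of the AI Act, using the "fixed amount or percentage of
# worldwide annual turnover, whichever is higher" structure described above.

FINE_TIERS = {
    # violation category: (fixed cap in EUR, share of worldwide annual turnover)
    "prohibited (unacceptable-risk) AI system": (40_000_000, 0.07),
    "data governance or transparency requirements": (20_000_000, 0.04),
    "other requirements of the Act": (10_000_000, 0.02),
    "incorrect or misleading information to authorities": (5_000_000, 0.01),
}

def max_fine(violation: str, worldwide_annual_turnover_eur: float) -> float:
    """Return the maximum fine for a company: the higher of the fixed cap
    and the percentage of total worldwide annual turnover."""
    fixed_cap, turnover_share = FINE_TIERS[violation]
    return max(fixed_cap, turnover_share * worldwide_annual_turnover_eur)

# Hypothetical company with EUR 2 billion in worldwide annual turnover.
turnover = 2_000_000_000
for violation in FINE_TIERS:
    print(f"{violation}: up to EUR {max_fine(violation, turnover):,.0f}")
```

For a hypothetical company with EUR 2 billion in annual turnover, the percentage-based cap exceeds the fixed amount in every tier (for example, up to EUR 140 million for a prohibited-system violation), which is why the turnover-based figures tend to drive exposure for large organizations.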
The EU Parliament’s version also introduces a right for natural persons or groups of natural persons to lodge complaints with a national supervisory authority.
How can in-house counsel help their business prepare?
- Learn more about the requirements of the EU AI Act as they develop.
- Monitor the discussions among the EU institutions (Parliament, Council, and Commission) as the Act moves toward finalization.
- Review how your organization uses or plans to use AI systems.
- Map out your organization’s AI uses in light of the categories defined by the EU AI Act.
- If you haven’t started yet, develop internal guidelines and corporate policies on the use of AI, for example regarding the incorporation of AI systems in the company’s products, the use of AI in internal processes such as recruitment and Human Resources decisions, or employees’ and vendors’ potential use of AI to create content.
- Consider the ethical implications of how your organization uses AI tools, and what safeguards may be needed to mitigate legal and reputational risks.
Check out a selection of resources
From the ACC Library:
- Does Chatting with ChatGPT Unleash Trade Secret or Invention Disclosure Dilemmas? by Seyfarth Shaw (May 2023)
- Why We Should Care About Artificial Intelligence Ethics Frameworks, by Sarah Wedgwood, ACC Docket (August 1, 2022)
- Using AI in HR Decisions: Tips for In-house Counsel (March 2023)
- Find more AI Resources in the ACC Library
External resources:
- Proposal of the European Commission for an Artificial Intelligence Regulation (europa.eu) (April 24, 2021)
- MEPs ready to negotiate first-ever rules for safe and transparent AI, Press Release, European Parliament (europa.eu) (June 14, 2023)
- European Parliament Agrees on Position on the AI Act, Privacy & Information Security Law Blog (huntonprivacyblog.com), Hunton Andrews Kurth LLP (June 15, 2023)
- European Parliament Adopts Negotiating Position on the AI Act, by Kirk J. Nahra, Dr. Martin Braun, Itsiq Benizri, Shannon Togawa Mercer, Ali A. Jessani, of Wilmer Cutler Pickering Hale and Dorr LLP (June 15, 2023)
- EU Paves the Way for U.S. in the Regulation of A.I., by Chanley T. Howell, Kendall Spencer, of Foley & Lardner LLP (June 8, 2023)
Connect with in-house peers
Join the ACC IT Privacy and eCommerce Network (ACC members only)