
Author: Joseph Hoffman

April 08, 2024

Earlier this month, the European Parliament passed the long-anticipated EU AI Act, an important step in global standards and enforcement of AI. We have all become increasingly familiar with how new laws, including those from the EU, can dramatically change how we conduct our business. Business leaders are naturally asking two important questions: What does the law do? What does it mean for me? To answer the first, in a word, the AI Act is all about ‘Risk!’ As with the GDPR, the AI Act is likely to act as a trailblazer statute, setting general baselines and expectations that many other jurisdictions may use in crafting their own legislation. To answer the second, businesses must be flexible and adaptive, drawing on privacy and technology professionals to find solutions and keep up with requirements. Let’s get into specifics.

SCOPE: WHAT AND WHO IS COVERED

The EU AI Act defines an AI system as:

“…machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

Practically speaking, this may cover a wide variety of ‘AI,’ from single-purpose, algorithm-based business programs to general-purpose AI models that can be applied to many functions.

As with the GDPR, the AI Act has extra-territorial scope: it will affect businesses and third-party vendors both within the EU and those outside the EU doing business there. In Data Privacy, we have all become familiar with terms such as Data Subject, Controller, and Processor. The AI Act introduces several important new terms that help us identify if, and how, a party is covered under the law (an illustrative code sketch follows the list): *

  • Provider: Party that develops an AI system and places it on the EU market under its own name
  • Importer: EU-based party that places a non-EU-sourced AI system on the EU market
  • Distributor: Party in the supply chain, other than the Provider or Importer, that makes an AI system available on the EU market
  • Deployer: Party that uses an AI system in the course of a professional activity

* These categories do not differentiate between free and paid development or use of AI
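
For readers who maintain AI inventories or build compliance tooling, it can help to model these roles explicitly. Here is a minimal Python sketch; the Party fields and the classify logic are hypothetical simplifications of the bullet definitions above, not legal tests.

    # Toy model of the AI Act's operator roles described above.
    # The fields and the ordering of checks are simplifications, not legal tests.
    from dataclasses import dataclass
    from enum import Enum, auto

    class Role(Enum):
        PROVIDER = auto()     # develops a system and places it on the market
        IMPORTER = auto()     # EU party placing a non-EU system on the EU market
        DISTRIBUTOR = auto()  # other supply-chain party making a system available
        DEPLOYER = auto()     # uses a system in professional activity

    @dataclass
    class Party:
        in_eu: bool
        developed_system: bool
        places_on_market: bool

    def classify(party: Party) -> Role:
        # Mirrors the bullet definitions, checked in order of specificity.
        if party.developed_system and party.places_on_market:
            return Role.PROVIDER
        if party.in_eu and party.places_on_market:
            return Role.IMPORTER
        if party.places_on_market:
            return Role.DISTRIBUTOR
        return Role.DEPLOYER

    # Example: an EU firm placing a US-developed system on the market is an Importer.
    print(classify(Party(in_eu=True, developed_system=False, places_on_market=True)))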

CATEGORIES OF RISK

The new law sorts AI systems into several risk categories, each carrying its own set of requirements (an illustrative code sketch follows the list).

  1. Unacceptable Risk: Systems that use social scoring, biometric categorization, real-time biometric identification, or manipulative techniques to impair decision-making; exploit vulnerable or protected classes; assess criminal propensity; or compile facial recognition databases using internet scraping or CCTV. These systems are prohibited.
  2. High Risk: The largest category, comprising most of the law’s text and many systems a business may use or interact with. This category covers countless use cases, for example: profiling or automated decision-making involving work performance, economic situation, health, personal preferences, behavior, location, or movement; filtering job applications; targeted job ads; or determining creditworthiness. These systems have extensive disclosure and assessment requirements, such as risk assessments and mitigation, logging reports, human oversight, compliance documentation, and more.
  3. Limited Risk: Systems with transparency risks like chatbots or deepfake apps. These systems are largely unregulated by the act, only requiring notice/consent to inform the user of the AI components.
  4. Minimal/No Risk: These systems, like AI-enabled games or spam filters, are unregulated by the act.
  5. General Purpose AI (GPAI): This special category covers systems designed for a wide variety of tasks, such as generative AI. These systems have extensive disclosure, assessment, risk-mitigation, and reporting requirements. These requirements increase if the GPAI presents ‘systemic risk,’ which could be viewed as noteworthy bias; in such cases, assessment and oversight of the AI’s design, training, and testing are required.
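
One way to internalize this taxonomy is to encode it. The Python sketch below is a toy lookup built from the article’s own examples; the mapping is purely illustrative, since classifying a real system requires legal analysis.

    # Toy lookup of risk tiers using examples named in the list above.
    # Purely illustrative; real classification requires legal analysis.
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited"
        HIGH = "extensive disclosure and assessment requirements"
        LIMITED = "notice/consent transparency only"
        MINIMAL = "unregulated"
        GPAI = "disclosure, assessment, risk mitigation, and reporting"

    EXAMPLE_USE_CASES = {
        "social scoring": RiskTier.UNACCEPTABLE,
        "filtering job applications": RiskTier.HIGH,
        "determining creditworthiness": RiskTier.HIGH,
        "chatbot": RiskTier.LIMITED,
        "spam filter": RiskTier.MINIMAL,
        "generative AI model": RiskTier.GPAI,
    }

    def headline_obligation(use_case: str) -> str:
        # Returns the headline obligation for a known example use case.
        return EXAMPLE_USE_CASES[use_case].value

    print(headline_obligation("filtering job applications"))
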
ASSESSMENTS

Depending on the risk level, a business may need to conduct assessments before deploying an AI system. In addition to Data Protection Impact Assessments (DPIAs), parties may need to conduct Fundamental Rights Impact Assessments (FRIAs) and Conformity Assessments. FRIAs are intended to assess whether EU data subjects’ fundamental rights (privacy, liberty, security, education, etc.) are protected, and to identify and mitigate risks to those rights. Before an AI system is placed on the market, it will require a Conformity Assessment; think of these as product safety tests. A business may want both internal confirmation testing and independent validation of its AI deployments to ensure the risks have been reduced, particularly once the EU AI Office begins vigorous oversight of High-Risk systems.
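
As a scoping aid, these assessments can be keyed to the risk tiers above. The following Python sketch is a hypothetical simplification for planning purposes, not a compliance determination; the tier names and groupings are assumptions drawn from this article.

    # Hypothetical mapping from risk tier to the assessments discussed above.
    # A scoping aid only; actual obligations depend on the specific system.
    ASSESSMENTS_BY_TIER = {
        "high": ["DPIA", "FRIA", "Conformity Assessment"],
        "gpai": ["disclosure", "risk assessment and mitigation", "reporting"],
        "limited": ["transparency notice/consent"],
        "minimal": [],
    }

    def required_assessments(tier: str) -> list[str]:
        # Unknown tiers return an empty list rather than raising.
        return ASSESSMENTS_BY_TIER.get(tier, [])

    print(required_assessments("high"))  # ['DPIA', 'FRIA', 'Conformity Assessment']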

TIMELINE

The Act is expected to go into effect early this Summer, starting an implementation countdown for each category. From approximately June 2024, businesses have 6 months to cease using all Unacceptable Risk AI. Within 12 months, all GPAI systems must meet the Act’s requirements. Finally, within 24 months, High Risk systems must comply.
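
To make the countdown concrete, the Python sketch below computes the milestone dates from an assumed entry-into-force date of June 1, 2024; that date is this article’s ‘early this Summer’ estimate, not an official one.

    # Milestone math from an ASSUMED entry-into-force date of June 1, 2024.
    from datetime import date

    def add_months(d: date, months: int) -> date:
        # Simple month arithmetic; safe here because the day is always the 1st.
        years, month_index = divmod(d.month - 1 + months, 12)
        return d.replace(year=d.year + years, month=month_index + 1)

    ENTRY_INTO_FORCE = date(2024, 6, 1)  # assumption, not an official date

    deadlines = {
        "Unacceptable Risk AI must cease": add_months(ENTRY_INTO_FORCE, 6),
        "GPAI requirements apply": add_months(ENTRY_INTO_FORCE, 12),
        "High Risk requirements apply": add_months(ENTRY_INTO_FORCE, 24),
    }

    for milestone, due in deadlines.items():
        print(f"{milestone}: {due:%B %Y}")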

ENFORCEMENT

The European Commission’s new AI Office will lead enforcement of this new law. We should expect the AI Office to conduct audits of High-Risk systems, assign fines for violations, and drive updates as the technology evolves. The Act has noteworthy bite: violations involving Unacceptable Risk AI carry fines of up to 7% of global annual revenue, while High Risk and other categories carry fines between 1% and 3%, with caveats for small businesses. Apart from the 7% ceiling for prohibited AI, these fines sit below the GDPR’s 4%-of-revenue maximum.
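
For a rough sense of exposure, these percentage ceilings translate directly into arithmetic. The sketch below uses only the percentages cited in this article and omits the Act’s fixed-euro amounts and small-business caveats.

    # Rough fine-exposure arithmetic using only the percentages cited above.
    # Omits the Act's fixed-euro amounts and small-business caveats.
    FINE_CEILINGS = {
        "unacceptable_risk": 0.07,        # up to 7% of revenue
        "high_risk_and_other_max": 0.03,  # upper end of the 1-3% band
        "high_risk_and_other_min": 0.01,  # lower end of the 1-3% band
    }

    def max_fine(annual_revenue_eur: float, violation: str) -> float:
        # Upper-bound fine for a violation category (simplified).
        return annual_revenue_eur * FINE_CEILINGS[violation]

    # Example: a firm with EUR 500M revenue deploying prohibited AI
    print(f"EUR {max_fine(500_000_000, 'unacceptable_risk'):,.0f}")  # EUR 35,000,000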

CONCLUSION

The EU AI Act’s deadlines highlight two important takeaways. First, Unacceptable AI needs to be phased out very quickly. Second, the EU understands that businesses will need time to prepare for the High-Risk category, which probably covers many business systems. As with GDPR, there may be an adjustment period where both regulators and private entities establish best practices.

GPAI systems represent a curious case, as these types of technology are rapidly advancing. I strongly expect that there will be much more to come about how GPAI will be regulated, including more granularity about ‘systemic risk,’ bias, and oversight.

Overall, this law is a huge step forward in terms of baseline expectations for AI. Existing Data Privacy processes that produce DPIAs and other assessments are well situated to incorporate new requirements and regulatory frameworks, like this AI Act, ISO 42001, the OECD guidelines, NIST, and others. With this long-awaited law now passed, our LevelUP team is excited to work with clients to help them develop and implement programs to address AI risks and requirements.

For information on how LevelUP can help you with any or all of these requirements, please contact Wills Catling, Director at LevelUP Consulting Partners, at william.catling@levelupconsult.com, and we’d be happy to set up a consultation to hear about your program needs.
