AI in the Courtroom of the World: Understanding the EU AI Act's Reach

Artificial Intelligence (AI) has taken the world by storm over the last decade, developing at a rapid pace and spreading into sectors as varied as healthcare, the judicial system, and entertainment. The term refers to systems or machines that mimic human intelligence to perform tasks and can iteratively improve themselves based on the information they collect. AI spans from simple algorithms used in everyday applications to complex systems that can reason and learn from data. As AI advances, regulating its use becomes unavoidable for several reasons: to address the ethical complications it raises, and to ensure public safety, transparency, and accountability for those who overstep boundaries. In response, countries have adopted various regulatory strategies.

Japan is adopting a "soft law" approach to AI regulation, relying on existing data protection laws while planning a new law for 2024 to regulate generative AI. The United States (US) is moving towards AI governance managed by the executive branch of government rather than a broad national AI law. Since August 2023, China's rules have mandated state pre-review of algorithms, specifically targeting generative AI, to ensure adherence to socialist values, protect national security, uphold ethics, prevent discrimination, and promote transparency and reliability. The European Union (EU) has been working towards implementing the EU AI Act, COM (2021) 206 final, Brussels, 21.4.2021 (EU AI Act or AI Act), which is in its last stages before becoming law.

In Lebanon, the National AI Strategy for the Lebanese Industrial Sector (2020-2050), published in August 2019 and spearheaded by the Lebanese Ministry of Industry, stands as a testament to the nation's forward-looking approach to integrating AI into its industrial fabric. However, despite strategic endeavors aimed at fostering a knowledge-driven economy and digitalizing the Ministry into a "smart ministry," the Lebanese authorities have yet to establish a formal legal framework specifically designed to navigate the complexities of AI in the Lebanese market.

Given that the EU AI Act stands as the most comprehensive and forward-thinking regulation on a global scale, adopting a proactive stance by foreseeing potential AI applications and setting preemptive measures for yet-to-be-encountered scenarios, this article will concentrate on exploring its fundamental aspects and the impact it has on the field.

 

AI Act Essentials

The purpose of the AI Act is to encourage the uptake of AI by all economic operators while establishing a transparent, ethical, and safe environment. While the AI Act initially targeted the safety of AI products, its scope has since broadened to cover fundamental rights and high-risk AI systems.

What is the implementation timeframe?

In March 2024, the EU Parliament voted on and approved the proposed AI Act. Before the act becomes binding law on the Member States (MS), it must go through a corrigendum procedure, a formal process in which errors and mistakes are identified and corrected before the document is published in the Official Journal (OJ).

The next step is approval by the EU Council, which is expected over the coming months. The EU AI Act will enter into force twenty (20) days after its publication in the OJ.

What is the compliance deadline?

The compliance deadline depends on the system's degree of risk and varies between 6 and 36 months. Provisions concerning prohibited AI practices will take effect six months after the AI Act enters into force, while those relating to general-purpose AI models will take effect twelve months after entry into force. The remaining provisions will be phased in later, with most taking effect two to three years after the AI Act enters into force.
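The staggered schedule above can be sketched as a small calculation. The entry-into-force date below is a placeholder assumption (it depends on the actual OJ publication date), and the month offsets simply restate the schedule described in the text:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the same day-of-month `months` later (day clamped to month end)."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days_in_month[month - 1]))

# Hypothetical entry-into-force date: 20 days after a hypothetical OJ publication.
ENTRY_INTO_FORCE = date(2024, 8, 1)  # placeholder assumption, not the actual date

# Months after entry into force at which each group of provisions applies.
DEADLINES = {
    "prohibited AI practices": 6,
    "general-purpose AI models": 12,
    "most remaining provisions": 24,
    "longest transition period": 36,
}

for provision, months in DEADLINES.items():
    print(f"{provision}: applies from {add_months(ENTRY_INTO_FORCE, months)}")
```

Swapping in the real entry-into-force date once the act is published would yield the actual compliance calendar.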

Who is concerned by the AI Act?

  • Any economic operator established or headquartered in the EU, or
  • any operator that places AI systems on the EU market or targets the EU market.

It is worth noting that non-commercial activities, such as scientific research, are not affected by the AI Act's provisions.

What are the prohibited and high-risk AI systems?

Articles 5 and 6 of the EU AI Act set out the prohibited AI systems and the high-risk AI systems, respectively. Some of the most relevant prohibited AI systems under Article 5 are the following:

  • Deploying deceptive techniques that distort decision-making and lead to significant harm, e.g. an AI system that subliminally manipulates online shoppers into overspending,
  • Exploiting vulnerable individuals based on their age or disabilities, ultimately leading to significant harm, e.g. an AI-driven game that exploits children's cognitive vulnerabilities to push in-game purchases,
  • Social scoring, i.e. categorizing individuals based on their social behavior, leading to their mistreatment, e.g. a local government AI system that scores citizens based on social media activity, affecting their eligibility for services,
  • "Real-time" remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes (with exceptions), e.g. law enforcement using live facial recognition in public areas to track individuals without specific legal authorization.

Article 6 goes into detail on high-risk AI systems, and what requirements their providers are subject to. The following are some of the high-risk AI systems:

  • Remote biometric identification systems (those that are not banned), e.g. Biometric systems at airports that match passengers with their passport photos at gates,
  • AI systems that decide whether individuals get access to educational and vocational training, at any level, e.g. an AI tool that processes and evaluates university applications based on academic and extracurricular data,
  • AI systems that are used as a part of recruitment activity, specifically to filter applications, e.g. an AI system that filters job applications by assessing resumes against required qualifications,
  • AI systems used by law enforcement to evaluate people's risk of becoming crime victims, e.g. an AI system that assesses the probability of individuals becoming crime victims based on historical crime data and personal data, used to allocate police resources more effectively.

Some of the obligations that providers of high-risk AI systems must comply with are the following:

  • Create a risk management system, e.g. developing processes to assess and mitigate AI system risks throughout its operational life,
  • Constantly proving compliance by drawing up technical documentation, e.g. maintaining detailed records of AI system design and operation to demonstrate compliance,
  • Record keeping, e.g. ensuring all AI decision-making processes are fully logged and traceable,
  • Ensuring human oversight is never sidelined, e.g. designing the AI system so that humans can understand and oversee its operations, with decisions that can be reviewed and intervened in when necessary.

What are the non-compliance penalties?

Entities involved in marketing or deploying AI systems categorized under "unacceptable risk" could face administrative fines of up to €35 million or 7% of their total worldwide annual turnover. This category includes AI systems that could manipulate human behavior in subliminal ways, exploit vulnerabilities of specific groups of people, enable government-sponsored social scoring, or utilize real-time remote biometric identification systems in publicly accessible spaces, among other criteria.

For violations not related to the deployment of prohibited AI systems but still under the AI Act's scope, entities may be subject to fines with lower ceilings. These include fines up to €15 million or 3% of the worldwide annual turnover for non-compliance with the AI Act's stipulations outside the "unacceptable risk" category, and fines up to €7.5 million or 1% of worldwide annual turnover for providing incorrect, misleading, or incomplete information to the relevant authorities or bodies.

For each category of infringement, the applicable ceiling is the lower of the two amounts for SMEs and the higher of the two for other businesses.
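The fine-ceiling rule above (fixed amount versus a percentage of worldwide annual turnover, with the lower of the two applying to SMEs) can be expressed as a short calculation. The function name and example figures are illustrative, not drawn from the act itself:

```python
def fine_ceiling(fixed_eur: float, pct: float, turnover_eur: float, is_sme: bool) -> float:
    """Ceiling of an administrative fine under the scheme described above.

    For most businesses the ceiling is the *higher* of the fixed amount and the
    percentage of worldwide annual turnover; for SMEs it is the *lower* of the two.
    """
    pct_amount = pct * turnover_eur
    return min(fixed_eur, pct_amount) if is_sme else max(fixed_eur, pct_amount)

# "Unacceptable risk" category: EUR 35 million or 7% of turnover.
# A large firm with EUR 1bn turnover: 7% of turnover (~EUR 70m) exceeds EUR 35m.
large = fine_ceiling(35_000_000, 0.07, 1_000_000_000, is_sme=False)

# An SME with EUR 10m turnover: 7% (~EUR 700k) is the lower amount, so it applies.
sme = fine_ceiling(35_000_000, 0.07, 10_000_000, is_sme=True)
```

The same function covers the lower-tier categories by substituting €15 million / 3% or €7.5 million / 1%.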

 

AI Act’s reach

The European Union has been setting the standard in terms of regulation.

On one hand, the General Data Protection Regulation (GDPR) deals specifically with data privacy, ensuring the protection of individuals' data (consent, data processing and control, data subject rights, etc.). On the other hand, the AI Act mainly addresses AI systems themselves (risk levels, ethics, performance, and transparency), particularly when applied to sensitive sectors such as healthcare and transportation.

While the GDPR is considered one of the strictest regulations in terms of data protection, it frequently serves as a reference in contracts, regardless of whether the relevant parties directly fall within its jurisdiction. Similarly, the AI Act, with its comprehensive coverage of AI details, is anticipated to be utilized as a reference point. Together, these regulations not only underscore the EU's commitment to maintaining a fine balance between advancing technological innovation and protecting individual rights and safety but also affirm its influential role in shaping global regulatory frameworks.