European lawmakers have passed the AI Act, a groundbreaking law marking a significant step in the regulation of artificial intelligence. The legislation imposes broad obligations on AI systems and restricts how they can be used. Approved by the European Parliament following a December political agreement with EU member states, the rules will take effect gradually over the coming years. Key provisions include bans on certain AI applications, new transparency requirements, and mandatory risk assessments for high-risk AI systems.

The move comes amid a global debate over AI's risks and benefits. EU lawmaker Brando Benifei hailed the AI Act as a milestone for the safe and human-centric development of AI. While the legislation still awaits final approval from EU member states, it is poised to exert influence well beyond Europe: with significant fines attached, any company operating in the EU market, regardless of where it is based, must comply with the Act's requirements.

European Parliament members participate in a voting session at the European Parliament in Strasbourg, eastern France, on March 13, 2024. (FREDERICK FLORIN/AFP via Getty Images)

Although the law applies directly only within the EU, its ramifications are expected to be felt globally. Major AI companies are unlikely to forgo access to a market of roughly 450 million people, and other jurisdictions may adopt similar regulatory frameworks, extending the law's influence beyond EU borders. Guillaume Couneson, a partner at law firm Linklaters, notes that anyone producing or using AI tools in the EU will face extensive compliance obligations under the Act.

While jurisdictions worldwide are enacting or weighing AI regulations, the AI Act sets a precedent with its comprehensive approach. Notably, the law bans specific applications, such as emotion-recognition AI in schools and workplaces, and imposes technical documentation and data-transparency requirements on AI providers. These provisions will be phased in gradually, with enforcement expected over the coming years, marking a significant shift in the governance of AI technology.

The law also places stringent requirements on developers of the most powerful AI models, categorized as posing a "systemic risk." These developers must subject their models to advanced safety evaluations, report serious incidents to regulators, and take steps to mitigate risks and strengthen cybersecurity. The measures aim to bolster accountability and ensure the safe deployment of AI technology.

First proposed in 2021, the legislation was expanded to cover general-purpose AI after the rise of AI-powered chatbots such as OpenAI's ChatGPT. Industry groups and some European governments pushed back against blanket rules for general-purpose AI, arguing that regulation should target risky applications of the technology rather than the foundational models themselves.

During the legislative process, France and Germany sought to soften some of the law's provisions, reflecting stakeholder concerns about the regulatory burden. After changes made in the final negotiations, Arthur Mensch, CEO of Mistral AI, said the AI Act is manageable for his company, though he maintains that legislation should regulate how AI is used rather than the underlying technology.

The AI Act attracted heavy lobbying, a sign of its significance. Some corporate watchdogs and lawmakers pushed for stricter requirements on all general-purpose AI models, while others back the law's risk-based regulatory approach but worry about how it will work in practice.

In response to the finalized legislation, lobby group BusinessEurope endorsed the risk-based approach but questioned how its requirements will be interpreted in practice. Digital-rights group Access Now, by contrast, criticized the law as riddled with loopholes and inadequate to protect individuals from the potential hazards of AI.

The law also requires deepfakes to be clearly labeled, addressing concerns about the authenticity of AI-generated content. High-risk AI systems, including those used in immigration or critical infrastructure, must undergo rigorous risk assessments and meet strict data-quality standards, among other obligations. These provisions underscore the EU's commitment to responsible AI deployment and to guarding against the technology's potential harms.