EU Artificial Intelligence Act Officially Passed
On May 21, 2024, the Council of the European Union adopted the EU Artificial Intelligence Act (AI Act), which aims “to foster the development and uptake of safe and trustworthy AI systems across the EU’s single market by both private and public actors, [...] to ensure respect of fundamental rights of EU citizens and stimulate investment and innovation on artificial intelligence in Europe,” according to the official press release.
The AI Act was proposed by the European Commission in April 2021, with a provisional agreement reached in December 2023, and is now officially adopted and waiting to be signed by the presidents of the European Parliament and of the Council.
Mathieu Michel, Belgian secretary of state for digitization, administrative simplification, privacy protection, and the building regulation, made the following statement:
The adoption of the AI act is a significant milestone for the European Union. This landmark law, the first of its kind in the world, addresses a global technological challenge that also creates opportunities for our societies and economies. With the AI act, Europe emphasizes the importance of trust, transparency and accountability when dealing with new technologies while at the same time ensuring this fast-changing technology can flourish and boost European innovation.
The AI Act classifies AI systems into three categories according to the risk they pose to individuals and society, namely unacceptable risk, high risk, and limited/minimal risk, and sets out limitations and obligations for each as follows:
- Unacceptable Risk AI Systems: AI systems that pose an unacceptable risk, such as social scoring by governments and real-time biometric identification in public spaces (with some exceptions), are banned.
- High Risk AI Systems: High-risk AI systems are subject to strict requirements, including rigorous risk assessment, data governance, transparency, and human oversight. Examples include AI used in critical infrastructure, education, employment, and law enforcement.
- Limited/Minimal Risk AI Systems: Limited/minimal-risk AI systems that interact with humans, such as chatbots, must be transparent about their nature; users should be informed when they are interacting with an AI system.
Other limitations and obligations include:
- Several governing bodies are set up for proper enforcement:
- An AI Office within the Commission to enforce the common rules across the EU.
- A scientific panel of independent experts to support the enforcement activities.
- An AI Board with member states’ representatives to advise and assist the Commission and member states on consistent and effective application of the AI Act.
- An advisory forum for stakeholders to provide technical expertise to the AI Board and the Commission.
- Innovation is encouraged through regulatory sandboxes, which will allow businesses to test AI systems in a controlled environment under regulatory supervision.
- Violations of the AI Act can result in significant fines, similar to those under the GDPR, with penalties up to 7% of global annual turnover or €35 million, whichever is higher.
- Once signed, the AI Act will be published in the EU’s Official Journal and come into force twenty days later. The regulations will apply two years after this date, with certain provisions taking effect sooner, as follows:
- 6 months after publication, member states will be required to ban prohibited AI systems (e.g., government social scoring systems);
- 1 year after publication, the rules for general-purpose AI systems (e.g., large language model-based systems such as ChatGPT) will start applying.
As the first comprehensive AI law, the EU AI Act marks a significant step in addressing the challenges and opportunities presented by artificial intelligence, setting a global standard for AI regulation while emphasizing trust, transparency, and innovation.