EU Parliament approves Artificial Intelligence Act


On 13 March 2024, MEPs endorsed the Artificial Intelligence Act, and the Council gave its final approval on 21 May 2024. The Act aims to guarantee safety and protect fundamental rights while encouraging innovation.

The AI Act is the world’s first binding regulation specifically targeting Artificial Intelligence.

It aims to protect human rights, democracy, the rule of law and environmental sustainability from high-risk AI, while fostering innovation and ensuring Europe plays a leading role in this field. Rules for AI systems are set according to their potential risks and level of impact.

The EU AI Act is an important step in AI governance: it aims to set global standards for regulating AI and to ensure that safe and trustworthy AI systems are developed and adopted.

The law divides AI systems into four categories according to their risk level.

Categorization by risk:

Minimal risk: systems with little or no impact on safety or fundamental rights, subject to no additional obligations (spam filters, recommendation systems)

Limited risk: systems subject to transparency obligations, such as disclosing to users that they are interacting with AI (customer support chatbots, AI-generated content)

High risk: AI systems whose failure or misuse can have serious consequences for individuals, society or the environment (medical diagnostic systems, autonomous cars, credit scoring and other financial tools, safety components of critical infrastructure)

Unacceptable risk: practices considered a clear threat to people's safety and rights, which are banned outright (social scoring by public authorities, AI that manipulates behaviour to cause harm)

Situations that are considered high risk:

Healthcare: AI systems that make diagnoses, predict the course of a disease or manage therapy, since errors in these systems can have serious consequences.

Transport: Autonomous driving, traffic control or flight management systems are at high risk because they are associated with human lives and safety.

Security: AI systems used for cryptography, surveillance or law-enforcement purposes can have serious consequences if they are vulnerable to attacks or errors.

Finance: AI systems for trading, portfolio management or risk assessment can cause large financial losses.

Biometric identification: Facial, fingerprint or other biometric data recognition systems can seriously compromise privacy and security.

Requirements to be met by high-risk systems:

Risk management system: Developers and users of high-risk AI systems must establish a reliable risk management system that helps identify, evaluate and mitigate the risks associated with the AI system.

Data management: High-risk AI systems must follow appropriate data-governance practices. This includes responsible handling of data, ensuring privacy and maintaining the quality of training, validation and testing data.

Technical documentation: Developers must produce comprehensive technical documentation for the AI system, covering aspects such as system architecture, the algorithms used and data sources.

Record keeping: High-risk AI systems must automatically record events (logs) over their lifetime. These records help trace system behaviour, decisions and any incidents that occur during operation.

Transparency and provision of information: High-risk AI systems must be transparent, giving users clear information about how the system works, its limitations and possible biases.

Human oversight: High-risk AI systems must allow an adequate level of human oversight. Human intervention is essential for monitoring the system's operation, correcting errors and preventing harmful outcomes.

Accuracy, robustness and cybersecurity: Developers must ensure an adequate level of accuracy, robustness and cybersecurity for high-risk AI systems, which includes rigorous testing, validation and continuous monitoring.


In addition, organizations that develop or use AI systems must comply with obligations such as transparency, testing, documentation and system monitoring, which also serve as guidelines for ethical use.

The key points for ethical use are:

– Transparency and accountability

– Non-discrimination

– Security and privacy

– Human control

– Ethical assessment

The law prohibits the use of AI for discrimination, mass surveillance and manipulation, and sets strict conditions for biometric identification.

The law also imposes severe penalties for non-compliance. The amount of the fine depends on the severity and nature of the offence: the most serious violations, such as using prohibited AI practices, can attract fines of up to EUR 35 million or 7% of the violator's global annual turnover, whichever is higher.

Penalties for violation of the law can be imposed on providers, deployers, importers, distributors and notified bodies.

Infringements may include breaches of obligations on risk management, transparency, technical documentation and human oversight.

How penalties are applied in EU member states will depend on their national legislation. The law enters into force twenty days after its publication, and the Regulation will apply two years after its entry into force, with exceptions for certain provisions.

To ensure the correct implementation of the new law, it envisages the establishment of several bodies, such as an Artificial Intelligence Office within the Commission and a scientific panel of independent experts to support enforcement activities.


This website has been co-funded by the European Regional Development Fund through the Competitiveness and Cohesion Operational Programme.
