In June 2023, the European Parliament approved the proposal to regulate the use of Artificial Intelligence. Clear guidelines were needed for creating, developing and using these systems, in order to guarantee fundamental rights and strengthen digital security, while keeping the rules technology-neutral and adaptable to future developments in artificial intelligence.
Artificial Intelligence has been defined as a machine-based system designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations or decisions that influence virtual or physical environments.
This regulation will apply to providers placing AI systems on the market or putting them into service in the European Union, whether they are established in the EU or in a third country; to users of AI systems located in the EU; and to providers and users in a third country, provided that the output produced by the AI system is used in the EU.
Due to the increasing number and availability of AI systems, which are intended to impact various areas of users' lives by producing reliable solutions and information on a wide range of subjects, and which have the autonomous capacity to provide such solutions or related services, there is a need to ensure their reliable and uniform utilisation.
Therefore, services must be provided from reliable and lawful sources, guaranteeing the highest possible level of security in the information the system uses to produce its results.
This regulation aims to categorise the identifiable risks of the AI systems to be made available; this classification is intended to mitigate the problems such risks pose to the digital market and to establish uniform obligations for addressing them, so that a reliable service can be provided.
The general compliance principles are:
- human supervision and control of the AI system;
- robust, up-to-date security measures against unlawful use by third parties;
- development of AI software in compliance with European data protection standards;
- logging and traceability of the solutions the AI system proposes;
- making users aware that they are interacting with an AI system;
- the sustainable development of AI.
Risk rating: from minimal to unacceptable
The regulation also categorises AI systems into four risk levels: minimal, limited, high, and unacceptable. Each level is subject to different rules and obligations, and systems posing an unacceptable risk are banned outright.
Regarding the classification of AI systems by risk, systems are considered high risk when they are developed as a safety component of the products and services covered and/or when they operate in areas sensitive to the rights, safety, health and well-being of users. Areas susceptible to high risk include access to public services and documentation; healthcare (both in process management and in the use of AI systems in hospital products); safety of products aimed at minors; application of AI systems in the labour market; border control management; the judicial and legislative sphere; and education, among others.
According to the proposal, systems identified as high risk must have a risk management system in place throughout their lifecycle. This system must assess the impact of threats in the relevant area, identify and adopt risk mitigation measures to eliminate or reduce those threats, and provide adequate information about the service, the risks and the actions taken, always taking into account the technical knowledge, experience and education of the user, as well as the environment in which the user intends to operate the system.
Thus, the requirements for high-risk AI systems are to:
- Be developed based on training, validation and test data sets that fulfil the quality criteria set out in the regulation;
- Be accompanied by valid technical documentation before being placed on the market;
- Keep logs of activity and events that allow monitoring of the system’s operation;
- Provide transparent information to users regarding the systems’ capabilities and limitations;
- Include interface tools that allow human supervision;
- Achieve appropriate levels of accuracy, robustness and cybersecurity for the services presented and their actual use.
As far as AI systems posing an unacceptable risk are concerned, the following are prohibited:
- The placing on the market, putting into service or use of an AI system that employs subliminal techniques that bypass a person’s consciousness to substantially distort their behaviour in a way that causes or is likely to cause physical or psychological harm to them or another person;
- The placing on the market, putting into service or use of an AI system that exploits any vulnerabilities of a specific group of persons associated with their age or physical or mental disability to substantially distort the behaviour of a person belonging to that group in a way that causes or is likely to cause physical or psychological harm to that or another person;
- The placing on the market, putting into service or use of AI systems by or on behalf of public authorities to assess or rate the credibility of natural persons over a period of time based on their known or predictable social behaviour or personality or personal characteristics, where the social rating leads to one or both of the following:
1. Prejudicial or unfavourable treatment of certain natural persons or entire groups of them in social contexts unrelated to the contexts in which the data were initially generated or collected;
2. Prejudicial or unfavourable treatment of certain natural persons or whole groups of them which is unjustified and disproportionate to their social behaviour or the seriousness thereof.
- The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes, unless such use is strictly necessary for the targeted investigation of specific potential victims of crime, including missing children, the prevention of a clear, substantial and imminent threat to the life or physical safety of natural persons or a terrorist attack, or the detection, tracing, identification or prosecution of an offender or suspect of a criminal offence.
The proposal also foresees the creation of national supervisory authorities, the establishment of a European Artificial Intelligence Board, and an EU database for stand-alone high-risk artificial intelligence systems.
Finally, providers of AI systems will also be subject to post-market monitoring obligations, as well as to obligations to report incidents and malfunctions.
Please note that this is still a proposal for a regulation, and further steps remain before a final version of the law is approved. The next stages of the EU approval process are expected to take place in 2023, leading to a final piece of legislation.
The content of this information does not constitute specific legal advice; such advice can only be given in relation to a specific case. Please contact us for any further clarification or information deemed necessary regarding the application of the law.