The EU AI Act: the first horizontal AI regulation

ARNAU SALAT│28/01/2023

Europe wants to take the lead on artificial intelligence regulation. While most world powers, such as Japan or the United States, still lack specific regulation on artificial intelligence, relying instead on a number of fragmented initiatives and frameworks that cover the use of AI only in certain sectors and industries, Europe seeks to distinguish itself through comprehensive AI regulation. As it already attempted with the 2018 GDPR, Europe wants not only to establish the basis on which the driving force of the next decade will be regulated within its territory, but also to influence democratic states worldwide, so that the forthcoming Act may become a global standard.

The EU AI Act, also known as the European Regulation on Artificial Intelligence, is set to become a body of rules and guidelines governing the development and use of artificial intelligence within the European Union. The Act aims to provide the world's first horizontal legal framework for the ethical and safe development and use of AI, while also promoting innovation and competitiveness within the EU. However, can innovation and regulation coexist and thrive together?

Although we cannot yet rely on a final text, since the European Commission's proposal is still being debated by the EU Council and the EU Parliament, the structure it will adopt seems well established. By delving into that structure, one can begin to anticipate what the upcoming regulation will entail for companies and developers.

The regulation (which, unlike directives, will be directly applicable in the member states, a point always worth remembering, especially for readers who may not be familiar with European law) will be structured around a classification of the risks posed by the use of AI. As a consequence, one of the main provisions of the EU AI Act will be the requirement for high-risk AI systems to undergo a risk assessment before they can be placed on the market or used in public.

High-risk AI systems are defined as those that are likely to have significant impacts on the safety, livelihoods, or rights of individuals, or on the environment. The risk assessment process involves the identification of potential risks and the implementation of measures to mitigate those risks.

Without going very far, we are all aware that AI is not exactly a fairy tale. There is no doubt that AI will bring a wealth of opportunities that could not even be imagined before; nonetheless, it can also be used to break down our current understanding of the world, which is precisely what the EU wants to prevent. But again, how would companies be affected?

While it is true that some developers of high-risk AI software, for instance those building massive facial recognition systems, may decide to move to China or other non-democratic regimes in search of better opportunities, the majority of AI uses will remain unregulated. One should bear in mind that regulation will be assessed on a case-by-case basis, as the same product or piece of software could be subject to different requirements or provisions depending on its intended use.
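To make the intended-use point concrete, here is a minimal Python sketch of the case-by-case logic. The category names and the mapping are illustrative assumptions for the sketch, not the Act's actual annex of high-risk uses:

```python
# Illustrative only: these categories are assumptions for the sketch,
# not the regulation's actual list of high-risk uses.
HIGH_RISK_USES = {
    "biometric_identification",
    "credit_scoring",
    "recruitment_screening",
}

def risk_tier(intended_use: str) -> str:
    """The same software lands in different tiers depending on intended use."""
    return "high-risk" if intended_use in HIGH_RISK_USES else "non-high-risk"

# The same face-matching component, two deployments, two tiers:
print(risk_tier("biometric_identification"))  # high-risk
print(risk_tier("photo_album_sorting"))       # non-high-risk
```

The point of the sketch is that the classification key is the deployment context, not the software artifact itself.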

Regulating the use of artificial intelligence is a multifaceted endeavor, as it involves a diverse array of actors within the AI value chain, including, but not limited to, software developers, providers of training data, entities that customize AI models, and those who ultimately offer services to end users. Which actors fall under the purview of regulation will depend on the potential hazards posed by the use of AI at each specific link in the chain.

For instance, along the value chain, as long as a company has only written code for an AI system, code that still needs to be taken elsewhere and trained with data before it becomes something usable on the market, this first stage will be free of regulation, since it does not yet create a risk.

Taking the same example, once that initial code is fed with data by someone else and taken to market for a particular purpose, producing a risk listed in the regulation, the entity that fed the code, introducing a significant change to the initial system, would effectively face the regulation.

Having distinguished between these two value-chain situations, it is also important to point out that those working with what could be classed as industrial AI will escape regulation, as long as they are neither affecting the interests of individuals nor handling people's personal data. Such uses would not be covered by the EU AI Act, not because industrial AI could not matter for safety, but because the legislator mainly seeks to focus on consumer-facing AI, as a regulatory regime for industrial production already exists.

Apart from the aforementioned essential provisions, which will be analyzed in detail by our team once the final text is published, the EU AI Act will also establish a set of principles for the development and use of AI, including transparency, fairness, and non-discrimination. These principles are intended to ensure that AI is developed and used in a way that respects the rights and values of individuals and society.

What about compliance, and how will all those provisions be guaranteed?

The compliance formula chosen by the Commission is based on: a horizontal EU legislative instrument following a proportionate, risk-based approach + codes of conduct for non-high-risk AI systems.

What does that mean? A regulatory framework for high-risk AI systems only, with the possibility for providers of non-high-risk AI systems to follow a code of conduct. Companies that adopt codes of conduct for other AI systems would do so voluntarily.

The requirements will concern data, documentation and traceability, provision of information and transparency, human oversight, and robustness and accuracy, among others, and will be mandatory for high-risk AI systems.
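As a rough illustration of this two-track formula, the following sketch maps a system's risk tier to its obligations. The requirement names paraphrase the list above and are not the regulation's legal wording:

```python
# Requirement names paraphrase the article's list; not legal wording.
MANDATORY_HIGH_RISK_REQUIREMENTS = [
    "data_governance",
    "documentation_and_traceability",
    "provision_of_information_and_transparency",
    "human_oversight",
    "robustness_and_accuracy",
]

def obligations(is_high_risk: bool) -> list:
    """High-risk systems face the mandatory list; everyone else may,
    at most, voluntarily adopt a code of conduct."""
    if is_high_risk:
        return list(MANDATORY_HIGH_RISK_REQUIREMENTS)
    return ["voluntary_code_of_conduct"]
```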

To facilitate compliance, the Act will establish an EU-wide database of high-risk AI systems, which will be operated by the Commission and fed with data by the AI system providers.

For a high-risk system to be placed on the market, once developed it needs to undergo the conformity assessment and comply with the AI requirements. Once those requirements are met, the Commission will register the stand-alone AI system in the EU database. After that, to conclude these ex-ante requirements, the provider will need to sign a declaration of conformity, and the system will have to bear a CE marking, which leads to automatic entry to the market.
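The ex-ante sequence described above can be sketched as a simple checklist; the step names are illustrative shorthand, not legal terms of art:

```python
# Step names are illustrative shorthand for the sequence in the text.
EX_ANTE_STEPS = (
    "conformity_assessment",      # check the system against the AI requirements
    "eu_database_registration",   # Commission registers the stand-alone system
    "declaration_of_conformity",  # provider signs the declaration
    "ce_marking",                 # system bears the CE marking
)

def may_enter_market(completed_steps) -> bool:
    """Entry to the market is automatic only once every ex-ante step is done."""
    return all(step in completed_steps for step in EX_ANTE_STEPS)

print(may_enter_market(EX_ANTE_STEPS))               # True
print(may_enter_market(("conformity_assessment",)))  # False
```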

Moving on to ex-post control, market surveillance authorities, empowered under Regulation (EU) 2019/1020 on market surveillance, will be responsible for monitoring compliance and investigating non-compliance with the obligations and requirements for AI systems already placed on the market. The market surveillance authorities will have the power to intervene when AI systems generate unexpected risks or fail to comply with the provisions of the regulation. In addition, when an authority considers that non-compliance may not be restricted to its national territory, it shall inform the Commission and the other member states. Thus, if you are a system provider, bear in mind that one can go unnoticed once, but probably not always and everywhere.

But what does the regulation mean when it speaks of a market authority? The proposal does not foresee the creation of additional bodies or authorities at the member-state level, as this task will be carried out by existing sectoral authorities. These sectoral authorities will monitor operators' compliance with their relevant obligations under the regulation. Additionally, the European Data Protection Supervisor will also have the power to impose fines.

What consequences would your company face if it failed to comply with regulations?

The proposal establishes that member states must ensure effective implementation of its provisions and lay down dissuasive penalties, in accordance with the margins and criteria set in the regulation.

Bearing in mind the regulation's standards (Article 71), administrative fines of up to EUR 30 million could be imposed, or, in the case of a company failing to comply with Articles 5 and 10, fines of up to 6% of total worldwide annual turnover for the preceding financial year, whichever is higher.
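For a company infringing Articles 5 or 10, the ceiling works out as the higher of the two figures in the Commission proposal. A minimal sketch, assuming turnover is known in EUR:

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Ceiling of the administrative fine for a company infringing
    Articles 5 or 10 under the proposal: EUR 30 million or 6% of total
    worldwide annual turnover for the preceding financial year,
    whichever is higher."""
    six_percent = worldwide_annual_turnover_eur * 6 / 100
    return max(30_000_000.0, six_percent)

# 6% of EUR 1 billion is EUR 60 million, so the turnover-based cap wins:
print(max_fine_eur(1_000_000_000))  # 60000000.0
# For EUR 200 million of turnover, 6% is only EUR 12 million,
# so the flat EUR 30 million ceiling applies:
print(max_fine_eur(200_000_000))    # 30000000.0
```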

 

Overall, while there seems to be no doubt that the EU AI Act will be an important step towards ensuring the ethical and safe development and use of artificial intelligence, in order to protect the rights and values of individuals and society, its impact on innovation and competitiveness is yet to be fully determined.

Whatever the outcome, what is clear is that companies will need to be proactive in adhering to the regulation to avoid potential penalties or market exclusion. That will only be possible by understanding the real extent of this regulation from the very beginning of any entrepreneurial project or future product launch.

