Introduction
2023 will be the year of the European regulation on artificial intelligence (known as the AI Act). The final amendments expected in April will change the details, but the overall structure of the legislation, which will apply uniformly in all EU Member States, is already defined. In case of violations, the competent authorities will be able to impose fines of up to 30 million euros or 6 percent of annual worldwide turnover (for SMEs, including start-ups, fines are capped at 3 percent of worldwide turnover).
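By way of illustration only, and assuming the draft's "whichever is higher" reading of the ceiling (to be confirmed against the final text), the cap can be computed as in the sketch below; the function name and the figures are hypothetical.

```python
# Illustrative sketch of the AI Act's fine ceiling, assuming the draft's
# "whichever is higher" reading; function name and figures are hypothetical.

def max_fine_eur(worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Upper bound of the fine for the most serious violations (sketch)."""
    if is_sme:
        # SMEs and start-ups: capped at 3% of worldwide turnover (per the draft).
        return 0.03 * worldwide_turnover_eur
    return max(30_000_000.0, 0.06 * worldwide_turnover_eur)

# A company with EUR 1 billion in turnover: 6% (EUR 60m) exceeds EUR 30m.
print(max_fine_eur(1_000_000_000))            # 60000000.0
# An SME with EUR 50 million in turnover: the 3% cap gives EUR 1.5m.
print(max_fine_eur(50_000_000, is_sme=True))  # 1500000.0
```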
Scope
This legislation establishes a uniform legal framework to regulate the development, commercialization, and use of artificial intelligence (AI) systems (1) in accordance with EU constitutional values and rights.
To this end, the AI Act stipulates (Art. 1): (i) harmonized rules for the placing on the market, putting into service, and use of AI systems; (ii) the prohibition of certain AI practices; (iii) specific requirements for high-risk AI systems and obligations for operators of such systems; (iv) harmonized transparency rules for AI systems intended to interact with natural persons, emotion recognition systems, biometric categorization systems, and AI systems used to generate or manipulate image, audio, or video content; and (v) rules on market monitoring and surveillance.
The rules established by the AI Act apply to AI system providers regardless of whether they are established in the Union or in a third country, to AI system users established in the EU, and to AI system providers and users established in a third country, insofar as the AI systems affect persons located in the EU.
The AI Act does not apply to AI systems developed or used exclusively for military purposes.
Method
The AI Act aims to regulate artificial intelligence through a so-called risk-based approach, which calibrates compliance obligations to the risk (low, medium, or high) that smart software and applications will harm fundamental rights. The higher the risk, the greater the compliance burdens and responsibilities of the authors of intelligent applications. Finally, the Act excludes outright the use of artificial intelligence for certain purposes that the legislation itself identifies as contrary to EU values (e.g., social scoring).
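Purely as a stylized sketch, the risk-based approach can be pictured as a mapping from risk tier to duties, with certain practices banned outright; the tier labels and duty summaries below are illustrative simplifications, not the legal text.

```python
# Stylized, hypothetical rendering of the risk-based approach: duties
# scale with the risk tier, and some practices are prohibited outright.
# Tier labels and duty summaries are illustrative only.

TIER_DUTIES = {
    "low":    "minimal duties (e.g., voluntary codes of conduct)",
    "medium": "transparency duties towards the persons exposed to the system",
    "high":   "full compliance regime (see the high-risk requirements below)",
}

PROHIBITED_PRACTICES = {"social scoring"}  # identified as contrary to EU values

def duties_for(purpose: str, tier: str) -> str:
    """Return the illustrative duties for a system, unless its purpose is banned."""
    if purpose in PROHIBITED_PRACTICES:
        raise ValueError(f"{purpose}: prohibited practice under the AI Act")
    return TIER_DUTIES[tier]

print(duties_for("customer-service chatbot", "medium"))
```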
Compliance of “high-risk” AI systems
Of particular interest is the part of the Regulation concerning AI systems considered “high-risk”, a set of technologies that create high risks to the health, safety, or fundamental rights of people. Such systems are subject to specific rules, including: a requirement to create and maintain an active risk management system; a requirement to ensure that AI systems are developed following specific quality criteria with regard to the data and models used; a requirement to adequately document how a given AI system was developed and how it operated (also in order to demonstrate compliance with the Regulation); requirements of transparency to users about the operation of AI systems; a requirement to ensure that AI systems can be subject to oversight by individuals (“human oversight”); and a requirement to ensure the reliability, accuracy, and security of such systems. In some cases, the provider of the AI system will be able to self-assess its level of compliance; in others, it will be necessary to involve an external conformity assessment body.
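The requirements just listed lend themselves to an internal checklist that a provider might keep before choosing the conformity route; the sketch below is a hypothetical compliance aid with invented field names, not an implementation of the Regulation.

```python
# Hypothetical internal checklist mirroring the high-risk requirements
# listed above; all field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class HighRiskChecklist:
    risk_management_system: bool         # active and maintained
    data_and_model_quality: bool         # quality criteria for data and models
    technical_documentation: bool        # how the system was built and operated
    user_transparency: bool              # users informed about the system's operation
    human_oversight: bool                # individuals can oversee the system
    reliability_accuracy_security: bool

    def conformity_route(self, external_body_required: bool) -> str:
        """Pick the assessment route once every requirement is satisfied."""
        if not all(vars(self).values()):
            return "not ready: requirements outstanding"
        # Depending on the system category, assessment is internal or external.
        return ("external conformity assessment body"
                if external_body_required else "provider self-assessment")

checklist = HighRiskChecklist(True, True, True, True, True, True)
print(checklist.conformity_route(external_body_required=False))
```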
Standards and Certifications
Such a compliance system will be facilitated by the adoption of appropriate reference standards (for categories of AI systems) by standardization bodies such as ISO and CEN.
High-risk AI systems that complete the conformity assessment procedures will be CE-marked. Some systems will also have to be entered in a special public register. Without completion of these procedures, such AI systems will not be allowed to be placed on the market.
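The gating sequence described here (conformity assessment, CE marking, then registration where required) amounts to a simple precondition check before market placement; the sketch below is illustrative and the flag names are assumptions.

```python
# Illustrative precondition check for placing a high-risk AI system on
# the EU market, following the sequence above; flag names are invented.

def may_place_on_market(conformity_assessed: bool,
                        ce_marked: bool,
                        registration_required: bool,
                        registered: bool) -> bool:
    """True only once every applicable procedure has been completed."""
    if not (conformity_assessed and ce_marked):
        return False
    if registration_required and not registered:
        return False  # some systems must also appear in the public register
    return True

print(may_place_on_market(True, True, True, False))   # False: register first
print(may_place_on_market(True, True, False, False))  # True
```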
A Europe Fit for the Digital Age and Over-Regulation
The AI Act is part of the “A Europe Fit for the Digital Age” strategy defined by the European Commission and of the copious regulatory production aimed at governing the impact of the new generation of technologies. This will require coordination, and for those impacted a mapping, of obligations under two bodies of rules: on the one hand, regulations on the protection, enhancement, and security of data, personal and otherwise (GDPR, Data Act, Data Governance Act, NIS, etc.); on the other, those aimed at regulating the role of service providers, including gatekeepers and platforms (Digital Markets Act, Digital Services Act, European Digital Identity, etc.). The AI Act will also be complemented by the Artificial Intelligence Liability Directive, a proposal for which was submitted by the European Commission at the end of September 2022.
This intersection, and to some extent overlap, of the AI Act with other European regulations will require a coordination of compliance activities that will not always be smooth.
AI and ESG policies
The entry of new technologies into business operations brings to light new social, environmental, and governance issues destined to permeate the functioning of companies that make use of artificial intelligence tools. Adapting to the new rules foreshadowed by the AI Act, together with complying with the rules governing personal data subject to computational processing, provides an opportunity to reconsider the potential impact of artificial intelligence tools on environmental, social, and governance factors, and thus to verify whether and under what conditions these tools help safeguard the prospect of sustainable success.
In more general terms, the penetration of automated tools into business logic introduces new items of corporate social responsibility and thus requires the identification of new parameters for measuring sustainable-success goals and calibrating them to the changed technological environment.
AI and environmental impact (E)
Among environmental risks, the focus so far has mainly been on the environmental impact of blockchain technologies and data storage servers; however, the advent of so-called green tech, i.e., technologies aimed at sustainable information and project management, brings additional sustainability risks related to technological mismanagement (mala gestio). In this regard, it is significant that the latest version of the proposed AI Act included emission detection systems among the risk-relevant instruments. On another front, the use of robo-advice tools in the field of green investments, and even automated green bond issuance platforms, could undermine, rather than promote, the pursuit of sustainability goals in the event of technological distortion of non-financial information and of sustainable investment allocation.
AI and social impact (S)
Risks of technological sustainability must also be seen from a social perspective. The use of employee surveillance tools, automated HR systems, and consumer classification and profiling models is likely to affect the protection of the rights of workers and of stakeholders outside the company, which form the backbone of the “social” taxonomy being developed by the European Commission. On the same level are risks of gender discrimination, already recorded with respect to elementary platform algorithms, which have been blamed for filtering the most qualified job offers in a discriminatory way on the basis of gender and age. In this regard, the Report published last year by the European Institute for Gender Equality pointed out that “if unchecked, unaccountable and uncorrected, the design of AI technologies will reproduce gender biases and restrictive conceptions, while the biased dataset will amplify gender inequalities, projecting the current gap into the future.”
AI and governance impact (G)
No less significant is the potential impact of the use of robotic systems on corporate governance. In this regard, it has been highlighted that errors in the design or implementation of automated tools are likely to radiate down the corporate chain, even to the point of compromising communication flows. On a more general level, attention has been drawn to the risk of technological capture of the board, as well as to the need to involve the board, and in particular the independent directors, in the elaboration of a rigorous AI policy. Such a policy would guarantee not only compliance with the rules (mostly of European Union origin) on data and artificial intelligence but, more generally, the full adherence of the ways these tools are used to the sustainability objectives pursued by the company. This regulation is meant to fit within the more general definition of the nature and level of risk compatible with the company's strategic objectives, and to be inspired by the ethics of responsibility and the precautionary principle, which take on particular relevance in a field, such as that of new technologies, in which the means co-determine and reconfigure the ends.
A properly governed recourse to new technologies could also be usefully tested a) to facilitate the verification of compliance with appropriate ESG parameters, initially by the group's subsidiaries and subsequently along the supply chain (also in the perspective foreshadowed by the proposed directive on Corporate Sustainability Due Diligence), and b) for dialogue and engagement with shareholders and stakeholders: engagement policies could find an enabling factor in technology, in particular for a more direct and continuous dialogue with the communities and stakeholders involved from time to time in ESG policies, also for the purpose of timely verification of their actual results.
AI and technological risks
The growing importance of technology risks also emerges from the recently proposed revision of the OECD Principles of Corporate Governance, submitted in fall 2022. Among the most interesting aspects is the shift from the more established Regtech dimension (“Digital technologies can be used to enhance the supervision and implementation of corporate governance requirements, but also require that supervisory and regulatory authorities pay due attention to the management of associated risks”, Principle I.F) to the still largely experimental Corptech dimension, which includes digital security risks among the relevant risks that the board should govern and mitigate (Principles IV.A.8 and V). The OECD's new principles also emphasize that “corporate governance practices are also often influenced by human rights and environmental laws, and increasingly by laws related to digital security and data privacy, including the protection of personal data” (Principle I.C), adding that “as the use of AI and algorithms grows more prevalent, there is a corresponding need to maintain a human element in the process to avoid over-reliance on digital technologies and safeguard against risks of incorporating human biases in algorithmic models. This is crucial to appropriately manage the risks arising from the use of digital technologies as well as to foster trust in these processes.” From this perspective it is pointed out, more specifically, that “the failure to adequately explain the outcomes of a machine learning process may impede accountability and reduce trust in regulatory processes more generally,” concluding that “collaboration between data scientists and business could mitigate this risk”.
AI and Corporate Digital Responsibility
In the new scenario emerging from the convergence of the AI Act and the principles foreshadowed at the international and European Union level, companies are called upon to face the challenge of digitization and of Corporate Digital Responsibility, which presents itself as one of the most promising new frontiers of sustainability. The deeper the penetration of artificial intelligence into corporate arrangements, the more the sustainability of corporate governance will depend on the governance of the technologies employed.
AI as a Strategic Asset for Foreign Investment Control and the so-called Golden Power
AI is defined as a critical technology by EU Regulation 2019/452 on Foreign Direct Investments (Art. 4(1)(b)). Consistently, Italian DPCM 179/2020 places it among the “assets and relationships of strategic importance to the national interest” for the purposes of the Golden Power regulations, with significant impact on acquisition transactions, the establishment of NewCos with non-EU shareholdings, joint ventures, and technology licenses, and with the related government screening and notification obligations for the parties involved (Art. 9).
(1) “Artificial intelligence system” is defined as: “a system designed to operate with elements of autonomy and which, based on data and input provided by machines and/or humans, deduces how to achieve a given set of goals by making use of machine learning and/or logic- and knowledge-based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations, or decisions, which influence the environments with which the AI system interacts” (Art. 3, AI Act).