An Empirical Analysis of the EU Artificial Intelligence Act

Abstract: AI applications significantly shape our online experience, determining the content we see based on predictions of our preferences. AI is also used across many domains, from facial recognition and data analysis in law enforcement to personalized advertising and even cancer diagnosis and treatment. In short, AI has become increasingly influential across many aspects of our lives. Recognizing this significance, the European Union has taken a pioneering step by introducing the Artificial Intelligence Act, the first attempt to address the issue comprehensively on a global scale. This article aims to elucidate and illustrate the mechanisms set out in the EU's Artificial Intelligence Act.


Introduction
The EU AI Act was introduced by the European Union (EU) to establish regulations for artificial intelligence (AI) systems. Its main goals are to promote the ethical and responsible use of AI, safeguard fundamental rights, and foster innovation and competitiveness in the AI field. The Act covers several essential elements, including a risk-based approach, rules for high-risk AI systems, and requirements for transparency and accountability. As the first comprehensive set of regulations for the AI industry, the EU AI Act carries substantial influence globally. This article examines the crucial components of the proposed legislation in order to provide a comprehensive understanding of it.

Legislative Process of the EU AI Act
On April 21, 2021, the European Commission submitted the "Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts" to the European Parliament. The proposal was the result of extended discussion and took into account the European Union's White Paper on Artificial Intelligence and the European Parliament's proposal on ethical guidelines for artificial intelligence, as well as input from the European Council and the High-Level Expert Group on Artificial Intelligence. The proposal then entered the EU legislative process.
The European Commission focused on the ethical and legal regulation of the AI industry and sought to establish specific institutional arrangements grounded in the understanding and trust of ordinary people. Specifically, the High-Level Expert Group on Artificial Intelligence developed and revised the "Ethics Guidelines for Trustworthy Artificial Intelligence" and the "Assessment List for Trustworthy Artificial Intelligence" in June 2019; the latter was finalized and published in July 2020, operationalizing the seven requirements specified in the guidelines. In addition, the European Commission published a White Paper in February 2020 discussing the need for legal regulation of AI algorithms and actively engaged EU member states and citizens in the consultation process.
Against this backdrop, the European Parliament adopted a report in October 2020 on the ethical principles, civil liability, and intellectual property issues raised by artificial intelligence. Further research on AI regulation was conducted in fields such as AI applications, education, culture, and the audiovisual sector. Drawing on the content of these reports, the EU AI Act was ultimately proposed. On 14 June 2023, MEPs adopted Parliament's negotiating position on the AI Act; talks with EU countries in the Council on the final form of the law have now begun, with the aim of reaching agreement by the end of 2023.
It is worth noting that this Act developed along a different path from AI regulation in the United States and China: it is oriented more toward serving as a trade-regulatory instrument intended to improve the EU industry's currently disadvantageous position.

Key Framework of the EU AI Act
The Act defines an artificial intelligence system as software that is developed with one or more of the techniques and approaches listed in Annex I (such as machine learning, logic- and knowledge-based approaches, and statistical methods) and that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with (Article 3(1) and Annex I of the AI Act).
The specific provisions of the Act generally follow a "risk-based regulatory approach," which addresses the risks and issues posed by artificial intelligence without hindering the development of new technologies or unnecessarily increasing costs. This approach calibrates the intensity of regulation to the level of risk an AI system poses to fundamental rights in the EU, imposing strict obligations on high-risk AI systems while prescribing lighter obligations and codes of conduct for other AI systems. High-risk AI systems are thus the primary focus of the Act, which establishes a range of requirements that must be met before such systems may be used.
In this context, the EU AI Act applies not only to providers of AI systems placed on the market or put into service within the EU and to users who utilize AI systems for commercial purposes within the EU, but also to providers and users of AI systems located outside the EU where the output produced by those systems is used within the EU (Article 2(1)(a), (b), (c) and Article 3(4) of the AI Act).
Regarding regulatory bodies, the Act establishes the European Artificial Intelligence Board, composed of representatives of the member states and the European Commission, to ensure effective cooperation between the Board, national regulatory authorities, and the Commission. The Commission will also provide consultation and expertise to EU member states and collect and share best practices among them (Articles 56-58 of the AI Act). To address the burden and risks the Act imposes on industry, especially the need to preserve technological innovation for small and startup enterprises, it provides various support measures, most notably the AI regulatory sandbox system (Articles 53-55 of the AI Act).

Risk-based Specific Regulatory Approach
(1) Unacceptable risk situations
The AI Act prohibits the use of AI systems whose specific purposes violate the fundamental values of the European Union. Specifically, an AI system falls into this category if it is designed to manipulate people's behavior through subliminal techniques, exploit the vulnerabilities of specific groups, enable public authorities to assess and categorize the trustworthiness of individuals through AI-based social scoring, or perform real-time remote biometric identification for law enforcement purposes. The use of such systems is generally prohibited.
(2) High-risk situations
a. High-risk AI systems
The requirements and conditions for high-risk AI systems are largely based on the recommendations of the High-Level Expert Group on Artificial Intelligence (HLEG) and the Assessment List for Trustworthy Artificial Intelligence. These high-risk AI systems are defined in Articles 6(1) and 6(2) of the AI Act and are regulated in two ways.
Specifically, the AI Act classifies as high-risk those AI systems that are products covered by the Union harmonization legislation listed in Annex II, or that serve as safety components of such products, where those products are required to undergo conformity assessment (Article 6(1) of the AI Act). For example, Annex II covers machinery, toys, recreational craft and personal watercraft, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, cableway installations, personal protective equipment, appliances burning gaseous fuels, medical devices, in vitro diagnostic medical devices, cars, civil aircraft, two- and three-wheeled vehicles, marine equipment, rail systems, and so on.
On the other hand, Annex III classifies AI systems as high-risk based on their specific intended purposes (Article 6(2)). It covers AI systems in eight areas: biometric identification and categorization of natural persons; management and operation of critical infrastructure; education and vocational training; employment, worker management, and access to self-employment; access to and enjoyment of essential private and public services and benefits; law enforcement; migration, asylum, and border control; and the administration of justice and democratic processes.
In particular, with respect to Annex III, the AI Act provides that the European Commission may amend its scope of application subject to specific requirements (Article 7(1)), taking into account factors such as the occurrence of harm, the potential threat to fundamental rights, and inequalities between users (Article 7(2)).

b. Risk Management System
For high-risk AI systems, legal responsibilities are not only imposed on system suppliers and manufacturers but also on distributors, importers, users, and other third parties.
Specifically, Articles 16 to 29 of the AI Act set out concrete obligations, including: fulfilling the requirements of the Act; establishing a quality management system (covering compliance strategies, system design, testing, data management procedures, risk management, post-market surveillance, and liability arrangements); compiling technical documentation; keeping the logs generated by high-risk AI systems under their control; carrying out the relevant conformity assessment procedure before a high-risk AI system is placed on the market or put into service; complying with registration requirements (registering the AI system in the EU database); implementing necessary corrective measures; notifying national authorities or notified bodies of non-compliance of high-risk AI systems in use or in service and of the corrective measures taken; affixing the CE marking to compliant high-risk AI systems; and providing evidence of compliance upon request from competent national authorities.
Where high-risk AI systems are placed on the market or put into service, manufacturers bear the same responsibilities as AI system providers, while importers and distributors are obliged to verify compliance, report risks in a timely manner, store and transport the systems properly, provide information, and cooperate with authorities. Users of high-risk AI systems likewise have specific obligations, including following the instructions for use, ensuring that input data is relevant to the system's intended purpose, monitoring the system, reporting risks and suspending use where necessary, retaining records, and protecting data.
In summary, high-risk AI systems must meet requirements concerning data governance, transparency, human oversight, accuracy, robustness, and security. They must also establish corresponding risk management systems, technical documentation, and record retention systems before they can be placed on the market and used.
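Purely as an illustration, the Act's risk-based tiering described above can be modeled as a simple decision procedure. The category names, set memberships, and the classify function below are inventions for this article, a loose paraphrase of Articles 5-7 and Annexes II-III, not the legal tests themselves.

```python
# Illustrative simplification of the AI Act's risk tiers; labels are
# paraphrases of the Act's categories, not legal terms of art.
PROHIBITED_PRACTICES = {          # cf. Article 5 (unacceptable risk)
    "subliminal_manipulation",
    "exploiting_vulnerable_groups",
    "social_scoring_by_public_bodies",
    "realtime_remote_biometric_id_for_law_enforcement",
}

HIGH_RISK_PURPOSES = {            # cf. Annex III (eight purpose areas)
    "biometric_identification",
    "critical_infrastructure",
    "education_and_vocational_training",
    "employment_and_worker_management",
    "essential_services_access",
    "law_enforcement",
    "migration_asylum_border_control",
    "justice_and_democratic_processes",
}

def classify(purpose: str, is_annex_ii_safety_component: bool = False) -> str:
    """Return an illustrative risk tier for an AI system's intended purpose."""
    if purpose in PROHIBITED_PRACTICES:
        return "unacceptable"      # use prohibited (cf. Article 5)
    if is_annex_ii_safety_component or purpose in HIGH_RISK_PURPOSES:
        return "high"              # conformity assessment required (cf. Articles 6, 19, 43)
    return "limited_or_minimal"    # transparency duties / codes of conduct only

print(classify("social_scoring_by_public_bodies"))   # unacceptable
print(classify("employment_and_worker_management"))  # high
```

The point of the sketch is only that the regime is a tiered filter: prohibition is checked first, then the two high-risk routes (Annex II safety components and Annex III purposes), with everything else falling into the lighter-touch residual tier.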

Conformity Assessment as a Licensing Condition
(1) Conformity Assessment System
In addition to meeting the basic conditions specified in the Act, the most important requirement for supplying or placing high-risk AI systems on the market is completion of the conformity assessment procedure stipulated in Article 43 of the AI Act (Article 19 of the AI Act). In other words, among all the obligated entities defined in the Act, providers bear the broadest obligations; for high-risk systems, whether based on internal control (Annex VI) or on assessment by a notified body (Annex VII), they must complete a conformity assessment procedure, including obtaining the CE marking.
According to the AI Act, conformity assessment bodies will be supervised by the notifying authorities designated by each member state (Article 30, Article 31 of the AI Act), and these bodies will be responsible for verifying the compliance of high-risk AI systems (Article 33(1) of the AI Act).
(2) Methods of Conformity Assessment
For providers of remote biometric identification systems and of critical infrastructure management and operation systems, compliance with the EU harmonized standards must first be demonstrated; the provider may then either carry out a self-assessment based on internal control or undergo a conformity assessment by a notified third-party body, based on its quality management system and technical documentation, within the specified timeframe. If the EU harmonized standards are not applicable, or are only partially applied, a conformity assessment by a notified third-party body is required (Article 43(1) of the AI Act).
In particular, self-assessment based on internal control is carried out by demonstrating compliance with the Act's requirements through the quality management system, the information in the technical documentation, the system's design and development process, and post-market management and surveillance (Annex VI). During this process, notified bodies confirm the assessment results by issuing conformity assessment certificates (Article 44 of the AI Act).

(3) Obligations of Notified Bodies and Market Surveillance Authorities
The AI Act grants the powers to appoint and supervise notified bodies and conformity assessment bodies to the notifying authorities designated by each member state (Articles 30 and 31 of the AI Act). Notified bodies must therefore provide the notifying authorities with information such as EU technical documentation assessment certificates issued under the requirements of the Act, annexes to those certificates, approvals of quality management systems, and EU technical documentation assessment certificates issued under the requirements of Annex VII (Article 46(1) of the AI Act). In addition, each notified body must provide information on quality management system approvals it has refused or withdrawn upon request from other notified bodies (Article 46(2) of the AI Act), and must inform other notified bodies performing similar conformity assessment activities of the results of its conformity assessments (Article 46(3) of the AI Act).
In exceptional cases, market surveillance authorities may allow exemptions for the conformity assessment of high-risk AI systems. These exemptions may be granted within the territory of relevant member states for specific high-risk AI systems based on reasons such as public safety, protection of individual life and health, environmental protection, and protection of important industrial and infrastructure assets. However, the duration of such exemptions should be limited to a reasonable period for conducting necessary conformity assessment procedures and should be terminated immediately after completing such procedures (Article 47(1) of the AI Act).

EU Self-Declaration and CE Conformity Mark
Providers of AI systems must draw up an EU Declaration of Conformity for each AI system and keep it for at least 10 years after the system is placed on the market or put into service. Upon request from the relevant national authorities, the provider must supply copies of the EU Declaration of Conformity (Article 48(1)). As to its content, the AI Act specifies that the declaration must identify the high-risk AI system in question and state its compliance with the requirements set out in Chapter 2 of the Act, including all the information listed in Annex V (Article 48(2), Article 48(3)). In addition, the EU Declaration of Conformity should be drawn up so as to ensure coherence with other relevant EU legislation applicable to high-risk AI systems and include the information needed to identify the applicable EU harmonization legislation (Article 48(3)).
The CE conformity mark must be affixed directly to the high-risk AI system in a legible and permanent manner. However, if direct affixing is impractical, it may be affixed through packaging or accompanying documents (Article 49(1)).

Conclusion
The above analysis shows that the AI Act combines legislative rules, oversight by designated independent national authorities, and heavy penalties with co-regulation, standardization, and certification. In doing so, it grants AI providers an overly prominent role in implementing the regulation, giving them discretionary power, particularly through internal control, and relying excessively on the effectiveness of conformity assessment and the CE mark. Flexible self-regulatory arrangements need to be balanced by strong regulatory authorities. Yet the implementation of the AI Act rests primarily on national competences, without ensuring the staffing and funding of the member state authorities responsible for enforcing the proposed regulation. In this respect, the limitations already encountered in the implementation of the GDPR, for example, could be reproduced here.