Report
Artificial intelligence (AI) has been portrayed as beneficial to the European Union (EU)'s continued progress: in sustaining the environment, improving the health sector, making finance and manufacturing more efficient, enhancing agriculture and human mobility, augmenting elderly care, and advancing education at all levels.
Nonetheless, AI systems and their uptake have created safety risks and challenged the protections for people enshrined in the EU Charter of Fundamental Rights. The rights to personal dignity and privacy, freedom from discrimination, freedom of expression, and more are potentially, exponentially, and even unpredictably threatened when an increasingly autonomous machinic actor, the AI system, is invited into relations in the social, public and economic spheres. EU leaders have acknowledged the potential harm to their populations and have led globally by initiating a series of steps to regulate, develop, and govern the integration of AI products and services in the digital single market.
People's social positions and class status, individual subjectivities, and day-to-day lived experiences are not identical to one another, and can even change from moment to moment (Moore, 2024). Therefore, the impact of the applications and usage of new technologies is not identical across people, who are called 'data subjects' in EU data law. We all experience the world differently depending on which type of data subject we are embodying, such as a citizen, a consumer, or a worker, identities which can occur simultaneously within one body. Technology laws have tended to focus on consumers' rights, however, rather than on the rights of other types of data subjects. Regulation does mention workplace or employment-related technologies, such as the high-risk category in the AI Act, which is progressive; however, isolated categories within policy, or reliance on labour law to protect workers, are both insufficient to provide full worker protections today. All types of data extraction and advancements around AI are likely to have an impact on all types of data subjects. To address these issues, our Artificial Intelligence Policy Observatory for the World of Work (AIPOWW) research focusses on the ways that the regulation, development and governance of AI are impacting, or are likely to impact, the world of work. Our research attempts to reveal emerging issues, and to predict further issues that will emerge in the world of work internationally.
The European Union's Artificial Intelligence Act (EU AI Act) was the first attempt at a hard law global regulation to manage AI integration. The EU AI Act has jurisdictional reach into any third (non-EU) country, because it applies to anyone who produces, distributes or provides AI systems or models in the Union. It also stands out from other jurisdictions' approaches in several ways, including its 'risk' categorisations, where high-risk products must go through testing procedures before release to the market; its interventions surrounding conformity assessment, such as the need for regulatory sandboxes; and its attempt at comprehensiveness, where the legal framework is enforceable in national courts across the EU. As the EU AI Act applies to anyone who markets or places an AI system or model in the Union, and as it sets regulatory standards higher than other jurisdictions do, AI operators will tend to comply with it. This means the AI Act is likely to have a global impact. Because of this global reach, the law's impact on workers will also have a global quality; it is therefore the focus of this Case Study.
The current Case Study, following the same pattern as our other AIPOWW Studies, outlines first, Regulation of AI across the EU; then Development, where businesses and surrounding stakeholders became involved; and finally Governance, to illuminate the engagement of civil society and social actors, including trade unions and other worker representative groups. Actors across the EU have taken specific steps toward AI regulation, development and governance. This Case Study outlines the recent history and contemporary activities within these areas. Noting stakeholder inclusion and the distinctive angles taken in the earlier and more recent stages helps us identify the issues faced in the world of work today.
View the European Union tracker
Regulation
The EU's major contribution to AI-related regulation, both regionally and globally, is the design of the first comprehensive hard law AI regulation in the world: the AI Act. A series of stakeholder engagement fora were held under a soft law approach, which is covered in the Governance section of this Case Study. This section focusses on the AI Act as a hard law instrument.
The EU operates through a multilevel formation of regulatory and governing agreements, where various types of rules are tabled, debated, and voted upon, with requirements for integration and implementation, and respective responsibilities for Member States. This is a unique democratic formula, in which civil society and wider stakeholders are regularly consulted before regulations are rolled out. The process of making European law involves a trilogue structure. The European Commission, the executive arm of the EU, has the responsibility to propose EU legislative acts (primarily directives, regulations, delegated acts and implementing acts) and to implement the decisions of the two bodies called co-legislators: the European Parliament and the Council of the European Union. The European Council is also part of the governing structure of the EU, but has the responsibility of neither proposing nor implementing Regulations.
The AI Act entered into force on 1st August 2024 and will apply from 2nd August 2026 (save for some provisions applicable in 2025 and 2027). Leading up to this, a series of consultations with civil society and wider stakeholders, and then consultations across the European Parliament and the Council of the European Union, were held from 2018 to 2024. The current Regulation section outlines the key regulatory points along this process. A soft law approach was taken in the early phases of the AI Act and is therefore covered in the Governance section (within the current Case Study).
Our AIPOWW discussions of Regulation cover hard law initiatives across jurisdictions. This section therefore focusses on the hard law process behind the AI Act, which now enters its period of application across the EU.
Towards AI Regulation
Between 2018 and 2020, a series of Guidelines, a Definition, and Assessment documents were published, informed by consultations with over 1,200 stakeholders from business and industry, academia, citizens, and civil society. This extensive consultation culminated in February 2020 with the European Commission's release of the White Paper On Artificial Intelligence: A European Approach to Excellence and Trust (or ‘White Paper on AI’) (see Governance section below for full discussion of the process leading up to hard law regulation in the EU).
The White Paper outlined the foundational elements of a European Commission-led AI regulatory framework, introduced forthcoming regulatory commitments, and called for stakeholders’ consultations over the proposed regulatory framework. This Regulation section covers the evolution of this hard law instrument, the AI Act.
A little more than one year after the White Paper on AI was published, in April 2021, the European Commission submitted the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, which set out the intentions for a regulatory framework for AI. The decision to regulate AI horizontally meant that the agreed policies would eventually have to be implemented in each EU Member State.
When the Proposal was published, the European Parliament and the Council of the European Union entered a period of negotiation with the European Commission, originally intended to last two years (it took longer). During this period, the Parliament's and Council's Committees suggested various amendments to the Commission's text. The Committees debated specific topics internally and published a series of reports publicly. The opinions and views expressed by people in the respective Committees were, of course, not immediately legally binding, but the discussions over this period formed part of the consultation phases for what would become a hard law instrument.
To set the scene: the Communication on Fostering a European Approach to Artificial Intelligence, published on 21st April 2021, also facilitated the shift from a soft law to a hard law approach, calling for the adoption of a new regulatory framework on AI. The EU already has fundamental rights protections, safety and health law, labour law, and consumer rights law, but these, while useful, were not seen as wholly sufficient to protect against whatever AI would bring. The Communication on Fostering a European Approach to Artificial Intelligence included both a Proposal for a Regulation laying down harmonised rules on artificial intelligence and the Coordinated Plan on Artificial Intelligence.
Balance of ‘Competences’ for negotiations
In May 2021, many European Parliament Committees exercised their right to give an opinion and to submit amendment suggestions as part of standard negotiation procedures for the AI Act, and sought exclusive or shared 'competences', that is, representation in the decision-making sphere through, for example, holding the Rapporteur position. In other words, 'competences' refers to the lines of responsibility and decision-making power of the Committees and of the individuals leading them; Rapporteurs, of course, have more responsibility and influence than regular Committee members.
The Parliamentary Committees include the European Parliament's Committee on Legal Affairs (JURI), the Committee on Civil Liberties, Justice and Home Affairs (LIBE), the Committee on Culture and Education (CULT), and the Committee on Industry, Research and Energy (ITRE), among others. The feedback period ran from 26th April 2021 until 6th August 2021. Although the Commission received 133 unique pieces of feedback by August 2021, the decision was made after these first phases to stick to the original proposal draft, i.e. the text that had been published the previous April. This is the text that was then put to the democratic process with the Committees, where amendment discussions over the next two years concerned what the 'scope' of the legislation would be; what would be considered 'high-risk' in the categorisation list; and whether companies would need to carry out independent conformity assessments or more in-house assessments not requiring the external measure.
In June 2021, the Committee on the Internal Market and Consumer Protection (IMCO) appointed Brando Benifei, of the Group of the Progressive Alliance of Socialists and Democrats in the European Parliament (S&D, Italy), as the AI Act Rapporteur, via the Parliament's standard internal mechanism and based on election results. In an interview between the current author and policy adviser Filip Molnar, who worked for a Czech MEP during this period, the view was expressed that Benifei was not seen as a compromise candidate, and that this was beneficial in putting pressure towards a more inclusive division of competences across EP Committees, which represent specific segments of the population. The idea of having two leading Committees, i.e. another one beyond IMCO, and of seeking two co-rapporteurs, was discussed.
Between September and November 2021, the discussion on competences continued. Some Committees started to appoint their own 'opinion rapporteurs', though not all did, since some have a tradition of waiting for the 'competences' to be settled before making that decision. For example, CULT appointed an opinion rapporteur in July 2021; ENVI in September; TRAN in November 2021; and JURI and ITRE in January 2022.
In the end, the AI Act file was assigned jointly to IMCO and LIBE, which meant that specific dimensions to do with markets and consumers (IMCO), and concepts surrounding justice (LIBE), were prioritised when considering what is at stake with the influx of AI into European societies. With Dragoș Tudorache as LIBE rapporteur, things started to move a bit faster, since he was known to be more willing to listen to industry arguments than perhaps Benifei was (given the relevant party affiliations), and he was also seen as a political counterbalance to Benifei. This is interesting when considering the balance between 'regulation' and 'development' orientations as we have organised them in these AIPOWW Case Studies. In any case, workers were not prioritised in the thinking about, and allocation of, which Committees would lead this process and which Rapporteurs would hold competences.
From April 2021 to the end of 2023, the Parliament worked to adopt its negotiating position through a period of implementation phases and, in the final stages, a series of Trilogues. These stages involve negotiations and meetings across Committees, where suggested changes to the original text are discussed. The Council of the European Union was also part of the same process, and published various reports indicating how its position on the AI Act text was developing. In December 2022, the Council of the European Union adopted its general approach and compromise text (Legislative Train Schedule). The Council's text, inter alia:
- narrows down the definition to systems developed through machine learning approaches and logic- and knowledge-based approaches;
- extends to private actors the prohibition on using AI for social scoring;
- adds a horizontal layer on top of the high-risk classification to ensure that AI systems that are not likely to cause serious fundamental rights violations or other significant risks are not captured;
- clarifies the requirements for high-risk AI systems;
- adds new provisions to account for situations where AI systems can be used for many different purposes (general purpose AI);
- clarifies the scope of the AI Act (e.g. explicit exclusion of national security, defence, and military purposes from the scope of the AI Act) and provisions relating to law enforcement authorities;
- simplifies the compliance framework for the AI Act;
- adds new provisions to increase transparency and allow users' complaints;
- substantially modifies the provisions concerning measures in support of innovation (e.g. AI regulatory sandboxes).
The European Parliament then moved to vote on a final text, which had already fielded 3,000 suggested amendments put forward by the internal Committees, together with the text prepared in February by the Co-Rapporteurs. After this point, the process entered its final phase, with the intention that the AI Act become law by the summer of 2023.
Notably, on 15th February 2023, an intensive 'marathon' discussion was held within the European Parliament to iron out nearly all of the amendments that had been suggested by the Committees. The amendment suggestions were addressed in the text shared by the co-rapporteurs on 24th February, to be voted upon across the Committees, so that the Parliament could release its common position (at which stage the final trilogue could occur). Suggested changes would require testing of AI systems to consider 'intended purpose and the reasonably foreseeable misuse… [emphasis by current author] categories that should be given particular consideration in assessing risk have been limited to vulnerable groups and children' (Euractiv 2023). The text shared by the co-rapporteurs maintained a devised 'Fundamental rights impact assessment', which would be required for high-risk areas. The text at that time also placed an emphasis on the ban on social scoring and extended this ban to private companies. Furthermore, new text was added requiring the authorities that establish sandboxes to actively supervise developers of high-risk systems, to facilitate and ensure compliance after the sandbox testing process.
Ex Ante position and the emergence of LLMs
The ex ante position, which emphasises that 'intended purpose' must be taken into account when assessing the future use of AI systems, has been debated throughout the course of the AI Act's text deliberations. The issue with this position is that AI is, definitively, not always predictable. 'Foreseeable misuse', in fact, may be impossible to predict, because technology develops incredibly quickly, and open source and free software advocates may see the requirement to define a fixed path for the use and application of a system as contradicting both the development of AI and the liberties and freedoms that technology should permit. Nonetheless, the Parliament's suggested amendments reflect the complexity of attempting to define 'high-risk' and, in so doing, of attempting to ensure the protection of fundamental rights and promote social justice.
In February 2023, the European Parliament co-rapporteurs (Brando Benifei and Dragoș Tudorache) worked carefully to identify the list of AI uses seen to pose risks, the practices that should be prohibited, and the definitions surrounding key concepts in the draft regulation. They unveiled the final text, to be voted on 26th May 2023, which outlined the areas of practice that would be considered high-risk. Helpfully for the world of work, the areas of high risk suggested here included algorithms assisting decisions related to 'the initiation, establishment, implementation or termination of an employment relation, notably for allocating personalised tasks or monitoring compliance with workplace rules' (Euractiv 2023). These provide some good ways forward for thinking about how workers can be protected by new regulation.
One unforeseen disruption to the AI Act process was the launch of large language models (LLMs), which led to emergency meetings across the decision-making bodies for the AI Act. I have outlined this pivotal moment in the Development section below, because it illuminates the impact that business innovation had during this regulatory phase. The text, with amendments and negotiations incorporated, went to the plenary vote on 13th March 2024, when the European Parliament voted to adopt it. Then, on 16th April, the Parliament published its 'corrigendum', in which various details that were not seen as accurate within the text were corrected; IMCO reviewed these. The key corrections that had implications for the world of work are discussed in the Development section of the current Case Study, below.
Then, on 21st May 2024, the Council of the European Union voted to approve the AI Act. On 12th July, the final text of the AI Act was published in the EU Official Journal. The next steps are as follows (a small illustrative sketch follows the list):
- February 2025: Chapters I (general provisions) & II (prohibited AI systems) will apply;
- August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Article 78 (confidentiality) will apply, except for Article 101 (fines for General Purpose AI providers);
- August 2026: the whole AI Act will apply, except for Article 6(1) & corresponding obligations (one of the categories of high-risk AI systems);
- August 2027: Article 6(1) & corresponding obligations will apply. (Jarovsky 2024)
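For readers mapping these dates onto compliance planning, the staggered schedule above can be treated as a simple date-to-provisions lookup. The Python sketch below is a minimal illustration under stated assumptions: the milestone dates are month-level simplifications of the list above, and the provision labels are paraphrased. It is a planning aid sketch, not an authoritative statement of the law.

```python
from datetime import date

# Hypothetical compliance-planning sketch (not legal advice): which groups of
# AI Act provisions have begun to apply by a given date, per the staggered
# schedule listed above. Dates are month-level simplifications; consult the
# Official Journal text for the exact days.
MILESTONES = [
    (date(2025, 2, 1), "Chapters I (general provisions) and II (prohibited AI systems)"),
    (date(2025, 8, 1), "Chapter III Section 4, Chapters V, VII and XII, "
                       "and Article 78, except Article 101"),
    (date(2026, 8, 1), "Whole AI Act, except Article 6(1) and corresponding obligations"),
    (date(2027, 8, 1), "Article 6(1) and corresponding obligations"),
]

def provisions_in_force(on: date) -> list[str]:
    """Return the provision groups that have begun to apply by the given date."""
    return [label for milestone, label in MILESTONES if on >= milestone]

# Example: by September 2026, everything except Article 6(1) applies.
for group in provisions_in_force(date(2026, 9, 1)):
    print("-", group)
```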
The final AI Act Regulation is fairly strong in its discussion of the risks posed to workers when AI systems are used in working environments, and it encourages some protections. The introductory text, recital (57), summarises these:
(57) AI systems used in employment, workers management and access to self-employment, in particular for the recruitment and selection of persons, for making decisions affecting terms of the work-related relationship, promotion and termination of work-related contractual relationships, for allocating tasks on the basis of individual behaviour, personal traits or characteristics and for monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may have an appreciable impact on future career prospects, livelihoods of those persons and workers’ rights. Relevant work-related contractual relationships should, in a meaningful manner, involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of such persons may also undermine their fundamental rights to data protection and privacy
AI systems used in employment environments are to be listed as high-risk, meaning they must go through testing procedures, such as in a regulatory sandbox environment, before being used. There is also an emphasis on the responsibility of deployers of high-risk AI systems to provide information to workers about this (AI Act introductory text, recital 92). While it is not yet clear how far the AI Act departs from existing EU law surrounding worker protections, the AI Act does emphasise the risks of deploying AI systems at work, which is a progressive gesture.
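To illustrate how an organisation might triage its own tools against the employment-related high-risk category described in recital (57), the sketch below encodes the uses named in the recital as a simple checklist. The tag names are hypothetical paraphrases of the recital's language; this is a first-pass screening illustration, assuming such a tagging scheme exists, not a legal determination of high-risk status.

```python
# Illustrative first-pass screen (not a legal determination): flag an AI
# use case as potentially high-risk under the employment-related category
# paraphrased from recital (57) of the AI Act.
EMPLOYMENT_HIGH_RISK_USES = {
    "recruitment_or_selection",
    "decisions_on_terms_promotion_or_termination",
    "task_allocation_by_behaviour_or_traits",
    "monitoring_or_evaluation_of_workers",
}

def needs_high_risk_review(use_case_tag: str) -> bool:
    """Return True if a (hypothetical) use-case tag matches an
    employment-related use named in recital (57)."""
    return use_case_tag in EMPLOYMENT_HIGH_RISK_USES

# Example: an algorithmic shift-allocation tool would be flagged for review.
print(needs_high_risk_review("task_allocation_by_behaviour_or_traits"))  # True
```

The point of such a screen is only to route tools toward the testing and transparency obligations the Act attaches to high-risk systems; the legal classification itself follows from the Act's own text.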
In the following section, I turn to the Development aspects of EU AI integration, looking at the extent to which markets, business and innovation are emphasised in this process, as well as how business activities and innovation, in the form of the introduction of large language models (LLMs) and chatbots, impacted this process. My selected examples throw light on the balance between fundamental rights and business orientations.