Artificial Intelligence Policy Observatory for the World of Work

European Union Case

European flags flying in front of the EU headquarters building.

Artificial intelligence (AI) has been portrayed as beneficial for the European Union’s (EU) continued progress: in sustaining the environment, improving the health sector, making finance and manufacturing more efficient, enhancing agriculture and human mobility, augmenting elderly care, and advancing education at all levels.

Nonetheless, AI systems and their uptake have created safety risks and challenges for the protections for people enshrined within the EU Charter of Fundamental Rights. The right to personal dignity and privacy, freedom from discrimination, freedom of expression, and more, are potentially exponentially, and even unpredictably, threatened when increasingly autonomous machinic actors, that is, AI systems, are invited into relations in the social, public and economic spheres. EU leaders have acknowledged the issues this can potentially cause for their populations and have been leading globally by initiating a series of steps to regulate, develop, and govern the integration of AI products and services in the digital single market.

People’s social positions and class status, individual subjectivities, and day-to-day lived experiences are not identical to one another, and can even change individually from moment to moment (Moore, 2024). Therefore, the impact of the applications and usage of new technologies is not identical across people, who are called ‘data subjects’ in EU data law. We all experience the world differently depending on which type of data subject we are embodying, such as a citizen, a consumer, or a worker, identities which can occur simultaneously within one body. Technology laws have tended to focus on consumers’ rights, however, rather than on other types of data subjects’ rights. Regulation does mention workplace or employment-related technologies, such as the high-risk category in the AI Act, which is progressive; however, isolated categories within policy, or reliance on labour law to protect workers, are both insufficient to provide full worker protections today. All types of data extraction and advancements around AI are likely to have an impact on all types of data subjects. To address these issues, our Artificial Intelligence Policy Observatory for the World of Work (AIPOWW) research is focussed on the way that regulation, development and governance of AI is impacting, or is likely to impact, the world of work. Our research attempts to reveal emerging issues, and to predict further issues that will emerge in the world of work, internationally.

The European Union’s Artificial Intelligence Act (EU AI Act) was the first attempt at a hard law global regulation to manage AI integration, and it stands out from other jurisdictions’ approaches in several ways, including its ‘risk’ categorisations, where high-risk products must go through testing procedures before being released to the market; the interventions surrounding conformity assessment, such as the need for sandboxes; and its attempt at comprehensiveness, where the legal framework makes it enforceable in national courts across the EU. Also, as a European law, it will have the ‘Brussels Effect’, meaning that corporations internationally will tend toward compliance with the Act even when not operating only in EU markets. This means the AI Act is likely to have a global impact, quickly. Because of the global reach this law will have, the impact on workers will also have a global quality; it is therefore the focus of this Case Study.

The current Case Study, in the same pattern as our other AIPOWW Studies, therefore outlines first, Regulation of AI across the EU; then Development, where businesses and surrounding stakeholders became involved; and Governance, to illuminate civil society and social actors’ engagement, including trade unions and other worker representative groups. Actors across the EU have taken specific steps toward AI regulation, development, and governance. This Case Study outlines the recent history and contemporary activities within these areas. Noting stakeholder inclusion and the distinctive angles taken in the earlier and more recent stages aids us in identifying the issues faced in the world of work today.

I. Regulation

The EU’s major contribution to AI regulation, both regionally and globally, is the design of the first comprehensive hard law AI regulation in the world, the AI Act. A series of stakeholder engagement fora were held under a soft law approach, which is covered in the Governance section of this Case Study. This section will focus on the AI Act as a hard law instrument.

The EU operates from a multilevel formation of regulatory and governing agreements, where a series of types of rules are tabled, debated, and voted upon, with requirements for integration and respective responsibilities for member states. This is a unique democratic formula, where civil society and wider stakeholders are regularly consulted before regulations are rolled out. The process of making European law involves a trilogue structure. The European Commission, the executive arm of the EU, has the responsibility to propose Regulations and to implement the decisions of the other two bodies, who are called co-legislators. The European Parliament and the Council of the European Union are part of the governing structure of the EU and adopt Regulations, but do not have the responsibility of proposing or implementing them.

The publication of the AI Act in the Official Journal of the European Union occurred on 12th July 2024, which marked the last step in the legislative process, and the final text entered into force on 1st August 2024. Leading up to this, a series of consultations with civil society and wider stakeholders, and then consultations across the European Parliament and the Council of the European Union, were held from 2018 to 2024. The current Regulation section outlines the key regulatory points along this process. A soft law approach was taken in the early phases of the AI Act, and is therefore included in the Governance section (within the current Case Study).

Our AIPOWW discussions of Regulation cover hard law initiatives across jurisdictions. So, this section focusses on the hard law process through which the AI Act came about, as the AI Act now enters its period of application across the EU.

Towards AI Regulation

Between 2018 and 2020, a series of Guidelines, a Definition, and Assessment documents were published, informed by consultations with over 1,200 stakeholders from business and industry, academia, citizens, and civil society. This extensive consultation culminated in February 2020 with the European Commission's release of the White Paper On Artificial Intelligence: A European Approach to Excellence and Trust (or ‘White Paper on AI’) (see Governance section below for full discussion of the process leading up to hard law regulation in the EU).

The White Paper consolidated civil society’s insights from the consultation process, outlined the foundational elements of a European Commission-led AI regulatory framework, and introduced forthcoming regulatory commitments. Key aspects of this White Paper laid the groundwork for the values and regulatory requirements that would later shape the AI Act’s subsequent stages. This Regulation section covers the evolution of this hard law instrument, the AI Act.

A little over a year after the White Paper on AI was published, in April 2021, the European Commission submitted a Proposal for a Regulation of The European Parliament and of The Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, which set out the intentions for a regulatory framework for AI. The decision to horizontally regulate AI meant that the policies agreed upon would eventually be required for implementation in each EU Member State.

When the Proposal was published, the European Parliament and the Council of the European Union went into a period of negotiation with the European Commission, for what was originally intended to be two years (but took longer). During this period, various amendments to the Commission’s text were suggested by the Parliament’s and Council’s Committees. The Parliament and Council Committees debated specific topics internally, and published a series of reports publicly. The opinions and views expressed by people in the respective Committees were, of course, not immediately legally binding, but discussions over this period formed part of the consultation phases for what would become a hard law instrument.

To set the scene, the Communication on Fostering a European Approach to Artificial Intelligence, published on 21st April 2021, also facilitated the shift from a soft law to a hard law approach, calling for the adoption of a new regulatory framework on AI. The EU already had fundamental rights protections, safety and health law, labour law, and consumer rights law, but while useful, these were not seen as wholly sufficient to protect against whatever AI would bring. The Communication on Fostering a European Approach to Artificial Intelligence included both a Proposal for a Regulation laying down harmonised rules on artificial intelligence and the Coordinated Plan on Artificial Intelligence.

Balance of ‘Competences’ for negotiations

In May 2021, many European Parliament Committees decided to use their right to issue an opinion and to submit amendment suggestions as part of standard negotiation procedures for the AI Act, as well as to seek exclusive or shared ‘competences’, which refers to representation in the decision-making sphere through, e.g., holding the Rapporteur position. In other words, ‘competences’ refers to the lines of responsibility and decision-making power of the Committees and the individuals leading them, and Rapporteurs have more responsibility and influence, of course, than regular Committee members.

The Parliamentary Committees include the European Parliament’s Committee on Legal Affairs (JURI), the Committee on Civil Liberties, Justice and Home Affairs (LIBE), the Committee on Culture and Education (CULT), and the Committee on Industry, Research and Energy (ITRE), among others. The feedback period took place from 26th April 2021 until 6th August 2021. Although the Commission received 133 unique pieces of feedback by August 2021, the decision was made after the first phases to stick to the original proposal draft, i.e. the text that had been published the previous April. This is the text that was then put to the democratic process with Committees, where subsequent amendment discussions over the next two years surrounded what the ‘scope’ of the legislation would be; what would be considered ‘high-risk’ in the categorisation list; and whether companies would need to carry out independent conformity assessments or more in-house assessments not requiring the external measure.

In June 2021, the Committee on the Internal Market and Consumer Protection (IMCO) appointed Brando Benifei, of the Group of the Progressive Alliance of Socialists and Democrats in the European Parliament (S&D, Italy), as the AI Act Rapporteur, via the Parliament’s standard internal mechanism and based on election results. In an interview between the current author and policy adviser Filip Molnar, who worked for a Czech MEP during this period, the view was expressed that Benifei was not seen as a compromise candidate, and that this was beneficial in creating pressure for a more inclusive division of competences across European Parliament Committees, which represent specific areas of the population. The idea of having two leading Committees, i.e., another beyond the IMCO, and of seeking two co-rapporteurs/co-legislators, was discussed.

Between September and November 2021, the discussion on competences continued. Some Committees started to appoint their own ‘opinion rapporteurs’, but not all did, since some have a tradition of waiting for the ‘competences’ to be settled before making that decision. For example, CULT appointed an opinion rapporteur in July 2021; ENVI in September; TRAN in November 2021; and JURI and ITRE in January 2022.

In the end, the AI Act file was assigned to the IMCO and to LIBE to manage, which means that specific dimensions to do with markets and consumers (IMCO), and concepts surrounding justice (LIBE), were prioritised when considering what is at stake with the influx of AI into European societies. With Dragoș Tudorache as LIBE rapporteur, things started to move a bit faster, since he was known to be more willing to listen to industry arguments than Benifei was perceived to be (given their respective party affiliations), but he was also seen as a political counterbalance to Benifei. This is interesting when considering the balance across ‘regulation’ and ‘development’ orientations as we have organised them in these AIPOWW Case Studies. In any case, workers were not prioritised in thinking about, and allocations for, which Committees would lead this process and which Rapporteurs would hold competences.

From April 2021 to the end of 2023, the Parliament worked to adopt its negotiation position through a period of implementation phases and, in the final stages, a series of Trilogues. These stages involve negotiations and meetings across Committees, where suggested changes to the original text are discussed. The Council of the European Union was also part of the same process, and published various reports indicating how its position on the AI Act text was developing. In December 2022, the Council of the European Union adopted its general approach and compromise text (Legislative Train Schedule). The Council’s text, inter alia:

  • narrows down the definition to systems developed through machine learning approaches and logic- and knowledge-based approaches;
  • extends to private actors the prohibition on using AI for social scoring; 
  • adds a horizontal layer on top of the high-risk classification to ensure that AI systems that are not likely to cause serious fundamental rights violations or other significant risks are not captured;
  • clarifies the requirements for high-risk AI systems;
  • adds new provisions to account for situations where AI systems can be used for many different purposes (general purpose AI);
  • clarifies the scope of the AI act (e.g. explicit exclusion of national security, defence, and military purposes from the scope of the AI Act) and provisions relating to law enforcement authorities;
  • simplifies the compliance framework for the AI Act;
  • adds new provisions to increase transparency and allow users' complaints;
  • substantially modifies the provisions concerning measures in support of innovation (e.g. AI regulatory sandboxes).

The European Parliament then voted on a final text, which consolidated the 3,000 suggested amendments put forward by the internal Committees and the text prepared in February by the co-rapporteurs. After this point, the process entered its final phase, with the intention that the AI Act would become law by the summer of 2023.

Notably, on 15th February 2023, an intensive ‘marathon’ discussion was held within the European Parliament to iron out nearly all of the amendments that had been suggested by the Committees. Their amendment suggestions were addressed in the text shared by the co-rapporteurs on 24th February, to be voted upon across the Committees, so that the Parliament would be able to release its common position (at which stage the final trilogue could occur). Suggested changes would involve a requirement for testing of AI systems to consider ‘intended purpose and the reasonably foreseeable misuse… [emphasis by current author] categories that should be given particular consideration in assessing risk have been limited to vulnerable groups and children’ (Euractiv 2023). The text shared by the co-rapporteurs maintained a devised ‘Fundamental rights impact assessment’, which would be required for high-risk areas. The text at that time also placed an emphasis on the ban on social scoring and extended this to private companies. New text was furthermore added requiring authorities who establish sandboxes to actively supervise developers of high-risk systems, to facilitate and ensure compliance after the sandbox testing process.

Ex Ante position and the emergence of LLMs

The ex ante position, which emphasises that ‘intended purpose’ must be taken into account when assessing the future use of AI systems, has been debated throughout the course of the AI Act’s text deliberations. The issue with this position is that AI, definitively, is not always predictable. ‘Foreseeable misuse’, in fact, may be impossible to predict, because technology develops incredibly quickly, and open source and free software advocates may see the requirement to define a path for a system’s use and application as contradicting both the development of AI and the liberties and freedoms that technology should permit. Nonetheless, the Parliament’s suggested amendments reflect the complexity of attempts to define ‘high-risk’ and, in so doing, to attempt to ensure the protection of fundamental rights and promote social justice.

In February 2023, the European Parliament co-rapporteurs (Brando Benifei and Dragoș Tudorache) worked carefully to identify the list of AI uses that were seen to pose risks, the practices that should be prohibited, and definitions surrounding key concepts in the draft regulation. They unveiled the final text to be voted on 26/05/23, which outlined areas of practice that would be considered high-risk. Helpfully for the world of work, the areas of high-risk suggested here included algorithms assisting decisions related to ‘the initiation, establishment, implementation or termination of an employment relation, notably for allocating personalised tasks or monitoring compliance with workplace rules’ (Euractiv 2023). These provide some good ways forward for thinking about how workers can be protected by new regulation.

One unforeseen disruption to the AI Act process was the launch of large language models (LLMs), which led to emergency meetings across the decision-making bodies for the AI Act. I have outlined this pivotal moment in the Development section below because it illuminates the impact that business innovation had during this regulatory phase. The text, with amendments and negotiations incorporated, went to the plenary vote on 13 March 2024. On this date, the European Parliament voted to adopt it. Then, on 16 April, the Parliament published its ‘corrigendum’, in which various details that were not seen as accurate within the text were corrected. The IMCO reviewed these. The key corrections that had implications for the world of work are discussed in the Development section of the current Case Study, below.

Then on 21st May 2024, the Council of the European Union voted to approve the AI Act. On 12th July, the final text of the AI Act was published in the EU Official Journal. The next steps are as follows:

  • February 2025: Chapters I (general provisions) & II (prohibited AI systems) will apply;
  • August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Article 78 (confidentiality) will apply, except for Article 101 (fines for general purpose AI providers);
  • August 2026: the whole AI Act will apply, except for Article 6(1) & corresponding obligations (one of the categories of high-risk AI systems);
  • August 2027: Article 6(1) & corresponding obligations will apply. (Jarovsky 2024)

The final AI Act Regulation is fairly strong in its discussion of the risks posed to workers when AI systems are used in working environments, and encourages some protections. The introductory text, Recital (57), summarises these:

(57) AI systems used in employment, workers management and access to self-employment, in particular for the recruitment and selection of persons, for making decisions affecting terms of the work-related relationship, promotion and termination of work-related contractual relationships, for allocating tasks on the basis of individual behaviour, personal traits or characteristics and for monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may have an appreciable impact on future career prospects, livelihoods of those persons and workers’ rights. Relevant work-related contractual relationships should, in a meaningful manner, involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of such persons may also undermine their fundamental rights to data protection and privacy

AI systems used in employment environments are to be listed as high-risk, meaning they must go through a period of testing in a sandbox environment before being used. There is also emphasis on the responsibility of deployers of high-risk AI systems to provide information to workers about this (AI Act Introduction, 92). While it is not yet clear how different the AI Act is from existing EU law surrounding worker protections, the AI Act does emphasise the risks of deploying AI systems at work, which is a progressive gesture.
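
To make the structure of this classification concrete, the following is a minimal sketch, for analysis only, encoding the employment-related uses that Recital 57 names as high-risk. The set entries and the function are our own simplification of the Recital’s wording, not an operative legal test.

```python
# Illustrative sketch only: paraphrases Recital 57's employment-related
# high-risk uses as data; not an operative legal test.
EMPLOYMENT_HIGH_RISK_USES = {
    "recruitment and selection of persons",
    "decisions affecting terms of the work-related relationship",
    "promotion or termination of work-related contractual relationships",
    "task allocation based on individual behaviour or personal traits",
    "monitoring or evaluation of persons in work-related relationships",
}

def is_employment_high_risk(use_case: str) -> bool:
    """True if the described use matches one of the Recital 57 categories."""
    return use_case in EMPLOYMENT_HIGH_RISK_USES

print(is_employment_high_risk(
    "monitoring or evaluation of persons in work-related relationships"))  # True
```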

In the following section, I turn to the Development aspects of EU AI integration, looking at the extent to which markets, business and innovation are emphasised in this process, as well as how business activities and innovation, in the form of the introduction of large language models (LLMs) and chatbots, impacted this process. My select examples throw light on the balance across fundamental rights and business orientations.

II. Development

This section of the current Case Study focusses on the ways in which ‘development’ of AI has occurred in the EU. Specific regulatory and legislative gestures have prioritised ‘development’, and business has had an impact on the way AI will or can be developed across the EU, notably with the introduction of large language models (LLMs).

The important White Paper On Artificial Intelligence: A European Approach to Excellence and Trust (or ‘White Paper on AI’), discussed in the Regulation section above, in fact actively promoted the uptake and development of AI. The explicit gestures at addressing ‘risks’ associated with the uptake of AI have been foundational to every phase of regulatory consultation and identification, and were balanced with attention to both market fundamentals and democratic principles, which had been mainstreamed by the Treaty on the Functioning of the European Union (TFEU) in 2009. Prepared by the AI HLEG group of experts, the Policy and investment recommendations for trustworthy AI were published just after the Ethics Guidelines for Trustworthy AI, which had been published on 8th April 2019. The key take-aways from the ‘Policy and investment’ document are:

  • the need for education in human skills and ongoing research to facilitate a good understanding of AI;
  • safeguards from adverse impacts;
  • stakeholder involvement;
  • the development of a physical infrastructure to enable data sharing and use, as well as interoperability, for a European data economy;
  • the securing of a Single European Market for Trustworthy AI;
  • the adoption of a ‘risk-based governance approach’ to AI with the appropriate regulatory framework;
  • the stimulation of an open and lucrative investment environment; and
  • a ‘holistic way of working’ that combines a ten-year vision with a rolling action plan, so that, preferably, all AI regulation becomes a long-term and durable strategy with annual updates and continuous monitoring.

Support for innovation in the AI Act

Here I outline specific moments along the AI Act process to identify where support for the development and innovation of AI was promoted. Firstly, the Committee on Legal Affairs of the European Parliament released a draft opinion on the EU AI Act draft on 02/03/22, which demonstrated priorities around support for AI development. The many suggestions from the Committee included ‘clear[er] rules supporting the development of AI systems’; the establishment of a ‘High-Level Expert Group on Artificial Intelligence’ to oversee the development of ethical guidelines; narrowing the scope of what constitutes an AI system; and expanding the regulatory reach of the AI Act beyond simply systems ‘placed on the market, nor put into service, nor used in the Union.’

The next day, 03/03/22, the EU Parliamentary Committee on Industry, Research, and Energy also published its draft opinion on the AI Act. The short justification contained within the opinion identifies the committee’s overarching concerns with the bill at the time, namely, striking the right balance between ‘freedom and supervision’, promoting small and medium-sized enterprises’ (SMEs’) competitiveness, and issuing clear guidelines to businesses.

To these ends, the Rapporteur for this Committee, Eva Maydell, proposed the following four adjustments:

  1. Enhancing measures to support innovation, such as the ones foreseen for regulatory sandboxes, with a particular focus on start-ups and SMEs
  2. Providing a concise and internationally recognised definition of Artificial intelligence System and setting high but realistic standards for accuracy, robustness, cybersecurity, and data
  3. Encouraging the uptake of AI systems by industry by placing an emphasis on social trust and value chain responsibility
  4. Future-proofing the Act through better linkages to the green transition and possible changes in the industry, technology and power of AI

These early documents, whilst not legally binding during these phases, demonstrate the market and industry focus that many of the Committees in the European Parliament emphasised for AI. I now fast forward to the end point of the AI Act process, the Corrigendum.

Corrigendum

Now I turn to an important phase in the AI Act process, the Corrigendum. This stage happens after the text of a Regulation has been agreed, when those who were present at meetings correct the text to reflect the agreements and discussions that occurred. The European Parliament published this text on 19/04/24. It is a technical intervention designed to correct errors in how agreements across the trilogue bodies are represented throughout the text, but there is also a series of additions of new text, which some scholars argue amount to ‘material amendments’ (Bobek 2009) rather than mere corrections.

Throughout all stages of the writing, this Regulation emphasised a human-centric approach. The AI Act Corrigendum, however, inserts a series of phrases and terminology favourable toward AI ‘development’, where support for innovation, the involvement/opinion of the European Central Bank, public/private partnerships, SMEs and so on, is evident. The Corrigendum document therefore demonstrates a series of changes which advance the developmental aspects of how businesses will be supported, as well as regulated, as depicted in the AI Act, some of which have implications for the world of work.

One example is the text that the Corrigendum inserted into the first introductory paragraph, together with the insertion of the second introductory paragraph, which emphasises ‘boosting innovation’ [corrections and additions within the Regulation appear in italics in the original text]:

(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union, in accordance with Union values, to promote the uptake of human centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the ‘Charter’), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation. This Regulation ensures the free movement, cross-border, of AI-based goods and services, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.

(2) This Regulation should be applied in accordance with the values of the Union enshrined as in the Charter, facilitating the protection of natural persons, undertakings, democracy, the rule of law and environmental protection, while boosting innovation and employment and making the Union a leader in the uptake of trustworthy AI

Inserted text places new weight toward the innovation and development of AI systems in a series of further locations throughout the Corrigendum, where introductory paragraph (3) states:

(3) A consistent and high level of protection throughout the Union should therefore be ensured in order to achieve trustworthy AI, while divergences hampering the free circulation, innovation, deployment and the uptake of AI systems and related products and services within the internal market should be prevented…

In total, I noted 16 additions to the Introductory description (1) – (172), which emphasise a market focus, product development, SME and microenterprise focus, and innovation for AI systems.

The next item updated in the Corrigendum which has implications for the world of work is the advanced discussion of the participation of the open source software (OSS) community. The original text as published stated that developers in the OSS community *would not be* exempted, but the language should have read that the OSS community *would be* exempted, except from Art 5 (Prohibited AI Practices) and Art 50 (Transparency Obligations); this was picked up in the Corrigendum put forward by Parliament (Article 1, 12). There are issues surrounding these exemptions, for example it is more difficult to identify ‘fake news’ with OSS generators, and there are already attempts to circumvent this by ensuring OSS systems publish model architecture. This correction means that OSS workers will have fewer restraints on product development than others, which looks good in principle. However, OSS workers have fewer protections than workers with formal recognition in law, and this exemption potentially puts workers in these communities at further risk.

The next regulatory item I noted in the Parliament’s Corrigendum relates to terminology and definitions of ‘biometric data’ within the AI Act. The corrections (which appear in the final version) provide information about how biometric data can be used in a more detailed way than the earlier versions, as follows: ‘Biometric data can allow for the authentication, identification or categorisation of natural persons and for the recognition of emotions of natural persons.’ The first reading did not contain this granulated breakdown of the activities that can be carried out with the use of biometric data, from identification, to verification, to categorisation; its inclusion is a positive development that potentially provides workers with protections from extensive biometric surveillance.

Profiling based on biometric data was one of the main sticking points in negotiations surrounding the AI Act process, and this emphasis on the three practices of ‘authentication, identification or categorisation’, and the illumination and forbidding of ‘the recognition of emotions’, is significant and will be important for the world of work. Workers have already faced discrimination due to AI usage (Kiliç and Kahyaoğlu, 2024; Rhue, 2019; Pasquale, 2024; Boussaurd, 2023). While emotion recognition was named in the list of Definitions in earlier drafts, the Corrigendum includes a significant amount of new text outlining the risks of related practices and restrictions.

The emphasis on restrictions to biometric data gathering in the AI Act is a positive outcome with significant implications for the world of work. Several biometric applications are listed in Art 5 (the list of prohibited AI practices), including, in Art 5(1)(f), the ‘placing on the market a service with the purpose of inferring emotions of a natural person in the workplace and in educational environments, except where it is in place for medical or safety reasons’. In other words, the AI Act bars emotion recognition in workplaces and the educational system unless it serves a medical or a safety purpose.

Definitions of AI systems

This section looks at the evolving definitions of AI as they appeared in suggested versions of the AI Act. The definitions demonstrate different approaches to the concept and, from our analysis, they became increasingly committed to recognising AI’s autonomous competences. The ‘development’ of AI is already clearly defined in the computing literature, where IBM’s Data and AI team makes it clear that ‘Machine learning is a subset of AI. Deep learning is a subfield of machine learning, and neural networks make up the backbone of deep learning algorithms’ (IBM). There are three main categories within AI, which are cumulative in meaning:

  • Artificial Narrow Intelligence (ANI)
  • Artificial General Intelligence (AGI)
  • Artificial Super Intelligence (ASI) (IBM)

ANI is the weakest type, where AI-augmented chatbots and virtual assistants such as Siri can complete specific tasks. AGI is where machinic intelligence is at the level of a human’s, and ASI is where AI is more intelligent than humans. Interpreting human emotions would be one of the functionalities of AGI or ASI. These elements are understood here to have a developmental dimension, as they are seen in competition with human intelligence, where the intention is to surpass human competence.
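
For technically minded readers, the two hierarchies described above can be sketched in a few lines of code. Everything here, including the names and the checks, is our own illustration of the IBM taxonomy, not terminology from the AI Act.

```python
from enum import IntEnum

# Cumulative capability levels: each level is understood to include the
# competences of the levels below it.
class AICapability(IntEnum):
    ANI = 1  # narrow: completes specific tasks (e.g. chatbots, Siri)
    AGI = 2  # general: machinic intelligence at the level of a human's
    ASI = 3  # super: more intelligent than humans

def can_interpret_emotions(level: AICapability) -> bool:
    """Emotion interpretation is attributed above to AGI or ASI, not ANI."""
    return level >= AICapability.AGI

# Subset relation from the IBM description: deep learning sits inside
# machine learning, which sits inside AI.
FIELD_PARENT = {"deep learning": "machine learning", "machine learning": "AI"}

def is_subfield_of(child: str, ancestor: str) -> bool:
    while child in FIELD_PARENT:
        child = FIELD_PARENT[child]
        if child == ancestor:
            return True
    return False

assert is_subfield_of("deep learning", "AI")
assert not can_interpret_emotions(AICapability.ANI)
```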

The first definition, in April 2021, was proposed by the EU Commission and written within Article 3 as follows:

'artificial intelligence system' (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;

Later, in November 2022, the Council of the European Union suggested this definition:

'artificial intelligence system' (AI system) means a system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts;

The European Parliament, in June 2023, then suggested this definition:

'artificial intelligence system' (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments.

In November 2023, there was discussion of adopting the OECD definition included in the OECD Council’s Recommendation on AI, but finally, the legislature adopted this definition:

'AI system' means a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The final definition is better than previous suggestions because it recognises that AI is the overarching system and that other functionalities are subsets.

There has been confusion over definitions in part because, as was seen in the rapid market rollout of ChatGPT, the functionalities of a technology are known to one degree but not entirely known regarding intended and future use; ChatGPT was ultimately categorised as a GPAI system, with its own specific area of regulation within the final AI Act. Regulation needs to be adaptable, and the way the definition is written takes the development aspect into account, i.e. that a system may ‘exhibit adaptiveness after deployment’ (AI Act Art. 3 Definitions). In these ways, the development of AI has been impacted by corporate innovations with transversal impact on policy.
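
To show how the elements of the final Article 3 definition fit together, here is a minimal, non-authoritative sketch; the dataclass fields and the checking function are our own analytical decomposition of the definition’s wording, not anything defined by the Act.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    machine_based: bool              # 'a machine-based system'
    autonomy: float                  # 'varying levels of autonomy' (0.0-1.0 here)
    adapts_after_deployment: bool    # 'may exhibit adaptiveness after deployment'
    infers_outputs_from_input: bool  # 'infers, from the input it receives,
                                     # how to generate outputs'
    influences_environments: bool    # outputs 'can influence physical or
                                     # virtual environments'

def meets_article_3_definition(s: SystemProfile) -> bool:
    # Adaptiveness is optional in the wording ('may exhibit'), so it is
    # deliberately not required; the other elements are cumulative.
    return (s.machine_based
            and s.autonomy > 0.0
            and s.infers_outputs_from_input
            and s.influences_environments)

# Example: a workplace scheduling recommender that learns online.
recommender = SystemProfile(True, 0.4, True, True, True)
print(meets_article_3_definition(recommender))  # True
```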

Large language models (LLMs) interrupt the AI Act process

A disruption to the regulatory process emerged with the seemingly sudden introduction of large language model (LLM) natural language processing (NLP) systems to the market, in the form of a chatbot called ChatGPT, on 30/11/22. GPT-3 had been launched to the public as a beta version in June 2020, but the technology really took form when the chatbot was released in November 2022, and caused significant waves in the AI Act process. As the AI Act is intended to deal specifically with risk and the real possibility that AI can lead to harm, questions arose as to the extent to which LLMs may cause harm, in part because they can be used for many purposes rather than a single task alone, which potentially put them into the category of a ‘general purpose AI’ (GPAI) system.

By December 2022, a definition of GPAI systems was inserted into the text of the AI Act as follows: “[GPAI is an] AI system that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of tasks”. The European Parliament co-rapporteurs worked quickly over that period to produce a document addressing the ‘sensitive topic’ of GPAI, the first draft of which was shared on 14/03/23 (Bertuzzi 2023). The document indicated that systems designed for a specific set of tasks and applications would not be considered GPAI. Fast forward to the final text of the AI Act, and LLMs are defined as GPAI systems, because they can be used for a variety of reasons rather than a specified and singular reason. The focus on ‘intended usage’ became increasingly difficult for the text (Boine and Rolnick, 2023), but the final version of the AI Act outlines the responsibilities of GPAI providers, where GPAI systems are regulated separately from the other categories of AI systems (Hunton, 2024).

III. Governance

This section turns to look at how civil society, trade unions and soft law constellate to impact the way AI is governed in the EU. There are a series of stages leading up to the AI Act which demonstrate the intentional inclusion of civil society into this process, which I discuss first. Then, I look at the ways that EU trade unions have been involved in the integration of AI.

EU AI Act Soft Law and Consultation with Civil Society

A series of discussions and meetings occurred from 2018 to 2024 surrounding the EU AI Act, where the European Parliament and the Council of the European Union read and provided lists of proposed amendments to the AI Act text set out by the Commission. Simultaneously with the high-level debates occurring across committees within the Parliament and the Council of the European Union, civil society was consulted, and hundreds of commentary pieces and academic papers have been published by the academic community. This process is probably most similar to the Canadian approach among the jurisdictions covered within this AI Observatory report.

In the first phases of AI regulation and governance, a soft law approach was taken within the EU. The soft law approach was seen to have only limited benefit, and risked failing to establish international norms around AI regulation. Therefore, to horizontally regulate AI within the EU, which means that the policies agreed upon will be required for implementation in each EU Member State, a hard law approach was seen to be needed. Indeed, the AI Act is the first legal framework of its kind to ‘attempt to enact a horizontal regulation of AI’.

Very early on, policymakers expressed a commitment to a ‘human centric’ approach. From its first inclinations, there has been a commitment to a focus on the EU Charter of Fundamental Rights and on occupational safety and health (OSH) risks and benefits. There are some possible issues arising, which we outline, as well as possible benefits for the world of work. Toward the beginning of discussions, several cautionary papers were published. Of specific interest here, as with the other themes emerging from our ILO Work in the Digital Economy (WIDE) Observatory Reports (the platform economy and remote work), are the implications that the governance, development and regulation of AI will have for workers.

Ethics Guidelines for Trustworthy AI

The first iteration of what is now the AI Act was manifest in a multistakeholder forum, organised in June 2018. At that time, the European AI Alliance was launched as part of what was called the European Strategy on Artificial Intelligence framework. The European AI Alliance was formed to give guidance and to assist the High-Level Expert Group on AI (AI HLEG) in policy development, from a soft law orientation. The Alliance attracted more than 4,000 members and ran a series of events and talking shops. In December 2018, an open consultation process began with the publication of a draft of the Ethics Guidelines for Trustworthy Artificial Intelligence, which was prepared by the AI HLEG. More than 500 comments were fielded during this process, via consultations on this draft.

On 8th April 2019, the High-Level Expert Group on AI distilled comments and inputs from the stakeholders and released the final version of the Guidelines, which begin by indicating that ‘trustworthy AI should be’:

  1. Lawful: respecting all applicable laws and regulations.
  2. Ethical: respecting ethical principles and values.
  3. Robust: both from a technical perspective while taking into account its social environment.

These early sets of argumentation and outreach to civil society reflect the EU’s original approach to the governance processes that should ideally surround any effort to ensure a human-centred, ethical adoption of AI into society. None of the discussion papers published during these phases is identically presented in the later AI Act proposal, but it is worth reviewing the premise from which discussions advanced. Now, we look at the early phases of identifying the best route for AI development within the EU process.

A series of Guidelines, a Definition, and Assessment documents were published between 2018 and 2020, during which time over 1,200 stakeholders from business and industry, academia, citizens, and civil society were consulted. Then, in February 2020, the European Commission published the White Paper On Artificial Intelligence: A European Approach to Excellence and Trust (or ‘White Paper on AI’), which brought together the themes of the consultation phase, presented the elements of a developing European Commission-led AI regulatory framework, and announced and outlined regulatory commitments for action on the horizon. Some key points from this White Paper on AI are worth going into in detail, because it set the basis for specific values upheld, and regulatory requirements agreed, in the following stages of the AI Act.

At the time of publication (February 2020), the White Paper on AI stated that there had been insufficient joined-up thinking around AI across Member States, and too many different approaches, stating that:

Developers and deployers of AI are already subject to European legislation on fundamental rights (e.g. data protection, privacy, non-discrimination), consumer protection, and product safety and liability rules. Consumers expect the same level of safety and respect of their rights whether or not a product or a system relies on AI. However, some specific features of AI (e.g. opacity) can make the application and enforcement of this legislation more difficult. For this reason, there is a need to examine whether current legislation is able to address the risks of AI and can be effectively enforced, whether adaptations of the legislation are needed, or whether new legislation is needed. (p10)

For example, Malta had introduced a ‘voluntary certification scheme’ for AI, and Germany had called for a ‘five-level risk-based system of regulation that would go from no regulation for the most innocuous AI systems to a complete ban for the most dangerous ones’, an approach which is, in fact, reflected in the current AI Act. Denmark had launched a prototype of a ‘Data Ethics Seal’. There was a real risk, the White Paper on AI noted, of fragmentation of the internal market, ‘which would undermine the objectives of trust, legal certainty and market uptake’ (p10).

While endorsing a market focus, the White Paper on AI also endorsed a regulatory framework for ‘trustworthy AI’, which it noted would protect European citizens and help to ‘create a frictionless internal market for the further development and uptake of AI as well as strengthening Europe’s industrial basis in AI’ (p10). This foundational Paper suggested ex ante conformity assessments. This means that products and services containing AI systems should be tested before being released onto the market or adopted by users, where ‘prior conformity assessment would be necessary to verify and ensure that certain of the above mentioned mandatory requirements applicable to high-risk applications…are complied with’ (italics by current author) (p23). The risk-based approach is characteristic of the EU’s approach, where AI systems deemed ‘high-risk’ would need to go through specific procedures to determine market eligibility. Later publications gave clearer guidelines for how, for example, ‘Sandboxes’ should be set up nationally, where products would be tested in simulated environments to determine their correct ‘risk’ categorisation, and therefore their eligibility for release onto the EU market.

A risk-based approach became increasingly solidified at these early stages. The use of ‘risk’ for judging product status is fascinating, because ‘risk’ is a concept originally coming out of safety and health research and legislation. In the current and final version of the AI Act, the definition of ‘risk’ is ‘the combination of the probability of an occurrence of harm and the severity of that harm’ (AI Act Art 3). The earlier White Paper on AI, in fact, explicitly mentions consumers, citizens and workers in its discussion of potential risks. The Paper emphasises that human decision-making is always at risk of discrimination and bias, but biases resulting from AI-augmented technologies can ‘have a much larger effect’, e.g. when ‘the AI system “learns” while in operation’ (p11). Homing in on workers, the White Paper further states that: ‘there is a potential risk that AI may be used, in breach of EU data protection and other rules, by state authorities or other entities for mass surveillance and by employers to observe how their employees behave’ (p11).
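
The Act defines ‘risk’ qualitatively and does not prescribe a numeric formula. Still, a minimal sketch of how a ‘probability combined with severity’ definition is commonly operationalised in safety and health practice may clarify the concept; the 1-5 scales, the multiplication, and the band thresholds below are illustrative assumptions, not values from the AI Act.

```python
def risk_score(probability: int, severity: int) -> int:
    """Combine probability and severity of harm, each rated on a 1-5 scale."""
    assert 1 <= probability <= 5 and 1 <= severity <= 5
    return probability * severity  # ranges from 1 (negligible) to 25 (critical)

def risk_band(score: int) -> str:
    """Map a combined score onto an illustrative low/medium/high band."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# e.g. workplace emotion monitoring: moderate probability of harm (3),
# high severity for the affected worker (5).
print(risk_band(risk_score(3, 5)))  # "high"
```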

The reason why regulation was increasingly seen to be necessary, this White Paper on AI noted, was due to the risks to fundamental rights surrounding personal data and discrimination on the one hand; and safety and liability issues, on the other. To uphold the ‘EU’s values’ and provide an ‘ecosystem of excellence’ that should provide protections against risks for fundamental rights and liability and safety, the following aspects and commitments were listed as necessary to foster joined-up AI regulation:

  • A. Working with member states;
  • B. Focusing the efforts of the research and innovation community;
  • C. Skills;
  • D. Focus on small and medium enterprises (SMEs);
  • E. Partnership with the private sector;
  • F. Promoting the adoption of AI by the public sector;
  • G. Securing access to data and computing infrastructures; and
  • H. International aspects.

This 2020 White Paper on AI goes on to put forward a requirement for ex ante conformity assessments, where products and services with AI systems should be tested before being released onto the market or adopted by users, to ensure they do not pose excessive risks to people.

What became illuminated in the then-imminent AI Act is the focus on ‘risk’ as an important indicator for determining how to implement AI. Importantly, this White Paper actively promotes the uptake of AI, embracing a market-driven approach. Nonetheless, the explicit gestures at addressing the ‘risks’ that the uptake of AI poses for humans have been foundational to every phase of regulatory consultation and identification, which provides a good balance between corporate and market-led innovation and a social focus for regulation.

Unions across the EU have generally pushed for the AI Act to include stronger protections for workers, greater transparency, and more involvement of workers in the governance of AI systems. Their calls for legislation tend to focus on ways to ensure that AI is used in ways that benefit workers, rather than exploiting, posing risks to, or replacing them.

European Trade Union Confederation (ETUC). The ETUC represents 45 million workers across Europe. They have been vocal about the need for the AI Act to include robust worker protections. This includes calls for transparency and accountability in AI decision-making processes, especially in hiring, monitoring, and evaluating workers. They have advocated for workers to be informed and consulted about AI systems used in their working environments, where human oversight in AI decision-making, particularly where decisions have significant impacts on workers’ lives and careers, is key. The ETUC has also emphasised the need for worker representatives to be involved in the development, implementation, and monitoring of AI systems, and urged that risk assessments must be undertaken before new technologies are introduced.

UNI Global Union. UNI Global Union represents workers in the services sector, and also responded to the AI Act with a focus on protecting workers from risks associated with AI. They advocate for algorithmic fairness, where AI systems should not perpetuate or exacerbate existing biases in recruitment, performance evaluation, and disciplinary measures. UNI showed concern about the invasive use of AI for employee surveillance and called for strict regulations to protect workers’ privacy. Further to this, UNI raised alarms about AI’s potential to displace jobs and advocated for the AI Act to include provisions for job retraining and reskilling where AI replaces jobs or automates tasks.

IndustriAll European Trade Union. IndustriAll, representing workers in manufacturing, energy, and mining, highlighted the need for the AI Act to address the specific challenges in their sectors. This includes job displacement, where IndustriAll showed concern about AI leading to job losses in manufacturing and called for measures to ensure that the introduction of AI does not result in mass layoffs. Given the kinds of industries that this union represents, where worker safety is of utmost importance, it makes sense that they emphasised the need for any AI system to enhance, rather than undermine, occupational safety and health. They argued for strict guidelines on how AI systems should be deployed in hazardous environments. IndustriAll further called for mandatory consultation with workers and their representatives when AI systems are introduced in the workplace.

European Federation of Public Service Unions (EPSU). EPSU, which represents workers in the public sector, showed concern about the use of AI in public services and its impact on employment and service quality. EPSU is wary about the increasing automation of public sector jobs and called for the AI Act to ensure that AI does not lead to a decline in the quality of public services. EPSU further advocated for the ethical use of AI in public services, ensuring that AI systems are used to support, not replace, human workers. With regard to training and development, EPSU has called for the AI Act to include provisions for ongoing training and professional development for public sector workers to adapt to AI technologies.

European Transport Workers' Federation (ETF). The ETF, representing transport workers, has been critical of the AI Act for not adequately addressing the specific challenges faced by workers in the transport sector. The ETF are particularly concerned about the impact of AI on job security in the transport sector, including the potential for AI to replace jobs in logistics, driving, and other transport-related fields. They emphasise the need for AI systems in transport to be rigorously tested for safety, ensuring that they do not put workers or the public at risk. This union also advocates for mandatory consultation with transport workers and their unions before AI systems are implemented in their sector.

European Federation of Food, Agriculture and Tourism Trade Unions (EFFAT). EFFAT represents workers in food, agriculture, and tourism, and has also responded to the AI Act with a focus on protecting workers from potential exploitation. EFFAT is concerned about the use of AI for worker surveillance in industries like agriculture and hospitality, where monitoring could lead to exploitative practices. They have expressed concerns about AI replacing low-wage jobs in their sectors and have called for safeguards to protect employment. EFFAT stresses the importance of involving workers and their unions in discussions about AI deployment in their industries.

ver.di (Germany). ver.di, one of Germany’s largest trade unions, has been actively involved in the discourse on AI regulation, emphasising workers’ rights. The union has called for greater transparency in algorithmic decision-making processes in workplaces and the right for workers to understand and contest decisions made by AI. It has further advocated for collective bargaining agreements to include provisions on the use of AI in the workplace, and stresses the need for robust data protection measures to prevent the misuse of workers’ data by AI systems.