Development
This section of the case study focuses on the ways in which the ‘development’ of AI has occurred in the EU. Specific regulatory and legislative gestures have prioritised ‘development’, and business has shaped the way AI will or can be developed across the EU, notably with the introduction of large language models (LLMs).
The important White Paper on Artificial Intelligence: A European Approach to Excellence and Trust (the ‘White Paper on AI’), discussed in the Regulation section above, in fact actively promoted the uptake and development of AI. The explicit gestures at addressing ‘risks’ associated with the uptake of AI, foundational to every phase of regulatory consultation, were balanced with attention to both market fundamentals and democratic principles, which had been mainstreamed by the Treaty on the Functioning of the European Union (TFEU) in 2009. Prepared by the AI HLEG group of experts, the Policy and investment recommendations for trustworthy AI followed the group’s Ethics Guidelines for Trustworthy AI, which had been published on 8 April 2019. The key take-aways from the ‘Policy and investment’ document are:
- the need for education in human skills and ongoing research to facilitate a good understanding of AI;
- safeguards from adverse impacts;
- stakeholder involvement;
- the development of a physical infrastructure to enable data sharing and use, as well as interoperability, for a European data economy;
- the securing of a Single European Market for Trustworthy AI;
- the adoption of a ‘risk-based governance approach’ to AI with an appropriate regulatory framework;
- the stimulation of an open and lucrative investment environment; and
- a ‘holistic way of working’ that combines a ten-year vision with a rolling action plan, so that, preferably, all AI regulation becomes a long-term and durable strategy with annual updates and continuous monitoring.
Support for innovation in the AI Act
Here I outline specific moments in the AI Act process where support for the development and innovation of AI was promoted. Firstly, the Committee on Legal Affairs of the European Parliament released a draft opinion on the EU AI Act draft on 2 March 2022, which demonstrated its priorities around support for AI development. The Committee’s many suggestions included ‘clear[er] rules supporting the development of AI systems’; the establishment of a ‘High-Level Expert Group on Artificial Intelligence’ to oversee the development of ethical guidelines; narrowing the scope of what constitutes an AI system; and expanding the regulatory reach of the AI Act to cover systems even where they are ‘[neither] placed on the market, nor put into service, nor used in the Union.’
The next day, 3 March 2022, the EU Parliamentary Committee on Industry, Research and Energy also published its draft opinion on the AI Act. The short justification contained within the opinion identifies the Committee’s overarching concerns with the bill at the time, namely striking the right balance between ‘freedom and supervision’, promoting the competitiveness of small and medium-sized enterprises (SMEs), and issuing clear guidelines to businesses.
To these ends, the Rapporteur for this Committee, Eva Maydell, proposed the following four adjustments:
- Enhancing measures to support innovation, such as the ones foreseen for regulatory sandboxes, with a particular focus on start-ups and SMEs
- Providing a concise and internationally recognised definition of an artificial intelligence system and setting high but realistic standards for accuracy, robustness, cybersecurity, and data
- Encouraging the uptake of AI systems by industry by placing an emphasis on social trust and value chain responsibility
- Future-proofing the Act through better linkages to the green transition and possible changes in the industry, technology and power of AI
These early documents, whilst not legally binding during these phases, demonstrate the market and industry focus that many of the Committees in the European Parliament emphasised for AI. I now fast forward to the end point of the AI Act process: the Corrigendum.
Corrigendum
The Corrigendum is an important phase in the AI Act process: it occurs after the text of a Regulation has been published, when those who were present at the negotiations correct the text to reflect the agreements and discussions that occurred. The European Parliament published this text on 19 April 2024. It is a technical intervention designed to correct errors in how agreements across the trilogue bodies are represented throughout the text, but it also contains a series of additions of new text, which some scholars argue amount to ‘material amendments’ (Bobek 2009) rather than mere corrections.
Throughout all stages of drafting, the Regulation emphasised a human-centric approach. The Corrigendum, however, goes further, inserting a series of phrases and terminology favourable toward AI ‘development’: support for innovation, the involvement and opinion of the European Central Bank, public/private partnerships, SMEs, and so on. The Corrigendum thus demonstrates a series of changes that advance the developmental aspects of how businesses will be supported as well as regulated under the AI Act, some of which have implications for the world of work.
One example is in the opening recitals: the Corrigendum inserted new text into the first introductory paragraph, and inserted the second introductory paragraph in its entirety, which emphasises ‘boosting innovation’ [corrections and additions within the Regulation are in italics in the original text]:
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union, in accordance with Union values, to promote the uptake of human-centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the ‘Charter’), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation. This Regulation ensures the free movement, cross-border, of AI-based goods and services, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
(2) This Regulation should be applied in accordance with the values of the Union enshrined in the Charter, facilitating the protection of natural persons, undertakings, democracy, the rule of law and environmental protection, while boosting innovation and employment and making the Union a leader in the uptake of trustworthy AI.
Inserted text shifts the weighting further toward the innovation and development of AI systems in a series of further locations throughout the Corrigendum; for example, the third introductory paragraph states:
(3) A consistent and high level of protection throughout the Union should therefore be ensured in order to achieve trustworthy AI, while divergences hampering the free circulation, innovation, deployment and the uptake of AI systems and related products and services within the internal market should be prevented…
In total, I noted 16 additions to the introductory recitals, (1) to (172), which emphasise a market focus, product development, a focus on SMEs and microenterprises, and innovation for AI systems.
The next item updated in the Corrigendum, which has implications for the world of work, is the advanced discussion of the participation of the open source software (OSS) community. The original text as published stated that developers in the OSS community *would not be* exempted, but the language should have read that the OSS community *would be* exempted, except from Art 5 (Prohibited AI Practices) and Art 50 (Transparency Obligations); this was picked up in the Corrigendum put forward by Parliament (Article 1, 12). There are issues surrounding these exemptions: for example, it is more difficult to identify ‘fake news’ produced with OSS generators, and there are already attempts to address this by ensuring that OSS systems publish their model architecture. This correction means that OSS workers will face fewer restraints on product development than others, which looks good in principle. However, OSS workers have fewer protections than workers with formal recognition in law, and this exemption potentially puts workers in these communities at further risk.
The next regulatory item I noted in the Parliament’s Corrigendum relates to terminology and the definition of ‘biometric data’ within the AI Act. The corrections (which appear in the final version) set out how biometric data can be used in more detail than the earlier versions, as follows: ‘Biometric data can allow for the authentication, identification or categorisation of natural persons and for the recognition of emotions of natural persons.’ The first reading did not contain a granular breakdown of the activities that can be carried out with biometric data, from authentication, to identification, to categorisation; this is a positive development that potentially provides protections for workers from extensive biometric surveillance.
Profiling based on biometric data was one of the main sticking points in negotiations surrounding the AI Act process, and this emphasis on the three practices of ‘authentication, identification or categorisation’, together with the illumination, and forbidding, of ‘the recognition of emotions’, is significant, particularly for the world of work. Workers have already faced discrimination due to AI usage (Kiliç and Kahyaoğlu, 2024; Rhue, 2019; Pasquale, 2024; Boussaurd, 2023). While emotion recognition was named in the list of Definitions in earlier drafts, the Corrigendum includes a significant amount of new text outlining the risks of related practices and the associated restrictions.
The emphasis on restrictions to biometric data gathering in the AI Act is a positive outcome with significant implications for the world of work. Several biometric applications are listed in Art 5 (the list of prohibited AI practices), including the ‘placing on the market a service with the purpose of inferring emotions of a natural person in the workplace and in educational environments, except where it is in place for medical or safety reasons’. Specifically, Art 5(1)(f) prohibits the use of emotion recognition in workplaces and educational settings unless it serves a medical or safety purpose.
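To make the three practices concrete, the following is a minimal sketch in Python of how they differ computationally. The embedding vectors, the `cosine_similarity` helper and the decision threshold are illustrative assumptions, not anything drawn from the AI Act itself: verification compares a probe against one claimed identity, identification searches a gallery of many, and categorisation assigns a group label without establishing identity at all.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two biometric embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.8  # assumed decision threshold for a "match"

def verify(probe: np.ndarray, claimed_template: np.ndarray) -> bool:
    """Authentication/verification: 1:1, does the probe match ONE claimed identity?"""
    return cosine_similarity(probe, claimed_template) >= THRESHOLD

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray]) -> str | None:
    """Identification: 1:N, who among N enrolled people is the probe, if anyone?"""
    best_id, best_score = None, THRESHOLD
    for person_id, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id

def categorise(probe: np.ndarray, prototypes: dict[str, np.ndarray]) -> str:
    """Categorisation: assigns the probe to a group (an inferred attribute),
    without establishing who the person is at all."""
    return max(prototypes, key=lambda c: cosine_similarity(probe, prototypes[c]))
```

The legal distinctions track this technical gradient: identification and categorisation implicate progressively broader surveillance than a one-to-one verification check.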
Definitions of AI systems
This section looks at the evolving definitions of AI as they appeared in the suggested versions of the AI Act. The definitions demonstrate different approaches to the concept and, from my analysis, they became increasingly committed to recognising AI’s autonomous competences. The ‘development’ of AI is already clearly defined in the computing literature, where the IBM Data and AI Team makes it clear that ‘Machine learning is a subset of AI. Deep learning is a subfield of machine learning, and neural networks make up the backbone of deep learning algorithms’ (IBM). There are three main categories within AI, which are cumulative in meaning:
- Artificial Narrow Intelligence (ANI)
- Artificial General Intelligence (AGI)
- Artificial Super Intelligence (ASI) (IBM)
ANI is the weakest type, where AI-augmented chatbots and virtual assistants such as Siri can complete specific tasks. AGI is where machinic intelligence is at the level of a human’s, and ASI is where AI is more intelligent than humans. Interpreting human emotions would be one of the functionalities of AGI or ASI. These elements are understood here to have a developmental dimension, as they are seen in competition with human intelligence, where the intention is to surpass human competence.
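As a minimal illustration of the layering described in the IBM quotation above (a sketch under my own assumptions, not drawn from the cited IBM material), the following Python snippet implements machine learning, a model improving from data rather than from hand-coded rules, using a tiny neural network: the building block that deep learning stacks at much greater scale.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR, which no single linear rule can represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units: the "network" in neural network.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(5000):
    # Forward pass: compute predictions from the current weights.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error for each parameter.
    dp = (p - y) * p * (1 - p)
    dW2, db2 = h.T @ dp, dp.sum(axis=0)
    dh = dp @ W2.T * h * (1 - h)
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    # Gradient descent: the "learning" in machine learning.
    W1 -= learning_rate * dW1; b1 -= learning_rate * db1
    W2 -= learning_rate * dW2; b2 -= learning_rate * db2

# Predictions should approach [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```

Nothing in this snippet is ANI, AGI or ASI in itself; it simply shows why neural networks sit at the base of the hierarchy: the same learning mechanism, scaled up, underlies the systems the AI Act sets out to define.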
The Commission defined AI in Article 3 of its AI Act’s proposal (COM(2021) 206 final) as follows:
'artificial intelligence system' (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;
Later, in November 2022, the Council of the European Union suggested this definition:
'artificial intelligence system' (AI system) means a system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts;
The European Parliament, in June 2023, then suggested this definition:
'artificial intelligence system' (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments.
In November 2023, there was discussion of adopting the OECD definition included in the OECD Council’s Recommendation on AI, but the legislature finally adopted this definition:
'AI system' means a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
The final definition is better than the previous suggestions because it recognises that AI is the overarching system and that other functionalities are subsets of it.
There has been confusion over definitions in part because, as was seen in the rapid market rollout of ChatGPT, the functionalities of a technology may be known to one degree but not entirely known with regard to intended and future use; hence ChatGPT’s categorisation as a GPAI system, with its own specific area of regulation within the final AI Act. Regulation needs to be adaptable, and the definition as written takes the development aspect into account, i.e. that a system may ‘exhibit adaptiveness after deployment’ (AI Act, Art. 3 Definitions). In these ways, the development of AI has been shaped by corporate innovations with transversal impacts on policy.
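A minimal sketch of what ‘adaptiveness after deployment’ can look like in practice, assuming scikit-learn’s incremental `partial_fit` API and synthetic data invented purely for illustration: the model below keeps updating from the inputs it receives after it has been ‘deployed’, even as the underlying task drifts.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)

# Train a simple classifier before "deployment" on synthetic 2-D data.
X_train = rng.normal(size=(200, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

model = SGDClassifier(loss="log_loss")  # "log_loss" in recent scikit-learn
model.partial_fit(X_train, y_train, classes=np.array([0, 1]))

# After deployment: the pattern in the world drifts, and the system keeps
# adapting from the inputs it receives. This is the behaviour that the
# Art. 3 definition flags as "adaptiveness after deployment".
for _ in range(50):
    X_new = rng.normal(size=(10, 2))
    y_new = (X_new[:, 0] - X_new[:, 1] > 0).astype(int)  # drifted concept
    model.partial_fit(X_new, y_new)  # incremental update, no full retrain

X_test = rng.normal(size=(100, 2))
y_test = (X_test[:, 0] - X_test[:, 1] > 0).astype(int)
print(f"accuracy on the drifted task: {model.score(X_test, y_test):.2f}")
```

The regulatory point is that such a system’s behaviour at the moment of conformity assessment is not its behaviour for life, which is precisely why the final definition had to accommodate development after market placement.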
Large language models (LLMs) interrupt the AI Act process
A disruption to the regulatory process emerged with the seemingly sudden introduction of large language models (LLMs), natural language processing (NLP) systems that reached the market in the form of a chatbot called ChatGPT on 30 November 2022. GPT-3 had been launched to the public as a beta version in June 2020, but the technology really took form when the chatbot was released in November 2022, causing significant waves in the AI Act process. As the AI Act is intended to deal specifically with risk and the real possibility that AI can lead to harm, questions arose over the extent to which LLMs may cause harm, in part because they can be used for many purposes rather than a single task alone, which potentially put them into the category of a ‘general purpose AI’ (GPAI) system.
By December 2022, a definition of GPAI systems had been inserted into the text of the AI Act as follows: “[GPAI is an] AI system that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of tasks”. The European Parliament co-rapporteurs worked quickly over that period to produce a document addressing the ‘sensitive topic’ of GPAI, the first draft of which was shared on 14 March 2023 (Bertuzzi 2023). The document indicated that systems designed for a specific set of tasks and applications would not be considered GPAI. Fast forward to the final text of the AI Act, and LLMs are defined as GPAI systems because they can be used for a variety of purposes rather than one specified, singular purpose. The focus on ‘intended usage’ became increasingly difficult to sustain in the text (Boine and Rolnick, 2023), but the final version of the AI Act outlines the responsibilities of GPAI providers, with GPAI regulated separately from the other categories of AI systems (Hunton, 2024).
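The ‘generality of output’ that distinguishes GPAI from single-task systems can be sketched with the Hugging Face `transformers` library: one and the same text-generation model serves several tasks purely by changing the prompt. The model choice (`gpt2`) and the prompts are illustrative assumptions, and a model this small follows them only crudely; the point is that nothing in the interface ties the system to a single intended use.

```python
from transformers import pipeline  # Hugging Face transformers library

# One general-purpose model, many tasks: no part of the model is tied to a
# single "intended use", which is what pushes LLMs into the GPAI category.
generator = pipeline("text-generation", model="gpt2")  # illustrative small model

prompts = {
    "summarisation":  "Summarise: The AI Act regulates AI systems in the EU. Summary:",
    "translation":    "Translate to French: The system is safe. French:",
    "classification": "Is this review positive or negative? 'Great product!' Answer:",
}

for task, prompt in prompts.items():
    out = generator(prompt, max_new_tokens=20, do_sample=False)
    # The pipeline returns the prompt plus its continuation; print the latter.
    print(task, "->", out[0]["generated_text"][len(prompt):].strip())
```

This task-agnostic interface is exactly what made a regulation organised around ‘intended usage’ difficult to apply to LLMs, and why the final AI Act gives GPAI its own regime.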