AI Policy Observatory for the World of Work

Project overview

Artificial intelligence (AI) technologies are being used to manage, augment, or otherwise transform work. Algorithmic management and other data-driven systems are expected to help firms improve productivity and profitability, but they can also lead to unreasonable surveillance and other psychosocial harms in the world of work today. The uptake of AI in gig work has at times provided jobs to workers in otherwise informal economies, but it is now being challenged in many countries for producing low-quality work environments.

AIPOWW focusses on strategies for reducing harms to workers in the digital economy, who should be protected from challenges to decent work.

In response to the widespread and burgeoning digitalisation of the world of work, governments have been pursuing policy programmes to promote the benefits and minimise the risks associated with work in the digital economy through regulation at various stages of hard and soft law. Businesses have prioritised innovation and development in work-related digitalisation and have developed codes of conduct. Civil society, including trade unions, is focussing on protecting workers through lobbying and collective bargaining, while research organisations are producing typologies of governance systems to identify best practice in possible futures of work.

To help track and critically evaluate what this all means for the world of work, research leads in each of our jurisdictions have prepared a series of case studies examining a) regulation, b) governance, and c) development surrounding digitalisation strategies in seven jurisdictions: Brazil, Canada, China, the EU, India, the United Kingdom and the USA. Our AIPOWW Tracker furthermore outlines how the various regulatory and policy orientations correspond to, meet or are likely to challenge the ILO Fundamental Principles.

AIPOWW provides a one-stop shop on AI regulation, governance and development for the world of work, serving relevant stakeholders and beneficiaries, i.e. business and policy leads.

Not a day passes without the news being full of the latest advancements in artificial intelligence (AI). Most recently, generative AI, a family of programs that automatically produce long essays, lyrics, music pieces or images in virtually any style following simple user prompts, has struck people’s imagination. Yet the pitfalls of these new tools have quickly become apparent as well. From biased assessments and false information to outright manipulation of users, the behaviour of the latest generation of AI tools remains highly unpredictable, requiring careful monitoring and regulation of its use.

Graphic of Artificial Intelligence on skyline

Unsurprisingly, the development and regulation of AI has become a major concern for policy makers, businesses, and civil society organisations worldwide. Striking the right balance between technological development and safety concerns has produced an intensive dialogue between different actors both nationally and internationally. It is no longer sufficient to simply assess the pros and cons of AI. Equally important is understanding the dynamics behind regulatory and policy action, in order to assess the likely outcomes of these political decisions and their possible implications for the world of work. In this document, we provide a first conceptual framework for understanding the political economy that affects the evolution of AI regulation, and we illustrate political dynamics through case studies of specific jurisdictions: India, China, Brazil, the USA, Canada and the EU. These cases were selected to provide a balanced spread of narratives from emerging, developing and advanced countries.

Our research demonstrates that AI regulation evolves within a limited space of possible policy objectives. On the one hand, most advanced countries and jurisdictions have started putting forward strategic objectives for AI development as a core technology and industrial policy (OECD, 2021a). This includes support for public and private research in this area, the use of public procurement for industrial development, and the creation of a conducive regulatory environment in which these technologies can or should be deployed. Part of this involves securing access to relevant resources, whether the capacity to develop dedicated computer chips, access to specialists and their knowledge, or influence over standard-setting bodies.

At the same time, policy makers and civil society are concerned about the possible negative implications of these technologies for privacy and personal sovereignty. Attempts to limit applications to specific use cases, and to provide avenues for litigation where AI can be shown to have led to individual or collective harms, have increased. As a start, following the General Data Protection Regulation (GDPR) issued by the European Union, many countries around the world have adopted similar forms of data protection and regulation or are in the process of doing so. Currently, policy action focuses on regulating the development and use of specific algorithms in particular applications. At the time of writing, however, no country or other jurisdiction had adopted an encompassing regulation in this regard. The EU’s proposed AI Act is the only hard-law regulation to date and was awaiting a parliamentary vote at the time of writing.

Finally, regulators are increasingly aware of the need for impact assessment and post-market-introduction surveillance of the consequences of AI tools. Most countries rely on a substantial body of product safety and health regulation that would also apply to cases of algorithmic malfunction or unintended consequences of the use of AI. Similarly, existing anti-discrimination, disability and labour laws should limit the use and applicability of AI tools in the workplace, at least in principle. However, the society-wide implications of the use of AI are poorly understood and regulated, as exemplified by the pervasive use of social media and its potential implications for political polarisation. Similarly, existing anti-trust regulation and competition policy have difficulty integrating the specific characteristics of the data economy, let alone the autonomous decision-making capacity of algorithms.

These three dimensions of AI development and regulation are embedded in a pre-existing institutional and regulatory framework that determines legal precedents, institutional mechanisms for change, political actors, and the wider societal goals enshrined in each country’s constitution. Moreover, a country’s regulatory action is influenced by a host of domestic and foreign civil society and economic actors that try to shape the regulatory outcome. In particular, a country’s international economic linkages will play a significant role in determining its policy space for regulation in an industry that is highly international and dominated by a few leading companies with worldwide operations. Similarly, constitutional predispositions and limitations will shape the type of interventions different jurisdictions prefer: some will rely more on case law, others will rely entirely on industry self-regulation (soft law) or refer to international standards, while yet another group of countries might favour hard law and regulation based on human rights considerations, for instance.

This report is part of a wider effort at the ILO to document, analyse and understand the dynamics around the technological development of artificial intelligence, its regulation and its impact on the world of work. It will feed into a newly established AI Policy Observatory for the World of Work, which is scheduled to document and assess policy instruments, focussing on the implications of AI and other digital technologies for changes in the world of work. This first report sets out a conceptual framework that aims to identify the relevant dimensions influencing the development, deployment and use of AI. Through a comparative analysis of country case studies, we hope to understand how countries develop the specific “styles” of national AI strategy that inform their regulation and digital development objectives. In particular, using expert interviews to help understand governmental responses to AI, we assess the current evolution of AI development and regulation in Brazil, Canada, China, the EU, India and the United States, and discuss the way these regulatory dynamics influence each other through international regulatory spillovers.