Unsurprisingly, the development and regulation of AI has become a major concern for policy makers, businesses and civil society organisations worldwide. Striking the right balance between technological development and safety concerns has produced an intensive dialogue among different actors, both nationally and internationally. It is no longer sufficient simply to assess the pros and cons of AI. Equally important is to understand the dynamics behind regulatory and policy action, in order to assess the likely outcomes of these political decisions and their possible implications for the world of work. In this document, we provide a first conceptual framework for understanding the political economy that shapes the evolution of AI regulation and illustrate the political dynamics through case studies of specific jurisdictions: India, China, Brazil, the United States, Canada and the EU. These cases were selected to provide a balanced spread of narratives from emerging, developing and advanced economies.
Our research demonstrates that AI regulation evolves within a limited space of possible policy objectives. On the one hand, most advanced countries and jurisdictions have started putting forward strategic objectives for AI development as a matter of core technology and industrial policy (OECD, 2021a). These include support for public and private research in this area, the use of public procurement for industrial development and the creation of a conducive regulatory environment in which these technologies can or should be deployed. Part of this involves securing access to relevant resources, whether the capacity to develop dedicated computer chips, access to specialists and their knowledge, or influence over standard-setting bodies.
At the same time, policy makers and civil society are concerned about the possible negative implications of these technologies for privacy and personal sovereignty. Attempts to limit applications to specific use cases, and to provide avenues for litigation where AI can be shown to have caused individual or collective harm, have increased. As a start, following the European Union's General Data Protection Regulation (GDPR), many countries around the world have adopted, or are in the process of adopting, similar forms of data protection and regulation. Currently, policy action focuses on regulating the development and use of specific algorithms in particular applications. At the time of writing, however, no country or other jurisdiction had adopted an encompassing regulation in this regard. The EU's proposed AI Act is the only hard-law initiative to date and, at the time of writing, was awaiting a vote in the European Parliament.
Finally, regulators are increasingly aware of the need for impact assessment and post-market surveillance of the consequences of AI tools. Most countries rely on a substantial body of product safety and health regulation that would also apply to cases of algorithmic malfunction or unintended consequences of the use of AI. Similarly, existing anti-discrimination, disability and labour laws should, at least in principle, limit the use and applicability of AI tools in the workplace. However, the society-wide implications of the use of AI remain poorly understood and regulated, as exemplified by the pervasive use of social media and its potential contribution to political polarization. Similarly, existing antitrust regulation and competition policy have difficulty integrating the specific characteristics of the data economy, let alone the autonomous decision-making capacity of algorithms.
These three dimensions of AI development and regulation are embedded in a pre-existing institutional and regulatory framework that determines legal precedents, institutional mechanisms for change, political actors and the wider societal goals enshrined in each country's constitution. Moreover, a country's regulatory action is influenced by a host of domestic and foreign civil society and economic actors that try to shape the regulatory outcome. In particular, a country's international economic linkages will play a significant role in determining its policy space for regulation in an industry that is highly international and dominated by a few leading companies with worldwide operations. Similarly, constitutional predispositions and limitations will shape the type of interventions different jurisdictions prefer: some rely more on case law, others on industry self-regulation (soft law) or international standards, while yet another group of countries might favour hard law and regulation based on human rights considerations, for instance.
This document is part of a wider effort at the ILO to document, analyse and understand the dynamics around the technological development of artificial intelligence, its regulation and its impact on the world of work. It will feed into a newly established AI Policy Observatory for the World of Work, which is intended to document and assess policy instruments, focusing on the implications of AI and other digital technologies for changes in the world of work. The document sets out a conceptual framework that aims to identify the relevant dimensions influencing the development, deployment and use of AI. Through a comparative analysis of country case studies, we hope to understand how countries develop their specific “styles” of national AI strategies that inform their regulation and digital development objectives. In particular, drawing on expert interviews to understand governmental responses to AI, we assess the current evolution of AI development and regulation in Brazil, Canada, China, the EU, India and the United States, and discuss how these regulatory dynamics influence each other through international regulatory spillovers.