Regulation
China’s response to the onset of AITs stands out in several ways, but arguably most of all for its regulatory and legislative speed. In the supposed race to set global regulatory standards, China has made an ambitious bid with a flurry of policy enactments over the last decade. China commenced a new era of regulatory activity around AITs with the New Generation AI Development Plan (AIDP) in 2017 [1]. Its purpose, as set out by the State Council, was to develop a strategy that would see China become the world leader in AI by 2030. The Plan identifies AI as key to addressing current and emerging challenges in areas such as economic growth, national security, and technological innovation, and proposes as a solution the cultivation of domestic research and development capacity through the build-out of greater AI infrastructure.
Two years later, in 2019, the New Generation AI Governance Expert Committee (established by the Ministry of Science and Technology, MOST) published Governance Principles for a New Generation of Artificial Intelligence.[2] This document introduces eight core principles for the development and regulation of 'responsible AI': harmony and friendliness, fairness and justice, inclusivity and sharing, respect for privacy, secure/safe and controllable, shared responsibility, open collaboration, and agile governance.[3] This served as a foundation for the Ethical Norms for New Generation Artificial Intelligence (ENGAI), published by the same committee in 2021.[4] ENGAI implements the 2019 governance principles in a 'detailed manner' by providing 'ethical guidance to natural persons, legal persons, and other related institutions engaged in AI-related activities'.[5] ENGAI comprises five sections totalling 25 Articles, each of which provides ethical guidance on issues ranging from research and development to AI management.
An important white paper was published by the Big Data Security Standards Special Working Group of the National Information Security Standardization Technical Committee in 2019.[6] The authors note that 'AI security standardization is an important component of the development of the AI industry' because 'It plays a fundamental, normative, and leading role in stimulating healthy and benign AI applications and in promoting the orderly and healthy development of the AI industry'.[7] The document addresses security issues ranging from protecting critical information infrastructure to managing data security risks in AI development.
A signature policy to have come out of the Cyberspace Administration of China (CAC) is the Internet Information Service Algorithmic Recommendation Management Provisions (IISARMP), jointly issued with the Ministry of Industry and Information Technology, the Ministry of Public Security, and the State Administration for Market Regulation. A draft proposal was first released on 27 August 2021, followed by a public comment period until 26 September of that year. A finalised version was published on 4 January 2022, and the law went fully into effect on 1 March 2022. The IISARMP consists of several dozen provisions that regulate the development and use of internet algorithmic recommendation services, with wide-reaching implications. The law codifies technical and policy requirements, ethical requirements, and prohibited behaviour for algorithmic providers and operators.
A central theme running through the IISARMP is enhancing the capacity of individuals to interact with algorithms and platforms on their own terms. Considerable emphasis is placed on the 'protection of user rights', which includes appeals to user notification, norms disclosure, procedures for obtaining consent, provisions for opting out of monitoring and surveillance, and control over personal data. These protections constitute a considerable set of tools for individuals to circumscribe and resist the power of algorithms and contest their outputs. This may have major implications for workers as labour is increasingly pushed into digitalised space. Workers confront a future characterised by heightened surveillance and management by smart technologies, and key provisions within the IISARMP could be leveraged by employee representatives to limit and, in some cases, eliminate exploitative and dominating workplace practices. Examples include deleting user tags aimed at personal characteristics, demanding explanations of algorithmic practices that influence worker interests, and amplified enforcement of existing labour rights.
Another core theme of the IISARMP is a commitment to aligning algorithmic recommendation services with the promotion of the 'common good'. This objective underscores the Chinese government’s intention to implement a social model of governance that directly oversees the absorption of algorithmic and AI technologies, and is reflected in the IISARMP’s commitments to increased social supervision, heightened transparency, and the establishment of clearly defined enforcement mechanisms and bodies. Furthermore, the duties imposed on algorithmic recommendation service providers mandate explicitly socially oriented observances such as advancing 'the social public interest', respecting 'national security', not 'upsetting the economic or social order', and preventing 'addiction or excessive consumption', among other imperatives related to the general welfare. Again, this policy goal may be utilised to promote worker well-being, because improved working conditions are ostensibly in the social interest and provide stability to the economic order.
The IISARMP works in tandem with a bevy of other policies related to the governance of digital platforms, algorithms, and artificial intelligence. The most notable examples are the Personal Information Protection Law (PIPL), the Data Security Law, and the Cybersecurity Law. A final version of PIPL was released on 20 August 2021 and became effective law on 1 November of that year. PIPL is "a special legislation on personal information protection… [that] contains the basic principles, requirements and related systems for the protection of personal information."[8] It is therefore often described as a Chinese version of the EU’s GDPR given the deep parallels between them, though there are differences. This legislation is expected to have a significant impact because it "solves the [current] problem of inadequate and scattered personal information protection legislation."
A central objective of PIPL is to clarify the rights of users over their personal information when interacting with internet-based services. The law introduces new protections against profiling and extends new rights for users to customise how their data is used by data processors. Examples include the ability to turn off targeting based on individual characteristics, to request exclusion from automated decision-making, and to provide user feedback. The law also advances key changes to the control, ownership, and use of personal data. Through PIPL, individuals gain the rights to "inquire about what personal data is being collected and stored by the data processor… to request a copy of their personal data, correct any inaccurate personal information, and delete their personal information when withdrawing consent or terminating the use of the product or service."[9]
Other relevant rights from PIPL come in the form of explicit duties placed on data processors. The requirement for processors to obtain users' consent to data collection will encourage processors to introduce an 'opt-in' interface as opposed to an opt-out one. Relatedly, Article 15 stipulates that processors must "provide a convenient way to let the user withdraw their consent".[10] These strong consent-based rights are further enhanced by the fact that, under PIPL, processors cannot refuse service if a user denies or withdraws consent, unless the processing is necessary to provide the service. Finally, the law also provides safeguards against unwanted sharing of data with third-party entities by requiring processors to obtain additional and separate consent to do so. Additional consent is also required for the processing of 'sensitive personal information', which is a considerable step given that "The scope of 'sensitive personal information' in the PIPL is much broader than in the GDPR," with "financial information, transaction records, and location tracking… regarded as sensitive personal information."
One of the most recent regulatory focuses of the Chinese government is 'deep fake' technologies and services. On 10 January 2023 China released its 'Deep Synthesis Provisions' to 'strengthen its supervision over deep synthesis technologies and services'.[11] These provisions extend obligations to both providers and users of deep synthesis services, with the aim of promoting transparency, data and personal information protection, content management and labelling, and technical security. The 'comprehensive scope' of the deep synthesis provisions makes China a leader in the regulation of this kind of technology. As Kachra (2023) explains, "While the UK is also intending to ban the creation and dissemination of deepfake videos without consent, China’s law goes beyond this. The regulation creates rules for every stage of the process involved in the use of deepfakes, from creation to labelling to dissemination, leaving room for the potential suppression of organically captured content as well."
Provincial and local regulation
AI regulation and development has not only come from national legislatures and agencies; provincial and local actors have started to put forward their own policies as well. A notable example is the 'Regulations on promoting artificial intelligence industry' passed in the Shenzhen Special Economic Zone. The purpose of the Shenzhen AI Regulation is 'to promote the AI industry by encouraging governmental organizations to be the forerunners in utilizing related technology and increasing financial support for AI research in the city'.[12] These efforts align with Shenzhen’s interest in being China’s leading site for AI and tech development. The city has pledged an investment of $108 billion over five years (2021-2025) 'to reinforce its position as China’s innovation powerhouse'.[13] The regulations also contain guidelines for how public data is to be shared when supporting organisations and businesses. Other important elements of the policy include green-lighting trials of 'low risk' technologies and a call for establishing an AI ethics council to 'develop safety standards and examine how the technology will affect things like employment, data protection, and other societal concerns'.[14]
Shanghai has also been proactive in the realm of AI development and regulation. In September 2022 the municipal government passed a law entitled 'Shanghai Regulations on Promoting the Development of the AI Industry'. Government sources explain that the purpose of the law is to 'promote the building and use of public computing resource platforms and provide public computing power support for AI technology and industrial development'.[15] Similar to the Shenzhen law, this policy contains many key components, including the introduction of 'grading management and "sandbox" supervision' to help guide the testing of new technologies, the establishment of ethical norms and guidelines for AI research and development, and the setting of boundaries for the toleration of 'minor infractions to encourage exploration of scientific frontiers and inspiring innovation'.[16]