3. GOVERNANCE
3.1 Federal & Provincial Divisions of Legislative Powers
Due to the nature of Canadian federalism, AI regulation and labour regulation may become fragmented and decentralized, with varying degrees of coordination and collaboration. A 2024 research paper on Canada’s AI governance system finds that Canada’s federalist model has resulted in a fragmentary landscape of AI policies, as Blair Attard-Frost notes:
The development of AI, as well as AI governance, in Canada is very decentralised. It gets distributed across all kinds of different actors. It is dependent on a giant network of public-private partnerships, as well as the three research centres. They are funded at the federal level and at the provincial level. The different levels of government cooperate with a large network of stakeholders with varying levels of interest between them.
For instance, the Canada Labour Code applies only to the federal public service and to federally regulated private industries and workplaces, such as banking, interprovincial and international transport, and telecommunications, which together cover approximately 6 percent of the Canadian workforce. The vast majority of workers are therefore employed in provincially regulated industries and workplaces, such as education, natural resources, healthcare, and social services, and are governed by province-specific legislation covering labour standards, working hours, pay equity, and health and safety.
For the rest of this section, we will focus on the governance of the proposed Bill C-27, an omnibus bill that passed second reading in the House of Commons on 24 April 2023 and contains three separate pieces of legislation: the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act (AIDA). Bill C-27 was subsequently referred to the parliamentary Standing Committee on Industry and Technology (INDU) for further study. During this period, INDU launched hearings on the proposed Bill, prompting numerous responses and studies.
3.2 Need for Due Consultation & Omnibus Bill
Several industry experts, civil society organizations, and academics, in their brief submissions, have criticized the lack of due consultation with stakeholders prior to the drafting of the legislation. The Assembly of First Nations, in its brief to the INDU, emphasized that “consultation and cooperation” are crucial before the introduction of legislation. Similarly, the Information Technology & Innovation Foundation (ITIF) has argued that the expedited discussions of Bill C-27, driven by “fear-based rhetoric about dangerous AI”, could be extremely harmful in the long run and have ostensibly resulted in flawed draft legislation. This criticism is echoed by other civil society organizations, industry experts, and academics, who have urged the Standing Committee to redraft AIDA and hold a full public consultation.
A recurring theme across the submissions has also been the conflation of AIDA with the other parts of the Bill. In this regard, the Canadian Labour Congress (CLC) in its submission pointed out that “instead of a stand-alone Bill, AIDA is bundled into a larger Bill reforming commercial sector privacy law”. This, as noted by the Canadian Association of Professional Employees (CAPE), results in the “dispersion of the necessary attention required to adequately assess AIDA’s specific implications”.
3.3 Employment & Definitions of Harms & High Impact Systems
Other overarching concerns relate to the purpose and scope of the proposed Act. Currently, the purpose of the Act is limited to preventing discrimination and protecting against narrow individual harms – namely, property, economic, physical, and psychological harms. However, as Prof. Ignacio Cofone points out, harm can also be done to society and democracy more broadly, such as through intentional misinformation. Therefore, the CLC, the Alliance of Canadian Cinema Television and Radio Artists (ACTRA), and the Association of Canadian Publishers, inter alia, recommend broadening the scope to include harms to both individuals and society. The Act’s scope has also been heavily criticized on another front: in its current form, AIDA does not apply to any “government department, ministry, institution, or Crown corporation”. The Canadian Union of Public Employees (CUPE, the largest trade union in Canada), the CLC, the Canadian Bar Association, and the International Civil Liberties Monitoring Group (ICLMG), inter alia, have all argued that AIDA should apply to all government institutions. This concern stems from previous allegations that law enforcement agencies, such as the Royal Canadian Mounted Police (RCMP), have used AI facial recognition technologies for surveillance.
Several organizations have also raised concerns regarding the lack of a definition of “high impact systems”. This criticism was partly addressed by proposed amendments to the Bill, which set out a list of categories of “high impact” AI systems, including AI use in court proceedings, healthcare, and employment. Nonetheless, as the CUPE submission points out, AI use in sectors such as “telecommunications, education, housing, critical infrastructure, transportation, immigration, and border security” still falls outside the purview of “high impact systems”. Prof. Teresa Scassa notes that while this exclusion may be an oversight, “it highlights the rather slap-dash construction of AIDA”. Meanwhile, the ITIF has argued that rather than “high impact” systems, AIDA should govern “high risk” systems, as high impact systems could be low risk and thus not require governance. On a similar note, CUPE has argued that AIDA must also include an “unacceptable risk” category for AI systems, as in the EU AI Act. This is necessary because certain AI applications, such as cognitive behavioural manipulation, facial image scraping, emotion recognition in the workplace, and identification and categorization of people using biometrics, ought to be banned outright. Beyond that, CUPE argues that AIDA should include a clause mandating consultation with workers and unions before any AI system is operationalized in the workplace. This should be coupled with a whistleblower protection clause to safeguard workers who report “misconduct, unethical decision-making, or violations of the Act without fear of reprisal”.
Also relevant is a May 2024 report on the implications of AI technologies for the Canadian labour force, adopted by a separate federal House of Commons committee, the Standing Committee on Human Resources, Skills and Social Development and the Status of Persons with Disabilities (HUMA). The report proposed eight recommendations focusing on AI’s potential impacts on workers, businesses, and workplaces; the need for a mechanism through which the federal government can hear from experts on emerging AI issues; and better data collection and research. The recommendations call on various federal departments and offices to:
- Review federal labour legislation to assess its capacity to protect diverse workers’ rights in the context of current and future implementation of AI technologies;
- Develop a framework, in collaboration with provinces and territories and labour representatives, to support the ethical adoption of AI technologies in workplaces;
- Invest in skills training to increase the adaptability of the Canadian workforce to the use of AI technologies;
- Review the impact of AI on the privacy of Canadian workers, create proper regulations to ensure Canadians are protected from AI, ensure those regulations can be and are properly enforced, and consider how this will interact with provinces and territories;
- Ensure the federal Advisory Council on AI encompasses a wide diversity of perspectives, including those of labour, academia, and civil society; and
- Develop a methodology to monitor labour market impacts of AI technologies over time, including AI’s unemployment risks.
3.4 Independence & Oversight of the AI and Data Commissioner
Trade unions like CAPE and civil society organizations like AI Governance and Safety Canada have argued in particular for independent enforcement, oversight, and review of AIDA, advocating for these responsibilities to rest with the AI and Data Commissioner as an independent regulator rather than with the Minister of Innovation, Science and Industry. Some of these issues have been addressed by the proposed amendments, which grant more powers to the Commissioner, yet the amendments fall short of granting complete independence from the Minister.
Furthermore, given the ever-evolving nature of AI, organizations like the ICLMG have suggested annual public reporting and periodic reviews of AIDA and its regulations. Blair Attard-Frost similarly underscores the importance of reviews but also argues that the scope of the Act should be expanded to consider “co-regulation, coordination, and collaboration on AI governance” extending beyond “government and industry and into a great variety of civil society organizations”. Greater diversity in media coverage of and debate on AI issues is also needed.
While organizations such as the Council of Canadian Innovators, which comprises over 150 CEOs of high-growth Canadian companies, have noted that Canada should regulate AI at a rapid pace, they also emphasize the need for caution and advocate the adoption of ‘Responsible AI Leadership Principles’, which include increasing public trust in AI products and establishing unambiguous regulations. As of now, the government, as the CLC noted in its submission, appears to be taking a “light-touch” regulatory approach, which is deeply inadequate. Various stakeholders, including experts, academics, and civil society organizations, have therefore called on the government to reintroduce a revised and improved AIDA.
3.5 Labour Relations & Collective Bargaining
Beyond submissions to the government, unions are also slowly preparing for the impending technological change. For instance, Unifor, Canada’s largest private sector union, has prepared reports and documents outlining strategies for workers to confront technological changes at workplaces. Similarly, CUPE is aiming to protect its members’ rights and provide them with job security by developing “AI clauses” in its collective agreements. Provisions addressing ‘technological change’ are certainly not novel in the Canadian context, as unions have frequently included these in their negotiated agreements. Depending on the specific language, these ‘technological change’ provisions may, inter alia, require the employer to provide written notice to both employees and the union, offer retraining to affected employees, and engage in consultation and discussion with the union before the implementation of the technology. Additionally, provisions relating to monitoring, tracking, surveillance, and data collection have also been repeatedly incorporated into negotiated agreements. While these may be broad enough to ostensibly encompass issues concerning generative AI, there is still ambiguity, highlighting the urgent need for specific clauses that deal with AI-related concerns.
In the end, the knowledge and capacity that unions and organizations develop to effectively address the use of AI will be most impactful if, as De Stefano and Doellgast note, they possess “real influence on decisions, through legal bargaining rights backed up by encompassing collective agreements, employment protections and data protection rules.”