Introduction

Canada was the first country in the world to introduce a national AI strategy in 2017.

Since then, Canada's AI regulation, development, and governance have continued to evolve and deepen. This country case maps three aspects of AI regulation, development, and governance in Canada, with an emphasis on their impact on the world of work where applicable. There is substantial investment in strategy development, research, and talent development, as well as collaboration between public and private sector partners. Canada boasts one of the highest concentrations of AI talent globally, especially along its Québec-Montréal-Toronto-Waterloo corridor. This focus on talent development and research started over thirty years ago, with the “Artificial Intelligence, Robotics and Society” group led by Geoffrey Hinton at the Canadian Institute for Advanced Research (CIFAR). Yoshua Bengio, Richard Sutton, and other Canadian researchers also played key roles in advancing AI research.

Multiple levels of government in Canada have recently modified existing AI regulation or are planning new rules. Due to the federal style of government, with constitutional divisions of legislative powers and competences, AI policy (regulation, development, and governance) is decentralized and fragmented across different levels and jurisdictions. AI policy and development are mainly led by the federal department of Innovation, Science and Economic Development Canada (ISED), which is responsible for issues of innovation, technological and economic development, and consumer protection. Provincial governments, such as Québec and Ontario, also have considerable influence in developing their regional AI ecosystems. Additionally, it is important to note that most Canadian workers are in provincially regulated private industries and workplaces, where provinces set the rules for labour standards (see Governance for more information).

At present, Canada's AI regulatory framework, encompassing legislation such as the proposed Bill C-27, which includes the Artificial Intelligence and Data Act (AIDA), initiatives from federal and provincial governments, and decisions from the judiciary, reflects a balanced and decentralized approach to managing AI's impact. By aligning legal protections, ethical standards, and innovation-friendly policies, Canada seeks to foster a robust and responsible AI ecosystem. This framework not only prioritizes safeguarding individual rights and ensuring transparency but also supports the integration of AI in a way that complements societal values and economic growth. As AI continues to evolve, Canada's regulatory landscape will likely adapt to address emerging challenges, ensuring that technological advancements do not compromise the fundamental rights of people in Canada. This ongoing development highlights the importance of a dynamic regulatory, development, and governance environment that can navigate the delicate interplay between innovation and protection in the digital age.

1. REGULATION

1.1 Proposed Federal Regulation C-27 & Guiding Principles

Canada has been at the forefront of AI regulation and governance, with the federal government actively developing a legislative framework to address the rapid growth and deployment of AI technologies. One of the key federal regulatory initiatives is the Artificial Intelligence and Data Act (AIDA), introduced as part of Bill C-27, the Digital Charter Implementation Act, 2022. Bill C-27 is based on a series of engagements between federal ministers and the Canadian public from 2016 to the end of 2018. AIDA seeks to regulate international and interprovincial trade and commerce in AI systems by establishing common requirements, and to regulate high-impact AI systems, focusing on their design, development, and use to ensure safety, accountability, and compliance. It aims to prevent individual and certain collective harms associated with biased AI outputs, establishing a risk-based approach to the regulation of AI activities. The Act introduces the role of an AI and Data Commissioner, who will oversee compliance, support educational initiatives, and enforce the Act’s provisions. AIDA could bring far-reaching changes to the world of work, including explicit regulation of the use of automated decision-making across the workforce, especially in federally regulated industries and workplaces.

Complementing AIDA, Canada’s Digital Charter sets out high-level guiding principles to ensure that privacy is protected, data-driven innovation is human-centered, and Canadian organizations can lead in global AI innovation. Although the Digital Charter itself lacks enforceable power, it establishes a foundational framework for ethical AI development in Canada. The proposed Bill C-27 expands on these principles, aiming to modernize the regulatory landscape by establishing a tribunal for privacy-related appeals and introducing significant penalties for non-compliance. This underscores the government’s commitment to safeguarding Canadians’ data rights and ensuring AI systems are developed responsibly. Launched in 2019, the Digital Charter laid out the ten principles that frame the Canadian approach to AI and data regulation:

  1. Universal Access: All Canadians will have equal opportunity to participate in the digital world and the necessary tools to do so, including access, connectivity, literacy and skills.
  2. Safety and Security: Canadians will be able to rely on the integrity, authenticity and security of the services they use and should feel safe online.
  3. Control and Consent: Canadians will have control over what data they are sharing, who is using their personal data and for what purposes and know that their privacy is protected.
  4. Transparency, Portability and Interoperability: Canadians will have clear and manageable access to their personal data and should be free to share or transfer it without undue burden.
  5. Open and Modern Digital Government: Canadians will be able to access modern digital services from the Government of Canada, which are secure and simple to use.
  6. A Level Playing Field: The Government of Canada will ensure fair competition in the online marketplace to facilitate the growth of Canadian businesses and affirm Canada’s leadership on digital and data innovation, while protecting Canadian consumers from market abuses.
  7. Data and Digital for Good: The Government of Canada will ensure the ethical use of data to create value, promote openness and improve the lives of people—at home and around the world.
  8. Strong Democracy: The Government of Canada will defend freedom of expression and protect against online threats and disinformation designed to undermine the integrity of elections and democratic institutions.
  9. Free from Hate and Violent Extremism: Canadians can expect that digital platforms will not foster or disseminate hate, violent extremism or criminal content.
  10. Strong Enforcement and Real Accountability: There will be clear, meaningful penalties for violations of the laws and regulations that support these principles.

1.2 Supportive AI Initiatives & Tools from the Federal Government

Canada has implemented several supportive initiatives to guide the responsible development and adoption of AI technologies. Among these, the Advisory Council on AI provides strategic advice to the government on AI policy, governance, and adoption, ensuring that Canada's regulatory approach remains aligned with global standards and best practices. Further reinforcing these efforts, the Directive on Automated Decision-Making, issued by the Treasury Board of Canada, applies specifically to federal departments using automated decision-making systems. This directive mandates that AI applications in government services must align with principles of transparency, accountability, legality, and procedural fairness.

It sets out requirements for federal institutions to conduct Algorithmic Impact Assessments (AIAs) to evaluate the risks associated with automated decision systems, thereby promoting responsible AI use within public administration. The AIA tool helps federal institutions determine the level of risk associated with their AI applications and guides the necessary measures to mitigate those risks, supporting transparency and accountability in AI deployment. The federal government also provides a pre-qualified list of AI vendors through the AI Source List, ensuring that government entities can engage with trusted and vetted AI suppliers for their projects.

The Global Partnership on AI (GPAI), of which Canada is a co-founding member, exemplifies Canada’s commitment to international collaboration in AI governance, focusing on the responsible development and use of AI technologies globally. The AI and Data Governance Standardization Collaborative (AIDG) seeks to further standardization strategies across Canada and abroad. The Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems further illustrates Canada’s proactive stance in AI regulation. This code of conduct encourages private companies to adopt responsible AI practices ahead of formal regulations, emphasizing principles such as safety, fairness, transparency, and human oversight.

1.3 Provincial Initiatives

Provincial governments in Canada also play a significant role in AI regulation, contributing to a decentralized and multifaceted regulatory landscape. Québec has taken a robust approach with Law 25, the Act Respecting the Protection of Personal Information in the Private Sector, which regulates the collection, use, and disclosure of personal information, including that of employees, in the private sector. Under this Act, employees have the right to be informed first, in plain language, of the purposes and means by which information is collected about them, including when it relates to employee work performance. Furthermore, employees and other persons can access and rectify the personal information collected and have the right to withdraw their consent. Employers and enterprises need to publish the requisite policies, conduct privacy impact assessments (PIAs), and appoint designated officers to comply with the provisions of the Act. This law aligns closely with the European Union’s General Data Protection Regulation (GDPR), establishing stringent requirements for businesses handling personal data. Québec’s initiative underscores the province’s commitment to safeguarding individual privacy rights in the face of growing AI applications that leverage personal data.

Ontario’s Working for Workers Act, 2022, stands out as a notable example, focusing on workers' rights in the context of AI and digital platforms. The Act mandates that employers with 25 or more employees develop written policies on electronic monitoring, ensuring transparency and accountability in the use of AI-driven employee monitoring tools. This aligns with broader efforts to protect workers’ rights and privacy in an increasingly digital workplace. In 2024, Ontario’s proposed Bill 194, the Enhancing Digital Security and Trust Act, 2024 (EDSTA), further defined AI systems not specified in the Working for Workers Act, 2022. The EDSTA also mandates the development of cybersecurity programs, with possible technical standards to be set by the government. This legislation underscores the province's commitment to safeguarding digital information, particularly data involving minors, and establishing clear accountability for the use of AI in the public sector. Together, these initiatives reflect Ontario’s proactive stance on AI governance, emphasizing the importance of transparency, privacy, and security in digital operations across provincial entities.

1.4 Role of the Privacy Commissioners & Laws

The Offices of the Privacy Commissioner play a pivotal role in overseeing the implementation and enforcement of privacy laws related to AI and data management at the federal, provincial, and territorial levels. The Commissioners advocate for the modernization of privacy protections to keep pace with technological advancements, particularly those driven by AI. This includes ensuring that frameworks like the Personal Information Protection and Electronic Documents Act (PIPEDA) and its provincial counterparts are updated to address the unique challenges of AI, such as data privacy, consent, and transparency in AI-driven decision-making processes.

Similarly, Alberta and British Columbia have their own Personal Information Protection Acts, which govern how personal information is handled within their respective jurisdictions. These acts further contribute to Canada’s comprehensive approach to AI regulation. At the time of writing this case, Alberta is actively reviewing and updating its privacy legislation to address the challenges posed by AI and modern data practices. The Alberta Standing Committee on Resource Stewardship is currently examining the province's Personal Information Protection Act (PIPA), signaling potential changes to enhance privacy protections. In June 2024, Alberta's Information and Privacy Commissioner, Diane McLeod, presented recommendations to update PIPA, emphasizing the need to reflect the modern technological landscape and how personal information is shared with organizations. Key suggestions include recognizing the protection of personal information as a fundamental human right, granting individuals explicit access rights to their data held by organizations, introducing a "right to be forgotten," and implementing heightened protections for children's personal information.

At the end of 2023, the federal, provincial, and territorial privacy commissioners developed a set of principles to advance the responsible, trustworthy, and privacy-protective development and use of generative AI technologies across Canada. The privacy commissioners also collaborate with other regulatory bodies and stakeholders to address potential gaps in AI governance, ensuring a coordinated approach to protecting Canadians’ rights, with a focus on the unique impacts on vulnerable groups and on high-impact contexts such as employment. This includes monitoring the impacts of AI on privacy and recommending legislative changes where necessary.

Further reinforcing this direction, the Privacy Commissioner of Canada, Philippe Dufresne, appeared before the Alberta committee in September 2024, advocating for interoperability between provincial and federal privacy laws to ensure consistent protection of personal information across jurisdictions. The alignment of provincial initiatives with federal efforts, such as the proposed Consumer Privacy Protection Act (CPPA) under Bill C-27, underscores a nationwide commitment to strengthening privacy regulations in response to the increasing integration of AI technologies. Through these efforts, the Commissioner seeks to balance innovation with the imperative to protect individual rights, contributing to a trusted digital environment in Canada.

1.5 Court & Related Decisions

Recent legal rulings in Canada have highlighted the challenges and complexities of applying existing laws to new technologies, including AI, and how these decisions intersect with ongoing regulatory efforts.

The Supreme Court of Canada, in the case York Region District School Board v. Elementary Teachers’ Federation of Ontario, 2024 SCC 22, recognized that Ontario public school teachers’ privacy rights in the workplace are protected by section 8 of the Charter, which protects the right to be secure against unreasonable search and seizure. While the case does not specifically address AI, it is relevant to Canada’s AI regulatory landscape as it underscores the necessity for clear privacy guidelines and accountability, especially when new technologies are used in monitoring or managing employees. In addition, the Court emphasized that an employee’s reasonable expectation of privacy is highly contextual, involving a balance between privacy rights and the employer's operational needs and management rights.

The Privacy Commissioner of Canada’s investigation into the RCMP’s use of Clearview AI’s facial recognition technology further illustrates the complexities of AI regulation in Canada. The investigation concluded that the RCMP's use of Clearview AI violated the Privacy Act, as the technology involved searching a database compiled unlawfully by scraping images from the internet without user consent. This case underscores the need for clear guidelines and compliance with privacy laws when integrating AI technologies, particularly in public institutions. The Privacy Commissioner has also called on Parliament to amend the Privacy Act to explicitly require federal institutions to ensure third-party data sources comply with legal standards. This call reflects a growing recognition of the challenges posed by AI and digital technologies in safeguarding privacy rights.

In the realm of copyright, the Federal Court’s decision in Blacklock’s Reports v. Attorney General of Canada reinforces the principle that digital locks should not override user rights, such as fair dealing, and instead must coexist harmoniously. This ruling appears to align with Canada’s AI regulatory approach, as reflected in the proposed AIDA, which emphasizes a balanced framework that protects both innovation and fundamental rights. The decision underscores the importance of ensuring that AI regulations do not unduly restrict fair access to data or content, which seems crucial for fostering innovation while safeguarding user rights.

From the authors’ perspective, the Blacklock’s Reports case highlights a critical point of caution for AI regulations: the need to avoid overly broad restrictions that could inhibit lawful uses, such as fair dealing or fair access to AI-generated outputs. By affirming that user rights should not be unjustly limited by technological measures, the ruling arguably supports Canada’s AI strategy, which seeks to balance protection against potential harms with the promotion of responsible and equitable access to AI technologies. This balance is seen as crucial in AI governance to prevent a scenario where protective measures could inadvertently suppress legitimate, innovative, or research-related uses of AI. Thus, the decision could be viewed as providing a judicial precedent that aligns with the nuanced approach of Canada's AI regulatory framework, reinforcing the need to carefully navigate the intersection of technological protection and user rights.

Additionally, the Federal Court of Canada has provided guidelines on the use of AI-generated content in legal proceedings, emphasizing the need for transparency and caution when incorporating AI into court documents. This directive reflects broader concerns about the authenticity and reliability of AI-generated information, especially in legal contexts.

Provincial courts have also weighed in on the use of AI, particularly in legal briefs and judgments. The courts of British Columbia, for instance, have issued directives cautioning against the uncritical use of AI in legal cases, stressing the potential dangers of relying on AI-generated legal analyses.

These decisions collectively contribute to a growing body of jurisprudence that shapes the responsible use of AI in legal and administrative contexts across Canada.

2. DEVELOPMENT

2.1 Canada’s AI Ecosystem: The Pan-Canadian AI Strategy

In mid-2022, Canada launched the second phase of its Pan-Canadian Artificial Intelligence Strategy (PCAIS). With a CAD$443 million investment over ten years, this phase aims to accelerate AI research, commercialization, and adoption across Canada, which includes setting standards and assessment programs related to AI, as well as CAD$125 million for Canada’s Global Innovation Clusters to support AI commercialization.

The purpose of the strategy is to build a Canadian AI talent pipeline and ecosystem supported by centres of research and training at the National AI Institutes. The strategy focuses on enhancing Canada's AI research base and continuing to develop, retain, and attract an academic talent pool, while promoting responsible AI development. Since 2017, the Canadian Institute for Advanced Research (CIFAR) has been at the heart of the PCAIS, designing and implementing the strategy. Key investments in the second phase of the PCAIS include $60 million for the three National AI Institutes (Amii, Mila, and the Vector Institute) to translate research into commercial applications and $160 million for CIFAR to continue developing academic talent. As a direct result, over 100 leading AI researchers, roughly 10% of the world’s top-tier AI researchers, have been retained as Canada CIFAR AI Chairs, while more than 1,500 graduate students and post-doctoral fellows have been trained. By 2023, Canada was home to over 140,000 actively engaged AI professionals and 1,500 companies developing AI solutions, and the Canadian AI sector attracted US$15.2 billion in venture capital from 2012 to 2023.

In 2024, Canada is investing another CAD$2.4 billion to secure its AI advantage. Central to this is the new AI Compute Access Fund to support researchers' and industry's immediate computational power needs, alongside other AI adoption-related programs. Public consultations are underway on the development of a new Canadian AI Sovereign Compute Strategy to secure Canadian-owned and Canadian-located critical AI infrastructure. Furthering this, Canada is investing CAD$400 million to accelerate AI innovation and adoption across key sectors, with CAD$200 million allocated through Regional Development Agencies to support AI start-ups and critical industries like agriculture, clean technology, health care, and manufacturing. An additional CAD$100 million will be invested in the National Research Council Industrial Research Assistance Program (NRC IRAP) AI Assist Program to help small and medium-sized businesses scale up by developing and implementing AI solutions. To address potential workforce disruptions from AI, CAD$50 million will be directed to the Sectoral Workforce Solutions Program for new skills training. Finally, CAD$50 million will establish the Canadian AI Safety Institute to ensure the safe development and deployment of AI, in collaboration with stakeholders and international partners.

Industry groups, such as the Council of Canadian Innovators, are advocating for greater AI commercialization, with a focus on increasing domestically held AI intellectual property and building domestic AI companies. Canada’s AI ecosystem continues to face the pull of higher salaries and better job prospects in the United States, particularly Silicon Valley. As James Steinhoff explained in 2022:

It appears that the main function of the CIFAR centres is to act as an anti-brain drain mechanism, keeping Canadian graduates in Canada. I don’t know if they are effective or not, as the salaries are just not high in comparison to the US. This is a thing they talk about all the time in academia and industry. One of the main pillars of the strategy is to try and retain talent.

This connection to the United States is also important in a wider sense:

Canada is the second fiddle country to the US. People in power say a lot of grandiose things about the capacities here in Canada, but the fact is we have relatively few people in the area and are outgunned both on resources and jobs. So my sense is that everyone is scrambling to convince people to stay here so that Canada can try to generate some sort of Canadian-based tech giant.

2.2 Subnational Programs to Enable & Strengthen AI Ecosystems

In Québec, the Conseil de l'innovation du Québec (CIQ) 2024 report highlighted 12 priority recommendations to guide the responsible development and deployment of AI. The report advocates for establishing provincial AI legislation with independent oversight, adapting labour laws and education systems to AI advancements, and making strategic investments in AI research. It also emphasizes the creation of a high-quality cultural database and suggests forming an interim AI governance committee. The report reflects a human-centered approach aligned with Québec values and encourages ongoing public dialogue. The recommendations are part of broader efforts, supported by the Québec government, to position the province as a leader in responsible AI development. This builds on the province’s 2018 CAD$100 million strategy for developing Québec’s subregional AI ecosystem and integrating AI into public administration to improve public service quality, efficiency, and equity. It is furthered by subnational pension funds funnelling strategic capital to strengthen local AI enterprises. In 2023, the Fonds de recherche du Québec also supported the second phase of a CAD$15 million, five-year research program via the Observatoire international sur les impacts sociétaux de l'IA et du numérique (OBVIA), which includes a research hub on Industry 4.0, work, and employment. These programs underscore Québec's complementary and robust subregional AI ecosystem strategy and enabling resources.

Ontario’s Trustworthy Artificial Intelligence (AI) Framework establishes provincial guidelines for the safe and responsible use of AI across provincial government programs and services, ensuring that AI technologies are deployed ethically and effectively. The framework enhances public trust by promoting transparency, accountability, and fairness in AI applications. This is further supported by CAD$77 million in public investments for the Ontario Centre of Innovation’s Critical Technology Initiatives (CTI) and the Vector Institute to develop and adopt AI and other technologies in critical sectors.

As one of the first sets of principles for AI, the Montréal Declaration for the Responsible Development of AI is a voluntary collective initiative launched in 2017 aimed at ensuring that AI serves the well-being of all people and guides social change with strong democratic legitimacy. The Declaration outlines key principles for responsible AI development, including enhancing well-being, respecting autonomy, and protecting privacy. It advocates for AI that fosters solidarity, promotes equity, and supports democratic participation. The principles also emphasize diversity, caution in mitigating risks, accountability for AI decisions, and prioritizing sustainability. These guidelines are designed to ensure that AI contributes positively to society while respecting human values and the environment.

2.3 Ongoing Public Consultations & Sandboxes

The Government of Canada has initiated a series of public consultations to address various aspects of AI. In October 2023, a consultation was launched to explore the implications of generative AI on copyright, focusing on how copyright holders can consent to, receive credit for, and be compensated for the use of their works by AI. Additionally, in March 2024, the Competition Bureau of Canada sought public feedback on the intersection of AI and competition law, aiming to protect and promote competition in AI markets. Lastly, in June 2024, the government launched a consultation to guide the development of the AI Compute Access Fund and the Canadian AI Sovereign Compute Strategy. This initiative, supported by a CAD$2 billion investment, seeks to enhance domestic AI data processing capabilities and protect Canadian data and intellectual property. The consultations invite input from various stakeholders to shape future AI strategies.

For trade unions, a regulatory sandbox, “Négocier la gestion algorithmique : un guide pour les acteurs du monde du travail” (Negotiating Algorithmic Management: A Guide for Actors in the World of Work), was co-designed with trade unions in Québec to facilitate the negotiation of the responsible use of algorithmic management tools in workplaces. There are also a number of regulatory sandboxes led by the private sector and private regulatory bodies, including a certification grounded in AI principles, research, best practices, and human rights frameworks. This certification aims to ensure that AI systems are developed and deployed ethically. Additionally, the Certified Information Privacy Professional/Canada (CIPP/C) credential demonstrates expertise in applying Canadian information privacy laws across federal, provincial, and territorial levels. Relatedly, a CIFAR paper highlights the importance of safeguarding children’s rights in AI use, offering insights and best practices for regulating AI and establishing a child-centric, rights-based framework for thinking about responsible AI and children that is adaptable to diverse contexts of childhood, including child labour and exploitation, which may influence a newly tabled federal regulation, the Online Harms Act.

3. GOVERNANCE

3.1 Federal & Provincial Divisions of Legislative Powers

Due to the nature of Canadian federalism, AI regulation and labour regulation are fragmented and decentralized, with varying degrees of coordination and collaboration. A 2024 research paper on Canada’s AI governance system finds that Canada’s federalist model has resulted in a fragmentary landscape of AI policies, as Blair Attard-Frost noted:

The development of AI, as well as AI governance, in Canada is very decentralised. It gets distributed across all kinds of different actors. It is dependent on a giant network of public-private partnerships, as well as the three research centres. They are funded at the federal level and at the provincial level. The different levels of government cooperate with a large network of stakeholders with varying levels of interest between them.

For instance, the Canada Labour Code applies only to federally regulated industries and workplaces, such as the federal public service, banking, interprovincial and international transport, and telecommunications, which cover approximately 6 percent of the Canadian workforce. The vast majority of workers are therefore employed in provincially regulated industries and workplaces, such as education, natural resources, healthcare, and social services, and are governed by province-specific legislation covering labour standards, working hours, pay equity, and health and safety.

For the rest of this section, we will focus on the governance of the proposed Bill C-27, the omnibus legislation that passed second reading in the House of Commons on 24 April 2023 and contains three separate pieces of legislation: the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act (AIDA). Bill C-27 was subsequently referred to the parliamentary Standing Committee on Industry and Technology (INDU) for further study. During this period, INDU hearings on the proposed Bill were launched, resulting in numerous responses and studies.

3.2 Need for Due Consultation & Omnibus Bill

Several industry experts, civil society organizations, and academics, in their brief submissions, have criticized the lack of due consultation with stakeholders before the drafting of the legislation. The Assembly of First Nations, in its brief to the INDU, emphasized that “consultation and cooperation” are crucial before the introduction of legislation. Similarly, the Information Technology & Innovation Foundation (ITIF) has argued that the expedited discussions of Bill C-27, driven by “fear-based rhetoric about dangerous AI”, could be extremely harmful in the long run and have ostensibly resulted in flawed draft legislation. This is echoed by other civil society organizations, industry experts, and academics, who have urged the Standing Committee to redraft AIDA and consider a full public consultation.

A recurring theme across the submissions has also been the conflation of AIDA with the other parts of the Bill. In this regard, the Canadian Labour Congress (CLC) in its submission pointed out that “instead of a stand-alone Bill, AIDA is bundled into a larger Bill reforming commercial sector privacy law”. This, as noted by the Canadian Association of Professional Employees (CAPE), results in the “dispersion of the necessary attention required to adequately assess AIDA’s specific implications”.

3.3 Employment & Definitions of Harms & High Impact Systems

Other overarching concerns relate to the purpose and scope of the proposed Act. Currently, the purpose of the Act is only to prevent discrimination and protect against narrow individual harms, namely property, economic, physical, and psychological harms. However, as Prof. Ignacio Cofone points out, harm could also be to society and democracy more broadly, such as through intentional misinformation. Therefore, the CLC, the Alliance of Canadian Cinema, Television and Radio Artists (ACTRA), and the Association of Canadian Publishers, inter alia, recommend broadening the scope to include harms to both individuals and society. Meanwhile, the scope of the Act has been heavily criticized by organizations. In its current form, AIDA does not apply to any “government department, ministry, institution, or Crown corporation”. The Canadian Union of Public Employees (CUPE) (the largest trade union in Canada), the CLC, the Canadian Bar Association, and the International Civil Liberties Monitoring Group (ICLMG), inter alia, have all noted that the AIDA should apply to all government institutions. This concern stems from previous allegations that law enforcement agencies, such as the Royal Canadian Mounted Police (RCMP), have used AI facial recognition technologies for surveillance.

Several organizations have also raised concerns regarding the lack of a definition of “high impact systems”. This criticism was addressed by some of the proposed amendments to the Bill, which set out a list of categories of “high impact” AI systems, including AI usage in court proceedings, healthcare, and employment. Nonetheless, as the CUPE submission points out, AI usage in sectors such as “telecommunications, education, housing, critical infrastructure, transportation, immigration, and border security” is still outside the purview of the “high impact systems”. Prof. Teresa Scassa notes that while this exclusion may be an oversight, “it highlights the rather slap-dash construction of AIDA”. Meanwhile, the ITIF has argued that rather than “high impact” systems, AIDA should govern “high risk” systems, as high-impact systems could potentially be of low risk and thus not require governance. On a similar note, CUPE has claimed that AIDA must also have an “unacceptable risk” category for AI systems, as exists under the EU AI Act. This is necessary because certain AI applications, such as cognitive behavioural manipulation, facial image scraping, emotion recognition in the workplace, and identification and categorization of people using biometrics, ought to be banned outright. Beyond that, CUPE argues that AIDA should include a clause requiring compulsory consultation with workers and unions before any AI systems are operationalized in the workplace. This should be coupled with a whistleblower protection clause to safeguard workers when reporting “misconduct, unethical decision-making, or violations of the Act without fear of reprisal”.

Also relevant is a May 2024 report adopted by a separate federal House of Commons committee, the Standing Committee on Human Resources, Skills and Social Development and the Status of Persons with Disabilities (HUMA), on the implications of AI technologies for the Canadian labour force. Eight recommendations were proposed, focusing on AI’s potential impacts on workers, businesses, and workplaces; the need to develop a mechanism for the federal government to hear from experts on emerging AI issues; and better data collection and research. Among other things, the recommendations call on various federal departments and offices to:

  • Review federal labour legislation to assess its capacity to protect diverse workers’ rights in the context of current and future implementation of AI technologies;
  • Develop a framework, in collaboration with provinces and territories and labour representatives, to support the ethical adoption of AI technologies in workplaces;
  • Invest in skills training to increase the adaptability of the Canadian workforce to the use of AI technologies;
  • Review the impact of AI on the privacy of Canadian workers, create proper regulations to ensure the protection of Canadians from AI, ensure those regulations can be and are properly enforced, and consider how this will interact with provinces and territories;
  • Ensure the federal Advisory Council on AI encompasses a wide diversity of perspectives, including those of labour, academics, and civil society; and
  • Develop a methodology to monitor labour market impacts of AI technologies over time, including AI’s unemployment risks.

3.4 Independence & Oversight of the AI and Data Commissioner

Trade unions like CAPE and civil society organizations like AI Governance and Safety Canada have especially argued for independent enforcement, oversight, and review of AIDA, advocating for these responsibilities to lie with the AI and Data Commissioner as an independent regulator rather than with the Minister of Innovation, Science and Industry. Some of these issues have been addressed by the proposed amendments, which grant more powers to the Commissioner, yet the amendments fall short of establishing complete independence from the Minister.

Furthermore, given the ever-evolving nature of AI, organizations like the ICLMG have suggested annual public reporting and periodic reviews of AIDA and its regulations. Blair Attard-Frost similarly underscores the importance of reviews but also argues that the scope of the Act should be expanded to consider “co-regulation, coordination, and collaboration on AI governance” extending beyond “government and industry and into a great variety of civil society organizations”. More diversity in media coverage of and debate on AI issues is also needed.

While organizations such as the Council of Canadian Innovators, which comprises over 150 CEOs of high-growth Canadian companies, have noted that Canada should regulate AI at a rapid pace, they also emphasize the need for caution and advocate for the adoption of ‘Responsible AI Leadership Principles’, which include increasing public trust in AI products and establishing unambiguous regulations. As of now, the government, as the CLC noted in its submission, seems to have a “light-touch” regulatory approach, which is deeply inadequate. Therefore, various stakeholders, including experts, academics, and civil society organizations, have called on the government to reintroduce a revised and improved AIDA.

3.5 Labour Relations & Collective Bargaining

Beyond submissions to the government, unions are also slowly preparing for the impending technological change. For instance, Unifor, Canada’s largest private sector union, has prepared reports and documents outlining strategies for workers to confront technological changes at workplaces. Similarly, CUPE is aiming to protect its members’ rights and provide them with job security by developing “AI clauses” in its collective agreements. Provisions addressing ‘technological change’ are certainly not novel in the Canadian context, as unions have frequently included these in their negotiated agreements. Depending on the specific language, these ‘technological change’ provisions may, inter alia, require the employer to provide written notice to both employees and the union, offer retraining to affected employees, and engage in consultation and discussion with the union before the implementation of the technology. Additionally, provisions relating to monitoring, tracking, surveillance, and data collection have also been repeatedly incorporated into negotiated agreements. While these may be broad enough to ostensibly encompass issues concerning generative AI, there is still ambiguity, highlighting the urgent need for specific clauses that deal with AI-related concerns.

In the end, the knowledge and capacity that unions and organizations develop to effectively address the use of AI will be most impactful if, as De Stefano and Doellgast note, they possess “real influence on decisions, through legal bargaining rights backed up by encompassing collective agreements, employment protections and data protection rules.”