1. REGULATION
1.1 Proposed Federal Regulation C-27 & Guiding Principles
Canada has been at the forefront of AI regulation and governance, with the federal government actively developing a legislative framework to address the rapid growth and deployment of AI technologies. One of the key federal regulatory initiatives is the Artificial Intelligence and Data Act (AIDA), introduced as part of Bill C-27, the Digital Charter Implementation Act, 2022. Bill C-27 builds on a series of engagements between federal ministers and the Canadian public from 2016 to the end of 2018. AIDA seeks to regulate international and interprovincial trade and commerce in AI systems by establishing common requirements, and to regulate high-impact AI systems by focusing on their design, development, and use to ensure safety, accountability, and compliance. It aims to prevent individual and certain collective harms associated with biased AI outputs, establishing a risk-based approach to the regulation of AI activities. The Act introduces the role of an AI and Data Commissioner, who will oversee compliance, support educational initiatives, and enforce the Act’s provisions. AIDA could bring far-reaching changes to the world of work, including explicit regulation of the use of automated decision-making across the workforce, especially in federally regulated industries and workplaces.
Complementing AIDA, Canada’s Digital Charter sets out high-level guiding principles to ensure that privacy is protected, data-driven innovation is human-centered, and Canadian organizations can lead in global AI innovation. Although the Digital Charter itself lacks enforceable power, it establishes a foundational framework for ethical AI development in Canada. The proposed Bill C-27 expands on these principles, aiming to modernize the regulatory landscape by establishing a tribunal for privacy-related appeals and introducing significant penalties for non-compliance. This underscores the government’s commitment to safeguarding Canadians’ data rights and ensuring AI systems are developed responsibly. Launched in 2019, the Digital Charter laid out the ten principles that underpin the Canadian approach to AI regulation:
- Universal Access: All Canadians will have equal opportunity to participate in the digital world and the necessary tools to do so, including access, connectivity, literacy and skills.
- Safety and Security: Canadians will be able to rely on the integrity, authenticity and security of the services they use and should feel safe online.
- Control and Consent: Canadians will have control over what data they are sharing, who is using their personal data and for what purposes, and know that their privacy is protected.
- Transparency, Portability and Interoperability: Canadians will have clear and manageable access to their personal data and should be free to share or transfer it without undue burden.
- Open and Modern Digital Government: Canadians will be able to access modern digital services from the Government of Canada, which are secure and simple to use.
- A Level Playing Field: The Government of Canada will ensure fair competition in the online marketplace to facilitate the growth of Canadian businesses and affirm Canada’s leadership on digital and data innovation, while protecting Canadian consumers from market abuses.
- Data and Digital for Good: The Government of Canada will ensure the ethical use of data to create value, promote openness and improve the lives of people—at home and around the world.
- Strong Democracy: The Government of Canada will defend freedom of expression and protect against online threats and disinformation designed to undermine the integrity of elections and democratic institutions.
- Free from Hate and Violent Extremism: Canadians can expect that digital platforms will not foster or disseminate hate, violent extremism or criminal content.
- Strong Enforcement and Real Accountability: There will be clear, meaningful penalties for violations of the laws and regulations that support these principles.
1.2 Supportive AI Initiatives & Tools from the Federal Government
Canada has implemented several supportive initiatives to guide the responsible development and adoption of AI technologies. Among these, the Advisory Council on AI provides strategic advice to the government on AI policy, governance, and adoption, ensuring that Canada's regulatory approach remains aligned with global standards and best practices. Further reinforcing these efforts, the Directive on Automated Decision-Making, issued by the Treasury Board of Canada, applies specifically to federal departments using automated decision-making systems. This directive mandates that AI applications in government services must align with principles of transparency, accountability, legality, and procedural fairness.
It sets out requirements for federal institutions to conduct Algorithmic Impact Assessments (AIAs) to evaluate the risks associated with automated decision systems, thereby promoting responsible AI use within public administration. The AIA tool helps federal institutions determine the level of risk associated with their AI applications and guides the measures needed to mitigate those risks, supporting transparency and accountability in AI deployment. The federal government also provides a pre-qualified list of AI vendors through the AI Source List, ensuring that government entities can engage with trusted and vetted AI suppliers for their projects.
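To make the mechanics of such an assessment concrete, the following minimal sketch shows how a questionnaire-based scoring scheme of this kind can map answers to an impact level. The question weights, the mitigation rule, and the level thresholds below are hypothetical illustrations written for this case; they do not reproduce the official Treasury Board scoring rules.

```python
# Illustrative sketch of a questionnaire-style algorithmic impact scoring
# scheme. All weights, the mitigation discount, and the level cut-offs
# are hypothetical placeholders, not the official AIA values.

RISK_WEIGHTS = {
    "affects_rights_or_benefits": 4,   # hypothetical weighted questions
    "decision_is_fully_automated": 3,
    "uses_personal_information": 2,
    "outcome_is_reversible": -2,       # reversibility lowers the score
}

LEVEL_THRESHOLDS = [                   # hypothetical normalized cut-offs
    (0.25, "Level I (little to no impact)"),
    (0.50, "Level II (moderate impact)"),
    (0.75, "Level III (high impact)"),
    (1.00, "Level IV (very high impact)"),
]

def impact_level(answers: dict[str, bool], mitigation_ratio: float) -> str:
    """Map yes/no questionnaire answers and a 0-1 mitigation ratio
    (share of applicable mitigation measures in place) to a level."""
    raw = sum(w for q, w in RISK_WEIGHTS.items() if answers.get(q))
    max_raw = sum(w for w in RISK_WEIGHTS.values() if w > 0)
    score = max(raw, 0) / max_raw
    if mitigation_ratio >= 0.8:        # strong mitigation discounts the score
        score *= 0.85
    for cutoff, level in LEVEL_THRESHOLDS:
        if score <= cutoff:
            return level
    return LEVEL_THRESHOLDS[-1][1]

if __name__ == "__main__":
    answers = {"affects_rights_or_benefits": True,
               "uses_personal_information": True}
    print(impact_level(answers, mitigation_ratio=0.9))  # -> Level III
```

The design choice worth noting is that risk and mitigation are scored separately: under a scheme like this, an institution cannot lower its impact level through design decisions alone, but only by demonstrating concrete mitigation measures.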
The Global Partnership on AI (GPAI), of which Canada is a co-founding member, exemplifies Canada’s commitment to international collaboration in AI governance, focusing on the responsible development and use of AI technologies globally. The AI and Data Governance Standardization Collaborative (AIDG) seeks to advance standardization strategies across Canada and internationally. The Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems further illustrates Canada’s proactive stance in AI regulation. This code of conduct encourages private companies to adopt responsible AI practices ahead of formal regulations, emphasizing principles such as safety, fairness, transparency, and human oversight.
1.3 Provincial Initiatives
Provincial governments in Canada also play a significant role in AI regulation, contributing to a decentralized and multifaceted regulatory landscape. Québec has taken a robust approach with Law 25, which modernized its Act Respecting the Protection of Personal Information in the Private Sector and regulates the collection, use, and disclosure of personal information, including that of employees, in the private sector. Under this Act, employees have the right to be informed first, in plain language, of the purposes for which and the means by which information is collected about them, including when it relates to their work performance. Furthermore, individuals can access and rectify the personal information collected about them and have the right to withdraw their consent. Employers and enterprises must publish the requisite policies, conduct privacy impact assessments (PIAs), and appoint designated privacy officers to comply with the provisions of this Act. This law aligns closely with the European Union’s General Data Protection Regulation (GDPR), establishing stringent requirements for businesses handling personal data. Québec’s initiative underscores the province’s commitment to safeguarding individual privacy rights in the face of growing AI applications that leverage personal data.
Ontario’s Working for Workers Act, 2022, stands out as a notable example, focusing on workers’ rights in the context of AI and digital platforms. The Act mandates that employers with 25 or more employees develop written policies on electronic monitoring, ensuring transparency and accountability in the use of AI-driven employee monitoring tools. This aligns with broader efforts to protect workers’ rights and privacy in an increasingly digital workplace. In 2024, Ontario’s proposed Bill 194, the Enhancing Digital Security and Trust Act, 2024 (EDSTA), provided a definition of AI systems, a term left unspecified in the Working for Workers Act, 2022. The EDSTA also mandates the development of cybersecurity programs, with possible technical standards to be set by the government. This legislation underscores the province’s commitment to safeguarding digital information, particularly data involving minors, and to establishing clear accountability for the use of AI in the public sector. Together, these initiatives reflect Ontario’s proactive stance on AI governance, emphasizing the importance of transparency, privacy, and security in digital operations across provincial entities.
1.4 Role of the Privacy Commissioners & Laws
The Offices of the Privacy Commissioner play a pivotal role in overseeing the implementation and enforcement of privacy laws related to AI and data management at the federal, provincial, and territorial levels. The Commissioners advocate for the modernization of privacy protections to keep pace with technological advancements, particularly those driven by AI. This includes ensuring that frameworks like the Personal Information Protection and Electronic Documents Act (PIPEDA) and its provincial counterparts are updated to address the unique challenges of AI, such as data privacy, consent, and transparency in AI-driven decision-making processes.
Similarly, Alberta and British Columbia have their own Personal Information Protection Acts, which govern how personal information is handled within their respective jurisdictions. These acts further contribute to Canada’s comprehensive approach to AI regulation. At the time of writing this case, Alberta is actively reviewing and updating its privacy legislation to address the challenges posed by AI and modern data practices. The Alberta Standing Committee on Resource Stewardship is currently examining the province's Personal Information Protection Act (PIPA), signaling potential changes to enhance privacy protections. In June 2024, Alberta's Information and Privacy Commissioner, Diane McLeod, presented recommendations to update PIPA, emphasizing the need to reflect the modern technological landscape and how personal information is shared with organizations. Key suggestions include recognizing the protection of personal information as a fundamental human right, granting individuals explicit access rights to their data held by organizations, introducing a "right to be forgotten," and implementing heightened protections for children's personal information.
At the end of 2023, the federal, provincial, and territorial privacy commissioners developed a set of principles to advance the responsible, trustworthy, and privacy-protective development and use of generative AI technologies across Canada. The privacy commissioners also collaborate with other regulatory bodies and stakeholders to address potential gaps in AI governance, ensuring a coordinated approach to protecting Canadians’ rights, with a focus on the unique impacts on vulnerable groups and on high-impact contexts such as employment. This includes monitoring the impacts of AI on privacy and making recommendations for legislative changes where necessary.
Further reinforcing this direction, the Privacy Commissioner of Canada, Philippe Dufresne, appeared before the Committee in September 2024, advocating for interoperability between provincial and federal privacy laws to ensure consistent protection of personal information across jurisdictions. The alignment of provincial initiatives with federal efforts, such as the proposed Consumer Privacy Protection Act (CPPA) under Bill C-27, underscores a nationwide commitment to strengthening privacy regulations in response to the increasing integration of AI technologies. Through these efforts, the Commissioner seeks to balance innovation with the imperative to protect individual rights, contributing to a trusted digital environment in Canada.
1.5 Court & Related Decisions
Recent legal rulings in Canada have highlighted the challenges and complexities of applying existing laws to new technologies, including AI, and how these decisions intersect with ongoing regulatory efforts.
The Supreme Court of Canada, in the case York Region District School Board v. Elementary Teachers’ Federation of Ontario, 2024 SCC 22, recognized that Ontario public school teachers’ privacy rights in the workplace are protected by section 8 of the Charter, which protects the right to be secure against unreasonable search and seizure. While the case does not specifically address AI, it is relevant to Canada’s AI regulatory landscape as it underscores the necessity for clear privacy guidelines and accountability, especially when new technologies are used in monitoring or managing employees. In addition, the Court emphasized that an employee’s reasonable expectation of privacy is highly contextual, involving a balance between privacy rights and the employer's operational needs and management rights.
The Privacy Commissioner of Canada’s investigation into the RCMP’s use of Clearview AI’s facial recognition technology further illustrates the complexities of AI regulation in Canada. The investigation concluded that the RCMP’s use of Clearview AI violated the Privacy Act, as the technology involved searching a database compiled unlawfully by scraping images from the internet without user consent. This case underscores the need for clear guidelines and compliance with privacy laws when integrating AI technologies, particularly in public institutions. The Privacy Commissioner has also called on Parliament to amend the Privacy Act to explicitly require federal institutions to ensure that third-party data sources comply with legal standards. This call reflects a growing recognition of the challenges posed by AI and digital technologies in safeguarding privacy rights.
In the realm of copyright, the Federal Court’s decision in Blacklock’s Reports v. Attorney General of Canada reinforces the principle that digital locks should not override user rights, such as fair dealing, and instead must coexist harmoniously. This ruling appears to align with Canada’s AI regulatory approach, as reflected in the proposed AIDA, which emphasizes a balanced framework that protects both innovation and fundamental rights. The decision underscores the importance of ensuring that AI regulations do not unduly restrict fair access to data or content, which seems crucial for fostering innovation while safeguarding user rights.
From the authors’ perspective, the Blacklock’s Reports case highlights a critical point of caution for AI regulations: the need to avoid overly broad restrictions that could inhibit lawful uses, such as fair dealing or fair access to AI-generated outputs. By affirming that user rights should not be unjustly limited by technological measures, the ruling arguably supports Canada’s AI strategy, which seeks to balance protection against potential harms with the promotion of responsible and equitable access to AI technologies. This balance is seen as crucial in AI governance to prevent a scenario where protective measures could inadvertently suppress legitimate, innovative, or research-related uses of AI. Thus, the decision could be viewed as providing a judicial precedent that aligns with the nuanced approach of Canada's AI regulatory framework, reinforcing the need to carefully navigate the intersection of technological protection and user rights.
Additionally, the Federal Court of Canada has provided guidelines on the use of AI-generated content in legal proceedings, emphasizing the need for transparency and caution when incorporating AI into court documents. This directive reflects broader concerns about the authenticity and reliability of AI-generated information, especially in legal contexts.
Provincial courts have also weighed in on the use of AI, particularly in legal briefs and judgments. The Provincial Court of British Columbia, for instance, issued a directive cautioning against the uncritical use of AI in legal cases, stressing the potential dangers of relying on AI-generated legal analyses.
These decisions collectively contribute to a growing body of jurisprudence that shapes the responsible use of AI in legal and administrative contexts across Canada.