Canada’s Privacy Commissioner is urging the federal government to develop a comprehensive national strategy on artificial intelligence that prioritizes privacy protections and shields individuals from AI-related harms, saying current rules lag behind rapid technological shifts. In remarks delivered to policymakers and stakeholders, the commissioner outlined a range of concerns about how AI systems are deployed and regulated and stressed the need for clear guardrails that uphold Canadians’ rights.
In particular, the commissioner highlighted risks to privacy and personal data inherent in many AI-driven technologies, noting that algorithms and automated decision-making tools can collect, analyze and infer sensitive information without sufficient transparency or consent. “Without clear direction and enforceable standards, Canadians remain vulnerable,” the commissioner said, pushing for stronger safeguards and a strategic approach that addresses both innovation and human rights.
The call comes amid a global race among governments to regulate AI, with other jurisdictions such as the European Union and select U.S. states already advancing frameworks that emphasize accountability, risk assessments and rights-based safeguards. Canada’s current legislative framework — including the Personal Information Protection and Electronic Documents Act (PIPEDA) — predates many modern AI applications and has faced criticism from privacy advocates for being outdated in the face of emerging technologies.
The commissioner’s recommendations include mandatory privacy impact assessments for high-risk AI systems, stronger requirements for explainability, and clearer avenues for individuals to seek redress when automated systems affect their rights. Officials also called for better resourcing of regulatory bodies that oversee data and privacy laws so they have the capacity to enforce new standards.
Business leaders and technology proponents argue that any strategy must strike a balance between protecting rights and enabling innovation. AI is increasingly central to sectors such as healthcare, finance and public services, where it has the potential to improve outcomes but also poses risks if deployed without adequate oversight. The privacy commissioner acknowledged that innovation is important but said it should not come at the expense of fundamental rights.
In testimony to lawmakers, the commissioner pointed to examples where AI systems have led to biased outcomes, discriminatory decisions and opaque data practices that left individuals unsure how or why decisions were made about them. She emphasized that embedding human-centric principles into AI governance would help ensure technologies serve the public good while minimizing unintended harms.
Experts say a national AI strategy could also enhance public trust and provide a competitive advantage for Canadian industries that adopt ethical AI practices. Without clear rules, companies may face uncertainty or reputational risks, and individuals may lose confidence in systems that handle their data.
The federal government has previously acknowledged the need for AI governance frameworks, but a fully articulated national strategy has yet to be finalized. The commissioner’s call adds urgency to ongoing discussions among lawmakers, regulators and advocacy groups about how best to protect privacy while enabling innovation in the age of artificial intelligence.