Today marks Data Privacy Day (or as it’s known in some parts of the world, Data Protection Day), a time to reflect on the state of privacy protection in Canada and around the world.
We almost take it for granted these days that privacy is under pressure as emerging technologies find new ways to transform yesterday’s cautionary tales into today’s lived realities. It has been nearly two decades since our highest court recognized that while George Orwell’s fears of a surveillance society had not yet been fully realized, he was an astute early observer of the direction things were heading. The privacy landscape has only gone downhill since then.
Among the growing pool of privacy threats on the horizon, one deserves particular attention: an array of neurotechnologies is developing the ability to read people’s very thoughts, putting one of the last remaining private refuges on the technological chopping block.
As far back as 2017, a group of leading researchers published a manifesto on the technology in Nature, warning that “existing ethics guidelines are insufficient for this realm,” while a 2024 CIGI report raises the alarm about what will happen if people are subjected to thought-based manipulation as the technology moves towards non-medical mass adoption. These concerns are regrettably apt: data-driven persuasion, in which our every digital interaction is monitored and used to manipulate us, has already become an all too familiar feature of our online ecosystem.
If brain-reading tech is next week’s dystopian nightmare, AI is undoubtedly today’s. AI systems are leveraging our personal data to reshape everything from our schools to our workplaces to our interactions with government. In Canada and around the world, this reliance on data-fueled automation is obscuring how decisions are made, with meaningful scrutiny often arriving only after many people’s lives have been ruined.
Too often, AI systems are error-prone and embed discrimination into their automated assessments. Facial recognition systems, for example, misidentify Indigenous women 120 times more frequently than they do white men. Despite this, AI systems are being adopted at breakneck speed and with little thought for the implications.
Some of these algorithms claim to predict what people will do, with potentially dire consequences such as longer prison stays or denial of immigration status.
AI is also changing the nature of policing by supercharging older surveillance capabilities. Live video feeds from traffic cameras and municipal CCTV are being transformed (by Montreal police and others) into automated networks that can track us and are constantly on the lookout for perceived threats.
Edmonton police recently announced they would be testing facial recognition on live video feeds from their body-worn cameras, turning what was once a police accountability mechanism into a powerful, if flawed, surveillance capability.
The impacts of these shifts will become more severe as we move towards a world where police rely on opaque technological assessments in real time.
Just last October, an unsuspecting teenager in Baltimore was surrounded by police with guns drawn after an AI tool monitoring a live video feed mistook a bag of chips for a gun and dispatched local police. Thankfully, no one was hurt, but we can expect more such incidents.
Adoption and use of this arsenal of capabilities is happening carelessly, without any meaningful framework, and the federal government’s “adoption first” approach to AI leaves little room for the rules needed to curb the technology’s many negative implications.
Far from it: a provision buried in Bill C-15 (the federal budget bill) could exempt any company, government official or agency from complying with any federal law (other than the Criminal Code) for up to six years, so long as the government believes the exemption would encourage innovation, competition or economic growth. In its rush to adopt AI, the government could use this wide-ranging provision to sweep away the minimal privacy and other legal safeguards already in place and conduct real-world AI testing affecting millions.
The government is also in the process of exempting federal political parties from provincial privacy laws while putting no meaningful federal rules in their place. In an era where data- and AI-driven political campaigning is already creating dangerous opportunities for manipulation, this move to immunize political parties from baseline legal protections is troubling.
But there are also some encouraging developments.
Already in 2026, Ontario’s privacy commissioner and Human Rights Commission jointly issued a framework to guide the responsible use of AI, and BC’s privacy commissioner prevented the City of Richmond from building a camera network on behalf of the RCMP absent clear legal authority. The Law Commission of Ontario is also spearheading a project to guide how these powerful tools are adopted and used in the criminal justice system.
Last week, CCLA joined a broad civil society initiative in launching a public consultation on AI to better understand all the different ways AI is impacting people’s lives.
We are also seeking input from the public and from experts on the types of controls that we need to have in place to ensure AI is adopted in a way that respects civil liberties and reduces the negative implications of the technology.
Solutions to our myriad privacy challenges are not straightforward.
So on this Data Privacy Day, take a moment to read about the challenges posed by AI and to share your thoughts on the implications of this emerging technology.