Facial Recognition

The issue

BREAKING: Read our comments on the Privacy Commissioners’ Investigation Report on Clearview AI here!

Facial recognition uses the physical characteristics of our face to create a mathematical model that is unique to us, just like a fingerprint. At CCLA, we think it’s better to call it “facial fingerprinting” rather than recognition, because it gives a more accurate impression of what we’re talking about: an identifier inextricably linked with our body. In a sense, it is an extreme form of carding, because it renders all of us walking ID cards.
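To make the fingerprint analogy concrete: the "mathematical model" can be pictured as a list of numbers (an embedding) extracted from a face image, and two photos are judged to show the same person when their numbers are sufficiently similar. The sketch below is a toy illustration only, not any vendor's actual system; the three-number embeddings, the `same_person` helper, and the 0.8 threshold are all assumptions for illustration (real systems derive vectors with hundreds of dimensions from neural networks).

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def same_person(emb_a, emb_b, threshold=0.8):
    """Toy matching rule: declare a match if similarity clears a threshold."""
    return cosine_similarity(emb_a, emb_b) >= threshold

# Toy 3-number "facial fingerprints" (illustrative values only).
alice_photo_1 = [0.90, 0.10, 0.30]
alice_photo_2 = [0.85, 0.15, 0.28]
stranger_photo = [0.10, 0.90, 0.20]

print(same_person(alice_photo_1, alice_photo_2))  # True: two photos of one face
print(same_person(alice_photo_1, stranger_photo))  # False: a different face
```

Because the embedding is derived from the face itself, it cannot be changed the way a password can, which is why the "walking ID card" framing fits.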

Facial recognition runs the risk of annihilating privacy in public. Imagine if someone were able to identify your name, address, place of work, friend group, or many other private details simply by taking a picture of you in public and running it through a database. While this may sound like the talk of a conspiracy theorist, private companies such as Clearview AI have already made it possible by collecting billions of photographs from the internet and social media platforms, then giving access to anyone who pays for a subscription. These companies and their subscribers can then track our consumer behaviour, politicians can use our data to influence decision-making, and strangers can learn where we live and work. We believe facial recognition puts your Charter rights at risk: your freedom of association, your freedom of expression, your right to be free from unreasonable search and seizure, the presumption of innocence, and even your protest rights.

There is also the issue of discrimination that is inextricably tied to current facial recognition programs. Study after study has demonstrated that facial recognition technology is most accurate on white male faces and performs worse on women, youth, and people of colour, particularly Black individuals. Indeed, many companies that make this technology, including IBM, Microsoft, and Amazon, have voluntarily chosen to stop selling it because they recognize it may do terrible social harm.

Yet in Canada, facial recognition technology has been deployed by police forces without notice, meaningful consultation, or public oversight and accountability.

CCLA does not believe this lack of transparency and privacy should be accepted as a normal part of life in the 21st century. CCLA believes the question is not how or when to use facial recognition, but whether to use it at all. Until there is a chance to fully assess the risks, the accuracy of the technology, and the cost of mistakes and failures, this technology should not be used, particularly for law enforcement purposes.

Our recent work


Clearview AI is an American tech company that scraped three billion photos of people from the internet, created a facial recognition system to exploit that database, and is marketing access to police forces and other customers. Our concerns go deeper than the revelations about Clearview AI itself: other facial recognition systems are available and already in use.

CCLA has grave concerns regarding police transparency in the use of facial recognition technology by Canadian law enforcement. These concerns led CCLA to write to various police forces across the country to ask which of them had considered or were using Clearview without the public’s knowledge and consent.

The Ontario Provincial Police, Edmonton Police Service, and Halifax Regional Police all stated they had no records that matched our inquiry. However, the media subsequently exposed that all three of those police services were already using Clearview AI. We contacted the RCMP and Calgary Police, who asked for an extension of time; while we waited, the media reported on their use of Clearview AI as well. Media stories then broke about three more police forces, which we also contacted. Out of the nine police departments we contacted, eight have been exposed as using Clearview AI.

An investigation launched by the Privacy Commissioner of Canada, together with the Commissioners of Quebec, Alberta, and BC, recently resulted in Clearview AI's withdrawal from Canada. The results of that investigation are still anticipated and will, we hope, reveal the ways in which Clearview AI failed to meet the requirements of Canadian privacy laws, and the degree to which police forces erred in using it without properly assessing it for privacy compliance.

In May 2019, our Executive Director, Michael Bryant, wrote a deputation recommending a moratorium on the future use of facial recognition technology. He further argued that facial recognition technology is illegal because it amounts to a mass, indiscriminate, disproportionate, unnecessary, and warrantless search of innocent people without reasonable and probable cause.

In July 2020, CCLA co-signed a letter from the International Civil Liberties Monitoring Group to Minister of Public Safety Bill Blair asking for a ban on facial recognition surveillance by federal law enforcement and intelligence agencies, a meaningful public consultation on all aspects of facial recognition technology in Canada, and clear and transparent policies and laws regulating the use of facial recognition in Canada.

We think the use of Clearview points to a larger crisis in police accountability around the acquisition and use of emerging surveillance tools, and we need to stand firm to defend our privacy rights and address the harmful impacts of mass surveillance.