March 19, 2020.
Brenda McPhail, PhD, Canadian Civil Liberties Association
For comments, please email: email@example.com
Technology can be used as a tool to support human health and dignity, or to erode our values and our rights. We have to choose, and our choices need to be justifiable not just during, but after the panic has subsided. Privacy might seem like the least of our worries in the midst of a global pandemic.
But it is precisely when we’re afraid that we might be inclined to offer up the rights we normally hold dear in exchange for safety—or even just feeling safer, which is not the same thing. Rights to liberty in times of quarantine, rights to mobility in times of travel restrictions, and rights to equality when emergency measures affect some more than others must all be carefully watched, and of course CCLA is on guard to ensure our governments continue on the path of careful, constrained, and minimal restrictions when taking emergency measures. But these Charter rights all have something in common. Liberty, mobility, and equality are universally acknowledged as so fundamental that when the emergency is over and the crisis is contained, there is relatively little question that they will be unhesitatingly restored, or that there will be hell to pay if they are not.
I worry that privacy isn’t always appropriately recognized as being in that same category. Not because it isn’t fundamental; in fact, it is an internationally recognized human right in its own right, and a threshold right that is at the core of liberty and facilitates equality. But privacy is also a right that we are actively, albeit incorrectly, told by businesses and law enforcement bodies alike that we might want to trade away. We are habituated into swapping privacy for convenience—I want to know how far I biked so I’ll let an app report my exercise activity to Google—or even convinced it is for our own benefit—I want a safe neighborhood so maybe it won’t hurt to let police check out the data from my Amazon Ring doorbell.
This means that we must be particularly alert to privacy erosions in times of emergency that may shift the social license for such intrusions after the crisis has passed.
Let’s be clear. Timely, detailed and accurate information is absolutely essential for effective public health interventions. We’re sometimes inclined to think of the word “surveillance” as always bad, but of course it is not. When it comes to disease tracking, there is a long and necessary practice of surveillance of infectious disease, which is acknowledged by epidemiologists and public health bodies as core to the ability to design, deliver, and evaluate public health activities. Just as we understand that the employment of lifeguards to surveil us at a public swimming pool mitigates the risk of someone drowning, we know the employment of good disease surveillance practices can support the development of evidence-based risk mitigation strategies. Furthermore, the transparency of information about the progression of disease in times of pandemic is important for public education, and our ability to trust in the decisions of our public health agencies is fostered if we can see the data they’re basing their decisions on and know they are acting based on science and evidence.
The trick, of course, is ensuring that we find ways to get the necessary information that are proportionate and minimally intrusive for the humans whose health is at the core of the data collection efforts—even if the proportionality analysis may look a little different during a pandemic.
In the big data age, there are already examples outside of Canada of governments looking to leverage pools of existing data about people, including the location information that so many of our networked devices, particularly the phones most of us carry everywhere we go, collect. Israel has approved emergency measures allowing its security agencies to track individuals identified as possibly ill with COVID-19 using phone-based location information obtained from telecommunications companies, and is using it to determine their compliance with quarantine orders, as well as to figure out who else those people may have been in contact with and who is therefore at risk of infection. The temporary laws allowing this were passed in the middle of the night, without parliamentary approval. In the US, the Wall Street Journal reports conversations about potential screening tools between the US Government and the tech companies Palantir (which helps the US Department of Homeland Security conduct immigration screening and workplace raids) and Clearview AI (which has been served cease and desist letters by most major social media platforms for scraping billions of images from their sites and using them in a facial recognition application marketed to police). Taiwan, meanwhile, credits its low infection rates to intensive data linking between immigration and customs databases and its national health insurance database, which allowed real-time alerts during clinical visits, along with mobile phone tracking to enforce quarantines for travelers.
The examples above might sound reasonable or creepy to you, and each is problematic in different ways when it comes to rights and democratic accountability. The bottom line is, while it’s important not to indulge in a knee-jerk reaction against leveraging data and technology to surveil disease—and more specifically, humans who carry or are at risk of disease—data isn’t going to solve all our problems either, and it may well create others. We should be realistic about where more data collection (or better analysis of what we already have) might help support accountable decisions, and where it will hurt human rights and, fundamentally, human dignity. There are many ways in which data-driven surveillance could cross the line from necessary to disproportionate, particularly when it’s untargeted, indiscriminate, or insufficiently restrained. Tools pitched as supporting the public good could become tools whose impacts spread out from compromising privacy to facilitating the removal of liberty, mobility, or equality.
So we must tread carefully in allowing such efforts to proceed in our Canadian democracy. There’s a lot to think through, across the continuum of conception, design, implementation, and ultimately, deletion of such programs. Can we design something fit for purpose, with no function creep? What’s necessary as opposed to what might be nice to have, and how do those lines get drawn, and by whom? Is individual-level data needed, might synthetic data serve the purpose, and when will aggregate data be sufficient to the identified need(s)? We must also carefully consider the risks of for-profit company engagement in the design and implementation of such surveillance tools. When profit-driven third parties become involved, there is the added risk that profit motives may underlie professions of potential public good, and that data provided during the crisis may be retained and used afterwards, absent stringent safeguards.
The Electronic Frontier Foundation has identified some basic principles that must be core to any data-driven approaches to monitoring people who have contracted COVID-19.
To that I’d add: only those who legitimately need the information, and who are charged with and accountable for using it for the public good, should get to access it. In the current health crisis, that probably means epidemiologists and legitimate public authorities. And they should only be allowed to use it to promote broadly socially accepted public health objectives for the duration of the crisis, with a public-facing system of oversight and review to ensure that is truly the case. Emergency measures, including the tools to support those measures, must never become permanent. When it comes to individual-level surveillance for the ‘public good’, we must resist normalizing such efforts or the tools that support them.