Authors: Jonathan Obar and David Gelb, York University  

Jonathan A. Obar is an Associate Professor in the Department of Communication & Media Studies at York University. His research and teaching focus on information and communication policy, and the relationship between digital technologies, civil liberties and the inclusiveness of public cultures. Academic publications address big data and privacy, online consent interfaces, corporate data privacy transparency, and digital activism. 

David Gelb is an Associate Professor in the Department of Design at York University. His research focuses on ethical interfaces and design pedagogy. He teaches user-centred design, visual communication design and interaction design theory at the undergraduate and graduate levels. Central to his professional practice is user experience research for clients in the health sciences, education and cultural sectors. 

The Design of an Online Consent Process Is Vital to Ensuring Individuals Can Decide About Their Own Information Protections

Meaningful consent decisions require access to information (i.e. notice materials) about the organization attempting to obtain an individual’s consent. When it comes to artificial intelligence (AI), those notice materials should contain, at minimum, information about the data used to train an AI, the algorithms driving the analyses, and the automated results leading to organizational decision-making. Making these details available is one step towards helping people consider whether to consent to the organizational practices in question, and understand the implications of agreeing to terms of service.

As AI is considered for automated decision-making by employers, financial institutions, universities, marketers, and beyond, individuals require accessible and helpful information to exercise oversight. The threats of big data discrimination and related AI inequities are growing concerns, especially for members of marginalized communities.

Yet, making information publicly available is not the same as ensuring individuals can access or understand that information.  

The Challenge of Text-Based Notices 

“Notice materials” often refer to terms of service and privacy policies conveyed publicly via static, text-heavy websites. Research suggests few read text-based notices. The reasons vary. Text-based policies are often long, complicated, and boring. Deceptive user interface designs draw attention away from policies and towards appealing “agree” buttons via clickwraps. People are in a rush to join services quickly, to shop, to game, to connect on social media, and can’t be bothered to spend hours reading about an organization’s algorithmic processes.

In its Guidelines for Obtaining Meaningful Consent, the Office of the Privacy Commissioner of Canada (OPC) calls on organizations to “be innovative and creative” when designing online consent processes. The goal is to make notice materials more appealing, engaging, and dynamic. The OPC is clear that “organizations should do more than simply transpose in digital form, their paper-based policies from the offline environment”. New preliminary research funded by the OPC suggests that social media-style videos may offer notice improvements that actually contribute to the individual awareness vital to meaningful consent protections online. 

Designing Video Notice Supplements 

“Videos over documents any day” – a comment from a study participant discussed in a preliminary report by researchers at York University. Many participants expressed similar sentiments, suggesting a preference for video notices over text-based notices.

Funded by the Office of the Privacy Commissioner of Canada and the Social Sciences and Humanities Research Council, the preliminary findings are included in a report entitled Video Notice Design to Support Meaningful Consent Online: An Analysis of Social Media Videos about Artificial Intelligence and Privacy and hosted on the new AI In Focus project website. 

Note: The front cover of a new report about video notice materials by York University, funded by the OPC and SSHRC. Report available on the AI In Focus project website. Graphic design by Hannah Palmier Blizzard.

Researchers created four TikTok-style videos designed to supplement text-based privacy policies and make them easier to understand. Each video addressed a privacy issue relevant to AI development and use. Focus group participants were asked what they thought of the videos and how they contribute to meaningful consent processes online.

Preliminary findings from the focus groups were organized into recommendations for organizations interested in designing video notices. The report emphasizes that, based on participant responses, there is a preference for brief (under a minute), live-action videos over text-based privacy policies. The potential for videos to supplement text-based notice materials was clear, with participants noting that videos not only bring AI details to their attention but may also motivate further review of the text-based materials. While preferences for short videos create challenges for explaining AI systems in detail, the findings suggest that social media-style videos should be created to help notify individuals about AI systems.

Another report recommendation emphasizes the importance of ensuring representation of marginalized communities in video notice materials. Focus group participants suggested that on-screen representation may contribute to notification and help viewers establish connections to the content. Indeed, ensuring video content communicates AI risks and implications for members of groups more likely to experience AI inequities is vital.

Lastly, the report recommends that social media-style videos leverage trending memes to drive attention to AI notice materials. Contrasted with boring text-based notices, trending content may help start the notification process and contribute to the understanding essential for meaningful consent. Creating a series of iterative videos aligns with this approach and may help address the challenge of attempting to convey AI complexities in under a minute.

AI Policymakers Should Prioritize Notice Improvements Across AI Contexts

Canada’s proposed Artificial Intelligence and Data Act (AIDA) requires private organizations distributing and using “high-impact” AI to convey information publicly about the system on a website. This component of Bill C-27 indicates that plain-language explanations are required, describing how the AI works, the content and decisions produced, and any mitigation procedures.

Notice requirements should extend well beyond “high-impact” AI systems, as various so-called “low-impact” AI systems may also result in harms. This suggests that any organization hoping to benefit from AI should invest in notice material improvements.

Improving notice materials and associated meaningful consent processes requires more than static text on a website buried behind a deceptive clickwrap. As The White House notes in its Blueprint for an AI Bill of Rights, “[y]ou should know that an automated system is being used and understand how and why it contributes to outcomes that impact you”. While more research is needed to address how social media-style video can guide individuals beyond notification to explanation, AI policymakers committed to supporting meaningful consent protections should prioritize the creation of notice materials that people will actually review.

Acknowledgements: Thank you to Desirée de Jesús, Guilherme Cavalcante Silva, Hannah Palmier Blizzard, Grace Noua, Sam Loiselle, and Brenda McPhail. Thank you also to the CCLA for collaborating on this project.

About the Canadian Civil Liberties Association

The CCLA is an independent, non-profit organization with supporters from across the country. Founded in 1964, the CCLA is a national human rights organization committed to defending the rights, dignity, safety, and freedoms of all people in Canada.

For the Media

For further comments, please contact us at

For Live Updates

Please keep referring to this page and to our social media platforms. We are on Instagram, Facebook, and Twitter.