Weigh in on our next case study: Artificial Intelligence and Public Safety

August 1, 2018

CCLET hopes to keep adding case studies to the Remote Rights site, and we could use some help.

Because we know that projects get better when we include a diversity of opinions and engage with other people who care about rights education, we’re trying something new: we’re sharing our latest idea at the conceptual stage to see what kinds of responses we get. If you like it, if you hate it, if you have expertise that could help us move the idea forward, let us know!

While our ability to move projects forward is contingent on funding (we’re working on it!), we want to get the ball rolling now. So, read about the idea here, check out our initial user experience map, and let us know what you think.

“Creating Criminals” Case Study

The background: Artificial Intelligence (AI) increasingly plays a role in criminal justice and public safety systems via risk assessments. Courts in the US use these tools at many stages to make significant decisions about accused individuals, a trend that may soon extend to Canadian courts. Police departments are also using similar tools in predictive policing applications, for activities ranging from predicting crime ‘hot spots’ to creating ‘heat lists’ of people deemed at risk of committing serious criminal acts. These data-driven practices carry a wide range of risks for individual rights, including privacy and equality rights, while also making systems that must be publicly accountable increasingly opaque. The Canadian Civil Liberties Education Trust (CCLET) will bring its 20+ years of experience teaching youth to engage critically with rights issues to this topic. The “Creating Criminals” interactive scenario will help users experience how AI tools work in a playful manner, creating an interactive online learning experience that helps participants interrogate questions of privacy, equality, and fairness that emerge when AI is used in criminal justice and public safety systems. We will also create supplemental, openly accessible text resources providing basic background information on AI technology and its uses in criminal justice and public safety systems.

The idea: We want to create an interactive quiz-type program to help users learn how specific characteristics or activities could lead an algorithm to label people as “risks” in a public safety context. Users will be prompted to select an avatar and then make a series of choices to develop the avatar’s character profile (e.g. employment/citizenship status, social media activity, etc.). At the end of the exercise, a risk assessment will be revealed, with an explanation of how the algorithm interpreted the user’s choices as having a positive or negative impact on their results.
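To give a sense of the mechanic described above, here is a minimal Python sketch of how such a quiz could map profile choices to additive weights and produce both a score and a per-choice explanation. The categories, values, and weights below are entirely invented for illustration; they are not drawn from any real risk-assessment tool, and a real project would make very different design choices.

```python
# Toy risk-scoring sketch. All categories, values, and weights are
# invented for illustration and do not reflect any real system.

PROFILE_WEIGHTS = {
    ("employment", "unemployed"): 2,
    ("employment", "employed"): -1,
    ("social_media", "frequent late-night posts"): 1,
    ("social_media", "rarely posts"): 0,
}

def assess(choices):
    """Sum the weight of each profile choice and explain each contribution."""
    score = 0
    explanation = []
    for category, value in choices.items():
        weight = PROFILE_WEIGHTS.get((category, value), 0)
        score += weight
        impact = ("raised" if weight > 0
                  else "lowered" if weight < 0
                  else "did not change")
        explanation.append(
            f"'{value}' ({category}) {impact} your score by {abs(weight)}")
    label = "higher risk" if score > 0 else "lower risk"
    return score, label, explanation

score, label, reasons = assess({
    "employment": "unemployed",
    "social_media": "frequent late-night posts",
})
print(f"Result: {label} (score {score})")
for reason in reasons:
    print("-", reason)
```

The point of the exercise is visible even in this toy: seemingly mundane choices accumulate into a label, and only the explanation step makes the algorithm’s reasoning legible to the person being scored.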

The target audience: Our Remote Rights online rights education portal and the modules that populate it are designed to appeal to high school students and teachers; however, we hope they are also appropriate and accessible to the public at large.

The concept map: Check it out here!

Contact us!
