Fighting Quebec’s Religious Symbols Ban – As it Unfolds

CCLA, alongside the National Council of Canadian Muslims, is currently challenging Quebec’s discriminatory religious symbols ban, Bill 21. We will keep this page up to date with events in the fight to stop this unjust law as it unfolds.

Can a Politician Block you on Twitter?

Cara Zwibel
Director of Fundamental Freedoms Program

Can an elected representative block a critical constituent on Twitter? What about suing another representative for defamation? How much control do politicians have over their online reputation, and how much should they have?

With a federal election on the horizon, voters will no doubt be relying on a great deal of online content and social media chatter to help them make decisions about candidates. In the buildup to October 2019, those who hope to get elected will be especially careful about their online presence. Candidates will not only ensure that they don’t post anything that could lose them votes but also take care that others aren’t posting items that may damage their chances. Online reputation management is big business – not just for those selling products and services. Reputation is a currency in the political sphere. There is a special incentive for politicians to make sure that the online record casts them in the best possible light, even if that means silencing critical or otherwise inconvenient voices.

If you are not already an elected representative, there is likely to be less online content about you, and you may even have a chance to delete some of those embarrassing tweets or Instagram posts before anyone thinks to take a screenshot for posterity. However, in my view, elected officials have special constitutional duties and responsibilities to their constituents – and this means that they may need to have thicker skin when it comes to online criticism. The question for those already in the public eye is: when does standing up for yourself start to look like heavy-handed silencing of your critics?

Recently, CCLA learned of a woman who has been blocked by her federal Member of Parliament on Twitter. MP John Brassard (Barrie-Innisfil, Ontario) has decided that the critiques that this constituent has voiced about him on Twitter merit retaliation. She no longer has the privilege of getting notifications about his tweets or regular updates about what he is doing in Parliament on behalf of his community. When she asked his staff why she was blocked, one response was that she was “a woman with very strong opinions”. They also told her that she “threatened to harass” the MP – this in response to her promise to be at campaign events and try to correct any misinformation she felt he was spreading about climate change. That is not harassment; that is political engagement, and candidates should welcome the opportunity to engage with an informed citizenry. These responses suggest a fundamental misunderstanding of how the political process works.

Brassard has also recently launched a $100,000 defamation suit and lodged a complaint with Barrie’s Integrity Commissioner regarding a Facebook post made by local Barrie city councillor Keenan Aylwin. In a post published just days after the Christchurch massacres, Aylwin criticized Brassard and another Barrie-area MP, Alex Nuttall, for failing to speak out on what Aylwin characterized as Andrew Scheer’s “appearance on the same stage as a neo-Nazi sympathizer, Faith Goldy, at a United We Roll Rally.” Aylwin argues that the MPs are “playing footsies with white supremacists”. Brassard says the statement is false and defamatory, and that it violated the Code of Conduct for Barrie councillors. The Integrity Commissioner appears to agree with Brassard, and Aylwin may face consequences when the council takes up the matter.

In my view, these actions show a failure to appreciate the importance of free expression in Canada, particularly when it comes to political speech. I don’t believe that anyone – elected or not – has to subject themselves to repeated harassment in the real world or online. However, that is not what is happening in either of these two instances. An elected representative is going to face criticism – harsh, excessive, or, worse still, reasonable and eloquent. If the narrative is misleading or just plain wrong, an elected representative has avenues to correct the record. As we get closer to October 2019, Canadians should expect candidates to contribute to our political debate, not to stifle it. Silencing critics is not the answer.

In the United States, courts have already ruled that a public official who blocks a constituent from their Twitter feed has violated the First Amendment’s protection of freedom of speech. I think a Canadian court might well find a Charter violation in similar circumstances since these online spaces have become our new public squares. If these social media tools are used to connect representatives with their constituents, they have to take the good with the bad. Blocking a constituent and suing the city councillor sends a clear message to those who wish to engage with Brassard on matters of policy: tread lightly.

This kind of chill is terrible for our democracy.

Breaking down the “Digital Charter” – Part 1

Cara Zwibel
Director of Fundamental Freedoms Program

In the wake of the live streaming of the massacres in Christchurch, New Zealand, Canada has joined many other nations in answering the “Christchurch call” and vowing to eliminate violent extremist and terrorist content online. But what does the proposed “Digital Charter” mean for people in Canada and our civil liberties? At the moment, the Charter appears to be entirely aspirational: we have a list of principles the government has announced but have no sense of whether, how or when those principles will be embedded in law, policy or practice.

Of the 10 Charter principles, at least one – if implemented into an enforceable law – will have a direct and significant impact on the content that Canadians can create, disseminate and access online. In other words, a very real impact on our freedom of expression which, it’s worth remembering, is protected in our Canadian Charter of Rights and Freedoms. That is a real – not aspirational – Charter with the full force of the Constitution, Canada’s supreme law. The government has said, “Canadians can expect that digital platforms will not foster or disseminate hate, violent extremism or criminal content.” On its own, a principle which sets out an “expectation” for what privately owned platforms will do has little weight, but one of the other principles promises “clear and meaningful penalties” for violations of law and regulations to support the principles.

This principle has me worried. How do we deal with hate and extremism without capturing the merely unpopular or offensive? Don’t get me wrong: I don’t spend time on neo-Nazi sites or seek out graphic acts of violence on video streaming platforms. I don’t like this kind of content and actively avoid it. But I worry about broad rules which “outlaw” some types of content and what this means for a democracy where free expression is supposed to be a fundamental freedom. Regulating expression is notoriously tricky. The sheer volume of content online and the Internet’s fundamentally global character only add to the challenges.

I need to know more about the kind of “violent extremism” from which Canadians can expect to be shielded. It is appalling that the massacre in Christchurch was live streamed using a social media platform, but is there a way to address that problem without also censoring other content which might have significant social value? Think of repressed minorities who suffer violence at the hands of the state. Live streaming those acts of violence might bring the world’s attention to an important issue. Consider also the impact of live streaming videos which have captured horrific acts of police brutality. Video can be an important means of holding the powerful to account. Does the government get to decide who can stream content live? Does Facebook? Should we let an algorithm determine who can be trusted to stream?

What does the government mean when it talks about “fostering or disseminating hate”? The legal definition of “hate speech” is quite narrow, and for good reason. But when most people use the term, it’s not that narrow definition they have in mind, or expect to be enforced. Our Criminal Code prohibition on hate speech (s. 319) has been upheld as constitutional by the Supreme Court because it is supposed to capture only the most extreme kind of content. Even so, the legal definition is open to varying interpretations, and courts and judges frequently disagree about whether a given piece of content crosses the line. When does harsh criticism of Israel become anti-Semitism? When do strong statements of religious beliefs about the “proper” definition of marriage become hate propaganda targeting the LGBTQ community? Is the Digital Charter going to place these decisions in the hands of private platforms? If so, will those platforms be punished if, in the eyes of the government, they make the wrong call? If the answer is yes, they will certainly err on the side of censorship rather than free expression. And if dissemination is relatively clear, what does it mean to “foster” hate? Will platforms be expected to interfere in how online networks form to ensure like-minded bigots can’t find each other? If the goal of social media is to help connect people, are we now saying that some people really do need to be isolated? Our freedom to associate is protected by the same Constitution that safeguards freedom of expression.

Finally, is the principle’s reference to “criminal content” a separate category, or are hate and violent extremism sub-categories of this broader theme? Are platforms responsible for deciding if content is criminal or will they only be expected to remove something which has already been the subject of a criminal conviction? State censorship is dangerous because we never know when our views, opinions or content may be deemed too offensive or harmful (or simply on the wrong end of the political spectrum) for public dissemination. Outsourcing censorship to a corporate entity accountable only to its shareholders is at least as dangerous.

With an election coming up in a few short months, the aspirational Digital Charter may make for talking points with little substance. Nevertheless, it is good to put this issue on the agenda. It is worth having a serious think about how to reconcile a strong commitment to free expression with a commitment or desire to deal with extremism online. And, when we pick our next elected representative, we should at least understand how they feel about free expression, and what they intend to do to protect and promote this right in the digital public square.

CCLA at the Supreme Court: Journalistic Source Protection

Cara Zwibel
Director of Fundamental Freedoms Program

Refusing to burn a confidential source is a hallmark of journalistic integrity. But does Canadian law protect journalist-source confidentiality? That’s what we went to the Supreme Court of Canada to argue today.

CCLA is before the Supreme Court of Canada today intervening in a case that addresses the importance of protecting journalists’ confidential sources. In Denis v Cote, the Supreme Court will have its first opportunity to interpret the Journalistic Sources Protection Act (JSPA), legislation that made significant changes to the rules of evidence in recognition of the vital role that confidential sources play in the media’s news gathering function.

The case arises out of a criminal trial in Quebec. Mr. Cote, the accused, alleged that certain documents and information arising out of a police investigation were deliberately leaked by agents of the state to a reporter, Ms. Denis. He argued that this constituted an abuse of process and that the criminal charges against him should be stayed. In support of his motion, he sought to compel Ms. Denis to testify and provide her source’s identity. While his motion was initially denied, a subsequent decision required Ms. Denis to identify her source, and the matter is now before the SCC.

CCLA has been involved in many cases that dealt with the role of the media and the importance of confidential sources. We intervened in the case to assist the Court in developing the principles that should apply when motions for disclosure of sources are brought under the new statutory scheme established by the JSPA. Our goal is to ensure that the law is interpreted in a way that furthers freedom of the press – something that cannot be done unless journalists have adequate protection for confidential sources. CCLA is arguing that the presumption against disclosure of a confidential source can only be overcome where there is no other reasonable means of getting the relevant information, and where the public interest in disclosure clearly outweighs the public interest in non-disclosure. Essentially, disclosure of a confidential source must be both necessary and proportional in light of the interests at stake. We are also encouraging the Court to recognize that when disclosure is ordered, conditions should be attached to such an order to ensure that it minimally impairs Charter protected rights.

We are grateful to Prof. Jamie Cameron of Osgoode Hall Law School and Chris Bredt, Pierre Gemson and Veronica Sjolin of BLG who are representing CCLA on the case pro bono.

Read CCLA’s factum here

CCLA Voices Concern Over Several Measures in Budget Bill

Executive Director and General Counsel Michael Bryant spoke on behalf of CCLA at the Standing Committee on Finance and Economic Affairs on May 7, 2019, which was considering Bill 100, An Act to implement Budget measures and to enact, amend and repeal various statutes.