Monday, March 04, 2024

Canada's New "Online Harms" Bill – an overview and a few critiques

 It is finally here: the long-anticipated Online Harms bill. It was tabled in Parliament on February 26, 2024 as Bill C-63. It is not as bad as I expected, but it has some serious issues that need to be addressed if it is going to be Charter-compliant. It also has some room for serious improvement and it represents a real missed opportunity in how it handles “deepfakes”, synthetic explicit images and videos.


The bill is 104 pages long and it was just released, so this will be a high level overview and perhaps incomplete. But I will also focus on some issues that leapt out to me on my first few times reading it.


In a nutshell, it does a better job than the discussion paper first floated years ago by not lumping all kinds of “online harms” into one bucket and treating them all the same. This bill more acutely addresses child abuse materials and non-consensual distribution of intimate images. I think the thresholds for some of this are too low, resulting in removal by default. The new Digital Safety Commission has stunning and likely unconstitutional powers. As is often the case, there’s too much left to the regulations. But let’s get into the substance.


Who does it apply to?


So what does it do and who does it apply to? It applies to social media services that meet a particular threshold set in regulation. A "social media service" is defined as:


social media service means a website or application that is accessible in Canada, the primary purpose of which is to facilitate interprovincial or international online communication among users of the website or application by enabling them to access and share content. (service de média social)


It also specifically includes: (a) an adult content service, namely a social media service that is focused on enabling its users to access and share pornographic content; and (b) a live streaming service, namely a social media service that is focused on enabling its users to access and share content by live stream.


This seems intended to capture sites like PornHub and OnlyFans, but I think arguments could be made that they do not fit within that definition.


It specifically excludes services that do not permit a user to communicate to the public (s. 5(1)) and carves out private messaging features. So instead of going after a very long list of service providers, it is much more focused, but this can be tailored by the minister by regulation. 


New bureaucracy


The Online Harms Act creates a whole new regulatory bureaucracy, which includes the Digital Safety Commission, the Digital Safety Ombudsperson and the Digital Safety Office. The Digital Safety Commission is essentially the regulator under this legislation and I'll talk a little bit later about what its role is. The Ombudsperson is more of an advocate for members of the public and the Digital Safety Office is the bureaucracy that supports them both. As an aside, why call the bill the "Online Harms Act" but call the Commission the "Digital Safety Commission"? We have a Privacy Act and a Privacy Commissioner. We have a Competition Act and a Competition Commissioner. We have a Human Rights Act and a Human Rights Commissioner. In this bill, it's just inelegant.


Duty to act responsibly


The legislation will impose a duty to act responsibly with respect to harmful content by implementing processes and mitigation measures that have to be approved by the Digital Safety Commission of Canada. This is extremely open-ended and there is no guarantee or assurance that this will be compatible with the digital safety schemes that these companies will be setting up in order to comply with the laws of other jurisdictions. We need to be very careful that "made in Canada" solutions don't result in requirements that are disproportionately burdensome in light of our market size.


The large social media companies that immediately come to mind already have very robust digital safety policies and practices, so whatever is dictated by the Digital Safety Commission should be based on existing best practices rather than trying to reinvent the wheel.


If you are a very large social media company, you likely are looking to comply with the laws of every jurisdiction where you are active. Canada is but a drop in the internet bucket, and work done by organizations to comply with European requirements should be good enough for Canada. If the cost of compliance is too onerous, service providers will look to avoid Canada, or will adopt policies of removing everything that anyone objects to. And the social media companies will be required to pay for the new digital safety bureaucracy, which adds significantly to their cost of doing business in Canada.


In addition to requiring government-approved policies, the Bill does include some mandatory elements, like the ability of users to block other users and to flag harmful content. Operators also have to make a "resource person" available to users to hear concerns, direct them to resources and provide guidance on the use of those resources.

Age appropriate design code


One thing that I was blown away by is largely hidden in section 65. It reads …


Design features

65 An operator must integrate into a regulated service that it operates any design features respecting the protection of children, such as age appropriate design, that are provided for by regulations.


I was blown away by this for two reasons. The first is that it gives the government the power to dictate potentially huge changes or mandatory elements of an online service. And they can do this by simple regulation. Protecting children is an ostensible motive – but often a pretext – for a huge range of legislative and regulatory actions, many of which overreach. 


The second reason why I was blown away by this is that it could amount to an “Age Appropriate Design Code”, via regulation. In the UK, the Information Commissioner’s Office carried out massive amounts of consultation, research and discussion before developing the UK’s age appropriate design code. In this case, the government can do this with a simple publication in the Canada Gazette. 


Harmful content


A lot of this Bill turns on the question: what is "harmful content"? It is defined in the legislation as seven different categories of content, each of which has its own specific definition. They are:


(a) intimate content communicated without consent;

(b) content that sexually victimizes a child or revictimizes a survivor;

(c) content that induces a child to harm themselves;

(d) content used to bully a child;

(e) content that foments hatred;

(f) content that incites violence; and

(g) content that incites violent extremism or terrorism.‍ 


Importantly, the bill treats the first two types of harmful content as distinct from the rest. This actually makes a lot of sense. Child sexual abuse material is already illegal in Canada and is generally easy to identify. I am not aware of any social media service that will abide that sort of content for a second.


The category of content called "intimate content communicated without consent" is intended to capture what is already illegal under the Criminal Code provisions on the non-consensual distribution of intimate images. The definition in the Online Harms bill expands on that to incorporate what are commonly called "deepfakes": images depicting a person in an explicit manner that are either modifications of existing photographs or videos, or are completely synthetic, created from someone's imagination or with the use of artificial intelligence.


I 100% support including deepfake explicit imagery in this Bill and I would also 100% support including it in the Criminal Code given the significant harm that it can cause to victims, but only if the definition is properly tailored. In the Online Harms bill, the definition is actually problematic and potentially includes any explicit or sexual image. Here is the definition, and note the use of “reasonable to suspect”. 


intimate content communicated without consent means


(a) a visual recording, such as a photographic, film or video recording, in which a person is nude or is exposing their sexual organs or anal region or is engaged in explicit sexual activity, if it is reasonable to suspect that


(i) the person had a reasonable expectation of privacy at the time of the recording, and


(ii) the person does not consent to the recording being communicated; and


(b) a visual recording, such as a photographic, film or video recording, that falsely presents in a reasonably convincing manner a person as being nude or exposing their sexual organs or anal region or engaged in explicit sexual activity, including a deepfake that presents a person in that manner, if it is reasonable to suspect that the person does not consent to the recording being communicated.‍ (contenu intime communiqué de façon non consensuelle)


So what is the problem? The problem is that the wording "reasonable grounds to suspect" cannot be found in the Criminal Code definition for this type of content, and there is a very good reason for that. Either content is consensual or it is not. The Criminal Code, at section 162.1, reads:


(2) In this section, "intimate image" means a visual recording of a person made by any means including a photographic, film or video recording,


(a) in which the person is nude, is exposing his or her genital organs or anal region or her breasts or is engaged in explicit sexual activity;

(b) in respect of which, at the time of the recording, there were circumstances that gave rise to a reasonable expectation of privacy; and

(c) in respect of which the person depicted retains a reasonable expectation of privacy at the time the offence is committed.


In the Criminal Code, either there is consent or there is not. In this Bill, the threshold is the dramatically low "reasonable to suspect". All you need is a reasonable suspicion, and it is not just with respect to the circumstances at the time the image was taken or created, assuming we're dealing with an actual person and an actual image. The courts have said:


The words “to suspect” have been defined as meaning to “believe tentatively without clear ground” and “be inclined to think” ... suspicion involves “an expectation that the targeted individual is possibly engaged in some criminal activity. A ‘reasonable’ suspicion means something more than a mere suspicion and something less than a belief based upon reasonable and probable grounds”.


You can be 85% confident that it is consensual, but that remaining 15% results in a reasonable suspicion that it is not. And when you're dealing with the part of the definition aimed at purported deepfakes, it does not specify that the image has to be of an actual person, whether synthetic or not. It could in fact be a completely fictional person created using Photoshop, which would pose no risk of harm to anyone. Given that the image is artificial and the circumstances of its creation are completely unknown, as is the person supposedly depicted in it, you can't help but have reasonable grounds to suspect that it "might" have been communicated non-consensually.


Deepfakes of actual people created using artificial intelligence are a real thing and a real problem. But artificial intelligence is actually better at creating images and videos of fake people. You should not be surprised that it is being used to create erotic or sexual content depicting AI-generated people. While it may not be your cup of tea, it is completely lawful.


And it actually gets even worse, because with respect to deepfakes, the Online Harms Act turns on whether the communication itself may have been without consent, not the creation of the image. Setting aside for a moment that a fictional person can never consent and can never withhold consent, an example immediately comes to mind, drawn directly from Canada's history of bad legislation related to technology and online mischief.


People may recall that a number of years ago, Nova Scotia passed a law called the Cyber-safety Act which was intended to address online bullying. It was so poorly drafted that it was ultimately found to be unconstitutional and thrown out.


During the time when that law was actually in force, we had an incident in Nova Scotia where two young people discovered that their member of the legislature had previously had a career as an actor. As part of that career, she appeared in a cable television series that was actually quite popular, and in at least a couple of scenes she appeared without her top on. These foolish young men decided to take a picture from the internet, and there were hundreds of them to choose from, and tweeted it. What happened next? The politician got very mad and contacted the Nova Scotia cyber cops, who threatened the young men with all sorts of significant consequences.


That image, which was taken in a Hollywood studio, presumably after the actor had signed the usual releases, would potentially fit into this category of harmful content if it were tweeted after the Online Harms Act comes into force, because someone reviewing it on behalf of a platform after it had been flagged would have no idea where the image came from. And if anyone says it's non-consensual, that's enough to create reasonable suspicion. One relatively explicit scene actually looks like it was shot with a hidden camera.


Surely it cannot be the intention of the Minister of Justice to regulate that sort of thing. In some ways, it doesn't matter, because it would likely be found to be a violation of the freedom of expression right under section 2(b) of the Charter of Rights and Freedoms, one that cannot be justified under section 1 of the Charter.


But wait, it gets worse. With respect to the two special categories of harmful content, operators of social media services have an obligation to put in place a flagging mechanism so that objectionable content can be flagged by users. If there are reasonable grounds to believe that the content that has been flagged fits into one of those two categories, they must remove it. Reasonable grounds to believe is also a very low standard. But when you combine the two, the standard is so low that it is in the basement. Reasonable grounds to believe that there are reasonable grounds to suspect is such a low standard that it is probably unintelligible.


Deepfake images are a real, real problem. When a sexually explicit but synthetic image of a real person is created, it has significant impacts on the victim. If the drafters were doing anything other than window dressing, they would have paid very close attention to the critical definitions and how this content is handled. Instead, they have created a scheme in which anything that is explicit could be put into this category by anybody, rendering the whole thing liable to be thrown out as a violation of the Charter, thereby further victimizing vulnerable victims. And if they had gotten the definition right, which they clearly did not, it could have been added to the Criminal Code, because the harm associated with the dissemination of explicit deepfakes is similar to the harm associated with the already criminalized non-consensual distribution of actual intimate images.


It actually gets even worse, because the Digital Safety Commission can get involved and order the removal of content. The removal of content is again based on simple reasonable grounds to believe that the material is within that category, which in turn only requires reasonable grounds to suspect a lack of consent. This is a government actor ordering the removal of expressive content, which unquestionably engages the freedom of expression right. Where you have a definition that is so broad that it can include content that does not pose any risk of harm to any individual, that definition cannot be upheld as Charter-compliant.

Flagging process


If a user flags content as either sexually victimizing a child or as intimate content communicated without consent, the operator has to review it within 24 hours. The operator can only dismiss the flag if it's trivial, frivolous, vexatious or made in bad faith, or has already been dealt with. If not dismissed, they MUST block it and make it inaccessible to people in Canada. If they block it – which is clearly the default – they have to give notice to the person who posted it and to the flagger, and give them an opportunity to make representations. The timeline for this will be set in the regulations. Based on those representations, the operator must decide whether there are reasonable grounds to believe the content is that type of harmful content, and if so, they have to make it inaccessible to persons in Canada. Section 68(4) says they'd have to continue to make it inaccessible to all persons in Canada, which suggests to me they have to have a mechanism to make sure it is not reposted. There is a reconsideration process, which is largely a repeat of the original flag and review process.
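To make the branching in that flag-and-review process easier to follow, here is a minimal sketch of how I read it, written in Python. The class and function names are my own invention, and anything left to the regulations (like the timeline for representations) is only noted in comments.

# A minimal, illustrative sketch of the flag-review workflow described above.
# The class and function names are my own; the timelines and representation
# process are left to the regulations, so they appear here only as comments.

from dataclasses import dataclass

@dataclass
class Flag:
    category: str                    # e.g. "intimate content communicated without consent"
    trivial: bool = False
    frivolous: bool = False
    vexatious: bool = False
    bad_faith: bool = False
    already_dealt_with: bool = False

def review_flag(flag: Flag, reasonable_grounds_to_believe: bool) -> str:
    # Step 1: review within 24 hours; the only grounds for dismissal are narrow.
    if (flag.trivial or flag.frivolous or flag.vexatious
            or flag.bad_faith or flag.already_dealt_with):
        return "flag dismissed"
    # Step 2: block by default -- make the content inaccessible in Canada and give
    # notice to the poster and the flagger so they can make representations
    # (the timeline for representations is to be set by regulation).
    # Step 3: after representations, decide whether there are "reasonable grounds
    # to believe" the content is that type of harmful content.
    if reasonable_grounds_to_believe:
        return "inaccessible to all persons in Canada (s. 68(4))"
    return "reinstated"

print(review_flag(Flag(category="intimate content communicated without consent"), True))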


One thing that I find puzzling is that this mechanism is mandatory and does not seem to permit the platform operator to do their usual thing, which is to review material posted on their platform and simply remove it if they are of the view that it violates their platform policies. If it is clearly imagery that depicts child sexual abuse, they should be able to remove it without notice to or involvement of the original poster.

Information grab


Each regulated operator has to submit a "digital safety plan" to the Digital Safety Commission. The contents of this are enormous. It's a full report on everything the operator does to comply with the Act, and it also includes information on all the measures used to protect children and prevent harmful content, statistics about flags and takedowns (broken down by category of content), resources allocated by the operator to comply, and information respecting content, other than "harmful content", that was moderated by the operator and that the operator had reasonable grounds to believe posed a "risk of significant psychological or physical harm." But that's not all … it also includes information about complaints, concerns heard and any research the operator has done related to safety on their platform. And, of course, "any other information provided for by regulations." Most of this also has to be published on the operator's platform.


Researchers’ information grab 


The Commission can accredit persons (other than individuals) to access electronic data described in digital safety plans. These persons must be conducting research, education, advocacy or awareness activities related to the purposes of the Act. The Commission can grant access to these data inventories and can suspend or revoke accreditation if the person doesn't comply with the conditions. Accredited persons can also request access to electronic data in digital safety plans from regulated service operators, and the Commission can order that the operator provide the data. However, this access is only allowed for research projects related to the Act's purposes.


This is another area where the parameters, which are hugely important, will be left to the regulations. There’s no explicit requirement that the accredited researcher have their research approved by a Canadian research ethics board. It’s all left to the regulations. 


We need to remember that “Cambridge Analytica” got their data from a person who purported to be an academic researcher. 


If the operator of a regulated service affected by an order requests it, the Commission may consider changing or canceling the order. The Commission may do so if it finds, according to the criteria in the regulations, that the operator can't comply with the order or that doing so would cause the operator undue hardship. An accredited person who requested an order may complain to the Commission if the operator subject to the order fails to comply.  The Commission must give the operator a chance to make representations. 


Finally, the Commission may publish a list of accredited people and a description of the research projects for which the Commission has made an order.


Submissions from the public


The Act contains a mechanism by which any person in Canada may make a submission to the Commission respecting harmful content that is accessible on a regulated service or the measures taken by the operator of a regulated service to comply with the operator’s duties under the Act. The Commission can provide information about the submission to the relevant operator and there are particular provisions to protect the identity of any employees of an operator that make a submission to the Commission. 


Complaints to the Commission


The real enforcement powers of the Commission come into play in Part 6 of the Act. Any person in Canada may make a complaint to the Commission that content on a regulated service is content that sexually victimizes a child or revictimizes a survivor, or is intimate content communicated without consent. These are the two particularly acute categories of deemed "harmful content."


The Commission has to conduct an initial assessment of the complaint and dismiss it if the Commission is of the opinion that it is trivial, frivolous, vexatious or made in bad faith; or has otherwise been dealt with. 


If the complaint is not dismissed, the Commission must (not may) give notice of the complaint to the operator and make an order requiring the operator to, without delay, make the content inaccessible to all persons in Canada and to continue to make it inaccessible until the Commission gives notice to the operator of its final decision. This is an immediate takedown order without any substantial consideration of the merits of the complaint. All they need is a non-trivial complaint. I don’t mind an immediate takedown if one reasonably suspects the content is child sexual abuse material, but the categories are broader than that.


The operator must ask the user who posted the content on the service whether they consent to their contact information being provided to the Commission. If the user consents, the operator must provide the contact information to the Commission. 


“Hey, you’re being accused of posting illegal content on the internet, do you want us to give your information to the Canadian government?”


The Commission must give the complainant and the user who communicated the content on the service an opportunity to make representations as to whether the content is content that fits into those categories of harmful content. 


Now here is where the rubber hits the road: The Commission must decide whether there are “reasonable grounds to believe” that the content fits into those categories. In a criminal court, the court would have to consider whether the content fits the definition, beyond a reasonable doubt. In a civil court, the court would have to consider whether the content fits the definition, on a balance of probabilities. Here, all the Commission needs to conclude is whether there are “reasonable grounds to believe.” If they do, they issue an order that it be made permanently inaccessible to all persons in Canada.


That is a dramatically low bar for permanent removal. Again, I’m not concerned about it being used with material that is child abuse imagery or is even reasonably suspected to be. But there is a very strong likelihood that this will capture content that really is not intimate content communicated without consent. Recall the definition, and the use of “reasonable to suspect”:


intimate content communicated without consent means


(a) a visual recording, such as a photographic, film or video recording, in which a person is nude or is exposing their sexual organs or anal region or is engaged in explicit sexual activity, if it is reasonable to suspect that


(i) the person had a reasonable expectation of privacy at the time of the recording, and


(ii) the person does not consent to the recording being communicated; and


(b) a visual recording, such as a photographic, film or video recording, that falsely presents in a reasonably convincing manner a person as being nude or exposing their sexual organs or anal region or engaged in explicit sexual activity, including a deepfake that presents a person in that manner, if it is reasonable to suspect that the person does not consent to the recording being communicated.‍ (contenu intime communiqué de façon non consensuelle)


To order a permanent takedown, the Commission just needs to conclude there are reasonable grounds to believe that it is “reasonable to suspect” a lack of consent. There’s no requirement for the complainant to say “that’s me and I did not consent to that.” Unless you know the full context and background of the image or video, and know positively that there WAS consent, there will almost always be grounds to suspect that there wasn’t. And remember that the deepfake provision doesn’t specifically require that it be an actual living person depicted. It could be a complete figment of a computer’s imagination, which is otherwise entirely lawful under Canadian law. But it would still be ordered to be taken down. 


The Commission’s vast powers


The Commission has vast, vast powers. They're breathtaking, actually. These are set out in Part 7 of the Act. Here is part of those powers:


86 In ensuring an operator’s compliance with this Act or investigating a complaint made under subsection 81(1), the Commission may, in accordance with any rules made under subsection 20(1),


(a) summon and enforce the appearance of persons before the Commission and compel them to give oral or written evidence on oath and to produce any documents or other things that the Commission considers necessary, in the same manner and to the same extent as a superior court of record;


(b) administer oaths;


(c) receive and accept any evidence or other information, whether on oath, by affidavit or otherwise, that the Commission sees fit, whether or not it would be admissible in a court of law; and


(d) decide any procedural or evidentiary question.


And check out these “Rules of evidence” (or absence of rules of evidence) for the Commission:


87 The Commission is not bound by any legal or technical rules of evidence. It must deal with all matters that come before it as informally and expeditiously as the circumstances and considerations of fairness and natural justice permit.


If the Commissioner holds a hearing – which is entirely in its discretion to determine when a hearing is appropriate – it must be held in public unless it isn’t. There’s a laundry list of reasons why it can decide to close all or part of a hearing to the public. 


I don’t expect we’ll see hearings for many individual complaints.


Inspectors


The next part is staggering. In section 90, the Commission can designate "inspectors" who get a "certificate of designation". Their powers are set out in section 91. Without a warrant and without notice, an inspector can enter any place in which they have reasonable grounds to believe there is any document, information or other thing relevant to verifying compliance with the Act. Once they're in the place, they can


(a) examine any document or information that is found in the place, copy it in whole or in part and take it for examination or copying;


(b) examine any other thing that is found in the place and take it for examination;


(c) use or cause to be used any computer system at the place to examine any document or information that is found in the place;


(d) reproduce any document or information or cause it to be reproduced and take it for examination or copying; and


(e) use or cause to be used any copying equipment or means of telecommunication at the place to make copies of or transmit any document or information.


They can force any person in charge of the place to assist them and provide documents, information and any other thing. And they can bring anybody else they think is necessary to help them exercise their powers or perform their duties and functions.


There’s also a standalone requirement to provide information or access to an inspector:


93 An inspector may, for a purpose related to verifying compliance or preventing non-compliance with this Act, require any person who is in possession of a document or information that the inspector considers necessary for that purpose to provide the document or information to the inspector or provide the inspector with access to the document or information, in the form and manner and within the time specified by the inspector.


Holy crap. Again, no court order, no warrant, no limit, no oversight.


It’s worth noting that most social media companies don’t operate out of Canada and international law would prevent an inspector from, for example, going to California and inspecting the premises of a company there. 


Compliance orders


The Act grants the Commission staggeringly broad powers to issue "compliance orders". All these orders need is "reasonable grounds to believe". There's no opportunity for an operator to hear the concerns, make submissions and respond. And what can be ordered is virtually unlimited. There is no due process, no oversight, no appeal of the order, and the penalty for contravening such an order is enormous: up to the greater of $25 million or 8% of the operator's global revenue. If you use Facebook's 2023 global revenue, that ceiling is roughly $15 billion.
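For what it's worth, the arithmetic behind that ceiling is just the greater of the two figures. The revenue and exchange-rate numbers in this little sketch are my own rough assumptions, used only to show how you get to a number in that neighbourhood.

# Rough arithmetic for the penalty ceiling: the greater of $25 million or
# 8% of global revenue. The revenue and exchange-rate figures below are
# assumptions for illustration only.

def penalty_ceiling(global_revenue_cad: float) -> float:
    return max(25_000_000, 0.08 * global_revenue_cad)

meta_2023_revenue_usd = 134.9e9      # Meta's reported 2023 revenue, approximately
usd_to_cad = 1.35                    # assumed exchange rate
ceiling = penalty_ceiling(meta_2023_revenue_usd * usd_to_cad)
print(f"Ceiling: ${ceiling / 1e9:.1f} billion CAD")   # roughly $14.6 billion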


94 (1) If the Commission has reasonable grounds to believe that an operator is contravening or has contravened this Act, it may make an order requiring the operator to take, or refrain from taking, any measure to ensure compliance with this Act.


This is a breathtaking power, without due process, without a hearing, without evidence, and based only on "reasonable grounds to believe". And what can be ordered is massively open-ended.


You may note that section 124 of the Act says that nobody can be imprisoned in default of payment of a fine under the Act. The reason for this is to avoid due process. Under our laws, if there’s a possibility of imprisonment, there is a requirement for higher due process and procedural fairness. It’s an explicit decision made, in my view, to get away with a lower level of due process. 


Who pays for all this?


The Act makes the regulated operators pay to fund the costs of the Digital Safety Commission, Ombudsperson, and Office. Certainly it has some good optics that the cost of this new bureaucracy will not be paid from the public purse, but I expect that any regulated operator will be doing some math. If the cost of compliance and the direct cost of this “Digital Safety Tax” is sufficiently large, they may think again about whether to continue to provide services in Canada. We saw with the Online News Act that Meta decided the cost of carrying links to news was greater than the benefit they obtained by doing so, and then rationally decided to no longer permit news links in Canada.  

Amendments to the Criminal Code and the Canada Human Rights Act 


Finally, I agree with other commentators in reaching the conclusion that bolting on amendments to the Criminal Code and the Canada Human Rights Act was a huge mistake and will imperil any meaningful discussion of online safety. Once again, the government royally screwed up by including too much in one bill.


The bill makes significant additions to the Criminal Code. Hate propaganda offences carry harsher penalties. The bill defines "hatred" (in line with Supreme Court of Canada jurisprudence) and creates a new hate crime: an "offence motivated by hatred."


Moreover, Bill C-63 amends the Canadian Human Rights Act. It adds the "communication of hate speech" through the Internet or similar channels as a discriminatory practice. These amendments give individuals the right to file complaints with the Canadian Human Rights Commission which, in turn, can impose penalties of up to $20,000. However, these changes concern user-to-user communication, not social media platforms, broadcast undertakings or telecommunication service providers.


Bill C-63 further introduces amendments related to the mandatory reporting of child sexual abuse materials. They clarify the definition of "Internet service" to include access, hosting, and interpersonal communication like email. Any person providing an Internet service to the public must send all notifications to a designated law enforcement body. Additionally, the preservation period for data related to an offense is extended.


Conclusion

All in all, it is not as bad as I expected it to be. But it is not without its serious problems. Given that the discussion paper from a number of years ago was a potential disaster, and much of that has been improved via the consultation process, I have some hope that the government will listen to those who want to – in good faith – improve the bill. That may be a faint hope, but unless it's improved, it will likely be substantially struck down as unconstitutional.


Monday, February 05, 2024

Canadian Bill S-210 proposes age verification for internet users


There’s a bill working its way through the Parliament that presents a clear and present danger to the free and open internet, to freedom of expression and to privacy online. It’s a private member’s bill that shockingly has gotten traction. 

You may have heard of it, thanks to Professor Michael Geist, who has called the Bill “the Most Dangerous Canadian Internet Bill You’ve Never Heard Of.”

In a nutshell, it will require any website on the entire global internet that makes sexually explicit material available to verify the age of anyone who wants access, to ensure that they are not under the age of eighteen. Keeping sexually explicit material away from kids sounds like a laudable goal and one that most people can get behind. 

The devil, as they say, is in the details. It presents a real risk to privacy, a real risk to freedom of expression and a real danger to the open internet in Canada. The author of the Bill says it does none of that, but I believe she is mistaken.

The bill was introduced in the Senate of Canada in November 2021 by Senator Julie Miville-Dechêne. She is an independent senator, appointed by Prime Minister Justin Trudeau in 2018. Much of her career was as a journalist, which makes her obliviousness to the freedom of expression impact of her bill puzzling. I don't think she's acting in bad faith, but I think she's mistaken about the scope and effect of her Bill.

In 2022, the Bill was considered by the Senate Standing Committee on Legal and Constitutional Affairs. That Committee reported it back to the Senate in November 2022, and it languished until it passed third reading in April 2023 and was referred to the House of Commons. Many people were surprised when the House voted in December 2023 to send it for consideration before the Standing Committee on Public Safety and National Security. Every Conservative, Bloc and NDP member present voted in favour of this, while most Liberals voted against it. Suddenly, the Bill had traction and what appeared to be broad support among the opposition parties.

So what does the bill do and why is it problematic? Let’s go through it clause by clause. 

The main part of it – the prohibition and the offence – is contained in section 5. It creates an offence of “making available” “sexually explicit material” on the Internet to a young person. This incorporates some defined terms, from section 2. 

Making sexually explicit material available to a young person

5 Any organization that, for commercial purposes, makes available sexually explicit material on the Internet to a young person is guilty of an offence punishable on summary conviction and is liable,

(a) for a first offence, to a fine of not more than $250,000; and

(b) for a second or subsequent offence, to a fine of not more than $500,000.

"Making available" is incredibly broad. When a definition says "includes", it means that it can cover more than the terms that follow. "Transmitting" is a very, very broad term. Is that intended to cover the people who operate the facilities over which porn is transmitted?

A “young person” is a person under the age of 18. That’s pretty clear. 

The definition of “sexually explicit material” is taken from the Criminal Code. It should be noted that this definition was created and put in the Criminal Code for a particular purpose. This is not a catch-all offence that makes it illegal to make sexually explicit material available to a young person. This is an element of an offence, where the purpose of providing this material to a young person is to facilitate another offence against a young person. Essentially, grooming a young person. 

Definition of sexually explicit material

(5) In subsection (1), sexually explicit material means material that is not child pornography, as defined in subsection 163.1(1), and that is

(a) a photographic, film, video or other visual representation, whether or not it was made by electronic or mechanical means,

(i) that shows a person who is engaged in or is depicted as engaged in explicit sexual activity, or

(ii) the dominant characteristic of which is the depiction, for a sexual purpose, of a person’s genital organs or anal region or, if the person is female, her breasts;

(b) written material whose dominant characteristic is the description, for a sexual purpose, of explicit sexual activity with a person; or

(c) an audio recording whose dominant characteristic is the description, presentation or representation, for a sexual purpose, of explicit sexual activity with a person.

To be clear, it is not a crime to make this sort of material available to a young person unless you’re planning further harm to the young person. 

Let’s look at what is included in this definition. Visual, written or audio depictions of explicit activity. And visual depictions of certain body parts or areas, if it’s done for a sexual purpose. 

In paragraph 5(a)(i), it does not say that the depiction has to be explicit. It says the activity in which a person is engaged is explicit. 

Let’s take a moment and let this sink in. This is not limited to porn sites. 

This sort of material is broadcast on cable TV. It’s certainly available in adult book stores (which specialize in certain types of publications), but it’s also available in general book stores. This sort of material is available in every large library in Canada. 

This definition would include educational materials. 

This definition is so broad that it covers wikipedia articles related to art, reproduction and sexual health. 

It is certainly not limited to materials that would cause a reasoned risk of harm to a young person. And it doesn’t take any account of the different maturity levels of young people. The sex ed curriculum is very different for 14 year olds, 16 year olds and 18 year olds. 

Section 6 is where the government mandated age verification technology comes in. Essentially, you can’t say that you thought you were only providing access to the defined material to adults. You have to implement a government prescribed age verification method to ensure that the people getting access are not under 18. That’s essentially the only due diligence defence. We’ll talk about government prescribed age verification methods shortly.

There’s another defence, which is “legitimate purpose”. 

No organization shall be convicted of an offence under section 5 if the act that is alleged to constitute the offence has a “legitimate purpose related to science, medicine, education or the arts.” Maybe that will be interpreted broadly so that wikipedia articles related to art, reproduction and sexual health are not included. But it’s a defence, so it has to be raised after the person is charged. The onus is on the accused to raise it, not on the prosecution to take it into account at the time of laying a charge. 

There's also a defence that's available if the organization gets a "Section 8" notice and complies with it. "What the heck are those?" you may ask. The bill has an "enforcement authority", which I'm afraid will be the CRTC.

If they have reasonable grounds to believe that an organization committed an offence under section 5 (by allowing young people to access explicit materials), the enforcement authority may issue a notice to them under this section.

The notice names the organization and tells them that the enforcement authority has reasonable grounds to believe they are violating the Act – but it does not have to tell them the evidence for this. And it essentially gets to order the organization to take the "steps that the enforcement authority considers necessary to ensure compliance with this Act". It doesn't say steps "THAT ARE NECESSARY", but what the enforcement authority thinks is necessary.

So the organization has twenty days to do all the things specified in the notice. They do get to make representations to the enforcement authority, but that doesn’t stop the clock. The 20 days keeps ticking. 

Here’s where the rubber hits the road. 

If the "enforcement authority" is not satisfied that the organization has taken the steps it deems to be necessary, it gets to go to the Federal Court for an order essentially blocking the site. Specifically, the application is "for an order requiring Internet service providers to prevent access to the sexually explicit material to young persons on the Internet in Canada."

Any Internet service provider who would be subject to the order would be named as a respondent to the proceedings, and presumably can make submissions. But I can only think of one or two internet service providers who would do anything other than consent to the order, while privately cheering. 

Take a look at this section, which sets the criteria for the issuance of an order.

(4) The Federal Court must order any respondent Internet service providers to prevent access to the sexually explicit material to young persons on the Internet in Canada if it determines that

(a) there are reasonable grounds to believe that the organization that has been given notice under subsection 8(1) has committed the offence referred to in section 5;

(b) that organization has failed to take the steps referred to in paragraph 8(2)‍(c) within the period set out in paragraph 8(2)‍(d); and

(c) the services provided by the Internet service providers who would be subject to the order may be used, in Canada, to access the sexually explicit material made available by that organization.

It says the Court MUST issue the order – not MAY, but MUST, if there are reasonable grounds to believe that the organization committed the offence under the Act. It doesn’t require proof beyond a reasonable doubt, it doesn’t even require proof by a civil standard (being on a balance of probabilities or more likely than not), and it doesn’t even require actual belief based on evidence that an offence was committed. It requires only “reasonable grounds to believe.” 

And it requires them to have not taken all the steps dictated by the enforcement authority within the extremely brief period of twenty days. 

Finally, the order MUST issue if the court determines “the services provided by the Internet service providers who would be subject to the order MAY be used, in Canada, to access the sexually explicit material made available by that organization”.

That is a really, really low bar for taking a site off the Canadian internet. 
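Reduced to its bare logic, the subsection 9(4) test is just three low-threshold conditions joined by "and", with no residual discretion once they are met. A rough sketch, with parameter names of my own choosing:

# The subsection 9(4) test, reduced to its logic. Parameter names are my own;
# the point is that all three conditions are low thresholds and, once they are
# met, the order is mandatory ("must"), not discretionary ("may").

def court_must_order_blocking(
    reasonable_grounds_to_believe_offence: bool,   # para (a): not proof, just reasonable grounds
    failed_to_take_steps_within_20_days: bool,     # para (b): the steps dictated by the enforcement authority
    isp_services_may_be_used_to_access: bool,      # para (c): "may be used" -- true for virtually any ISP
) -> bool:
    return (reasonable_grounds_to_believe_offence
            and failed_to_take_steps_within_20_days
            and isp_services_may_be_used_to_access)

print(court_must_order_blocking(True, True, True))  # True: the order must issue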

But wait – there’s more!

The act specifically authorizes wide-ranging orders that would have the effect of blocking material that is not explicit and barring adult Canadians from seeking access to that same explicit material.

And if you look at the opening words of subsection (5), it says "if the Federal Court determines that it is necessary to ensure that the sexually explicit material is not made available to young persons on the Internet in Canada". It doesn't say anything about limiting the order to the continuation of the offence, or even tying it to the alleged offence set out in the notice. This is really poorly drafted and constructed.

Effect of order

(5) If the Federal Court determines that it is necessary to ensure that the sexually explicit material is not made available to young persons on the Internet in Canada, an order made under subsection (4) may have the effect of preventing persons in Canada from being able to access

(a) material other than sexually explicit material made available by the organization that has been given notice under subsection 8(1); or

(b) sexually explicit material made available by the organization that has been given notice under subsection 8(1) even if the person seeking to access the material is not a young person.

So, as we’ve seen, all of this hinges on companies verifying the age of users before allowing access to explicit material and the only substantial defence to the offence set out in the act is to use a government-dictated and approved “age verification method.” 

We need to remember, adult Canadians have an unquestioned right to access just about whatever they want, including explicit material.

The criteria for approving an age verification method may be the only bright spot in this otherwise dim Act. And it’s only somewhat bright.

Before prescribing an age-verification method, the government has a long list of things they have to consider. 

Specifically, the Governor in Council must consider whether the method

(a) is reliable;

(b) maintains user privacy and protects user personal information;

(c) collects and uses personal information solely for age-verification purposes, except to the extent required by law;

(d) destroys any personal information collected for age-verification purposes once the verification is completed; and

(e) generally complies with best practices in the fields of age verification and privacy protection.

They just have to consider these. They're not "must haves", just "good to haves". And there's no obligation on the part of the government to seek input from the Privacy Commissioner.

So what’s the current state of age verification? It’s not uncommon to require a credit card, under the assumption that a person with a valid credit card is likely an adult. I’m not sure that’s the case any more and it may not be reliable. 

There’s also ID verification, often coupled with biometrics. You take a photo of your government-issued ID, take a selfie, and software reads the ID, confirms you’re over 18 and compares the photo on the ID to the photo you’ve taken. That involves collecting personal information from your ID, which very likely includes way more information than is necessary to confirm your age. It involves collecting your image, and it involves collecting and using the biometrics from your selfie and your ID.
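To make the privacy point concrete, here is a hypothetical sketch of the data a typical ID-plus-selfie check collects. Every function here is a stub of my own invention; no real verification service or API is being described.

# A hypothetical sketch of an ID-plus-selfie age check, to show how much
# personal information such a flow handles. All functions are stubs; no real
# verification service or API is being described.

from datetime import date

def read_id_document(id_photo: bytes) -> dict:
    # Stub standing in for OCR of a government ID. In practice this step
    # captures far more than age: name, address, document number, photo.
    return {"name": "Jane Doe", "address": "123 Example St", "document_number": "X1234567",
            "date_of_birth": date(1990, 1, 1), "id_portrait": b"portrait-bytes"}

def faces_match(id_portrait: bytes, selfie: bytes) -> bool:
    # Stub standing in for a biometric comparison of the ID portrait and the selfie.
    return True

def is_over_18(id_photo: bytes, selfie: bytes, today: date) -> bool:
    record = read_id_document(id_photo)
    dob = record["date_of_birth"]
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= 18 and faces_match(record["id_portrait"], selfie)

print(is_over_18(b"id-image", b"selfie-image", date(2024, 2, 5)))  # True with the stubbed data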

Do you really want to provide your detailed personal information, that could readily be used for identity theft or fraud, to a porn site? Or a third party “age verification service”?

One scheme was proposed in the UK a number of years ago, in which you would go to a brick and mortar establishment like a pub or a post office, show your ID and be given a random looking code. That code would confirm that someone reliable checked your ID and determined you were of age. Of course, this becomes a persistent identifier that can be used to trace your steps across the internet. And I can imagine a black market in ID codes emerging pretty quickly.

And there are some important things missing. For example, is it universally applicable? Not everyone has government-issued ID. Some systems rely on having a valid credit card. Not everyone has one, let alone a bank account. 

The Bill’s sponsor and supporters say “smart people will come up with something” that is reliable and protects privacy. Why don’t we wait until we have that before considering passing a bill like this?

Let’s game this out with a hypothetical. Imagine, if you will, a massive online encyclopedia. It has thousands upon thousands – maybe millions – of articles, authored by thousands of volunteers. They cover the full range of subjects known to humanity, which of course includes reproduction and sexual health. A very small subset of the content they host and that their volunteers have created would fit into the category of “sexually explicit material”, but it is there, it exists and it is not age-gated. 

The operators of this encyclopedia very reasonably take the view that their mission is educational and they’re entitled to the protection of the legitimate purpose defence that is supposed to protect “science, medicine, education or the arts”.

They also take the view that providing access to their educational material in Canada is protected by the Charter of Rights and Freedoms. And they also reasonably take the view that the Charter protects the rights of Canadians to access the content they produce. 

But one day, a busy-body complains to the CRTC’s porn force that this online encyclopedia contains material that may be sexually explicit. The captain of the porn force drafts up a notice under Section 8, telling them that they must make sure that only people who have confirmed their age of majority via a government approved age verification technique can get access to explicit content. 

The encyclopedia writes back and says "please let us know what criteria you use for judging whether something is published 'for a sexual purpose', as required in many parts of the definition." Also, they say, their purpose is entirely educational, so they have a legitimate purpose. And they also mention the Charter. Meanwhile, 20 days pass by.

So the porn force makes an application in the Federal Court and serves notice on all the major internet service providers. None of the internet service providers show up at the hearing. The publishers of the encyclopedia hire a really good Canadian internet lawyer, who tells the court that the encyclopedia’s purpose is legitimate and related to education. And they’re likely not engaged in “commercial activity”. And cutting off access to the encyclopedia would be unconstitutional as a violation of the Canadian Charter of Rights and Freedoms.  

The government lawyer, on behalf of the porn force, points to section 9(4) and says the court has no discretion to NOT issue the order if there are reasonable grounds to believe an offence has been committed and they didn’t follow the dictates set out in the Section 8 notice. 

Even with the encyclopedia's information about their purposes, the bar of “reasonable grounds to believe” is so low that paragraph (a) is met. Since the encyclopedia didn’t follow the Section 8 order because they were sure they had a defence to the charge, paragraph (b) is met. And an order to all Canadian ISPs to block access to the encyclopedia would have the effect set out in paragraph (c). 

Slam dunk. The Court must issue that order. But what about the fact that it would have the effect of cutting ALL Canadians off from the 99.999% of the site's content that is not explicit? Tough. Subsection (5) of section 9 says that's OK. No encyclopedia for you!

A Charter challenge would then be raised, and the whole thing would likely be declared unconstitutional as a violation of section 2(b) of the Charter that can’t be justified by section 1. 

In short, even if you think this Bill is well intentioned, it is heavy handed, poorly constructed, doesn't take freedom of expression into account and imagines that we can manufacture some magical fairy-dust technology that will make the obvious privacy issues disappear. It is a blunt instrument that imagines it'll fix the problem.

And I should note that it will likely also have the effect of hurting older children who haven't yet hit eighteen. The internet, with its many communities and information repositories, is critical for young people seeking legitimate information related to sexual health, sexual orientation and gender identity. Much of this information would fit into the broad definition of sexually explicit material, and it will be illegal for someone to allow them to access it via the internet. It will remain legal for them to get it in a bookstore or a library, but that's not how young people generally access information in 2024.

I expect some supporters of this bill will be more than happy to see it limit Canadians’ right to access lawful material.

It’s good to see a discussion of this important issue. Even if you’re in favour of the objectives of this Bill, it is deeply, deeply problematic. It should be parked until there’s a way to deal with this issue without potentially violating the privacy rights and Charter rights of Canadians.