Monday, March 04, 2024

Canada's New "Online Harms" bill - an overview and a few critiques

It is finally here: the long-anticipated Online Harms bill. It was tabled in Parliament on February 26, 2024 as Bill C-63. It is not as bad as I expected, but it has some serious issues that need to be addressed if it is going to be Charter-compliant. It also has room for serious improvement, and it represents a real missed opportunity in how it handles “deepfakes”: synthetic explicit images and videos.


The bill is 104 pages long and it was just released, so this will be a high-level and perhaps incomplete overview. But I will also focus on some issues that leapt out at me on my first few readings of it.


In a nutshell, it does a better job than the discussion paper first floated years ago by not lumping all kinds of “online harms” into one bucket and treating them all the same. This bill more acutely addresses child abuse materials and non-consensual distribution of intimate images. I think the thresholds for some of this are too low, resulting in removal by default. The new Digital Safety Commission has stunning and likely unconstitutional powers. As is often the case, there’s too much left to the regulations. But let’s get into the substance.


Who does it apply to?


So what does it do and who does it apply to? It applies to the operators of social media services that meet a particular threshold set in regulation. A “social media service” is defined as:


social media service means a website or application that is accessible in Canada, the primary purpose of which is to facilitate interprovincial or international online communication among users of the website or application by enabling them to access and share content. (service de média social)


It also specifically includes: (a) an adult content service, namely a social media service that is focused on enabling its users to access and share pornographic content; and (b) a live streaming service, namely a social media service that is focused on enabling its users to access and share content by live stream.


This seems intended to capture sites like PornHub and OnlyFans, but I think there are arguments to be made that they may not fit within that definition.


It specifically excludes services that do not permit a user to communicate to the public (s. 5(1)) and carves out private messaging features. So instead of going after a very long list of service providers, it is much more focused, but this can be tailored by the minister by regulation. 


New bureaucracy


The bill creates a whole new regulatory bureaucracy, which includes the Digital Safety Commission, the Digital Safety Ombudsperson and the Digital Safety Office. The Digital Safety Commission is essentially the regulator under this legislation and I'll talk a little bit later about what its role is. The Ombudsperson is more of an advocate for members of the public and the Digital Safety Office is the bureaucracy that supports them both. As an aside, why call the bill the “Online Harms Act” but call the Commission the “Digital Safety Commission”? We have a Privacy Act and a Privacy Commissioner. We have a Competition Act and a Competition Commissioner. We have a Human Rights Act and a Human Rights Commissioner. In this bill, it’s just inelegant. 


Duty to act responsibly


The legislation will impose a duty to act responsibly with respect to harmful content by implementing processes and mitigation measures that have to be approved by the Digital Safety Commission of Canada. This is extremely open-ended and there is no assurance that it will be compatible with the digital safety schemes that these companies would be setting up in order to comply with the laws of other jurisdictions. We need to be very careful that “made-in-Canada solutions” don't result in requirements that are disproportionately burdensome in light of our market size. 


The large social media companies that immediately come to mind already have very robust digital safety policies and practices, so whatever is dictated by the Digital Safety Commission should be based on existing best practices rather than trying to reinvent the wheel.


If you are a very large social media company, you are likely looking to comply with the laws of every jurisdiction where you are active. Canada is but a drop in the internet bucket, and work done by organizations to comply with European requirements should be good enough for Canada. If the cost of compliance is too onerous, service providers will look to avoid Canada, or will adopt policies of removing everything that anyone objects to. And the social media companies will be required to pay for the new digital bureaucracy, which adds significantly to their cost of doing business in Canada.


In addition to requiring government-approved policies, the Bill does include some mandatory elements, like the ability of users to block other users and to flag harmful content. Operators also have to make a “resource person” available to users to hear concerns, direct them to resources and provide guidance on the use of those resources. 

Age appropriate design code


One thing that I was blown away by is largely hidden in section 65. It reads …


Design features

65 An operator must integrate into a regulated service that it operates any design features respecting the protection of children, such as age appropriate design, that are provided for by regulations.


I was blown away by this for two reasons. The first is that it gives the government the power to dictate potentially huge changes or mandatory elements of an online service. And they can do this by simple regulation. Protecting children is an ostensible motive – but often a pretext – for a huge range of legislative and regulatory actions, many of which overreach. 


The second reason why I was blown away by this is that it could amount to an “Age Appropriate Design Code”, via regulation. In the UK, the Information Commissioner’s Office carried out massive amounts of consultation, research and discussion before developing the UK’s age appropriate design code. In this case, the government can do this with a simple publication in the Canada Gazette. 


Harmful content


A lot of this Bill turns on the question “what is harmful content?” It is defined in the legislation as seven different categories of content, each of which has its own specific definition. They are:


(a) intimate content communicated without consent;

(b) content that sexually victimizes a child or revictimizes a survivor;

(c) content that induces a child to harm themselves;

(d) content used to bully a child;

(e) content that foments hatred;

(f) content that incites violence; and

(g) content that incites violent extremism or terrorism.‍ 


Importantly, the bill treats the first two types of harmful content as distinct from the rest. This actually makes a lot of sense. Child sexual abuse material is already illegal in Canada and is generally easy to identify. I am not aware of any social media service that would abide that sort of content for a second. 


The category of content called “intimate content communicated without consent” is intended to capture what is already illegal in the Criminal Code related to the non-consensual distribution of intimate images. The definition in the Online Harms bill expands on that to incorporate what are commonly called “deepfakes”. These are images depicting a person in an explicit manner that are either modifications of existing photographs or videos, or are completely synthetic, created from someone's imagination or with the use of artificial intelligence.


I 100% support including deepfake explicit imagery in this Bill and I would also 100% support including it in the Criminal Code given the significant harm that it can cause to victims, but only if the definition is properly tailored. In the Online Harms bill, the definition is actually problematic and potentially includes any explicit or sexual image. Here is the definition, and note the use of “reasonable to suspect”. 


intimate content communicated without consent means


(a) a visual recording, such as a photographic, film or video recording, in which a person is nude or is exposing their sexual organs or anal region or is engaged in explicit sexual activity, if it is reasonable to suspect that


(i) the person had a reasonable expectation of privacy at the time of the recording, and


(ii) the person does not consent to the recording being communicated; and


(b) a visual recording, such as a photographic, film or video recording, that falsely presents in a reasonably convincing manner a person as being nude or exposing their sexual organs or anal region or engaged in explicit sexual activity, including a deepfake that presents a person in that manner, if it is reasonable to suspect that the person does not consent to the recording being communicated.‍ (contenu intime communiqué de façon non consensuelle)


So what is the problem? The problem is that the wording “reasonable grounds to suspect” cannot be found in the Criminal Code definition for this type of content, and there is a very good reason for that. Either content is consensual or it is not. The Criminal Code, at section 162.1, reads:


(2) In this section, "intimate image" means a visual recording of a person made by any means including a photographic, film or video recording,


(a) in which the person is nude, is exposing his or her genital organs or anal region or her breasts or is engaged in explicit sexual activity;

(b) in respect of which, at the time of the recording, there were circumstances that gave rise to a reasonable expectation of privacy; and

(c) in respect of which the person depicted retains a reasonable expectation of privacy at the time the offence is committed.


In the Criminal Code, either there is consent or there is not. In this Bill, the threshold is the dramatically low “reasonable to suspect”. All you need is a reasonable suspicion, and it is not just with respect to the circumstances at the time the image was taken or created, assuming we're dealing with an actual person and an actual image. The courts have said:


The words “to suspect” have been defined as meaning to “believe tentatively without clear ground” and “be inclined to think” ... suspicion involves “an expectation that the targeted individual is possibly engaged in some criminal activity. A ‘reasonable’ suspicion means something more than a mere suspicion and something less than a belief based upon reasonable and probable grounds”.


You can be 85% confident that it is consensual, but that remaining 15% results in a reasonable suspicion that it is not. And when you're dealing with the part of the definition related to purported deepfakes, it does not specify that the image, whether wholly synthetic or merely altered, has to be of an actual person. It could in fact be a completely fictional person created using Photoshop, posing no risk of harm to anyone. Given that the image is artificial and the circumstances of its creation are completely unknown, as is the person supposedly depicted in it, you can't help but have reasonable grounds to suspect that it “might” have been communicated non-consensually. 


Deepfakes of actual people created using artificial intelligence are a real thing and a real problem. But artificial intelligence is actually better at creating images and videos of fake people, and you should not be surprised that it is being used to create erotic or sexual content depicting entirely AI-generated people. While it may not be your cup of tea, it is completely lawful. 


And it actually gets even worse, because with respect to deepfakes, the Online Harms Act turns on whether the actual communication itself may have been without consent, not the creation of the image. Setting aside for a moment that a fictional person can never consent and can never withhold consent, an example immediately comes to mind, drawn directly from Canada's history of bad legislation related to technology and online mischief.


People may recall that a number of years ago, Nova Scotia passed a law called the Cyber-safety Act which was intended to address online bullying. It was so poorly drafted that it was ultimately found to be unconstitutional and thrown out.


During the time when that law was actually in force, we had an incident in Nova Scotia where two young people discovered that their member of the legislature had previously had a career as an actor. As part of that career, she appeared in a cable television series that was actually quite popular, and in at least a couple of scenes she appeared without her top on. These foolish young men decided to take a picture from the internet, and there were hundreds of them to choose from, and tweeted it. What happened next? The politician got very mad and contacted the Nova Scotia cyber cops, who threatened the young men with all sorts of significant consequences.


That image, which was taken in a Hollywood studio, presumably after the actor had signed the usual releases, would potentially fit into this category of harmful content if it were tweeted after the Online Harms Act comes into effect, because someone reviewing it on behalf of a platform after it had been flagged would have no idea where the image came from. And if anyone says it’s non-consensual, that’s enough to create reasonable suspicion. One relatively explicit scene actually looks like it was shot with a hidden camera. 


Surely, it cannot be the intention of the Minister of Justice to regulate that sort of thing. In some ways, it doesn't matter, because it would likely be found to be a violation of our freedom of expression right under section 2(b) of the Charter of Rights and Freedoms, one that cannot be justified under section 1 of the Charter.


But wait, it gets worse. With respect to the two special categories of harmful content, operators of social media services have an obligation to put in place a flagging mechanism so that objectionable content can be flagged by users. If there are reasonable grounds to believe that the content that has been flagged fits into one of those two categories, they must remove it. Reasonable grounds to believe is also a very low standard. But when you combine the two, the standard is so low that it is in the basement. Reasonable grounds to believe that there are reasonable grounds to suspect is such a low standard that it is probably unintelligible.


Deepfake images are a real, real problem. When a sexually explicit but synthetic image of a real person is created, it has significant impacts on the victim. If the drafters were doing anything other than window dressing, they would have paid very close attention to the critical definitions and how this content is handled. Instead, they have created a scheme in which anything that is explicit could be fit into this category by anybody, rendering the whole thing liable to be thrown out as a violation of the Charter, thereby further victimizing vulnerable victims. And if they had gotten the definition right, which they clearly did not, it could also have been added to the Criminal Code, because the harm associated with the dissemination of explicit deepfakes is similar to the harm associated with the already criminalized non-consensual distribution of actual intimate images.


It actually gets even worse, because the Digital Safety Commission can get involved and can order the removal of content. The removal of content is again based on simple reasonable grounds to believe that the material is within that category, which in turn only requires reasonable grounds to suspect a lack of consent. This is a government actor ordering the removal of expressive content, which unquestionably engages the freedom of expression right. Where you have a definition that is so broad that it can include content that does not pose any risk of harm to any individual, that definition cannot be upheld as Charter-compliant.

Flagging process


If a user flags content as either sexually victimizing a child or as intimate content communicated without consent, the operator has to review it within 24 hours. The operator can only dismiss the flag if it’s trivial, frivolous, vexatious or made in bad faith, or has already been dealt with. If not dismissed, they MUST block it and make it inaccessible to people in Canada. If they block it – which is clearly the default – they have to give notice to the person who posted it and to the flagger, and give them an opportunity to make representations. The timeline for this will be set in the regulations. Based on those representations, the operator must decide whether there are reasonable grounds to believe the content is that type of harmful content, and if so, they have to make it inaccessible to persons in Canada. Section 68(4) says they’d have to continue to make it inaccessible to all persons in Canada, which suggests to me they have to have a mechanism to make sure it is not reposted. There is a reconsideration process, which is largely a repeat of the original flag and review process. 
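To make the mechanics concrete, here is a minimal sketch in Python of the review flow as I read it. The function and parameter names are my own invention; the 24-hour window and the narrow dismissal grounds come from the Bill, while the deadline for representations is left to the regulations. It is meant only to illustrate why blocking is effectively the default, not to describe how any operator actually implements this.

from typing import Optional

# Illustrative model only. Names are mine; the dismissal grounds and the 24-hour
# review window are from the Bill, and the representation deadline is left to the
# regulations.

DISMISSAL_GROUNDS = {"trivial", "frivolous", "vexatious", "bad faith", "already dealt with"}

def review_flag(dismissal_ground: Optional[str],
                reasonable_grounds_to_believe: bool) -> str:
    """Outcome of a single flag in the two special categories, reviewed within 24 hours."""
    # The only off-ramp is dismissal on one of the listed grounds.
    if dismissal_ground in DISMISSAL_GROUNDS:
        return "flag dismissed"

    # Otherwise the content must be made inaccessible to people in Canada while the
    # poster and the flagger make representations, so blocking is the default.
    if reasonable_grounds_to_believe:
        # Section 68(4): the content must continue to be inaccessible in Canada.
        return "blocked, then kept inaccessible in Canada"
    return "blocked, then restored after representations"

print(review_flag(None, True))  # a non-frivolous flag where consent cannot be ruled out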


One thing that I find puzzling is that this mechanism is mandatory and does not seem to permit the platform operator to do their usual thing, which is to review material posted on their platform and simply remove it if they are of the view that it violates their platform policies. If it is clearly imagery that depicts child sexual abuse, they should be able to remove it without notice to or involvement of the original poster.  

Information grab


Each regulated operator has to submit a “digital safety plan” to the Digital Safety Commission. The contents of this are enormous. It’s a full report on everything the operator does to comply with the Act, and it also includes information on all the measures used to protect children and prevent harmful content, statistics about flags and takedowns (broken down by category of content), the resources allocated by the operator to comply, and information respecting content, other than “harmful content”, that was moderated by the operator and that the operator had reasonable grounds to believe posed a “risk of significant psychological or physical harm.” But that’s not all … it also includes information about complaints, concerns heard and any research the operator has done related to safety on their platform. And, of course, “any other information provided for by regulations.” And most of this also has to be published on the operator’s platform. 


Researchers’ information grab 


The Commission can accredit persons other than individuals (in other words, organizations) to access electronic data described in digital safety plans. These persons must be conducting research, education, advocacy, or awareness activities related to the purposes of the Act. The Commission can grant access to these inventories of electronic data and suspend or revoke accreditation if the person doesn't comply with the conditions. Accredited persons can also request access to electronic data in digital safety plans from regulated service operators, and the Commission can order that the operator provide the data. However, this access is only allowed for research projects related to the Act's purposes.


This is another area where the parameters, which are hugely important, will be left to the regulations. There’s no explicit requirement that the accredited researcher have their research approved by a Canadian research ethics board. It’s all left to the regulations. 


We need to remember that “Cambridge Analytica” got their data from a person who purported to be an academic researcher. 


If the operator of a regulated service affected by an order requests it, the Commission may consider changing or canceling the order. The Commission may do so if it finds, according to the criteria in the regulations, that the operator can't comply with the order or that doing so would cause the operator undue hardship. An accredited person who requested an order may complain to the Commission if the operator subject to the order fails to comply.  The Commission must give the operator a chance to make representations. 


Finally, the Commission may publish a list of accredited people and a description of the research projects for which the Commission has made an order.


Submissions from the public


The Act contains a mechanism by which any person in Canada may make a submission to the Commission respecting harmful content that is accessible on a regulated service or the measures taken by the operator of a regulated service to comply with the operator’s duties under the Act. The Commission can provide information about the submission to the relevant operator and there are particular provisions to protect the identity of any employees of an operator that make a submission to the Commission. 


Complaints to the Commission


The real enforcement powers of the Commission come into play in Part 6 of the Act. Any person in Canada may make a complaint to the Commission that content on a regulated service is content that sexually victimizes a child or revictimizes a survivor or is intimate content communicated without consent. These are the two particularly acute categories of deemed “harmful content.”


The Commission has to conduct an initial assessment of the complaint and dismiss it if the Commission is of the opinion that it is trivial, frivolous, vexatious or made in bad faith; or has otherwise been dealt with. 


If the complaint is not dismissed, the Commission must (not may) give notice of the complaint to the operator and make an order requiring the operator to, without delay, make the content inaccessible to all persons in Canada and to continue to make it inaccessible until the Commission gives notice to the operator of its final decision. This is an immediate takedown order without any substantial consideration of the merits of the complaint. All they need is a non-trivial complaint. I don’t mind an immediate takedown if one reasonably suspects the content is child sexual abuse material, but the categories are broader than that.


The operator must ask the user who posted the content on the service whether they consent to their contact information being provided to the Commission. If the user consents, the operator must provide the contact information to the Commission. 


“Hey, you’re being accused of posting illegal content on the internet, do you want us to give your information to the Canadian government?”


The Commission must give the complainant and the user who communicated the content on the service an opportunity to make representations as to whether the content is content that fits into those categories of harmful content. 


Now here is where the rubber hits the road: The Commission must decide whether there are “reasonable grounds to believe” that the content fits into those categories. In a criminal court, the court would have to consider whether the content fits the definition, beyond a reasonable doubt. In a civil court, the court would have to consider whether the content fits the definition, on a balance of probabilities. Here, all the Commission needs to conclude is whether there are “reasonable grounds to believe.” If they do, they issue an order that it be made permanently inaccessible to all persons in Canada.


That is a dramatically low bar for permanent removal. Again, I’m not concerned about it being used with material that is child abuse imagery or is even reasonably suspected to be. But there is a very strong likelihood that this will capture content that really is not intimate content communicated without consent. Recall the definition, and the use of “reasonable to suspect”:


intimate content communicated without consent means


(a) a visual recording, such as a photographic, film or video recording, in which a person is nude or is exposing their sexual organs or anal region or is engaged in explicit sexual activity, if it is reasonable to suspect that


(i) the person had a reasonable expectation of privacy at the time of the recording, and


(ii) the person does not consent to the recording being communicated; and


(b) a visual recording, such as a photographic, film or video recording, that falsely presents in a reasonably convincing manner a person as being nude or exposing their sexual organs or anal region or engaged in explicit sexual activity, including a deepfake that presents a person in that manner, if it is reasonable to suspect that the person does not consent to the recording being communicated.‍ (contenu intime communiqué de façon non consensuelle)


To order a permanent takedown, the Commission just needs to conclude there are reasonable grounds to believe that it is “reasonable to suspect” a lack of consent. There’s no requirement for the complainant to say “that’s me and I did not consent to that.” Unless you know the full context and background of the image or video, and know positively that there WAS consent, there will almost always be grounds to suspect that there wasn’t. And remember that the deepfake provision doesn’t specifically require that it be an actual living person depicted. It could be a complete figment of a computer’s imagination, which is otherwise entirely lawful under Canadian law. But it would still be ordered to be taken down. 


The Commission’s vast powers


The Commission has vast, vast powers. They’re breathtaking, actually. These are set out in Part 7 of the Act. Here’s part of these powers:


86 In ensuring an operator’s compliance with this Act or investigating a complaint made under subsection 81(1), the Commission may, in accordance with any rules made under subsection 20(1),


(a) summon and enforce the appearance of persons before the Commission and compel them to give oral or written evidence on oath and to produce any documents or other things that the Commission considers necessary, in the same manner and to the same extent as a superior court of record;


(b) administer oaths;


(c) receive and accept any evidence or other information, whether on oath, by affidavit or otherwise, that the Commission sees fit, whether or not it would be admissible in a court of law; and


(d) decide any procedural or evidentiary question.


And check out these “Rules of evidence” (or absence of rules of evidence) for the Commission:


87 The Commission is not bound by any legal or technical rules of evidence. It must deal with all matters that come before it as informally and expeditiously as the circumstances and considerations of fairness and natural justice permit.


If the Commissioner holds a hearing – which is entirely in its discretion to determine when a hearing is appropriate – it must be held in public unless it isn’t. There’s a laundry list of reasons why it can decide to close all or part of a hearing to the public. 


I don’t expect we’ll see hearings for many individual complaints.


Inspectors


The next part is staggering. In section 90, the Commission can designate “inspectors” who get a “certificate of designation”. Their powers are set out in section 91. Without a warrant and without notice, an inspector can enter any place in which they have reasonable grounds to believe there is any document, information or other thing relevant to verifying compliance with the Act. Once they’re inside the premises, they can 


(a) examine any document or information that is found in the place, copy it in whole or in part and take it for examination or copying;


(b) examine any other thing that is found in the place and take it for examination;


(c) use or cause to be used any computer system at the place to examine any document or information that is found in the place;


(d) reproduce any document or information or cause it to be reproduced and take it for examination or copying; and


(e) use or cause to be used any copying equipment or means of telecommunication at the place to make copies of or transmit any document or information.


They can force any person in charge of the place to assist them and provide documents, information and any other thing. And they can bring anybody else they think is necessary to help them exercise their powers or perform their duties and functions.


There’s also a standalone requirement to provide information or access to an inspector:


93 An inspector may, for a purpose related to verifying compliance or preventing non-compliance with this Act, require any person who is in possession of a document or information that the inspector considers necessary for that purpose to provide the document or information to the inspector or provide the inspector with access to the document or information, in the form and manner and within the time specified by the inspector.


Holy crap. Again, no court order, no warrant, no limit, no oversight.


It’s worth noting that most social media companies don’t operate out of Canada and international law would prevent an inspector from, for example, going to California and inspecting the premises of a company there. 


Compliance orders


The Act grants the Commission staggeringly broad powers to issue “Compliance orders”. All these orders need is “reasonable grounds to believe”. There’s no opportunity for an operator to hear the concerns, make submissions and respond. And what can be ordered is virtually unlimited. There is no due process, no oversight, no appeal of the order and the penalty for contravening such an order is enormous. It’s up to the greater of $25 million or 8% of the operator’s global revenue. If you use Facebook’s 2023 global revenue, that ceiling is $15 BILLION dollars. 
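As a quick back-of-the-envelope check of that figure, here is a minimal sketch in Python. Only the “greater of $25 million or 8% of global revenue” formula comes from the Bill; the revenue figure (Meta’s reported 2023 revenue of roughly US$134.9 billion) and the exchange rate are my own assumptions for illustration.

# Illustrative arithmetic only. The formula is from the Bill; the revenue figure and
# the exchange rate are assumptions used to show the scale of the ceiling.

def penalty_ceiling(global_revenue: float) -> float:
    """Greater of $25 million or 8% of the operator's global revenue."""
    return max(25_000_000, 0.08 * global_revenue)

usd_to_cad = 1.35                               # assumed exchange rate
meta_2023_revenue_cad = 134.9e9 * usd_to_cad    # roughly US$134.9 billion reported for 2023

print(f"Ceiling: ~C${penalty_ceiling(meta_2023_revenue_cad) / 1e9:.1f} billion")
# Prints roughly 14.6, in line with the ~$15 billion figure above.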


94 (1) If the Commission has reasonable grounds to believe that an operator is contravening or has contravened this Act, it may make an order requiring the operator to take, or refrain from taking, any measure to ensure compliance with this Act.


This is a breathtaking power, without due process, without a hearing, without evidence and only on a “reasonable grounds to believe”. And what can be ordered is massively open-ended. 


You may note that section 124 of the Act says that nobody can be imprisoned in default of payment of a fine under the Act. The reason for this is to avoid due process. Under our laws, if there’s a possibility of imprisonment, there is a requirement for higher due process and procedural fairness. It’s an explicit decision made, in my view, to get away with a lower level of due process. 


Who pays for all this?


The Act makes the regulated operators pay to fund the costs of the Digital Safety Commission, Ombudsperson, and Office. Certainly it has some good optics that the cost of this new bureaucracy will not be paid from the public purse, but I expect that any regulated operator will be doing some math. If the cost of compliance and the direct cost of this “Digital Safety Tax” is sufficiently large, they may think again about whether to continue to provide services in Canada. We saw with the Online News Act that Meta decided the cost of carrying links to news was greater than the benefit they obtained by doing so, and then rationally decided to no longer permit news links in Canada.  

Amendments to the Criminal Code and the Canada Human Rights Act 


Finally, I agree with other commentators in reaching the conclusion that bolting on amendments to the Criminal Code and the Canada Human Rights Act was a huge mistake and will imperil any meaningful discussion of online safety. Once again, the government royally screwed up by including too much in one bill.


The bill makes significant additions to the Criminal Code. Hate propaganda offences carry harsher penalties. The bill defines "hatred" (in line with Supreme Court of Canada jurisprudence) and creates a new hate crime: an "offence motivated by hatred."


Moreover, Bill C-63 amends the Canadian Human Rights Act. It adds the "communication of hate speech" through the Internet or similar channels as a discriminatory practice. These amendments give individuals the right to file complaints with the Canadian Human Rights Commission which, in turn, can impose penalties of up to $20,000. However, these changes concern user-to-user communication, not social media platforms, broadcast undertakings, or telecommunication service providers.


Bill C-63 further introduces amendments related to the mandatory reporting of child sexual abuse materials. They clarify the definition of "Internet service" to include access, hosting, and interpersonal communication like email. Any person providing an Internet service to the public must send all notifications to a designated law enforcement body. Additionally, the preservation period for data related to an offence is extended.


Conclusion

All in all, it is not as bad as I expected it to be. But it is not without its serious problems. Given that the discussion paper from a number of years ago was a potential disaster and much of that has been improved via the consultation process, I have some hope that the government will listen to those who want to – in good faith – improve the bill. That may be a faint hope, but unless it’s improved, it will likely be substantially struck down as unconstitutional.

