Monday, February 05, 2024

Canadian Bill S-210 proposes age verification for internet users


There’s a bill working its way through Parliament that presents a clear and present danger to the free and open internet, to freedom of expression and to privacy online. It’s a private member’s bill that, shockingly, has gotten traction. 

You may have heard of it, thanks to Professor Michael Geist, who has called the Bill “the Most Dangerous Canadian Internet Bill You’ve Never Heard Of.”

In a nutshell, it will require any website on the entire global internet that makes sexually explicit material available to verify the age of anyone who wants access, to ensure that they are not under the age of eighteen. Keeping sexually explicit material away from kids sounds like a laudable goal and one that most people can get behind. 

The devil, as they say, is in the details. It presents a real risk to privacy, a real risk to freedom of expression and a real danger to the open internet in Canada. The author of the Bill says it does none of that, but I believe she is mistaken.

The bill was introduced in the Senate of Canada in November 2021 by Senator Julie Miville-DechĂȘne. She is an independent senator, appointed by Prime Minister Justin Trudeau in 2018. Much of her career was spent as a journalist, which makes her obliviousness to the freedom of expression impact of her bill puzzling. I don’t think she’s acting in bad faith, but I think she’s mistaken about the scope and effect of her Bill. 

In 2022, the Bill was considered by the Senate Standing Committee on Legal and Constitutional Affairs. That Committee reported it back to the Senate in November 2022, and it languished until it passed third reading in April 2023 and was referred to the House of Commons. Many people were surprised when the House voted in December 2023 to send it for consideration before the Standing Committee on Public Safety and National Security. Every Conservative, Bloc and NDP member present voted in favour of this, while most Liberals voted against it. Suddenly, the Bill had traction and what appeared to be broad support among the opposition parties. 

So what does the bill do and why is it problematic? Let’s go through it clause by clause. 

The main part of it – the prohibition and the offence – is contained in section 5. It creates an offence of “making available” “sexually explicit material” on the Internet to a young person. This incorporates some defined terms, from section 2. 

Making sexually explicit material available to a young person

5 Any organization that, for commercial purposes, makes available sexually explicit material on the Internet to a young person is guilty of an offence punishable on summary conviction and is liable,

(a) for a first offence, to a fine of not more than $250,000; and

(b) for a second or subsequent offence, to a fine of not more than $500,000.

“Making available” is incredibly broad. When a definition says “includes”, the term can cover more than the items that follow. “Transmitting” is a very, very broad term – is it intended to cover the people who operate the facilities over which porn is transmitted? 

A “young person” is a person under the age of 18. That’s pretty clear. 

The definition of “sexually explicit material” is taken from the Criminal Code. It should be noted that this definition was created and put in the Criminal Code for a particular purpose. In the Code, it is not part of a catch-all offence that makes it illegal to make sexually explicit material available to a young person. It is an element of an offence where the purpose of providing the material to a young person is to facilitate another offence against that young person – essentially, grooming. 

Definition of sexually explicit material

(5) In subsection (1), sexually explicit material means material that is not child pornography, as defined in subsection 163.1(1), and that is

(a) a photographic, film, video or other visual representation, whether or not it was made by electronic or mechanical means,

(i) that shows a person who is engaged in or is depicted as engaged in explicit sexual activity, or

(ii) the dominant characteristic of which is the depiction, for a sexual purpose, of a person’s genital organs or anal region or, if the person is female, her breasts;

(b) written material whose dominant characteristic is the description, for a sexual purpose, of explicit sexual activity with a person; or

(c) an audio recording whose dominant characteristic is the description, presentation or representation, for a sexual purpose, of explicit sexual activity with a person.

To be clear, it is not a crime to make this sort of material available to a young person unless you’re planning further harm to the young person. 

Let’s look at what is included in this definition. Visual, written or audio depictions of explicit sexual activity. And visual depictions of certain body parts or areas, if done for a sexual purpose. 

In paragraph 5(a)(i), it does not say that the depiction has to be explicit. It says the activity in which a person is engaged is explicit. 

Let’s take a moment and let this sink in. This is not limited to porn sites. 

This sort of material is broadcast on cable TV. It’s certainly available in adult book stores (which specialize in certain types of publications), but it’s also available in general book stores. This sort of material is available in every large library in Canada. 

This definition would include educational materials. 

This definition is so broad that it covers Wikipedia articles related to art, reproduction and sexual health. 

It is certainly not limited to materials that would pose a real risk of harm to a young person. And it doesn’t take any account of the different maturity levels of young people. The sex ed curriculum is very different for 14-year-olds, 16-year-olds and 18-year-olds. 

Section 6 is where the government-mandated age verification technology comes in. Essentially, you can’t say that you thought you were only providing access to the defined material to adults. You have to implement a government-prescribed age verification method to ensure that the people getting access are not under 18. That’s essentially the only due diligence defence. We’ll talk about government-prescribed age verification methods shortly.

There’s another defence, which is “legitimate purpose”. 

No organization shall be convicted of an offence under section 5 if the act that is alleged to constitute the offence has a “legitimate purpose related to science, medicine, education or the arts.” Maybe that will be interpreted broadly so that Wikipedia articles related to art, reproduction and sexual health are not included. But it’s a defence, so it has to be raised after the person is charged. The onus is on the accused to raise it, not on the prosecution to take it into account at the time of laying a charge. 

There’s also a defence that’s available if the organization gets a “Section 8” notice and complies with it. “What the heck are those?” you may ask. The bill has an “enforcement authority”, which I’m afraid will be the CRTC.

If they have reasonable grounds to believe that an organization committed an offence under section 5 (by allowing young people to access explicit materials), the enforcement authority may issue a notice to them under this section.

The notice names the organization and tells it that the enforcement authority has reasonable grounds to believe it is violating the Act – but does not have to disclose the evidence for that belief. And the enforcement authority essentially gets to order the organization to take “steps that the enforcement authority considers necessary to ensure compliance with this Act”. It doesn’t say steps THAT ARE NECESSARY, but whatever steps the enforcement authority thinks are necessary. 

So the organization has twenty days to do all the things specified in the notice. It does get to make representations to the enforcement authority, but that doesn’t stop the clock. The twenty days keep ticking. 

Here’s where the rubber hits the road. 

If the enforcement authority is not satisfied that the organization has taken the steps it deems necessary, it gets to go to the Federal Court for an order essentially blocking the site. Specifically, it says: “for an order requiring Internet service providers to prevent access to the sexually explicit material to young persons on the Internet in Canada.”

Any Internet service provider who would be subject to the order would be named as a respondent to the proceedings, and presumably can make submissions. But I can only think of one or two internet service providers who would do anything other than consent to the order, while privately cheering. 

Take a look at this section, which sets the criteria for the issuance of an order.

(4) The Federal Court must order any respondent Internet service providers to prevent access to the sexually explicit material to young persons on the Internet in Canada if it determines that

(a) there are reasonable grounds to believe that the organization that has been given notice under subsection 8(1) has committed the offence referred to in section 5;

(b) that organization has failed to take the steps referred to in paragraph 8(2)‍(c) within the period set out in paragraph 8(2)‍(d); and

(c) the services provided by the Internet service providers who would be subject to the order may be used, in Canada, to access the sexually explicit material made available by that organization.

It says the Court MUST issue the order – not MAY, but MUST, if there are reasonable grounds to believe that the organization committed the offence under the Act. It doesn’t require proof beyond a reasonable doubt, it doesn’t even require proof by a civil standard (being on a balance of probabilities or more likely than not), and it doesn’t even require actual belief based on evidence that an offence was committed. It requires only “reasonable grounds to believe.” 

And it requires them to have not taken all the steps dictated by the enforcement authority within the extremely brief period of twenty days. 

Finally, the order MUST issue if the court determines “the services provided by the Internet service providers who would be subject to the order MAY be used, in Canada, to access the sexually explicit material made available by that organization”.

That is a really, really low bar for taking a site off the Canadian internet. 
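To underline how mechanical this is, the test reads like a simple conjunction with no residual discretion. Here is a rough sketch in Python – my paraphrase of the three paragraphs, not the statutory text:

# The structure of the blocking-order test, as I read it: if all three
# paragraphs are satisfied, the order issues -- "must", not "may".
def court_must_order_blocking(
    reasonable_grounds_to_believe_offence: bool,  # para (a): grounds, not proof
    failed_to_take_steps_within_20_days: bool,    # para (b)
    isp_services_may_be_used_to_access: bool,     # para (c): "may", not "is"
) -> bool:
    # No weighing of proportionality, Charter values or collateral
    # over-blocking happens at this stage.
    return (reasonable_grounds_to_believe_offence
            and failed_to_take_steps_within_20_days
            and isp_services_may_be_used_to_access)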

But wait – there’s more!

The Act specifically authorizes wide-ranging orders that would have the effect of blocking material that is not explicit and barring adult Canadians from accessing that same explicit material.

And if you look at the first sentence of subsection (5), it says “if the Federal Court determines that it is necessary to ensure that the sexually explicit material is not made available to young persons on the Internet in Canada”. It doesn’t say anything about limiting the order to the continuation of the offence, or even tie it to the alleged offence set out in the notice. This is really poorly drafted and constructed.

Effect of order

(5) If the Federal Court determines that it is necessary to ensure that the sexually explicit material is not made available to young persons on the Internet in Canada, an order made under subsection (4) may have the effect of preventing persons in Canada from being able to access

(a) material other than sexually explicit material made available by the organization that has been given notice under subsection 8(1); or

(b) sexually explicit material made available by the organization that has been given notice under subsection 8(1) even if the person seeking to access the material is not a young person.

So, as we’ve seen, all of this hinges on companies verifying the age of users before allowing access to explicit material and the only substantial defence to the offence set out in the act is to use a government-dictated and approved “age verification method.” 

We need to remember that adult Canadians have an unquestioned right to access just about whatever they want, including explicit material.

The criteria for approving an age verification method may be the only bright spot in this otherwise dim Act. And it’s only somewhat bright.

Before prescribing an age-verification method, the government has a long list of things they have to consider. 

Specifically, the Governor in Council must consider whether the method

(a) is reliable;

(b) maintains user privacy and protects user personal information;

(c) collects and uses personal information solely for age-verification purposes, except to the extent required by law;

(d) destroys any personal information collected for age-verification purposes once the verification is completed; and

(e) generally complies with best practices in the fields of age verification and privacy protection.

They just have to consider these. They’re not must-haves, just nice-to-haves. And there’s no obligation on the part of the government to seek input from the Privacy Commissioner. 

So what’s the current state of age verification? It’s not uncommon to require a credit card, under the assumption that a person with a valid credit card is likely an adult. I’m not sure that’s the case any more and it may not be reliable. 

There’s also ID verification, often coupled with biometrics. You take a photo of your government-issued ID, take a selfie, and software reads the ID, confirms you’re over 18 and compares the photo on the ID to the photo you’ve taken. That involves collecting personal information from your ID, which very likely includes way more information than is necessary to confirm your age. It involves collecting your image, and it involves collecting and using the biometrics from your selfie and your ID.
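To make the data-minimization problem concrete, here is a rough sketch in Python – with hypothetical field names, since real vendors vary – of everything a typical ID-plus-selfie service ends up holding, versus the single fact the age gate actually needs:

from dataclasses import dataclass
from datetime import date

@dataclass
class IdScanResult:
    # Everything an OCR pass typically pulls off a driver's licence,
    # plus the biometric artifacts. Hypothetical field names.
    full_name: str
    date_of_birth: date
    address: str
    licence_number: str
    id_photo: bytes        # portrait cropped from the ID
    selfie: bytes          # live capture used for the biometric match
    face_template: bytes   # derived biometric -- sensitive and durable

def is_adult(scan: IdScanResult, today: date) -> bool:
    # The only fact the age gate needs: the 18th birthday has passed.
    dob = scan.date_of_birth
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= 18

Everything in IdScanResult other than the answer to is_adult() is surplus to the stated purpose – which is why the destruction criterion in paragraph (d), above, matters so much.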

Do you really want to provide your detailed personal information, that could readily be used for identity theft or fraud, to a porn site? Or a third party “age verification service”?

One scheme was proposed in the UK a number of years ago, in which you would go to a brick and mortar establishment like a pub or a post office, show your ID and be given a random looking code. That code would confirm that someone reliable checked your ID and determined you were of age. Of course, this becomes a persistent identifier that can be used to trace your steps across the internet. And I can imagine a black market in ID codes emerging pretty quickly.
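Here is a toy sketch of that tracing problem, in Python and entirely hypothetical: because the same opaque code is presented to every site, anyone who can compare the sites’ logs can join your browsing history on it.

import secrets
from collections import defaultdict

def issue_code() -> str:
    # The clerk checks your ID once and hands you a random code.
    return secrets.token_hex(8)

alice_code = issue_code()  # proves "over 18" without naming Alice...

# ...but because it never changes, it links her visits across sites.
logs: dict[str, list[str]] = defaultdict(list)
for site in ["site-a.example", "site-b.example", "site-c.example"]:
    logs[site].append(alice_code)

# Anyone aggregating these logs now has a cross-site profile keyed
# on the supposedly anonymous code.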

And there are some important things missing. For example, is it universally applicable? Not everyone has government-issued ID. Some systems rely on having a valid credit card. Not everyone has one, let alone a bank account. 

The Bill’s sponsor and supporters say “smart people will come up with something” that is reliable and protects privacy. Why don’t we wait until we have that before considering passing a bill like this?

Let’s game this out with a hypothetical. Imagine, if you will, a massive online encyclopedia. It has thousands upon thousands – maybe millions – of articles, authored by thousands of volunteers. They cover the full range of subjects known to humanity, which of course includes reproduction and sexual health. A very small subset of the content they host and that their volunteers have created would fit into the category of “sexually explicit material”, but it is there, it exists and it is not age-gated. 

The operators of this encyclopedia very reasonably take the view that their mission is educational and they’re entitled to the protection of the legitimate purpose defence that is supposed to protect “science, medicine, education or the arts”.

They also take the view that providing access to their educational material in Canada is protected by the Charter of Rights and Freedoms – and, just as reasonably, that the Charter protects the right of Canadians to access the content they produce. 

But one day, a busybody complains to the CRTC’s porn force that this online encyclopedia contains material that may be sexually explicit. The captain of the porn force drafts a notice under Section 8, telling them that they must make sure that only people who have confirmed their age of majority via a government-approved age verification technique can get access to explicit content. 

The encyclopedia writes back and says “please let us know your criteria for judging whether something is published ‘for a sexual purpose’, as required in many parts of the definition.” Also, they say, their purpose is entirely educational, so they have a legitimate purpose. And they also mention the Charter. Meanwhile, 20 days pass by.

So the porn force makes an application in the Federal Court and serves notice on all the major internet service providers. None of the internet service providers show up at the hearing. The publishers of the encyclopedia hire a really good Canadian internet lawyer, who tells the court that the encyclopedia’s purpose is legitimate and related to education. And they’re likely not engaged in “commercial activity”. And cutting off access to the encyclopedia would be unconstitutional as a violation of the Canadian Charter of Rights and Freedoms.  

The government lawyer, on behalf of the porn force, points to section 9(4) and says the court has no discretion to NOT issue the order if there are reasonable grounds to believe an offence has been committed and they didn’t follow the dictates set out in the Section 8 notice. 

Even with the encyclopedia's information about their purposes, the bar of “reasonable grounds to believe” is so low that paragraph (a) is met. Since the encyclopedia didn’t follow the Section 8 order because they were sure they had a defence to the charge, paragraph (b) is met. And an order to all Canadian ISPs to block access to the encyclopedia would have the effect set out in paragraph (c). 

Slam dunk. The Court must issue that order. But what about the fact that it would have the effect of cutting ALL Canadians off from the 99.999% of the site’s content that is not explicit? Tough. Subsection 9(5) says that’s OK. No encyclopedia for you!

A Charter challenge would then be raised, and the whole thing would likely be declared unconstitutional as a violation of section 2(b) of the Charter that can’t be justified by section 1. 

In short – even if you think this Bill is well-intentioned – it is heavy-handed, poorly constructed, doesn’t take freedom of expression into account and imagines that we can manufacture some magical fairy dust technology that will make the obvious privacy issues disappear. It is a blunt instrument that imagines it’ll fix the problem.   

And I should note that it will likely also have the effect of hurting older children who haven’t yet hit eighteen. The internet, its many communities and information repositories, are all critical for young people seeking legitimate information related to sexual health, sexual orientation and gender identity. Much of this information would fit into the broad definition of sexually explicit material, and it will be illegal for someone to allow them access via the internet. It will remain legal for them to get it in a bookstore or a library, but that’s not how young people generally access information in 2024.  

I expect some supporters of this bill will be more than happy to see it limit Canadians’ right to access lawful material.

It’s good to see a discussion of this important issue. Even if you’re in favour of the objectives of this Bill, it is deeply, deeply problematic. It should be parked until there’s a way to deal with this issue without potentially violating the privacy rights and Charter rights of Canadians.


Wednesday, December 20, 2023

How the Grinch Stole Privacy - A Privacylawyer Holiday Special


I also had the opportunity to talk about this silly take with CBC Information Morning Halifax and Cape Breton. You can listen to the interviews here: Halifax, Cape Breton.

Wednesday, December 13, 2023

Federal Court concludes that a “virtual presence” in Canada is enough to be ordered to assist CSIS

Decision follows trend starting in BC that a virtual presence in Canada is enough to be ordered to produce records

The Federal Court of Canada, in connection with an application for a warrant and an assistance order under the Canadian Security Intelligence Service Act, was required to consider whether an assistance order under s. 22.3(1) of that Act could be issued to order a legal person with no physical presence in Canada to assist CSIS with giving effect to a warrant. The order would have extra-territorial effect.

In a redacted decision, Re Canadian Security Intelligence Service Act (Can), the court concluded that it can, provided that the subject of the assistance order has a “virtual presence” in Canada. The decision notes that the foreign company involved was willing to assist, but needed to see a court order to manage their possible legal liability:

[3]       The affiant explained that [REDACTED] is incorporated and headquartered in [REDACTED] does not have physical offices or employees in Canada. It has a virtual presence in Canada that consists of [some physical presence in Canada]. It solicits business from Canadians and [REDACTED].

 

[4]       The affiant also explained that [REDACTED] has been fully cooperative in providing assistance to CSIS to date, but has advised CSIS that it requires a judicial authorization from a Canadian court to minimize its legal risk in the event that CSIS uses the collected intelligence beyond analysis; [REDACTED]. [REDACTED] advised that it would continue to be cooperative pending and upon receipt of an Assistance Order.

The company’s willingness to comply wasn’t particularly material to the Court’s decision.

At the urging of the government and largely supported by a court-appointed amicus, the Court followed a trend of cases that have dealt with similar questions but involving production orders under the Criminal Code. The first of these cases is British Columbia (Attorney General) v. Brecknell, where the Royal Canadian Mounted Police were seeking to obtain a production order naming Craigslist. As with this CSIS case, Craigslist said they’d cooperate but needed to see a court order. The British Columbia Court of Appeal, influenced by the Equustek case from the Supreme Court of Canada, concluded that a court has jurisdiction to issue a production order naming an entity physically beyond the court’s jurisdiction provided they had a “virtual presence” within the jurisdiction.

The Court concluded:

[49]     I find that the jurisprudence in the context of production orders issued pursuant to section 487.014 of the Criminal Code provides a good analogy and support for finding that this Court has the jurisdiction to issue an Assistance Order where in personam jurisdiction can be established. The two provisions are similar in purpose, albeit in different contexts, both are directed to a person, which includes an organization or entity that is a legal person, and similar considerations arise in determining whether the order should be issued where the subject has only a virtual presence in Canada.

[50]     The considerations noted by the SCC in Equustek lend further support to taking an approach that reflects the realities of the internet dominated storage and transmission of documents and information. As noted in Brecknell, document control may exist in one jurisdiction, and the documents in another or in several others and “formalistic distinctions” between virtual and physical presence defeat the purpose of the legislation.

[51]     Whether an organization or entity with only a virtual presence in Canada can establish a real and substantial connection with Canada sufficient to constitute presence in Canada will be a case-by-case determination. Where such in personam jurisdiction is established, the organization or entity that is subject to the Assistance Order and required to provide documents in their possession or control is considered to be in Canada although the documents may be stored elsewhere.

As with a number of the cases following Brecknell, the Court concluded that its ability to issue the order does not turn on whether it would be able to enforce the order, though that is a relevant consideration:

[53]      I have considered the issue of enforcement of the Assistance Order on [REDACTED]. I note that they have been cooperative to date and indicate their ongoing intention to cooperate. However, I also agree with the submissions of the AGC and amicus and the jurisprudence, that the enforcement of the Order is a separate issue from whether the Court has jurisdiction to issue the Order, but remains a relevant consideration with respect to whether the Order should be issued based on the particular circumstances.

Consistent with the previous production order cases cited, the intended recipient was not a party to the hearing. All were ex parte, but some included amici.

Note: I believe that Brecknell was wrongly decided, but because all of these orders have been ex parte and unopposed, it’ll be some time before these arguments will be made in court. See: David T Fraser, “British Columbia (Attorney General) v. Brecknell”, Case Comment, (2020) 18:1 CJLT 135.

Sunday, December 03, 2023

Being on the receiving end of a warrant from the Canadian Security Intelligence Service (CSIS)

So someone from CSIS just called ….



There’s a first time for everything. You get a call from an “UNKNOWN NUMBER” and the caller says they work with Public Safety Canada and they’re looking for some information. This happens from time to time at universities, colleges, telecoms, internet-based businesses and others. Likely, they actually work for the Canadian Security Intelligence Service (known as CSIS) and they’re doing an investigation. 


So what happens – or should happen – next? You should ask them what they’re looking for and what their lawful authority is. Get their contact information, and then you should call a lawyer who has dealt with this sort of situation before. 


CSIS is an unusual entity. They’re not a traditional law enforcement agency. While they can also get warrants (more about that later), they have a very different mission. The mandate of CSIS is to 


  • investigate activities suspected of constituting threats to the security of Canada (espionage/sabotage, foreign interference, terrorism, subversion of Canadian democracy);

  • take measures to reduce these threats;

  • provide security assessments on individuals who require access to sensitive government information or sensitive sites;

  • provide security advice relevant to the Citizenship Act or the Immigration and Refugee Protection Act; and

  • collect foreign intelligence within Canada at the request of the Minister of Foreign Affairs or the Minister of National Defence.


To carry out this mandate, CSIS may seek and obtain warrants. But they are unlike any warrant or production order you may see handed to you by a cop. CSIS warrants are more complicated to understand and possibly comply with than the more traditional law enforcement variety.


Canadians are often surprised to discover that we have a court that meets in secret, in a virtual bunker and hears applications for TOP SECRET warrants. These warrants can authorize “the persons to whom it is directed to intercept any communication or obtain any information, record, document or thing and, for that purpose, (a) to enter any place or open or obtain access to any thing; (b) to search for, remove or return, or examine, take extracts from or make copies of or record in any other manner the information, record, document or thing; or (c) to install, maintain or remove any thing.” These warrants can be accompanied by an assistance order, directing a person to assist with giving effect to a warrant. 


A problem for third parties with these warrants is that they can be long-term and very open-ended. The name of the target of the investigation may be unknown at the time the warrant was obtained, and the warrant may authorize the collection of data related to that unknown person. It can authorize the collection of information about people who are in contact with that unknown person. It may authorize the collection of additional information related to those persons, such as IP addresses, email addresses, communications and even real-time interception of communications. Once the unknown person has been identified by CSIS (by name, an account identifier, online handle, etc.), they will seek to obtain further information. But the warrant itself likely does not name the person or any account identifiers, so the custodian of information cannot easily connect the request to the particular information sought. And the recipient of the demand must be confident that they are authorized to disclose the requested information; otherwise they would be in violation of privacy laws. 


To complicate things further, because these warrants are generally secret, CSIS is not willing to provide a copy of the complete warrant to a third party from whom they are seeking data. They will generally permit you to look at a redacted version of the warrant but will not let you keep it. Diligent organizations know they can only disclose personal information where it is authorized and permitted by law, and they have a duty to ensure that they disclose only the responsive information. To do otherwise risks violating applicable privacy laws. Organizations should also document all aspects of the interaction and disclosure, which is a problem if you can’t get a copy of the warrant. Over time, procedures have been developed by CSIS and third party organizations to address this. 


While all of this may be TOP SECRET, nothing precludes a recipient of a warrant or an assistance order from seeking legal advice on how to properly and lawfully respond. Anyone dealing with such a situation should seek experienced legal advice. 


In just the past few weeks, the Government of Canada launched a consultation on possible reforms to the CSIS Act, mainly under the banner of protecting Canadian democracy against foreign interference. Of course, changes to the statute will affect other aspects of their mission. The consultation is broadly organized under five “issues”, and it’s Issue #2 that is the most relevant to this discussion.

Issue #2: Whether to implement new judicial authorization authorities tailored to the level of intrusiveness of the techniques

Essentially, what they’re proposing is a form of production order similar to what we have in the Criminal Code of Canada. Such an order would still be subject to court approval and could compel a third party to produce information “where CSIS has reasonable grounds to believe that the production of the information is likely to yield information of importance that is likely to assist CSIS in carrying out its duties and functions.” Examples they give are basic subscriber information, call detail records, or transaction records.

These would be much more targeted and, in my view, much easier for the custodian of the information to evaluate and respond to. A production order would authorize CSIS to obtain the basic subscriber information of a named person or known account identifier. Under the current warrant authority, those specific people may be unknown at the time the warrant was issued but are still within the ambit of the warrant. Presumably a CSIS production order could be served in the usual way, like a Criminal Code production order, and the company could keep a copy of it for its records.

I’m generally very skeptical about the expansion of intrusive government powers, particularly when much of it takes place in a closed court rather than an open one, but I don’t see this as an expansion. CSIS can be given this ability, supervised by the court, to streamline its existing authorities. They would need to be very careful if they were to purport to give it extraterritorial effect, since that would likely be very offensive to comity and the sovereignty of other countries – intelligence collection is generally seen as more offensive and aggressive than investigating ordinary crime, and it may specifically be illegal under foreign law for the company to provide data in response to such an order. I also think the order should, like a Criminal Code production order, explicitly give the recipient the right to challenge it.

So that’s the current situation with CSIS investigations, at least from a service provider’s point of view, and a hint at what’s to come. Again, if you find yourself in the uncomfortable and unfamiliar situation of taking a call from “public safety” or CSIS, reach out to get experienced legal advice from a lawyer who has been through the process before.





Saturday, November 18, 2023

What is the "legitimate interests" exception to consent under Canada's proposed privacy law?

So Bill C-27, also known as the Digital Charter Implementation Act, 2022, has been before Canada's Parliament for consideration for quite some time. Even before this parliamentary session, a bill substantially similar to the present one was tabled and then died on the order paper in the previous parliamentary session. After more than 20 years of the Personal Information Protection and Electronic Documents Act, people have had a long time to think about improvements that perhaps could or should be made to our national privacy regime.

One thing that I've heard over and over again, particularly from privacy activists since 2018, is the suggestion that Canada should simply follow Europe's lead and implement a form of its General Data Protection Regulation. Privacy activists and others hail it as the “gold standard”. 

Sometimes when I hear more from these folks, I realize that for some of them, it appears that all they know about the GDPR is the possibility of massive, company-ruining penalties. What they don't seem to understand is that it is relatively rare in Europe for a business to use consent as the basis for the collection, use or disclosure of personal information. This is in stark contrast to the current law, PIPEDA, where consent really is the only lawful basis for collecting, using and disclosing personal information. 

Here is a case in point. It is an op-ed in the Globe and Mail written by Jim Balsillie, the former co-CEO of Research In Motion (also known as BlackBerry) and, more recently, the philanthropist behind the Centre for Digital Rights and the Centre for International Governance Innovation. 

In this op-ed, Balsillie refers to “the EU's landmark general data protection regulation, a law that sets the baseline for modern protections around the world
”

He then goes on to viciously attack a portion of Bill C-27's CPPA that is modeled directly on a provision from the GDPR: the ability for an organization to collect, use or disclose personal information without consent on the basis of legitimate interests.

Here is what Jim has to say in his op-ed: “For example, the proposed new law creates a broad carve-out for surveillance without knowledge or consent based on legitimate interests
 There's worse: it's the businesses themselves that determine what constitutes legitimate interest for surveillance and they are under no obligation to tell the individual they are tracking and profiling them.”

Look, either it is the gold standard or it is not.

And I really shouldn't have to tell a business leader that every one of us gets to decide how we comply with the law, and if that assessment is incorrect, that is where enforcement comes in. The bill contains detailed information about what can be a legitimate interest and what cannot be a legitimate interest. Frankly, I am getting a little tired of this breathless hyperbole and want to set the record straight on what “legitimate interests” is and what it is not.

First, we'll look at the GDPR, then we will look at Bill C-27.

Article 6 of the GDPR outlines the lawful bases for processing personal data. These include consent, contract, legal obligation, vital interests, public task, and legitimate interests. We’re going to zoom in on the last one – legitimate interests.

Legitimate interests is one of the more flexible lawful bases and probably the most used. It is also the most open to interpretation. It allows data processing on the basis of the legitimate interests pursued by a data controller or a third party, unless such interests are overridden by the interests or fundamental rights and freedoms of the data subject.

This requires the data controller to carry out an analysis to see if “legitimate interests” can be used instead of another basis, such as consent. 

To rely on legitimate interests, you must:

1. Identify a legitimate interest (be it commercial, individual, or societal benefits).

2. Show that the processing is necessary to achieve it.

3. Balance it against the individual’s interests, rights, and freedoms. This involves conducting a Legitimate Interests Assessment (LIA).

Legitimate interests can include network and information security, preventing fraud, direct marketing, and the like. 

Using “legitimate interests” is not just carte blanche to do whatever you want. When invoking legitimate interests, the controller has to ensure transparency, adhere to data minimization principles, and implement safeguards to protect the rights of individuals. 
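If it helps to see the three-step test laid out, here is a loose sketch in Python of what a recorded Legitimate Interests Assessment might capture. The field names are mine, not the GDPR's:

from dataclasses import dataclass, field

@dataclass
class LegitimateInterestsAssessment:
    # Step 1 -- purpose test: name the interest.
    interest: str  # e.g. "fraud prevention"
    # Step 2 -- necessity test: the processing must be needed to achieve it.
    necessary_to_achieve_interest: bool
    # Step 3 -- balancing test: map each identified adverse effect
    # on individuals to its mitigation.
    adverse_effects_to_mitigations: dict[str, str] = field(default_factory=dict)
    individuals_would_reasonably_expect: bool = False

    def may_rely(self) -> bool:
        # Crude pass/fail; a real LIA is a reasoned document, not a boolean.
        every_effect_mitigated = all(self.adverse_effects_to_mitigations.values())
        return (self.necessary_to_achieve_interest
                and self.individuals_would_reasonably_expect
                and every_effect_mitigated)

The point is that the controller does this analysis up front and must be able to produce it; the regulator checks the homework after the fact.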

The proposed Consumer Privacy Protection Act in Canada has a similar framework. Personally, I think it should be replaced with an almost word for word copy from the GDPR in order to remove – or at least reduce – unnecessary barriers for organizations that operate internationally.

But let's focus on what is in fact written in the bill as it currently exists.

In section 18(3), it says an organization may collect or use an individual's personal information without their knowledge or consent if the collection or use is made for the purpose of an activity in which the organization has a legitimate interest that outweighs any potential adverse effect on the individual resulting from that collection or use. And a reasonable person would expect the collection or use for such an activity. And the personal information is not collected or used for the purpose of influencing the individual's behaviour or decisions.

So, like in Europe, it requires balancing the organization's interest against the interests of the individual. Unlike in Europe, it requires that the collection or use be for purposes that would essentially be obvious to or expected by the individual. It is unclear what the intended scope of paragraph (b) is, since there are so many things that happen in the world that would reasonably be expected to alter somebody's behaviour.

Subsection (4) sets out requirements that must be met before an organization relies on this legitimate interest for the collection or use of personal information. It says that prior to collecting or using personal information under subsection (3), the organization must identify any potential adverse effect on the individual that is likely to result from the collection or use, identify and take reasonable measures to reduce the likelihood that the effects will occur or to mitigate or eliminate them, and comply with any prescribed requirements. That means that additional requirements could be set out in regulations to come.

Then it says in subsection (5) that the organization must record its assessment of how it meets the condition set out in subsection (4) and must, on request, provide a copy of the assessment to the Privacy Commissioner. 

This doesn't, to me, sound like a completely arbitrary mechanism where organizations get to draw the line wherever they want. They have to document that decision-making and make it available to the Privacy Commissioner on request.
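Putting subsections 18(3) to (5) together, my reading of the mechanics is roughly this – a simplified Python sketch, with condition names that are mine rather than the bill's:

def may_rely_on_section_18_3(
    interest_outweighs_adverse_effects: bool,        # s. 18(3) balancing
    reasonable_person_would_expect: bool,            # s. 18(3)(a)
    purpose_is_influencing_behaviour: bool,          # s. 18(3)(b) hard stop
    adverse_effects_identified_and_mitigated: bool,  # s. 18(4)
    assessment_recorded: bool,                       # s. 18(5), producible to the OPC
) -> bool:
    return (interest_outweighs_adverse_effects
            and reasonable_person_would_expect
            and not purpose_is_influencing_behaviour
            and adverse_effects_identified_and_mitigated
            and assessment_recorded)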

But that is not the end of it. Section 62 talks about what an organization has to include in its privacy statement to the public, and this says that they have to provide a general account of how the organization uses the personal information and how it applies the exceptions to the requirement to obtain an individual's consent under the Act, including a description of any activities referred to in subsection 18(3) in which it has a legitimate interest. 

So this means that every organization that determines that it is appropriate to use legitimate interests for the collection or use of personal information has to document its decision-making in a defensible manner, knowing that it could be presented to the Privacy Commissioner. And they don't get to do it sneakily, as the breathless critics would have you think, because they have to publish it in black and white, plain language in their public-facing privacy statement.

In addition to the legitimate interests basis for the collection or use of personal information, the proposed CPPA also includes certain categories of business activities for which personal information can be collected or used without an individual's knowledge or consent. This is in subsection 18(1).

This says an organization may collect or use an individual's personal information without their knowledge or consent if the collection or use is made for the purpose of a business activity described in subsection (2). And a reasonable person would expect the collection or use for such an activity. And the personal information is not collected or used for the purpose of influencing the individual's behaviour or decisions. Does that sound familiar? This is a similar framework to what is in subsection 18(3). 

This provision sets out what are the permissible business activities that fit within this exception. The first one is an activity that is necessary to provide a product or service that the individual has requested from the organization. It has to be necessary. Or it can be an activity that is necessary for the organization's information, system or network security. Or an activity that is necessary for the safety of a product or service that the organization provides. Or any other prescribed activity that could be set out in future regulations.

While I would like Canada’s version of “legitimate interests” to more closely parallel the one in the European General Data Protection Regulation, I think it is a completely reasonable addition to Canada’s privacy law. It requires a deliberate analysis and determination of whether it can be used and requires the organization to be transparent with its customers about the practice.


Monday, May 08, 2023

British Columbia Privacy Commissioner shuts down facial recognition



Recently, the Information and Privacy Commissioner of British Columbia issued a decision that essentially shuts down most use of facial recognition technology in the retail context.

What’s interesting is that the Commissioner undertook this investigation on his own initiative. In order to see how prevalent the use of facial recognition was among the province’s retailers, the OIPC surveyed 13 of the province’s largest retailers (including grocery, clothing, electronics, home goods, and hardware stores): 12 responded that they did not use FRT. The remaining retailer, Canadian Tire Corporation, requested that the OIPC contact their 55 independently owned Associate Dealer stores in the province. In the result, 12 stores reported using FRT. Based on these 12 responses, the Commissioner commenced an investigation under s. 36(1)(a) of the Personal Information Protection Act into four of the locations, scattered across the province. 

What’s also interesting is that the stores immediately ceased use of the technology, but the Commissioner determined that doing a full investigation was warranted, so that retailers would be aware of the privacy issues with the use of facial recognition in this context. 

The investigated stores used two different vendors’ systems, but they essentially operated the same way: the systems took pictures or videos of anyone who entered the stores as they came within range of the FRT cameras. This included customers, staff, delivery personnel, contractors, and minors who might have entered the store. Using software, the facial coordinates from these images or videos were mapped to create a unique biometric template for each face. So everyone was analyzed this way.

The systems then compared the biometrics of new visitors with those stored in a database of previously identified "Persons of Interest," who were allegedly involved in incidents such as theft, vandalism, harassment, or assault. When a new visitor's biometrics matched an existing record in the database, the FRT system sent an automatic alert to store management and security personnel via email or a mobile device application. The alerts contained the newly captured image or video that triggered the match, along with a copy of the previously collected image from the Persons of Interest database and any relevant comments or details about the prior incidents. According to store managers, these alerts were “advisory” until the match was confirmed in person by management or security personnel.
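Mechanically, systems like the ones described follow a simple enroll-and-match loop. Here is a schematic sketch in Python; the function names are placeholders, not any vendor’s actual API:

import numpy as np

def face_template(image: np.ndarray) -> np.ndarray:
    # Map facial coordinates to a fixed-length embedding.
    # Stand-in for a vendor's proprietary model.
    raise NotImplementedError

def match_visitor(
    visitor_image: np.ndarray,
    poi_templates: dict[str, np.ndarray],  # person-of-interest ID -> template
    threshold: float = 0.6,
) -> str | None:
    # Compare the visitor's template against every stored template and
    # return the best match above the threshold, else None.
    template = face_template(visitor_image)
    best_id, best_score = None, threshold
    for poi_id, stored in poi_templates.items():
        # cosine similarity between the two embeddings
        score = float(np.dot(template, stored) /
                      (np.linalg.norm(template) * np.linalg.norm(stored)))
        if score > best_score:
            best_id, best_score = poi_id, score
    return best_id  # a hit triggers the "advisory" alert to staff

Note the privacy problem baked into the design: face_template() runs on every single visitor, not just persons of interest.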

Store management reported that after a positive match was verified, the nature of the prior incident allegedly involving the individual helped determine a course of action. If a prior incident included violence, management or security staff would escort the individual from the store. If the prior incident involved theft, management may have chosen to surveil or remove the person in question.

The legal questions posed by the Commissioner were (1) whether consent was required under PIPA for the collection and use of images for this purpose, (2) whether the stores provided notification and obtained the necessary consent (through signage or otherwise) and – most importantly – (3) whether this collection and use is for an “appropriate purpose” under ss. 11 and 14 of PIPA.

The first question was easy to answer: Yes, consent is required in this context. PIPA, like PIPEDA, requires organizations to obtain consent, either explicitly or implicitly, before collecting, using, or disclosing personal information unless a specific exception applies. No such exceptions applied in this case. Therefore, the Commissioner concluded it was incumbent on the stores to show that individuals gave consent for the collection of their personal information. 

How would you get that consent? Well, the stores had signage at the entrances. Clear signage is usually sufficient for the use of surveillance cameras, but the question is whether it would be sufficient for this use.

Store number 1 had a sign that stated, in part: “these premises are monitored by video surveillance that may include the use of electronic and/or biometric surveillance technologies.”

The Commissioner said this was inadequate. The notice did not state the purposes for the collection of personal information. Also, stating that biometric surveillance “may” be in use did not reflect that the store continuously employed the technology. The Commissioner said the average person cannot reasonably be expected to understand how their information may be handled by “biometric surveillance technologies,” let alone the implications and risks of this new technology. Consent requires that an individual understands what they are agreeing to – and the posted notification failed to adequately alert the public in this case, according to the Commissioner. This store failed to meet notification requirements under PIPA.

The second store had a notice that stated, in part: “facial recognition technology is being used on these premises to protect our customers and our business.” 

This one was also not satisfactory to the Commissioner. The purpose, as set out, is so broad that the statement would relay no specific meaning to the average person. Furthermore, the notice does not explain what facial recognition technology entails or the nature of the personal information collected. One cannot reasonably assume that members of the public understand what FRT is, nor its privacy implications, according to the Commissioner.

Stores 3 and 4 had better notices, but they still didn’t satisfy the Commissioner. Their notices stated: “video surveillance cameras and FRT (also known as biometrics) are used on these premises for the protection of our customers and staff. These technologies are also used to support asset protection, loss prevention and to prevent persons of interest from conducting further crime. The images are for internal use only, except as required by law or as part of a legal investigation.” 

It had more detail, but was not that well written. It does not say what “FRT” is. The Commissioner noted that the abbreviation is not yet well-known or widely understood. Using the full phrase “facial recognition technology” along with a basic explanation of its workings would have provided a more accurate description of the stores’ data-collection activities. Even so, the Commissioner said that North American society is not yet at the point where it is reasonable to assume that the majority of the population understands what personal information FRT collects, or creates, as well as the technology’s privacy implications. All of this would have to be spelled out. 

While you may be able to rely on implied consent for the use of plain old fashioned surveillance cameras, the Commissioner concluded that you cannot for facial recognition technology, at least in this context. 

The Commissioner said facial biometrics are a highly sensitive, unique, and unchangeable form of personal information. Collecting, using, and sharing this information goes beyond what people would reasonably expect when entering a retail store, and using FRT creates a significant and lasting risk of harm. The Commissioner said the distinctiveness and permanence of this biometric data can make it an attractive target for misuse, potentially becoming a tool to compromise an individual's identity. In the wrong hands, the Commissioner wrote, this information can lead to identity theft, financial loss, and other severe consequences. (I am not entirely sure how…)

As a result, the four stores were required to obtain explicit consent from customers before collecting their facial biometrics. However, they did not make any attempts, either verbally or in writing, to obtain such consent.

So the notices were not adequate and the stores didn’t get the right kind of consent. But the last nail in the coffin for this use of biometrics was the Commissioner’s conclusion about whether the use of facial recognition technology for these purposes is reasonable. 

Reasonableness is determined by looking at the amount of personal information collected, the sensitivity of the information, the likelihood of being effective, and whether less intrusive alternatives had been attempted.

With respect to the amount of personal information collected, it was vast. The Commissioner said a large quantity of personal information was collected from various sources, including customers, staff, contractors, and other visitors. The stores reported that their establishments were visited by hundreds of individuals of all ages, including minors, every day, so during a single month the FRT systems captured images of thousands of people who were simply shopping and not engaging in any harmful activities. The sheer volume of information collected suggests that the collection was unreasonable.

You won’t be surprised that the Commissioner concluded that the personal information at issue was super-duper sensitive. 

With respect to the likelihood of being effective, the stores didn’t really have any system in place to measure it. The Commissioner concluded it really wasn’t that effective. 

The Commissioner wrote that before implementing new technology that collects personal information, organizations should establish a reliable method to measure the technology's effectiveness. This typically involves comparing relevant metrics before and after the technology's implementation. 

However, in this case, the stores did not provide any systematic evidence of measuring their FRT system's effectiveness. Instead, they only gave anecdotal evidence of incidents before and after installation. Without a clear way to measure the technology's effectiveness, it is challenging to analyze this factor, particularly when collecting highly sensitive personal information.

The accuracy of FRT is also a related issue. Systems such as these have been widely reported to falsely match the facial biometrics of people of colour and women. 

The store managers acknowledged that the alerts could be inaccurate and relied on staff to compare database images to a visual observation of the individual. This manual check by staff suggests that the FRT system may not be effective. False identification can have harmful consequences when innocent shoppers are followed or confronted based on an inaccurate match.

Besides the system's accuracy, its effectiveness can also be judged against the existing methods used by the stores to identify potential suspects. The store managers stated that their security guards and managers typically knew the "bad actors" and could recognize them without FRT alerts. The persons of interest were often professional thieves who repeatedly returned to the store.

Moreover, there is little evidence that FRT enhanced customer and employee safety. Whether a person of interest was identified by FRT or by the visual recognition of an employee, the stores' next steps were the same. These involved deciding whether to observe the suspected person or interact with them directly, including escorting them from the premises. In either case, store managers rarely reported contacting the police for assistance.

As for whether less intrusive alternatives had been attempted, the less intrusive measures were what the stores were doing before. The Commissioner concluded that the use of FRT didn’t add a lot to solving the stores’ problems, but collected a completely disproportionate amount of sensitive personal information. The less intrusive means – without biometrics – largely did the trick. 

In the end, the Commissioner made three main recommendations. 

The first was that the stores should build and maintain robust privacy management programs that guide internal practices and contracted services – presumably so they wouldn’t implement practices such as these that are offside the legislation. 

The report also makes two recommendations for the BC government: the BC Government should amend the Security Services Act or similar enactments to explicitly regulate the sale or installation of technologies that capture biometric information. 

Finally, the BC Government should amend PIPA to create additional obligations for organizations that collect, use, or disclose biometric information, including requiring notification to the OIPC. This would be similar to what’s in place in Quebec where biometric databases need to be disclosed to the province’s privacy commissioner. 

I think, for all intents and purposes, this shuts down the use of facial recognition technology in the retail context, where it is being used to identify “bad guys”. 


Sunday, April 16, 2023

Privacy Commissioner of Canada Loses in Federal Court against Facebook


Just this past week, the Office of the Privacy Commissioner of Canada was on the receiving end of a Federal Court decision that I would characterize as more than a little embarrassing for the Commissioner.

In a nutshell, the Commissioner took Facebook to court over the Cambridge Analytica incident and lost, big time.

You may recall from 2019, when the Privacy Commissioner of Canada and the Information and Privacy Commissioner of British Columbia released, with as much fanfare as possible, the result of their joint investigation into Facebook related to the Cambridge Analytica incident.

Both of the Commissioners concluded, at that time, that Facebook had violated the federal and British Columbia privacy laws, principally related to transparency and consent.

Because Facebook was not prepared to accept that finding, the Privacy Commissioner of Canada commenced an application in the Federal Court to have the Court make the same determination and issue a whole range of orders against the social media company.

The hearing of that application took place a short time ago, and a decision was just released from the Federal Court this past week. It concluded that the Privacy Commissioner did not prove that Facebook violated our federal privacy law in connection with the Cambridge Analytica incident, and made a few other interesting findings and observations. 

Just a little bit of additional procedural information: under our current privacy law, the Privacy Commissioner of Canada does not have the ability to issue any orders or to levy any penalties. What can happen, after the Commissioner has released his report of findings, is that the complainant – or the Commissioner, with the complainant’s okay – can commence an application in the Federal Court of Canada. This is what is called a de novo proceeding.

The Privacy Commissioner’s finding below can be considered as part of the record, but it is not a decision being appealed from. Instead, the applicant (in this case, the Privacy Commissioner) has the burden of proving, to the civil standard, that the respondent has violated the federal privacy legislation.

This has to be done with actual evidence, which is where the Privacy Commissioner fell significantly short in the Facebook case.

It has to be remembered that the events being investigated took place almost ten years ago, and the Facebook platform is substantially different now compared to what it looked like then. If you were a Facebook user at that time, you probably remember a whole bunch of apps running on the Facebook platform. You were probably annoyed by friends who were playing Farmville and sending you invitations and updates. Well, those don’t exist anymore. Facebook is largely no longer a platform on which third-party apps run.

In a nutshell, at the time, one of the app developers using the Facebook platform was a researcher associated with a company called Cambridge Analytica. He had an app running on the platform called “this is your digital life”. It operated for some time in violation of Facebook’s terms of use for app developers, hoovering up significant amounts of personal information and then selling and/or using that information for, among other things, profiling and advertising targeting. Here’s how the court described it:

[36] In November 2013, Cambridge professor Dr. Aleksandr Kogan launched an app on the Facebook Platform, the TYDL App. The TYDL App was presented to users as a sort of personality quiz. Prior to launching the TYDL App, Dr. Kogan agreed to Facebook’s Platform Policy and Terms of Service. Through Platform, Dr. Kogan could access the Facebook profile information of every user who installed the TYDL App and agreed to its privacy policy. This included access to information about installing users’ Facebook friends. ...

[38] Media reports in December 2015 revealed that Dr. Kogan (and his firm, Global Science Research Ltd) had sold Facebook user information to Cambridge Analytica and a related entity, SCL Elections Ltd. The reporting claimed that Facebook user data had been used to help SCL’s clients target political messaging to potential voters in the then upcoming US presidential election primaries.
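
To make the mechanism concrete, here is a minimal, purely hypothetical sketch of the kind of request a Platform app of that era could make once a user installed it and granted permissions. I’m assuming the long-retired, pre-2014 Graph API conventions; the fetch_user_and_friends helper is my own illustration, not anything from the decision, and none of this works against Facebook’s current API.

    # Purely illustrative sketch (not from the decision): roughly the kind of
    # call a pre-2014 Facebook Platform app could make with a user's token.
    # These v1.0-era endpoints have long since been retired.
    import requests

    GRAPH = "https://graph.facebook.com"

    def fetch_user_and_friends(user_access_token):
        """Hypothetical helper: pull the installing user's profile and their
        friends list, as an app holding the old friends_* permissions could."""
        params = {"access_token": user_access_token}
        me = requests.get(f"{GRAPH}/me", params=params).json()
        # Under the old model, one user's consent could expose information
        # about that user's friends -- the mechanism at the heart of this case.
        friends = requests.get(f"{GRAPH}/me/friends", params=params).json()
        return {"user": me, "friends": friends.get("data", [])}

The point of the sketch is simply that a single user’s decision to install an app could open a window onto data about many other people who never interacted with the app at all.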

One thing to note is that in 2008–2009, the OPC investigated Facebook and the Granular Data Permissions model it was employing on the platform. Facebook said that the OPC sanctioned and expressly approved its GDP process after testing it at the conclusion of that investigation. Facebook argued that the Commissioner should not now be able to say that a model it had approved is inadequate. The Court didn’t have to go there.

In this application, the Privacy Commissioner alleged that Facebook failed to get adequate consent from users who used apps on Facebook’s platform, and failed to safeguard personal information that was disclosed to third party app developers. The Commissioner failed on both, but for different reasons. 

In the court process, both the Commissioner and Facebook had the opportunity to put their best evidence and best arguments forward. Facebook was able to talk about their policies, their practices with respect to third party developers, and the sorts of educational material that they provided as part of their privacy program. 

Ultimately, the court concluded that the Commissioner had failed to put forward evidence strong enough to lead to the conclusion that Facebook had not obtained adequate user consent for the collection, use and disclosure of users’ personal information, whether in connection with the app in question or with apps more generally.

It’s interesting to me that the Court notes that the Commissioner did not provide any evidence of what Facebook could have done better, in their view, nor did it offer any expert evidence about what would have been reasonable to do in the circumstances. This is from paragraph 71 of the decision:

[71] In assessing these competing characterizations, aside from evidence consisting of photographs of the relevant webpages from Facebook’s affiant, the Court finds itself in an evidentiary vacuum. There is no expert evidence as to what Facebook could feasibly do differently, nor is there any subjective evidence from Facebook users about their expectations of privacy or evidence that any user did not appreciate the privacy issues at stake when using Facebook. While such evidence may not be strictly necessary, it would have certainly enabled the Court to better assess the reasonableness of meaningful consent in an area where the standard for reasonableness and user expectations may be especially context dependent and are ever evolving.

The Court also seems to be saying that the Commissioner was trying to suck and blow at the same time:

[67] Overall, the Commissioner characterizes Facebook’s privacy measures as opaque and full of deliberate obfuscations, creating an “illusion of control”, containing reassuring statements of Facebook’s commitments to privacy and pictures of padlocks and studious dinosaurs that communicate a false sense of security to users navigating the relevant policies and educational material. On one hand, the Commissioner criticizes Facebook’s resources for being overly complex and full of legalese, rendering those resources as being unreasonable in providing meaningful consent, yet in some instances, the Commissioner criticizes the resources for being overly simplistic and not saying enough.

The judge then found that the Commissioner was essentially asking the court to draw a whole bunch of negative inferences in the absence of evidence, evidence which the Commissioner did not appear to have tried to obtain. Here’s the court at paragraph 72 of the decision:

[72] Nor has the Commissioner used the broad powers under section 12.1 of PIPEDA to compel evidence from Facebook. Counsel for the Commissioner explained that they did not use the section 12.1 powers because Facebook would not have complied or would have had nothing to offer. That may be; however, ultimately it is the Commissioner’s burden to establish a breach of PIPEDA on the basis of evidence, not speculation and inferences derived from a paucity of material facts. If Facebook were to refuse disclosure contrary to what is required under PIPEDA, it would have been open to the Commissioner to contest that refusal.

The judge then goes on to say at paragraph 77:

[77] In the absence of evidence, the Commissioner’s submissions are replete with requests for the Court to draw “inferences”, many of which are unsupported in law or by the record. For instance, the Court was asked to draw an adverse inference from an uncontested claim of privilege over certain documents by Facebook’s affiant. 

I think there are a couple of very important things to note here. The first is that the Privacy Commissioner’s report of findings, which was released with great fanfare and which concluded that Facebook had violated Canada’s federal privacy laws, was essentially based on inadequate evidence. The court found the record sadly lacking – not enough to establish that a violation was more likely than not – yet apparently this same evidentiary record was entirely satisfactory for the purposes of the Commissioner’s investigation and report of findings.

The second thing to note is that the court application was essentially the Privacy Commissioner’s second kick at the can. More evidence could have been obtained for this hearing had the Commissioner actually exercised his authorities under the legislation or under the rules of court. Having not done so, he came to court with an inadequate evidentiary record.

The second main violation alleged by the Privacy Commissioner was that Facebook had failed to adequately safeguard user information that was disclosed to third-party app developers. Essentially, the Privacy Commissioner’s argument was that Facebook continued to have an obligation to safeguard all of the information even after a user had chosen to disclose that information to a third-party app developer. Facebook took the view that the safeguarding obligation transferred to the app developer when the user initiated the disclosure to that app developer.

This is consistent with the scheme of the Act, in my view, because the responsibility to safeguard information and to limit its use falls on the organization that actually controls that information. Once it is given to an app developer for this purpose, it is under the control of that app developer and the obligation to safeguard it would rest with them.

The Court summarized the Commissioner’s argument, and stated its own conclusion, at paragraphs 85 and 86:

[85] The Commissioner counters that Facebook maintains control over the information disclosed to third-party applications because it holds a contractual right to request information from apps. The Commissioner maintains that Facebook’s safeguards were inadequate.

[86] I agree with Facebook; its safeguarding obligations end once information is disclosed to third-party applications. The Court of Appeal in Englander observed that the safeguarding principle imposed obligations on organizations with respect to their “internal handling” of information once in their “possession” (para 41). 

Very important here, though, is the court’s statement that companies can expect good faith and honesty in contractual dealings:

[91] In any event, even if the safeguarding obligations do apply to Facebook after it has disclosed information to third-party applications, there is insufficient evidence to conclude whether Facebook’s contractual agreements and enforcement policies constitute adequate safeguards. Commercial parties reasonably expect honesty and good faith in contractual dealings. For the same reasons as those with respect to meaningful consent, the Commissioner has failed to discharge their burden to show that it was inadequate for Facebook to rely on good faith and honest execution of its contractual agreements with third-party app developers.

So, in the result, the court did not conclude that Facebook had violated PIPEDA in any way in connection with the Cambridge Analytica incident.

Another important observation, in my view, is that the Privacy Commissioner of Canada did not actually investigate Cambridge Analytica itself, but focused all of its regulatory attention on Facebook. It is common ground that Cambridge Analytica and its principal violated Facebook’s policies and developer agreements by taking user data off the platform and using it for secondary, unauthorized purposes. But the Commissioner did not investigate Cambridge Analytica. He went after Facebook.

So what are the takeaways from this?

I think certain folks at the Office of the Privacy Commissioner should take an opportunity to think deeply about their approach to this entire thing. They should not be issuing flashy press releases and lobbing accusations in the way that they did without evidence that could support the allegations in a court of law. 

I also think we need to think carefully about what this says about privacy law reform in Canada. The Commissioner at the time used his finding as an example of why he should be given order-making powers and the power to impose penalties. His office even issued a handy-dandy table in which it concluded:

Because “Facebook disputed the validity of the findings and refused to implement the recommendations,” this should lead to the result that:

“The Office of the Privacy Commissioner of Canada’s interpretation of the law should be binding on organizations.

“To ensure effective enforcement, the Commissioner should be empowered to make orders and impose fines for non-compliance with the law.”

Almost certainly, if he’d had those powers, he would have imposed orders and fines on Facebook, based on what the Court concluded was inadequate evidence. The Court even disagreed with the Commissioner’s interpretation of the law. 

If we are going to have fines and orders under PIPEDA’s replacement, which seems inevitable, the OPC should NOT be in a position to impose them. The OPC should be the prosecutor, recommending any such fines or orders to a tribunal that will not show any deference to the Commissioner. 

And finally, this offers some certainty that once information has been disclosed to a third party, it is the third party’s legal obligation to safeguard it. The OPC clearly thought that the obligation remained with the company where the information originated, but that view was not shared by the court.

After the OPC filed its application in court, Facebook filed a judicial review application to have the whole thing thrown out. Facebook was not successful on that, mainly because they filed late and were not entitled to an extension. Regardless, there are some very interesting things in that decision, which I’ll discuss in an upcoming episode.


Sunday, December 18, 2022

Where to find me ...

Given the current dumpster fire at Twitter and the recent ban on outbound links to other social platforms, I thought I'd do a post on where to find me: