I also had the opportunity to talk about this silly take with CBC Information Morning Halifax and Cape Breton. You can listen to the interviews here: Halifax, Cape Breton.
The Canadian Privacy Law Blog: Developments in privacy law and writings of a Canadian privacy lawyer, containing information related to the Personal Information Protection and Electronic Documents Act (aka PIPEDA) and other Canadian and international laws.
Decision follows trend starting in BC that a virtual presence in Canada is enough to be ordered to produce records
The Federal Court of Canada, in connection with an application for a warrant and an assistance order under the Canadian Security Intelligence Service Act, was required to consider whether an assistance order under s. 22.3(1) of that Act could be issued to order a legal person with no physical presence in Canada to assist CSIS with giving effect to a warrant. The order would have extra-territorial effect.
In a redacted decision, Re Canadian Security Intelligence Service Act (Can), the court concluded that it can, provided that the subject of the assistance order has a “virtual presence” in Canada. The decision notes that the foreign company involved was willing to assist, but needed to see a court order to manage their possible legal liability:
[3] The affiant explained that [REDACTED] is incorporated and headquartered in [REDACTED] does not have physical offices or employees in Canada. It has a virtual presence in Canada that consists of [_some physical presence in Canada_]. It solicits business from Canadians and [REDACTED].
[4] The affiant also explained that [REDACTED] has been fully cooperative in providing assistance to CSIS to date, but has advised CSIS that it requires a judicial authorization from a Canadian court to minimize its legal risk in the event that CSIS uses the collected intelligence beyond analysis; [REDACTED]. [REDACTED] advised that it would continue to be cooperative pending and upon receipt of an Assistance Order.
The company’s willingness to comply wasn’t particularly material to the Court’s decision.
At the urging of the government and largely supported by a court-appointed amicus, the Court followed a trend of cases that have dealt with similar questions but involving production orders under the Criminal Code. The first of these cases is British Columbia (Attorney General) v. Brecknell, where the Royal Canadian Mounted Police were seeking to obtain a production order naming Craigslist. As with this CSIS case, Craigslist said they’d cooperate but needed to see a court order. The British Columbia Court of Appeal, influenced by the Equustek case from the Supreme Court of Canada, concluded that a court has jurisdiction to issue a production order naming an entity physically beyond the court’s jurisdiction provided they had a “virtual presence” within the jurisdiction.
The Court concluded:
[49] I find that the jurisprudence in the context of production orders issued pursuant to section 487.014 of the Criminal Code provides a good analogy and support for finding that this Court has the jurisdiction to issue an Assistance Order where in personam jurisdiction can be established. The two provisions are similar in purpose, albeit in different contexts, both are directed to a person, which includes an organization or entity that is a legal person, and similar considerations arise in determining whether the order should be issued where the subject has only a virtual presence in Canada.
[50] The considerations noted by the SCC in Equustek lend further support to taking an approach that reflects the realities of the internet dominated storage and transmission of documents and information. As noted in Brecknell, document control may exist in one jurisdiction, and the documents in another or in several others and “formalistic distinctions” between virtual and physical presence defeat the purpose of the legislation.
[51] Whether an organization or entity with only a virtual presence in Canada can establish a real and substantial connection with Canada sufficient to constitute presence in Canada will be a case-by-case determination. Where such in personam jurisdiction is established, the organization or entity that is subject to the Assistance Order and required to provide documents in their possession or control is considered to be in Canada although the documents may be stored elsewhere.
As with a number of the cases following Brecknell, the Court concluded that its ability to issue the order does not turn on whether it would be able to enforce the order, though that is a relevant consideration:
[53] I have considered the issue of enforcement of the Assistance Order on [REDACTED]. I note that they have been cooperative to date and indicate their ongoing intention to cooperate. However, I also agree with the submissions of the AGC and amicus and the jurisprudence, that the enforcement of the Order is a separate issue from whether the Court has jurisdiction to issue the Order, but remains a relevant consideration with respect to whether the Order should be issued based on the particular circumstances.
Consistent with the previous production order cases cited, the intended recipient was not a party to the hearing. All were ex parte, but some included amici.
Note: I believe that Brecknell was wrongly decided, but because all of these orders have been ex parte and unopposed, it will be some time before these arguments are made in court. See: David T Fraser, "British Columbia (Attorney General) v. Brecknell", Case Comment, (2020) 18:1 CJLT 135.
So someone from CSIS just called ….
There’s a first time for everything. You get a call from an “UNKNOWN NUMBER” and the caller says they work with Public Safety Canada and they’re looking for some information. This happens from time to time at universities, colleges, telecoms, internet-based businesses and others. Likely, they actually work for the Canadian Security Intelligence Service (known as CSIS) and they’re doing an investigation.
So what happens – or should happen – next? You should ask them what they’re looking for and what their lawful authority is. Get their contact information, and then call a lawyer who has dealt with this sort of situation before.
CSIS is an unusual entity. They’re not a traditional law enforcement agency. While they can also get warrants (more about that later), they have a very different mission. The mandate of CSIS is to
investigate activities suspected of constituting threats to the security of Canada (espionage/sabotage, foreign interference, terrorism, subversion of Canadian democracy);
take measures to reduce these threats;
provide security assessments on individuals who require access to sensitive government information or sensitive sites;
provide security advice relevant to the Citizenship Act or the Immigration and Refugee Protection Act; and
collect foreign intelligence within Canada at the request of the Minister of Foreign Affairs or the Minister of National Defence.
To carry out this mandate, CSIS may seek and obtain warrants. But they are unlike any warrant or production order you may see handed to you by a cop. CSIS warrants are more complicated to understand and possibly comply with than the more traditional law enforcement variety.
Canadians are often surprised to discover that we have a court that meets in secret, in a virtual bunker and hears applications for TOP SECRET warrants. These warrants can authorize “the persons to whom it is directed to intercept any communication or obtain any information, record, document or thing and, for that purpose, (a) to enter any place or open or obtain access to any thing; (b) to search for, remove or return, or examine, take extracts from or make copies of or record in any other manner the information, record, document or thing; or (c) to install, maintain or remove any thing.” These warrants can be accompanied by an assistance order, directing a person to assist with giving effect to a warrant.
A problem for third parties with these warrants is that they can be long-term and very open-ended. The name of the target of the investigation may be unknown at the time the warrant was obtained, and the warrant may authorize the collection of data related to that unknown person. It can authorize the collection of information about people who are in contact with that unknown person. It may authorize the collection of additional information related to those persons, such as IP addresses, email addresses, communications and even real-time interception of communications. Once the unknown person has been identified by CSIS (by name, an account identifier, online handle, etc.), they will seek to obtain further information. But the warrant itself likely does not name the person or any account identifiers, so the custodian of information cannot easily connect the request to particular information. And the recipient of the demand must be confident that they are authorized to disclose the requested information; otherwise, they would be in violation of privacy laws.
To complicate things further, because these warrants are generally secret, CSIS is not willing to provide a copy of the complete warrant to a third party from whom they are seeking data. They will generally permit you to look at a redacted version of the warrant but will not let you keep it. Diligent organizations know they can only disclose personal information where it is authorized and permitted by law, and they have a duty to ensure that they disclose only the responsive information. To do otherwise risks violating applicable privacy laws. Organizations should also document all aspects of the interaction and disclosure, which is a problem if you can’t get a copy of the warrant. Over time, CSIS and third-party organizations have developed procedures to address this.
While all of this may be TOP SECRET, nothing precludes a recipient of a warrant or an assistance order from seeking legal advice on how to properly and lawfully respond. Anyone dealing with such a situation should seek experienced legal advice.
In just the past few weeks, the Government of Canada launched a consultation on possible reforms to the CSIS Act, mainly under the banner of protecting Canadian democracy against foreign interference. Of course, changes to the statute will affect other aspects of their mission. The consultation is broadly organized under five “issues”, and it’s Issue #2 that is the most relevant to this discussion.
Issue #2: Whether to implement new judicial authorization authorities tailored to the level of intrusiveness of the techniques
Essentially, what they’re proposing is a form of production order similar to what we have in the Criminal Code of Canada. Such an order would still be subject to court approval and could compel a third party to produce information “where CSIS has reasonable grounds to believe that the production of the information is likely to yield information of importance that is likely to assist CSIS in carrying out its duties and functions.” Examples they give are basic subscriber information, call detail records, or transaction records.

These would be much more targeted and, in my view, much easier for the custodian of the information to evaluate and respond to. A production order would authorize CSIS to obtain the basic subscriber information of a named person or a known account identifier. Under the current warrant authority, those specific people may be unknown at the time the warrant is issued but still fall within the ambit of the warrant. Presumably a CSIS production order could be served in the usual way, like a Criminal Code production order, and the company could keep a copy of it for its records.

I’m generally very skeptical about the expansion of intrusive government powers, particularly when much of it takes place not in open court but in a closed one, but I don’t see this as an expansion. CSIS can be given this ability, supervised by the court, to streamline its existing authorities. They would need to be very careful if they were to purport to give it extraterritorial effect, since that would likely be very offensive to comity and the sovereignty of other countries, and intelligence collection is generally seen as more offensive and aggressive than investigating ordinary crime. It may specifically be illegal under foreign law for a company to provide data in response to such an order. And I think the order should, like a Criminal Code production order, explicitly give the recipient the right to challenge it.
So that’s the current situation with CSIS investigations, at least from a service provider’s point of view, and a hint at what’s to come. Again, if you find yourself in the uncomfortable and unfamiliar situation of taking a call from “public safety” or CSIS, reach out to get experienced legal advice from a lawyer who has been through the process before.
So Bill C-27, also known as the Digital Charter Implementation Act, 2022, has been before Canada's Parliament for consideration for quite some time. Even before this parliamentary session, a substantially similar bill was tabled and then died on the order paper in the previous parliamentary session. After more than 20 years of the Personal Information Protection and Electronic Documents Act, people have had a long time to think about improvements that perhaps could or should be made to our national privacy regime.
One thing that I've heard over and over again, particularly from privacy activists since 2018, is the suggestion that Canada should simply follow Europe's lead and implement a form of its General Data Protection Regulation. Privacy activists and others hail it as the “gold standard”.
Sometimes when I hear more from these folks, I realize that for some of them, it appears that all they know about the GDPR is the possibility of massive, company-ruining penalties. What they don't seem to understand is that it is relatively rare in Europe for a business to use consent as the basis for the collection, use or disclosure of personal information. This is in stark contrast to the current law, PIPEDA, where consent really is the only lawful basis for collecting, using and disclosing personal information.
Here is a case in point: an op-ed in the Globe and Mail written by Jim Balsillie, the former co-CEO of Research In Motion (also known as BlackBerry) and, more recently, the philanthropist behind the Centre for Digital Rights and the Centre for International Governance Innovation.
In this op-ed, Balsillie praises “the EU's landmark General Data Protection Regulation, a law that sets the baseline for modern protections around the world…”
He then goes on to viciously attack a portion of Bill C-27's CPPA that is modeled directly on a provision from the GDPR: the ability for an organization to collect, use or disclose personal information without consent on the basis of legitimate interests.
Here is what Jim has to say in his op-ed: “For example, the proposed new law creates a broad carve-out for surveillance without knowledge or consent based on legitimate interests… There's worse: it's the businesses themselves that determine what constitutes legitimate interest for surveillance, and they are under no obligation to tell the individual they are tracking and profiling them.”
Look, either it is the gold standard or it is not.
And I really shouldn't have to tell a business leader that every one of us gets to decide how we comply with the law, and if that assessment is incorrect, that is where enforcement comes in. The bill contains detailed information about what can be a legitimate interest and what cannot. Frankly, I am getting a little tired of this breathless hyperbole and want to set the record straight on what “legitimate interests” is and what it is not.
First, we'll look at the GDPR, then we will look at Bill C-27.
Article 6 of the GDPR outlines the lawful bases for processing personal data. These include consent, contract, legal obligation, vital interests, public task, and legitimate interests. We’re going to zoom in on the last one – legitimate interests.
Legitimate interests is one of the more flexible lawful bases, probably the most used, and also the most open to interpretation. It allows data processing on the basis of the legitimate interests pursued by a data controller or a third party, unless such interests are overridden by the interests or fundamental rights and freedoms of the data subject.
This requires the data controller to carry out an analysis to see if “legitimate interests” can be used instead of another basis, such as consent.
To rely on legitimate interests, you must:
1. Identify a legitimate interest (be it commercial, individual, or societal benefits).
2. Show that the processing is necessary to achieve it.
3. Balance it against the individual’s interests, rights, and freedoms. This involves conducting a Legitimate Interests Assessment (LIA).
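As a purely illustrative sketch (the function and parameter names are mine, not from the GDPR, and a real LIA is a documented qualitative legal analysis rather than three booleans), the three steps above can be thought of as a gating test: failing any one of them means legitimate interests cannot be relied on.

```python
# Hypothetical sketch of the three-step Legitimate Interests Assessment
# described above. The yes/no structure is invented for illustration only;
# in practice each step is a reasoned, documented assessment.

def legitimate_interests_assessment(purpose_is_legitimate: bool,
                                    processing_is_necessary: bool,
                                    individual_rights_override: bool) -> bool:
    """Return True only if all three LIA steps are satisfied."""
    if not purpose_is_legitimate:        # step 1: identify a legitimate interest
        return False
    if not processing_is_necessary:      # step 2: necessity test
        return False
    if individual_rights_override:       # step 3: balancing test
        return False
    return True

# e.g. fraud prevention that is necessary and not outweighed by the
# individual's rights may be able to rely on legitimate interests:
print(legitimate_interests_assessment(True, True, False))  # True
```

The point of the sketch is simply that the basis is conjunctive: a legitimate purpose alone is not enough if the processing is unnecessary or the individual's rights tip the balance.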
Legitimate interests can include network and information security, preventing fraud, direct marketing, and the like.
Using “legitimate interests” is not just carte blanche to do whatever you want. When invoking legitimate interests, the controller has to ensure transparency, adhere to data minimization principles, and implement safeguards to protect the rights of individuals.
The proposed Consumer Privacy Protection Act in Canada has a similar framework. Personally, I think it should be replaced with an almost word for word copy from the GDPR in order to remove – or at least reduce – unnecessary barriers for organizations that operate internationally.
But let's focus on what is in fact written in the bill as it currently exists.
Section 18(3) says an organization may collect or use an individual's personal information without their knowledge or consent if the collection or use is made for the purpose of an activity in which the organization has a legitimate interest that outweighs any potential adverse effect on the individual resulting from that collection or use, provided that (a) a reasonable person would expect the collection or use for such an activity, and (b) the personal information is not collected or used for the purpose of influencing the individual's behaviour or decisions.
So, as in Europe, it requires balancing the organization's interest against the interests of the individual. Unlike in Europe, it requires that the collection or use be for purposes that would essentially be obvious or expected by the individual. It is unclear what the intended scope of paragraph (b) is, since there are so many things that happen in the world that would reasonably be expected to alter somebody's behaviour.
Subsection (4) sets requirements that must be met before an organization relies on legitimate interests for the collection or use of personal information. It says that prior to collecting or using personal information under subsection (3), the organization must identify any potential adverse effect on the individual that is likely to result from the collection or use, identify and take reasonable measures to reduce the likelihood that those effects will occur or to mitigate or eliminate them, and comply with any prescribed requirements. That means that additional requirements could be set out in regulations to come.
Then it says in subsection (5) that the organization must record its assessment of how it meets the condition set out in subsection (4) and must, on request, provide a copy of the assessment to the Privacy Commissioner.
This doesn't, to me, sound like a completely arbitrary mechanism where organizations get to draw the line wherever they want. They have to document that decision-making and have to make it available to the privacy commissioner on request.
But that is not the end of it. Section 62 talks about what an organization has to include in its privacy statement to the public, and this says that they have to provide a general account of how the organization uses personal information and how it applies the exceptions to the requirement to obtain an individual's consent under the Act, including a description of any activities referred to in subsection 18(3) in which it has a legitimate interest.
So this means that every organization that determines that it is appropriate to use legitimate interests for the collection or use of personal information has to document their decision making in a defensible manner, knowing that it could be presented to the Privacy Commissioner. And they don't get to do it sneakily as the breathless critics would have you think, because they have to publish it in black and white, plain language in their public facing privacy statement.
In addition to the legitimate interests basis for the collection or use of personal information, the proposed CPPA also includes certain categories of business activities for which personal information can be collected or used without an individual's knowledge or consent. This is in subsection 18(1).
This says an organization may collect or use an individual's personal information without their knowledge or consent if the collection or use is made for the purpose of a business activity described in subsection (2), a reasonable person would expect the collection or use for such an activity, and the personal information is not collected or used for the purpose of influencing the individual's behaviour or decisions. Does that sound familiar? This is a similar framework to what is in subsection 18(3).
This provision sets out the permissible business activities that fit within this exception. The first is an activity that is necessary to provide a product or service that the individual has requested from the organization. It has to be necessary. Or it can be an activity that is necessary for the organization's information, system or network security, an activity that is necessary for the safety of a product or service that the organization provides, or any other prescribed activity that could be set out in future regulations.
While I would like Canada’s version of “legitimate interests” to more closely parallel the one in the European General Data Protection Regulation, I think it is a completely reasonable addition to Canada’s privacy law. It requires a deliberate analysis and determination of whether it can be used and requires the organization to be transparent with its customers about the practice.
Recently, the Information and Privacy Commissioner of British Columbia issued a decision that essentially shuts down most use of facial recognition technology in the retail context.
What’s interesting is that the Commissioner undertook this investigation of his own accord. In order to see how prevalent the use of facial recognition was among the province’s retailers, the OIPC surveyed 13 of the province’s largest retailers (including grocery, clothing, electronics, home goods, and hardware stores): 12 responded that they did not use FRT. The remaining retailer, Canadian Tire Corporation, requested that the OIPC contact their 55 independently owned Associate Dealer stores in the province. In the result, 12 stores reported using FRT. Based on these 12 responses, the Commissioner commenced an investigation under s. 36(1)(a) of the Personal Information Protection Act of four of the locations, scattered across the province.
What’s also interesting is that the stores immediately ceased use of the technology, but the Commissioner determined that doing a full investigation was warranted, so that retailers would be aware of the privacy issues with the use of facial recognition in this context.
The investigated stores used two different vendors’ systems, but they essentially operated the same way: the systems took pictures or videos of anyone who entered the stores, as they came within range of the FRT cameras. This included customers, staff, delivery personnel, contractors, and minors who might have entered the store. Using software, the facial coordinates from these images or videos were mapped to create a unique biometric template for each face. So everyone was analyzed this way.
The systems then compared the biometrics of new visitors with those stored in a database of previously identified "Persons of Interest," who were allegedly involved in incidents such as theft, vandalism, harassment, or assault. When a new visitor's biometrics matched an existing record in the database, the FRT system sent an automatic alert to store management and security personnel via email or a mobile device application. The alerts contained the newly captured image or video that triggered the match, along with a copy of the previously collected image from the Persons of Interest database and any relevant comments or details about the prior incidents. According to store managers, these alerts were “advisory” until the match was confirmed in person by management or security personnel.
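To make the flow described above concrete, here is a minimal sketch of the match-and-alert logic; the template vectors, the similarity measure, and the threshold are all invented for illustration (real systems use proprietary embeddings and matching engines), and the "advisory" flag mirrors the decision's point that a match had to be confirmed in person by staff.

```python
import math

# Hypothetical sketch of the FRT match-and-alert flow described above.
# A real system derives a biometric template (an embedding vector) from
# facial coordinates; the numbers and threshold here are illustrative only.

MATCH_THRESHOLD = 0.9  # assumed similarity cut-off


def cosine_similarity(a, b):
    """Similarity between two template vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


def check_visitor(template, persons_of_interest):
    """Compare a new visitor's template against the Persons of Interest
    database; return an 'advisory' alert on a match, else None."""
    for record in persons_of_interest:
        if cosine_similarity(template, record["template"]) >= MATCH_THRESHOLD:
            return {
                "advisory": True,  # staff must confirm the match in person
                "prior_incident": record["incident"],
            }
    return None


poi_db = [{"template": [0.9, 0.1, 0.4], "incident": "theft (alleged)"}]
print(check_visitor([0.88, 0.12, 0.41], poi_db))  # similar face: alert fires
print(check_visitor([0.1, 0.9, -0.2], poi_db))    # dissimilar face: None
```

Note that in this sketch, as in the stores' systems, every visitor's face is templated and compared, whether or not they are in the database; that is precisely the breadth the Commissioner took issue with.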
Store management reported that after a positive match was verified, the nature of the prior incident allegedly involving the individual helped determine a course of action. If a prior incident included violence, management or security staff would escort the individual from the store. If the prior incident involved theft, management may have chosen to surveil or remove the person in question.
The legal questions posed by the Commissioner were (1) whether consent was required under PIPA for the collection and use of images for this purpose, (2) whether the stores provided notification and obtained the necessary consent (through signage or otherwise) and – most importantly – (3) whether this collection and use is for an “appropriate purpose” under ss. 11 and 14 of PIPA.
The first question was easy to answer: Yes, consent is required in this context. PIPA, like PIPEDA, requires organizations to obtain consent, either explicitly or implicitly, before collecting, using, or disclosing personal information unless a specific exception applies. No such exceptions applied in this case. Therefore, the Commissioner concluded it was incumbent on the stores to show that individuals gave consent for the collection of their personal information.
How would you get that consent? Well the stores had signage at the entrances. Clear signage is usually sufficient for the use of surveillance cameras, but the question would be whether these would be sufficient for this use.
Store number 1 had a sign that stated, in part: “these premises are monitored by video surveillance that may include the use of electronic and/or biometric surveillance technologies.”
The Commissioner said this was inadequate. The notice did not state the purposes for the collection of personal information. Also, stating that biometric surveillance “may” be in use did not reflect that the store continuously employed the technology. The Commissioner said the average person cannot reasonably be expected to understand how their information may be handled by “biometric surveillance technologies,” let alone the implications and risks of this new technology. Consent requires that an individual understands what they are agreeing to – and the posted notification failed to adequately alert the public in this case, according to the Commissioner. This store failed to meet notification requirements under PIPA.
The second store had a notice that stated, in part: “facial recognition technology is being used on these premises to protect our customers and our business.”
This one was also not satisfactory to the Commissioner. The purpose, as set out, is so broad that the statement would relay no specific meaning to the average person. Furthermore, the notice does not explain what facial recognition technology entails or the nature of the personal information collected. One cannot reasonably assume that members of the public understand what FRT is, nor its privacy implications, according to the Commissioner.
Stores 3 and 4 had better notices, but they still didn’t satisfy the Commissioner. Their notices stated: “video surveillance cameras and FRT (also known as biometrics) are used on these premises for the protection of our customers and staff. These technologies are also used to support asset protection, loss prevention and to prevent persons of interest from conducting further crime. The images are for internal use only, except as required by law or as part of a legal investigation.”
It has more detail, but was not that well written. It does not say what “FRT” is. The commissioner noted that the abbreviation is not yet well-known or widely understood. Using the full phrase “facial recognition technology” along with a basic explanation of its workings would have provided a more accurate description of the stores’ data-collection activities. Even so, the Commissioner said that North American society is not yet at the point where it is reasonable to assume that the majority of the population understands what personal information FRT collects, or creates, as well as the technology’s privacy implications. All of this would have to be spelled out.
While you may be able to rely on implied consent for the use of plain old fashioned surveillance cameras, the Commissioner concluded that you cannot for facial recognition technology, at least in this context.
The Commissioner said facial biometrics are a highly sensitive, unique, and unchangeable form of personal information. Collecting, using, and sharing this information goes beyond what people would reasonably expect when entering a retail store, and using FRT creates a significant and lasting risk of harm. The Commissioner said the distinctiveness and permanence of this biometric data can make it an attractive target for misuse, potentially becoming a tool to compromise an individual's identity. In the wrong hands, the Commissioner wrote, this information can lead to identity theft, financial loss, and other severe consequences. (I am not entirely sure how…)
As a result, the four stores were required to obtain explicit consent from customers before collecting their facial biometrics. However, they did not make any attempts, either verbally or in writing, to obtain such consent.
So the notices were not adequate and the stores didn’t get the right kind of consent. But the last nail in the coffin for this use of biometrics was the Commissioner’s conclusion about whether the use of facial recognition technology for these purposes is reasonable.
Reasonableness is determined by looking at the amount of personal information collected, the sensitivity of the information, the likelihood of the measure being effective, and whether less intrusive alternatives had been attempted.
With respect to the amount of personal information collected, it was vast. The Commissioner said a large quantity of personal information was collected from various sources, including customers, staff, contractors, and other visitors. The stores reported that their establishments were visited by hundreds of individuals of all ages, including minors, every day, so during a single month the FRT systems captured images of thousands of people who were simply shopping and not engaging in any harmful activities. The sheer volume of information collected suggests that the collection was unreasonable.
You won’t be surprised that the Commissioner concluded that the personal information at issue was super-duper sensitive.
With respect to the likelihood of being effective, the stores didn’t really have any system in place to measure it. The Commissioner concluded it really wasn’t that effective.
The Commissioner wrote that before implementing new technology that collects personal information, organizations should establish a reliable method to measure the technology's effectiveness. This typically involves comparing relevant metrics before and after the technology's implementation.
However, in this case, the stores did not provide any systematic evidence of measuring their FRT system's effectiveness. Instead, they only gave anecdotal evidence of incidents before and after installation. Without a clear way to measure the technology's effectiveness, it is challenging to analyze this factor, particularly when collecting highly sensitive personal information.
The accuracy of FRT is also a related issue. Systems such as these have been widely reported to falsely match the facial biometrics of people of colour and women.
The store managers acknowledged that the alerts could be inaccurate and relied on staff to compare database images to a visual observation of the individual. This manual check by staff suggests that the FRT system may not be effective. False identification can have harmful consequences when innocent shoppers are followed or confronted based on an inaccurate match.
Besides the system's accuracy, its effectiveness can also be judged against the existing methods used by the stores to identify potential suspects. The store managers stated that their security guards and managers typically knew the "bad actors" and could recognize them without FRT alerts. The persons of interest were often professional thieves who repeatedly returned to the store.
Moreover, there is little evidence that FRT enhanced customer and employee safety. Whether a person of interest was identified by FRT or by the visual recognition of an employee, the stores' next steps were the same. These involved deciding whether to observe the suspected person or interact with them directly, including escorting them from the premises. In either case, store managers rarely reported contacting the police for assistance.
As for whether less intrusive alternatives had been attempted, the less intrusive measures were what they were doing before. The Commissioner concluded that the use of FRT didn’t add much to solving the stores’ problems, but collected a completely disproportionate amount of sensitive personal information. The less intrusive means – without biometrics – largely did the trick.
In the end, the Commissioner made three main recommendations.
The first was that the stores should build and maintain robust privacy management programs that guide internal practices and contracted services – presumably so they wouldn’t implement practices such as these that are offside the legislation.
The report also makes two recommendations for the BC government. First, the BC Government should amend the Security Services Act or similar enactments to explicitly regulate the sale or installation of technologies that capture biometric information.
Second, the BC Government should amend PIPA to create additional obligations for organizations that collect, use, or disclose biometric information, including requiring notification to the OIPC. This would be similar to what’s in place in Quebec, where biometric databases must be disclosed to the province’s privacy commissioner.
I think, for all intents and purposes, this shuts down the use of facial recognition technology in the retail context, where it is being used to identify “bad guys”.
Just this past week, the Office of the Privacy Commissioner of Canada was on the receiving end of a Federal Court decision that I would characterize as more than a little embarrassing for the Commissioner.
In a nutshell, the Commissioner took Facebook to court over the Cambridge Analytica incident and lost, big time.
You may recall from 2019, when the Privacy Commissioner of Canada and the Information and Privacy Commissioner of British Columbia released, with as much fanfare as possible, the result of their joint investigation into Facebook related to the Cambridge Analytica incident.
Both of the Commissioners concluded, at that time, that Facebook had violated the federal and British Columbia privacy laws, principally related to transparency and consent.
Because Facebook was not prepared to accept that finding, the Privacy Commissioner of Canada commenced an application in the Federal Court to have the Court make the same determination and issue a whole range of orders against the social media company.
The hearing of that application took place a short time ago, and a decision was just released by the Federal Court this past week. It concluded that the Privacy Commissioner did not prove that Facebook violated our federal privacy law in connection with the Cambridge Analytica incident, and it made a few other interesting findings and observations.
Just a little bit of additional procedural information: under our current privacy law, the Privacy Commissioner of Canada does not have the ability to issue any orders or to levy any penalties. What can happen after the Commissioner has released his report of findings is that the complainant, or the Commissioner with the complainant’s okay, can commence an application in the Federal Court of Canada. This is what is called a de novo proceeding.
The finding from the Privacy Commissioner below can be considered as part of the record, but it is not a decision being appealed from. Instead, the applicant – in this case, the Privacy Commissioner – has the burden of proving to a legal standard that the respondent has violated the federal privacy legislation.
This has to be done with actual evidence, which is where the Privacy Commissioner fell significantly short in the Facebook case.
It has to be remembered that the events being investigated took place almost 10 years ago, and the Facebook platform is substantially different now compared to what it looked like then. If you were a Facebook user at that time, you probably remember a whole bunch of apps running on the Facebook platform. You probably were annoyed by friends who were playing Farmville and sending you invitations and updates. Well, these don't exist anymore. Facebook is largely no longer a platform on which third party apps will run.
In a nutshell, at the time, one of the app developers that used the Facebook platform was a researcher associated with a company called Cambridge Analytica. They had an app running on the platform called “this is your digital life”. It operated for some time in violation of Facebook's terms of use for app developers, hoovering up significant amounts of personal information and then selling and/or using that information for, among other things, profiling and advertising targeting. Here’s how the court described it:
[36] In November 2013, Cambridge professor Dr. Aleksandr Kogan launched an app on the Facebook Platform, the TYDL App. The TYDL App was presented to users as a sort of personality quiz. Prior to launching the TYDL App, Dr. Kogan agreed to Facebook’s Platform Policy and Terms of Service. Through Platform, Dr. Kogan could access the Facebook profile information of every user who installed the TYDL App and agreed to its privacy policy. This included access to information about installing users’ Facebook friends. ...
[38] Media reports in December 2015 revealed that Dr. Kogan (and his firm, Global Science Research Ltd) had sold Facebook user information to Cambridge Analytica and a related entity, SCL Elections Ltd. The reporting claimed that Facebook user data had been used to help SCL’s clients target political messaging to potential voters in the then upcoming US presidential election primaries.
One thing to note is that in 2008-2009, the OPC investigated Facebook and the Granular Data Permissions model that it was employing on its platform. Facebook said that the OPC sanctioned and expressly approved its GDP process, after testing it, at the conclusion of that investigation. It argued that the Commissioner should not now be able to say that a model it approved is inadequate. The Court didn’t have to go there.
In this application, the Privacy Commissioner alleged that Facebook failed to get adequate consent from users who used apps on Facebook’s platform, and failed to safeguard personal information that was disclosed to third party app developers. The Commissioner failed on both, but for different reasons.
In the court process, both the Commissioner and Facebook had the opportunity to put their best evidence and best arguments forward. Facebook was able to talk about their policies, their practices with respect to third party developers, and the sorts of educational material that they provided as part of their privacy program.
Ultimately, the court concluded that the Commissioner had failed to put forward strong evidence to lead to the conclusion that Facebook had not obtained adequate user consent for the collection, use and disclosure of their personal information when using the app in question, or apps more generally.
It’s interesting to me that the Court notes that the Commissioner did not provide any evidence of what Facebook could have done better, in their view, nor did it offer any expert evidence about what would have been reasonable to do in the circumstances. This is from paragraph 71 of the decision:
[71] In assessing these competing characterizations, aside from evidence consisting of photographs of the relevant webpages from Facebook’s affiant, the Court finds itself in an evidentiary vacuum. There is no expert evidence as to what Facebook could feasibly do differently, nor is there any subjective evidence from Facebook users about their expectations of privacy or evidence that any user did not appreciate the privacy issues at stake when using Facebook. While such evidence may not be strictly necessary, it would have certainly enabled the Court to better assess the reasonableness of meaningful consent in an area where the standard for reasonableness and user expectations may be especially context dependent and are ever evolving.
The Court also seems to be saying that the Commissioner was trying to suck and blow at the same time:
[67] Overall, the Commissioner characterizes Facebook’s privacy measures as opaque and full of deliberate obfuscations, creating an “illusion of control”, containing reassuring statements of Facebook’s commitments to privacy and pictures of padlocks and studious dinosaurs that communicate a false sense of security to users navigating the relevant policies and educational material. On one hand, the Commissioner criticizes Facebook’s resources for being overly complex and full of legalese, rendering those resources as being unreasonable in providing meaningful consent, yet in some instances, the Commissioner criticizes the resources for being overly simplistic and not saying enough.
The judge then found that the Commissioner was essentially asking the court to draw a whole bunch of negative inferences in the absence of evidence, evidence which the Commissioner did not appear to have tried to obtain. Here’s the court at paragraph 72 of the decision:
[72] Nor has the Commissioner used the broad powers under section 12.1 of PIPEDA to compel evidence from Facebook. Counsel for the Commissioner explained that they did not use the section 12.1 powers because Facebook would not have complied or would have had nothing to offer. That may be; however, ultimately it is the Commissioner’s burden to establish a breach of PIPEDA on the basis of evidence, not speculation and inferences derived from a paucity of material facts. If Facebook were to refuse disclosure contrary to what is required under PIPEDA, it would have been open to the Commissioner to contest that refusal.
The judge then goes on to say at paragraph 77:
[77] In the absence of evidence, the Commissioner’s submissions are replete with requests for the Court to draw “inferences”, many of which are unsupported in law or by the record. For instance, the Court was asked to draw an adverse inference from an uncontested claim of privilege over certain documents by Facebook’s affiant.
I think there are a couple of very important things to note here. The first is that the Privacy Commissioner’s report of findings, which was released with great fanfare and which concluded that Facebook had violated Canada's federal privacy laws, was essentially based on inadequate evidence. The court found it sadly lacking – not enough to convince the Court that a violation was more likely than not – but apparently this evidentiary record was entirely satisfactory for the purposes of the Commissioner’s investigation and report of findings.
The second thing to note here is that the court application was essentially the Privacy Commissioner's second kick at the can. More evidence could have been obtained for this hearing had the Commissioner actually exercised his authorities under the legislation or under the rules of court. Even with that second chance, the Commissioner came to court with an inadequate evidentiary record.
The second main violation that was alleged by the Privacy Commissioner was that Facebook had failed to adequately safeguard user information that was disclosed to third party app developers. Essentially, the Privacy Commissioner's argument is that Facebook continues to have an obligation to safeguard all of the information even after a user has chosen to disclose that information to a third party app developer. Facebook took the view that the safeguarding obligation transferred to the app developer when the user initiated the disclosure to that app developer.
This is consistent with the scheme of the Act, in my view, because the responsibility to safeguard information and to limit its use falls on the organization that actually controls that information. Once it is given to an app developer for this purpose, it is under the control of that app developer and the obligation to safeguard it would rest with them.
The Court summarized the Commissioner’s argument on this point in paragraph 85:
[85] The Commissioner counters that Facebook maintains control over the information disclosed to third-party applications because it holds a contractual right to request information from apps. The Commissioner maintains that Facebook’s safeguards were inadequate.
[86] I agree with Facebook; its safeguarding obligations end once information is disclosed to third-party applications. The Court of Appeal in Englander observed that the safeguarding principle imposed obligations on organizations with respect to their “internal handling” of information once in their “possession” (para 41).
Very importantly here, though, is the statement from the court that companies can expect good faith and honesty in contractual agreements:
[91] In any event, even if the safeguarding obligations do apply to Facebook after it has disclosed information to third-party applications, there is insufficient evidence to conclude whether Facebook’s contractual agreements and enforcement policies constitute adequate safeguards. Commercial parties reasonably expect honesty and good faith in contractual dealings. For the same reasons as those with respect to meaningful consent, the Commissioner has failed to discharge their burden to show that it was inadequate for Facebook to rely on good faith and honest execution of its contractual agreements with third-party app developers.
This is the conclusion that the court reached. So, in the result, the court did not conclude that Facebook had violated PIPEDA in any way in association with the Cambridge Analytica incident.
Another important observation, in my view, is that the Privacy Commissioner of Canada did not actually investigate Cambridge Analytica itself, but focused all of its regulatory attention on Facebook. It is common ground that Cambridge Analytica and its principal violated Facebook's policies and developer agreements in taking user data off the platform and using it for secondary, unauthorized purposes. But the Commissioner did not investigate Cambridge Analytica. He went after Facebook.
So what are the takeaways from this?
I think certain folks at the Office of the Privacy Commissioner should take an opportunity to think deeply about their approach to this entire thing. They should not be issuing flashy press releases and lobbing accusations in the way that they did without evidence that could support the allegations in a court of law.
I also think we need to think carefully about what this says for privacy law reform in Canada. The Commissioner at the time used his finding as an example of why he should be given order-making powers and the power to impose penalties. His office even issued a handy-dandy table in which it concluded:
Because “Facebook disputed the validity of the findings and refused to implement the recommendations,” this should lead to the result that:
“The Office of the Privacy Commissioner of Canada’s interpretation of the law should be binding on organizations.
To ensure effective enforcement, the Commissioner should be empowered to make orders and impose fines for non-compliance with the law.”
Almost certainly, if he’d had those powers, he would have imposed orders and fines on Facebook, based on what the Court concluded was inadequate evidence. The Court even disagreed with the Commissioner’s interpretation of the law.
If we are going to have fines and orders under PIPEDA’s replacement, which seems inevitable, the OPC should NOT be in a position to impose them. The OPC should be the prosecutor, recommending any such fines or orders to a tribunal that will not show any deference to the Commissioner.
And finally, this offers some certainty that once information has been disclosed to a third party, it is the third party’s legal obligation to safeguard it. The OPC clearly thought that the obligation remained with the company where the information originated, but that view was not shared by the court.
After the OPC filed its application in court, Facebook filed a judicial review application to have the whole thing thrown out. Facebook was not successful on that, mainly because they filed late and were not entitled to an extension. Regardless, there are some very interesting things in that decision, which I’ll discuss in an upcoming episode.