Showing posts with label pipa. Show all posts

Sunday, October 19, 2025

Canada's Privacy Regulators vs. TikTok: A critical overview


(This post is largely a transcript of the YouTube and podcast episode above.)

On September 23, 2025, the Federal Privacy Commissioner and his provincial counterparts in British Columbia, Alberta and Quebec issued a joint report of findings into TikTok.  This is a big one. It raises some interesting — and troubling — questions about jurisdiction, children’s privacy, reasonableness, consent, and what it actually means to protect privacy.

In my view, the Commissioners have imposed an almost impossible standard on TikTok — one that, ironically, could actually reduce privacy for users. Let’s unpack what they found, and why I think they may have gone too far.

I’ll note that the finding is more than thirty pages long, with almost two hundred paragraphs. This should be treated as an overview and not a deep dive into all of the minutiae. 

TikTok Pte. Ltd., a Singapore-based company owned by ByteDance, operates one of the most popular social-media platforms in the world. In Canada alone, about 14 million monthly users scroll, post, and engage on TikTok.

The investigation examined whether TikTok’s collection, use, and disclosure of personal information complied with PIPEDA, Quebec’s Private Sector Act, and the provincial privacy statutes of Alberta and B.C.

A key preliminary issue was jurisdiction.

The British Columbia Personal Information Protection Act is a bit quirky. It says 

Application

3    (1)    Subject to this section, this Act applies to every organization.

    (2)    This Act does not apply to the following:

    …

    (c)    the collection, use or disclosure of personal information, if the federal Act applies to the collection, use or disclosure of the personal information;

TikTok argued that because of this, only one of the Federal Act or the British Columbia Act could apply. 

In my view, the response to this argument by the Commissioners is facile. They said: 

[22] Privacy regulation is a matter of concurrent jurisdiction and an exercise of cooperative federalism, which is a core principle of modern division of powers jurisprudence that favours, where possible, the concurrent operation of statutes enacted by the federal and provincial levels of government. PIPA BC has been “designed to dovetail with federal laws” in its protection of quasi-constitutional privacy rights of British Columbians. The legislative history of the enactment of PIPEDA and PIPA BC and their interlocking structure support the interpretation that PIPEDA and PIPA BC operate together seamlessly.

[23] PIPA BC operates where PIPEDA does not, and vice versa. In cases such as the present, which involve a single organization operating across both jurisdictions with complex collection, use, and disclosure of personal information, both acts operate with an airtight seal to leave no gaps. An interpretation of s. 3(2)(c) that would deprive the OIPC BC of its authority in any circumstance the OPC also exercises authority is inconsistent with the interlocking schemes and offends the principle of cooperative federalism.

In my view, this has nothing to do with “cooperative federalism”. Here, they’re waving their hands instead of engaging in helpful legal analysis. The British Columbia legislature chose to say that if PIPEDA applies, PIPA will not. This is not about constitutional law.

The Commissioners could have articulated a much clearer and more straightforward response to this argument: TikTok collects personal information across Canada, in BC and elsewhere. PIPA applies to “the collection, use and disclosure of personal information that occurs within the Province of British Columbia” (the quoted language comes from the federal regulation regarding PIPEDA’s application in British Columbia). So in this joint investigation, BC’s PIPA applies to the personal information of British Columbians, and PIPEDA applies to the personal information of individuals outside of British Columbia. They could have said that, but they didn’t.

They did say it was about “overlapping protections” and not “silos”. I think this is incorrect. The British Columbia Act and the federal regulation clearly say: this is “the BC Commissioner’s silo”, and this is “the Federal Commissioner’s silo.”

So, the investigation moved forward jointly, setting the stage for three major questions:

  1. Were TikTok’s purposes appropriate?

  2. Was user consent valid and meaningful?

  3. Did TikTok meet its transparency obligations — especially in Quebec?

The first issue asked whether TikTok was collecting and using personal information — particularly from children — for an appropriate and legitimate purpose.

TikTok’s terms forbid users under 13 (14 in Quebec), but the Commissioners found its age-assurance tools were largely ineffective. The platform relied mainly on a simple birth-date gate at signup, plus moderation for accounts flagged by other users or automated scans.

Through these mechanisms, TikTok said that it removes around half a million underage Canadian accounts each year — but regulators concluded that many more likely go undetected.

It seems to me that terminating half a million accounts a year because it suspects the users may be underage is a pretty strong sign that the company is sincere in its desire NOT to have kids on its platform.

They also noted TikTok already uses sophisticated facial- and voice-analytics tools for other purposes, like moderating live streams or estimating audience demographics, but not to keep kids off the platform. The regulators want TikTok to re-purpose these tools for age estimation. 

The Commissioners found that TikTok was collecting sensitive information from children — including behavioral data and inferred interests — without a legitimate business need. In their view, that violates the “reasonable person” standard under PIPEDA s. 5(3) and the comparable provisions in the provincial laws.

This part makes my head hurt a bit. The regulators said:

[67] In light of the above (as summarized in paragraphs 64 to 66), we determined that TikTok has no legitimate need or bona fide business interest for its collection and use of the sensitive personal information of these underage users (in the context of PIPEDA, PIPA AB and PIPA BC), nor is this collection and use in support of a legitimate issue (in the context of Quebec’s Privacy Sector Act). It is therefore our finding, irrespective of TikTok’s assertion that this collection and use is unintentional, that TikTok’s purposes for collection and use of personal information of underage users are inappropriate, unreasonable, and illegitimate, and that TikTok contravened subsection 5(3) of the PIPEDA, section 4 of Quebec’s Private Sector Act, sections 11 and 14 of PIPA BC and sections 11 and 16 of PIPA AB.

It’s clear that TikTok does not want children on its platform and takes active steps to keep them off. The regulators were clear that they didn’t think the measures taken were adequate, but I didn’t see them say that TikTok was insincere about this. So they found that TikTok’s purposes for collecting personal information from children were not reasonable.

But TikTok had no purpose for collecting personal information from children. If kids make it through the age-gate and don’t have their accounts deleted, TikTok still does not want that data. The regulators essentially said: “Your collection of personal information that you do not want and do not try to get is unreasonable.” Ok. I guess that’s their view.

The second issue focused on consent — whether TikTok obtained valid and meaningful consent for tracking, profiling, targeting, and content personalization.

The Commissioners said it did not.

They found that TikTok’s privacy policy and consent flows were too complex, too long, and lacked the up-front clarity needed for meaningful understanding. In particular:

  • Key information about what data was being collected and how it was used wasn’t presented prominently.

  • Important details were buried in linked documents.

  • The privacy policy was not available in French until the investigation began.

  • And users were never clearly told how their biometric information — facial and voice analytics — was used to infer characteristics like age and gender.

Even for adults, the Commissioners said consent wasn’t meaningful because users couldn’t reasonably understand the nature and consequences of TikTok’s data practices.

And for youth 13–17, TikTok mostly relied on the same communications used for adults — no simplified, age-appropriate explanations of how data is collected, used, or shared.

Under the Commissioners’ reasoning, because the data involved is often sensitive — revealing health, sexuality, or political views — TikTok needed express consent. They found the platform failed that standard.

[81] Additionally, while users might reasonably expect TikTok to track them while on the platform, which they can use for “free”, it is our determination that they would not reasonably expect that TikTok collects the wide array of specific data elements outlined earlier in this report or the many ways in which it uses that information to deliver targeted ads and personalize the content they are shown on the platform. Many of these practices are invisible to the user. They take place in the background, via complex technological tools such as computer vision and TikTok’s own machine learning algorithms, as the user engages with the platform. Where the collection or use of personal information falls outside of the reasonable expectations of an individual or what they would reasonably provide voluntarily, then the organization generally cannot rely upon implied or deemed consent.

The Commissioners’ reasoning is generally coherent, but I’m not sure that it directly leads to a requirement for express consent. Consent can be implied where the individual understands what information is being collected and how it will be used, and it makes sense to take into account whether the individual expects the collection and use.  The main issue here is that there was collection and use of information outside the reasonable expectations of the individual. TikTok’s data practices are part of its “secret sauce” that has led to its success. Following the reasoning of the Commissioners … if TikTok had better calibrated the expectations of its users, it could have relied on implied consent. 

The Quebec Commissioner took things even further. Under Quebec’s Private Sector Act, organizations must inform the person concerned before collecting personal information.

The CAI found TikTok failed to highlight key elements of its practices and was using technologies like computer vision and audio analytics to infer users’ demographics and interests without adequate disclosure.

The CAI also found that TikTok allowed features that could locate or profile users without an active opt-in action, violating Quebec’s rule that privacy settings must offer the highest level of privacy by default.

Now here’s where I think the Commissioners overreached.

They’re effectively holding TikTok — and by extension, every global digital platform — to a near-impossible standard.

First, on age verification: to exclude all under-13 users, TikTok would need to collect more information from everyone — things like government-issued ID or facial-age scans. That’s exactly the kind of sensitive biometric data privacy regulators have previously warned against.

So in demanding “better” age assurance, the Commissioners are actually requiring more surveillance and more data collection from all users — adults and teens alike. While it may be framed as “protecting the children”, like so many age-assurance tools it is actually privacy-invasive.

Second, on consent and transparency: privacy regulators have long said privacy policies are too long, too legalistic, and too hard to read. Yet here, they criticize TikTok for not providing enough detail — for not being even longer and more comprehensive.

So which is it? We can’t reasonably expect the average user to read a novel-length privacy policy, yet that’s what these findings effectively require.

And third, the Commissioners’ reasoning conflates complexity with opacity. TikTok’s algorithms and personalization systems are complex — that’s the nature of modern machine learning. Explaining them “in plain language” is a noble goal, but demanding a full technical manual risks burying users in noise.

In my view, this decision reflects a growing tension in privacy regulation: between idealism — the desire for perfect transparency and perfect protection — and pragmatism — the need for solutions that actually enhance user privacy without breaking the internet.

The regulators seem to be demanding a standard of perfection in a messy and complicated world. These laws can be applied reasonably and flexibly.

One final thing to note: The regulators say that information provided to support consent from young people (over the age of 13 or 14) has to be tailored to the cognitive level of those young people. That means the standard has to be subjective, assessed in light of the individual. But the Privacy Commissioner of Canada is arguing in the Supreme Court of Canada against Facebook that consent is entirely objective, based on the fictional “reasonable person” (who is NOT a young person). They should pick a lane.

So, where does this leave us? TikTok has agreed to implement many of the Commissioners’ recommendations — stronger age-assurance tools, better explanations, new teen-friendly materials, and improved consent flows.

But whether these measures will truly protect privacy — or simply demand more data from more users — is a question regulators and platforms alike still need to grapple with.

Saturday, May 17, 2025

Alberta's privacy law unconstitutionally violates freedom of expression -- again -- in a decision that has implications for all Canadian privacy laws

You may have seen some headlines saying that Alberta’s privacy law has been declared unconstitutional. Yup, it’s true that at least part of it was, and here’s why…

This case involves Clearview AI Inc. ("Clearview"), a U.S.-based facial recognition company, challenging an order issued by Alberta’s Information and Privacy Commissioner. The order, based on findings from a joint investigation by Canadian federal and provincial privacy regulators, required Clearview to cease offering services in Alberta, stop collecting, using, and disclosing images and biometric data of Albertans, and delete the relevant data already in its possession.

Clearview sought judicial review of the order on a number of grounds: that it is not subject to the jurisdiction of Alberta and that the Personal Information Protection Act (aka “PIPA”) does not apply to it; that the Commissioner adopted an unreasonable interpretation of the words “publicly available” in PIPA and the Personal Information Protection Act Regulation (the “PIPA Regulation”); and that the Commissioner’s finding that Clearview did not have a reasonable purpose for collecting, using, and disclosing personal information is unreasonable. Clearview further asserted that the Commissioner’s interpretation of PIPA and the PIPA Regulation is unconstitutional, contrary to Charter s 2(b), which guarantees freedom of expression. That last argument is the one we’re going to focus on.

One thing that is really interesting about the case is that the Court did not really have to address the Charter issues. The Commissioner found that Clearview’s purposes were not reasonable, which is necessary for a company to even collect, use or disclose personal information. The Court agreed, and could have just said “not reasonable!” – don’t have to decide the Charter question – just go follow the Commissioner’s order. But the Court delved into the Charter question as well.

It’s also notable that this is the second time that the Alberta statute has been declared to violate the Charter based on “publicly available information” in the Act and the Regulations as being too narrow. That was done by the Supreme Court of Canada in Alberta (Information and Privacy Commissioner) v. United Food and Commercial Workers, Local 401, when the Act was being applied to video recording by a union at a picket line.

The company at issue in this case, Clearview AI, has been the subject of many privacy investigations around the world. They collect facial images from publicly accessible websites, including social media, and use them to create a biometric facial recognition database, marketed primarily to law enforcement. In 2020, privacy commissioners from Alberta, B.C., Quebec, and Canada investigated Clearview’s operations and concluded in a joint report that its practices violated privacy laws.

In December 2021, Alberta’s Commissioner issued an order directing Clearview to cease operations in Alberta, based on violations of PIPA. The Commissioner essentially said that Clearview must do for Alberta what it agreed to do in settling a lawsuit in Illinois (which is notorious for its biometric privacy laws).

Clearview AI then brought an application for judicial review in the Court of King’s Bench, contesting:

  • Jurisdiction of Alberta’s Commissioner,
  • The reasonableness of the Commissioner's interpretation of "publicly available" under PIPA,
  • The constitutionality of PIPA's consent-based restrictions on the collection, use, and disclosure of personal information.

It should be noted that the British Columbia Commissioner issued a similar order, which was upheld by the Supreme Court of British Columbia last year.

In Alberta, as far as the jurisdiction argument went, the Court upheld the Commissioner’s jurisdiction, finding a "real and substantial connection" between Clearview’s activities and Alberta. Clearview had marketed its services in Alberta and its database included images of Albertans. The bar for jurisdiction in Canada is pretty low.

On the statutory interpretation issue, the Court accepted as reasonable the Commissioner’s interpretation that images scraped from the internet, including social media, are not "publicly available" within the meaning of the PIPA Regulation. The Commissioner employed a purposive approach, interpreting the relevant provisions narrowly in light of the quasi-constitutional status of privacy rights.

PIPA, like other privacy regulatory regimes in Canada, provides that consent must be obtained to collect and use “personal information” unless certain exceptions apply. One of the exceptions provided for in PIPA is that the information is “publicly available.” PIPA uses the term “publicly available,” but the definition for those words is found in PIPA Regulation section 7(e). PIPA Regulation s 7(e) provides:

7 ... personal information does not come within the meaning of ... “the information is publicly available” except in the following circumstances: ...

(e) the personal information is contained in a publication, including, but not limited to, a magazine, book or newspaper, whether in printed or electronic form, but only if

    (i) the publication is available to the public, and

    (ii) it is reasonable to assume that the individual that the information is about provided that information.

The private sector privacy laws of Alberta, British Columbia and Federally have similar, but not identical definitions of what is “publicly available” information that does not require consent for its collection and use. There are other categories, but this decision turned on information in a publication. Here are the three different definitions:

In Alberta, it says

the personal information is contained in a publication, including … but not limited to … a magazine, book or newspaper, whether in printed or electronic form, but only if (i) the publication is available to the public, and (ii) it is reasonable to assume that the individual that the information is about provided that information;

In British Columbia, it does not use “including but not limited to”:

personal information that appears in a printed or electronic publication that is available to the public, including a magazine, book or newspaper in printed or electronic form.

Under PIPEDA’s regulation, the analogous provision reads:

personal information that appears in a publication, including a magazine, book or newspaper, in printed or electronic form, that is available to the public, where the individual has provided the information.

Canadian privacy regulators have interpreted “publication” to exclude social media sites like Facebook and LinkedIn, where Clearview harvests much of its information.

Clearview argued that this narrow interpretation under the Alberta statute and regulation violated its freedom of expression rights under section 2(b) of the Charter of Rights and Freedoms, and could not be saved as a reasonable limitation under section 1 of the Charter.

The Court agreed that:

Clearview’s activities (compiling and using data to deliver a service) were expressive. The consent requirement effectively operated as a prohibition on expression where obtaining consent was impractical.

This amounted to a prima facie infringement of s. 2(b) of the Charter.

I should note that the Alberta Commissioner – ridiculously in my view – argued that the Charter wasn’t even engaged. Here’s what the Court said.

[107] The Commissioner submits that if Clearview’s activity is expressive, it should be excluded from constitutional protection because “the method – mass surveillance – conflicts with the underlying s 2(b) values.” Clearview’s activity, according to the Commissioner, conflicts with the purposes of Charter s 2(b) including the pursuit of truth, participation in the community, self-fulfillment, and human flourishing. The Commissioner offered no authority to support the position that expressive activity could be excluded from protection based on a conflict with underlying constitutional values. Short of violence, all expressive activity is protected by Charter s 2(b).

It’s just a dumb argument to make, in my view.

So once a prima facie infringement is made out, the burden shifts to the government to justify it as a reasonable limit, prescribed by law, that can be demonstrably justified in a free and democratic society. This follows something called the Oakes test:

The test involves a two-stage analysis: first, the objective of the law must be pressing and substantial; second, the means used to achieve that objective must be proportionate, which requires

  1. a rational connection between the law and its objective,
  2. minimal impairment of the right or freedom, and
  3. a proportionality between the law’s benefits and its negative effects on rights.

In this case, the Court found that there was a pressing and substantial objective: protecting personal privacy is valid and important. The Court also found that the requirement of consent is logically connected to privacy protection, and thus rationally connected to that objective.

The law failed on the “minimal impairment” part of the analysis. The dual requirement of consent and a reasonable purpose, without an exception for publicly available internet data, was overly broad.

In a nutshell, the court has to consider what expressive activities are captured – how broadly the net is cast – and whether everything that is caught in that net is necessary or rationally connected to the pressing and substantial objective.

The Court summarized Clearview’s argument at paragraph 129:

“Clearview asserts that people who put their personal information on the internet without protection do not have a reasonable expectation of privacy. Where there is no reasonable expectation of privacy, the protection of privacy is not a pressing and substantial state objective.”

The Court noted that the net cast by the Act and the regulations captures not only Clearview’s web-scraping, but also legitimate indexing by beneficial search engines. The Commissioner’s interpretation would exclude search engines from the “publicly available” exception, meaning that they would have to get consent for all collection, use and disclosure of personal information obtained from websites.

Here’s what the Court said at paragraph 132 of the decision:

[132] A difficulty with the PIPA consent requirement for personal information publicly available on the internet is that it applies equally to Clearview’s search technology used to create a facial recognition database and regular search engines that individuals use to access information on the internet. … For the most part, people consider Google’s indexing of images and information to be beneficial. And certainly, Albertans use Google and similar search engines for expressive purposes. But according to my interpretation of PIPA and the PIPA Regulation and the Commissioner’s interpretation of those same instruments, Google and similar search engines cannot scrape the internet in Alberta for the purpose of building and maintaining an index of images of people without consent from every individual whose personal information is collected.

The Court then went on to say at paragraphs 136 and 137:

[136] PIPA and the PIPA Regulation are overbroad because they limit valuable expressive activity like the operation of regular search engines. There is no justification for limiting use of publicly available personal information by regular search engines just as there was no justification to limit use of publicly available personal information for reasonable purposes by the union in UFCW Local 401.
[137] Alberta has a pressing and substantial interest in protecting personal information where individuals post images and information to websites and social media platforms subject to terms of service that preserve a reasonable expectation of limited use. This pressing and substantial interest, however, does not extend to the operation of regular search engines. A reasonable person posting images and information to a website or social media platform subject to terms of service but without using privacy settings expects that such images and information will be indexed and retrieved by internet search engines; indeed, that is sometimes the point of posting images and information to the internet without using privacy settings.

Then, at paragraph 138, the Court concluded that the “publicly available” exception was too narrow because it would capture general search engines, which do not engage the pressing and substantial objective:

[138] The public availability exception to the consent requirement in PIPA and the PIPA Regulation is source-based, not purpose-based. Because it is source-based, it applies to regular internet search engines that scrape images and information from the internet like Clearview even if they use images and information for a different purpose. I find that PIPA and the PIPA Regulation are overbroad because the definition of “publication” in PIPA Regulation s 7(e) is confined to magazines, books, newspapers, and like media. Without a reasonable exception to the consent requirement for personal information made publicly available on the internet without use of privacy settings, internet search service providers are subject to a mandatory consent requirement when they collect, use, and disclose such personal information by indexing and delivering search results. There is no pressing and substantial justification for imposing a consent requirement on regular search engines from collecting, using, and disclosing unprotected personal information on the internet as part of their normal function of providing the valuable service of indexing the internet and providing search results.

The court essentially concluded that it was OK to limit what Clearview is doing, but it is NOT OK to limit what search engines are doing. The law, as written, does not distinguish between the “bad” and the “good”, and as a result, the law did not “minimally impair” this important Charter right.

On the final balancing, the Court concluded that the harm to freedom of expression was not outweighed by the benefit to privacy.

The Court declared that PIPA ss. 12, 17, and 20 and PIPA Regulation s. 7 unjustifiably infringed s. 2(b) of the Charter and could not be saved under s. 1 of the Charter, to the extent that they prohibited the use of publicly available internet data for reasonable purposes.

The Court upheld the Commissioner’s jurisdiction and found her statutory interpretation reasonable. However, the impugned provisions of PIPA and the Regulation were declared unconstitutional insofar as they infringed freedom of expression by unduly restricting the use of publicly available information online.

I fully expect that this decision will be appealed, and I don’t know if the British Columbia decision has been appealed.

In the big picture, though this decision is not binding on the Federal Commissioner, it pretty strongly stands for the proposition that PIPEDA’s publicly available information exception is also unconstitutional. This has implications for “the right to be forgotten” and for collecting data for training AI models, both of which are currently before the federal commissioner.

Monday, May 08, 2023

British Columbia Privacy Commissioner shuts down facial recognition



Recently, the information and privacy commissioner of British Columbia issued a decision that essentially shuts down most use of facial recognition technology in the retail context.

What’s interesting is that the Commissioner undertook this investigation of his own accord. To see how prevalent the use of facial recognition was among the province’s retailers, the OIPC surveyed 13 of the province’s largest retailers (including grocery, clothing, electronics, home goods, and hardware stores): 12 responded that they did not use FRT. The remaining retailer, Canadian Tire Corporation, requested that the OIPC contact its 55 independently owned Associate Dealer stores in the province. In the result, 12 stores reported using FRT. Based on these 12 responses, the Commissioner commenced an investigation under s. 36(1)(a) of the Personal Information Protection Act into four of the locations, scattered across the province.

What’s also interesting is that the stores immediately ceased use of the technology, but the Commissioner determined that doing a full investigation was warranted, so that retailers would be aware of the privacy issues with the use of facial recognition in this context. 

The investigated stores used two different vendors’ systems, but they essentially operated the same way: the systems took pictures or videos of anyone who entered the stores and came within range of the FRT cameras. This included customers, staff, delivery personnel, contractors, and minors who might have entered the store. Using software, the facial coordinates from these images or videos were mapped to create a unique biometric template for each face. Everyone who entered was analyzed this way.

The systems then compared the biometrics of new visitors with those stored in a database of previously identified "Persons of Interest," who were allegedly involved in incidents such as theft, vandalism, harassment, or assault. When a new visitor's biometrics matched an existing record in the database, the FRT system sent an automatic alert to store management and security personnel via email or a mobile device application. The alerts contained the newly captured image or video that triggered the match, along with a copy of the previously collected image from the Persons of Interest database and any relevant comments or details about the prior incidents. According to store managers, these alerts were “advisory” until the match was confirmed in person by management or security personnel.

Store management reported that after a positive match was verified, the nature of the prior incident allegedly involving the individual helped determine a course of action. If a prior incident included violence, management or security staff would escort the individual from the store. If the prior incident involved theft, management may have chosen to surveil or remove the person in question.
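Purely to illustrate the match-and-alert structure described above — and not the vendors’ actual systems, whose internals are not public — here is a minimal sketch in Python. The template representation, the cosine-similarity metric, the threshold, and every name in it are invented for the example:

```python
import math

# Hypothetical, simplified sketch of the flow described in the findings:
# each face is reduced to a numeric "biometric template", every new visitor
# is compared against a "Persons of Interest" database, and a match produces
# an advisory alert for in-person confirmation by staff.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def check_visitor(template, poi_db, threshold=0.95):
    """Return an advisory alert if the visitor matches a stored record."""
    for record in poi_db:
        if cosine_similarity(template, record["template"]) >= threshold:
            return {
                "advisory": True,  # per the stores, alerts were "advisory"
                "prior_incident": record["incident"],
            }
    return None  # no match: but note the visitor was still analyzed

poi_db = [{"template": [0.9, 0.1, 0.4], "incident": "theft"}]

assert check_visitor([0.9, 0.1, 0.4], poi_db)["prior_incident"] == "theft"
assert check_visitor([0.0, 1.0, 0.0], poi_db) is None
```

The point of the sketch is the structure the Commissioner focused on: every single visitor is templated and compared, whether or not they ever appear in the database.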

The legal questions posed by the Commissioner were (1) whether consent was required under PIPA for the collection and use of images for this purpose, (2) whether the stores provided notification and obtained the necessary consent (through signage or otherwise) and – most importantly – (3) whether this collection and use was for an “appropriate purpose” under ss. 11 and 14 of PIPA.

The first question was easy to answer: Yes, consent is required in this context. PIPA, like PIPEDA, requires organizations to obtain consent, either explicitly or implicitly, before collecting, using, or disclosing personal information unless a specific exception applies. No such exceptions applied in this case. Therefore, the Commissioner concluded it was incumbent on the stores to show that individuals gave consent for the collection of their personal information. 

How would you get that consent? Well, the stores had signage at the entrances. Clear signage is usually sufficient for the use of surveillance cameras, but the question was whether it was sufficient for this use.

Store number 1 had a sign that stated, in part: “these premises are monitored by video surveillance that may include the use of electronic and/or biometric surveillance technologies.”

The Commissioner said this was inadequate. The notice did not state the purposes for the collection of personal information. Also, stating that biometric surveillance “may” be in use did not reflect that the store continuously employed the technology. The Commissioner said the average person cannot reasonably be expected to understand how their information may be handled by “biometric surveillance technologies,” let alone the implications and risks of this new technology. Consent requires that an individual understands what they are agreeing to – and the posted notification failed to adequately alert the public in this case, according to the Commissioner. This store failed to meet notification requirements under PIPA.

The second store had a notice that stated, in part: “facial recognition technology is being used on these premises to protect our customers and our business.” 

This one was also not satisfactory to the Commissioner. The purpose, as set out, was so broad that the statement would convey no specific meaning to the average person. Furthermore, the notice did not explain what facial recognition technology entails or the nature of the personal information collected. One cannot reasonably assume that members of the public understand what FRT is, nor its privacy implications, according to the Commissioner.

Stores 3 and 4 had better notices, but they still didn’t satisfy the Commissioner. Their notices stated: “video surveillance cameras and FRT (also known as biometrics) are used on these premises for the protection of our customers and staff. These technologies are also used to support asset protection, loss prevention and to prevent persons of interest from conducting further crime. The images are for internal use only, except as required by law or as part of a legal investigation.” 

It had more detail, but was not well written. It did not say what “FRT” is. The Commissioner noted that the abbreviation is not yet well-known or widely understood. Using the full phrase “facial recognition technology” along with a basic explanation of its workings would have provided a more accurate description of the stores’ data-collection activities. Even so, the Commissioner said that North American society is not yet at the point where it is reasonable to assume that the majority of the population understands what personal information FRT collects, or creates, as well as the technology’s privacy implications. All of this would have to be spelled out. 

While you may be able to rely on implied consent for the use of plain old fashioned surveillance cameras, the Commissioner concluded that you cannot for facial recognition technology, at least in this context. 

The Commissioner said facial biometrics are a highly sensitive, unique, and unchangeable form of personal information. Collecting, using, and sharing this information goes beyond what people would reasonably expect when entering a retail store, and using FRT creates a significant and lasting risk of harm. The Commissioner said the distinctiveness and permanence of this biometric data can make it an attractive target for misuse, potentially becoming a tool to compromise an individual's identity. In the wrong hands, the Commissioner wrote, this information can lead to identity theft, financial loss, and other severe consequences. (I am not entirely sure how…)

As a result, the four stores were required to obtain explicit consent from customers before collecting their facial biometrics. However, they did not make any attempts, either verbally or in writing, to obtain such consent.

So the notices were not adequate and the stores didn’t get the right kind of consent. But the last nail in the coffin for this use of biometrics was the Commissioner’s conclusion about whether the use of facial recognition technology for these purposes is reasonable. 

Reasonableness is determined by looking at the amount of personal information collected, the sensitivity of the information, the likelihood of the measure being effective, and whether less intrusive alternatives had been attempted.

With respect to the amount of personal information collected, it was vast. The Commissioner said a large quantity of personal information was collected from various sources, including customers, staff, contractors, and other visitors. The stores reported that their establishments were visited by hundreds of individuals of all ages, including minors, every day, so during a single month the FRT systems captured images of thousands of people who were simply shopping and not engaging in any harmful activities. The sheer volume of information collected suggests that the collection was unreasonable.

You won’t be surprised that the Commissioner concluded that the personal information at issue was super-duper sensitive. 

With respect to the likelihood of being effective, the stores didn’t really have any system in place to measure it. The Commissioner concluded it really wasn’t that effective. 

The Commissioner wrote that before implementing new technology that collects personal information, organizations should establish a reliable method to measure the technology's effectiveness. This typically involves comparing relevant metrics before and after the technology's implementation. 

However, in this case, the stores did not provide any systematic evidence of measuring their FRT system's effectiveness. Instead, they only gave anecdotal evidence of incidents before and after installation. Without a clear way to measure the technology's effectiveness, it is challenging to analyze this factor, particularly when collecting highly sensitive personal information.
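The kind of before-and-after comparison the Commissioner describes could be as simple as the sketch below. The figures are entirely hypothetical; the report contains no such numbers, which is precisely the problem the Commissioner identified.

```python
def effectiveness(incidents_before: int, incidents_after: int) -> float:
    """Percentage reduction in incidents after deploying a technology.
    Positive means fewer incidents; negative means more."""
    if incidents_before == 0:
        raise ValueError("need a non-zero baseline to compare against")
    return 100.0 * (incidents_before - incidents_after) / incidents_before


# Hypothetical monthly incident counts -- no such figures appear in the report.
print(effectiveness(40, 30))  # prints 25.0, i.e. a 25% reduction
```

Without a baseline like `incidents_before`, anecdotal impressions are all that remain, and that is all the stores could offer.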

The accuracy of FRT technology is also a related issue. Systems such as these have been reported widely to falsely match facial biometrics of people of colour and women. 

The store managers acknowledged that the alerts could be inaccurate and relied on staff to compare database images to a visual observation of the individual. This manual check by staff suggests that the FRT system may not be effective. False identification can have harmful consequences when innocent shoppers are followed or confronted based on an inaccurate match.

Besides the system's accuracy, its effectiveness can also be judged against the existing methods used by the stores to identify potential suspects. The store managers stated that their security guards and managers typically knew the "bad actors" and could recognize them without FRT alerts. The persons of interest were often professional thieves who repeatedly returned to the store.

Moreover, there is little evidence that FRT enhanced customer and employee safety. Whether a person of interest was identified by FRT or by the visual recognition of an employee, the stores' next steps were the same. These involved deciding whether to observe the suspected person or interact with them directly, including escorting them from the premises. In either case, store managers rarely reported contacting the police for assistance.

As for whether less intrusive alternatives had been attempted, the less intrusive measures were what the stores were doing before. The Commissioner concluded that the use of FRT didn’t add much to solving the stores’ problems, but collected a completely disproportionate amount of sensitive personal information. The less intrusive means – without biometrics – largely did the trick. 

In the end, the Commissioner made three main recommendations. 

The first was that the stores should build and maintain robust privacy management programs that guide internal practices and contracted services – presumably so they wouldn’t implement practices such as these that are offside the legislation. 

The report also makes two recommendations for the BC government. First, it should amend the Security Services Act or similar enactments to explicitly regulate the sale or installation of technologies that capture biometric information.

Second, it should amend PIPA to create additional obligations for organizations that collect, use, or disclose biometric information, including requiring notification to the OIPC. This would be similar to what’s in place in Quebec, where biometric databases need to be disclosed to the province’s privacy commissioner. 

I think, for all intents and purposes, this shuts down the use of facial recognition technology in the retail context, where it is being used to identify “bad guys”. 


Wednesday, December 11, 2019

Privacy Commissioner again upends the consensus on transfers for processing in Aggregate IQ investigation

You may recall earlier this year when the Canadian Privacy Commissioner completely revised the previous consensus by concluding that a "transfer for processing" was a disclosure that requires consent, as does any cross-border transfer of personal information. The Canadian privacy and business community was shocked by this reversal, and the Commissioner eventually reversed his position, returning to the status quo.

Once again, the OPC has upended the consensus on using contractors to process information on behalf of a client.

The Privacy Commissioner of Canada and the Information and Privacy Commissioner of British Columbia together released their reports of findings into Aggregate IQ on November 24, 2019, following their joint investigation of the company. You may recognize the name of the company, as it was implicated in the many international Cambridge Analytica investigations. It was a contractor to the now infamous company that was implicated in a range of mischief related to the Brexit campaign and the US 2016 presidential election.

As a Canadian company, it should not be surprising that Aggregate IQ would come under scrutiny in Canada. What is surprising is that the result of the investigation essentially turns a whole lot of Canadian thinking about privacy and contracting out of services on its head, and also seems to ignore binding precedent from the Federal Court of Canada.

Aggregate IQ is essentially a data processing company that works on behalf of political parties and political campaigns. They take data from the campaigns, sort it, supplement it and sometimes use it on behalf of their clients. The key is that they do this work on behalf of clients.

Superficially, it may make sense to conclude that a Canadian company is subject to Canadian privacy laws. But the working assumption has always been that companies that collect, use and disclose personal information on behalf of clients are subject to the laws that govern their clients and their clients' activities. Those obligations "trickle down" through the chain of contracts and sub-contracts. What's shocking is that the OPC has concluded that compliance with those laws is not enough. Processors in Canada, they say, have to also comply with Canadian laws even when they are incompatible with the laws that regulate the client.

For example, Aggregate IQ did work on a mayoral campaign in Newfoundland. No privacy law applies to a mayoral campaign in Newfoundland, but nevertheless the OPC says that Aggregate IQ needed consent for their use of the information on behalf of the candidate. The campaign did not need consent, but the OPC concluded that by using a contractor, the campaign is subject to more laws and additional burdens than the government of Newfoundland has concluded are necessary. Similarly, the OPC says that Aggregate IQ needed consent under PIPEDA for what they were doing on behalf of US and UK campaigns, even though the activity is largely unregulated in the US and consent is not required in the UK (using legitimate bases for processing under the GDPR). Setting aside whether the campaigns were actually complying with their local laws, the conclusion from the OPC is that additional Canadian requirements will be overlaid on top of the laws that should actually matter and actually have a close connection to what's really going on.

Until this point, the consensus has generally been that when a contractor is handling data for a customer, the obligations that lie on the customer flow down to the contractor, similar to the "controller" and "processor" scheme in the GDPR.

Canadian privacy law applies to the collection, use and disclosure of personal information in the course of commercial activity. And you'd think that Aggregate IQ is engaged in commercial activity so PIPEDA would apply. But that's not the case. If a contractor is collecting, using or disclosing personal information on behalf of a client, you have to look at that client's purposes. The Canadian Federal Court clearly concluded this in State Farm v Privacy Commissioner.* In that case, the OPC asserted its jurisdiction over an insurance company because they were clearly commercial, even when acting on behalf of an individual defendant in a car accident lawsuit. The Federal Court firmly disagreed. One has to look at what's really going on. State Farm was not handling personal information on its own behalf, but on behalf of its insured who was not subject to any privacy regulation for that activity. The same principle applies here. If Newfoundland has decided not to regulate how mayoral candidates collect and use personal information, it makes no difference if they use that information themselves or hire a contractor to do that.

This upends what has been understood to be the way things work. And it has worked.

And it is really bad public policy. It puts Canadian companies at a significant disadvantage in very competitive industries. While many people say that GDPR is much more privacy protective, there are many circumstances where personal data can be processed without consent, but based on a legitimate interest. A company or campaign in Europe would be much better off hiring a European company if hiring a Canadian company meant that the legitimate interest is disregarded and a Canadian consent requirement were superimposed. The same would apply to a Canadian campaign: the campaign that complies with whatever laws apply to it directly is suddenly subject to additional rules if it hires a contractor to carry out what would otherwise be a compliant and lawful activity.

It is also really bad public policy because if you take it to the logical conclusion, it means that Canadian governments cannot hire contractors to process or use personal information on their behalf. All Canadian public sector privacy laws are based on "legitimate purposes", so consent is not required where the collection, use or disclosure is lawfully authorized and legitimate. But this finding by the OPC would say that the contractor has to get consent under PIPEDA for whatever they do for their public sector client. This is not workable and I hope is an unintended consequence.

Beyond that, I'm not sure what to say. It appears that Aggregate IQ has agreed to follow the Commissioner's recommendations, so this will not be given the chance to be corrected by the Federal Court.

How this will play out in future cases remains to be seen.

* I should note that I was counsel to State Farm in that case.