Monday, May 08, 2023

British Columbia Privacy Commissioner shuts down facial recognition



Recently, the information and privacy commissioner of British Columbia issued a decision that essentially shuts down most use of facial recognition technology in the retail context.

What’s interesting is that the Commissioner undertook this investigation on his own initiative. To see how prevalent the use of facial recognition was among the province’s retailers, the OIPC surveyed 13 of the province’s largest retailers (including grocery, clothing, electronics, home goods, and hardware stores): 12 responded that they did not use FRT. The remaining retailer, Canadian Tire Corporation, requested that the OIPC contact its 55 independently owned Associate Dealer stores in the province. In the result, 12 stores reported using FRT. Based on these 12 responses, the Commissioner commenced an investigation under s. 36(1)(a) of the Personal Information Protection Act into four of the locations, scattered across the province.

What’s also interesting is that the stores immediately ceased use of the technology, but the Commissioner determined that a full investigation was still warranted, so that retailers would be aware of the privacy issues raised by the use of facial recognition in this context.

The investigated stores used two different vendors’ systems, but they essentially operated the same way: the systems took pictures or videos of anyone who entered the stores as they came within range of the FRT cameras. This included customers, staff, delivery personnel, contractors, and minors who might have entered the store. Using software, the facial coordinates from these images or videos were mapped to create a unique biometric template for each face. Everyone who entered was analyzed this way.

The systems then compared the biometrics of new visitors with those stored in a database of previously identified "Persons of Interest," who were allegedly involved in incidents such as theft, vandalism, harassment, or assault. When a new visitor's biometrics matched an existing record in the database, the FRT system sent an automatic alert to store management and security personnel via email or a mobile device application. The alerts contained the newly captured image or video that triggered the match, along with a copy of the previously collected image from the Persons of Interest database and any relevant comments or details about the prior incidents. According to store managers, these alerts were “advisory” until the match was confirmed in person by management or security personnel.
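For the technically inclined, here is a minimal sketch in Python of the kind of matching loop the report describes. To be clear, this is my own illustration – the embedding function, the similarity threshold, and all of the names are invented for the example, not taken from either vendor’s actual system:

```python
# Illustrative only: a generic face-matching loop of the kind described in
# the report. Everything here (names, threshold, embedding) is invented.
from dataclasses import dataclass

import numpy as np


@dataclass
class PersonOfInterest:
    poi_id: str
    template: np.ndarray  # biometric template from a prior incident
    incident_notes: str   # e.g. "theft, escorted from store"


def make_template(face_image: np.ndarray) -> np.ndarray:
    # Stand-in for a vendor's proprietary model that maps facial
    # coordinates to a fixed-length biometric template. Here we just
    # flatten and L2-normalize so the sketch runs end to end.
    vec = face_image.astype(float).ravel()
    return vec / (np.linalg.norm(vec) + 1e-9)


def check_visitor(face_image, database, threshold=0.92):
    # Every face in camera range is templated and compared -- customers,
    # staff, contractors and minors alike. Returns the best match above
    # the threshold, or None. A match only triggers an "advisory" alert;
    # staff were expected to confirm it in person.
    template = make_template(face_image)
    best, best_score = None, threshold
    for poi in database:
        score = float(np.dot(template, poi.template))  # cosine similarity
        if score > best_score:
            best, best_score = poi, score
    return best
```

The point of the sketch is structural: every single face that comes within camera range is reduced to a biometric template and compared against the database, whether or not that person has ever done anything wrong.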

Store management reported that after a positive match was verified, the nature of the prior incident allegedly involving the individual helped determine a course of action. If a prior incident included violence, management or security staff would escort the individual from the store. If the prior incident involved theft, management might choose to surveil the person in question or remove them.

The legal questions posed by the Commissioner were (1) whether consent was required under PIPA for the collection and use of images for this purpose, (2) whether the stores provided notification and obtained the necessary consent (through signage or otherwise) and – most importantly – (3) whether this collection and use were for an “appropriate purpose” under ss. 11 and 14 of PIPA.

The first question was easy to answer: Yes, consent is required in this context. PIPA, like PIPEDA, requires organizations to obtain consent, either explicitly or implicitly, before collecting, using, or disclosing personal information unless a specific exception applies. No such exceptions applied in this case. Therefore, the Commissioner concluded it was incumbent on the stores to show that individuals gave consent for the collection of their personal information. 

How would you get that consent? Well, the stores had signage at the entrances. Clear signage is usually sufficient for the use of ordinary surveillance cameras, but the question was whether these signs would be sufficient for this use.

Store number 1 had a sign that stated, in part: “these premises are monitored by video surveillance that may include the use of electronic and/or biometric surveillance technologies.”

The Commissioner said this was inadequate. The notice did not state the purposes for the collection of personal information. Also, stating that biometric surveillance “may” be in use did not reflect that the store continuously employed the technology. The Commissioner said the average person cannot reasonably be expected to understand how their information may be handled by “biometric surveillance technologies,” let alone the implications and risks of this new technology. Consent requires that an individual understands what they are agreeing to – and the posted notification failed to adequately alert the public in this case, according to the Commissioner. This store failed to meet notification requirements under PIPA.

The second store had a notice that stated, in part: “facial recognition technology is being used on these premises to protect our customers and our business.” 

This one was also not satisfactory to the Commissioner. The purpose, as set out, is so broad that the statement would convey no specific meaning to the average person. Furthermore, the notice does not explain what facial recognition technology entails or the nature of the personal information collected. One cannot reasonably assume that members of the public understand what FRT is, nor its privacy implications, according to the Commissioner.

Stores 3 and 4 had better notices, but they still didn’t satisfy the Commissioner. Their notices stated: “video surveillance cameras and FRT (also known as biometrics) are used on these premises for the protection of our customers and staff. These technologies are also used to support asset protection, loss prevention and to prevent persons of interest from conducting further crime. The images are for internal use only, except as required by law or as part of a legal investigation.” 

This notice had more detail, but it was still not that well written. It does not say what “FRT” is, and the Commissioner noted that the abbreviation is not yet well known or widely understood. Using the full phrase “facial recognition technology” along with a basic explanation of its workings would have provided a more accurate description of the stores’ data-collection activities. Even so, the Commissioner said that North American society is not yet at the point where it is reasonable to assume that the majority of the population understands what personal information FRT collects, or creates, as well as the technology’s privacy implications. All of this would have to be spelled out.

While you may be able to rely on implied consent for the use of plain old-fashioned surveillance cameras, the Commissioner concluded that you cannot for facial recognition technology, at least in this context.

The Commissioner said facial biometrics are a highly sensitive, unique, and unchangeable form of personal information. Collecting, using, and sharing this information goes beyond what people would reasonably expect when entering a retail store, and using FRT creates a significant and lasting risk of harm. The Commissioner said the distinctiveness and permanence of this biometric data can make it an attractive target for misuse, potentially becoming a tool to compromise an individual's identity. In the wrong hands, the Commissioner wrote, this information can lead to identity theft, financial loss, and other severe consequences. (I am not entirely sure how…)

As a result, the four stores were required to obtain explicit consent from customers before collecting their facial biometrics. However, they did not make any attempts, either verbally or in writing, to obtain such consent.

So the notices were not adequate and the stores didn’t get the right kind of consent. But the last nail in the coffin for this use of biometrics was the Commissioner’s conclusion about whether the use of facial recognition technology for these purposes is reasonable. 

Reasonableness is determined by looking at the amount of personal information collected, the sensitivity of the information, the likelihood that the measure will be effective, and whether less intrusive alternatives had been attempted.

With respect to the amount of personal information collected, it was vast. The Commissioner said a large quantity of personal information was collected from various sources, including customers, staff, contractors, and other visitors. The stores reported that their establishments were visited by hundreds of individuals of all ages, including minors, every day, so during a single month the FRT systems captured images of thousands of people who were simply shopping and not engaging in any harmful activities. The sheer volume of information collected suggests that the collection was unreasonable.

You won’t be surprised that the Commissioner concluded that the personal information at issue was super-duper sensitive. 

With respect to the likelihood of being effective, the stores didn’t really have any system in place to measure it. The Commissioner concluded it really wasn’t that effective.

The Commissioner wrote that before implementing new technology that collects personal information, organizations should establish a reliable method to measure the technology's effectiveness. This typically involves comparing relevant metrics before and after the technology's implementation. 

However, in this case, the stores did not provide any systematic evidence of measuring their FRT system's effectiveness. Instead, they only gave anecdotal evidence of incidents before and after installation. Without a clear way to measure the technology's effectiveness, it is challenging to analyze this factor, particularly when collecting highly sensitive personal information.
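To illustrate what the Commissioner is getting at, the simplest version of that before-and-after comparison might look like this. The numbers here are completely invented:

```python
# Invented numbers: a toy before/after comparison of the kind the
# Commissioner says should precede deploying an intrusive technology.
incidents_before = [14, 11, 16, 12, 13, 15]  # monthly incidents, pre-FRT
incidents_after = [12, 13, 11, 14, 12, 13]   # monthly incidents, post-FRT

mean_before = sum(incidents_before) / len(incidents_before)
mean_after = sum(incidents_after) / len(incidents_after)

print(f"mean incidents/month before FRT: {mean_before:.1f}")
print(f"mean incidents/month after FRT:  {mean_after:.1f}")
print(f"relative change: {(mean_after - mean_before) / mean_before:+.1%}")
```

Even a crude baseline like this would have given the Commissioner something to weigh against the volume and sensitivity of the information being collected; the stores had nothing of the sort.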

The accuracy of FRT is also a related issue. Systems such as these have been widely reported to falsely match the facial biometrics of people of colour and women.

The store managers acknowledged that the alerts could be inaccurate and relied on staff to compare database images to a visual observation of the individual. This manual check by staff suggests that the FRT system may not be effective. False identification can have harmful consequences when innocent shoppers are followed or confronted based on an inaccurate match.

Besides the system's accuracy, its effectiveness can also be judged against the existing methods used by the stores to identify potential suspects. The store managers stated that their security guards and managers typically knew the "bad actors" and could recognize them without FRT alerts. The persons of interest were often professional thieves who repeatedly returned to the store.

Moreover, there is little evidence that FRT enhanced customer and employee safety. Whether a person of interest was identified by FRT or by the visual recognition of an employee, the stores' next steps were the same. These involved deciding whether to observe the suspected person or interact with them directly, including escorting them from the premises. In either case, store managers rarely reported contacting the police for assistance.

As for whether less intrusive alternatives had been attempted, the less intrusive measures were what the stores were doing before. The Commissioner concluded that the use of FRT didn’t add much to solving the stores’ problems, but collected a completely disproportionate amount of sensitive personal information. The less intrusive means – without biometrics – largely did the trick.

In the end, the Commissioner made three main recommendations. 

The first was that the stores should build and maintain robust privacy management programs that guide internal practices and contracted services – presumably so they wouldn’t implement practices such as these that are offside the legislation.

The report also makes two recommendations for the BC government. First, the government should amend the Security Services Act or similar enactments to explicitly regulate the sale or installation of technologies that capture biometric information.

Finally, the BC government should amend PIPA to create additional obligations for organizations that collect, use, or disclose biometric information, including requiring notification to the OIPC. This would be similar to what’s in place in Quebec, where biometric databases need to be disclosed to the province’s privacy commissioner.

I think, for all intents and purposes, this shuts down the use of facial recognition technology in the retail context, where it is being used to identify “bad guys”. 


Sunday, April 16, 2023

Privacy Commissioner of Canada Loses in Federal Court against Facebook


Just this past week, the Office of the Privacy Commissioner of Canada was on the receiving end of a Federal Court decision that I would characterize as more than a little embarrassing for the Commissioner.

In a nutshell, the Commissioner took Facebook to court over the Cambridge Analytica incident and lost, big time.

You may recall from 2019, when the Privacy Commissioner of Canada and the Information and Privacy Commissioner of British Columbia released, with as much fanfare as possible, the result of their joint investigation into Facebook related to the Cambridge Analytica incident.

Both of the Commissioners concluded, at that time, that Facebook had violated the federal and British Columbia privacy laws, principally related to transparency and consent.

Because Facebook was not prepared to accept that finding, the Privacy Commissioner of Canada commenced an application in the Federal Court to have the Court make the same determination and issue a whole range of orders against the social media company.

The hearing of that application took place a short time ago, and the Federal Court released its decision this past week. The Court concluded that the Privacy Commissioner did not prove that Facebook violated our federal privacy law in connection with the Cambridge Analytica incident, and it made a few other interesting findings and observations.

Just a little bit of additional procedural information: under our current privacy law, the Privacy Commissioner of Canada does not have the ability to issue any orders or to levy any penalties. What can happen after the Commissioner has released his report of findings is that the complainant, or the Commissioner with the complainant’s okay, can commence an application in the Federal Court of Canada. This is what is called a de novo proceeding.

The Privacy Commissioner’s finding below can be considered as part of the record, but it is not a decision being appealed from. Instead, the applicant – in this case, the Privacy Commissioner – has the burden of proving to a legal standard that the respondent has violated the federal privacy legislation.

This has to be done with actual evidence, which is where the Privacy Commissioner fell significantly short in the Facebook case.

It has to be remembered that the events being investigated took place almost ten years ago, and the Facebook platform is substantially different now compared to what it looked like then. If you were a Facebook user at that time, you probably remember a whole bunch of apps running on the Facebook platform. You were probably annoyed by friends who were playing Farmville and sending you invitations and updates. Well, these don’t exist anymore. Facebook is largely no longer a platform on which third-party apps run.

In a nutshell, at the time, one of the app developers that used the Facebook platform was a researcher associated with a company called Cambridge Analytica. They had an app running on the platform called “this is your digital life”. It operated for some time in violation of Facebook's terms of use for app developers, hoovering up significant amounts of personal information and then selling and/or using that information for, among other things, profiling and advertising targeting. Here’s how the court described it:

[36] In November 2013, Cambridge professor Dr. Aleksandr Kogan launched an app on the Facebook Platform, the TYDL App. The TYDL App was presented to users as a sort of personality quiz. Prior to launching the TYDL App, Dr. Kogan agreed to Facebook’s Platform Policy and Terms of Service. Through Platform, Dr. Kogan could access the Facebook profile information of every user who installed the TYDL App and agreed to its privacy policy. This included access to information about installing users’ Facebook friends. ...

[38] Media reports in December 2015 revealed that Dr. Kogan (and his firm, Global Science Research Ltd) had sold Facebook user information to Cambridge Analytica and a related entity, SCL Elections Ltd. The reporting claimed that Facebook user data had been used to help SCL’s clients target political messaging to potential voters in the then upcoming US presidential election primaries.

One thing to note is that in 2008-2009, the OPC investigated Facebook and the Granular Data Permissions model that it was employing on the platform. Facebook said that the OPC sanctioned and expressly approved its GDP process after testing it at the conclusion of that investigation. Facebook argued that the Commissioner should not be able to now say that a model it approved is inadequate. The Court didn’t have to go there.

In this application, the Privacy Commissioner alleged that Facebook failed to get adequate consent from users who used apps on Facebook’s platform, and failed to safeguard personal information that was disclosed to third party app developers. The Commissioner failed on both, but for different reasons. 

In the court process, both the Commissioner and Facebook had the opportunity to put their best evidence and best arguments forward. Facebook was able to talk about their policies, their practices with respect to third party developers, and the sorts of educational material that they provided as part of their privacy program. 

Ultimately, the court concluded that the Commissioner had failed to put forward strong evidence to lead to the conclusion that Facebook had not obtained adequate user consent for the collection, use and disclosure of their personal information when using the app in question, or apps more generally.

It’s interesting to me that the Court notes that the Commissioner did not provide any evidence of what Facebook could have done better, in their view, nor did it offer any expert evidence about what would have been reasonable to do in the circumstances. This is from paragraph 71 of the decision:

[71] In assessing these competing characterizations, aside from evidence consisting of photographs of the relevant webpages from Facebook’s affiant, the Court finds itself in an evidentiary vacuum. There is no expert evidence as to what Facebook could feasibly do differently, nor is there any subjective evidence from Facebook users about their expectations of privacy or evidence that any user did not appreciate the privacy issues at stake when using  Facebook. While such evidence may not be strictly necessary, it would have certainly enabled the Court to better assess the reasonableness of meaningful consent in an area where the standard for reasonableness and user expectations may be especially context dependent and are ever evolving.

The Court also seems to be saying that the Commissioner was trying to suck and blow at the same time:

[67] Overall, the Commissioner characterizes Facebook’s privacy measures as opaque and full of deliberate obfuscations, creating an “illusion of control”, containing reassuring statements of Facebook’s commitments to privacy and pictures of padlocks and studious dinosaurs that communicate a false sense of security to users navigating the relevant policies and educational material. On one hand, the Commissioner criticizes Facebook’s resources for being overly complex and full of legalese, rendering those resources as being unreasonable in providing meaningful consent, yet in some instances, the Commissioner criticizes the resources for being overly simplistic and not saying enough.

The judge then found that the Commissioner was essentially asking the court to make a whole bunch of negative inferences in the absence of evidence – evidence which the Commissioner did not appear to have tried to obtain. Here’s the court at paragraph 72 of the decision:

[72] Nor has the Commissioner used the broad powers under section 12.1 of PIPEDA to compel evidence from Facebook. Counsel for the Commissioner explained that they did not use the section 12.1 powers because Facebook would not have complied or would have had nothing to offer. That may be; however, ultimately it is the Commissioner’s burden to establish a breach of PIPEDA on the basis of evidence, not speculation and inferences derived from a paucity of material facts. If Facebook were to refuse disclosure contrary to what is required under PIPEDA, it would have been open to the Commissioner to contest that refusal.

The judge then goes on to say at paragraph 77:

[77] In the absence of evidence, the Commissioner’s submissions are replete with requests for the Court to draw “inferences”, many of which are unsupported in law or by the record. For instance, the Court was asked to draw an adverse inference from an uncontested claim of privilege over certain documents by Facebook’s affiant. 

I think there are a couple of very important things to note here. The first is that the Privacy Commissioner’s report of findings, which was released with great fanfare and which concluded that Facebook had violated Canada’s federal privacy laws, was essentially based on inadequate evidence. The court found the record sadly lacking – not enough to show that a violation was more likely than not – but apparently this evidentiary record was entirely satisfactory for the purposes of the Commissioner’s investigation and report of findings.

The second thing to note here is that the court application was essentially the Privacy Commissioner’s second kick at the can. More evidence could have been obtained for this hearing had they actually exercised their authorities under the legislation or under the rules of court. Despite having that opportunity, they came to court with an inadequate evidentiary record.

The second main violation that was alleged by the Privacy Commissioner was that Facebook had failed to adequately safeguard user information that was disclosed to third party app developers. Essentially, the Privacy Commissioner's argument is that Facebook continues to have an obligation to safeguard all of the information even after a user has chosen to disclose that information to a third party app developer. Facebook took the view that the safeguarding obligation transferred to the app developer when the user initiated the disclosure to that app developer. 

This is consistent with the scheme of the Act, in my view, because the responsibility to safeguard information and to limit its use falls on the organization that actually controls that information. Once it is given to an app developer for this purpose, it is under the control of that app developer and the obligation to safeguard it would rest with them.

The Court summarized the Commissioner’s argument on this point, and gave its answer, at paragraphs 85 and 86:

[85] The Commissioner counters that Facebook maintains control over the information disclosed to third-party applications because it holds a contractual right to request information from apps. The Commissioner maintains that Facebook’s safeguards were inadequate.

[86] I agree with Facebook; its safeguarding obligations end once information is disclosed to third-party applications. The Court of Appeal in Englander observed that the safeguarding principle imposed obligations on organizations with respect to their “internal handling” of information once in their “possession” (para 41). 

Very importantly here, though, is the statement from the court that companies can expect good faith and honesty in contractual agreements:

[91] In any event, even if the safeguarding obligations do apply to Facebook after it has disclosed information to third-party applications, there is insufficient evidence to conclude whether Facebook’s contractual agreements and enforcement policies constitute adequate safeguards. Commercial parties reasonably expect honesty and good faith in contractual dealings. For the same reasons as those with respect to meaningful consent, the Commissioner has failed to discharge their burden to show that it was inadequate for Facebook to rely on good faith and honest execution of its contractual agreements with third-party app developers.

This is the conclusion that the court reached. So, in the result, the court did not conclude that Facebook had violated PIPEDA in any way in association with the Cambridge Analytica incident.

Another important observation, in my view, is that the Privacy Commissioner of Canada did not actually investigate Cambridge Analytica itself, but focused all of its regulatory attention on Facebook. It is common ground that Cambridge Analytica and its principal violated Facebook’s policies and developer agreements by taking user data off the platform and using it for secondary, unauthorized purposes. But the OPC did not investigate Cambridge Analytica. They went after Facebook.

So what are the takeaways from this?

I think certain folks at the Office of the Privacy Commissioner should take an opportunity to think deeply about their approach to this entire thing. They should not be issuing flashy press releases and lobbing accusations in the way that they did without evidence that could support the allegations in a court of law. 

I also think we need to think carefully about what this says for privacy law reform in Canada. The Commissioner at the time used his finding as an example of why he should be given order-making powers and the power to impose penalties. His office even issued a handy-dandy table in which it concluded:

Because “Facebook disputed the validity of the findings and refused to implement the recommendations,” this should lead to the result that:

“The Office of the Privacy Commissioner of Canada’s interpretation of the law should be binding on organizations. 

To ensure effective enforcement, the Commissioner should be empowered to make orders and impose fines for non-compliance with the law.”

Almost certainly, if he’d had those powers, he would have imposed orders and fines on Facebook, based on what the Court concluded was inadequate evidence. The Court even disagreed with the Commissioner’s interpretation of the law. 

If we are going to have fines and orders under PIPEDA’s replacement, which seems inevitable, the OPC should NOT be in a position to impose them. The OPC should be the prosecutor, recommending any such fines or orders to a tribunal that will not show any deference to the Commissioner. 

And finally, this offers some certainty that once information has been disclosed to a third party, it is the third party’s legal obligation to safeguard it. The OPC clearly thought that the obligation remained with the company where it originated, but that view was not shared by the court.

After the OPC filed its application in court, Facebook filed a judicial review application to have the whole thing thrown out. Facebook was not successful on that, mainly because they filed late and were not entitled to an extension. Regardless, there are some very interesting things in that decision, which I’ll discuss in an upcoming episode.


Sunday, December 18, 2022

Where to find me ...

Given the current dumpster fire at Twitter and the recent ban on outbound links to other social platforms, I thought I'd do a post on where to find me:

Monday, August 15, 2022

Can someone legitimately try to stop you from taking photos or recording video in a public place? There are some laws to know about, but the answer for Canada is that you generally have the right to take photos or record video in a public place, and nobody can lawfully stop you from doing so.

How it came up

This past week on Twitter, I saw a couple of discussions about people taking photos in public places, either being called out about it online or being told in person to cut it out.

In the first example, Canadian journalist James MacLeod took it upon himself to get a radar speed gun and document people speeding through a park. He’d take photos of drivers and their speed, and post them on Twitter. One Twitter user said doing so seemed “suspect”.

In the second example, a person in Toronto tweeted that he’d been told by a security guard to not take photos of a shipping container put in a public street, blocking a cycling lane. As I replied, “there is no legal basis upon which a security guard can require an individual private citizen to stop taking photos or video in a public place.”

I’ve previously done a video about recording the police in public (link below), but figured it was time to do a more general video about photography and videography in public.

Here’s the general rule: you can take photos in a public place or record video on public property without any legal consequences. That doesn’t always mean you should, but you generally can. You can also photograph or record any place or thing that is visible from a public place, which would include private property as long as you yourself are not trespassing.

There is nothing in our criminal law that makes it illegal to take photos or video in a public place. Other general laws are going to apply. You can’t be a nuisance, you can’t damage property, and you can’t obstruct the police when they are carrying out their duties. You can’t block traffic to get the perfect shot. Short of that, you can generally stand in a public place and take photos of everything and everyone you see.

In fact, you have a Charter right to take photos or record video. The right to freedom of expression protected in section 2(b) of the Charter also protects your right to collect information. Photography and videography are inherently expressive activities and are thus Charter-protected. Any limitation in law on that right would have to be justified under s. 1 of the Charter and any sort of blanket “no photography in public” law would not be justifiable.

Exceptions – voyeurism

That said, there is a crime of voyeurism that has a few nuances and can apply in public or quasi-public places. It was added to the Criminal Code (section 162) relatively recently.

It involves surreptitiously observing or recording a person where there is a reasonable expectation of privacy. It has to be surreptitious and there has to be a reasonable expectation of privacy.

Paragraph (a) makes it an offence to observe or record in a place in which a person can reasonably be expected to be nude … or to be engaged in explicit sexual activity.

Paragraph (b) makes it an offence where the recording or observing is done for the purpose of observing or recording a person in such a state or engaged in such an activity.

Paragraph (c) covers a broader range of observation or recording, but where it is done for a sexual purpose.

People should be aware that the courts have held you can have a reasonable expectation of privacy in a relatively public place and that the expectation of privacy can vary according to the method of observation. For example, you may not have much of an expectation of privacy with regard to being observed by someone at eye level, but you may have a protected expectation of privacy from being observed or recorded up a person’s dress or from above to look down their top.

One of the leading cases on this is the Supreme Court of Canada’s decision in R. v. Jarvis, 2019 SCC 10.

The accused was a teacher at a high school. He used a camera concealed inside a pen to make surreptitious video recordings of female students while they were engaged in ordinary school-related activities in common areas of the school. Most of the videos focused on the faces, upper bodies and breasts of female students. The students were not aware that they were being recorded. Of course, they did not consent to the recordings. A school board policy in effect at the relevant time prohibited the type of conduct engaged in by the accused. There were other official surveillance cameras in the school hallways.

The court said:

“Given ordinary expectations regarding video surveillance in places such as schools, the students would have reasonably expected that they would be captured incidentally by security cameras in various locations at the school and that this footage of them could be viewed or reviewed by authorized persons for purposes related to safety and the protection of property. It does not follow from this that they would have reasonably expected that they would also be recorded at close range with a hidden camera, let alone by a teacher for the teacher’s purely private purposes (an issue to which I will return later in these reasons). In part due to the technology used to make them, the videos made by Mr. Jarvis are far more intrusive than casual observation, security camera surveillance or other types of observation or recording that would reasonably be expected by people in most public places, and in particular, by students in a school environment.”

So while the students should have expected to be incidentally observed by the school’s cameras, that did not ultimately affect their expectation of privacy where a teacher with a hidden camera was concerned. He was convicted of voyeurism.

Another key element in the voyeurism offence is that it has to be surreptitious. In Jarvis, the camera was disguised in a pen. There is a case from Ontario called R. v. Lebenfish, 2014 ONCJ 130, in which a person was charged with voyeurism after he was observed taking photos, mainly of women, at a nude beach in Toronto. He was acquitted because he did not make any effort to hide what he was doing. The court also found that the other beach-goers did not have a reasonable expectation of privacy. The court did note that he wasn’t using a long zoom lens or other form of photographic enhancement.

Sneakily taking photos up dresses can be the offence of voyeurism, but standing on a sidewalk obviously taking a photo of someone else would not be.

In Lebenfish, the accused was also charged with mischief. Specifically, it was alleged he committed mischief “by willfully interfering with the lawful enjoyment without legal justification of property,” namely, the beach.

The court found that he did not interfere with the lawful enjoyment of the beach, but also noted that the answer may have been different if there were signs posted saying no photography or if there had been a municipal by-law prohibiting photography at the beach. If photography was prohibited, then part of the enjoyment of the beach would be that it was camera free.

One thing that is worth noting is that the law doesn’t offer any special protection for children. A while ago, the police here in Halifax were looking for someone who was reported to have been taking photos of kids at a public park. That was followed by a lot of people saying that it is plainly illegal to take photos of other people’s children at a park. That’s not the case. It is certainly creepy and concerning, but likely not illegal in and of itself.

Privacy laws

What about other kinds of laws? We have privacy laws to think about. The ones I deal with most often regulate what businesses can do. An individual taking photos for personal purposes is not a business.

And just to be clear, they have carve-outs for personal use and artistic use. Here’s what PIPEDA says:

(2) This Part does not apply to

(b) any individual in respect of personal information that the individual collects, uses or discloses for personal or domestic purposes and does not collect, use or disclose for any other purpose; or

(c) any organization in respect of personal information that the organization collects, uses or discloses for journalistic, artistic or literary purposes and does not collect, use or disclose for any other purpose.

The other provincial general privacy laws have similar exclusions.

Privacy torts

So what about the risk of being sued for damages for invasion of privacy? That’s not likely either.

In most common law provinces, you can sue or be sued for “intrusion upon seclusion”.

It is, in summary “an intentional or reckless intrusion, without lawful justification, into the plaintiff's private affairs or concerns that would be highly offensive to a reasonable person.”

If you poke into someone’s private life in a way that would be highly offensive, harm and damages are presumed.

You can also be sued for public disclosure of private facts, which also has to engage someone’s private life and be highly offensive to a reasonable person.

It is hard to see how taking photographs or video in a public place would engage someone’s private and intimate life, and be highly offensive to a reasonable person. It could be engaged if one were stalking someone, though.

Statutory torts

Some provinces have what are called statutory torts of invasion of privacy.

Here is the gist of the British Columbia Privacy Act.

1(1) It is a tort, actionable without proof of damage, for a person, wilfully and without a claim of right, to violate the privacy of another.

Note the violation has to be without a claim of right or legitimate justification.

It then goes on and says …

(2) The nature and degree of privacy to which a person is entitled in a situation or in relation to a matter is that which is reasonable in the circumstances, giving due regard to the lawful interests of others.

(3) In determining whether the act or conduct of a person is a violation of another's privacy, regard must be given to the nature, incidence and occasion of the act or conduct and to any domestic or other relationship between the parties.

Note it specifically refers to eavesdropping and surveillance in subsection (4), which reads:

(4) Without limiting subsections (1) to (3), privacy may be violated by eavesdropping or surveillance, whether or not accomplished by trespass.

Again, it is hard to see how obviously taking photographs or video in a public place would engage this tort, but it could be engaged if one were stalking someone.

Private property but public places

Regularly, we go to places where the public is generally invited, but which are private property. This can also include what we often think of as “public property” that is actually owned by someone else – think of a park owned by a municipality. People or organizations that own property can put conditions on entry to that property. One of those conditions may be “no photography”. If you exceed or violate the conditions of your invitation, you could then be trespassing. The property owner would be within their rights to ask you to leave under provincial trespassing statutes. In some provinces, it may be a provincial summary offence. But the owner or occupier of the property would have to put you on notice that photography is prohibited on the premises.

Requests to delete photos

Finally, I’m sometimes asked if you can be required to delete photos taken. The answer is a resounding no. No private individual can take your phone and nobody can require you to delete any photos.

Monday, August 08, 2022

Video: OPC Finding: Spam messages sent by COVID testing contractor

The Privacy Commissioner of Canada just released a report of findings about a company contracted by the Airport of Montreal to do on-arrival COVID testing. The company added the people tested to its mailing list and sent them unsolicited commercial electronic messages. The investigation was done jointly with the Information Commissioner of Quebec. The finding raises more questions than it answers.

The complainant in this case arrived at Montreal’s Trudeau International Airport. To comply with the Public Health Agency of Canada’s rules, the individual had to undergo on-arrival COVID testing. Conveniently, the Airport had contracted with a company called Biron Health Group to do COVID testing directly at the airport. So the complainant went to the Biron site, provided them with his contact information, and had the test done; it was negative, and they emailed him the results.

A few days after receiving his test results, the complainant received an email from Biron promoting its other services. The complainant unsubscribed using the link in the email, and never received any further unwanted emails from them. The OPC said “he was shocked to receive such an email” and filed a complaint with the OPC.

The information and privacy commissioner of Quebec also investigated, but does not appear to have released a decision on the case. Instead, they just referred to the OPC’s finding.

During the course of the investigation, the company said it had “implied consent” under Canada’s Anti-Spam Law to send commercial electronic messages and was justified in doing so.

The OPC said there was no implied consent under PIPEDA, however. Here’s what they said specifically:

“The OPC is of the opinion that Biron could not reasonably assume that it had the implicit consent of travellers arriving in Canada. Biron was mandated by the government to conduct COVID-19 testing on travellers and paid by the Montreal Trudeau Airport. Biron was the only company offering this service at this airport. Consequently, travellers arriving in Canada had no choice but to do business with Biron to comply with the rules issued by the Public Health Agency. In this situation, these travellers would not normally expect their personal information to be used for reasons other than the mandatory testing.

Biron collected the travellers’ personal information for the purpose of conducting COVID-19 tests and sending them sensitive information related to their health, notably their test results. Biron was acting as a service provider for the airport. The OPC considers that Biron should have taken these circumstances into account before using the personal information for secondary marketing purposes and for its own purposes.”

Because Biron said they’d stop doing this, the OPC closed the file as “settled during the course of the investigation”. Case closed.

So why is this unsatisfying? There are a couple of key questions in the background, of interest to privacy practitioners, that are unaddressed and thus unanswered.

The first question is what law should actually apply to Biron in this case? The Privacy Commissioner refers to PIPEDA, our federal commercial privacy law. But we have a mess of privacy laws in Canada, more than a few of which could have been applicable.

Quebec has a provincial privacy law that applies to all businesses in that province, unless they are “federal works, undertakings or businesses”. Notably, international airports and airlines are “federal works, undertakings or businesses.”

There really is no doubt that if the testing facility had been off the airport property and operating on its own, the federal privacy law could not have applied at all and instead the Quebec private sector privacy law would have been applicable. That means the federal Commissioner would have had no jurisdiction to investigate and it would have been entirely up to the Quebec Commissioner to do so.

So does that mean that simply being on or operating from airport property makes you a “federal work, undertaking or business”? I don't think that can really be the case.

Was it because the service they were providing is connected to international travel that places them within Federal jurisdiction? That seems dubious to me.

Were they within Federal jurisdiction because they had been engaged by the airport authority to provide this service? The airport authority is certainly a “federal work, undertaking or business”, but does that mean all of its contractors become “federal works, undertakings or businesses”? Again, I don't think that can really be the case. Would a taxi company given a concession to serve the airport automatically come under federal jurisdiction?

They were performing a function that was required by the Public Health Agency of Canada, but PHAC is subject to the federal Privacy Act, which never came up in the commissioner's report of findings.

This would be trickier in a province like Alberta, where there is a provincial general privacy law that displaces PIPEDA and a health privacy law that does not. (Quebec doesn’t have a health-specific privacy law.)

Now, it may well be that both the federal and the Quebec Commissioners thought they didn’t even have to consider jurisdiction because they got the result they were looking for during the course of the investigation: the company said it would change its practices, and what might have been problematic under either the Quebec or the federal law has ceased. This seems likely to me, as in my experience the federal Privacy Commissioner’s office will bend over backwards to avoid making any statements related to their jurisdiction that could come back to haunt them later.

This is not just a privacy nerd question, because other things turn on whether a company is a “federal work, undertaking or business”. If Biron is in that category, then provincial labour and employment laws don’t apply to that workplace. Instead, the Canada Labour Code applies. Other federal laws would also suddenly apply to them, not just our privacy law. If I was this company, I’d be left scratching my head.

The second element of this that is problematic is the interaction between our privacy laws and Canada's anti-spam law, also known as CASL. You will recall that the company said that they were justified in sending commercial electronic messages because they had an “existing business relationship” with the people who underwent testing. The Privacy Commissioner really did not address that, but instead focused on the Personal Information Protection and Electronic Documents Act which requires consent for all collection, use and disclosure of personal information. That consent can be implied, particularly where it would be reasonable for the individual to expect that their information will be used for a particular purpose in light of the overall transaction. The Commissioner found that individuals would not expect to have their personal information used for the secondary purpose and therefore there was no implied consent under PIPEDA.

But that is contrary to the express scheme of Canada’s anti-spam law. Under CASL, an organization can only send a commercial electronic message to a recipient where it has consent to do so. That consent must be either express or implied. Implied consent under CASL is very different from implied consent under PIPEDA. CASL doesn’t care about what the consumer’s expectation might be: consent can be implied where there is an existing business relationship, and one of the possible existing business relationships is the purchase of goods or services from the organization in the previous two years. Presumably, buying a COVID test from a vendor would meet that threshold and there would be implied consent for sending commercial electronic messages. I do agree with the federal Privacy Commissioner that a marketing email sent because you were ordered to get tested by the Public Health Agency of Canada would really be contrary to the individual’s expectations.
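To make the contrast concrete, here is a toy sketch of the existing-business-relationship rule as I’ve just described it. The dates are invented, and real CASL compliance involves more conditions and exceptions than this single check:

```python
# A toy version of CASL's "existing business relationship" rule: a
# purchase within the previous two years implies consent to send
# commercial electronic messages -- no matter what the buyer expected.
from datetime import date, timedelta

TWO_YEARS = timedelta(days=730)  # rough approximation of two years


def casl_implied_consent(last_purchase: date, send_date: date) -> bool:
    """True if a recent purchase supports implied consent under CASL."""
    return timedelta(0) <= send_date - last_purchase <= TWO_YEARS


# The complainant's situation, with invented dates:
covid_test = date(2021, 9, 1)        # the on-arrival test (a "purchase")
marketing_email = date(2021, 9, 7)   # Biron's promotional message
print(casl_implied_consent(covid_test, marketing_email))  # True
```

CASL’s test asks only about the transaction; PIPEDA’s asks what the individual would reasonably expect.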

But this really does highlight some of the absurd dissonance between our anti-spam law and our privacy law. Both use the term “implied consent”, but it means radically different things. From this finding from the federal Commissioner, it appears that he is of the view that implied consent under CASL does not lead to deemed implied consent under PIPEDA. CASL expressly permits it, but PIPEDA does not.

When it comes to consent for sending commercial electronic messages, one would think that the piece of legislation that was expressly written and passed by Parliament for that purpose would be the final say, but the OPC certainly does not seem to be of that view.

The Privacy Commissioner carried out this investigation along with the Quebec commissioner, but there is no mention of whether the CRTC, which is the regulator under CASL, was involved.

At the end of the day, I think an existing business relationship was created between the complainant and the company so that there would have been implied consent to send commercial electronic messages, regardless of whether the consumer would have expected it to do so. The Commissioner did highlight that the individual had to be tested under the rules for the Public Health Agency of Canada, leaving room to argue that had the individual gone to the company for a test for other purposes, that might have been a more direct commercial relationship between the parties.

As my friend and tech law colleague Jade Buchanan pointed out on Twitter, “CASL is completely unnecessary when PIPEDA will apply to the use of personal information (name email, etc.) to send commercial electronic messages.” Personally, I think that one of the reasons why we have CASL is because PIPEDA was seldom enforced by the OPC against spammers when clear jurisdiction to do so existed for more than a decade before CASL was created.

And there’s nothing in the pending Consumer Privacy Protection Act that would address this dissonance between our privacy and spam law.

So that is the finding, and we're left scratching our heads a bit or at least have unanswered questions about important matters of jurisdiction and the intersection between our privacy laws and our spam laws.

Monday, June 27, 2022

Video: Preparing for Canada's new Consumer Privacy Protection Act

The government of Canada tabled the Digital Charter Implementation Act, 2022 in the week before parliament rose for their summer break. While this is in limbo, what, if anything, should Canadian businesses be doing to prepare for the Consumer Privacy Protection Act?

In the week before the summer break, the Industry Minister tabled in parliament the Digital Charter Implementation Act, which will overhaul Canada’s federal private sector privacy law. It has been long anticipated and for many, long overdue. With parliamentarians off for the summer, what can we expect and what should businesses be doing to get ready for it?

I expect that when the House resumes, the bill will be referred either to the Standing Committee on Industry, which is where PIPEDA went more than 20 years ago, or to the Standing Committee on Access to Information, Privacy and Ethics.

I have to say that the current government is very unpredictable. When the previous version, Bill C-11, was tabled as the Digital Charter Implementation Act, 2020, the bill just sat there with no referral to committee and it seemed to not be a priority at all. If they are serious about privacy reform, they should get this thing moving when parliament is back in session.

When it gets to committee, the usual cast of characters will appear to provide comments. First up will be the Minister of Industry and his staff. Then will come the Privacy Commissioner of Canada, who will only have had a few months in office at that point. I would not be surprised to see provincial privacy commissioners have their say, and maybe even data protection authorities from other countries. Then industry and advocacy groups will have their turn.

The Commissioner was very critical of the C-11 version of the bill, and it appears that most of his suggestions have gone unheeded. I expect that between then and now there has been a lot of consultation and lobbying going on behind the scenes, which resulted in the few changes between C-11 and C-27. It will be interesting to see how responsive the committee and the government are to making changes to the bill.

I would not be surprised to see this bill passed, largely in its current form, before the end of the year. But even if it speeds through the House of Commons and the Senate, I do not expect that we will see this law in effect for some time. In order for the Consumer Privacy Protection Act and the Personal Information and Data Protection Tribunal Act to be fully in force, the government will have a lot of work to do.

The biggest effort will be standing up the new tribunal under the Personal Information and Data Protection Tribunal Act. Doing so will not be a trivial matter. Three to six members have to be recruited, and at least three of them have to have expertise in information and privacy law. They’ll need offices, staff, a registry and IT infrastructure, and then they’ll need to make their rules of procedure. I can’t see that taking any less than a year, even if the government is already informally recruiting for those roles.

An example I’d look at is the College of Patent Agents and Trademark Agents, which was established pursuant to a bill passed in December 2018 and came into force on June 28, 2021. Essentially, it took two and a half years between the passing of the bill and when the College was open for business. The college was probably more complicated to set up than the tribunal, but it provides some insight I think.

Personally, I don’t think the CPPA can be phased in without the tribunal operating as a going concern. There are transitional provisions related to complaints that are being dealt with by the Commissioner prior to the coming into force of the CPPA, but otherwise the existence of the tribunal is essential to the operation of the CPPA and the Commissioner’s mandate.

So if I had to look into my crystal ball, I don’t think we’ll see this fully in effect for at least a year and a half.

So should companies be doing anything now? I think so. When the CPPA and the Tribunal Act come into effect, they will be fully in effect – there is no phase-in for compliance. In addition to making your politicians aware of any concerns you have, companies should be looking very closely at their current privacy management program – if any – to determine whether it will be up to snuff.

Section 9 of the Act says that “every organization must implement and maintain a privacy management program that includes the policies, practices and procedures the organization has put in place to fulfill its obligations under this Act, including policies, practices and procedures respecting

(a) the protection of personal information; (b) how requests for information and complaints are received and dealt with; (c) the training and information provided to the organization’s staff respecting its policies, practices and procedures; and (d) the development of materials to explain the organization’s policies and procedures.”

It then says “In developing its privacy management program, the organization must take into account the volume and sensitivity of the personal information under its control.”

This is, of course, very similar to the first principle of the CSA Model Code that’s in PIPEDA. But section 10 of the CPPA says the Commissioner can ask for it and all of its supporting documentation at any time.

I can imagine the OPC sending out requests for all of this documentation to a huge range of businesses shortly after the Act comes into force.

So what does a privacy management program include? It of course includes your publicly-facing privacy statement described in section 62. What has to be in this document will change a lot compared to PIPEDA. It has to explain in plain language what information is under the organization’s control and give a general account of how the organization uses that personal information.

If the organization uses the “legitimate interest” consent exception, the privacy statement has to include a description of that. If the organization uses any automated decision system to make predictions, recommendations or decisions about individuals that could have a “significant impact on them”, that has to be described. It also has to say whether or not the organization carries out any international or interprovincial transfer or disclosure of personal information that may have reasonably foreseeable privacy implications. You also have to state the retention periods applicable to sensitive personal information, then explain the process for questions, complaints, access requests and requests for deletion. Most privacy statements don’t currently include all this information.

You need to assess what personal information you have, where it is, who has it, who has access to it, what jurisdiction it is in or exposed to, how it is secured, when you collected it, what the purposes for that collection were, whether there are any new purposes, and whether those purposes have expired.

A good starting point for your privacy management program is to document all the personal information under the organization’s control and the purposes for which it is to be used. Section 12(3) of the CPPA requires that this be documented. You will also need to ensure that all of these purposes are appropriate using the criteria in section 12(2).
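For what it’s worth, here is one way an organization might start that documentation exercise – a simple inventory record for each category of personal information under its control. The fields are my own suggestion, not anything prescribed by the CPPA:

```python
# A sketch of a personal-information inventory record. Field names are
# suggestions only; the CPPA does not prescribe a particular format.
from dataclasses import dataclass, field


@dataclass
class PersonalInfoRecord:
    category: str          # e.g. "customer email addresses"
    purposes: list[str]    # documented purposes (see CPPA s. 12(3))
    sensitive: bool        # sensitive info drives retention disclosure
    retention_period: str  # e.g. "24 months after last transaction"
    systems: list[str] = field(default_factory=list)        # where it lives
    jurisdictions: list[str] = field(default_factory=list)  # where it flows


inventory = [
    PersonalInfoRecord(
        category="customer email addresses",
        purposes=["order confirmations", "service notices"],
        sensitive=False,
        retention_period="24 months after last transaction",
        systems=["CRM", "email service provider"],
        jurisdictions=["Canada", "United States"],
    ),
]
```

A register like this also gives you most of what you need for the retention schedules, service-provider inventory and consent-exception documentation discussed below.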

You’ll also want to review whether any of the consent exceptions related to business activities under section 18(1) or the legitimate interest exception in section 18(3) could be applicable, and document them.

Under s. 18(4), this documentation will have to be provided to the Commissioner on request.

You will also need to document the retention schedules for all of your personal information holdings, and make sure they are being followed. And remember, all information relating to minors is deemed to be sensitive, and the retention schedule for sensitive information has to be included in your privacy statement.

Next, you’ll want to inventory and document all of your service providers who are collecting, using or disclosing personal information on your behalf. You’ll need to review all of the contracts with those service providers to make sure each one provides a level of protection equivalent to the controlling organization’s own obligations. It should be noted that the definition of “service provider” in the Act expressly includes affiliated companies, so you’ll need to make sure that intercompany agreements are in place to address any personal information that may be transferred to affiliates.

You’ll want to check your processes for receiving questions, complaints and access requests from individuals. You may need to tweak your systems or processes to make sure that you can securely delete or anonymize data where required.

And last, but certainly not least, you’ll want to look very closely at your data breach response plan. It needs to make sure that all suspected data breaches are identified, properly escalated and reviewed. Any breach itself of course has to be stopped, mitigated and investigated. The details will need to be recorded, and you’ll also want to think about the processes for getting legal advice at that stage, so that information you may want to keep privileged will be protected and you can understand your reporting and notification obligations.

At the end of the day, the CPPA is not a radical departure from the existing framework of PIPEDA. It requires greater diligence and what we in the privacy industrial complex call “privacy maturity”. Even if it didn’t, the significant penalties and the cost of dealing with investigations and inquiries by the Commissioner and possible hearings before the tribunal should be enough to convince organizations to up their privacy game.

Monday, June 20, 2022

Video: An overview of the Digital Charter Implementation Act, 2022

Finally, the government of Canada has tabled its long-awaited privacy law, intended to completely overhaul Canada’s private sector privacy law, and rocket the country to the front of the pack for protecting privacy. Not quite, but I’ll give you an overview of what it says.

Highlights

On June 16, 2022, Industry Minister François-Philippe Champagne finally tabled in the House of Commons Bill C-27, called the “Digital Charter Implementation Act, 2022”. This is the long-awaited privacy bill that is slated to replace the Personal Information Protection and Electronic Documents Act, which has regulated the collection, use and disclosure of personal information in the course of commercial activity in Canada since 2001.

PIPEDA, contrary to what Minister Champagne said at the press conference later that day, has been updated a number of times, but there has been a broad consensus that it was in need of a more general overhaul.

The bill is very similar to Bill C-11, which was tabled in 2020 as the Digital Charter Implementation Act, 2020, and which languished in Parliament until dying when the federal government called the last election.

The bill creates three new laws. The first is the Consumer Privacy Protection Act, which is the main privacy law. The second is the Personal Information and Data Protection Tribunal Act and the third is the Artificial Intelligence and Data Act, which I’ll have to leave to another episode.

I don’t plan to do a deep dive into the bill in this video, as I want to spend more time poring over its detailed provisions. We can’t just do a line-by-line comparison with PIPEDA, as the Bill is structured completely differently. You may recall that PIPEDA included a schedule taken from the Canadian Standards Association Model Code for the Protection of Personal Information. The statute largely said “follow that”, with a bunch of provisions in the body of the Act that modify those standards or set out how the law is overseen.

The most significant difference is what many privacy advocates have been calling for: the Privacy Commissioner is no longer an ombudsman. The law includes order-making powers and punitive penalties. The Bill also creates a new tribunal called the Personal Information and Data Protection Tribunal, which takes over the current role of the Federal Court under PIPEDA, with greater powers.

Other than the order-making powers, I don’t see much of a difference between what’s required under the new CPPA and what diligent, privacy-minded organizations have been doing for years.

This is a high-level overview of what’s in Bill C-27, and I’ll certainly do deeper dives into its provisions in later videos.

Does the law apply any differently?

PIPEDA applied to the collection, use and disclosure of personal information in the course of commercial activity and to federally regulated workplaces. That hasn’t changed, but a new section 6(2) says that the Act specifically applies to personal information that is collected, used or disclosed interprovincially or internationally. The Privacy Commissioner had in the past asserted that this was implied, but it was never written in the Act. Now it will be. Two things about that are problematic. The first is that it’s not expressly limited to commercial activity, so an argument could be made that it would apply to non-commercial or employee personal information that crosses borders. The second is that a company with operations in British Columbia and Alberta, when it moves data from one province to another, not only has to comply with the substantially similar privacy laws of each province, but now also has to comply with the Consumer Privacy Protection Act. That seems very redundant.

It includes the same carve-outs for government institutions under the Privacy Act, personal or domestic use of personal information, journalistic, artistic and literary uses of personal information and business contact information.

We really could have benefitted from a clear extension of the Act to personal information that is imported from Europe so we can have confidence that the adequacy finding from the EU, present and future, really applies across the board.

It does have an interesting approach to anonymous and de-identified data. It officially creates these two categories. It defines “anonymize” as: “to irreversibly and permanently modify personal information, in accordance with generally accepted best practices, to ensure that no individual can be identified from the information, whether directly or indirectly, by any means.” So there effectively is no reasonable prospect of re-identification. To “de-identify” data means “to modify personal information so that an individual cannot be directly identified from it, though a risk of the individual being identified remains.” You’re essentially using data with the identifiers removed.

The legislation does not regulate anonymous data, because there is no reasonable prospect of re-identification. It does regulate de-identified data and generally prohibits attempts to re-identify it. The law also says that in some cases, de-identified data can be used or even has to be used in place of fully identifiable personal information.

What happened to the CSA model code?

When you look at the CPPA, you’ll immediately see that it is very different. It’s similar in structure to the Personal Information Protection Acts of Alberta and British Columbia, in that the principles of the CSA Model Code are not in a schedule but are in the body of the Act. And the language of these principles has necessarily been modified to be more statutory, rather than the sort of language you see in an industry standards document.

Any changes to the 10 CSA Principles?

The ten principles themselves largely haven’t been changed, and this should not be a surprise. Though written in the 1990s, they were based on the OECD guidelines, and we see versions of all ten principles in all modern privacy laws.

What has changed is the additional rigor that organizations have to implement, or more detail that’s been provided about how they have to comply with the law.

For example, principle 1 of the CSA Model Code required that an organization “implement policies and practices to give effect to the CSA Model Code principles”. The CPPA explicitly requires that an organization have a privacy management program:

Privacy management program

9 (1) Every organization must implement and maintain a privacy management program that includes the policies, practices and procedures the organization has put in place to fulfill its obligations under this Act, including policies, practices and procedures respecting

(a) the protection of personal information;

(b) how requests for information and complaints are received and dealt with;

(c) the training and information provided to the organization’s staff respecting its policies, practices and procedures; and

(d) the development of materials to explain the organization’s policies and procedures.

Volume and sensitivity

(2) In developing its privacy management program, the organization must take into account the volume and sensitivity of the personal information under its control.

This privacy management program has to be provided to the Privacy Commissioner on request.

With respect to consent, organizations expressly have to record and document the purposes for which any personal information is collected, used or disclosed. This was implied in the CSA Model Code, but is now expressly spelled out in the Act.

Section 15 lays out in detail what is required for consent to be valid. Essentially, it requires not only identifying the purposes, but also communicating in plain language how the information will be collected, the reasonably foreseeable consequences of the collection, use or disclosure, the types of information involved, and to whom the information may be disclosed.

I’ll have to save digging into the weeds for another episode.

Collection and use without consent

One change compared to PIPEDA that will delight some and enrage others is the circumstances under which an organization can collect and use personal information without consent. Section 18 allows collection and use without consent for certain business activities, such as activities necessary to provide a product or service the individual has requested, for security purposes, for safety, or for other prescribed activities. Notably, this exception cannot be used where the personal information is to be collected or used to influence the individual’s behaviour or decisions.

There is also a “legitimate interest” exception, which requires an organization to document any possible adverse effects on the individual, mitigate them and finally weigh whether the legitimate interest outweighs any adverse effects. It’s unclear how “adverse effects” would be measured.

Like PIPEDA, an individual can withdraw consent subject to similar limitations that were in PIPEDA. But what’s changed is that an individual can require that their information be disposed of. Notably, disposal includes deletion and rendering it anonymous.

Law enforcement access

On a first review, it doesn’t look like there are many other circumstances where an organization can collect, use or disclose personal information compared to section 7 of PIPEDA.

In my view, it is very interesting that the exceptions that can apply when the government or the cops come looking for personal information have not changed from section 7(3) of PIPEDA. For example, the provision that the Supreme Court of Canada in R v Spencer said was meaningless is essentially reproduced in full.

44 An organization may disclose an individual’s personal information without their knowledge or consent to a government institution or part of a government institution that has made a request for the information, identified its lawful authority to obtain the information and indicated that the disclosure is requested for the purpose of enforcing federal or provincial law or law of a foreign jurisdiction, carrying out an investigation relating to the enforcement of any such law or gathering intelligence for the purpose of enforcing any such law.

The Supreme Court essentially said “what the hell does lawful authority mean?” And the government has made no effort to define it in Bill C-27. But that’s just as well, since companies should always say “come back with a warrant”.

Investigations

The big changes are with respect to the role of the Privacy Commissioner. The Commissioner is no longer an ombudsman with a focus on nudging companies towards compliance and solving problems for individuals. The role has veered strongly towards enforcement.

As with PIPEDA, enforcement starts with a complaint by an individual, or the Commissioner can initiate one on his own initiative. There are more circumstances under the CPPA where the Commissioner can decline to investigate. After the investigation, the matter can be referred to an inquiry.

Inquiries seem to have far more procedural protections for fairness and due process than under the existing ad hoc system. For example, each party is guaranteed a right to be heard and to be represented by counsel. To my knowledge the Commissioner’s office has always done this, but it will now be baked into the law. Also, the Commissioner has to develop rules of procedure and evidence that have to be followed, and these rules have to be made public.

At the end of the inquiry, the Commissioner can issue orders requiring an organization to take measures to comply with the Act or to stop doing something that contravenes it. The Commissioner can continue to name and shame violators. Notably, the Commissioner cannot levy any penalties.

The Commissioner can recommend that penalties be imposed by the new Personal Information and Data Protection Tribunal.

The Tribunal

The legislation creates a new specialized tribunal which hears cases under the CPPA. Its jurisdiction will likely grow to include more matters: the “online harms” consultation that took place in the last year anticipated that certain questions would be determined by this tribunal as well.

Compared to C-11, the new bill requires that at least three of the tribunal members have expertise in privacy.

Its role is to determine whether any penalties recommended by the Privacy Commissioner are appropriate. It also hears appeals of the Commissioner’s findings, appeals of interim or final orders of the Commissioner and a decision by the Commissioner not to recommend that any penalties be levied.

Currently, under PIPEDA, complainants and the Commissioner can seek a hearing in the federal court after the commissioner has issued his finding. That hearing is “de novo”, so that the court gets to make its own findings of fact and determinations of law, based on the submissions of the parties. The tribunal, in contrast, has a standard of review that is “correctness” for questions of law and “palpable and overriding error” for questions of fact or questions of mixed law and fact. These decisions are subject to limited judicial review before the Federal Court.

So what about these penalties? They are potentially huge and I have a feeling that the big numbers were pulled out of the air in order to support political talking points that they are the most punitive in the G7. The maximum administrative monetary penalty that the tribunal can impose in one case is the higher of $10,000,000 and 3% of the organization’s gross global revenue in its financial year before the one in which the penalty is imposed.

The Act also provides for quasi-criminal prosecutions, which can get even higher.

The Crown prosecutor can decide whether to proceed by indictment, with a fine not exceeding the higher of $25,000,000 and 5% of the organization’s gross global revenue, or summarily, with a fine not exceeding the higher of $20,000,000 and 4% of the organization’s gross global revenue. If it’s a prosecution, then the usual rules of criminal procedure and fairness apply, like the presumption of innocence and proof beyond a reasonable doubt.

Sunday, May 29, 2022

The problem with Bill S-7: Device searches at the border

The government wants border agents to be able to search your smartphones and laptops without any suspicion that you’ve done anything wrong. I think that’s a problem. There are a lot of problematic bills currently pending before parliament but one in particular is not getting enough attention. It’s Bill S-7, called An Act to amend the Customs Act and the Preclearance Act, 2016. Today I’m going to talk about the bill, digital device searches and what I think about it all.

I don’t know about you, but my smartphone and my laptop contain a vast amount of personal information about me. My phone is a portal to every photo of my kids, messages to my wife, my banking and other information. It contains client information. And the Canada Border Services Agency wants to be able to search it without any suspicion that I’ve committed a crime or violated any law.

Bill S-7, which was introduced in the Senate on March 31, 2022, is intended to give the CBSA the power to go browsing through your smartphone and mine on what amounts to a whim. It also extends the same powers to US Homeland Security agents who carry out pre-departure pre-clearance at Canadian airports.

If you’ve ever watched the TV show “Border Security Canada”, you would have seen how routine these sorts of searches are. Many of the searches do produce evidence of illegal activity, like smuggling, immigration violations and even importation of child sexual abuse materials. The question is not whether these searches should ever be permissible, but under what circumstances. The government wants it to be with a very low threshold, while I’m confident that the Charter requires more than that.

We all know there’s a reduced expectation of privacy at the border, where you can be pulled over to secondary screening and have your stuff searched. The Customs Act specifically gives CBSA the power to search goods. But a big problem has arisen because the CBSA thinks the ones and zeros in your phone are goods they can search.

Smartphones were unheard of when the search powers of the Customs Act were last drafted, and the CBSA thinks the Act gives them carte blanche to search your devices. In the meantime, the courts have rightly said that’s going too far. So the government is looking to amend the Customs Act to authorize device searches if the CBSA officer has a “reasonable general concern” about a contravention of the law.

One big issue is: what the hell does “reasonable general concern” mean? In law, we’re used to language like “reasonable grounds to believe a crime has been committed” or even “reasonable grounds to suspect”, but “reasonable general concern” is not a standard for any sort of search in Canadian law. Your guess is as good as mine, but it seems pretty close to whether the officer’s “spidey sense is tingling”.

S-7 is trying to fix a problem and I think the way they’re doing it will ultimately be found to be unconstitutional. To see that, we have to look at the competing interests at play in this context and look at what the courts have recently said about device searches at the border.

It is clear that you have a reduced expectation of privacy at the border, but it is not completely eliminated. And the Charter is not suspended at the border. For example, border officers can’t detain and strip search you just because they want to. These searches legally cannot be performed unless an officer has reasonable grounds to suspect some legal contravention, notably the concealment of goods. And they can’t strip search you unless there is a reason to do so, like looking for contraband smuggled on your person.

Meanwhile, there is a growing body of case law that says individuals have a very high expectation of privacy in their digital devices. For example, in a case called Fearon from 2014, the Supreme Court modified the common law rule for searches incident to arrest in the case of smartphones, specifically because of the immense privacy implications of searching such devices. Upon arrest, the police can routinely search you, your clothes and your belongings, but they can only search your smartphone if certain criteria are met.

The Supreme Court has clearly established that the greater the intrusion on privacy, the greater the constitutional protections and a greater justification for the search is required. And while there may be a diminished expectation of privacy at the border, this expectation is not completely extinguished.

At the same time, there has been a developing body of case law saying that suspicionless searches of personal electronic devices at the border violate the Charter.

The leading Supreme Court of Canada case on privacy at the border is from 1988, called Simmons. In that case, the Court recognized that the degree of personal privacy reasonably expected by individuals at the border is lower than in most other situations. Three distinct types of border searches, of increasing intrusiveness, were identified: (1) routine questioning which every traveller undergoes at a port of entry, sometimes accompanied by a search of baggage and perhaps a pat or frisk of outer clothing; (2) a strip or skin search conducted in a private room after a secondary examination; and (3) a body cavity search. The first category was viewed as the least intrusive type of routine search, not raising any constitutional issues or engaging the rights protected by the Charter. Essentially, searches in this category can be done without any suspicion of wrongdoing.

Since then, customs agents have treated the search of a phone as the same as the search of your luggage, which they concluded they could do without any suspicion of wrongdoing.

The Alberta Court of Appeal in 2020, in a case called Canfield, said that customs’ treatment of personal electronic devices was wrong, and it does not fit into that first category. The court noted:

“There have been significant developments, both in the technology of personal electronic devices and in the law relating to searches of such devices, since Simmons was decided in 1988. A series of cases from the Supreme Court of Canada over the past decade have recognized that individuals have a reasonable expectation of privacy in the contents of their personal electronic devices, at least in the domestic context. While reasonable expectations of privacy may be lower at the border, the evolving matrix of legislative and social facts and developments in the law regarding privacy in personal electronic devices have not yet been thoroughly considered in the border context.”

The court then said:

“We have also concluded that s 99(1)(a) of the Customs Act is unconstitutional to the extent that it imposes no limits on the searches of such devices at the border, and is not saved by s 1 of the Charter. We accordingly declare that the definition of “goods” in s 2 of the Customs Act is of no force or effect insofar as the definition includes the contents of personal electronic devices for the purpose of s 99(1)(a).”

The Court in Canfield essentially said there has to be a minimal threshold in order to justify a search of a digital device, but they would leave it to parliament to determine what that threshold is.

But the next year, the same Alberta Court of Appeal considered an appeal in a case called Al Askari. In that case, the question was related to a search of a personal electronic device justified under immigration legislation. The Court found that like in Canfield, there has to be a threshold and it can’t be suspicionless.

The court commented favourably on the very reasoned approach put forward by my friend and Schulich School of Law colleague Professor Robert Currie.

“Prof Currie suggests that the critical issue is measuring the reasonably reduced expectation of privacy at the border and the extent of permissible state intrusion into it. In his view, this is best achieved through the established test in R v Collins, [1987] 1 SCR 265, 308. Was the search authorized by law? Is the law itself reasonable? Is the search carried out in a reasonable manner?

When assessing whether the law itself is reasonable, Prof Currie proposes a standard of reasonable suspicion because it is tailor-made to the border context. It must amount to more than a generalized suspicion and be based on objectively reasonable facts within the totality of the circumstances: 311. On the reasonableness of the search, he advocates for an inquiry into whether the search was limited in scope and duration.”

The Court in both Canfield and Al Askari noted that not all searches are the same, and there are degrees of intrusion into personal electronic devices. Asking to look at a receipt for imported goods on a phone is very different from just perusing the full device looking for anything at all.

So fast forward to March 2022. The Alberta Court of Appeal said it’s up to Parliament to set the threshold and for the courts to determine whether it is compliant with the Charter. So Parliament is proposing a threshold of “reasonable general concern” to search documents on a personal digital device. This is setting things up for years of further litigation.

The “reasonable general concern” standard is not only new and left undefined in the bill, it is inconsistent with other legislation governing border searches. The bill also does not impose any obligation that the type of search carried out be appropriate to what is “of general concern”, or set any limits on what can be searched on the device once the “reasonable general concern” (whatever that means) threshold is met.

If you look at the case of Fearon, which addressed device searches incident to arrest, the court imposed a bunch of conditions and limits in order to take account of the nature of device searches. Importantly, the extent of the permitted search has to be appropriate to what they legitimately have an interest in. The court said:

“In practice, this will mean that, generally, even when a cell phone search is permitted because it is truly incidental to the arrest, only recently sent or drafted emails, texts, photos and the call log may be examined as in most cases only those sorts of items will have the necessary link to the purposes for which prompt examination of the device is permitted. But these are not rules, and other searches may in some circumstances be justified. The test is whether the nature and extent of the search are tailored to the purpose for which the search may lawfully be conducted. To paraphrase Caslake, the police must be able to explain, within the permitted purposes, what they searched and why”

In the border context, if officers are looking into whether someone arriving on a tourist visa actually has a job waiting for them, they shouldn’t go looking for evidence of that in the camera roll. They can scan the subject lines of emails, not go prowling through all the mail in the inbox.

Fearon also requires police to carefully document their searches, the rationale, what they looked at and why. There is no such requirement in Bill S-7.

Given years of growing jurisprudence confirming that personal electronic devices contain inherently private information, and the tendency of the courts to impose meaningful thresholds on searches of such devices, the creation of this lower threshold is unreasonable, inconsistent with other search standards, and can be anticipated to run afoul of the Charter.

I think that after Canfield and Al Askari, government lawyers and policy makers huddled and tried to invent something that could plausibly be called a threshold but was miles below reasonable suspicion. And this is what they came up with. You’ll note that they ignored all the really smart and sensible things that Professor Currie proposed.

What is also very notable is that the government ignored the recommendations made by the House of Commons Standing Committee on Access to Information, Privacy and Ethics in 2017, after it had carried out an extensive study and consultation on the issue of privacy at borders and airports. (I testified at those hearings on behalf of the Canadian Bar Association.) It recommended “reasonable grounds to suspect” as the threshold.

The threshold is so low that it’s hardly a threshold at all. It’s a licence for the CBSA to continue its practice of routinely searching electronic devices, and it will invite continued legal challenges. I just really wish the legislators would listen to the experts and the courts.

Monday, May 16, 2022

Video: Law enforcement requests for customer information - Come Back With A Warrant

Canadian businesses are routinely asked by police agencies to provide customer information in order to further their investigations or intelligence gathering. The police generally do not care whether the business can legally disclose the information and, in my experience, the police are generally ignorant of privacy laws that restrict the ability of Canadian businesses to cooperate with law enforcement investigations.

For some time, there was some degree of uncertainty about the extent to which Canadian businesses could voluntarily provide information to the police upon request, but this uncertainty has been completely resolved so that it is clear that if the police come knocking, Canadian businesses must respond with “come back with a warrant”.

The uncertainty that used to exist is rooted in section 7 of the Personal Information Protection and Electronic Documents Act, also known as PIPEDA. Section 7 is the part of the law that allows businesses to collect, use or disclose personal information without the consent of individuals. Not surprisingly, there is a provision that dictates whether an organization can or cannot give the police customer information if the police come knocking.

Section 7(3)(c.1) allows a business to disclose personal information to a police agency upon request if they have indicated that the information is necessary for a range of purposes and have identified their lawful authority to obtain the information. There's another provision in the act that deals with what happens when the police show up with a warrant or a production order.

It is clear that in those circumstances, personal information can be disclosed. If it is a valid Canadian Court order, it is likely that not providing the information could subject the business to prosecution.

There’s also a provision in the Canadian Criminal Code that makes it clear that the police can ask for anything from a person who is not prohibited by law from disclosing it, which further fed this uncertainty.

So for some time in Canada, the police believed that businesses could disclose information without a warrant as long as it was associated with a lawful investigation. Police believed that the fact that they were investigating a crime was all the “lawful authority” they needed.

Where this would come up most often would be if police had identified illegal online conduct and had the IP address of a suspect. They would seek from an internet service provider the customer name and address that was associated with that IP address at that time. Without that information, they had no suspect to investigate and ISPs hold the keys connecting that IP address with a suspect.

The Canadian Association of Internet Providers actually concluded a form of protocol with Canadian police that would facilitate the provision of this information. Surprisingly, the CAIP was of the view that this was not private information. What would be required was a written request from a police agency indicating that the information was relevant to an investigation of certain categories of online offences, principally related to child exploitation. These letters stated that they were issued under the “authority of PIPEDA”, which is simply absurd.

It is my understanding that the internet providers were generally comfortable with providing this information in connection with such important investigations. For other categories of offences, they would require a production order.

It is also my understanding that some internet providers fine-tuned their terms of service and privacy policies to permit these sorts of disclosures, so that the businesses would have additional cover by saying in fact the customer had consented to disclosure under these circumstances.

One thing to bear in mind, of course, is that this provision in PIPEDA is permissive, meaning that if this interpretation was correct, businesses could voluntarily provide this information, but nothing compelled them to do so. They could always insist on a court order, but very often did not.

Some courts found this agreeable and held that evidence provided voluntarily under this scheme was admissible, while other courts found it to be a violation of the suspect’s section 8 rights under the Charter.

Then along came a case called R. v Spencer. In this case, a police officer in Saskatoon, Saskatchewan detected someone sharing a folder containing child pornography using a service called LimeWire. The officer was able to determine the IP address of the internet connection being used by that computer and was able to determine that the IP address was allocated to a customer of Shaw Communications. So the cop sent a written “law enforcement request” to Shaw and Shaw handed over the customer information associated with the account. The cops did not try to obtain a production order first.

The account associated with the IP address was actually in the name of the accused’s sister.

It finally found its way up to the Supreme Court of Canada where the court had to determine whether the request was a “search” under the Charter. It was. And then the question was whether the search was authorized by law. The Court said it was not.

The police and prosecution, of course, argued that this is just “phone book information” that doesn’t implicate any serious privacy issues. The court disagreed, quoting from a Saskatchewan Court of Appeal decision from 2011 called Trapp:

“To label information of this kind as mere “subscriber information” or “customer information”, or nothing but “name, address, and telephone number information”, tends to obscure its true nature. I say this because these characterizations gloss over the significance of an IP address and what such an address, once identified with a particular individual, is capable of revealing about that individual, including the individual’s online activity in the home.”

Justice Cromwell writing for the court concluded that “Here, the subject matter of the search is the identity of a subscriber whose Internet connection is linked to particular, monitored Internet activity.”

The court said that constitutionally protected privacy includes anonymity. Justice Cromwell wrote, and then quoted from the Spencer decision of the Court of Appeal:

[51] I conclude therefore that the police request to Shaw for subscriber information corresponding to specifically observed, anonymous Internet activity engages a high level of informational privacy. I agree with Caldwell J.A.’s conclusion on this point:
. . . a reasonable and informed person concerned about the protection of privacy would expect one’s activities on one’s own computer used in one’s own home would be private. . . . In my judgment, it matters not that the personal attributes of the Disclosed Information pertained to Mr. Spencer’s sister because Mr. Spencer was personally and directly exposed to the consequences of the police conduct in this case. As such, the police conduct prima facie engaged a personal privacy right of Mr. Spencer and, in this respect, his interest in the privacy of the Disclosed Information was direct and personal.

The court then was tasked with considering what “lawful authority” means in subsection 7(3)(c.1).

The court concluded that the police, carrying out this investigation, did not have the lawful authority that would be required to trigger and permit the disclosure under the subsection. While the police can always ask for the information, they did not have the lawful authority to obtain it. If they had sought a production order, their right to obtain the information and Shaw’s obligation to disclose it would have been clear.

What the court did not do was settle what exactly lawful authority means. It does not mean a simple police investigation, even for a serious crime, but what it might include remains unknown.

What is clear, however, is the end result that this subsection of PIPEDA simply does not permit organizations to hand over customer information simply because the police agency is conducting a lawful investigation. If they want the information, they have to come back with a court order.

Just a quick note about other forms of legal process. While production orders are the most common tool used by law enforcement agencies to seek and obtain customer information, a very large number of administrative bodies are able to use different forms of orders or demands. For example, the CRTC’s spam investigators can use something called a notice to produce under the anti-spam legislation, which is not reviewed or approved by a judge in advance.

It is not uncommon for businesses to receive subpoenas, and they need to tread very carefully and read the details of the subpoena. In order to comply with privacy legislation, the organization can only do what it is directed to do in the subpoena, no more. In the majority of cases, the subpoena will direct the company to send somebody to court with particular records. Just sending those records to the litigants or the person issuing the subpoena is not lawful.

Before I wrap up, it should be noted that the rules are different if it is the business itself reporting a crime. Paragraph (c.1) applies where the police come knocking looking for information. Paragraph (d) is the provision that applies where the organization itself takes the initiative to disclose information to the police or a government institution. It specifically says that an organization may disclose personal information without consent where the disclosure is made on the initiative of the organization to a government institution and the organization has reasonable grounds to believe that the information relates to a contravention of the laws of Canada, a province or a foreign jurisdiction that has been, is being or is about to be committed.

This paragraph gives much more discretion to the organization, but it is still limited to circumstances where the organization has reasonable grounds to believe the information relates to such a contravention, and it can only disclose the minimum amount of personal information that’s reasonably necessary for these purposes.

A scenario that comes up relatively often would be if a store is robbed, and there is surveillance video of the robbery taking place including the suspect. The store can provide that video to the police on their own initiative. Contrast that to another common scenario, where the police are investigating a crime and evidence may have been captured on surveillance video. If it is the police asking for it, and not the organization reporting it on their own initiative, the police have to come back with a court order.

At the end of the day, the safest and smartest thing a business can do when asked for any customer personal information is to simply say “come back with a warrant”. Even if you think you can lawfully disclose the information, it simply makes sense to leave it to an impartial decision maker, such as a judge or justice of the peace, to do the balancing between the public interest in the police having access to the information and the individual privacy interests at play.