Sunday, May 29, 2022

The problem with Bill S-7: Device searches at the border

The government wants border agents to be able to search your smartphones and laptops without any suspicion that you’ve done anything wrong. I think that’s a problem. There are a lot of problematic bills currently pending before parliament but one in particular is not getting enough attention. It’s Bill S-7, called An Act to amend the Customs Act and the Preclearance Act, 2016. Today I’m going to talk about the bill, digital device searches and what I think about it all.

I don’t know about you, but my smartphone and my laptop contain a vast amount of personal information about me. My phone is a portal to every photo of my kids, messages to my wife, my banking and other information. It contains client information. And the Canada Border Services Agency wants to be able to search it without any suspicion that I’ve committed a crime or violated any law.

Bill S-7, which was introduced in the Senate on March 31, 2022, is intended to give the CBSA the power to go browsing through your smartphone and mine on what amounts to a whim. It also extends the same powers to US Homeland Security agents who carry out pre-departure pre-clearance at Canadian airports.

If you’ve ever watched the TV show “Border Security Canada”, you would have seen how routine these sorts of searches are. Many of the searches do produce evidence of illegal activity, like smuggling, immigration violations and even importation of child sexual abuse materials. The question is not whether these searches should ever be permissible, but under what circumstances. The government wants it to be with a very low threshold, while I’m confident that the Charter requires more than that.

We all know there’s a reduced expectation of privacy at the border, where you can be pulled over to secondary screening and have your stuff searched. The Customs Act specifically gives CBSA the power to search goods. But a big problem has arisen because the CBSA thinks the ones and zeros in your phone are goods they can search.

Smartphones were unheard of when the search powers of the Customs Act were last drafted, and the CBSA thinks the Act gives them carte blanche to search your devices. In the meantime, the courts have rightly said that’s going too far. So the government is looking to amend the Customs Act to authorize device searches if the CBSA officer has a “reasonable general concern” about a contravention of the law.

One big issue is what the hell does “reasonable general concern” mean? In law, we’re used to language like “reasonable grounds to believe a crime has been committed” or even “reasonable grounds to suspect”, but reasonable general concern is not a standard for any sort of search in Canadian law. Your guess is as good as mine, but it seems pretty close to whether the officer's “spidey sense is tingling”.

S-7 is trying to fix a problem and I think the way they’re doing it will ultimately be found to be unconstitutional. To see that, we have to look at the competing interests at play in this context and look at what the courts have recently said about device searches at the border.

It is clear that you have a reduced expectation of privacy at the border, but it is not completely eliminated. And the Charter is not suspended at the border. For example, border officers can’t detain and strip search you just because they want to. These searches legally cannot be performed unless an officer has reasonable grounds to suspect some legal contravention, notably the concealment of goods. And they can’t strip search you unless there is a reason to do so, like looking for contraband smuggled on your person.

Meanwhile, there is a growing body of case law that says individuals have a very high expectation of privacy in their digital devices. For example, in a case called Fearon from 2014, the Supreme Court modified the common law rule related to search incident to arrest for smartphones, specifically due to the immense privacy implications of searching such devices. Upon arrest, police can routinely search you, your clothes and your belongings, but they can only search your smartphone if certain criteria are met.

The Supreme Court has clearly established that the greater the intrusion on privacy, the greater the constitutional protections and the greater the justification required for the search. And while there may be a diminished expectation of privacy at the border, this expectation is not completely extinguished.

At the same time, there has been a developing body of case law saying that suspicionless searches of personal electronic devices at the border violate the Charter.

The leading Supreme Court of Canada case on privacy at the border is from 1988 called Simmons. In that case, the Court recognized that the degree of personal privacy reasonably expected by individuals at the border is lower than in most other situations. Three distinct types of border searches, with an increasing degree of privacy expectation, were identified: (1) routine questioning which every traveller undergoes at a port of entry, sometimes accompanied by a search of baggage and perhaps a pat or frisk of outer clothing; (2) a strip or skin search conducted in a private room after a secondary examination; and (3) a body cavity search. The first category was viewed as the least intrusive type of routine search, not raising any constitutional issues or engaging the rights protected by the Charter. Essentially, this category can be done without any suspicion of wrongdoing.

Since then, customs agents have treated a search of a phone the same as a search of your luggage, which they conclude they can do without any suspicion of wrongdoing.

The Alberta Court of Appeal in 2020, in a case called Canfield, said that customs’ treatment of personal electronic devices was wrong, and it does not fit into that first category. The court noted:

“There have been significant developments, both in the technology of personal electronic devices and in the law relating to searches of such devices, since Simmons was decided in 1988. A series of cases from the Supreme Court of Canada over the past decade have recognized that individuals have a reasonable expectation of privacy in the contents of their personal electronic devices, at least in the domestic context. While reasonable expectations of privacy may be lower at the border, the evolving matrix of legislative and social facts and developments in the law regarding privacy in personal electronic devices have not yet been thoroughly considered in the border context.”

The court then said:

“We have also concluded that s 99(1)(a) of the Customs Act is unconstitutional to the extent that it imposes no limits on the searches of such devices at the border, and is not saved by s 1 of the Charter. We accordingly declare that the definition of “goods” in s 2 of the Customs Act is of no force or effect insofar as the definition includes the contents of personal electronic devices for the purpose of s 99(1)(a).”

The Court in Canfield essentially said there has to be a minimal threshold in order to justify a search of a digital device, but they would leave it to parliament to determine what that threshold is.

But the next year, the same Alberta Court of Appeal considered an appeal in a case called Al Askari. In that case, the question was related to a search of a personal electronic device justified under immigration legislation. The Court found that like in Canfield, there has to be a threshold and it can’t be suspicionless.

The court commented favourably on the well-reasoned approach put forward by my friend and Schulich School of Law colleague Professor Robert Currie.

“Prof Currie suggests that the critical issue is measuring the reasonably reduced expectation of privacy at the border and the extent of permissible state intrusion into it. In his view, this is best achieved through the established test in R v Collins, [1987] 1 SCR 265, 308. Was the search authorized by law? Is the law itself reasonable? Is the search carried out in a reasonable manner?

When assessing whether the law itself is reasonable, Prof Currie proposes a standard of reasonable suspicion because it is tailor-made to the border context. It must amount to more than a generalized suspicion and be based on objectively reasonable facts within the totality of the circumstances: 311. On the reasonableness of the search, he advocates for an inquiry into whether the search was limited in scope and duration.”

The Court in both Canfield and Al Askari noted that not all searches are the same, and there are degrees of intrusion into personal electronic devices. Asking to look at a receipt for imported goods on a phone is very different from just perusing the full device looking for anything at all.

So fast forward to March 2022. The Alberta Court of Appeal said it’s up to Parliament to set the threshold and for the courts to determine whether it is compliant with the Charter. So Parliament is proposing a threshold of “reasonable general concern” to search documents on a personal digital device. This is setting things up for years of further litigation.

The creation of a “reasonable general concern” standard is not only novel and undefined in the bill, it is inconsistent with other legislation governing border searches. It also does not impose any obligation that the type of search carried out be appropriate to what is “of general concern”, or set any limits on what can be searched on the device once the “reasonable general concern” (whatever that means) threshold is met.

If you look at the case of Fearon, which addressed device searches incident to arrest, the court imposed a bunch of conditions and limits in order to take account of the nature of device searches. Importantly, the extent of the permitted search has to be appropriate to what they legitimately have an interest in. The court said:

“In practice, this will mean that, generally, even when a cell phone search is permitted because it is truly incidental to the arrest, only recently sent or drafted emails, texts, photos and the call log may be examined as in most cases only those sorts of items will have the necessary link to the purposes for which prompt examination of the device is permitted. But these are not rules, and other searches may in some circumstances be justified. The test is whether the nature and extent of the search are tailored to the purpose for which the search may lawfully be conducted. To paraphrase Caslake, the police must be able to explain, within the permitted purposes, what they searched and why”

In the border context, if officers are checking whether someone arriving on a tourist visa actually has a job waiting for them, they shouldn’t go looking for evidence of that in the traveller’s camera roll. They might scan the subject lines of emails, but not go prowling through all the mail in the inbox.

Fearon also requires police to carefully document their searches, the rationale, what they looked at and why. There is no such requirement in Bill S-7.

Given years of growing jurisprudence confirming that personal electronic devices contain inherently private information, and the tendency of the courts to impose limits on searches of such devices, the creation of this lower threshold is unreasonable, inconsistent with other search standards, and can be anticipated to run afoul of the Charter.

I think that after Canfield and Al Askari, government lawyers and policy makers huddled and tried to invent a standard that could plausibly be called a threshold but was miles below reasonable suspicion. And this is what they came up with. You’ll note that they ignored all the really smart and sensible things that Professor Currie proposed.

What is also very notable is that the government ignored the recommendations made by the House of Commons Standing Committee on Access to Information, Privacy and Ethics in 2017 after it had carried out an extensive study and consultation on the issue of privacy at borders and airports. (I testified at those hearings on behalf of the Canadian Bar Association.) It recommended that “reasonable grounds to suspect” be the threshold.

The threshold is so low that it’s hardly a threshold at all. It’s a license for the CBSA to continue their practices of routinely searching electronic devices, and will continue the legal challenges. I just really wish the legislators would listen to the experts and the courts.

Monday, May 16, 2022

Video: Law enforcement requests for customer information - Come Back With A Warrant

Canadian businesses are routinely asked by police agencies to provide customer information in order to further their investigations or intelligence gathering. The police generally do not care whether the business can legally disclose the information and, in my experience, the police are generally ignorant of privacy laws that restrict the ability of Canadian businesses to cooperate with law enforcement investigations.

For some time, there was some degree of uncertainty about the extent to which Canadian businesses could voluntarily provide information to the police upon request, but this uncertainty has been completely resolved so that it is clear that if the police come knocking, Canadian businesses must respond with “come back with a warrant”.

The uncertainty that used to exist is rooted in section 7 of the Personal Information Protection and Electronic Documents Act, also known as PIPEDA. Section 7 is the part of the law that allows businesses to collect, use or disclose personal information without the consent of individuals. Not surprisingly, there is a provision that dictates whether an organization can or cannot give the police customer information if the police come knocking.

Section 7(3)(c.1) allows a business to disclose personal information to a police agency upon request if they have indicated that the information is necessary for a range of purposes and have identified their lawful authority to obtain the information. There's another provision in the act that deals with what happens when the police show up with a warrant or a production order.

It is clear that in those circumstances, personal information can be disclosed. If it is a valid Canadian Court order, it is likely that not providing the information could subject the business to prosecution.

There's also a provision in the Canadian Criminal Code that makes it clear that the police can ask for anything from a person who is not prohibited by law from disclosing it, which further fed this uncertainty.

So for some time in Canada, the police believed that businesses could disclose information without a warrant as long as it was associated with a lawful investigation. Police believed that the fact that they were investigating a crime was all the “lawful authority” they needed.

Where this would come up most often would be if police had identified illegal online conduct and had the IP address of a suspect. They would seek from an internet service provider the customer name and address that was associated with that IP address at that time. Without that information, they had no suspect to investigate and ISPs hold the keys connecting that IP address with a suspect.

The Canadian Association of Internet Providers actually concluded a form of protocol with Canadian police that would facilitate the provision of this information. Surprisingly, the CAIP was of the view that this was not private information. What would be required was a written request from a police agency indicating that the information was relevant to an investigation of certain categories of online offences, principally related to child exploitation. These letters cited that they were issued under the “authority of PIPEDA”, which is simply absurd.

It is my understanding that the internet providers were generally comfortable with providing this information in connection with such important investigations. For other categories of offences, they would require a production order.

It is also my understanding that some internet providers fine-tuned their terms of service and privacy policies to permit these sorts of disclosures, so that the businesses would have additional cover by saying in fact the customer had consented to disclosure under these circumstances.

One thing to bear in mind, of course, is that this provision in PIPEDA is permissive, meaning that if this interpretation was correct, businesses could voluntarily provide this information, but it did not compel them to do so. They could always insist on a court order, but very often did not.

Some courts found this acceptable and held that evidence provided voluntarily under this scheme was admissible, while other courts found it to be a violation of the suspect’s section 8 rights under the Charter.

Then along came a case called R. v Spencer. In this case, a police officer in Saskatoon, Saskatchewan detected someone sharing a folder containing child pornography using a service called LimeWire. The officer was able to determine the IP address of the internet connection being used by that computer and was able to determine that the IP address was allocated to a customer of Shaw Communications. So the cop sent a written “law enforcement request” to Shaw and Shaw handed over the customer information associated with the account. The cops did not try to obtain a production order first.

The IP address was actually in the name of the accused’s sister.

The case finally found its way up to the Supreme Court of Canada, where the court had to determine whether the request was a “search” under the Charter. It was. And then the question was whether the search was authorized by law. The Court said it was not.

The police and prosecution, of course, argued that this is just “phone book information” that doesn’t implicate any serious privacy issues. The court disagreed, quoting from a Saskatchewan Court of Appeal decision from 2011 called Trapp:

“To label information of this kind as mere “subscriber information” or “customer information”, or nothing but “name, address, and telephone number information”, tends to obscure its true nature. I say this because these characterizations gloss over the significance of an IP address and what such an address, once identified with a particular individual, is capable of revealing about that individual, including the individual’s online activity in the home.”

Justice Cromwell writing for the court concluded that “Here, the subject matter of the search is the identity of a subscriber whose Internet connection is linked to particular, monitored Internet activity.”

The court said that constitutionally protected privacy includes anonymity. Justice Cromwell wrote, and then quoted from the Spencer decision of the Court of Appeal:

[51] I conclude therefore that the police request to Shaw for subscriber information corresponding to specifically observed, anonymous Internet activity engages a high level of informational privacy. I agree with Caldwell J.A.’s conclusion on this point:
. . . a reasonable and informed person concerned about the protection of privacy would expect one’s activities on one’s own computer used in one’s own home would be private. . . . In my judgment, it matters not that the personal attributes of the Disclosed Information pertained to Mr. Spencer’s sister because Mr. Spencer was personally and directly exposed to the consequences of the police conduct in this case. As such, the police conduct prima facie engaged a personal privacy right of Mr. Spencer and, in this respect, his interest in the privacy of the Disclosed Information was direct and personal.

The court then was tasked with considering what “lawful authority” means in subsection 7(3)(c.1).

The court concluded that the police, carrying out this investigation, did not have the lawful authority that would be required to trigger and permit the disclosure under the subsection. While the police can always ask for the information, they did not have the lawful authority to obtain it. If they had sought a production order, their right to obtain the information and Shaw's obligation to disclose it would have been clear.

What the court did not do was settle what exactly lawful authority means. It does not mean a simple police investigation, even for a serious crime, but what it might include remains unknown.

What is clear, however, is the end result that this subsection of PIPEDA simply does not permit organizations to hand over customer information simply because the police agency is conducting a lawful investigation. If they want the information, they have to come back with a court order.

Just a quick note about other forms of legal process. While production orders are the most common tool used by law enforcement agencies to seek and obtain customer information, a very large number of administrative bodies are able to use different forms of orders or demands. For example, the CRTC spam investigators can use something called a notice to produce under the anti-spam legislation, which is not reviewed or approved by a judge in advance.

It is not uncommon for businesses to receive subpoenas, and they need to tread very carefully and read the details of the subpoena. In order to comply with privacy legislation, the organization can only do what it is directed to do in the subpoena, no more. In the majority of cases, the subpoena will direct the company to send somebody to court with particular records. Just sending those records to the litigants or the person issuing the subpoena is not lawful.

Before I wrap up, it should be noted that the rules are different if it is the business itself reporting a crime. Paragraph (c.1) applies where the police come knocking looking for information. Paragraph (d) is the provision that applies where the organization itself takes the initiative to disclose information to the police or a government institution. It specifically says that an organization may disclose personal information without consent where the disclosure is made on the initiative of the organization to a government institution and the organization has reasonable grounds to believe that the information relates to a contravention of the laws of Canada, a province or a foreign jurisdiction that has been, is being or is about to be committed.

This paragraph gives much more discretion to the organization, but it is still limited to circumstances where they have reasonable grounds to believe such a contravention has occurred, and they can only disclose the minimum amount of personal information that's reasonably necessary for these purposes.

A scenario that comes up relatively often would be if a store is robbed, and there is surveillance video of the robbery taking place including the suspect. The store can provide that video to the police on their own initiative. Contrast that to another common scenario, where the police are investigating a crime and evidence may have been captured on surveillance video. If it is the police asking for it, and not the organization reporting it on their own initiative, the police have to come back with a court order.

At the end of the day, the safest and smartest thing that a business can do when asked for any customer personal information is to simply say come back with a warrant. Even if you think you can lawfully disclose the information, it simply makes sense that it be left to an impartial decision maker such as a judge or a Justice of the Peace to do the balancing between the public interest in the police having access to the information and the individual privacy interest at play.

Thursday, May 12, 2022

Presentation: Privacy civil claims

I had the honour this week of presenting to a continuing education event for judges on privacy civil claims, past, present and future. I was joined by Antoine Aylwin and Erika Chamberlain.

To make it a little more daunting, some of the judges who wrote the decisions I referred to were in the room...

It may be of interest to the privacy nerds who follow my blog, so here's the presentation:

Thursday, May 05, 2022

Presentation: Lawyers and InfoSec professionals - playing nicely with lawyers to provide more value in your engagements

I was very kindly invited back to give a keynote at the Canadian Cyber Summit for the High Technology Crime Investigation Association. I spoke about the role of lawyers in incident response and how greater understanding between lawyers and the technical folks of their respective roles can add value to the overall engagement. I also discussed the importance of legal advice privilege in incident response. Here is a copy of the presentation I gave, in case it's of interest ...

Monday, April 18, 2022

Canada's Anti-Spam Law and the installation of software

Canada’s anti-spam law is about much more than just spam. It also regulates the installation of software. Like the rest of the law, it is complicated and convoluted and has significant penalties. If you’re a software developer or an IT admin, you definitely need to know about this.

So we’re talking about Canada’s anti-spam law. The official title is much longer, and it also includes two sets of regulations to make it more complicated.

Despite the snappy title that most of us use: Canada’s Anti-Spam Law, it is about more than just spam. It has often-overlooked provisions that make it illegal to install software on another person’s computer – or cause it to be installed – without their consent.

It was clearly put into the law to go after bad stuff like malware, viruses, rootkits, trojans, malware bundled with legitimate software and botnets. But it is not just limited to malevolent software. It potentially affects a huge range of software.

So here is the general rule from Section 8 of the Act:

8. (1) A person must not, in the course of a commercial activity, install or cause to be installed a computer program on any other person’s computer system or, having so installed or caused to be installed a computer program, cause an electronic message to be sent from that computer system, unless

(a) the person has obtained the express consent of the owner or an authorized user of the computer system and complies with subsection 11(5); or

(b) the person is acting in accordance with a court order.

Let’s break that down. The first part is that it has to be part of a commercial activity. I’m not sure they meant to let people off the hook if they’re doing it for fun and giggles. The “commercial activity” part is likely there so that the government can say this is justified under the federal “general trade and commerce power”.

They could have used the federal criminal law jurisdiction, but then they’d be subject to the full due process and fairness requirements of the Canadian Charter of Rights and Freedoms, and the government did not want to do that. They’d rather it be regulatory and subject to much lower scrutiny.

Then it says you can’t install, or cause to be installed, a computer program on another’s computer without the express consent of the owner or an authorized user of the computer. (The definition of “computer system” would include desktops, laptops, smartphones, routers and appliances.)

The express consent has to be obtained in the manner set out in the Act, and I’ll discuss that later.

It also additionally prohibits installing a computer program on another’s computer and then causing it to send electronic messages. This makes creation of botnets for sending spam extra bad.

The definition of the term “computer program” is taken from the Criminal Code of Canada:

“computer program” means computer data representing instructions or statements that, when executed in a computer system, causes the computer system to perform a function; (programme d’ordinateur)

In addition to defined terms, there are some key ideas and terms in the Act that are not well-understood.

It talks about “installing” a computer program, but what that is has not been defined in the legislation and the CRTC hasn’t provided any helpful guidance.

I wouldn’t think that running malware once on someone’s system for a malevolent purpose would be captured in the definition of “install”, though it likely is the criminal offence of mischief in relation to data.

What about downloading source code that is not yet compiled? Or then compiling it?

It is certainly possible to load up software and have it ready to execute without being conventionally “installed”. Does it have to be permanent? Or show up in your installed applications directory?

I don’t know.

There’s also the question of who is an owner or an authorized user of a computer system.

If it is leased, the leasing company likely owns the computer and we’ve certainly seen reports and investigations of spyware and intrusive software installed on rented and leased laptops.

My internet service provider owns my cable modem, so it’s apparently ok if they install malware on it.

For authorized users, it means any person who is authorized by the owner to use the computer system. Interestingly, it is not limited by the scope of the authorization. It seems to be binary. Either you are authorized or you are not.

There are some scenarios to think about when considering owners and authorized users.

For example, if a company pre-installs software on a device at the factory or before ownership transfers to the end customer, that company is the owner of the device and can install whatever they like on it.

Many companies issue devices like laptops and smartphones to employees. Those employers own the devices and can install any software on them.

But increasingly, employees are using devices that they own for work-related purposes, and employers may have a legitimate interest in installing mobile device management and security software on those devices. Unless there’s a clear agreement that the employer gets to do so, they may find themselves to be offside the law.

So, in short, it is prohibited to do any of these things without the express consent of the owner or authorized user:

  • (a) install a computer program of any kind;
  • (b) cause a computer program of any kind to be installed, such as hiding or bundling additional software in an installer that the owner or authorized user has installed. We sometimes see this when downloading freeware or shareware, and the installer includes other software that the user didn’t ask for;
  • (c) cause such a program that has been installed to send electronic messages after installation.

Of course, someone who is the owner or authorized user of the particular device can put whatever software they want on the device. This only covers installation by people who are not the owner or the authorized user of the device.

There are some exceptions that people should be aware of.

It is common to install software and to have it automatically update. This is ok if the user consents to the auto updates. But that probably doesn't apply if the update results in software that does very different things compared to when it was first installed.

There are some cases where consent is deemed or implied.

CASL deems users to consent to the installation of the following list of computer programs if the user’s conduct shows it is reasonable to believe they consented to it. It is a weird list.

At the top of the list are “cookies”. To start with, anyone who knows what cookies are knows they are not computer programs. They are text files, and including them on this list tells me that the people who wrote this law may not know as much as you may hope about this subject.
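To illustrate the point that a cookie is data rather than a program: it is just a name/value string sent in an HTTP header, which software parses but never executes. Here is a minimal sketch using Python's standard library, with a hypothetical cookie value made up for the example:

```python
# A cookie is text data set via an HTTP "Set-Cookie" header, not
# executable code. Python's standard library parses it as such.
from http.cookies import SimpleCookie

# A hypothetical Set-Cookie header value, as a browser might receive it
raw = "session_id=abc123; Path=/; Max-Age=3600"

cookie = SimpleCookie()
cookie.load(raw)

# The result is structured data: a name, a value, and attributes.
# Nothing here is "executed" by the computer system.
print(cookie["session_id"].value)    # abc123
print(cookie["session_id"]["path"])  # /
```

Whatever privacy concerns cookies raise, they raise them as stored data, which is presumably why their inclusion in a list of "computer programs" reads so oddly.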

It then includes HTML code. HTML is hypertext markup language. I suppose it is data that represents instructions to a computer on how to display text and other elements. I guess the next question is whether this includes the variations of HTML like XHTML? I don’t know. But if HTML is a computer program, then so are fonts and Unicode character codes.
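The same point can be made concretely for HTML: it is markup that software interprets as data describing structure, not instructions it executes. A minimal sketch using Python's standard-library parser, with a made-up snippet:

```python
# HTML is declarative markup: a parser walks it and extracts structure,
# it does not "run" it the way a computer system runs a program.
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Collects the tags and text encountered while parsing."""
    def __init__(self):
        super().__init__()
        self.tags = []
        self.text = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())

p = TagCollector()
p.feed("<p>Hello, <b>border</b> traveller</p>")
print(p.tags)  # ['p', 'b']
print(p.text)  # ['Hello,', 'border', 'traveller']
```

On that view, calling HTML a "computer program" proves too much, since the same description would capture any structured data a computer interprets.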

Next it refers to “Java Scripts”. Yup. That’s what it says. We are told by Industry Canada that this is meant to refer to JavaScript, which is different from a Java script. Not only could they have avoided such a sloppy mistake, but they could also have been clear about whether they were referring to JavaScript run in a browser (with its attendant sandbox) or something else.
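The distinction matters: HTML is parsed as data, while JavaScript is executed as a program. A short sketch using Python’s standard-library parser (the markup here is invented) shows that parsing HTML merely produces structure; even a script tag is inert markup until a browser hands its contents to a JavaScript engine.

```python
from html.parser import HTMLParser

# Parsing HTML produces structure; it doesn't "run" anything.
class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

doc = "<p>Hello <b>world</b></p><script>alert('hi')</script>"
parser = TagCollector()
parser.feed(doc)
print(parser.tags)  # ['p', 'b', 'script']

# The <script> element is the interesting edge case: the markup itself is
# inert data, but it tells a browser to execute the JavaScript inside it.
```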

Next on the list are “operating systems”, which seems very perverse to include. The operating system is the mostly invisible layer between the computer hardware and the software that runs on top of it. Changes to the operating system can have a huge impact on the security and privacy of the user, and much of it happens below the surface. And there is no clarity about whether an “operating system” on this list includes the software that often comes bundled with it. When I replace the operating system on my Windows PC, I get a new version of a whole bunch of the standard software that comes with it, like the file explorer and Notepad. It would make sense for deemed consent to cover a user who upgrades from one version of macOS or Windows to another. But I could build an open source operating system distro that’s full of appalling stuff, in addition to the operating system itself.

Finally, it includes any program executable only through the use of another computer program whose installation the user has already consented to. Does this include macros embedded in Word documents? I’m not sure.

It makes sense to have deemed consent situations or implied consent, but we could have used a LOT more clarity.

There are some exceptions to the general rule of getting consent, two of which are exclusively reserved to telecommunications service providers, and a final one that relates to programs that exclusively correct failures in a computer system or a computer program.

This is understandable, but this would mean that a telco can install software on my computer without my knowledge or consent if it’s to upgrade their network.

So how do you get express consent? It’s like the cumbersome express consent regime for commercial electronic messages, but with more requirements.

When seeking express consent, the installer has to identify:

  • the reason consent is being sought;
  • their full business name;
  • their mailing address, plus one of a telephone number, email address, or web address;
  • if consent is sought on behalf of another person, a statement indicating who is seeking consent and on whose behalf consent is being sought;
  • a statement that the user may withdraw consent for the computer program’s installation at any time; and
  • a clear and simple description, in general terms, of the computer program’s function and purposes.
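As a sketch of what that checklist might look like in practice (the field names here are my own invention, not anything prescribed by CASL or the CRTC), an installer could validate its consent prompt against the required disclosures before showing it:

```python
# Hypothetical field names; CASL prescribes the content, not any format.
REQUIRED_FIELDS = {
    "reason",             # why consent is being sought
    "business_name",      # installer's full business name
    "mailing_address",    # plus at least one contact method below
    "withdrawal_notice",  # statement that consent may be withdrawn at any time
    "description",        # clear, general description of function and purposes
}
CONTACT_FIELDS = {"telephone", "email", "web_address"}

def consent_request_is_complete(request: dict) -> bool:
    """Check that a consent-request record covers every required disclosure."""
    has_required = REQUIRED_FIELDS <= request.keys()
    has_contact = any(f in request for f in CONTACT_FIELDS)
    return has_required and has_contact

request = {
    "reason": "Install the ExampleApp desktop client",
    "business_name": "Example Software Inc.",
    "mailing_address": "123 Main St, Halifax NS",
    "email": "privacy@example.com",
    "withdrawal_notice": "You may withdraw consent at any time.",
    "description": "Syncs your files between devices.",
}
print(consent_request_is_complete(request))  # True
```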

But if an installer “knows and intends” that a computer program will cause a computer system to operate in a way its owner doesn’t reasonably expect, the installer must provide a higher level of disclosure and acknowledgement to get the user’s express consent.

This specifically includes the following functions, all of which largely make sense:

  • collecting personal information stored on the computer system;
  • interfering with the user’s control of the computer system;
  • changing or interfering with settings, preferences, or commands already installed or stored on the computer system without the user’s knowledge;
  • changing or interfering with data stored on the computer system in a way that obstructs, interrupts or interferes with lawful access to or use of that data by the user;
  • causing the computer system to communicate with another computer system, or other device, without the user’s authorization;
  • installing a computer program that may be activated by a third party without the user’s knowledge; and
  • performing any other function CASL specifies (there are none as yet).

Like the unsubscribe mechanism for commercial electronic messages, anyone who installs software that meets this higher threshold has to include an electronic address, valid for at least one year, so the user can ask the installer to remove or disable the program.

A user can make this request if she believes the installer didn’t accurately describe the “function, purpose, or impact” of the computer program when the installer requested consent to install it. If the installer gets a removal request within one year of installation, and consent was based on an inaccurate description of the program’s material elements, then the installer must assist the user in removing or disabling the program as soon as feasible – and at no cost to the user.

So how is this enforced? CASL is largely overseen by the enforcement team at the Canadian Radio-television and Telecommunications Commission.

Overall, I see them at least making more noise about their enforcement activities in the software arena than the spam arena.

In doing this work, the CRTC has some pretty gnarly enforcement tools.

First of all, they can issue “notices to produce” which are essentially similar to Criminal Code production orders except they do not require judicial authorization. These can require the recipient of the order to hand over just about any records or information, and unlike Criminal Code production orders, they can be issued without any suspicion of unlawful conduct. They can be issued just to check compliance. I should do a whole episode on these things, since they really are something else in the whole panoply of law enforcement tools.

They can also seek and obtain search warrants, which at least are overseen and have to be approved by a judge.

Before CASL, I imagine the CRTC was entirely populated by guys in suits and now they get to put on raid jackets, tactical boots and a badge.

I mentioned before that there can be some significant penalties for infractions of CASL’s software rules.

It needs to be noted that contraventions involve “administrative monetary penalties” - not a “punishment” but intended to ensure compliance. These are not fines per se and are not criminal penalties. That’s because if they were criminal or truly quasi-criminal, they’d have to follow the Charter’s much stricter standards for criminal offences.

The maximums for these administrative monetary penalties are steep: up to $1M for an individual and $10M for a corporation.

The legislation sets out a bunch of factors to be considered in determining the amount of penalty, including the ability of the offender to pay.

There is a mechanism similar to a US consent decree where the offender can give an “undertaking” that halts enforcement, but likely imposes a whole bunch of conditions that will last for a while.

Officers and directors of companies need to know they may be personally liable for penalties and of course the CRTC can name and shame violators.

There is a due diligence defence, but this is a pretty high bar to reach.

We have seen at least three reported enforcement actions under the software provisions of CASL.

The first, in 2018, involved two companies called Datablocks and Sunlight Media. The CRTC found they were providing a service that let others inject exploits onto users’ computers through online ads. They were hit with penalties amounting to $100K and $150K, respectively.

The second was in 2019 and involved a company called Orcus Technologies, which was said to be marketing a remote access trojan. They marketed it as a legitimate tool, but the CRTC concluded this was to give a veneer of respectability to a shady undertaking. They were hit with a penalty of $115K.

The most recent one, in 2020, involved a company called Notesolution Inc. doing business as OneClass. They were involved in a shady installation of a Chrome extension that collected personal information on users’ systems without their knowledge or expectation. They entered into an undertaking, and agreed to pay $100K.

I hope this has been of interest. The discussion was obviously at a pretty high level, and there is a lot that is unknown about how some of the key terms and concepts are being interpreted by the regulator.

If you have any questions or comments, please feel free to leave them below. I read them all and try to reply to them all as well. If your company needs help in this area, please reach out. My contact info is in the notes, as well.

Wednesday, April 13, 2022

Video: Privacy and start-ups ... what founders need to know

In my legal practice, I have seen businesses fail because they did not take privacy into account. I’ve seen customers walk away from deals because of privacy issues and I’ve seen acquisitions fail due diligence because of privacy.

Today, I’m going to be talking about privacy by design for start-ups, to help embed privacy into growing and high-growth businesses.

Episode 2 of Season 4 of HBO’s “Silicon Valley” provides a good case study on the possible consequences of not getting privacy compliance right.

Privacy means different things to different people. And people have wildly variable feelings about privacy. As a founder, you need to understand that and take that into account.

In some ways, privacy is about being left alone, not observed and surveilled.

It is about giving people meaningful choices and control. They need to understand what is happening with their personal information and they should have control over it. What they share and how it is used. They should get to choose whether something is widely disseminated or not.

Privacy is also about regulatory compliance. As a founder you need to make sure your company complies with the regulatory obligations imposed on it. If you are in the business to business space, you will need to understand the regulatory obligations imposed on your customers. I can guarantee you that your customers will look very, very closely at whether your product affects their compliance with their legal obligations. And they’ll walk away if there’s any realistic chance that using your product puts their compliance at risk.

Privacy is about trust in a number of ways. If you are in the business to consumer space, your end users will only embrace your product if they trust it: if they know what the product is doing with their information and trust you to keep that consistent. If you are in the business to business space, your customers will only use your product or service if they trust you. If you’re a start-up, you don’t yet have a track record or wide adoption to speak on your behalf. A deal with a start-up is always a leap of faith, and trust has to be built. There are a bunch of indicators of trustworthiness. I have advised clients to walk away from deals where the documentation and the responses to questions didn’t suggest privacy maturity. If you have just cut and pasted your privacy policy from someone else, we can tell.

Privacy is not just security, but security is critical to privacy. Diligent security is table stakes. And a lack of security is the highest risk area. We seldom see class-action lawsuits for getting the wrong kind of consent, but most privacy/security breaches are followed by class-action lawsuits. Your customers will expect you to safeguard their data with the same degree of diligence as they would do it themselves. In the b2b space, they should be able to expect you to do it better.

You need to make sure there are no surprises. Set expectations and meet them.

In my 20+ years working with companies on privacy, one thing is clear: people don’t like it when something is “creepy”. Usually that’s a useless word, since the creepy line is drawn very differently by different people. But what I’ve learned is that where the creepy line sits depends on expectations. Things are always creepy or off-putting when something happens with your personal information that you did not expect.

As a founder, you really have to realize that regardless of whether or not you care about privacy yourself, your end users do. Don’t believe the hype: privacy is far from dead.

If you are in the business to business arena, your customers are going to care very deeply about the privacy and security of the information that they entrust you with. If you have a competitor with greater privacy diligence or a track record, you have important ground to make up.

And, of course, for founders getting investment is critical to the success of their business. The investors during your friends and family round or even seed funding might not be particularly sophisticated when it comes to privacy. But mark my words, sophisticated funds carry out due diligence and know that privacy failures can often equal business failures. I have seen investments go completely sideways because of privacy liabilities that were hidden in the business. And when it comes time to make an exit via acquisition, every single due diligence questionnaire has an entire section if not a chapter on privacy and security matters. The weeks leading up to a transaction are not the time to be slapping Band-Aids on privacy problems that were built into the business or the product from the very first days. As a founder, you want to make sure that potential privacy issues are, at least, identified and managed long before that point.

The borderless world

I once worked with a founder and CEO of a company who often said that if you are a startup in Canada and your ambition is the Canadian market, you have set your sights too low and you are likely to fail. The world is global, and digital is more global than any other sector. You might launch your minimum viable product or experiment with product-market fit in the local marketplace, but your prospective customers are around the world. This also means that privacy laws around the world are going to affect your business.

If your product or services are directed at consumers, you will have to think about being exposed to and complying with the privacy laws of every single jurisdiction where your end users reside. That is just the nature of the beast.

If you're selling to other businesses, each of those businesses is going to be subject to local privacy laws that may differ significantly from what you're used to. Once you get into particular niches, such as processing personal health information or educational technology, the complexity and the stakes rise significantly.

You definitely want to consult with somebody who is familiar with the alphabet soup of PIPEDA, PIPA, CASL, PHIA, GDPR, COPPA, CCPA, CPRA, HIPAA.

You're going to want to talk carefully and deeply with your customers to find out what their regulatory requirements are, which they need to push down onto their suppliers.

The consequences of getting it wrong can be significant. You can end up with a useless product or service, one that cannot be sold or that cannot be used by your target customers. I’ve seen that happen.

A privacy incident can cause significant reputational harm, which can be disastrous as a newcomer in a marketplace trying to attract customers.

Fixing issues after the fact is often very expensive. Some privacy and security requirements may mandate a particular way to architect your back-end systems. Some rules may require localization for certain customers, and if you did not anticipate that out of the gate, implementing those requirements can be time and resource intensive.

Of course, there's always the possibility of regulatory action resulting in fines and penalties. Few things stand out on a due diligence checklist like having to disclose an ongoing regulatory investigation or a hit to your balance sheet caused by penalties.

All of these, individually or taken together, can be a significant impediment to closing an investment deal or a financing, and can be completely fatal to a possible exit by acquisition.

So what's the way to manage this? It's something called privacy by design, a methodology originally created in Canada by Dr. Ann Cavoukian, the former Information and Privacy Commissioner of Ontario.

Here's what it requires at a relatively high level.

First of all, you need to be proactive about privacy and not reactive. You want to think deeply about privacy, anticipate issues and address them up front rather than reacting to issues or problems as they come up.

Second, you need to make privacy the default. You need to think about privacy holistically, focusing particularly on consumers and user choice, and setting your defaults to be privacy protective so that end users get to choose whether or not they deviate from those privacy protective defaults.

Third, you need to embed privacy into your design and coding process. Privacy should be a topic at every project management meeting. I'll talk about the methodology for that in a couple minutes.

Fourth, you need to think about privacy as a positive-sum game rather than a zero-sum game. Too often, people think about privacy versus efficiency, privacy versus innovation, or privacy versus security. You need to be creative and think about privacy as a driver of efficiency, innovation and security.

Fifth, you need to build in end-to-end security. As I mentioned before, security may in fact be the highest-risk area, given the possibility of liability and penalties. You need to think about protecting end users from themselves, from their carelessness, and from all possible adversaries.

Sixth, you need to build in visibility and transparency. Just about every single privacy law out there requires that an organization be open and transparent about its practices. In my experience, an organization that is proactive in talking about privacy and security, and how it addresses them, has a significant “leg up” over anybody who is not.

Seventh, and finally, you need to always be aware that end users are human beings who have a strong interest in their own privacy. They might make individual choices that differ from your own privacy comfort levels, but that is human. Always look at your product and all of your choices through the eyes of your human end users. Think about how you will explain your product and services to an end user, and whether the choices that you have made in its design can be justified to them.

A key tool to implement this is to document your privacy process and build it iteratively into your product development process. For every single product or feature of a product, you need to document what data from or about users is collected. What data is generated? What inferences are made? You will want to get very detailed, knowing every single data field that is collected or generated in connection with your product.

Next, you need to carefully document how each data element is used. Why do you need that data, how do you propose to use it, and is it necessary for that product or feature? If it is not “must have” but merely “good to have”, how do you build that choice into your product?

You need to ask “is this data ever externally exposed”? Does it go to a third party to be processed on your behalf, is it ever publicly surfaced? Are there any ways that the data might be exposed to a bad guy or adversary?

In most places, privacy regulations require that you give individual users notice about the purposes for which personal information is collected, used or disclosed. You need to give users control over this. How are the obligations for notice and control built into your product from day one? When a user clicks a button, is it obvious to them what happens next?

You will then need to ask “where is the data”? Is it stored locally on a device or server managed by the user or the customer? Is it on servers that you control? Is it a combination of the two? Is the data safe, wherever it resides? To some people, local on device storage and processing is seen as being more privacy protective than storage with the service provider. But in some cases, those endpoints are less secure than a data center environment which may have different risks.

Finally, think about life cycle management for the data. How long is it retained? How long do you or the end user actually need that information for? If it's no longer needed for the purpose identified to the end user, it should be securely deleted. You'll also want to think about giving the end user control over deleting their information. In some jurisdictions, this is a legal requirement.
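One lightweight way to operationalize the questions above is a per-field data inventory that gets reviewed whenever a feature changes. This sketch (the fields and example entries are illustrative, not a compliance standard) flags anything optional or externally shared for a documented justification:

```python
from dataclasses import dataclass

@dataclass
class DataElement:
    name: str         # the field as it appears in your schema
    purpose: str      # why you collect or generate it
    necessary: bool   # "must have" for the feature, or merely "good to have"?
    shared_with: list # processors or third parties that ever see it
    storage: str      # e.g. "on-device", "our servers", "both"
    retention: str    # when it gets securely deleted

inventory = [
    DataElement("email", "account login and recovery", True,
                [], "our servers", "life of the account"),
    DataElement("precise_location", "nearby-store suggestions", False,
                ["ads-partner (hypothetical)"], "our servers", "30 days"),
]

# Anything optional or shared externally deserves a consent choice
# and a documented justification before the feature ships.
for e in inventory:
    if not e.necessary or e.shared_with:
        print(f"review before launch: {e.name}")
```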

Everybody on your team needs to understand privacy as a concept and how privacy relates to their work function. Not everybody will become a subject matter expert, but a pervasive level of awareness is critical. Making sure that you do have subject matter expertise properly deployed in your company is important.

You also have to understand that it is an iterative process. Modern development environments can sometimes be likened to building or upgrading an aircraft while it is in flight. You need to be thinking of flight worthiness at every stage.

When a product or service is initially designed, you need to go through that privacy design process to identify and mitigate all of the privacy issues. No product should be launched, even in beta, until those issues have been identified and addressed. And then any add-ons or enhancements to that product or service need to go through the exact same scrutiny to make sure that no new issues are introduced without having been carefully thought through and managed.

I have seen too many interesting and innovative product ideas fail because privacy and compliance simply was not on the founder’s radar until it was too late. I have seen financing deals derailed and acquisitions tanked for similar reasons.

Understandably, founders are often most focused on product-market fit and a minimum viable product to launch. But you need to realize that a product that cannot be used by your customers, or that carries significant regulatory and compliance risk, is not a viable product.

I hope this has been of interest. The discussion was obviously at a pretty high level, but my colleagues and I are always happy to talk with startup founders to help assess the impact of privacy and compliance on their businesses.

If you have any questions or comments, please feel free to leave them below. I read them all and try to reply to them all as well. If your company needs help in this area, please reach out.

And, of course, feel free to share this with anybody in the startup community for whom it may be useful.

Sunday, March 27, 2022

Video: Canada - US announce beginning of CLOUD Act negotiations

Today, I’m going to be talking about the newly announced “CLOUD Act” agreement negotiation process between Canada and the US to facilitate cross-border law enforcement investigations.

This is just beginning, so I’ll necessarily be doing some speculating.

This week, the United States Department of Justice announced that the governments of the US and Canada are currently negotiating an agreement under the CLOUD Act to facilitate cross-border law enforcement investigations.

This is a big deal. This will mean that Canadian police can use Canadian court orders to get evidence in the US, and American search warrants can be served on Canadians.

It is intended to be a solution to an issue that affects law enforcement in both countries who want evidence that is on the other side of the border.

Every country has absolute sovereignty over what happens in its territory.

No “sovereign” can do anything in another sovereign’s territory without permission or invitation.

Canadian enforcement powers end – abruptly – at the border. A criminal court can’t order anyone outside of its jurisdiction to do anything, including the production of records.

It’s reciprocal: foreign states can’t extend their law enforcement into Canada without permission or invitation.

As it currently stands, a US search warrant has no effect in Canada. A Canadian production order has no effect in the US. Canadian law ends at the border, as does American law.

The Criminal Code does not authorize the issuance of a production order directed at a person or entity outside of Canada.

(It is important to remember that there’s a big difference between civil lawsuits and criminal investigations.)

Notice I said “without permission or invitation”. To provide that permission, countries have often entered into mutual legal assistance treaties with one another: if you’re investigating something in your country and some of the evidence is in ours, tell us about it and maybe we’ll assist you in getting it. I’ll discuss this a bit more later.

The reality is that most reputable US service providers will provide information to Canadian law enforcement under a Canadian production order, as long as they can do so without risking a violation of US law.

For example, in the first half of 2021, Twitter reports that it received 56 information requests about 63 accounts and it complied with 45% of them.

During the same time, Meta/Facebook reports it received 1,110 “legal process requests” from Canada and complied with 82% of the requests it received.

As I said, a Canadian production order doesn’t really have any effect in the US. But they generally do follow them, voluntarily, when they can.

Currently, a US privacy law called the Stored Communications Act prevents certain service providers from providing certain categories of data except with a qualifying US warrant. This annoys a lot of Canadian investigators, who have to go through formalities under the Mutual Legal Assistance Treaty between the two countries in order to get a US qualifying warrant.

A CLOUD Act agreement would remove that barrier and permit most US warrants for records and information to have effect in Canada. It is reciprocal, so Canadian law enforcement can get court orders in Canada for records that are in the custody of American service providers.

What is the CLOUD Act?

The CLOUD Act, or “Clarifying Lawful Overseas Use of Data Act”, was enacted in 2018. At the time, it got a lot of attention because it rendered moot a very high profile case in which US law enforcement was looking for data stored by Microsoft in one of their data centres in Ireland. Microsoft sensibly resisted the order, saying that US law did not extend to data that was outside of the US.

The case finally found its way to the Supreme Court of the United States, but before a decision was rendered, the US enacted the CLOUD Act, which made it clear that a US warrant could compel US companies to provide stored data for a customer or subscriber on any server they own and operate, regardless of where it is located. The CLOUD Act also has a mechanism for a provider to challenge the warrant if it believes the request violates the privacy rights of the foreign country in which the data is stored.

What the CLOUD Act also does is create a framework by which the US government can negotiate agreements with other governments for mutual recognition of the other country’s legal processes, subject to limitations set out in the agreement.

Before coming into effect, the bilateral or multilateral agreement needs to be put before the US Congress, and the US Attorney General has to certify that the partner country has robust substantive and procedural protections for privacy and civil liberties.

The US has already negotiated such agreements with the United Kingdom and Australia. Now it’s Canada’s turn.

This will be welcome news to Canadian law enforcement, who regularly seek evidence from US-based technology companies but sometimes find themselves hampered by a number of factors. In fact, Canadian law enforcement lobbying groups like the Canadian Association of Chiefs of Police have been pushing hard to get Canada to negotiate a CLOUD Act agreement with the United States.

Mutual Legal Assistance

There has for some time been a mutual legal assistance treaty between Canada and the United States, which provides a government-to-government pathway for law enforcement in Canada to obtain access to information in the United States. It is a two-way street, which similarly provides American law enforcement with access to Canadian data.

Without an agreement like the MLAT, carrying out searches on foreign territory violates international law and sovereignty.

The mutual legal assistance process has been said to be cumbersome and time-consuming, mainly because all requests from Canadian law enforcement are routed through the Department of Justice Canada in Ottawa, which then sends a request to the United States Department of Justice. Both of these entities review the request, and there is an element of discretion on the part of the receiving government as to whether or not it wishes to process it. Assuming it is OK with the Canadian and US central authorities, a lawyer from the US Department of Justice seeks an order from the United States federal court that is addressed to the service provider, requiring it to provide the data to the US DOJ, which then sends the data to the Canadian DOJ and then to the law enforcement agency.

A key part of this process is the review and approval by the central authorities in each country. They ask “does this fit within the treaty?” “Does it meet the legal thresholds?” “Is it appropriately tailored – not too broad?” “Is it consistent with our laws and values?” “Does it implicate any of our own domestic interests?”

Canadian law enforcement generally would prefer to avoid this, and have tried to do so by seeking production orders in Canadian courts that name US based service providers.

The Canadian Criminal Code does not authorize the service of production orders outside of Canada, mainly because a Canadian court does not have jurisdiction over someone who is not in Canada. Some courts simply will not issue these orders, but more are doing so after a decision from the British Columbia Court of Appeal called Brecknell. For a bunch of reasons, I think that decision was wrongly decided, but for more information you can read my case comment.

In my experience, most US service providers will provide data in response to Canadian Court orders, but they are prohibited under US criminal law from providing the content of any communications except with a qualifying US warrant. That can be obtained through the MLAT process, but a “qualifying US warrant” is not available from a Canadian court.

A few years ago, I was involved in a case on behalf of an American company where a Canadian law enforcement agency sought and obtained a production order that would have required the US company to violate American law. The case ultimately became moot before it went to a hearing, so there's no written decision I can point you to. But it was clear that the attempt to do so was out of frustration with the mutual legal assistance process and the perception of the time it takes. In reality, urgent orders can be turned around quite quickly and the average turnaround time is around 2 months.

The process we have ahead likely looks like this: it will take some time to negotiate the agreement between Canada and the US, since it is not “one size fits all”. Once the agreement is negotiated, it will have to go to the US Congress, a process that takes at least six months. And Canada would have to amend a bunch of laws before the agreement could go into effect.

What to expect

So what would implementing a CLOUD Act agreement look like on the Canadian side of the border? I would only be speculating, because we don't have a final agreement to look at, but a number of laws would have to be amended.

For example, all of our existing privacy laws in Canada prohibit the disclosure of personal information or personal health information except to comply with a warrant, production order or court order, or where required by law. Currently, that would be read as “where required by Canadian federal or provincial law” or under a Canadian court order.

Complying with a US order would not fit within that. Those barriers would need to be taken down, or a new law would need to be passed so that these American orders could be complied with in Canada.

I don't think making US orders mandatory in Canada is how it would likely play out. On the American side of the border, the CLOUD Act does not make foreign orders mandatory in the United States. What it did was take down the barriers, mainly in the Stored Communications Act, that prevented US-based companies from disclosing certain categories of information. To be truly reciprocal, Canadian laws would need to be amended to permit disclosures to US law enforcement in response to a US court order or subpoena.

This is where I think things will get a little bit controversial in Canada. After all, two provinces went so far as to prohibit personal information from being stored outside of Canada or being accessed from outside of Canada because of an overblown concern about the USA PATRIOT Act. In some instances, it is an offence to disclose personal information in response to a “foreign demand for disclosure”. All of that would have to change, and I think that will attract some interesting responses.

At the end of the day, it makes sense that Canadian police should be able to go to a Canadian judge to get an order for access to information about Canadian suspects of a crime that took place in Canada.

It also makes sense that American police should be able to go to an American judge to get an order for access to information about American suspects of a crime that took place in the US.

The CLOUD Act agreements with the UK and Australia provide some idea about the guardrails that should be included in an agreement with Canada.

First, it should be limited to serious crimes, not trivial matters or the demands of administrative and regulatory tribunals.

Second, it should not permit one country to investigate the citizens or residents of the other country. It should be limited to Canadian authorities investigating Canadian crimes, or American authorities investigating American crimes.

Third, there should be a mechanism by which either country can declare, for a particular request, that the agreement does not apply in that instance.

Fourth, there should be a mechanism by which a company that receives legal process can challenge it.

As a final note, when this progresses and we see what the agreement looks like, Canadians should be very careful to make sure that it is not used to further the Canadian so-called “lawful access” agenda that has been pursued for years and years by Canadian law enforcement. In particular, Canadian law enforcement have been trying to get the laws amended so they can get warrantless access to personal information.

Monday, March 21, 2022

Video: Privacy laws and the media (Part 1)

Today, I’m going to be talking about privacy rights and freedom of expression in Canada. Specifically, I’m going to be talking about privacy and news reporting.

This is a pretty big topic that could fill an entire course at both law school and journalism school, but I’m hoping to provide an overview of the significant laws and principles at play.


Most of us would be familiar with the idea of freedom of expression or freedom of the press.

In Canada, it is guaranteed in section 2(b) of our Charter of Rights and Freedoms under the heading of “Fundamental Freedoms”.

This section reads:

“Everyone has the following fundamental freedoms: (b) freedom of thought, belief, opinion and expression, including freedom of the press and other media of communication;”

In Canada, we regularly talk about freedom of expression, which is guaranteed to everyone. It does include “freedom of the press.”

Charter s. 1

In understanding how section 2(b) works, we also have to understand that it is not absolute. The freedom of expression guarantee is subject to section 1 of the Charter, which allows some limitations on Charter guaranteed rights.

Section one says:

“The Canadian Charter of Rights and Freedoms guarantees the rights and freedoms set out in it subject only to such reasonable limits prescribed by law as can be demonstrably justified in a free and democratic society.”

Let’s break that down. Charter guaranteed rights can be subject only to “reasonable limits”, that are “prescribed by law” that have to be demonstrably justified in a free and democratic society.

It is always up to the government to justify these limitations.

It is important to note that freedom of expression not only includes the right to express oneself, but the courts have found that it includes a right to receive information. Limiting a journalist’s right to report on something also limits the public’s right to receive that reporting.

The Oakes test

The Supreme Court of Canada has given us the test for how to determine if an infringement of a Charter right can be justified under section 1. This is called the Oakes test, from a 1986 decision of the Supreme Court.

This also could be its own law school course, but in summary here it is:

First the limitation has to be “prescribed by law”. That’s right from section one. It can be a federal or provincial statute. It can be a regulation or a by-law. But it can’t be a whim of a state actor. It has to be rooted in the law. In some cases, the law could be so vague that it does not qualify as prescribed by law.

Second, the objective of the law has to be pressing and substantial. The courts will not permit Charter rights to be infringed for trivial objectives, so the law has to be for an important purpose.

Third, the impact on the Charter right has to be proportional. This has three parts:

The means chosen by the legislature to address these objectives must be rationally connected to the objective.

In doing so, the measures must minimally impair the rights at issue.

Finally, there must be proportionality between the infringement and objective. This is a final balancing step.

In order for an infringement of a Charter right to be justified, the government has to satisfy all parts of this test. If it fails one part, its justification fails.

The Common law

The Oakes test is only used for limitations that are prescribed by law, and something different is done for the common law. The common law is that substantial portion of our laws that are judge made and a bit more fluid.

Many of the privacy claims that I’ll be talking about are “common law”, including “intrusion upon seclusion” and “public disclosure of private facts”. These aren’t subject, strictly speaking, to the Charter.

The Charter limits what governments can do and how our parliaments can legislate. The common law is not generally a government imposing limits on what people can do; it most usually regulates what legal claims one person can have against another.

But the Supreme Court has said that the Common law needs to evolve in line with Charter principles and Charter values. For example, in a 2009 case called Grant v Torstar, the Supreme Court of Canada said that the common law of defamation needed to include a defence of “responsible communication on a matter of public interest” to take into account freedom of expression.

The protection of reputation was an important value that had to be balanced against the important right of freedom of expression.

Privacy statutes

So, is the press subject to privacy statutes like the federal Personal Information Protection and Electronic Documents Act or the BC and Alberta Personal Information Protection Acts?

Generally speaking, when engaged in journalism, they are not subject to these laws.

To do otherwise would be unworkable: journalists would have to get consent from politicians before reporting about them, whether it is favourable or critical. That would be a significant intrusion into freedom of expression.

As a result, all three of these laws specifically exclude all collection, use and disclosure that is exclusively for journalistic purposes.

Here is what PIPEDA says…

4(2) This Part does not apply to …
(c) any organization in respect of personal information that the organization collects, uses or discloses for journalistic, artistic or literary purposes and does not collect, use or disclose for any other purpose.

Alberta PIPA

Here is what Alberta’s PIPA says …

4(3) This Act does not apply to the following: …
the collection, use or disclosure of personal information, other than personal employee information that is collected, used or disclosed pursuant to section 15, 18 or 21, if the collection, use or disclosure, as the case may be, is for journalistic purposes and for no other purpose;

Common law claims

But journalists are subject to the common law, like defamation, and could be subject to common law privacy claims.

I am not aware of any cases where a journalist has been sued for “intrusion upon seclusion” or “public disclosure of embarrassing private facts” in Canada. If one were to be sued, the Court would have to take into account freedom of expression.

Public disclosure of private facts

This tort says that one who gives publicity to a matter concerning the private life of another is subject to liability to the other for invasion of privacy if the matter publicized or the act of the publication (a) would be highly offensive to a reasonable person and (b) is not of legitimate concern to the public.

Note that it includes “not of legitimate concern to the public.” A lack of public interest is thus an essential element of the tort, and establishing that the matter was of legitimate public concern would defeat the claim.

Intrusion upon seclusion

In this tort, a person can sue another for an intentional (or reckless) intrusion into the private affairs of another without lawful justification, and that intrusion must be highly offensive to a reasonable person, causing distress, humiliation or anguish.

This tort was introduced into Canada in 2012 from the United States, and may be subject to some refining. It may well be that a court would have to read in the public interest factors that exist in the public disclosure tort in order to be consistent with the freedom of expression right in a case involving legitimate news reporting. Freedom of expression also includes the information gathering stage of reporting.

(You may have noticed that “public interest” came up in my discussion of the defamation defence created in Grant v Torstar and also in the public disclosure tort. Public interest in reporting is important.)

Privacy Act (BC)

Some provinces, like British Columbia, have statutory torts of invasion of privacy. They also use “public interest” to provide a defence. Here’s the wording from BC:

1 (1) It is a tort, actionable without proof of damage, for a person, wilfully and without a claim of right, to violate the privacy of another.
(2) The nature and degree of privacy to which a person is entitled in a situation or in relation to a matter is that which is reasonable in the circumstances, giving due regard to the lawful interests of others.

Which could include news reporting.

(3) In determining whether the act or conduct of a person is a violation of another's privacy, regard must be given to the nature, incidence and occasion of the act or conduct and to any domestic or other relationship between the parties.

This last part would very likely take into account whether the intrusion was done by a journalist pursuing a story in the public interest.

The statute also specifically includes a defence for “Publications in the public interest or comment on a matter of public interest”.

But it is notable that this only extends to the publication, and not the collection of information leading to the publication.


We also have criminal laws that are designed to protect privacy.

For example, we have a wiretapping law that makes it an offence to intercept a private communication. It does not include a public interest defence, and I suppose it could be challenged if a reporter engaged in wiretapping or eavesdropping as part of a story.

But just because it could be challenged, doesn’t mean it would necessarily be successful. It may well be that a court would say that any restriction on freedom of expression is justified, and everyone’s interest in being free from having their conversations overheard or phones tapped outweighs any impact on freedom of expression.


We also have an offence of voyeurism, which includes a specific “public good” defence, which reads:

“(6) No person shall be convicted of an offence under this section if the acts that are alleged to constitute the offence serve the public good and do not extend beyond what serves the public good.”

It is hard to imagine a hypothetical scenario in which a member of the press would be engaged in voyeurism and could rely on the public good defence, but it is there. And legitimately informing the public on a matter of public interest would arguably serve the public good.


In Canada, freedom of expression and freedom of the press are important values. They are rights that are baked into our constitution and all laws in Canada that affect expression or the ability of the media to do their jobs have to be justified.

This includes privacy laws, which may be engaged every time a reporter is looking into the private affairs or the private life of a subject.

Thankfully, to take account of freedom of the press, journalists and journalistic purposes are specifically excluded from the application of our general privacy laws, which require individual consent for all collection, use and disclosure of personal information.

So what we’re left with are the general rules in the common law and in statutes that address seriously problematic intrusions into privacy. While those rules haven’t been tested in the context of journalism, they are capable of taking freedom of the press into account.

Similarly, we have laws that criminalize wiretapping and voyeurism, which could be subject to challenge related to possible impacts on freedom of the press, but these guardrails are likely justifiable under section 1 of our Charter.

Monday, March 14, 2022

Video: Home surveillance cameras

In my legal practice, I exclusively advise businesses on matters related to privacy and technology law. But I am sometimes asked by individuals about the use of home surveillance cameras. Because of advances in technology and low cost, they’re everywhere. The rise of home delivery has led to porch pirates who steal packages, and people want to deter that or to try to catch porch pirates in the act.

If you keep an eye out walking down a suburban street, you’ll often see them. Doorbell cameras are very popular, but so are other cameras.

The purpose of this discussion is to review the laws that do and do not apply to individuals who use these devices on their own private property. At least in this discussion, I’m not going to talk about the laws as they may apply to companies that provide these services used by individuals.

Different rules

Many people are familiar with privacy regulations like the Personal Information Protection and Electronic Documents Act or the provincial Freedom of Information and Protection of Privacy Acts.

Businesses are regulated by commercial privacy laws, whether federal or provincial.

Government and police are regulated by public sector privacy laws.

But the personal and “domestic” collection of personal information is unregulated in Canada.

General privacy regulations do not apply

Commercial privacy regulations do not apply to private individuals collecting, using or disclosing personal information for their own personal purposes.

For example, the Personal Information Protection and Electronic Documents Act, known as PIPEDA, only applies to the collection, use and disclosure of personal information in the course of commercial activity.

And just to be more clear, paragraph 4(2)(b) of that Act excludes personal or domestic purposes:

It says This Part does not apply to …

(b) any individual in respect of personal information that the individual collects, uses or discloses for personal or domestic purposes and does not collect, use or disclose for any other purpose;

If you are collecting personal information – which includes video and images that include a person – only for personal or domestic purposes, that is excluded from the Act.

The Personal Information Protection Acts of British Columbia and Alberta are very similar.

For example, paragraph 3(2)(a) has an exclusion that is very similar to PIPEDA’s.

“This Act does not apply to the following: (a) the collection, use or disclosure of personal information, if the collection, use or disclosure is for the personal or domestic purposes of the individual who is collecting, using or disclosing the personal information and for no other purpose;”

Other “Privacy” laws

Even though this activity is not captured by our general privacy laws, other laws may apply.

Our Criminal Code includes offences for voyeurism and the interception of private communications.


The crime of voyeurism was added to the Criminal Code relatively recently.

It involves surreptitiously observing or recording a person where there is a reasonable expectation of privacy.

Paragraph (a) makes it an offence to observe or record in a place in which a person can reasonably be expected to be nude … or to be engaged in explicit sexual activity.

Paragraph (b) makes it an offence where the recording or observing is done for the purpose of observing or recording a person in such a state or engaged in such an activity.

Paragraph (c) covers a broader range of observation or recording, but where it is done for a sexual purpose.

People should be aware that the courts have held you can have a reasonable expectation of privacy in a relatively public place and that the expectation of privacy can vary according to the method of observation. For example, you may not have much of an expectation of privacy with regard to being observed by someone at eye level, but you may have a protected expectation of privacy from being observed or recorded up a person’s dress or from above to look down their top.

Don’t point a camera where someone has a reasonable expectation of privacy.

This would include pointing at a neighbour’s windows, fenced back yards, pool, hot tub, etc.

Interception of private communications

Audio recording is particularly hazardous in Canada.

Using a device to knowingly intercept a private communication can be a very serious offence in Canada.

If your camera can record audio, don’t put it where it might record a private communication, or simply disable that feature. And be careful.

You may have a camera on your fence-post that is exclusively pointed at your property, but it may capture private conversations among your neighbours on the other side of the fence.

Consent is a defence to a charge under this section, but it’s unclear if signage can create adequate consent.

Other privacy laws

In addition to the criminal law, you should also be mindful of the laws under which you can be sued.

This includes the law of nuisance, the law of trespass, and privacy claims under “intrusion upon seclusion” and some provincial privacy statutes.


Nuisance is a very old and well-established legal claim. It boils down to “unreasonable interference with the ordinary enjoyment of property.”

A lot of traditional, old nuisance claims relate to noises, bad smells, smoke and things like that, but we are starting to see cases where people claim that someone’s use of surveillance cameras is interfering with their enjoyment of their own property.

The case of Suzuki and Munroe from the British Columbia Supreme Court in 2009 is instructive.

In this case, the Suzukis sued the Munroes for having a loud air conditioner and for having a surveillance camera that captured part of the Suzuki property. In finding in favour of the plaintiffs, the judge wrote:

“I have no doubt that a surveillance camera continuously observing the entrance areas to a neighbouring property, or any part thereof, in these circumstances, is an intolerable interference with the use and enjoyment of the neighbouring property…

No useful purpose of any kind is served by having the camera directed at any part of the Suzuki property.

I am forced to conclude that the Munroes installed the camera and refused to remove or redirect it at least in part in order to provoke and annoy the Suzukis.

Acts done with the intention of annoying a neighbour and actually causing annoyance will be a nuisance, although the same amount of annoyance would not be a nuisance if done in the ordinary and reasonable use of the property….”

It is important to note that the judge found the use of the camera was not really necessary for any legitimate purpose of the defendants. If it had been legitimate, it might not have been found to be a nuisance.

We’ll talk about another, similar BC case in a bit.


Trespassing is unlawful. It can be a criminal offence, a provincial offence or something you can sue someone for.

Don’t enter a neighbour’s property to install or locate a camera without their permission. Putting a camera physically on property that is not yours without permission is also unlawful.

Intrusion upon seclusion

In addition to the more traditional torts that I just mentioned, we are seeing more pure privacy claims.

In most common law provinces, you can sue or be sued for “intrusion upon seclusion”.

It is, in summary “an intentional or reckless intrusion, without lawful justification, into the plaintiff's private affairs or concerns that would be highly offensive to a reasonable person.”

If you poke into someone’s private life in a way that would be highly offensive, harm and damages are presumed.

Statutory torts

Some provinces have what are called statutory torts of invasion of privacy.

Here is the gist of the British Columbia Privacy Act.

1(1) It is a tort, actionable without proof of damage, for a person, wilfully and without a claim of right, to violate the privacy of another.

This means that the plaintiff doesn’t have to prove they were actually harmed. That is presumed.

Note the violation has to be without a claim of right or legitimate justification.

It then goes on and says …

(2) The nature and degree of privacy to which a person is entitled in a situation or in relation to a matter is that which is reasonable in the circumstances, giving due regard to the lawful interests of others.

(3) In determining whether the act or conduct of a person is a violation of another's privacy, regard must be given to the nature, incidence and occasion of the act or conduct and to any domestic or other relationship between the parties.

Note it specifically refers to eavesdropping and surveillance in subsection (4), which reads:

(4) Without limiting subsections (1) to (3), privacy may be violated by eavesdropping or surveillance, whether or not accomplished by trespass.

For the use of home surveillance cameras to protect your private property, paragraph 2(2)(b) is important:

2(2) An act or conduct is not a violation of privacy if any of the following applies:

(b) the act or conduct was incidental to the exercise of a lawful right of defence of person or property; …

Let’s see how that plays out in practice.

This specifically came up in another British Columbia case called Minicucci and Liu, a 2021 decision from the British Columbia Supreme Court.

This was another dispute between neighbours.

For backyard privacy, the plaintiffs planted eight 25-foot cedars and twenty 10-foot cedars along the property line between the parties’ homes. The plaintiffs had a pool in their backyard, and the defendants had one as well.

Sometime later, the defendants asked the plaintiffs to “top” the trees because they were interfering with the defendants’ view. The plaintiffs refused.

Later, while the plaintiffs were away from their home, the defendants topped a number of the cedar trees.

The plaintiffs installed cameras pointed at the trees, and the cameras could also see into the defendants’ backyard.

So the plaintiffs sued the defendants, seeking damages and injunctive relief for trespass and damage to the cedar trees.

The defendants filed a counterclaim seeking damages from the plaintiffs for nuisance and for invasion of privacy by the cameras.

The defendants’ privacy claim was dismissed because the use and location of the cameras was justified. Capturing a portion of the defendants’ backyard was incidental, and the cameras had been installed because of the defendants’ trespass and topping of the trees.

The court also noted that it would not have been possible to record the trees without incidentally including some of the backyard.

Other rules - Condo rules

In some cases, there may be other rules that affect whether or how someone can install surveillance cameras.

In a 2022 Alberta court case called Lupuliak and Condo Plan 82111689, the Court of Queen’s Bench found against a condo owner because the installation of a doorbell camera on the person’s door violated the condo rules. A similar camera installed on the person’s patio was found not to be an issue.

Other rules - Leases

If you’re a tenant, you would want to check your lease or check with your landlord before installing any device outside your leased space. This would also include your door.

Purely public places

Many cameras that people install to observe their front doors or driveways will also include coverage of public spaces like sidewalks and roads.

There’s a diminished expectation of privacy in a completely public space like a road or a sidewalk.

However, expectation of privacy is not binary but is more nuanced.

If it came up, the courts would likely apply a balancing test: is your legitimate need to use the device proportionate to the intrusion on others?

What if police ask for your footage?

Since most people use home surveillance cameras to deter or detect criminal activity, it’s worth asking what to do if the police ask for your footage.

With the increasing adoption of these devices, police more commonly do a video or CCTV canvass as part of their investigations. This involves going around the area to see whether any cameras may have captured something that could further the investigation.

So if the police come knocking looking for footage from your camera, what should you do?

Unlike businesses, which are subject to general privacy regulations, you can give police footage without a warrant or a court order. That doesn’t mean you have to. It’s entirely up to you, unless they have something called a production order, which requires you to provide it.

Personally, I would ask them what they’re investigating and I’d decide whether to hand it over on that basis.

And if you are dealing with the police to report a crime and your cameras captured anything relevant, you can feel free to hand it over.

Best practices

So at the end of the day, what are the best practices?

In short, don’t be an idiot.

Be a good neighbour and minimise any recording of anything that is not your own property.

Let people – residents and visitors – know what’s going on. Talk to your neighbours and put up signs. Your neighbour may actually appreciate that you have cameras.

Certainly, don’t point a camera at any place where you’d expect people to be nude or doing “things”.

Think about what you’re actually using the cameras for and adjust your settings accordingly. If you are concerned about prowlers at night or someone on your property when you’re not at home, some of these more advanced cameras can be set to only record at night or when you’re not at home.


Remember that though an individual in their private capacity is outside the usual privacy regulations, other laws and rules can still apply.

Respect your neighbours and their privacy interests.

Monday, March 07, 2022

Video: Individual access requests under PIPEDA

New on my YouTube Channel.


Today I am going to be speaking about individual personal information access requests. If you're from Europe, you probably have heard the term data subject access requests, which is essentially the same concept.

This is where an individual gets to ask a business what information it has about them, get a copy of it and perhaps dispute its accuracy.

I remember when our federal privacy law was being debated and phased in, many businesses were concerned they would be overrun with individual access requests. They were particularly concerned with frivolous or vexatious ones. We really haven’t seen that in practice.

But the right exists and any organization that does business in Canada needs to know about it and should be able to manage it.

Today, I am only going to be talking about Canada’s Personal Information Protection and Electronic Documents Act. This law includes a general rule that individuals have a right of access. Like most rules, it is not absolute and there are some exceptions. I plan to cover many of these exceptions in this discussion.

While this discussion is limited to Canada’s Personal Information Protection and Electronic Documents Act, you should probably know that every single Canadian privacy law includes an access right.

Most of our public sector laws are divided between freedom of information and protection of privacy. In the federal public sector, there is a separate Privacy Act and an Access to Information Act. Many provinces also have health privacy laws, all of which include an individual access right.

Though I am talking about the federal private sector law, you should know that some of the details can differ from law to law.

If you have followed any of these discussions, you will know that the Personal Information Protection and Electronic Documents Act is weird. This federal law is based on the general principles of the Canadian Standards Association Model Code for the Protection of Personal Information. In fact, that standard is appended as a schedule to the law.

If you read it, you will see that it is written as a general list of principles, not like most of our laws. The general rules are in the schedule but there are exceptions in the body of the statute. The body of the law and the Schedule have to be read together.

The General Principle of Access

So we will be looking at Principle 9 from the CSA Model Code and then sections 8 through 10 of the Act.

Of course, we have to start with the general rule of access. This is in Principle 9, entitled “individual access”. It says…

“Upon request, an individual shall be informed of the existence, use, and disclosure of his or her personal information and shall be given access to that information. An individual shall be able to challenge the accuracy and completeness of the information and have it amended as appropriate.”

This talks about access to the information itself. It also refers to access to information about how it has been used. And the individual also gets to challenge the accuracy and completeness of that information.

There are some sub-principles that elaborate on this. Sub principle 9.1 says…

“9.1 Upon request, an organization shall inform an individual whether or not the organization holds personal information about the individual. Organizations are encouraged to indicate the source of this information. The organization shall allow the individual access to this information. … In addition, the organization shall provide an account of the use that has been made or is being made of this information and an account of the third parties to which it has been disclosed.”

The business should answer the question about whether they even have information about the individual, and should be able to tell them where that information came from.

They also should be able to tell the individual how that information has been used and to whom it has been disclosed. Businesses are sometimes surprised to discover that they have to keep information about their information in order to satisfy this requirement.

Because a business cannot disclose personal information about somebody without their consent, and the information contained in an individual access request is pretty all-encompassing, it makes sense that the business can require the individual to prove that they are the person they purport to be. It also makes sense that the individual should cooperate in helping the business identify what information may be about them.

That includes “how do we know you are who you say you are?” And “where should we look to find information about you?”

Information provided in that particular context can only be used for that purpose.

To whom has the information been disclosed?

I mentioned that businesses have to keep information about their information. In sub-principle 9.3, individual access rights include a right to know to whom a person’s personal information may have been disclosed. The principle reads:

“9.3 In providing an account of third parties to which it has disclosed personal information about an individual, an organization should attempt to be as specific as possible. When it is not possible to provide a list of the organizations to which it has actually disclosed information about an individual, the organization shall provide a list of organizations to which it may have disclosed information about the individual.”

At the end of the day, organizations need to know where the data they control goes and need to be able to tell people when they ask.

Timelines to respond

The timelines to respond are a good example of the difference between the very general language of the principles and some of the specifics in the statute. Sub-principle 9.4 says the information has to be provided “within a reasonable time”. We’ll see when we flip to Section 8 that this really means no later than thirty days in most cases.

The sub-principle also says it has to be at minimal or no cost to the individual.

My general advice is not to charge people for this. But some individuals will make repeated requests, and the Act has no mechanism to say “no” to frivolous or vexatious requests. In those cases, attaching a cost may make sense: for example, the first request in any twelve-month period is free, and subsequent ones carry a charge.

I think Google had the right idea when it started providing users with the ability to download their account information: a self-serve individual access right. Since then, many large data-driven companies have followed suit, allowing individuals to easily access their own data for free.

This sub-principle also says “The requested information shall be provided or made available in a form that is generally understandable. For example, if the organization uses abbreviations or codes to record information, an explanation shall be provided.”

This makes sense. If a person can’t parse a JSON file or decipher technical abbreviations, the person really isn’t able to access the information. I know of some healthcare providers who will provide a nurse or a records clerk to walk through the records with a patient who asks for it.
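As a toy illustration of what “generally understandable” might mean in practice, here is a hypothetical translation step an organization could apply before releasing coded records. The codes, labels and function name are invented for the example; they are not from PIPEDA or any real system:

```python
# Hypothetical internal codes mapped to plain-language labels, so the
# record released to the individual is understandable (invented example).
CODE_LABELS = {
    "ACCT_CL": "Account closed",
    "MKT_OPT": "Opted out of marketing",
    "ID_VER": "Identity verified",
}

def explain_record(record: dict) -> dict:
    # Replace coded values with their plain-language explanations,
    # leaving unrecognized values as-is rather than guessing.
    return {field: CODE_LABELS.get(value, value) for field, value in record.items()}

print(explain_record({"status": "ACCT_CL", "marketing": "MKT_OPT"}))
# → {'status': 'Account closed', 'marketing': 'Opted out of marketing'}
```

The point is not the code itself but the obligation behind it: if the raw record would be gibberish to the individual, an explanation has to accompany it.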

Finally, you’ll note that this doesn’t go so far as to give a “data portability” right. We expect this to be added when PIPEDA is updated in the coming year or so.

Disputes about accuracy

PIPEDA contains an accuracy principle, which requires that “Personal information shall be as accurate, complete, and up-to-date as is necessary for the purposes for which it is to be used.”

The individual has the right to dispute the accuracy of any personal information a company may have, and sub-principles 9.5 and 9.6 address how this is to be dealt with. It is pretty straightforward:

“9.5 When an individual successfully demonstrates the inaccuracy or incompleteness of personal information, the organization shall amend the information as required. Depending upon the nature of the information challenged, amendment involves the correction, deletion, or addition of information. Where appropriate, the amended information shall be transmitted to third parties having access to the information in question.”

But what happens if the company doesn’t agree that the information is inaccurate? Sub-principle 9.6 addresses this:

“9.6 When a challenge is not resolved to the satisfaction of the individual, the substance of the unresolved challenge shall be recorded by the organization. When appropriate, the existence of the unresolved challenge shall be transmitted to third parties having access to the information in question.”

How to make a request

So those are the relevant provisions in the Schedule from the CSA Model Code. Let’s now turn to some of the specifics in the body of the statute itself.

Subsection (1) of Section 8 of PIPEDA says that these requests have to be in writing. This can, of course, be electronic. Note that the wording says “must”. This implies that a request that is not in writing doesn’t trigger the formalities of the Act, but can still be responded to.

Duty to assist

Subsection (2) of Section 8 places an obligation on the organization to assist the individual to make a request if they say they need help.

This makes sense.

Time limit

I mentioned earlier that the general language about timing in the principles is firmed up in the body of the statute. Specifically, subsection (3) of Section 8 says “An organization shall respond to a request with due diligence and in any case not later than thirty days after receipt of the request.”

Extension of time limit

This isn’t absolute, however. In some cases, the organization can extend the time but has to let the individual know about the extension, the reason for it and of their right to complain to the Privacy Commissioner.

The first circumstance is if “meeting the time limit would unreasonably interfere with the activities of the organization”.

This would be the case if the request is complex or would require significant resources, pulling staff away from their usual tasks to the point that it would “unreasonably interfere with the activities of the organization.” What “unreasonably interfere” means is unclear. In this case, the timeline can be extended for a second thirty days.

The second circumstance is if the organization needs more time to carry out consultations necessary to respond to the request. For example, some of the information may have been generated in litigation or in contemplation of litigation, and the organization needs to determine if the privilege exception applies and to decide whether to waive it. In this case as well, the timeline can be extended for a second thirty days.

The third scenario is more open ended and allows time to convert the personal information into an alternative format. This may be to accommodate a disability.
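The default clock and the extensions above can be sketched in a few lines. This is a hypothetical helper, not anything from the Act, and it assumes the default thirty-day window plus a single thirty-day extension for the first two scenarios (the open-ended alternative-format scenario is not modelled):

```python
from datetime import date, timedelta

def response_deadline(received: date, extended: bool = False) -> date:
    """Sketch of the PIPEDA response clock: thirty days by default,
    with one optional thirty-day extension (hypothetical helper,
    not legal advice)."""
    deadline = received + timedelta(days=30)
    if extended:
        # Extension for complexity or for necessary consultations:
        # a second thirty days, with notice to the individual.
        deadline += timedelta(days=30)
    return deadline

print(response_deadline(date(2022, 6, 1)))        # → 2022-07-01
print(response_deadline(date(2022, 6, 1), True))  # → 2022-07-31
```

Remember that claiming an extension also requires telling the individual the reason for it and of their right to complain to the Privacy Commissioner; the date arithmetic is the easy part.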

Deemed refusal

Subsection (5) of Section 8 says that if the organization fails to respond to an access request within the timelines imposed by the Act, that is a deemed refusal and the individual thus has the right to complain to the Privacy Commissioner.

Costs for responding

You’ll recall that the principles say that access requests have to be “at minimal or no cost to the individual.”

Subsection (6) of Section 8 says that you can only charge the individual if they are advised of the approximate cost and the individual then tells the organization that the request is not being withdrawn.

Notably, there is no other guidance on costs or whether the cost has to be reasonable. That’s likely implied.

Reasons for refusals

If the organization refuses an individual’s request – and I’ll get into the exceptions that can justify a refusal shortly – the refusal has to be in writing. It has to set out the reasons for the refusal and advise the individual of their right to complain to the Privacy Commissioner.

It also says that the organization essentially must preserve and retain the information at issue for as long as is necessary to allow the individual to exhaust any recourse that they may have.

That makes sense. If it was an unjustified refusal, and the end result is a recommendation from the Commissioner or an order from the court to hand it over, that would be thwarted if the information were deleted in the meantime.

Mandatory refusals

The Act contains a number of circumstances where access either can be refused or where it must be refused.

In subsection (1) of section 9, it says that you have to refuse to provide access if doing so would disclose personal information of a third party. If that personal information can be severed from the disclosure, then you must do the severing and provide the balance of the information. If the third party consents, then access can be granted.

Interestingly, subsection (2) allows giving access even if it would disclose third party personal information if the “individual needs the information because an individual’s life, health or security is threatened.”

Notably, this is not limited to cases where the applicant’s own life, health or security is threatened; it applies where any individual’s is.

That is a real outlier of a scenario and if you encounter that, get immediate advice from an experienced privacy lawyer.

A second scenario where access must be refused is if the personal information that is the subject of the access request has previously been requested by law enforcement, national security or other government agencies. If this is the case: get immediate advice from an experienced privacy lawyer.

The Act sets out a whole routine of consulting with the government agency, seeking their input or direction. If they say don’t disclose it, you can’t disclose it. And you probably can’t tell the individual why and you also have to give notice to the Privacy Commissioner.

The legislators have created a real minefield for organizations if this comes up, so proceed with caution and with good advice.

Discretionary refusals

Subsection (3) of Section 9 sets out a number of circumstances where an organization can choose to refuse access. It doesn’t have to provide it, but it can.

The first is if the information is protected by solicitor-client privilege or litigation privilege. This comes up a lot because individuals often use the access right under PIPEDA as a pre-litigation discovery tool. If there’s any doubt about whether information fits in this category, seek advice. And of course be aware that providing access to privileged information would amount to a waiver of privilege.

The second is if providing access would reveal confidential commercial information, but if that information can be severed, it has to be and the balance of the information must be provided.

The third is if disclosing the information could reasonably be expected to threaten the life or security of another individual. As with confidential commercial information, if that information can be severed, it has to be and the balance of the information must be provided.

The fourth is if the information was collected under paragraph 7(1)(b), which is if it was collected without the knowledge or consent of the individual in connection with an investigation related to a breach of an agreement or a contravention of the laws of Canada or a province. If you refuse on this basis, you have to notify the Privacy Commissioner and include in the notice to the individual whatever information that the Commissioner may specify.

The fifth is if the information was generated in the course of a formal dispute resolution process. This would be in addition to litigation privilege, referred to in paragraph (a).

The sixth scenario where access can be refused is if the information relates to an investigation under the Public Servants Disclosure Protection Act. This rarely arises.


At the end of the day, Canadians generally do not make frequent use of the individual access right they have under the Personal Information Protection and Electronic Documents Act.

But businesses need to understand that this right exists and should have processes and procedures to manage it. Hopefully this has provided information on the general rules that apply to this, and the exceptions to the general right of access.

Thank you very much for tuning in. If you have any comments on this video or any suggestions for topics you’d like to see covered in the future, please leave them in the comments below.

If you find this sort of content to be interesting or informative, please subscribe. If you also click the bell, you’ll be notified of new videos as they are posted.