The Canadian Privacy Law Blog: Developments in privacy law and writings of a Canadian privacy lawyer, containing information related to the Personal Information Protection and Electronic Documents Act (aka PIPEDA) and other Canadian and international laws.
Can someone legitimately try to stop you from taking photos or recording video in a public place? There are some laws to know about, but the answer for Canada is that you generally have the right to take photos or record video in a public place, and nobody can lawfully stop you from doing so.
How it came up
This past week on Twitter, I saw a couple of discussions about people taking photos in public places, either being called out about it online or being told in person to cut it out.
In the first example, Canadian journalist James MacLeod took it upon himself to get a radar speed gun and document people speeding through a park. He’d take photos of drivers and their speed, and post them on Twitter. One Twitter user said doing so seemed “suspect”.
In the second example, a person in Toronto tweeted that he’d been told by a security guard to not take photos of a shipping container put in a public street, blocking a cycling lane. As I replied, “there is no legal basis upon which a security guard can require an individual private citizen to stop taking photos or video in a public place.”
I’ve previously done a video about recording the police in public (link below), but figured it was time to do a more general video about photography and videography in public.
Here’s the general rule: you can take photos in a public place or record video on public property without any legal consequences. That doesn’t always mean you should, but you generally can. You can also photograph or record any place or thing that is visible from a public place, which would include private property as long as you yourself are not trespassing.
There is nothing in our criminal law that makes it illegal to take photos or video in a public place. Other general laws still apply: you can’t be a nuisance, you can’t damage property and you can’t obstruct the police when they are carrying out their duties. You can’t block traffic to get the perfect shot. Short of that, you can generally stand in a public place and take photos of everything and everyone you see.
In fact, you have a Charter right to take photos or record video. The right to freedom of expression protected in section 2(b) of the Charter also protects your right to collect information. Photography and videography are inherently expressive activities and are thus Charter-protected. Any limitation in law on that right would have to be justified under s. 1 of the Charter and any sort of blanket “no photography in public” law would not be justifiable.
Exceptions – voyeurism
That said, there is a crime of voyeurism that has a few nuances and can apply in public or quasi-public places. It was added to the Criminal Code relatively recently.
It involves surreptitiously observing or recording a person where there is a reasonable expectation of privacy. It has to be surreptitious and there has to be a reasonable expectation of privacy.
Paragraph (a) makes it an offence to observe or record in a place in which a person can reasonably be expected to be nude … or to be engaged in explicit sexual activity.
Paragraph (b) makes it an offence where the recording or observing is done for the purpose of observing or recording a person in such a state or engaged in such an activity.
Paragraph (c) covers a broader range of observation or recording, but where it is done for a sexual purpose.
People should be aware that the courts have held that you can have a reasonable expectation of privacy even in a relatively public place, and that the expectation of privacy can vary according to the method of observation. For example, you may not have much of an expectation of privacy with regard to being observed by someone at eye level, but you may have a protected expectation of privacy against being recorded up your dress or down your top from above.
One of the leading cases on this is called Jarvis.
The accused was a teacher at a high school. He used a camera concealed inside a pen to make surreptitious video recordings of female students while they were engaged in ordinary school-related activities in common areas of the school. Most of the videos focused on the faces, upper bodies and breasts of female students. The students were not aware that they were being recorded. Of course, they did not consent to the recordings. A school board policy in effect at the relevant time prohibited the type of conduct engaged in by the accused. There were other official surveillance cameras in the school hallways.
The court said:
“Given ordinary expectations regarding video surveillance in places such as schools, the students would have reasonably expected that they would be captured incidentally by security cameras in various locations at the school and that this footage of them could be viewed or reviewed by authorized persons for purposes related to safety and the protection of property. It does not follow from this that they would have reasonably expected that they would also be recorded at close range with a hidden camera, let alone by a teacher for the teacher’s purely private purposes (an issue to which I will return later in these reasons). In part due to the technology used to make them, the videos made by Mr. Jarvis are far more intrusive than casual observation, security camera surveillance or other types of observation or recording that would reasonably be expected by people in most public places, and in particular, by students in a school environment.”
So while the students should have expected to be incidentally observed by the school’s cameras, that did not ultimately affect their expectation of privacy where a teacher with a hidden camera was concerned. He was convicted of voyeurism.
Another key element of the voyeurism offence is that the observation or recording has to be surreptitious. In Jarvis, the camera was disguised in a pen. There is a case from Ontario called R. v. Lebenfish, 2014 ONCJ 130, in which a person was charged with voyeurism after he was observed taking photos, mainly of women, at a nude beach in Toronto. He was acquitted because he did not make any effort to hide what he was doing. The court also found that the other beach-goers did not have a reasonable expectation of privacy, though it noted that he wasn’t using a long zoom lens or other form of photographic enhancement.
Sneakily taking photos up dresses can be the offence of voyeurism, but standing on a sidewalk obviously taking a photo of someone else would not be.
In Lebenfish, the accused was also charged with mischief. Specifically, it was alleged he committed mischief “by willfully interfering with the lawful enjoyment without legal justification of property,” namely, the beach.
The court found that he did not interfere with the lawful enjoyment of the beach, but also noted that the answer may have been different if there were signs posted saying no photography or if there had been a municipal by-law prohibiting photography at the beach. If photography was prohibited, then part of the enjoyment of the beach would be that it was camera free.
One thing that is worth noting is that the law doesn’t offer any special protection for children. A while ago, the police here in Halifax were looking for someone who was reported to have been taking photos of kids at a public park. That was followed by a lot of people saying that it is plainly illegal to take photos of other people’s children at a park. That’s not the case. It is certainly creepy and concerning, but likely not illegal in and of itself.
Privacy laws
What about other kinds of laws? We have privacy laws to think about. The ones I deal with most often regulate what businesses can do. An individual taking photos for personal purposes is not a business.
And just to be clear, they have carve-outs for personal use and artistic use. Here’s what PIPEDA says:
(2) This Part does not apply to
(b) any individual in respect of personal information that the individual collects, uses or discloses for personal or domestic purposes and does not collect, use or disclose for any other purpose; or
(c) any organization in respect of personal information that the organization collects, uses or discloses for journalistic, artistic or literary purposes and does not collect, use or disclose for any other purpose.
The provincial general privacy laws have similar exclusions.
Privacy torts
So what about the risk of being sued for damages for invasion of privacy? That’s not likely either.
In most common law provinces, you can sue or be sued for “intrusion upon seclusion”.
It is, in summary “an intentional or reckless intrusion, without lawful justification, into the plaintiff's private affairs or concerns that would be highly offensive to a reasonable person.”
If you poke into someone’s private life in a way that would be highly offensive, harm and damages are presumed.
You can also be sued for public disclosure of private facts, which also has to engage someone’s private life and be highly offensive to a reasonable person.
It is hard to see how taking photographs or video in a public place would engage someone’s private and intimate life, and be highly offensive to a reasonable person. It could be engaged if one were stalking someone, though.
Statutory torts
Some provinces have what are called statutory torts of invasion of privacy.
Here is the gist of the British Columbia Privacy Act.
1(1) It is a tort, actionable without proof of damage, for a person, wilfully and without a claim of right, to violate the privacy of another.
Note the violation has to be without a claim of right or legitimate justification.
It then goes on and says …
(2) The nature and degree of privacy to which a person is entitled in a situation or in relation to a matter is that which is reasonable in the circumstances, giving due regard to the lawful interests of others.
(3) In determining whether the act or conduct of a person is a violation of another's privacy, regard must be given to the nature, incidence and occasion of the act or conduct and to any domestic or other relationship between the parties.
Note it specifically refers to eavesdropping and surveillance in subsection (4), which reads:
(4) Without limiting subsections (1) to (3), privacy may be violated by eavesdropping or surveillance, whether or not accomplished by trespass.
Again, it is hard to see how obviously taking photographs or video in a public place would engage this tort, but it could be engaged if one were stalking someone.
Private property but public places
Regularly, we go to places where the public is generally invited but that are private property. This can also include what we often think of as “public property” that is actually owned by someone else. Think of a park, which is owned by a municipality. People or organizations that own property can put conditions on entry to that property. One of those conditions may be “no photography”. And if you exceed or violate the conditions of your invitation, you could then be trespassing. The property owner would be within their rights to ask you to leave under provincial trespassing statutes, and in some provinces trespassing may be a summary offence. But the owner or occupier of the property would have to put you on notice that photography is prohibited on the premises.
Requests to delete photos
Finally, I’m sometimes asked if you can be required to delete photos taken. The answer is a resounding no. No private individual can take your phone and nobody can require you to delete any photos.
The Privacy Commissioner of Canada just released a report of findings about a company contracted by the Airport of Montreal to do on-arrival covid testing. The company added the people tested to their mailing list and sent them unsolicited commercial electronic messages. The investigation was done jointly with the Information Commissioner of Quebec. The finding raises more questions than it answers.
The complainant in this case arrived at Montreal’s Trudeau International Airport. To comply with the Public Health Agency of Canada’s rules, he had to undergo on-arrival COVID testing. Conveniently, the airport had contracted with a company called Biron Health Group to do COVID testing directly at the airport. So the complainant went to the Biron site, provided his contact information and had the test done; it was negative, and they emailed him the results.
A few days after receiving his test results, the complainant received an email from Biron promoting its other services. The complainant unsubscribed using the link in the email, and never received any further unwanted emails from them. The OPC said “he was shocked to receive such an email” and filed a complaint with the OPC.
The information and privacy commissioner of Quebec also investigated, but does not appear to have released a decision on the case. Instead, they just referred to the OPC’s finding.
During the course of the investigation, the company said it had “implied consent” under Canada’s Anti-Spam Law to send commercial electronic messages and was justified in doing so.
The OPC said there was no implied consent under PIPEDA, however. Here’s what they said specifically:
“The OPC is of the opinion that Biron could not reasonably assume that it had the implicit consent of travellers arriving in Canada. Biron was mandated by the government to conduct COVID-19 testing on travellers and paid by the Montreal Trudeau Airport. Biron was the only company offering this service at this airport. Consequently, travellers arriving in Canada had no choice but to do business with Biron to comply with the rules issued by the Public Health Agency. In this situation, these travellers would not normally expect their personal information to be used for reasons other than the mandatory testing.
Biron collected the travellers’ personal information for the purpose of conducting COVID-19 tests and sending them sensitive information related to their health, notably their test results. Biron was acting as a service provider for the airport. The OPC considers that Biron should have taken these circumstances into account before using the personal information for secondary marketing purposes and for its own purposes.”
Because Biron said they’d stop doing this, the OPC closed the file as “settled during the course of the investigation”. Case closed.
So why is this unsatisfying? There are a couple of key questions in the background, of interest to privacy practitioners, that are unaddressed and thus unanswered.
The first question is what law should actually apply to Biron in this case? The Privacy Commissioner refers to PIPEDA, our federal commercial privacy law. But we have a mess of privacy laws in Canada, more than a few of which could have been applicable.
Quebec has a provincial privacy law that applies to all businesses in that province, unless they are “federal works, undertakings or businesses”. Notably, international airports and airlines are “federal works, undertakings or businesses.”
There really is no doubt that if the testing facility had been off airport property and operating on its own, the federal privacy law would not have applied at all; instead, the Quebec private sector privacy law would have been applicable. That means the federal Commissioner would have had no jurisdiction to investigate and it would have been entirely up to the Quebec Commissioner to do so.
So does that mean that simply being on or operating from airport property makes you a “federal work, undertaking or business”? I don't think that can really be the case.
Was it because the service they were providing is connected to international travel that places them within Federal jurisdiction? That seems dubious to me.
Were they within Federal jurisdiction because they had been engaged by the airport authority to provide this service? The airport authority is certainly a “federal work, undertaking or business”, but does that mean all of its contractors become “federal works, undertakings or businesses”? Again, I don't think that can really be the case. Would a taxi company given a concession to serve the airport automatically come under federal jurisdiction?
They were performing a function that was required by the Public Health Agency of Canada, but PHAC is subject to the federal Privacy Act, which never came up in the commissioner's report of findings.
This would be trickier in a province like Alberta, where there is a provincial general privacy law that displaces PIPEDA and a health privacy law that does not. (Quebec doesn’t have a health-specific privacy law.)
Now, it may well be that both the federal and the Quebec commissioners thought they didn’t even have to consider jurisdiction because they got the result they were looking for during the course of the investigation: the company said it would change its practices, and whatever might have been problematic under either the Quebec or the federal law has ceased. This seems likely to me, as in my experience the federal Privacy Commissioner’s office will bend over backwards to avoid making any statements related to its jurisdiction that could come back to haunt it later.
This is not just a privacy nerd question, because other things turn on whether a company is a “federal work, undertaking or business”. If Biron is in that category, then provincial labour and employment laws don’t apply to that workplace. Instead, the Canada Labour Code applies. Other federal laws would also suddenly apply to them, not just our privacy law. If I was this company, I’d be left scratching my head.
The second element of this that is problematic is the interaction between our privacy laws and Canada's anti-spam law, also known as CASL. You will recall that the company said that they were justified in sending commercial electronic messages because they had an “existing business relationship” with the people who underwent testing. The Privacy Commissioner really did not address that, but instead focused on the Personal Information Protection and Electronic Documents Act which requires consent for all collection, use and disclosure of personal information. That consent can be implied, particularly where it would be reasonable for the individual to expect that their information will be used for a particular purpose in light of the overall transaction. The Commissioner found that individuals would not expect to have their personal information used for the secondary purpose and therefore there was no implied consent under PIPEDA.
But that is contrary to the express scheme of Canada's anti-spam law. Under CASL, an organization can only send a commercial electronic message to a recipient where it has consent to do so. That consent either must be express or implied. Implied consent under CASL is very different from implied consent under PIPEDA. CASL doesn't care about what the consumer's expectation might be. Consent can be implied where there is an existing business relationship. One of the possible existing business relationships is the purchase of goods or services from the organization in the previous two years. Presumably, buying a COVID test from a vendor would meet that threshold and there would be implied consent for sending commercial electronic messages. I do agree with the federal Privacy Commissioner that doing so because you are ordered to by the Public Health Agency of Canada would really be contrary to the individual's expectation.
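To make that contrast concrete, here is a minimal sketch of the purchase-based branch of CASL’s “existing business relationship” test, assuming a simple two-year lookback from the date of the message. It ignores CASL’s other implied-consent categories and express consent entirely; the function and constant names are my own, and this is an illustration rather than a compliance tool.

```python
from datetime import date, timedelta

# Hypothetical sketch: one branch of CASL implied consent is an "existing
# business relationship" arising from the purchase of goods or services
# within the two years before the message is sent. Other branches
# (inquiries, written contracts, etc.) are ignored here.
EBR_WINDOW = timedelta(days=2 * 365)  # approximate two-year lookback

def casl_implied_consent(last_purchase: date, send_date: date) -> bool:
    """Purchase-based EBR test only; not legal advice."""
    return timedelta(0) <= send_date - last_purchase <= EBR_WINDOW

# A COVID test bought days before the promotional email was sent:
print(casl_implied_consent(date(2021, 1, 5), date(2021, 1, 12)))  # True

# The dissonance: this mechanical test can be satisfied even where, as the
# OPC found under PIPEDA, the individual would never reasonably expect the
# secondary marketing use of their information.
```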
But this really does highlight some of the absurd dissonance between our anti-spam law and our privacy law. Both use the term “implied consent”, but it means radically different things. From this finding from the federal Commissioner, it appears that he is of the view that implied consent under CASL does not lead to deemed implied consent under PIPEDA. CASL expressly permits it, but PIPEDA does not.
When it comes to consent for sending commercial electronic messages, one would think that the piece of legislation that was expressly written and passed by Parliament for that purpose would be the final say, but the OPC certainly does not seem to be of that view.
The Privacy Commissioner carried out this investigation along with the Quebec commissioner, but there is no mention of whether the CRTC, which is the regulator under CASL, was involved.
At the end of the day, I think an existing business relationship was created between the complainant and the company so that there would have been implied consent to send commercial electronic messages, regardless of whether the consumer would have expected it to do so. The Commissioner did highlight that the individual had to be tested under the rules for the Public Health Agency of Canada, leaving room to argue that had the individual gone to the company for a test for other purposes, that might have been a more direct commercial relationship between the parties.
As my friend and tech law colleague Jade Buchanan pointed out on Twitter, “CASL is completely unnecessary when PIPEDA will apply to the use of personal information (name, email, etc.) to send commercial electronic messages.” Personally, I think that one of the reasons why we have CASL is because PIPEDA was seldom enforced by the OPC against spammers, even though clear jurisdiction to do so existed for more than a decade before CASL was created.
This finding confirms that CASL compliance doesn’t guarantee PIPEDA compliance. That is, of course, ridiculous. Implied consent to send a CEM under CASL should be implied consent for the associated use of personal information under PIPEDA.
And there’s nothing in the pending Consumer Privacy Protection Act that would address this dissonance between our privacy and anti-spam laws.
So that is the finding, and we're left scratching our heads a bit or at least have unanswered questions about important matters of jurisdiction and the intersection between our privacy laws and our spam laws.
The government of Canada tabled the Digital Charter Implementation Act, 2022 in the week before parliament rose for its summer break. While the bill is in limbo, what, if anything, should Canadian businesses be doing to prepare for the Consumer Privacy Protection Act?
In the week before the summer break, the Industry Minister tabled in parliament the Digital Charter Implementation Act, which will overhaul Canada’s federal private sector privacy law. It has been long anticipated and, for many, is long overdue. With parliamentarians off for the summer, what can we expect and what should businesses be doing to get ready for it?
I expect that when the house resumes, the bill will be referred to either the Standing Committee on Industry, which is where PIPEDA went more than 20 years ago, or to the Standing Committee on Access to Information, Privacy and Ethics.
I have to say that the current government is very unpredictable. When the previous version of this bill, Bill C-11, was tabled in 2020 as the Digital Charter Implementation Act, 2020, it just sat there with no referral to committee and seemed to not be a priority at all. If the government is serious about privacy reform, it should get this thing moving when parliament is back in session.
When it gets to committee, the usual cast of characters will appear to provide comments. First up will be the Minister of Industry and his staff. Then will come the Privacy Commissioner of Canada, who will only have had a few months in office at that point. I would not be surprised to see provincial privacy commissioners have their say, and maybe even data protection authorities from other countries. Then industry and advocacy groups will have their say.
The Commissioner was very critical of the C-11 version of the bill, and it appears that most of his suggestions have gone unheeded. I expect that between then and now there has been a lot of consultation and lobbying going on behind the scenes, resulting in the few changes between C-11 and C-27. It will be interesting to see how responsive the committee and the government are to making changes to the bill.
I would not be surprised to see this bill passed, largely in its current form, before the end of the year. But even if it speeds through the House of Commons and the Senate, I do not expect that we will see this law in effect for some time. In order for the Consumer Privacy Protection Act and the Personal Information and Data Protection Tribunal Act to be fully in force, the government will have a lot of work to do.
The biggest effort will be standing up the new tribunal under the Personal Information and Data Protection Tribunal Act. Doing so will not be a trivial matter. Members have to be recruited, and at least three of them have to have expertise in information and privacy law. They’ll need offices, staff, a registry and IT infrastructure, and then they’ll need to make their rules of procedure. I can’t see that taking any less than a year, even if the government is already informally recruiting for those roles.
An example I’d look at is the College of Patent Agents and Trademark Agents, which was established by a bill passed in December 2018 and opened for business on June 28, 2021. Essentially, it took two and a half years between the passing of the bill and the College being open for business. The College was probably more complicated to set up than the tribunal will be, but I think it provides some insight.
Personally, I don’t think the CPPA can be phased in without the tribunal operating as a going concern. There are transitional provisions related to complaints that are being dealt with by the Commissioner prior to the coming into force of the CPPA, but otherwise the existence of the tribunal is essential to the operation of the CPPA and the Commissioner’s mandate.
So if I had to look into my crystal ball, I don’t think we’ll see this fully in effect for at least a year and a half.
So should companies be doing anything now? I think so. When the CPPA and the Tribunal Act come into force, they will be fully in force, not phased in. In addition to making your politicians aware of any concerns you have, companies should be looking very closely at their current privacy management program – if any – to determine whether it will be up to snuff.
Section 9 of the Act says that “every organization must implement and maintain a privacy management program that includes the policies, practices and procedures the organization has put in place to fulfill its obligations under this Act, including policies, practices and procedures respecting
(a) the protection of personal information;
(b) how requests for information and complaints are received and dealt with;
(c) the training and information provided to the organization’s staff respecting its policies, practices and procedures; and
(d) the development of materials to explain the organization’s policies and procedures.”
It then says “In developing its privacy management program, the organization must take into account the volume and sensitivity of the personal information under its control.”
This is, of course, very similar to the first principle of the CSA Model Code that’s in PIPEDA. But section 10 of the CPPA says the Commissioner can ask for it and all of its supporting documentation at any time.
I can imagine the OPC sending out requests for all of this documentation to a huge range of businesses shortly after the Act comes into force.
So what does a privacy management program include? It of course includes your publicly-facing privacy statement described in section 62. What has to be in this document will change a lot compared to PIPEDA. It has to explain in plain language what information is under the organization’s control and give a general account of how the organization uses that personal information.
If the organization uses the “legitimate interest” consent exception, the privacy statement has to include a description of that. If the organization uses any automated decision system to make predictions, recommendations or decisions about individuals that could have a “significant impact on them”, that has to be described. It also has to say whether or not the organization carries out any international or interprovincial transfer or disclosure of personal information that may have reasonably foreseeable privacy implications. You also have to state the retention periods applicable to sensitive personal information, then explain the process for questions, complaints, access requests and requests for deletion. Most privacy statements don’t currently include all this information.
You need to assess what personal information you have, where it is, who has it, who has access to it, what jurisdiction it is in or exposed to, how it is secured, when you collected it, what the purposes for collection were, whether there are any new purposes, and whether those purposes have expired.
A good starting point for your privacy management program is to document all the personal information under the organization’s control and the purposes for which it is to be used. Section 12(3) of the CPPA requires that this be documented. You will also need to ensure that all of these purposes are appropriate, using the criteria in section 12(2).
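To illustrate (and only to illustrate) the kind of record this documentation exercise produces, here is a minimal sketch of one entry in a personal-information inventory. The CPPA does not prescribe any format; the field names below are my own invention, tracking the questions listed above.

```python
from dataclasses import dataclass

# Hypothetical sketch of one entry in a personal-information inventory.
# The CPPA does not prescribe a format; these fields simply track what you
# hold, where it is, who can access it, why you have it and for how long.
@dataclass
class PersonalInfoRecord:
    data_element: str               # e.g. "customer email address"
    storage_location: str           # system and jurisdiction where it is held
    access_roles: list[str]         # who has access to it
    collection_purposes: list[str]  # documented purposes, per s. 12(3)
    sensitive: bool                 # minors' information is deemed sensitive
    retention_period: str           # retention schedule for this element

record = PersonalInfoRecord(
    data_element="customer email address",
    storage_location="CRM database, hosted in Canada",
    access_roles=["marketing", "customer support"],
    collection_purposes=["order confirmation", "service notices"],
    sensitive=False,
    retention_period="24 months after account closure",
)
print(record.collection_purposes)  # each purpose must also be appropriate (s. 12(2))
```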
You’ll also want to review whether any of the consent exceptions related to business activities under 18(1) or legitimate interests in section 18(3) could be applicable, and document them.
Under s. 18(4), this documentation will have to be provided to the Commissioner on request.
You will also need to document the retention schedule for all of your personal information holdings, and make sure they are being followed. And remember, all information related to minors is deemed to be sensitive and the retention schedule for sensitive information has to be included in your privacy statement.
Next, you’ll want to inventory and document all of your service providers who are collecting, using or disclosing personal information on your behalf. You’ll need to review all of the contracts with those service providers to make sure each one provides a level of protection equivalent to the controlling organization’s own obligations. It should be noted that the definition of “service provider” in the Act expressly includes affiliated companies, so you’ll need to make sure that intercompany agreements are in place to address any personal information that may be transferred to affiliates.
You’ll want to check your processes for receiving questions, complaints and access requests from individuals. You may need to tweak your systems or processes to make sure that you can securely delete or anonymise data where required.
And last, but certainly not least, you’ll want to look very closely at your data breach response plan. It needs to ensure that all suspected data breaches are identified, properly escalated and reviewed. Any breach itself, of course, has to be stopped, mitigated and investigated. The details will need to be recorded, and you’ll also want to think about the process for getting legal advice at that stage, so that information you may want to keep privileged will be protected and you can understand your reporting and notification obligations.
At the end of the day, the CPPA is not a radical departure from the existing framework of PIPEDA. It requires greater diligence and what we in the privacy industrial complex call “privacy maturity”. Even if it didn’t, the significant penalties and the cost of dealing with investigations and inquiries by the Commissioner and possible hearings before the tribunal should be enough to convince organizations to up their privacy games.
Finally, the government of Canada has tabled its long-awaited privacy law, intended to completely overhaul Canada’s private sector privacy law, and rocket the country to the front of the pack for protecting privacy. Not quite, but I’ll give you an overview of what it says.
Highlights
On June 16, 2022, Industry Minister François-Philippe Champagne finally tabled in the House of Commons Bill C-27, called the “Digital Charter Implementation Act, 2022”. This is the long-awaited privacy bill that is slated to replace the Personal Information Protection and Electronic Documents Act, which has regulated the collection, use and disclosure of personal information in the course of commercial activity in Canada since 2001.
PIPEDA, contrary to what Minister Champagne said at the press conference later that day, has been updated a number of times but there really has been a broad consensus that it was in need of a more general overhaul.
The bill is very similar to Bill C-11, which was tabled in 2020 as the Digital Charter Implementation Act, 2020, and which languished in parliament until dying when the federal government called the last election.
The bill creates three new laws. The first is the Consumer Privacy Protection Act, which is the main privacy law. The second is the Personal Information and Data Protection Tribunal Act and the third is the Artificial Intelligence and Data Act, which I’ll have to leave to another episode.
I don’t plan to do a deep dive into the bill in this video, as I want to spend more time poring over its detailed provisions. We can’t just do a line-by-line comparison with PIPEDA, as the Bill is in a completely different structure than PIPEDA. You may recall that PIPEDA included a schedule taken from the Canadian Standards Association Model Code for the Protection of Personal Information. The statute largely said “follow that”, and there are a bunch of provisions in the body of the Act that modify those standards or set out how the law is overseen.
The most significant difference is what many privacy advocates have been calling for: the Privacy Commissioner is no longer an ombudsman. The law includes order-making powers and punitive penalties. The Bill also creates a new tribunal called the Personal Information and Data Protection Tribunal, which replaces the current role of the Federal Court under PIPEDA with greater powers.
Other than order-making powers, I don’t see much of a difference between what’s required under the new CPPA and what diligent, privacy-minded organizations have been doing for years.
This is a high-level overview of what’s in Bill C-27, and I’ll certainly do deeper dives into its provisions in later videos.
Does the law apply any differently?
PIPEDA applied to the collection, use and disclosure of personal information in the course of commercial activity and to federally-regulated workplaces. That hasn’t changed, but a new section 6(2) says that the Act specifically applies to personal information that is collected, used or disclosed interprovincially or internationally. The Privacy Commissioner had in the past asserted that this was implied, but it was never written in the Act. Now it will be. Two things about that are problematic. The first is that it’s not expressly limited to commercial activity, so there’s an argument to be made that it would apply to non-commercial or employee personal information that crosses borders. The second is that a company with operations in British Columbia and Alberta, when it moves data from one province to another, not only has to comply with the substantially similar privacy laws of each province but now also has to comply with the Consumer Privacy Protection Act. That seems very redundant.
It includes the same carve-outs for government institutions under the Privacy Act, personal or domestic use of personal information, journalistic, artistic and literary uses of personal information and business contact information.
We really could have benefitted from a clear extension of the Act to personal information that is imported from Europe so we can have confidence that the adequacy finding from the EU, present and future, really applies across the board.
It does have an interesting approach to anonymous and de-identified data. It officially creates these two categories. It defines “anonymize” as “to irreversibly and permanently modify personal information, in accordance with generally accepted best practices, to ensure that no individual can be identified from the information, whether directly or indirectly, by any means.” So there effectively is no reasonable prospect of re-identification. To “de-identify” means “to modify personal information so that an individual cannot be directly identified from it, though a risk of the individual being identified remains.” You’re essentially using data with the identifiers removed.
The legislation does not regulate anonymous data, because there is no reasonable prospect of re-identification. It does regulate de-identified data and generally prohibits attempts to re-identify it. The law also says that in some cases, de-identified data can be used or even has to be used in place of fully identifiable personal information.
What happened to the CSA model code?
When you look at the CPPA, you’ll immediately see that it is very different. It’s similar in structure to the Personal Information Protection Acts of Alberta and British Columbia, in that the principles of the CSA Model Code are not in a schedule but in the body of the Act. And the language of these principles has necessarily been modified to be more statutory, rather than the sort of language you see in an industry standards document.
Any changes to the 10 CSA Principles?
The ten principles themselves largely haven’t been changed, and this should not be a surprise. Though written in the 1990s, they were based on the OECD guidelines, and we see versions of all ten principles in all modern privacy laws.
What has changed is the additional rigor that organizations have to implement, or more detail that’s been provided about how they have to comply with the law.
For example, principle 1 of the CSA Model Code required that an organization “implement policies and practices to give effect to the CSA Model Code principles”. The CPPA explicitly requires that an organization have a privacy management program:
Privacy management program
9 (1) Every organization must implement and maintain a privacy management program that includes the policies, practices and procedures the organization has put in place to fulfill its obligations under this Act, including policies, practices and procedures respecting
(a) the protection of personal information;
(b) how requests for information and complaints are received and dealt with;
(c) the training and information provided to the organization’s staff respecting its policies, practices and procedures; and
(d) the development of materials to explain the organization’s policies and procedures.
Volume and sensitivity
(2) In developing its privacy management program, the organization must take into account the volume and sensitivity of the personal information under its control.
This privacy management program has to be provided to the Privacy Commissioner on request.
With respect to consent, organizations expressly have to record and document the purposes for which any personal information is collected, used or disclosed. This was implied in the CSA Model Code, but is now expressly spelled out in the Act.
Section 15 lays out in detail what is required for consent to be valid. Essentially, it requires not only identifying the purposes, but also communicating in plain language how the information will be collected, used or disclosed, the reasonably foreseeable consequences, the types of information involved and the parties to whom the information may be disclosed.
I’ll have to save digging into the weeds for another episode.
Collection and use without consent
One change compared to PIPEDA that will delight some and enrage others is the circumstances under which an organization can collect and use personal information without consent. Section 18 allows collection and use without consent for certain business activities, such as those reasonably necessary to provide a service the individual has requested, for security purposes, for safety, or for other prescribed activities. Notably, this exception cannot be used where the personal information is to be collected or used to influence the individual’s behaviour or decisions.
There is also a “legitimate interest” exception, which requires an organization to document any possible adverse effects on the individual, mitigate them and finally weigh whether the legitimate interest outweighs any adverse effects. It’s unclear how “adverse effects” would be measured.
Like PIPEDA, an individual can withdraw consent subject to similar limitations that were in PIPEDA. But what’s changed is that an individual can require that their information be disposed of. Notably, disposal includes deletion and rendering it anonymous.
Law enforcement access
On a first review, it doesn’t look like the circumstances in which an organization can collect, use or disclose personal information without consent have changed much compared to section 7 of PIPEDA.
In my view, it is very interesting that the exceptions that can apply when the government or the cops come looking for personal information have not changed from section 7(3) of PIPEDA. For example, the provision that the Supreme Court of Canada in R v Spencer said was meaningless is essentially reproduced in full.
44 An organization may disclose an individual’s personal information without their knowledge or consent to a government institution or part of a government institution that has made a request for the information, identified its lawful authority to obtain the information and indicated that the disclosure is requested for the purpose of enforcing federal or provincial law or law of a foreign jurisdiction, carrying out an investigation relating to the enforcement of any such law or gathering intelligence for the purpose of enforcing any such law.
The Supreme Court essentially asked “what the hell does lawful authority mean?”, and the government has made no effort to answer that in Bill C-27. But that’s just as well, since companies should always say “come back with a warrant”.
Investigations
The big changes are with respect to the role of the Privacy Commissioner. The Commissioner is no longer an ombudsman with a focus on nudging companies to compliance and solving problems for individuals. It has veered strongly towards enforcement.
As with PIPEDA, enforcement starts with a complaint by an individual, or the Commissioner can initiate an investigation of his own accord. There are more circumstances under the CPPA where the Commissioner can decline to investigate. After the investigation, the matter can be referred to an inquiry.
Inquiries seem to have far more procedural protections for fairness and due process than the existing ad hoc system. For example, each party is guaranteed a right to be heard and to be represented by counsel. To my knowledge the OPC has always done this, but now it will be baked into the law. Also, the Commissioner has to develop rules of procedure and evidence that have to be followed, and these rules have to be made public.
At the end of the inquiry, the Commissioner can issue orders requiring an organization to take measures to comply with the Act or to stop doing something that is in contravention of the Act. The Commissioner can continue to name and shame violators. Notably, the Commissioner cannot levy any penalties himself.
The Commissioner can recommend that penalties be imposed by the new Personal Information and Data Protection Tribunal.
The Tribunal
The legislation creates a new specialized tribunal which hears cases under the CPPA. Its jurisdiction will likely grow to include more matters: the “online harms” consultation that took place in the last year anticipated that certain questions would be determined by this tribunal as well.
Compared to C-11, the new bill requires that at least three of the tribunal members have expertise in privacy.
Its role is to determine whether any penalties recommended by the Privacy Commissioner are appropriate. It also hears appeals of the Commissioner’s findings, appeals of interim or final orders of the Commissioner and a decision by the Commissioner not to recommend that any penalties be levied.
Currently, under PIPEDA, complainants and the Commissioner can seek a hearing in the federal court after the commissioner has issued his finding. That hearing is “de novo”, so that the court gets to make its own findings of fact and determinations of law, based on the submissions of the parties. The tribunal, in contrast, has a standard of review that is “correctness” for questions of law and “palpable and overriding error” for questions of fact or questions of mixed law and fact. These decisions are subject to limited judicial review before the Federal Court.
So what about these penalties? They are potentially huge, and I have a feeling that the big numbers were pulled out of the air in order to support political talking points that they are the most punitive in the G7. The maximum administrative monetary penalty that the tribunal can impose in one case is the higher of $10,000,000 and 3% of the organization’s gross global revenue in the financial year before the one in which the penalty is imposed.
The Act also provides for quasi-criminal prosecutions, which can get even higher.
The Crown prosecutor can decide whether to proceed as an indictable offence with a fine not exceeding the higher of $25,000,000 and 5% of the organization’s gross global revenue or a summary offence with a fine not exceeding the higher of $20,000,000 and 4% of the organization’s gross global revenue. If it’s a prosecution, then the usual rules of criminal procedure and fairness apply, like the presumption of innocence and proof beyond a reasonable doubt.
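To put numbers on those “higher of” formulas, here is a quick worked example. The thresholds and percentages are the ones described above; the revenue figure is invented for illustration.

```python
# Worked example of the "higher of" penalty caps described above, using an
# invented organization with $2 billion in gross global revenue in the
# financial year before the one in which the penalty is imposed.
revenue = 2_000_000_000

amp_cap = max(10_000_000, 0.03 * revenue)         # administrative monetary penalty
indictable_cap = max(25_000_000, 0.05 * revenue)  # indictable offence fine
summary_cap = max(20_000_000, 0.04 * revenue)     # summary offence fine

print(f"AMP cap:        ${amp_cap:,.0f}")         # $60,000,000
print(f"Indictable cap: ${indictable_cap:,.0f}")  # $100,000,000
print(f"Summary cap:    ${summary_cap:,.0f}")     # $80,000,000
```

For an organization of that size, the percentage branch governs in every case; the fixed dollar floors only matter for smaller organizations.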
The government wants border agents to be able to search your smartphones and laptops without any suspicion that you’ve done anything wrong. I think that’s a problem. There are a lot of problematic bills currently pending before parliament but one in particular is not getting enough attention. It’s Bill S-7, called An Act to amend the Customs Act and the Preclearance Act, 2016. Today I’m going to talk about the bill, digital device searches and what I think about it all.
I don’t know about you, but my smartphone and my laptop contain a vast amount of personal information about me. My phone is a portal to every photo of my kids, messages to my wife, my banking and other information. It contains client information. And the Canada Border Services Agency wants to be able to search it without any suspicion that I’ve committed a crime or violated any law.
Bill S-7, which was introduced in the Senate on March 31, 2022, is intended to give the CBSA the power to go browsing through your smartphone and mine on what amounts to a whim. It also extends the same powers to US Homeland Security agents who carry out pre-departure pre-clearance at Canadian airports.
If you’ve ever watched the TV show “Border Security Canada”, you would have seen how routine these sorts of searches are. Many of the searches do produce evidence of illegal activity, like smuggling, immigration violations and even importation of child sexual abuse materials. The question is not whether these searches should ever be permissible, but under what circumstances. The government wants it to be with a very low threshold, while I’m confident that the Charter requires more than that.
We all know there’s a reduced expectation of privacy at the border, where you can be pulled over to secondary screening and have your stuff searched. The Customs Act specifically gives CBSA the power to search goods. But a big problem has arisen because the CBSA thinks the ones and zeros in your phone are goods they can search.
Smartphones were unheard of when the search powers in the Customs Act were last drafted, and the CBSA thinks the Act gives them carte blanche to search your devices. In the meantime, the courts have rightly said that’s going too far. So the government is looking to amend the Customs Act to authorize device searches where the CBSA officer has a “reasonable general concern” about a contravention of the law.
One big issue is what the hell does “reasonable general concern” mean? In law, we’re used to language like “reasonable grounds to believe a crime has been committed” or even “reasonable grounds to suspect”, but reasonable general concern is not a standard for any sort of search in Canadian law. Your guess is as good as mine, but it seems pretty close to whether the officer's “spidey sense is tingling”.
S-7 is trying to fix a problem and I think the way they’re doing it will ultimately be found to be unconstitutional. To see that, we have to look at the competing interests at play in this context and look at what the courts have recently said about device searches at the border.
It is clear that you have a reduced expectation of privacy at the border, but it is not completely eliminated. And the Charter is not suspended at the border. For example, border officers can’t detain and strip search you just because they want to. These searches legally cannot be performed unless an officer has reasonable grounds to suspect some legal contravention, notably the concealment of goods. And they can’t strip search you unless there is a reason to do so, like looking for contraband smuggled on your person.
Meanwhile, there is a growing body of case law that says individuals have a very high expectation of privacy in our digital devices. For example, in a case called Fearon from 2014, the Supreme Court modified the common law rule related to search incident to arrest for smartphones, specifically due to the immense privacy implications in searching such devices. Upon arrest, they can routinely search you, your clothes and your belongings, but they can only search your smartphone if certain criteria are met.
The Supreme Court has clearly established that the greater the intrusion on privacy, the greater the constitutional protections and a greater justification for the search is required. And while there may be a diminished expectation of privacy at the border, this expectation is not completely extinguished.
At the same time, there has been a developing body of case law saying that suspicionless searches of personal electronic devices at the border violate the Charter.
The leading Supreme Court of Canada case on privacy at the border is from 1988 called Simmons. In that case, the Court recognized that the degree of personal privacy reasonably expected by individuals at the border is lower than in most other situations. Three distinct types of border searches, with an increasing degree of privacy expectation, were identified: (1) routine questioning which every traveller undergoes at a port of entry, sometimes accompanied by a search of baggage and perhaps a pat or frisk of outer clothing; (2) a strip or skin search conducted in a private room after a secondary examination; and (3) a body cavity search. The first category was viewed as the least intrusive type of routine search, not raising any constitutional issues or engaging the rights protected by the Charter. Essentially, this category can be done without any suspicion of wrongdoing.
Since then, customs agents have treated the search of a phone the same as a search of your luggage, which they have concluded they can do without any suspicion of wrongdoing.
The Alberta Court of Appeal in 2020, in a case called Canfield, said that customs’ treatment of personal electronic devices was wrong, and it does not fit into that first category. The court noted:
“There have been significant developments, both in the technology of personal electronic devices and in the law relating to searches of such devices, since Simmons was decided in 1988. A series of cases from the Supreme Court of Canada over the past decade have recognized that individuals have a reasonable expectation of privacy in the contents of their personal electronic devices, at least in the domestic context. While reasonable expectations of privacy may be lower at the border, the evolving matrix of legislative and social facts and developments in the law regarding privacy in personal electronic devices have not yet been thoroughly considered in the border context.”
The court then said:
“We have also concluded that s 99(1)(a) of the Customs Act is unconstitutional to the extent that it imposes no limits on the searches of such devices at the border, and is not saved by s 1 of the Charter. We accordingly declare that the definition of “goods” in s 2 of the Customs Act is of no force or effect insofar as the definition includes the contents of personal electronic devices for the purpose of s 99(1)(a).”
The Court in Canfield essentially said there has to be a minimal threshold in order to justify a search of a digital device, but they would leave it to parliament to determine what that threshold is.
But the next year, the same Alberta Court of Appeal considered an appeal in a case called Al Askari. In that case, the question was related to a search of a personal electronic device justified under immigration legislation. The Court found that like in Canfield, there has to be a threshold and it can’t be suspicionless.
The court commented favourably on the very reasoned approach put forward by my friend and Schulich School of Law colleague Professor Robert Currie.
“Prof Currie suggests that the critical issue is measuring the reasonably reduced expectation of privacy at the border and the extent of permissible state intrusion into it. In his view, this is best achieved through the established test in R v Collins, [1987] 1 SCR 265, 308. Was the search authorized by law? Is the law itself reasonable? Is the search carried out in a reasonable manner?
When assessing whether the law itself is reasonable, Prof Currie proposes a standard of reasonable suspicion because it is tailor-made to the border context. It must amount to more than a generalized suspicion and be based on objectively reasonable facts within the totality of the circumstances: 311. On the reasonableness of the search, he advocates for an inquiry into whether the search was limited in scope and duration.”
The Court in both Canfield and Al Askari noted that not all searches are the same, and there are degrees of intrusion into personal electronic devices. Asking to look at a receipt for imported goods on a phone is very different from just perusing the full device looking for anything at all.
So fast forward to March 2022. The Alberta Court of Appeal said it’s up to Parliament to set the threshold and for the courts to determine whether it is compliant with the Charter. So Parliament is proposing a threshold of “reasonable general concern” to search documents on a personal digital device. This is setting things up for years of further litigation.
The creation of a “reasonable general concern” standard is not only new and undefined in the bill, it is inconsistent with other legislation governing border searches. The bill also does not impose any obligation that the type of search carried out be appropriate to what is “of general concern”, or set any limits on what can be searched on the device once the “reasonable general concern” (whatever that means) threshold is met.
If you look at the case of Fearon, which addressed device searches incident to arrest, the court imposed a bunch of conditions and limits in order to take account of the nature of device searches. Importantly, the extent of the permitted search has to be appropriate to what they legitimately have an interest in. The court said:
“In practice, this will mean that, generally, even when a cell phone search is permitted because it is truly incidental to the arrest, only recently sent or drafted emails, texts, photos and the call log may be examined as in most cases only those sorts of items will have the necessary link to the purposes for which prompt examination of the device is permitted. But these are not rules, and other searches may in some circumstances be justified. The test is whether the nature and extent of the search are tailored to the purpose for which the search may lawfully be conducted. To paraphrase Caslake, the police must be able to explain, within the permitted purposes, what they searched and why”
In the border context, if they are looking into whether someone arriving on a tourist visa actually has a job waiting for them, they shouldn’t go looking for evidence of that in the camera roll. You scan the subject lines of emails rather than prowling through all the mail in the inbox.
Fearon also requires police to carefully document their searches, the rationale, what they looked at and why. There is no such requirement in Bill S-7.
Given years of growing jurisprudence confirming that personal electronic devices contain inherently private information, and the tendency of the courts to impose meaningful limits on searches of them, the creation of this lower threshold is unreasonable, inconsistent with other search standards, and can be expected to run afoul of the Charter.
I think after Canfield and Al Askari, government lawyers and policy makers huddled and tried to invent a standard that could plausibly be called a threshold but was miles below reasonable suspicion. And this is what they came up with. You’ll note that they ignored all the really smart and sensible things that Professor Currie proposed.
What is also very notable is that the government ignored the recommendations made by the House of Commons Standing Committee on Access to Information, Privacy and Ethics in 2017, after it had carried out an extensive study and consultation on the issue of privacy at borders and airports. (I testified at those hearings on behalf of the Canadian Bar Association.) It recommended that “reasonable grounds to suspect” should be the threshold.
The threshold is so low that it’s hardly a threshold at all. It’s a license for the CBSA to continue its practice of routinely searching electronic devices, and it will invite continued legal challenges. I just really wish the legislators would listen to the experts and the courts.
Canadian businesses are routinely asked by police agencies to provide customer information in order to further their investigations or intelligence gathering. The police generally do not care whether the business can legally disclose the information and, in my experience, the police are generally ignorant of privacy laws that restrict the ability of Canadian businesses to cooperate with law enforcement investigations.
For some time, there was some degree of uncertainty about the extent to which Canadian businesses could voluntarily provide information to the police upon request, but this uncertainty has been completely resolved so that it is clear that if the police come knocking, Canadian businesses must respond with “come back with a warrant”.
The uncertainty that used to exist is rooted in section 7 of the Personal Information Protection and Electronic Documents Act, also known as PIPEDA. Section 7 is the part of the law that allows businesses to collect, use or disclose personal information without the consent of individuals. Not surprisingly, there is a provision that dictates whether an organization can or cannot give the police customer information if the police come knocking.
Section 7(3)(c.1) allows a business to disclose personal information to a police agency upon request if the police have indicated that the information is necessary for a range of purposes and have identified their lawful authority to obtain the information. There’s another provision in the Act that deals with what happens when the police show up with a warrant or a production order.
It is clear that in those circumstances, personal information can be disclosed. If it is a valid Canadian court order, it is likely that not providing the information could subject the business to prosecution.
There’s also a provision in the Canadian Criminal Code that makes it clear that the police can ask for anything from a person who is not prohibited by law from disclosing it, which further fed this uncertainty.
So for some time in Canada, the police believed that businesses could disclose information without a warrant as long as it was associated with a lawful investigation. Police believed that the fact that they were investigating a crime was all the “lawful authority” they needed.
Where this would come up most often would be if police had identified illegal online conduct and had the IP address of a suspect. They would seek from an internet service provider the customer name and address that was associated with that IP address at that time. Without that information, they had no suspect to investigate and ISPs hold the keys connecting that IP address with a suspect.
The Canadian Association of Internet Providers actually concluded a form of protocol with Canadian police that would facilitate the provision of this information. Surprisingly, the CAIP was of the view that this was not private information. What would be required was a written request from a police agency indicating that the information was relevant to an investigation of certain categories of online offences, principally related to child exploitation. These letters stated that they were issued under the “authority of PIPEDA”, which is simply absurd.
It is my understanding that the internet providers were generally comfortable with providing this information in connection with such important investigations. For other categories of offences, they would require a production order.
It is also my understanding that some internet providers fine-tuned their terms of service and privacy policies to permit these sorts of disclosures, so that the businesses would have additional cover by saying in fact the customer had consented to disclosure under these circumstances.
One thing to bear in mind, of course, is that this provision in PIPEDA is permissive: if this interpretation was correct, businesses could voluntarily provide this information, but nothing compelled them to do so. They could always insist on a court order, but very often did not.
Some courts found this agreeable and held that evidence provided voluntarily under this scheme was admissible, while other courts found it to be a violation of the suspect’s section 8 rights under the Charter.
Then along came a case called R. v Spencer. In this case, a police officer in Saskatoon, Saskatchewan detected someone sharing a folder containing child pornography using a service called LimeWire. The officer was able to determine the IP address of the internet connection being used by that computer and was able to determine that the IP address was allocated to a customer of Shaw Communications. So the cop sent a written “law enforcement request” to Shaw and Shaw handed over the customer information associated with the account. The cops did not try to obtain a production order first.
The account was actually in the name of the accused’s sister.
The case finally found its way up to the Supreme Court of Canada, where the court had to determine whether the request was a “search” under the Charter. It was. And then the question was whether the search was authorized by law. The Court said it was not.
The police and prosecution, of course, argued that this is just “phone book information” that doesn’t implicate any serious privacy issues. The court disagreed, quoting from a Saskatchewan Court of Appeal decision from 2011 called Trapp:
“To label information of this kind as mere “subscriber information” or “customer information”, or nothing but “name, address, and telephone number information”, tends to obscure its true nature. I say this because these characterizations gloss over the significance of an IP address and what such an address, once identified with a particular individual, is capable of revealing about that individual, including the individual’s online activity in the home.”
Justice Cromwell writing for the court concluded that “Here, the subject matter of the search is the identity of a subscriber whose Internet connection is linked to particular, monitored Internet activity.”
The court said that constitutionally protected privacy includes anonymity. Justice Cromwell wrote, quoting from the Court of Appeal’s decision:
[51] I conclude therefore that the police request to Shaw for subscriber information corresponding to specifically observed, anonymous Internet activity engages a high level of informational privacy. I agree with Caldwell J.A.’s conclusion on this point:
. . . a reasonable and informed person concerned about the protection of privacy would expect one’s activities on one’s own computer used in one’s own home would be private. . . . In my judgment, it matters not that the personal attributes of the Disclosed Information pertained to Mr. Spencer’s sister because Mr. Spencer was personally and directly exposed to the consequences of the police conduct in this case. As such, the police conduct prima facie engaged a personal privacy right of Mr. Spencer and, in this respect, his interest in the privacy of the Disclosed Information was direct and personal.
The court then was tasked with considering what “lawful authority” means in subsection 7(3)(c.1).
The court concluded that the police, carrying out this investigation, did not have the lawful authority that would be required to trigger and permit the disclosure under the subsection. While the police can always ask for the information, they did not have the lawful authority to obtain it. If they had sought a production order, their right to obtain the information and Shaw’s obligation to disclose it would be clear.
What the court did not do was settle what exactly lawful authority means. It does not mean a simple police investigation, even for a serious crime, but what it might include remains unknown.
What is clear, however, is the end result that this subsection of PIPEDA simply does not permit organizations to hand over customer information simply because the police agency is conducting a lawful investigation. If they want the information, they have to come back with a court order.
Just a quick note about other forms of legal process. While production orders are the most common tool used by law enforcement agencies to seek and obtain customer information, a very large number of administrative bodies are able to use different forms of orders or demands. For example, CRTC spam investigators can use something called a notice to produce under the anti-spam legislation, which is not reviewed or approved by a judge in advance.
It is not uncommon for businesses to receive subpoenas, and they need to tread very carefully and read the details of the subpoena. In order to comply with privacy legislation, the organization can only do what it is directed to do in the subpoena, no more. In the majority of cases, the subpoena will direct the company to send somebody to court with particular records. Just sending those records to the litigants or the person issuing the subpoena is not lawful.
Before I wrap up, it should be noted that the rules are different if it is the business itself reporting a crime. Paragraph (c.1) applies where the police come knocking looking for information. Paragraph (d) is the provision that applies where the organization itself takes the initiative to disclose information to the police or a government institution. It specifically says that an organization may disclose personal information without consent where the disclosure is made on the initiative of the organization to a government institution and the organization has reasonable grounds to believe that the information relates to a contravention of the laws of Canada, a province or a foreign jurisdiction that has been, is being or is about to be committed.
This paragraph gives much more discretion to the organization, but it is still limited to circumstances where the organization has reasonable grounds to believe the information relates to such a contravention, and it can only disclose the minimum amount of personal information that’s reasonably necessary for these purposes.
A scenario that comes up relatively often would be if a store is robbed, and there is surveillance video of the robbery taking place including the suspect. The store can provide that video to the police on their own initiative. Contrast that to another common scenario, where the police are investigating a crime and evidence may have been captured on surveillance video. If it is the police asking for it, and not the organization reporting it on their own initiative, the police have to come back with a court order.
At the end of the day, the safest and smartest thing that a business can do when asked for any customer personal information is to simply say come back with a warrant. Even if you think you can lawfully disclose the information, it simply makes sense that it be left to an impartial decision maker such as a judge or a Justice of the Peace to do the balancing between the public interest in the police having access to the information and the individual privacy interest at play.
I had the honour this week of presenting at a continuing education event for judges on privacy civil claims, past, present and future. I was joined by Antoine Aylwin and Erika Chamberlain.
To make it a little more daunting, some of the judges who wrote the decisions I referred to were in the room...
It may be of interest to the privacy nerds who follow my blog, so here's the presentation:
I was very kindly invited back to give a keynote at the Canadian Cyber Summit for the High Technology Crime Investigation Association. I spoke about the role of lawyers in incident response and how greater understanding between lawyers and the technical folks of their respective roles can add value to the overall engagement. I also discussed the importance of legal advice privilege in incident response. Here is a copy of the presentation I gave, in case it’s of interest ...
Canada’s anti-spam law is about much more than just spam. It also regulates the installation of software. Like the rest of the law, it is complicated and convoluted and has significant penalties. If you’re a software developer or an IT admin, you definitely need to know about this.
So we’re talking about Canada’s anti-spam law. The official title is much longer, and it also includes two sets of regulations to make it more complicated.
Despite the snappy title that most of us use: Canada’s Anti-Spam Law, it is about more than just spam. It has often-overlooked provisions that make it illegal to install software on another person’s computer – or cause it to be installed – without their consent.
It was clearly put into the law to go after bad stuff like malware, viruses, rootkits, trojans, malware bundled with legitimate software and botnets. But it is not just limited to malevolent software. It potentially affects a huge range of software.
So here is the general rule from Section 8 of the Act:
8. (1) A person must not, in the course of a commercial activity, install or cause to be installed a computer program on any other person’s computer system or, having so installed or caused to be installed a computer program, cause an electronic message to be sent from that computer system, unless
(a) the person has obtained the express consent of the owner or an authorized user of the computer system and complies with subsection 11(5); or
(b) the person is acting in accordance with a court order.
Let’s break that down. The first part is that it has to be part of a commercial activity. I’m not sure they meant to let people off the hook if they’re doing it for fun and giggles. The “commercial activity” part is likely there so that the government can say this is justified under the federal “general trade and commerce power”.
They could have used the federal criminal law jurisdiction, but then they’d be subject to the full due process and fairness requirements of the Canadian Charter of Rights and Freedoms, and the government did not want to do that. They’d rather it be regulatory and subject to much lower scrutiny.
Then it says you can’t install, or cause to be installed, a computer program on another person’s computer without the express consent of the owner or an authorized user of the computer. (The definition of “computer system” would include desktops, laptops, smartphones, routers and appliances.)
The express consent has to be obtained in the manner set out in the Act, and I’ll discuss that later.
It additionally prohibits installing a computer program on another’s computer and then causing it to send electronic messages. This makes the creation of botnets for sending spam extra bad.
The definition of the term “computer program” is taken from the Criminal Code of Canada:
“computer program” means computer data representing instructions or statements that, when executed in a computer system, causes the computer system to perform a function; (programme d’ordinateur)
In addition to defined terms, there are some key ideas and terms in the Act that are not well-understood.
It talks about “installing” a computer program, but “install” is not defined in the legislation, and the CRTC hasn’t provided any helpful guidance.
I wouldn’t think that running malware once on someone’s system for a malevolent purpose would be captured in the definition of “install”, though it likely is the criminal offence of mischief in relation to data.
What about downloading source code that is not yet compiled? Or then compiling it?
It is certainly possible to load up software and have it ready to execute without being conventionally “installed”. Does it have to be permanent? Or show up in your installed applications directory?
I don’t know.
There’s also the question of who is an owner or an authorized user of a computer system.
If it is leased, the leasing company likely owns the computer and we’ve certainly seen reports and investigations of spyware and intrusive software installed on rented and leased laptops.
My internet service provider owns my cable modem, so it’s apparently ok if they install malware on it.
For authorized users, it means any person who is authorized by the owner to use the computer system. Interestingly, it is not limited by the scope of the authorization. It seems to be binary. Either you are authorized or you are not.
There are some scenarios to think about when considering owners and authorized users.
For example, if a company pre-installs software on a device at the factory or before ownership transfers to the end customer, that company is the owner of the device and can install whatever they like on it.
Many companies issue devices like laptops and smartphones to employees. Those employers own the devices and can install any software on them.
But increasingly, employees are using devices that they own for work-related purposes, and employers may have a legitimate interest in installing mobile device management and security software on those devices. Unless there’s a clear agreement that the employer gets to do so, they may find themselves to be offside the law.
So, in short, it is prohibited to do any of these things without the express consent of the owner or authorized user:
(a) install a computer program of any kind;
(b) cause a computer program of any kind to be installed, such as hiding or bundling additional software in an installer that the owner or authorized user has installed. We sometimes see this when downloading freeware or shareware, and the installer includes other software that the user didn’t ask for;
(c) or cause such a program that has been installed to send electronic messages after installation.
Of course, someone who is the owner or authorized user of the particular device can put whatever software they want on the device. This only covers installation by people who are not the owner or the authorized user of the device.
There are some exceptions that people should be aware of.
It is common to install software and to have it automatically update. This is ok if the user consents to the auto updates. But that probably doesn't apply if the update results in software that does very different things compared to when it was first installed.
There are some cases where consent is deemed or implied.
CASL deems users to consent to the installation of the following list of computer programs if the user’s conduct shows it is reasonable to believe they consented to it. It is a weird list.
At the top of the list are “cookies”. To start with, anyone who knows what cookies are knows they are not computer programs. They are text files, and including them on this list tells me that the people who wrote this law may not know as much as you may hope about this subject.
It then includes HTML code. HTML is hypertext markup language. I suppose it is data that represents instructions to a computer on how to display text and other elements. I guess the next question is whether this includes the variations of HTML like XHTML? I don’t know. But if HTML is a computer program, then so are fonts and Unicode character codes.
Next it refers to “Java Scripts”. Yup. That’s what it says. We are told by Industry Canada that this is meant to refer to JavaScript, which is different from a Java script. Not only could they have avoided such a stupid mistake, but they could also have been clear about whether they were referring to JavaScript run in a browser (with its attendant sandbox) or something else.
Next on the list are “operating systems”, which seems very perverse to include. The operating system is the mostly invisible layer that lies between the computer hardware and the software that runs on top of it. Changes to the operating system can have a huge impact on the security and privacy of the user, and much of what it does happens below the surface. And there is no clarity about whether an “operating system” on this list includes the software that often comes bundled with it. When I replace the operating system on my Windows PC, I get a new version of a whole bunch of standard software that comes with it, like the file explorer and Notepad. It would make sense for a user who upgrades from one version of macOS or Windows to another to be treated as consenting to the bundled software that comes along with the upgrade. But I can make an open source operating system distro that’s full of appalling stuff, in addition to the operating system.
Finally, it says any program executable only through the use of another computer program for which the user has already consented to installation. Does this include macros embedded in Word documents? Not sure.
It makes sense to have deemed consent situations or implied consent, but we could have used a LOT more clarity.
There are some exceptions to the general rule of getting consent, two of which are exclusively reserved to telecommunications service providers, and a final one that relates to programs that exclusively correct failures in a computer system or a computer program.
This is understandable, but this would mean that a telco can install software on my computer without my knowledge or consent if it’s to upgrade their network.
So how do you get express consent? It’s like the cumbersome express consent for commercial electronic messages, but with more.
When seeking express consent, the installer has to identify the following (see the sketch after this list):
the reason;
their full business name;
their mailing address, and one of: a telephone number, email address, or web address;
if consent is sought on behalf of another person, a statement indicating who is seeking consent and on whose behalf consent is being sought;
a statement that the user may withdraw consent for the computer program’s installation at any time; and
a clear and simple description, in general terms, of the computer program’s function and purposes.
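For the developers in the audience, here’s a minimal sketch, in Python, of how those required disclosures might be captured as a record before a consent prompt is shown. To be clear, this is purely illustrative: the field names are mine, not terms from the Act or from any CRTC guidance.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ConsentDisclosure:
    # Hypothetical field names tracking the statutory list above.
    reason: str                   # why the program is being installed
    business_name: str            # installer's full business name
    mailing_address: str
    contact: str                  # one of: telephone number, email address, or web address
    on_behalf_of: Optional[str]   # who consent is sought for, if on behalf of another person
    withdrawal_notice: str        # statement that consent may be withdrawn at any time
    function_description: str     # clear, simple description of the program's function and purposes

def ready_to_present(d: ConsentDisclosure) -> bool:
    # Rough completeness check before showing the consent request.
    required = [d.reason, d.business_name, d.mailing_address,
                d.contact, d.withdrawal_notice, d.function_description]
    return all(field.strip() for field in required)

Nothing magic here; the point is that every one of these items should exist, in writing, before the install happens.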
But if an installer “knows and intends” that a computer program will cause a computer system to operate in a way its owner doesn’t reasonably expect, the installer must provide a higher level of disclosure and acknowledgement to get the user’s express consent.
This specifically includes the following functions, all of which largely make sense (I’ve sketched these in code after the list):
collecting personal information stored on the computer system;
interfering with the user’s control of the computer system;
changing or interfering with settings, preferences, or commands already installed or stored on the computer system without the user’s knowledge;
changing or interfering with data stored on the computer system in a way that obstructs, interrupts or interferes with lawful access to or use of that data by the user;
causing the computer system to communicate with another computer system, or other device, without the user’s authorization;
installing a computer program that may be activated by a third party without the user’s knowledge; and
performing any other function CASL specifies (there are none as yet).
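If it helps to see that list in code, here’s a hypothetical way a developer might enumerate those functions and flag when the enhanced disclosure obligation could be engaged. The names are my own shorthand for the statutory list, and this is a sketch, not an official taxonomy.

from enum import Enum, auto

class SensitiveFunction(Enum):
    # My shorthand labels for the functions listed above.
    COLLECTS_PERSONAL_INFO = auto()
    INTERFERES_WITH_CONTROL = auto()
    CHANGES_SETTINGS = auto()
    INTERFERES_WITH_DATA_ACCESS = auto()
    UNAUTHORIZED_COMMUNICATION = auto()
    THIRD_PARTY_ACTIVATION = auto()

def enhanced_disclosure_engaged(declared: set[SensitiveFunction],
                                knows_and_intends: bool) -> bool:
    # The higher obligation applies where the installer "knows and intends"
    # that the program will cause the system to operate in a way the owner
    # doesn't reasonably expect, and it performs one or more listed functions.
    return knows_and_intends and len(declared) > 0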
Like the unsubscribe requirement for commercial electronic messages, anyone who installs software that meets this higher threshold has to include an electronic address, valid for at least one year, so the user can ask the installer to remove or disable the program.
A user can make this request if she believes the installer didn’t accurately describe the “function, purpose, or impact” of the computer program when the installer requested consent to install it. If the installer gets a removal request within one year of installation, and consent was based on an inaccurate description of the program’s material elements, then the installer must assist the user in removing or disabling the program as soon as feasible – and at no cost to the user.
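Here’s a small sketch of that removal-assistance condition as I read it, assuming you record the installation date and whether the consent-time description turned out to be accurate. Hypothetical names again, and the one-year window is measured in days for simplicity.

from datetime import date, timedelta

def must_assist_removal(installed_on: date, requested_on: date,
                        description_was_accurate: bool) -> bool:
    # The installer must help remove or disable the program, at no cost,
    # if the request arrives within one year of installation and consent
    # rested on an inaccurate description of the program's material elements.
    within_one_year = requested_on <= installed_on + timedelta(days=365)
    return within_one_year and not description_was_accurate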
So how is this enforced? CASL is largely overseen by the enforcement team at the Canadian Radio-television and Telecommunications Commission.
Overall, I see them at least making more noise about their enforcement activities in the software arena than the spam arena.
In doing this work, the CRTC has some pretty gnarly enforcement tools.
First of all, they can issue “notices to produce” which are essentially similar to Criminal Code production orders except they do not require judicial authorization. These can require the recipient of the order to hand over just about any records or information, and unlike Criminal Code production orders, they can be issued without any suspicion of unlawful conduct. They can be issued just to check compliance. I should do a whole episode on these things, since they really are something else in the whole panoply of law enforcement tools.
They can also seek and obtain search warrants, which at least are overseen and have to be approved by a judge.
Before CASL, I imagine the CRTC was entirely populated by guys in suits and now they get to put on raid jackets, tactical boots and a badge.
I mentioned before that there can be some significant penalties for infractions of CASL’s software rules.
It needs to be noted that contraventions attract “administrative monetary penalties”, which are not a “punishment” but are intended to ensure compliance. These are not fines per se and are not criminal penalties. That’s because if they were criminal or truly quasi-criminal, they’d have to follow the Charter’s much stricter standards for criminal offences.
The maximums for these administrative monetary penalties are steep: up to $1M for an individual offender and $10M for a corporation.
The legislation sets out a bunch of factors to be considered in determining the amount of penalty, including the ability of the offender to pay.
There is a mechanism similar to a US consent decree where the offender can give an “undertaking” that halts enforcement, but likely imposes a whole bunch of conditions that will last for a while.
Officers and directors of companies need to know they may be personally liable for penalties and of course the CRTC can name and shame violators.
There is a due diligence defence, but this is a pretty high bar to reach.
We have seen at least three reported enforcement actions under the software provisions of CASL.
The first, in 2018, involved two companies called Datablocks and Sunlight Media. They were found by the CRTC to be providing a service that let others inject exploits onto users’ computers through online ads. They were hit with penalties amounting to $100K and $150K, respectively.
The second was in 2019 and involved a company called Orcus Technologies, which was said to be marketing a remote access trojan. They marketed it as a legitimate tool, but the CRTC concluded this was to give a veneer of respectability to a shady undertaking. They were hit with a penalty of $115K.
The most recent one, in 2020, involved a company called Notesolution Inc. doing business as OneClass. They were involved in a shady installation of a Chrome extension that collected personal information on users’ systems without their knowledge or expectation. They entered into an undertaking, and agreed to pay $100K.
I hope this has been of interest. The discussion was obviously at a pretty high level, and there is a lot that is unknown about how some of the key terms and concepts are being interpreted by the regulator.
If you have any questions or comments, please feel free to leave them below. I read them all and try to reply to them all as well. If your company needs help in this area, please reach out. My contact info is in the notes, as well.
In my legal practice, I have seen businesses fail because they did not take privacy into account. I’ve seen customers walk away from deals because of privacy issues and I’ve seen acquisitions fail due diligence because of privacy.
Today, I’m going to be talking about privacy by design for start-ups, to help embed privacy into growing and high-growth businesses.
Episode 2 of Season 4 of HBO’s “Silicon Valley” provides a good case study on the possible consequences of not getting privacy compliance right.
Privacy means different things to different people. And people have wildly variable feelings about privacy. As a founder, you need to understand that and take that into account.
In some ways, privacy is about being left alone, not observed and surveilled.
It is about giving people meaningful choices and control. They need to understand what is happening with their personal information and they should have control over it. What they share and how it is used. They should get to choose whether something is widely disseminated or not.
Privacy is also about regulatory compliance. As a founder you need to make sure your company complies with the regulatory obligations imposed on it. If you are in the business to business space, you will need to understand the regulatory obligations imposed on your customers. I can guarantee you that your customers will look very, very closely at whether your product affects their compliance with their legal obligations. And they’ll walk away if there’s any realistic chance that using your product puts their compliance at risk.
Privacy is about trust in a number of ways. If you are in the business to consumer space, your end-users will only embrace your product if they trust it: if they know what the product is doing with their information and they trust you to keep it consistent. If you are in the business to business space, your customers will only use your product or service if they trust you. If you’re a start-up, you don’t yet have a track record or wide adoption to speak on your behalf. A deal with a start-up is always a leap of faith, and trust has to be built. And there are a bunch of indicators of trustworthiness. I have advised clients to walk away from deals where the documentation and the responses to questions don’t suggest privacy maturity. If you have just cut and pasted your privacy policy from someone else, we can tell.
Privacy is not just security, but security is critical to privacy. Diligent security is table stakes. And a lack of security is the highest risk area. We seldom see class-action lawsuits for getting the wrong kind of consent, but most privacy/security breaches are followed by class-action lawsuits. Your customers will expect you to safeguard their data with the same degree of diligence as they would do it themselves. In the b2b space, they should be able to expect you to do it better.
You need to make sure there are no surprises. Set expectations and meet them.
In my 20+ years working with companies on privacy, one thing is clear: people don’t like it when something is “creepy”. Usually this is a useless word, since the creepy line is drawn very differently by different people. But what I’ve learned is that where the creepy line sits depends on expectations: things are always creepy or off-putting when something happens with your personal information that you did not expect.
As a founder, you really have to realize that regardless of whether or not you care about privacy yourself, your end users care about privacy. Don't believe the hype, privacy is far from dead.
If you are in the business to business arena, your customers are going to care very deeply about the privacy and security of the information that they entrust you with. If you have a competitor with greater privacy diligence or a track record, you have important ground to make up.
And, of course, for founders getting investment is critical to the success of their business. The investors during your friends and family round or even seed funding might not be particularly sophisticated when it comes to privacy. But mark my words, sophisticated funds carry out due diligence and know that privacy failures can often equal business failures. I have seen investments go completely sideways because of privacy liabilities that were hidden in the business. And when it comes time to make an exit via acquisition, every single due diligence questionnaire has an entire section, if not a chapter, on privacy and security matters. The weeks leading up to a transaction are not the time to be slapping Band-Aids on privacy problems that were built into the business or the product from the very first days. As a founder, you want to make sure that potential privacy issues are, at least, identified and managed long before that point.
The borderless world
I once worked with a founder and CEO of a company who often said that if you are a startup in Canada, and your ambition is the Canadian market, you have set your sights too low and you are likely to fail. The world is global, and digital is more global than any other sector. You might launch your minimally viable product or experiment with product market fit in the local marketplace, but your prospective customers are around the world. This also means that privacy laws around the world are going to affect your business.
If your product or services are directed at consumers, you will have to think about being exposed to and complying with the privacy laws of every single jurisdiction where your end users reside. That is just the nature of the beast.
If you’re selling to other businesses, each of those businesses is going to be subject to local privacy laws that may differ significantly from what you’re used to. Once you get into particular niches, such as processing personal health information or educational technology, the complexity and the stakes rise significantly.
You definitely want to consult with somebody who is familiar with the alphabet soup of PIPEDA, PIPA, CASL, PHIA, GDPR, COPPA, CCPA, CPRA, HIPAA.
You're going to want to talk carefully and deeply with your customers to find out what their regulatory requirements are, which they need to push down onto their suppliers.
The consequences of getting it wrong can be significant. You can end up with a useless product or service, one that cannot be sold or that cannot be used by your target customers. I’ve seen that happen.
A privacy incident can cause significant reputational harm, which can be disastrous as a newcomer in a marketplace trying to attract customers.
Fixing issues after the fact is often very expensive. Some privacy and security requirements may mandate a particular way to architect your back-end systems. Some rules may require localization for certain customers, and if you did not anticipate that out of the gate, implementing those requirements can be time and resource intensive.
Of course, there's always the possibility of regulatory action resulting in fines and penalties. Few things stand out on a due diligence checklist like having to disclose an ongoing regulatory investigation or a hit to your balance sheet caused by penalties.
All of these, individually or taken together, can be a significant impediment to closing an investment deal or a financing, and can be completely fatal to a possible exit by acquisition.
So what's the way to manage this? It's something called privacy by design, which is a methodology that was originally created in Canada by Dr Ann Cavoukian, the former information and privacy commissioner of Ontario.
Here's what it requires at a relatively high level.
First of all, you need to be proactive about privacy and not reactive. You want to think deeply about privacy, anticipate issues and address them up front rather than reacting to issues or problems as they come up.
Second, you need to make privacy the default. You need to think about privacy holistically, focusing particularly on consumers and user choice, and setting your defaults to be privacy protective so that end users get to choose whether or not they deviate from those privacy protective defaults.
Third, you need to embed privacy into your design and coding process. Privacy should be a topic at every project management meeting. I'll talk about the methodology for that in a couple minutes.
Fourth, you need to think about privacy as a positive-sum game rather than a zero-sum game. Too often, people think about privacy versus efficiency, or privacy versus innovation, or privacy versus security. You need to be creative and think about privacy as a driver of efficiency, innovation and security.
Fifth, you need to build in end-to-end security. As I mentioned before, security may in fact be the highest risk area, given the possibility of liability and penalties. You need to think about protecting end users from themselves, from their carelessness, and from all possible adversaries.
Sixth, you need to build in visibility and transparency. Just about every single privacy law out there requires that an organization be open and transparent about its practices. In my experience, an organization that is proactive in talking about privacy and security, and how it addresses them, has a significant “leg up” over anybody who is not.
Seventh, and finally, you need to always be aware that end users are human beings who have a strong interest in their own privacy. They might make individual choices that differ from your own privacy comfort levels, but that is human. Always look at your product and all of your choices through the eyes of your human end users. Think about how you will explain your product and services to an end user, and whether the choices that you have made in its design can be justified to them.
A key tool to implement this is to document your privacy process and build it iteratively into your product development process. For every single product or feature, you need to document what data from or about users is collected, what data is generated, and what inferences are made. You will want to get very detailed, knowing every single data field that is collected or generated in connection with your product.
Next you need to carefully document how each data element is used. Why do you need that data, how do you propose to use it, and is it necessary for that product or feature? If it is not “must have” but just “good to have”, how do you build that choice into your product?
You need to ask “is this data ever externally exposed”? Does it go to a third party to be processed on your behalf, is it ever publicly surfaced? Are there any ways that the data might be exposed to a bad guy or adversary?
In most places, privacy regulations require that you give individual users notice about the purposes for which personal information is collected, used or disclosed. You need to give users control over this. How are the obligations for notice and control built into your product from day one? When a user clicks a button, is it obvious to them what happens next?
You will then need to ask “where is the data”? Is it stored locally on a device or server managed by the user or the customer? Is it on servers that you control? Is it a combination of the two? Is the data safe, wherever it resides? To some people, local on device storage and processing is seen as being more privacy protective than storage with the service provider. But in some cases, those endpoints are less secure than a data center environment which may have different risks.
Finally, think about life cycle management for the data. How long is it retained? How long do you or the end user actually need that information for? If it's no longer needed for the purpose identified to the end user, it should be securely deleted. You'll also want to think about giving the end user control over deleting their information. In some jurisdictions, this is a legal requirement.
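If it’s useful, here’s a minimal sketch of what a per-element inventory record for this process might look like, in Python. The fields are my own suggestions and nothing more; adapt them to your product and the laws that apply to you.

from dataclasses import dataclass

@dataclass
class DataElementRecord:
    # One record per data field collected or generated by a product or feature.
    name: str                  # the specific data field
    source: str                # collected from the user, generated, or inferred
    purpose: str               # why this element is needed
    must_have: bool            # necessary for the feature, or merely nice to have
    externally_exposed: bool   # sent to a processor, publicly surfaced, etc.
    storage_location: str      # on-device, your servers, or a combination
    retention_period: str      # how long it is kept and when it is deleted
    user_controls: str         # the notice given and the choices the user has

Walking through a feature and filling in one of these for every single data element is tedious, but it surfaces the issues while they are still cheap to fix.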
Everybody on your team needs to understand privacy as a concept and how privacy relates to their work function. Not everybody will become a subject matter expert, but a pervasive level of awareness is critical. Making sure that you do have subject matter expertise properly deployed in your company is important.
You also have to understand that it is an iterative process. Modern development environments can sometimes be likened to building or upgrading an aircraft while it is in flight. You need to be thinking of flight worthiness at every stage.
When a product or service is initially designed, you need to go through that privacy design process to identify and mitigate all of the privacy issues. No product should be launched, even in beta until those issues have been identified and addressed. And then any add-ons or enhancements to that product or service need to go through the exact same scrutiny to make sure that no new issues are introduced without having been carefully thought through and managed.
I have seen too many interesting and innovative product ideas fail because privacy and compliance simply were not on the founder’s radar until it was too late. I have seen financing deals derailed and acquisitions tanked for similar reasons.
Understandably, founders are often most focused on product market fit and a minimally viable product to launch. But you need to realize that a product that cannot be used by your customers or that has significant regulatory and compliance risk is not a viable product.
I hope this has been of interest. The discussion was obviously at a pretty high level, but my colleagues and I are always happy to talk with startup founders to help assess the impact of privacy and compliance on their businesses.
If you have any questions or comments, please feel free to leave them below. I read them all and try to reply to them all as well. If your company needs help in this area, please reach out.
And, of course, feel free to share this with anybody in the startup community for whom it may be useful.