The Canadian Privacy Law Blog: Developments in privacy law and writings of a Canadian privacy lawyer, containing information related to the Personal Information Protection and Electronic Documents Act (aka PIPEDA) and other Canadian and international laws.
I have to start by giving Public Safety Minister Gary Anandasangaree credit for parking the “lawful access” parts of Bill C-2, going back to the drawing board and introducing a much improved Bill C-22, “An act respecting lawful access.”
As I said, it’s much improved. In a number of ways it still goes way too far, and in at least one respect it doesn’t go far enough.
Over the course of a number of episodes, I’m going to do a bit of a deep dive into some of the main features of Bill C-22. I did a forty-minute episode going over all of it, but the next ones will be shorter and focused on particular provisions.
Today I’m going to talk about the “Confirmation of Service Demand.” Yes, it is without a warrant but that doesn’t cause me any real concern. And I’ll explain why.
Before we dive into these demands, a bit of background:
The Bill is in two parts. The first part is called “Timely Access to Data and Information” and the second part of the Bill creates a new statute: the “Supporting Authorized Access to Information Act”.
The two parts do wildly different things. Part one is intended to create new AUTHORITIES by which police and national security folks can require companies to provide them with information about their customers. Part two is intended to create new CAPABILITIES by requiring companies to build into their systems the technical means for police and national security folks to access that information. Part one is about authorities and part two is about capabilities. The authorities under part one are mostly subject to judicial supervision and control, and I can largely live with them. The capabilities under Part Two cause me a LOT of concern.
Over the last twenty years, the government and police have not done a good job explaining why they need either the new authorities or the new capabilities.
To understand whether they should have new authorities and capabilities, I think we need to go through the current state of affairs and what the government proposes to change. Then we’ll look at what those changes are and what they mean.
Here is a pretty common scenario that plays out all the time. The police have evidence of some sort of online crime. It could be distribution of child abuse materials or it could be extortion. They’re confident a crime has taken place, but they don’t know who the suspect is. They may have an IP address or a phone number, but no name. Using publicly available tools, they can find out who is the internet service provider or who is the telco who first assigned the phone number. But they don’t necessarily know where the suspect may be. If it’s a Rogers, or Bell or Telus IP address, they have customers across the country. If it’s a phone number that was first assigned by Rogers, that customer may have moved provinces and thanks to number portability, the service provider may have changed in the meantime.
So they want to know who is the person – their suspect – connected to this IP address or phone number, who is the current service provider and where they are. The “where” is important, because the crime may have been brought to the attention of the RCMP in Ottawa via international law enforcement partners, but the suspect may be in Montreal, Toronto or Calgary.
But this is not a dead end using their current authorities. The RCMP in Ottawa can go to the court in Ottawa to get a general production order. They’ve been able to do this since 2004, when the Criminal Code was amended to create these third party information orders. So an RCMP constable goes to the court and says – under oath – I have reasonable grounds to believe that a crime has been committed, and here’s the basis for that belief. I also have reasonable grounds to believe that the Telco or ISP has information that will lead me to the identity of the suspect. Therefore I want an order telling the Telco to provide me with the customer name and address associated with the IP address or phone number. And the officer gets a production order that will typically order the Telco to provide the information promptly, and usually no later than thirty days. The order can specify a shorter time.
The telco will tell the RCMP constable the name and address to which the IP address is allocated. Let’s just say it’s John Q. Public of 123 Main Street, Winnipeg, Manitoba. The RCMP in Ottawa will contact the Winnipeg police, send them their investigation file and the information received from the Telco. The Winnipeg police should pick it up from there, and off they go.
This can all be done – and is done daily – using the current authorities in the Criminal Code.
But from time to time, the response from a telco may be “that’s not our phone number” or “yes, that’s our IP address, but it’s actually serviced by a reseller of internet services so we don’t have any customer information”. This doesn’t happen all the time, but it happens.
One of the things that the police and national security folks want is a “confirmation of service demand” because they may not know whether the suspect is actually a customer of a particular telco. They want to be able to ask any telco “Hey, do you service this phone number?” And the telco would have to say “yes” or “no”. It may be an IP address, it may be a SIM card number or an IMEI (International Mobile Equipment Identity), which is a unique 15-digit number that identifies mobile devices on a network. (I should note that IP addresses and SIM card numbers are generally and reliably associated with the service provider.)
A confirmation of service demand makes a lot of sense. They can’t really do this with a current production order because they have to have “reasonable grounds to believe” that the recipient of the order has records. They may have reasonable grounds to believe that the phone number may be served by “A Telco”, but they don’t have reasonable grounds to believe that the phone number is served by any particular Telco. There are 39 registered wireless carriers and more than 100 traditional phone companies.
A yes or no answer to “Hey! Bell! Is 902-555-1212 serviced by you?” does not disclose anything meaningfully private or personal about whoever answers when you dial 902-555-1212. Essentially, for the police, it’s knowing where to send any subsequent court orders related to that number.
So in the scenario I mentioned before, the RCMP in Ottawa that got the report can ask the larger telcos whether they provide the service to the number and get a yes or no answer. Then they know where to send a court order for customer information.
When this was first introduced in Part 14 of Bill C-2, the Strong Borders Act, the “information demand” was far too broad and got a lot of pushback. If this had gone through, without a warrant, the police could demand much more than “is this your customer” and it applied to anyone who provides services to the public. That’s in paragraph (a) - do you or have you provided services. But it went further. If the answer to (a) is “yes”, they can demand whether the company has records and where the services were provided. They can demand the dates during which services were provided. They can demand information about anyone else who is known to provide services to the customer.
So the police can go to Dr. Smith, a family doctor, and say “Is John Q. Public your patient, and what specialists have also provided services to him?” Clearly over the top.
So in the new Lawful Access Bill, Bill C-22, we have a pared back “Confirmation of service demand”.
The new section 487.0121 allows a peace officer or public officer to make a demand to a telecommunications service provider. It’s not just anyone who provides service to the public, but is now limited to registered, regulated telcos. That demand can require them to confirm, within the time and in the manner specified in the demand, whether or not they provide or have provided telecommunication services to any subscriber or client, or to any account or identifier, specified in the demand.
To make this demand, they just have to suspect that an offence has taken place and that the confirmation will assist in the investigation. That’s a low threshold, but defensible in light of the information being sought. Which is just a yes or a no answer.
In pulling back and fixing the former information demand, I think they may have pulled back a little too far. In the old demand, the police could demand “in which municipality do you provide these services.” That’s no longer there. And I would be OK with putting that part back in the new “Confirmation of Service Demand” because that has the potential to move investigations forward with negligible impact on customer privacy.
Going back to the scenario I mentioned earlier, where the RCMP in Ottawa receive a report from another law enforcement agency outside of Canada, but the suspect is in Winnipeg. If the confirmation of service demand included the location where services are provided, the RCMP can make the demand from the major telcos, find out that the suspect is in Winnipeg and just refer the whole file to the Winnipeg police to investigate. The Winnipeg police would then go to a local judge to get a production order for subscriber information (which I’ll get into in a subsequent episode), and carry on with the investigation.
Being able to refer the matter to the local police of jurisdiction at that stage makes sense to me, and as I said has negligible impact on privacy.
So that’s the “confirmation of service demand” in Bill C-22, the Lawful Access Act of 2026. The scaling back has certainly improved it, but in scaling it back, the police may have lost a useful bit of information that had no meaningful privacy impact.
The latest
attempt at so-called “lawful access” has just dropped in the Parliament of
Canada. I have a few things to say about it. It’s better than the
government’s last attempt, but take a moment and consider this:
If Bill C-22, the Lawful
Access Act 2026 becomes the law, the government of Canada will be able
to secretly order Apple to build in a capability into its infrastructure to
allow Canadian law enforcement and national security folks to track every
iPhone, every iPad, every Apple watch, every Apple AirPod and every AirTag in
real time.
Then they’ll be able to require Apple to
confirm whether they provide you any services.
Then they can go to a justice of the
peace and get an order – without actually believing that a crime has been or
will be committed – requiring Apple to hand over EVERY device identifier for
every device you use with their services. That’s the digital ID for your
iPhone, iPad, Apple watch, Apple AirPod, Apple TV and AirTag.
With that information, they can go back
to the judge and get an order – again without actually believing that a crime
has been or will be committed – requiring Apple to give them the
moment-by-moment locations of all your devices.
Oh, and that secret order also
required Apple to keep your location history for a full year, so cops can get
that too. Is that a power we want Canadian police and law enforcement to
have?
For literal
decades, Canadian law enforcement and national security folks – working through
both liberal and conservative governments – have tried to give cops and spies
easier access to information about Canadians, and to plug directly into our
digital infrastructure to get access to data.
In 2005, Liberal
PM Paul Martin’s Public Safety Minister Anne McLellan introduced Bill C-74, called
the “Modernization of Investigative Techniques Act”. It didn’t pass.
In 2009,
Conservative Prime Minister Stephen Harper’s Minister Peter Van Loan introduced
Bill C-47, the “Technical Assistance for Law Enforcement in the 21st
Century Act”. It also did not pass.
A couple of
years later, in 2011 Conservative Stephen Harper’s Minister of Public Safety
Vic Toews tabled Bill C-52 in Parliament. This attempt was called the
“Investigating and Preventing Criminal Electronic Communications Act”. Shocker
– It did not pass.
Apparently a
sucker for punishment, Minister Vic Toews then tried another kick at the can
the next year with Bill C-30, which was branded as the “Protecting Children
from Internet Predators Act”. Yup, you guessed it – this did not pass.
Fast forward to
2025 … The very first substantial bill of the Prime Minister Mark Carney
government was tabled by Public Safety Minister Gary Anandasangaree. That was
Bill C-2 called the Strong Borders Act. Almost ten years dead, “lawful
access” was pulled from its grave, crammed into Parts 14 and 15 of a border
bill, only to be thrown back on the trash-heap. It never made it to committee
because of the backlash over privacy.
I did a couple
of episodes on how problematic Bill C-2 was. (Part
14 and Part
15.) It was universally panned and it was clear that it would not make it
through the minority Liberal parliament. Not to be deterred – and to his credit
– the Minister of Public Safety went back to the drawing board to try to find a
way to make it palatable enough to get through Parliament.
Notably, the current parliament is not as “minority” as it was when Bill C-2
was introduced.
I’m going to go
through the Bill to let you know what it contains and what it is supposed to
do. I’ll try to highlight the differences between what was attempted earlier in
Bill C-2 and the changes they’ve made for Bill C-22, and I’ll also talk about
what’s different from the current status quo.
The bill is in
two parts, which parallel Parts 14 and 15 of Bill C-2, the Strong
Borders Act. In going back to the drawing board, I think the government has
largely fixed the big problems with what was Part 14 related to warrantless
information demands and new production order powers. But I think that Part 2 is
still a HUGE issue.
Part 1 is
called “timely access to data and information”.
It contains
some amendments to the general search warrant provisions of the Criminal
Code to permit the examination of computer data in conjunction with the
execution of a warrant when it's authorized by a judge. The status quo, as I
understand it, would require the seizure of the computer, returning to court
and then getting further authorization to search it. This creates a bit of a
one-stop shop. Criminal law practitioners may have more to say about this
provision.
The rest of
Part 1 largely deals with new information demands and production orders. I
should note at the outset that all the new information demands and production
orders are equally available to the Canadian Security Intelligence Service as
they are to the police. I’m just going to go through each of them once, rather
than dealing with the Criminal Code and CSIS Act amendments
separately.
The first
significant new power that the bill confers on law enforcement and CSIS is
something called a “confirmation of service demand”. Something similar was in
Bill C-2, but this has been significantly scaled back. Essentially the new
section 487.0121 will allow any peace officer or public officer to make a
demand to a telecommunication service provider requiring them to confirm
whether or not they provide or have provided telecommunication services to any
subscriber or client. This could be done using the person's name, account
identifier, IP address or telephone number.
Confirmation of service demand
487.0121 (1) A peace officer or public officer may make a demand in Form 5.0011
to a telecommunications service provider requiring them to confirm, within the
time and in the manner specified in the demand, whether or not they provide or
have provided telecommunication services to any subscriber or client, or to any
account or identifier, specified in the demand.
The conditions
for making the demand are actually quite low, being “reasonable grounds to
suspect” that a federal offence has taken place and that the confirmation that
is demanded will assist in the investigation of the offence.
Conditions for making demand
(2) The peace officer or public officer may make the demand only if they have
reasonable grounds to suspect that
(a) an offence has been or will be committed under this Act or any other Act of
Parliament; and
(b) the confirmation that is demanded will assist in the investigation of the offence.
The
telecommunication service provider simply has to provide a yes or no answer. Do
they or do they not provide services to that person or in relation to that
identifier. This is MUCH better than what was in Bill C-2. The revised demand
can only be presented to a telecommunications service provider. The Bill C-2
version could have been made to anyone who provides services to the public,
including a doctor’s office or a law firm. The previous version would have
required – without a warrant – producing information about the nature of the
services and anybody else that the service provider knew who might also provide
services to that person.
In Bill C-22,
this is much more tailored and focused only on telecommunication service
providers or TSPs.
I'm actually
surprised that it doesn't include a requirement to confirm the municipality or
location where the services are provided, because it's my understanding that a
large part of the justification for this in the first place was so that not
only would the police be able to determine whether this service provider is the
right person to send a production order to, but also who is the local police of
jurisdiction. On a daily basis, the RCMP in Ottawa receive international
reports related to criminal activity in Canada, such as dissemination of child
abuse imagery and that report only includes an IP address or account
identifier. That information does not necessarily tell them who is the local
police of jurisdiction to refer the file to. I guess the government was so
sensitive to the pushback they received on Bill C-2, that they removed what
seemed to be pretty innocuous information, which had a compelling
justification.
While I think
this is much improved, I am still very concerned that any peace officer or
public officer who makes a demand is able to impose a non-disclosure condition
for up to one year. That is a significant period of time. I would much prefer
it if it was something short like 30 days, and the officer could go to court to
get it extended.
Non-disclosure
(6) The peace officer or public officer who makes the demand may impose conditions in
the demand prohibiting the disclosure of its existence or some or all of its
contents for a period not greater than one year after the day on which the
demand is made. The peace officer or public officer may impose the conditions
only if they have reasonable grounds to believe that the disclosure during that
period would jeopardize the conduct of the investigation of the offence to
which the demand relates.
Not
surprisingly, they have included in subsection (12) a provision that says a
peace officer or public officer can just ask a telecommunications service provider
to voluntarily provide the confirmation, and this confirmation can be provided
as long as the TSP is not prohibited by law from providing it. Then it goes on
to say that the TSP that provides a confirmation in these circumstances does
not incur any liability for doing so. The Bill has other, similar safe harbours
for voluntary disclosure, but related to much more sensitive information.
Request for confirmation
(12) Despite
subsection (1), no demand under that subsection is necessary for a peace
officer or public officer to ask a telecommunications service provider to
voluntarily provide the confirmation referred to in that subsection if the
telecommunications service provider is not prohibited by law from providing it.
A telecommunications service provider that provides a confirmation in those
circumstances does not incur any criminal or civil liability for doing so.
The main
feature in my view of Part 1 is a new “production order for subscriber
information”.
Before we get
into it, it's really important to note that the Criminal Code currently
provides for something called a general production order by which a cop can go
to a judge and if they have reasonable grounds to believe a crime has been
committed or will be committed, they can get an order requiring a third party
to produce records that are listed in the production order. On a daily basis,
police seek and obtain subscriber information using these production orders.
What is different here, mainly, is significantly lowering the threshold so that
the officer only has to have reasonable grounds to suspect an offence has been
committed. They don't even have to have reasonable grounds to believe it has
been committed. They don’t even have to believe that a crime has been or will
be committed.
“Reasonable
grounds to suspect” doesn’t mean that they actually have to suspect a crime; it
just means there are grounds that could reasonably lead someone to suspect a
crime. This is an extremely low threshold.
So the new
section 487.0142 says that on an ex parte application made by a peace
officer or a public officer, a justice or judge may order a person who provides
services to the public to prepare and produce a document containing all the
subscriber information that relates to any information, including transmission
data, that is specified in the order and that is in their possession or control
when they receive the order.
Production order — subscriber
information
487.0142 (1) On ex parte application made by a peace officer or public
officer, a justice or judge may order a person who provides services to the
public to prepare and produce a document containing all the subscriber
information that relates to any information, including transmission data, that
is specified in the order and that is in their possession or control when they
receive the order.
Unlike the
confirmation of service demand, this is not limited to telcos. This can involve
anyone who provides services to the public. So this does include doctors’
offices, hotels, grocery stores and banks.
You will see
that in subsection (2), it says that before making the order the Justice or
judge must be satisfied by information on oath that there are reasonable
grounds to suspect an offence has been or will be committed under the Criminal
Code or any other Act of Parliament and the subscriber information is in
the person's possession of control and will assist in the investigation of the
offense.
Conditions for making order
(2) Before making the order, the justice or judge must be satisfied by information on oath
in Form 5.004 that there are reasonable grounds to suspect that
(a) an offence has been or will be committed under this Act or any other Act of
Parliament; and
(b) the subscriber information is in the person’s possession or control and will assist
in the investigation of the offence.
You should also
note that this is not limited to serious crimes. These powers can be used for
any offence under federal law, such as offences under the National Parks Act,
like sleeping outside of a campground.
It is also
important to understand what is included in “subscriber information”, and I
will note some of the differences from Bill C-2 to Bill C-22. The bill
says:
subscriber information, in
relation to any client of a person who provides services to the public or any
subscriber to the services of such a person, means
(a) information that may be used to identify the subscriber or client, including their name,
pseudonym, address, telephone number and email address;
(b) identifiers assigned to the subscriber or client by the person, including account numbers;
and
(c) information relating to the services provided to the subscriber or client, including
(i) the types of services provided,
(ii) the period during which the services were provided, and
(iii) information that identifies the devices, equipment or things used by the subscriber or
client in relation to the services.
In Bill C-2,
subscriber information included any information provided by the customer to the
service provider in order to obtain the services. This could have included
banking information and passwords. It could have included medical information.
Remember, such an order can be directed to a medical clinic. When you go to a
clinic for the first time, you fill out a pretty detailed form related to your
medical history, and that would be in the category of “information provided by
the customer in order to receive the services”. Thankfully, that has been
removed. The definition of subscriber information is much more scaled-back in
Bill C-22, but information about the “types of services provided” along with
device and equipment identifiers can be sensitive information that goes beyond
merely identifying a possible suspect. For many people, their internet service
provider is also their cable TV provider. Do those “services” include premium
pay-per-view access? Hmm? Scaled back but still a bit too far.
This new bill
also includes quirky “foreign entity information requests”. These are kind of
weird because what it amounts to is an application to court to get permission
to make a request, which is voluntary, to a foreign entity that provides
telecommunications services.
So what they
end up with is a piece of paper asking an entity to voluntarily provide
subscriber information. It is not an order requiring the entity to produce the
information, but it does have judicial approval in Canada. This is intended to
address the question of whether Canadian orders can be enforced outside of
Canada, or more accurately avoid that question entirely. It should be
applicable where voluntary disclosure can be obtained and where the service
provider wants to be sure that there is some third-party judicial approval. It
also should mean that whatever information is obtained can be used in a Canadian
court, because Canadian police have been authorized by a judge to obtain it.
Personally, I think this is a really clever solution for a real issue.
Subsection (4) of
this provision says that the production request can be required to include
information required by the foreign entity or the foreign state, or any magic
words that are required by an international agreement or arrangement to which
Canada and the foreign state are parties.
Earlier I
mentioned the gag orders that can accompany a confirmation of service demand.
Part 1 also amends the existing section 487.0191 of the Criminal Code to
authorize a judge, on an ex parte application, to issue a gag order
related to confirmation of service demands.
Part 1 of Bill
C-22 also affects the scheme for judicial review of production orders
generally, not just this new production order for subscriber information. It
compresses the timeline during which the recipient of a production order is
able to seek judicial review, in order to have it modified or revoked. That
deadline will be “within 10 business days after the day on which the order was
received”. In Bill C-2, it was way shorter – five days after the order was
issued – and actually seemed to be designed to prevent the judicial review of
production orders. I have seen production orders served more than five days
after they are issued, so it would be too late by the time you received it. Ten
business days is still pretty short, but much more reasonable than what was in
the Strong Borders Act.
Part 1 of Bill
C-22 also tweaks the existing provisions in the Criminal Code related to
voluntary disclosure of information from any person to the police or a public
officer. It says that documents or information can be provided voluntarily and
it also says that no person incurs any criminal or civil liability for doing
so.
For greater certainty
487.0195 (1) For greater certainty, no preservation demand, preservation
order, keep account open or active order or production order is necessary for a
peace officer or public officer to ask a person to voluntarily preserve data
that the person is not prohibited by law from preserving, to voluntarily keep
an account open or active that the person is not prohibited by law from keeping
open or active or to voluntarily provide a document or information to
the officer that the person is not prohibited by law from disclosing.
No civil or criminal liability
(2) A
person who preserves data, keeps an account open or active or provides a
document or information in those circumstances does not incur any
criminal or civil liability for doing so.
It's kind of
extra weird because subsection (1) says “hey you can voluntarily provide it if
a law doesn't prohibit you from voluntarily providing it”. Then subsection (2)
says if you provide it, you will have no criminal or civil liability. If no law
prevented them from providing it, why do they need immunity from criminal or
civil liability?
This actually
does NOT fix the issue that arose in the Supreme Court of Canada case of
R. v. Bykovets.
In that case, a payment service processor voluntarily provided IP address
information related to suspected fraudulent transactions, and the Supreme Court
of Canada said that the police were not able to use that information or even
obtain it without a production order. This does nothing to address that issue.
The Bykovets issue is still there.
We then have a
new subsection (3) that says:
For greater certainty, no production
order or warrant, or confirmation of service demand made under section 487.0121,
is necessary for a peace officer or public officer to receive any information
from a person or a telecommunications service provider, as the case may be, who
is lawfully in possession of it, and to act on the information, if the person,
without being asked for it, provides it voluntarily or is required by law,
including a law of a foreign state, to provide it.
There’s also a
new subsection (4), which says:
For greater certainty, no production
order or warrant, or confirmation of service demand made under section 487.0121,
is necessary for a peace officer or public officer to receive, obtain and act
on any information that is available to the public.
This seems
pretty similar to what was included in Bill C-2, and received a lot of
criticism. A number of smart folks were very concerned that hacked information
and data leaks are included in what would be considered information that is
available to the public. Should the police have the ability to exploit data
that became public unlawfully? But here they can use it willy-nilly. I share
this concern.
Bill C-22 also
amends the current provision in the Criminal Code related to what are
called “exigent circumstances”. Police can search and demand a whole range of
data without a warrant or a court order if the conditions for obtaining an
order exist, but by reason of exigent circumstances it would be impracticable
to obtain an order. It is not all that new, but just extends the authorities to
include the new production order powers.
487.11 A peace officer or public officer may, in the course of their
duties,
(a) exercise any of the
powers described in section 487 [search warrants], 492.1 [tracking warrants]
or 492.2 [transmission data recorder] without a warrant if the
conditions for obtaining a warrant exist but by reason of exigent circumstances
it would be impracticable to obtain a warrant; or
(b) seize any subscriber
information that may be the subject of an order made under subsection 487.0142(1)
[subscriber information] or any data that may be the subject of an order made
under subsection 487.016(1) [transmission data] or 487.017(1) [tracking data]
if the conditions for obtaining an order exist but by reason of exigent
circumstances it would be impracticable to obtain an order.
We will see
that tracking things and tracking people is a theme of this bill. Bill C-22
adds a new subsection to section 492.1 related to tracking orders. These are
orders that are obtained from a judge authorizing a police officer or a public
officer to obtain tracking data related to a person or a thing. Subsection
(2.1) is being added to permit an authorization to track other things that
might be associated with a person where that thing might not have been known to
the officer at the time.
Tracking similar things
(2.1) A justice or judge who authorizes a peace officer or public officer to obtain tracking data that relates to the location of a thing that a person uses, carries or wears may, in the warrant, authorize the peace officer or public officer to obtain tracking data that relates to the location of any similar thing that is unknown at the time the warrant is issued if the justice or judge is satisfied that there are reasonable grounds to suspect that the person will use, carry or wear that similar thing.
Scope of warrant
(3) The warrant authorizes the peace officer or public officer, or a person acting under their direction, to install, activate, use, maintain, monitor and remove the tracking device, including covertly. The warrant also authorizes a person acting under the direction of the peace officer or public officer to obtain the tracking data that is authorized to be obtained under the warrant.
I can imagine
this would include getting an order to track somebody's vehicle, and to add on
authority to track their phone and maybe their smartwatch. Subsection (3) is
also amended to say that an officer can authorize somebody else to obtain the
tracking data authorized to be obtained under the warrant.
Parallel
amendments are made to the similar Criminal Code provisions related to
transmission data warrants.
So that's
largely what is in Part 1 of the new Lawful Access Act, 2026. As you can
see, while there are some things to quibble over, it is a significant
improvement over what was in Part 14 of the Strong Borders Act.
Now we are
going to look at Part 2, which I think is and remains a huge problem. The
outcry associated with the Strong Borders Act was principally focused on
warrantless information demands and overbroad subscriber information orders. In
a lot of the debate and discussion, Part 15 of that Bill was largely ignored. I
really hope that the equivalent of that Part in Bill C-22 gets as much
attention as it deserves.
In a nutshell,
Part 2 will require a huge range of service providers – well beyond traditional
telecommunications service providers – to build in real-time interception and
monitoring capabilities so that cops and national security folks can just plug
into the systems to access data when “authorized” to do so.
Currently the
cops can go to a judge and get a wiretap order to intercept the communications
of a suspect in real time. They can go to a judge to get an order for just
about any data that currently exists.
What the cops
are generally complaining about is that there isn’t a consistent interface for
them to plug into and get the data among all the telcos out there. I can see
that kind of sucks.
But what
they’re not emphasizing is that Part 2 of Bill C-22 will likely require telcos,
AND cloud providers, AND social media companies, AND ai chatbots, AND VPN
services, AND chat services and the like to build in not only the capability
for Canadian police to plug directly in, but Part 2 will also require them to
build in additional surveillance tools and collection capabilities that go well
beyond what data the company actually needs to provide you with services.
I lived in
Romania just after the fall of the Iron Curtain. It was purported that the
state security police had the capability to turn any landline telephone into a
room bug with the flip of a remote switch. Part 2 of Bill C-22 could permit a
secret order directed at telcos to create this capability. The Minister of
Public Safety could order Samsung to turn your smart fridge into a listening
device. The same with your Smart TV or Smart speakers. I find that worrisome.
So let’s talk
about specifically what is in Part 2 of Bill C-22.
Part 2 creates
a new standalone statute called the Supporting Authorized Access to
Information Act or SAAIA. Section 3 sets out its purpose:
3 The purpose of this Act is to ensure that electronic service providers can facilitate the exercise of authorities to access information that are conferred on authorized persons.
So it talks
about authorities that are conferred on authorized persons to access
information. It doesn't say “lawful authorities”, nor does it say “judicially
authorized authorities”. It just says authorities. From the discussion about
Part 1, it’s clear that the police and CSIS are authorized to obtain data
without a warrant by just asking for it.
The Supporting
Authorized Access to Information Act has “electronic service providers” in
its crosshairs. It is therefore really important to understand what an
electronic service provider is. ESP is defined in the bill, as is an electronic
service.
electronic service provider means a person that, individually or as part of a group, provides an electronic service, including for the purpose of enabling communications, and that
(a) provides the service to persons in Canada; or
(b) carries on all or part of its business activities in Canada.
You will note
that it says it provides an electronic service, “including for the purpose of
enabling communications”. The use of the word “including” clearly signals that
it is not limited to those providers who are strictly engaged in
communications. It goes broader than that, as we can see from the very broad
definition of electronic service:
electronic service means a
service, or a feature of a service, that involves the creation, recording,
storage, processing, transmission, reception, emission or making available of
information in electronic, digital or any other intangible form by an
electronic, digital, magnetic, optical, biometric, acoustic or other
technological means, or a combination of any such means.
Hey, I am in
the business of creating information in digital form. What is a YouTube video,
or podcast? Or emails to my clients. My law firm is in the business of creating
information in digital form. The Canadian Broadcasting Corporation, the Globe
and Mail and the Canadian Press are in the business of creating information in
digital form. I am not sure that any business exists in Canada that is not in
some way creating, processing or storing digital information. This is
dramatically broad. In conversations I have had with people from Public Safety,
it is clearly their intent to cover traditional telcos, internet service
providers and ALSO cloud computing providers, social media providers and online
game services. Again, this is dramatically broad.
The Bill is
going to deal with two broad categories of electronic service providers. The
first is something called a “core provider”, and there will be subcategories of
core providers. The second group is the rest of the universe that could fit
into the category or definition of “electronic service provider”.
The categories
of core providers are to be listed in the schedule to the Act, which is currently
blank, not surprisingly. So these core providers are going to be subject to a
number of obligations that will be set out in the regulations. Subsection (2)
describes these obligations, but note the use of the word “including” which
means that the regulations and the obligations can go well beyond what is
listed in subsections (a) through (d).
(a) the development, implementation, assessment, testing and maintenance of operational and technical capabilities, including capabilities related to extracting and organizing information that is authorized to be accessed and to providing access to such information to authorized persons;
This is
essentially a requirement to build in the operational and technical
capabilities to enable access to information on the core provider’s
infrastructure or within their systems.
(b) the installation, use, operation, management, assessment, testing and maintenance of any device, equipment or other thing that may enable an authorized person to access information;
This can
require core providers to install particular devices or equipment on their
infrastructure.
(c) notices to be given to the Minister or other persons, including with respect to any capability referred to in paragraph (a) and any device, equipment or other thing referred to in paragraph (b); and
It’s not yet
clear what these notices are all about ….
(d) the retention of categories of metadata — including transmission data, as defined in section 487.011 of the Criminal Code — for reasonable periods of time not exceeding one year.
The requirement
to retain metadata was NOT in Bill C-2, the Strong Borders Act. This is
very concerning. There are some small protections about this, in subsection
(4). That says:
(4) Paragraph (2)(d) does not authorize the making of regulations that require core providers to retain information that would reveal
(a) the content — that is to say the substance, meaning or purpose — of information transmitted in the course of an electronic service;
(b) a person’s web browsing history; or
(c) a person’s social media activities.
Ok. That’s some
protection. But it does not put location information out of scope, which is
concerning. The government clearly wants all cellphones to be trackable, and
under this authority they can be required to save your detailed location
history for a full year.
Subsection (3)
lists a number of factors that the government must take into account in
creating and drafting the regulations which place the specific obligations on
the core providers. These include …
(a) the benefits of the regulation to the administration of justice, in particular to investigations under the Criminal Code, and to the exercise of powers and the performance of duties and functions under the Canadian Security Intelligence Service Act;
(b) the feasibility of compliance with the regulation for the core providers;
(c) the costs to be incurred by the core providers to ensure compliance with the regulation;
(d) the potential impact of the regulation on the persons to whom the core providers provide services;
(e) the potential impact of the regulation on privacy protection and cybersecurity; and
(f) any other factor that the Governor in Council considers relevant.
I am glad that
they have included the potential impact on privacy and cybersecurity. I would
like it if it required the government to release their analysis of all these
considerations along with the regulatory impact analysis statement that will
accompany the regulations when they are first published.
The only good
news when dealing with core providers is that these requirements will be in a
regulation that will be public. We will be able to understand, at least in
general terms, what obligations are being imposed on these core providers.
There is
another bit of small comfort in subsection (5) which says
(5) A core provider is not required to comply with a provision of a regulation made under subsection (2), with respect to an electronic service, if compliance with that provision would require the provider to introduce a systemic vulnerability related to that service or prevent the provider from rectifying such a vulnerability.
Of course, this
turns on what is a “systemic vulnerability”, which is defined in the
bill:
systemic vulnerability means
a vulnerability in the electronic protections of an electronic service that
creates a substantial risk that secure information could be accessed by a
person who does not have any right or authority to do so.
electronic protection means
authentication, encryption and any other prescribed type of data protection.
Note that it is
limited to systemic vulnerabilities in “services”. It does not include devices
or processes. Just the services themselves. Professor Robert Diab has pointed out
that there’s enough wiggle room in this for the Minister to say that an
operating system, such as Windows or iOS, is not a “service”. Firmware is a part
of the device, so please root them all. (The use of the word “please” is only
because we’re Canadian … it would actually be an order.)
Also, what this
does NOT say is that the government is prohibited from requiring an ESP to
circumvent or undermine encryption. We have been told by the government that
they would never do that, but they do not seem willing to put it in the law.
The second
significant power contained in the Supporting Authorized Access to Information
Act is the ministerial order, set out in section 7. Essentially, the Minister of
Public Safety can issue secret orders directed at any one or more electronic
service providers to implement measures that could have been contained in a
regulation for a core provider, but these are secret and would be limited to a
defined time period. Of course this time can be extended at the discretion of
the minister. These orders can also be directed at ESPs that are already core
providers. Bonus requirements!
The only real
protection introduced since the Strong Borders Act is in subsection (2),
which says that these secret orders must be approved by the Commissioner
designated under the Intelligence Commissioner Act. I think this is a real
protection, principally because the intelligence commissioner has to be a
former Superior Court judge who would have spent a career dealing with criminal
law matters and Charter rights. He is currently entrusted with approving
certain National Security orders as a form of semi-judicial oversight. This is,
in my view, real progress.
Subsection (3)
of Section 7 sets out the sorts of considerations that the Minister has to take
into account before issuing a secret ministerial order. This parallels the
considerations that the government would have to take into account in issuing
regulations affecting core providers.
And subsection
(5) has a parallel provision saying that
(5) The electronic service provider is not required to comply with a provision of the order, with respect to an electronic service, if compliance with that provision would require the provider to introduce a systemic vulnerability related to that service or prevent the provider from rectifying such a vulnerability.
Section 14
creates an obligation for all electronic service providers to assist a range of
people to do a range of things on the Minister’s request. Remember, while we
review this, that my law firm, your doctor’s office and Apple are all
“electronic service providers”. It reads:
14 (1) On request made by the Minister, an electronic service provider must provide all reasonable assistance to a person or class of persons specified in the request to permit the assessment or testing of any device, equipment or other thing that may enable an authorized person to access information.
Persons to be assisted
(2) Only the following persons or classes of persons may receive assistance:
(a) the Minister;
(b) an employee of the Canadian Security Intelligence Service;
(c) a person appointed or employed under Part I of the Royal Canadian Mounted Police Act or a civilian employee referred to in section 10 of that Act;
(d) a civilian employee of another police force;
(e) a peace officer, as defined in section 2 of the Criminal Code.
There is some
protection in subsection (4) so that “the assessment or testing must not have
the effect of granting access to personal information.”
One of the huge
problems I have with these ministerial orders is the mandatory secrecy that
surrounds them. Without exception, under section 15, an ESP is prohibited by
law from revealing that they are subject to an order, the substance or contents
of an order, or any dialogue they’ve had with the Minister in connection with
any order.
This is
draconian, overbroad and frankly offensive. There’s no requirement that the
Minister be satisfied that disclosure of this information would be harmful to
law enforcement or to national security. There is no sunset and no means by
which an ESP can challenge the gag order if they think it’s in the public
interest to disclose the information. I am not sure that this provision, on its
own, would survive a Charter challenge. It also means that a foreign
company can’t advise their own government that they are subject to an
order.
I can’t help
but think of the fact that under the UK equivalent of this law, Apple was
issued with a secret order to circumvent or turn off encryption on iCloud.
Apple couldn’t tell anyone, yet it somehow leaked. The United States government
was of the view that this was contrary to an agreement between the UK and the
US, but Apple was prohibited by UK law from letting their own government know
what shenanigans the US’ own ally was engaging in.
The bill does
anticipate at section 17 that ESPs may seek judicial review of a Minister’s
order, but the cards are again stacked in favour of secrecy and of conducting
that review outside of public scrutiny.
Section 18
allows the government to make a range of regulations related to confidentiality
and security. These are scaled back from the absurd scope anticipated in the Strong
Borders Act. There are security and confidentiality rules for judicial
proceedings provided for in subsection (b). Subsections (c) and (d) authorize
regulations related to ESP employees and contractors involved with law
enforcement and national security access to information, including security
clearances, where those personnel are located, and where facilities are located. As I
understand it, most American service providers run this function from the US
and I’m sure they will not be interested in moving that to Canada or having
their employees subject to Canadian security clearances. I would imagine that
some companies will just decide to not do business in Canada.
Part 2 also
contains a whole regulatory oversight structure, with inspections, audits and
penalties. I’m not going to get into that today.
Throughout this
discussion, I can’t help but be reminded that the US has had something similar
in their laws for some time, and the mandated intercept capabilities were used
by Chinese hackers to get access to data.
The "Salt
Typhoon" hacking incident, attributed to a Chinese state-sponsored
advanced persistent threat (APT) actor, came to light in late 2024 with
revelations that the group had extensively compromised the computer systems of
multiple major US telecommunications companies. The stolen information included
call and text message metadata, and in some high-profile instances, even audio
recordings of phone calls belonging to government officials and political
figures.
A critical
factor facilitating the Salt Typhoon incident was the very infrastructure put
in place to comply with the Communications Assistance for Law Enforcement Act
(CALEA). Enacted in 1994, CALEA mandates that telecommunications providers
build "lawful intercept" capabilities into their networks to allow
law enforcement and intelligence agencies to conduct court-authorized wiretaps.
While intended for legitimate surveillance, these mandated
"backdoors" created inherent vulnerabilities within the telecom networks.
Salt Typhoon exploited these CALEA-mandated systems, effectively turning the
tools designed for lawful access into pathways for unauthorized
espionage.
This is what’s
coming to Canada …
So let’s bring
this down to earth and make it more concrete. At a technical briefing this
week, the government offered only two examples for why they think we need the
Supporting Authorized Access to Information Act:
CSIS cannot track a cellphone
CSIS is trying to determine the
movements of a terrorist group and has received a warrant to track a person of
interest’s cellphone. The electronic service provider did not have the
necessary capabilities to track the device because they are not required to. As
a result, CSIS had to resort to costly and risky in-person surveillance.
With C-22: The GIC will have the
authority to make regulations requiring that ESPs develop and maintain location
tracking capabilities that are standard in Europe and among the Five Eyes.
First of all, I
don’t really care what they are doing in the other Five Eyes. Essentially, the
UK, Australia and New Zealand don’t have a Charter of Rights and Freedoms
and their surveillance laws reflect that. And a law doesn’t become acceptable
just because we’d be doing what they do in “Europe and among the Five Eyes.” I
bet the Chinese security services have this capability.
Let’s take a
moment to ponder this scenario and what it means. CSIS wants to be able to
track any cellphone in real-time, with a warrant. That means that they want
every cellphone in Canada to be a tracking device. And they want historical
metadata – which includes location data – retained for one year.
The second
example is equally sympathetic, but shows that the government wants everyone to
be carrying a tracking device:
Police cannot consistently obtain
location information
An at-risk 16-year-old girl was reported
missing. She had already been missing for 10 days when she made an emergency
call. The telecommunications provider was able to confirm the call and the
tower used to make the call but could not provide the last known location of
the phone before it was disconnected since they are not required to have that
capability.
With C-22: Core providers would be
required to maintain accurate and consistent localization capabilities across
the country.
That device in
your pocket will be a tracking device. And the law doesn’t say that this data
can only be accessed if you’re a suspected terrorist or a missing teenaged
girl. It can be tracked by ANY police agency in Canada with an order issued
merely on “reasonable grounds to suspect.” Judicial authorization isn’t even
required in a whole bunch of cases: There are dozens of laws that permit
regulators and others to access this data without judicial authorization.
“If you build
it, they will come.” And the government wants ESPs to build the surveillance
infrastructure for them, to which the police and others will almost certainly
come. And this is even without considering that the backdoors will be a HUGE
target for cybercriminals and threat actors.
I don’t think
that the government has come close to making any sort of compelling case for
Part 2 of Bill C-22, and certainly not one that convinces me that the public
safety interest in building all of this surveillance infrastructure outweighs
the privacy and cybersecurity risk of doing so.
We should also
be looking at this through the lens of what we have now. If the police or CSIS
get a production order, a wiretap order or a tracking order, they can also ask
the judge to issue an “assistance order”. This is an order, directed at the
service provider, ordering them to give all reasonable assistance, reasonably
required to give effect to the production order, wiretap order or tracking
order. On every occasion when I have brought this up with “lawful access”
supporters, nobody has been able to point me to any problems with this.
Assistance orders are like one-off ministerial orders that are appropriately
tailored to the case and circumstances, and are signed off by a judge. And
they’re subject to judicial review. I’m not sure the current system is broken.
It just doesn’t give the police friction-free access to the universe of data
that they want collected on their behalf.
I expect I’ll
probably have more to say about this as Bill C-22 works its way through
Parliament. I will reiterate that I’m glad the government largely went back to
the drawing board and largely fixed Part 1. Part 2 is better than it was
before, but I don’t think it should be passed in its current form. It is wildly
problematic.
An overview of privacy law that regulates private sector businesses in Canada (or those outside of the country who deal with personal information of Canadians): the Personal Information Protection and Electronic Documents Act (PIPEDA).
Introduction
Today I'm going to be talking about Canadian privacy law—a bit of a primer on the subject that will hopefully be useful for a range of folks.
This is intended to be general information, an overview, and a primer. This is a complicated area of the law, and it's one that is changing regularly and one that is really primed to change again in a significant way.
Look at the date on this; the information may become out of date relatively quickly. We expect that there will be a new bill presented in Parliament to completely replace our current federal privacy law. So you might ask “why do an overview of a law that’s on its way out?” Well, even if we do get a new privacy bill in the spring of 2026 and it passes, I expect it’ll be years before it is fully implemented.
And any new law will likely be very similar, at least in many significant ways.
So, what I'm going to talk about is why Canada has so many privacy laws to begin with. Then I'm going to focus specifically on Canada's federal private sector privacy law, the Personal Information Protection and Electronic Documents Act (PIPEDA). Within that, I'm going to talk about some key concepts that are contained in the legislation. I'll talk about the 10 principles that PIPEDA, the federal privacy law, includes. I'm going to talk about how the legislation is enforced, and then I'm going to finally talk about data breach notification as it exists in the Personal Information Protection and Electronic Documents Act. Throughout, I’ll touch on some of the similarities and differences between our various privacy laws.
The Canadian Privacy Landscape
So, what's the current privacy law landscape in Canada? Well, we have a mosaic of privacy laws, or you could even say we have a mess of privacy laws. Canada is a federal country, and unfortunately, I’ll have to talk a bit about federalism.
But across the country from coast to coast to coast, pretty well all government activity is subject to one form of privacy law or another. All private businesses operating in Canada are subject to a variety of privacy laws. The healthcare sector is subject to privacy laws in varying ways in different provinces. And the private sector workplace is really not subject to much regulation, other than in what's called a federal work, undertaking or business (a business within federal jurisdiction) or in private sector workplaces in British Columbia, Alberta, and Quebec.
Canada is a federal country. We have a federal government, and we have provinces and we have territories. And the Canadian Constitution gives certain jurisdictions, or certain forms of jurisdictions, certain powers. So it's divided between the federal government and the provinces. The territories are within federal jurisdiction.
Within our constitution, provinces have exclusive jurisdiction to legislate over what's called "property and civil rights," and this generally includes privacy. And so the provincial governments have exclusive jurisdiction over privacy when it's a matter of property or civil rights. The federal government has jurisdiction over something called "general trade and commerce," which is actually less general than you might think it is. And the federal parliament also has jurisdiction over federal works, undertakings, or businesses. Those are telecommunications companies, federally chartered banks, airlines, inter-provincial works, and things like that.
Only the provinces can pass “true” privacy laws, but the federal government can regulate how businesses manage personal information. So what we end up with is overlapping or potentially overlapping jurisdiction for privacy.
In Canada, we don't have federal supremacy where the existence of a federal law will automatically override a similar or identical provincial law. So we have a situation where the federal government has jurisdiction over certain things, and privacy can be characterized as a matter of regulating the general trade and commerce in Canada, and provinces have jurisdiction over privacy as a matter of property and civil rights. And so the two have to find a way to co-exist. It's not that elegant, but generally, it works in Canada.
Each provincial and federal government can clearly regulate themselves—there's no doubt about that under the Canadian Constitution. And the provincial public sector also includes what we sometimes call the MUSH sector: Municipalities, Universities, Schools, and Hospitals. So provincial and federal governments and their Crown corporations, for example, and their agencies are subject to federal or provincial public sector privacy laws.
Some provinces have specific statutes for the health sector, and I'm not going to get into that too much.
At least in the private sector, we have a possibility of overlapping and contradictory jurisdiction since the provinces can regulate privacy as a matter of civil rights, and the federal government can regulate how businesses collect, use and disclose personal information. When the federal Personal Information Protection and Electronic Documents Act was passed, only one province – Quebec – already had a private sector privacy law. Quebec is very protective of its jurisdiction, so to try to avoid fights, the federal parliament built in a mechanism by which the federal government could cede jurisdiction for privacy in a province that has a substantially similar law.
Currently, Quebec, Alberta and British Columbia have general private sector privacy laws that are deemed to be substantially similar, so the federal law does not apply in those provinces where the provincial law applies. The same has been done for a number of health privacy laws, like the ones in Ontario, Nova Scotia, New Brunswick, Prince Edward Island, and Newfoundland and Labrador.
Development of PIPEDA and the CSA Model Code
Though we could have just looked at the European Data Protection Directive that was enacted in 1995, Canada did its own "made in Canada" solution. In the 1990s, the Canadian Standards Association (CSA), which sets standards for electrical devices and business processes, did a very broad consultation and came up with what was intended to be a self-regulatory code for privacy in Canada. It’s called the Canadian Standards Association Model Code for the Protection of Personal Information. This was adopted in 1996 as a national standard of Canada.
Importantly, it was developed with a wide range of consultations across a large number of industries. There was also general consensus that it was pretty good. If you have an international background in privacy, you'll see that it significantly overlaps with and echoes the OECD guidelines from the Organization for Economic Cooperation and Development. The OECD guidelines set out eight principles; the CSA model code has 10 general principles. I'm going to go through each of those 10 principles and talk about how they're implemented within Canada.
So how was PIPEDA developed? In the 1990s, the government of Canada wanted to use the general trade and commerce power to implement a privacy law. Rather than coming up with one from scratch or poaching the European Data Protection Directive, the then federal government just decided to implement the CSA model code. We had this great code, there was a lot of consensus around it, and we wanted to come up with a privacy law. Why look further afield?
And so PIPEDA is an unusual statute in a bunch of ways. It has two parts: one part related to personal information protection, the second part related to electronic documents. Essentially, the “Personal Information Protection Act” and the “Electronic Documents Act”, but they jammed them both into one Act. Part one covers privacy, but they slapped the CSA Model Code for the Protection of Personal Information onto the back of it and said that those organizations that are subject to these rules have to follow the CSA model code.
Now there are quite a few exceptions. The legislation has also been updated a couple of times. The most significant revamp was with the Digital Privacy Act a number of years ago, which put in place data breach notification requirements that I'm going to talk about later on, and also implemented an exception to the consent rule related to certain kinds of business transactions.
Now PIPEDA was designed to be adequate for the purposes of the European Data Protection Directive for cross-border data transfers out of Europe. Even though PIPEDA is really, really old, its adequacy was just renewed in January of 2024.
Key Concepts: Commercial Activity and Personal Information
So how does PIPEDA work? What organizations and activities does it apply to?
A key concept that one needs to understand in order to understand PIPEDA and how it works is the concept of "commercial activity". PIPEDA is based on the general trade and commerce power that the federal government has under the Canadian Constitution. And PIPEDA was designed to go as far as federal jurisdiction would permit it to go. So PIPEDA applies to the collection, use, and disclosure of personal information in the course of commercial activity. It also applies to workplaces and employee personal information but only for federal works, undertakings, and businesses. Those are the kinds of enterprises that are within exclusive federal jurisdiction. (Think airlines, federally chartered banks, telecommunications and the like.)
We also have to talk about a key concept called "personal information". The statute is all about personal information. If you're not talking about personal information, this statute does not regulate it. And personal information, in short, means any information about an identifiable individual, excluding certain business contact information when that business contact information is used to contact an individual in their business role. But it's a very broad definition, so it's any information related to an identifiable individual. So if you can identify the individual from that information, it is going to be personal information.
If it's reasonable that you could identify an individual from that information, or you could correlate that information to an individual, it will also be considered to be personal information. And so that clearly includes somebody's name, their address, their income, health information, demographics, Social Insurance Number, their image, their photograph, biometrics, and things like that. So it's quite a broad definition. If information is adequately anonymized so there's no reasonable possibility of connecting it to an individual, then it would be out of scope of the legislation and the law would not apply to it.
Now an important thing—and this mainly comes up with dealing with American companies and American lawyers—is that whether information is personal information, and therefore subject to regulation, doesn't depend on whether it's "private" information. It doesn't matter whether that information is publicly known or publicly shared. It really has nothing to do with your expectation of privacy in that information. If it is information about an identifiable individual, it is in scope of the legislation and regulated. There may be some consent exceptions related to publicly available information, but those actually seldom come into play because they’re so narrowly tailored.
PIPEDA also has a baseline "reasonableness" requirement. So an organization can only collect, use, or disclose personal information for purposes that a reasonable person would consider are appropriate in the circumstances. And that’s regardless of whether there’s consent.
This provision was seldom used until recent Privacy Commissioners started to look more closely at whether or not the purposes for which certain businesses collect, use, or disclose personal information are reasonable. They sometimes call these “no go zones”. Again, if the purposes are not reasonable, it does not matter whether you have the individual's consent; this is an absolute guardrail provision. Now of course, what is reasonable in the circumstances could differ significantly from one person's point of view to another, and I draw the line in a different place than the Commissioner often does, but this has to be understood as a baseline principle.
The 10 Principles of the CSA Model Code
Recall that the law essentially says: “Behold the CSA Model Code! If you’re engaged in commercial activity, thou shalt follow it!”
Now all 10 principles can be found to greater or lesser degrees in all privacy laws in Canada, including in the Privacy Act, which regulates the federal government and its agencies. So the CSA model code has 10 principles, and I'm going to walk through all 10 and talk about how they are implemented within the Canadian PIPEDA framework.
Principle 1: Accountability
The first principle is called accountability.
This says an organization is responsible for personal information under its control and has to designate an individual or individuals who are accountable for the organization's compliance with the 10 principles of the CSA model code. That doesn't mean that that individual or those individuals are personally liable. They’re not the folks who get arrested by the privacy cops in dawn raids.
But what it means is that an organization has to appoint a privacy officer. There has to be somebody or a group of somebodies who are responsible within the organization for making sure that these rules are followed, so there's internal accountability. The Code doesn’t say they have to have a particular title, but they’re generally also the privacy spokesperson for the organization, the liaison for customers, and the person who deals with our privacy regulators if necessary.
What it also means is that the organization remains accountable for personal information that it has collected, used, or disclosed, even if it transfers that information to another party to handle it on its behalf.
This is similar to the notion of "controllers" and "processors" in Europe. We do not use the exact same language, but the principle is applicable. If you are the organization that is facing the customer and you have collected personal information from that customer for your purposes, and then you give it to a contractor to manage on your behalf, the first organization remains legally responsible for it and has to make sure that there are contracts in place with their service providers so that the contractors will handle it only on their behalf and will do all the necessary things to remain compliant with the law.
If the contractor screws up, the responsibility remains with the original organization. You can’t contract out of ultimate responsibility under Canadian privacy law.
There is a very important distinction between a "transfer" and a "disclosure". An organization can transfer personal information to a contractor without consent where the contractor is only going to use it as a processor on behalf of the original organization. If it is shared with another organization so that the recipient organization can use it for their own purposes, then that’s a disclosure. A disclosure requires consent, and the company that gets the personal information becomes legally responsible for managing it and protecting it.
Principle 2: Identifying Purposes
The second principle is called identifying purposes. I think this is one of the most important of the ten principles.
The CSA model code says the purposes for which personal information is collected shall be identified by the organization at or before the time the information is collected. This has two parts:
(1) the organization has to identify – and hopefully document – what it proposes to do with the personal information; and
(2) the organization has to communicate those purposes to the individual before it collects their personal information.
And it really should be noted that privacy policies seldom satisfy this requirement. Because the purposes have to be identified to the individual at or before the time the information is collected, just having a privacy policy on your website does not provide any assurance that the customer or the individual has read, understands, or knows what those purposes are.
One exception may be, for example, on account creation where an individual is required to flip through the privacy statement prior to creating an account and then clicks "I agree".
So what this means in practice is that every organization has to document internally what are all the purposes for which they collect, use, or disclose personal information. Those documented purposes have to be communicated to the individual at or before the time the personal information is collected. Now that can be done orally or it can be done in writing, but the important thing is that it has to be done.
And employees who collect personal information on behalf of a company need to be able to explain the purposes to individuals. This information needs to be provided in a manner that gives you some reasonable confidence that individuals understand what those purposes are and what it is that they're agreeing to.
Principle 3: Consent
Principle 2 is linked very closely with Principle 3. Principle 3 is the consent principle, and this says the knowledge and consent of the individual are required for the collection, use, or disclosure of personal information, except where inappropriate. Now notice that "except where inappropriate" has to be struck out—it no longer applies. The only exceptions to the consent rule are contained in the statute itself in Section 7.
That may have made some sense when the CSA Model Code was designed to be a voluntary code and the organization could determine when it was not appropriate. But under PIPEDA, organizations don't get to choose whether or not it's inappropriate to seek consent. Consent is the only basis upon which personal information is collected, used, or disclosed, unless those exceptions apply. And those exceptions are significant outliers.
So unlike in Europe where there are other grounds for processing personal information in the private sector, consent is the principle that is at play in Canada.
This consent has to be informed consent; that’s why Principle 2 (identifying purposes) is so important. The individual has to be told at or before the time the information is collected what the purposes are for the collection, use, or disclosure of personal information. And those “purposes” are the parameters for the consent obtained.
The principle also says that the form of the consent is going to be dependent upon the sensitivity of the information. So the more sensitive the information, the greater the burden of consent. Expectations also come into play. If the consumer expects you to use it for the obvious purposes, consent can be implied.
So you can have opt-out consent where the information is really not sensitive. Opt-in consent would be preferred in most cases. If you're dealing with sensitive information—health information, information about somebody's intimate life or family life or things like that—you would want to make sure that they expressly agree that their information can be collected, used, or disclosed for that purpose.
Written consent should be used in a range of cases, particularly where you’re going to want a record of the consent and a clear record of what was consented to.
This principle also says you cannot require that an individual consent to a collection, use, or disclosure of personal information that's not necessary to fulfill the explicitly stated and legitimate purposes.
Individuals can withdraw consent. This is similar to the European "right of erasure" but not identical. So an individual can withdraw consent at any time, but the organization has the obligation of telling the individual what are the consequences of that withdrawal of consent. For example, the organization might not be able to provide services to the individual if the individual does not consent to the collection, use, and disclosure of personal information that's necessary for the provision of those services.
And the consent of an individual is only valid if it is reasonable to expect that the individual would understand the nature, purposes, and consequences of the collection, use, or disclosure of the personal information to which they're consenting. This highlights the importance of being clear to the individual what those purposes are and having confidence that the individual does in fact understand what those purposes are.
Principle 4: Limiting Collection
Principle 4 is closely aligned with Principle 5, and both of them link back to Principle 2 of identifying the purposes. So Principle 4 says the collection of personal information shall be limited by that which is necessary for the purposes identified by the organization.
So you can only collect personal information that's reasonably necessary for the purposes that you've identified. You cannot collect any more personal information if it's not reasonably necessary for those purposes. And information shall be collected by fair and lawful means, so no use of deceit or trickery or anything else like that.
Note again, this loops back to the purposes identified in Principle 2. Those purposes set the guardrails.
Principle 5: Limiting Use, Disclosure, and Retention
And then Principle 5 leads us to: “you can only use personal information or disclose personal information for the purposes that have been identified.” Again, so much of this comes back to clearly identifying the purposes to the individual. And those purposes create significant guardrails around that information. That information cannot be used for any other purpose unless you go back to the individual, you identify the new purposes, and you get new consent for that.
There's also a requirement to limit the retention of personal information. Personal information shall only be retained as long as is necessary for the fulfillment of those purposes. So the organization needs to clearly document what the purposes are and what the lifecycle of the data is.
The law doesn’t specifically say you need a written document retention plan, but you really should have one. When it is no longer necessary for the purposes that are identified, that information has to be destroyed. Notably, it also says it can be made anonymous; if it's made anonymous, then it's no longer personal information and no longer subject to the legislation.
Principle 6: Accuracy
Principle 6 is the accuracy principle, and this says that personal information shall be as accurate, complete, and up-to-date as is necessary for the purposes for which it is to be used. And so again, it ties back to the purposes that have been identified to the individual.
This principle really only comes into play when personal information is used to make a decision about somebody. And so an organization needs to make sure that the information is as accurate as it needs to be for those purposes, taking into account the consequences of that decision for the individual. But information should not routinely be updated "just because".
Principle 7: Safeguards
Principle 7 is a key one: it's entitled "Safeguards". Personal information shall be protected by security safeguards appropriate to the sensitivity of the information. And it goes on to say that personal information must be protected from many threats: loss, theft, unauthorized access, unauthorized disclosure, copying, use, modification. And this obligation exists regardless of the format in which it is held.
Now you'll note that this is principles-based. This requires an organization to use safeguards that are reasonable and appropriate in light of the sensitivity of the information. So we don't have prescriptive rules that say this sort of information must be encrypted or this sort of information must be kept under lock and key.
This is designed to be technologically neutral so that it would survive over time. This was written in the late 1990s and became law in 2001, and what counts as “reasonable safeguards” now differs substantially from what were reasonable safeguards in 2001. It's intended to be flexible and fluid.
What I generally tell my clients is that you need to implement at least the "state of the art" of security safeguards that are prevalent in your industry—not just in Canada, but also look internationally. And try to do one better than that.
This doesn't require a standard of perfection. The safeguards need to be reasonable and appropriate in the circumstances. A company is NOT expected to spend a million dollars to protect a hundred dollars worth of personal information. And as information technology systems get more complicated, safeguarding that information gets more complicated and more difficult.
Principle 8: Openness
Principle 8 is called openness. An organization shall make readily available to individuals specific information about its policies and practices related to its management of personal information. So this essentially means the organization has to have a privacy policy. The privacy policy is not about identifying the purposes in order to get consent; the privacy policy is in order for the organization to be open and transparent.
That privacy policy has to have contact information for the privacy officer—doesn't have to name them, but has to have the contact information. It has to tell the individual how they can exercise their access rights.
It has to give the individual a general account of what personal information the organization routinely collects, uses, and discloses, and how it is used. This can be done through brochures or through the website or other things like that. And the organization also has to let the consumer know what personal information is made available to related organizations.
The Privacy Commissioner of Canada has also said the privacy statement should include information about what personal information may be stored outside of Canada, transferred outside of Canada, or accessed from outside of Canada. That is not in the statute, but it certainly is a best practice. The Alberta and Quebec privacy laws make those disclosures mandatory.
Principle 9: Individual Access
Principle 9 is individual access. So upon request, an individual shall be informed of the existence, use, and disclosure of his or her personal information and shall be given access to that information. In that process, an individual shall be able to challenge the accuracy and completeness of the information and have it amended as appropriate. So this is a data subject access right. The organization has to respond within 30 days.
And the organization needs to let the individual know to whom their information may have been disclosed. So organizations effectively have to keep a record of how they use personal information and to whom it's been disclosed.
This access should be at minimal or at no charge, and the information provided needs to be comprehensible to the individual, so abbreviations and technical terms may need to be explained.
There are some limitations and some exceptions to this access right, such as confidential business information, third party personal information and information that is privileged.
What is interesting is that this right is not exercised as often in Canada as you might think.
Principle 10: Challenging Compliance
The final principle is called challenging compliance. And this says an individual shall be able to address a challenge concerning compliance with the above principles to the designated individual or individuals who are accountable for the organization's compliance.
This is just common sense. The organization will want to hear complaints first before the individual goes to the regulator. The organization will probably want to have an opportunity to address them and to fix them before an individual chooses a more formal path of recourse. The organization must have a method to receive complaints and address them properly, and it needs to let the individual know that they have a right to complain to the appropriate authority.
Enforcement Powers
So now I'm going to talk about enforcement powers under Canadian privacy laws. The Personal Information Protection and Electronic Documents Act is overseen by the Privacy Commissioner of Canada or the Office of the Privacy Commissioner of Canada, sometimes referred to as the OPC.
The Privacy Commissioner of Canada is an ombudsman. The Commissioner doesn't have the ability to levy fines or issue orders. Only the Federal Court of Canada can issue orders or award damages. What the Commissioner does, first and foremost, is deal with complaints. Any individual can send a written complaint to the Privacy Commissioner of Canada. The Commissioner can also initiate complaints of his own accord.
I should note that the Alberta, British Columbia and Quebec Privacy Commissioners can issue orders, and the Quebec commissioner also has considerable financial penalty powers.
But back to the federal Commissioner: After a complaint is received, the Commissioner investigates the complaint, and there's minimal involvement on the part of the complainant in most cases.
During that investigation, the Commissioner has very strong powers. So for example, the Commissioner can compel evidence, can essentially issue subpoenas, can administer oaths, and can accept evidence under oath. The Commissioner can also accept and review evidence that ordinarily would not be admissible in court. The Commissioner can also enter any premises other than a dwelling and review any documents there.
So far we've never had any "dawn raids" by the Privacy Commissioner of Canada. I don't think that any of these particularly intrusive powers were used until relatively recently. In my own experience, and in speaking with colleagues, those who are the subject of a complaint tend to cooperate, at least in the course of the investigation.
The end product of the investigation is a report. It's called a Report of Findings. The Commissioner has to issue a Report of Findings with respect to an investigation within one year from the day the complaint is filed. Now in my experience, that's seldom the case; they usually take more than a year. But that may reflect the complexity of cases that I generally deal with.
The report essentially says: here's what the person complained about, here is what I investigated, and here is what I found.
If the Commissioner found non-compliance, the report will include recommendations, and those recommendations will generally be communicated to the organization in the course of the investigation, so the organization can implement those prior to the conclusion of the investigation.
Though the Commissioner does not have order making powers nor can he levy penalties, the "naming and shaming" is a significant incentive for businesses to cooperate. Some of the findings are published—but not all. And for high-profile investigations, particularly those involving large American tech companies, there tends to be a lot of fanfare that goes along with the issuance of a report of findings, including press conferences and things like that.
Many organizations do not want to be the subject of naming and shaming like this, so will do what they can to be compliant to ultimately resolve the complaint to the satisfaction of the complainant and the Commissioner.
Those findings will fit into a number of categories:
Not well-founded: which means that the complaint was not made out, the Commissioner did not find any violations of Canadian privacy laws.
Well-founded and resolved: meaning that ultimately there was an issue, but it was resolved in the course of the investigation.
Well-founded and conditionally resolved: so the organization has been asked to report back with changes that it has made over a medium-term or longer-term.
Well-founded and unresolved: and those are relatively rare.
Organizations tend to want to resolve the matter during the investigation stage. And if it's unresolved, then the Commissioner can in fact take the organization to court, or the complainant can.
Court Hearings
Court hearings are essentially where the enforcement rubber hits the road. Some people suggest that the Commissioner's lack of the ability to levy fines or issue orders is a bug in the legislation, and the process of going to court is somewhat cumbersome. I tend to think it's more of a feature: when it comes to these sorts of measures, it's best reserved to a court, particularly where the resolution turns on the interpretation of the statute.
In these court hearings, a complainant—but not the organization—can start an application in our federal court for a hearing. And it is notable that the organization does not have any automatic ability to take the Commissioner to court to have the Commissioner’s report reviewed or appealed or overturned.
In fact, what happens in court is not an appeal at all; it's what's called a de novo proceeding. The court starts from scratch. The Commissioner might be a party with the cooperation of the complainant. It may in fact be the Commissioner who's carrying the bag on all of it in going to court, but it's not a review of the Commissioner's finding; they start from scratch. And this can only be done once the report from the Privacy Commissioner has been finalized and delivered.
There is a way to get into court in the course of an investigation on something called a "judicial review" if there are jurisdictional issues or other things that might need to be considered by the court, but generally, it's only after the report of findings is issued.
Perhaps not surprisingly, the court has pretty broad remedial powers—that's what courts do. The courts are empowered to order the organization to correct its practices in order to comply with the provisions of the act. The court can also require the organization to publish a notice of actions that it has taken in order to correct its practices—so, I guess, a "double naming and shaming". And finally, the court can award damages, including damages for humiliation that the complainant might have suffered.
It should be noted that there is no mechanism through PIPEDA for a class action to be brought within this process. You have an individual complainant, you have the Privacy Commissioner, and you have a case before a judge.
Commissioner Audits
The Commissioner also has the power to audit organizations.
The Commissioner can initiate one of these if, on reasonable grounds, the Commissioner believes the organization is contravening a provision of Division 1 or Schedule 1 of the act. And during the course of an audit, the Commissioner has pretty well the same powers that the Commissioner has in an investigation: take evidence, enforce attendance, and exercise the powers of a superior court of record. The Commissioner can enter any premises other than a dwelling house, and can examine any records or take extracts of records.
To my knowledge, the federal Privacy Commissioner of Canada has not initiated any audits of any private businesses. The Commissioner has, at least on one occasion, requested that the organization obtain a third-party audit and provide the report of that audit to the Commissioner. But the Commissioner would not be able to order that.
As I understand it, the Commissioner doesn't feel that their office has sufficient resources in order to go about auditing organizations. One thing that they have asked Parliament for is a power to order audits of organizations and their information handling practices.
So the key “stick” that the Commissioner actually has is this power of publicity. Because within the act, the Commissioner is specifically empowered to make public any information related to the personal information management practices of an organization if the Commissioner considers that it's in the public interest to do so.
Data Breach Notification
In 2015, Parliament amended PIPEDA to bring in data breach notification requirements.
We now have data breach reporting to the Commissioner, data breach notification to the affected individuals, and a record-keeping requirement embedded in these amendments.
It should also be noted that there may be a common law duty to notify affected individuals if their personal information has been compromised in a way that could affect them, particularly if giving them notice and warning would give them an opportunity to mitigate harm that could happen to them.
But we're going to focus on the statutory requirements.
As with any data breach law, you always have to be very careful about the definition of what is a "breach". So what triggers this whole process? In PIPEDA, it is a "breach of security safeguards", which means the loss of, unauthorized access to, or unauthorized disclosure of personal information resulting from a breach of an organization's security safeguards that are referred to in Clause 4.7 of Schedule 1 (so that's Principle 7) or from a failure to establish those safeguards.
The notice and reporting obligations become triggered if there is a breach of security safeguards where it is reasonable in the circumstances to believe that the breach creates a real risk of significant harm to an individual. This particular provision talks about the personal information being under the control of an organization. So this says to me that the obligation to report to the Commissioner is only on the part of a data controller, not a data processor.
As between any data processor and the controller, there should be a clear contract that says the processor will notify the controller so that the controller can report any data breach that they have to the Privacy Commissioner, and so that they can notify affected individuals.
Subsection 2 talks about what has to be in the report, and I'll get into that in just a moment. And Subsection 3 talks about notification to affected individuals.
Again, the definition—what is a breach of security safeguards—refers back to Principle 7, “Safeguards”. And so what this principle requires is that an organization implement reasonable security safeguards to protect against a list of risks that is appropriate and commensurate with the sensitivity of the information at issue. So it's not unduly prescriptive; it's what's reasonable in the circumstances.
And again, this comes back to the concept of sensitivity. So we don't have strictly defined categories of what is sensitive personal information. Personal information can be more sensitive or it could be less sensitive depending upon the circumstances, depending upon the context in which the information is collected.
We do have some helpful guidance or wording in the CSA model code to help determine what information is more sensitive or less sensitive. Certainly information about somebody's private life, their intimate life, their family life, information about their race, ethnicity, religion, those sorts of things, financial information, health information would all be considered to be at the more sensitive end of the spectrum.
But somebody's name can be less sensitive or more sensitive depending upon the circumstances. So if your name appears on a list of people who attended a hockey game, for example, that's not particularly sensitive. If your name appears on a list of people who have upcoming appointments with a psychiatrist, that would be sensitive information, because the context in which that information appears tells you information about that person's private life, their mental life, their health conditions, or things like that.
Real Risk of Significant Harm
The triggers of notification and reporting relate to "real risk of significant harm".
This is a two-part test: you look at the real risk and then you look at the possible significant harm. And real risk depends upon the sensitivity of the personal information involved and the probability that the personal information has been, is being, or will be misused. There may also be other prescribed factors, but none have been prescribed to date.
So you're looking at what's the likelihood that mischief will take place; what are the circumstances in which the breach took place?
One example may be a lost hard drive where there's no information to suggest it was stolen by a bad guy; it was just misplaced. You don't have any real sense that mischief is afoot, so that seems like a low risk of harm.
But if somebody breaks into your network and exfiltrates information, you already know that there's a bad guy involved, or a "threat actor" as the cool kids say. That tells you there’s a high risk that bad things are likely to happen. Or at least bad things are more likely to happen in a scenario like that.
The second part of the analysis is “significant harm”, and that requires you to ask “what could go wrong?” You ask “What could this information be used for? How could this information be abused?”
The legislation specifically identifies certain kinds of harm as significant: "bodily harm, humiliation, damage to reputation or relationships, loss of employment, business or professional opportunities, financial loss, identity theft, negative effects on the credit record and damage to or loss of property."
It ties pretty closely to the concept of sensitivity.
In some jurisdictions, reporting is based simply on the type of data involved – more often tied to risk of fraud and impersonation.
The significant harms that are at play and have to be considered in Canadian privacy legislation are much broader than that, and relate to kind of “softer elements” of privacy and personal life.
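For readers who build incident-response tooling, the two-part analysis above can be sketched as a simple triage helper. This is a hypothetical illustration only, not legal advice: the category names, field names, and scoring logic are my own invention, not the wording of the statute, and a real assessment is always contextual.

```python
from dataclasses import dataclass

# Hypothetical categories at the "more sensitive" end of the spectrum,
# echoing the CSA Model Code discussion above; this list is illustrative.
SENSITIVE_CATEGORIES = {"health", "financial", "intimate_life", "race", "religion"}

@dataclass
class Breach:
    data_categories: set         # kinds of personal information involved
    threat_actor_involved: bool  # e.g. an intruder exfiltrated the data
    device_recovered: bool       # e.g. a misplaced drive later found intact

def likely_rrosh(breach: Breach) -> bool:
    """Rough sketch of the two-part 'real risk of significant harm' test."""
    # Part 1: how sensitive is the information involved?
    sensitive = bool(breach.data_categories & SENSITIVE_CATEGORIES)
    # Part 2: how probable is misuse, given the circumstances of the breach?
    likely_misuse = breach.threat_actor_involved and not breach.device_recovered
    # Both limbs inform the conclusion; in this toy version both must point to risk.
    return sensitive and likely_misuse
```

So the misplaced hard drive with no sign of mischief comes out low risk, while exfiltration of financial data by a threat actor comes out high risk, mirroring the examples above.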
Reporting Requirements
For a report to the Commissioner, the legislation prescribes what has to be contained in that report. Not surprisingly, the Privacy Commissioner of Canada has a form on his website that organizations can fill out to provide this information.
They generally want to know:
who was the organization,
what was the nature of the information,
what were the circumstances of the breach,
when was it discovered,
how many people are affected,
what steps have you taken to stop the breach and to mitigate the risk of harm, and
who is able to be a point of contact for the Privacy Commissioner.
The Commissioner can initiate an investigation based on a report, but most of these are just received with thanks and that's largely the end of it. The notice to individuals is generally quite similar to the information that has to be provided to the Commissioner, though the organization is also required to tell the individual if there are steps that that individual could take to mitigate any harm to themselves.
Record-Keeping Requirements
Now one additional thing that's notable is that there's also a "record-keeping" requirement. Regardless of whether or not there's a real risk of significant harm to individuals, every organization must create a record of every breach of security safeguards, no matter how trivial.
That record has to contain essentially the same sort of information that you would include in a report to the Commissioner. It should also include information to substantiate the conclusion that there was not a real risk of significant harm to the affected individuals, so that no report was required.
These records have to be kept by the organization for two years, and they have to be provided to the Privacy Commissioner of Canada on request. So this does create a discoverable paper trail in the event of litigation.
It should also be noted that the Privacy Commissioner has in fact, of his own accord, conducted surveys of organizations, requiring them to provide all of these breach records to his office and his investigators in order to make sure that they are being created and maintained appropriately.
Importantly, it's an offence not to create these records, not to maintain them for the two-year period, and not to provide them to the Commissioner on request.
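As a rough sketch of what a record-keeping system might capture, here is a hypothetical breach-record structure. The field names are my own shorthand for the items discussed above, not the wording of the regulations, and using the discovery date as the start of the retention clock is an assumption for illustration.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Records must be kept for two years and be producible to the
# Privacy Commissioner of Canada on request.
RETENTION = timedelta(days=730)

@dataclass
class BreachRecord:
    organization: str
    discovered_on: date
    circumstances: str           # how the breach occurred
    information_involved: str    # nature of the personal information
    affected_count: int
    mitigation_steps: str        # steps taken to stop the breach and mitigate harm
    reported_to_commissioner: bool
    no_rrosh_rationale: str = "" # why no report was required, if applicable

    def retain_until(self) -> date:
        # Two-year retention, counted here from discovery (an assumption;
        # the regulations tie retention to the record's creation).
        return self.discovered_on + RETENTION
```

Note that even a trivial breach that is never reported still gets a record, with the `no_rrosh_rationale` field substantiating why no report was required.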
Conclusion
So, I hope this has been a useful, informative overview about Canadian privacy law. As I said, it was mainly intended for a general audience of folks who may have a need to know the basics of Canadian privacy laws.