Wednesday, March 04, 2026

PIPEDA: Canadian Privacy Law 101 - a primer on the privacy law that regulates businesses in Canada

 

An overview of the privacy law that regulates private sector businesses in Canada (and those outside the country that deal with the personal information of Canadians): the Personal Information Protection and Electronic Documents Act (PIPEDA).


Introduction

Today I'm going to be talking about Canadian privacy law—a bit of a primer on the subject that will hopefully be useful for a range of folks. 

This is intended to be general information, an overview, and a primer. This is a complicated area of the law, and it's one that is changing regularly and one that is really primed to change again in a significant way. 

Look at the date on this; the information may become out of date relatively quickly. We expect that there will be a new bill presented in Parliament to completely replace our current federal privacy law. So you might ask “why do an overview of a law that’s on its way out?” Well, even if we do get a new privacy bill in the spring of 2026 and it passes, I expect it’ll be years before it is fully implemented. 

And any new law will likely be very similar, at least in many significant ways. 

So, what I'm going to talk about is why Canada has so many privacy laws to begin with. Then I'm going to focus specifically on Canada's federal private sector privacy law, the Personal Information Protection and Electronic Documents Act (PIPEDA). Within that, I'm going to talk about some key concepts that are contained in the legislation. I'll talk about the 10 principles that PIPEDA, the federal privacy law, includes. I'm going to talk about how the legislation is enforced, and then I'm going to finally talk about data breach notification as it exists in the Personal Information Protection and Electronic Documents Act. Throughout, I’ll touch on some of the similarities and differences between our various privacy laws.


The Canadian Privacy Landscape

So, what's the current privacy law landscape in Canada? Well, we have a mosaic of privacy laws, or you could even say we have a mess of privacy laws. Canada is a federal country, and unfortunately, I’ll have to talk a bit about federalism. 

But across the country from coast to coast to coast, pretty well all government activity is subject to one form of privacy law or another. All private businesses operating in Canada are subject to a variety of privacy laws. The healthcare sector is subject to privacy laws in varying ways in different provinces. And the private sector workplace is really not subject to much regulation unless it's what's called a federal work or undertaking (a business within federal jurisdiction) or a private sector workplace in British Columbia, Alberta, or Quebec.


Canada is a federal country. We have a federal government, and we have provinces and territories. The Canadian Constitution gives certain powers to each level of government, divided between the federal government and the provinces. The territories are within federal jurisdiction.


Within our constitution, provinces have exclusive jurisdiction to legislate over what's called "property and civil rights," and this generally includes privacy. And so the provincial governments have exclusive jurisdiction over privacy when it's a matter of property or civil rights. The federal government has jurisdiction over something called "general trade and commerce," which is actually less general than you might think it is. And the federal parliament also has jurisdiction over federal works, undertakings, or businesses. Those are telecommunications companies, federally chartered banks, airlines, inter-provincial works, and things like that.


Only the provinces can pass “true” privacy laws, but the federal government can regulate how businesses manage personal information. So what we end up with is overlapping or potentially overlapping jurisdiction for privacy. 

In Canada, we don't have federal supremacy where the existence of a federal law will automatically override a similar or identical provincial law. So we have a situation where the federal government has jurisdiction over certain things, and privacy can be characterized as a matter of regulating the general trade and commerce in Canada, and provinces have jurisdiction over privacy as a matter of property and civil rights. And so the two have to find a way to co-exist. It's not that elegant, but generally, it works in Canada.


Each provincial and federal government can clearly regulate themselves—there's no doubt about that under the Canadian Constitution. And the provincial public sector also includes what we sometimes call the MUSH sector: Municipalities, Universities, Schools, and Hospitals. So provincial and federal governments and their Crown corporations, for example, and their agencies are subject to federal or provincial public sector privacy laws.


Some provinces have specific statutes for the health sector, and I'm not going to get into that too much. 

At least in the private sector, we have a possibility of overlapping and contradictory jurisdiction since the provinces can regulate privacy as a matter of civil rights, and the federal government can regulate how businesses collect, use and disclose personal information. When the federal Personal Information Protection and Electronic Documents Act was passed, only one province – Quebec – already had a private sector privacy law. Quebec is very protective of its jurisdiction, so to try to avoid fights, the federal parliament built in a mechanism by which the federal government could cede jurisdiction for privacy in a province that has a substantially similar law. 

Currently, Quebec, Alberta and British Columbia have general private sector privacy laws that are deemed to be substantially similar, so the federal law does not apply in those provinces where the provincial law applies.  The same has been done for a number of health privacy laws, like the ones in Ontario, Nova Scotia, New Brunswick, Prince Edward Island, and Newfoundland and Labrador.


Development of PIPEDA and the CSA Model Code

Though we could have just looked at the European Data Protection Directive that was enacted in 1995, Canada developed its own "made in Canada" solution. In the 1990s, the Canadian Standards Association (CSA), which sets standards for electrical devices and business processes, did a very broad consultation and came up with what was intended to be a self-regulatory code for privacy in Canada. It’s called the Canadian Standards Association Model Code for the Protection of Personal Information. This was adopted in 1996 as a national standard of Canada. 

Importantly, it was developed through a wide range of consultations across a large number of industries. There was also general consensus that it was pretty good. If you have an international background in privacy, you'll see that it has significant overlap with, and echoes of, the OECD guidelines from the Organization for Economic Cooperation and Development. Now the OECD guidelines contain eight principles; the CSA model code has 10 general principles. I'm going to go through each of those 10 principles and talk about how they're implemented within Canada.

So how was PIPEDA developed? In the 1990s, the government of Canada wanted to use the general trade and commerce power to implement a privacy law. Rather than coming up with one from scratch or poaching the European Data Protection Directive, the federal government of the day simply decided to implement the CSA model code. We have this great code, there’s a lot of consensus around it, and we want to come up with a privacy law. Why look further afield? 

And so PIPEDA is an unusual statute in a bunch of ways. It has two parts: one part related to personal information protection, the second part related to electronic documents. Essentially, the “Personal Information Protection Act” and the “Electronic Documents Act”, but they jammed them both into one Act. Part one covers privacy, but they slapped the CSA Model Code for the Protection of Personal Information onto the back of it and said that those organizations that are subject to these rules have to follow the CSA model code.


Now there are quite a few exceptions. The legislation has also been updated a couple of times. The most significant revamp was with the Digital Privacy Act a number of years ago, which put in place data breach notification requirements that I'm going to talk about later on, and also implemented an exception to the consent rule related to certain kinds of business transactions.

Now PIPEDA was designed to be adequate for the purposes of the European Data Protection Directive for cross-border data transfers out of Europe. Even though PIPEDA is really, really old, its adequacy was just renewed in January of 2024. 

Key Concepts: Commercial Activity and Personal Information

So how does PIPEDA work? What organizations and activities does it apply to?

A key concept that one needs to understand in order to understand PIPEDA and how it works is the concept of "commercial activity". PIPEDA is based on the general trade and commerce power that the federal government has under the Canadian Constitution. And PIPEDA was designed to go as far as federal jurisdiction would permit it to go. So PIPEDA applies to the collection, use, and disclosure of personal information in the course of commercial activity. It also applies to workplaces and employee personal information, but only for federal works, undertakings, and businesses. Those are the kinds of enterprises that are within exclusive federal jurisdiction. (Think airlines, federally chartered banks, telecommunications and the like.)


We also have to talk about a key concept called "personal information". The statute is all about personal information. If you're not talking about personal information, this statute does not regulate it. And personal information, in short, means any information about an identifiable individual, excluding certain business contact information when that business contact information is used to contact an individual in their business role. It's a very broad definition: if you can identify the individual from that information, it is going to be personal information.

If it's reasonable that you could identify an individual from that information, or you could correlate that information to an individual, it will also be considered to be personal information. And so that clearly includes somebody's name, their address, their income, health information, demographics, Social Insurance Number, their image, their photograph, biometrics, and things like that. So it's quite a broad definition. If information is adequately anonymized so there's no reasonable possibility of connecting it to an individual, then it would be out of scope of the legislation and the law would not apply to it.

Now an important thing—and this mainly comes up when dealing with American companies and American lawyers—is that whether information is personal information, and therefore subject to regulation, has nothing to do with whether it's "private" information. It doesn't matter whether that information is publicly known or publicly shared. It really has nothing to do with your expectation of privacy in that information. If it is information about an identifiable individual, it is in scope of the legislation and regulated. There may be some consent exceptions related to publicly available information, but those actually seldom come into play because they’re so narrowly tailored.

PIPEDA also has a baseline "reasonableness" requirement. So an organization can only collect, use, or disclose personal information for purposes that a reasonable person would consider are appropriate in the circumstances. And that’s regardless of whether there’s consent. 

This provision was seldom used until recent Privacy Commissioners started to look more closely at whether or not the purposes for which certain businesses collect, use, or disclose personal information are reasonable. They sometimes call these “no go zones”. Again, if the purposes are not reasonable, it does not matter whether you have the individual's consent; this is an absolute guardrail provision. Now of course, what is reasonable in the circumstances could differ significantly from one person's point of view to another, and I draw the line in a different place than the Commissioner often does, but this has to be understood as a baseline principle.

The 10 Principles of the CSA Model Code

Recall that the law essentially says: “Behold the CSA Model Code! If you’re engaged in commercial activity, thou shalt follow it!” 

Now all 10 principles can be found, to greater or lesser degrees, in all privacy laws in Canada, including the Privacy Act, which regulates the federal government and its agencies. So the CSA model code has 10 principles, and I'm going to walk through all 10 and talk about how they are implemented within the Canadian PIPEDA framework.


Principle 1: Accountability

The first principle is called accountability. 

This says an organization is responsible for personal information under its control and has to designate an individual or individuals who are accountable for the organization's compliance with the 10 principles of the CSA model code. That doesn't mean that that individual or those individuals are personally liable. They’re not the folks who get arrested by the privacy cops in dawn raids. 

But what it means is that an organization has to appoint a privacy officer. There has to be somebody or a group of somebodies who are responsible within the organization for making sure that these rules are followed, so there's internal accountability. The Code doesn’t say they have to have a particular title, but they’re generally also the privacy spokesperson for the organization, the liaison for customers, and the person who deals with our privacy regulators if necessary.


What it also means is that the organization remains accountable for personal information that it has collected, used, or disclosed, even if it transfers that information to another party to handle it on its behalf. 

This is similar to the notion of "controllers" and "processors" in Europe. We do not use the exact same language, but the principle is applicable. If you are the organization that is facing the customer and you have collected personal information from that customer for your purposes, and then you give it to a contractor to manage on your behalf, the first organization remains legally responsible for it and has to make sure that there are contracts in place with their service providers so that the contractors will handle it only on their behalf and will do all the necessary things to remain compliant with the law.

If the contractor screws up, the responsibility remains with the original organization. You can’t contract out of ultimate responsibility under Canadian privacy law. 

There is a very important distinction between a "transfer" and a "disclosure". An organization can transfer personal information to a contractor without consent where the contractor is only going to use it as a processor on behalf of the original organization. If it is shared with another organization so that the recipient organization can use it for their own purposes, then that’s a disclosure. A disclosure requires consent, and the company that gets the personal information becomes legally responsible for managing it and protecting it.


Principle 2: Identifying Purposes

The second principle is called identifying purposes. I think this is one of the most important of the ten principles. 

The CSA model code says the purposes for which personal information is collected shall be identified by the organization at or before the time the information is collected. This has two parts: 

(1) the organization has to identify – and hopefully document – what it proposes to do with the personal information; and 

(2) the organization has to communicate those purposes to the individual before it collects their personal information. 

And it really should be noted that privacy policies seldom satisfy this requirement. Because the purposes have to be identified to the individual at or before the time the information is collected, just having a privacy policy on your website does not provide any assurance that the customer or the individual has read, understands, or knows what those purposes are. 

One exception may be, for example, on account creation where an individual is required to flip through the privacy statement prior to creating an account and then clicks "I agree".


So what this means in practice is that every organization has to document internally what are all the purposes for which they collect, use, or disclose personal information. Those documented purposes have to be communicated to the individual at or before the time the personal information is collected. Now that can be done orally or it can be done in writing, but the important thing is that it has to be done. 

And employees who collect personal information on behalf of a company need to be able to explain the purposes to individuals. This information needs to be provided in a manner that gives you reasonable confidence that individuals understand what those purposes are and what it is that they're agreeing to.

Principle 3: Consent

Principle 2 is linked very closely with Principle 3. Principle 3 is the consent principle, and this says the knowledge and consent of the individual are required for the collection, use, or disclosure of personal information, except where inappropriate. Note, though, that "except where inappropriate" no longer applies. The only exceptions to the consent rule are contained in the statute itself, in Section 7. 

That may have made some sense when the CSA Model Code was designed to be a voluntary code and the organization could determine when it was not appropriate. But under PIPEDA, organizations don't get to choose whether or not it's inappropriate to seek consent. Consent is the only basis upon which personal information is collected, used, or disclosed, unless those exceptions apply. And those exceptions are significant outliers.


So unlike in Europe where there are other grounds for processing personal information in the private sector, consent is the principle that is at play in Canada. 

This consent has to be informed consent; that’s why Principle 2 (identifying purposes) is so important. The individual has to be told at or before the time the information is collected what the purposes are for the collection, use, or disclosure of personal information. And those “purposes” are the parameters for the consent obtained. 


The principle also says that the form of the consent is going to be dependent upon the sensitivity of the information. So the more sensitive the information, the greater the burden of consent. Expectations also come into play. If the consumer expects you to use it for the obvious purposes, consent can be implied. 

So you can have opt-out consent where the information is really not sensitive. Opt-in consent would be preferred in most cases. If you're dealing with sensitive information—health information, information about somebody's intimate life or family life or things like that—you would want to make sure that they expressly agree that their information can be collected, used, or disclosed for that purpose.

Written consent should be used in a range of cases, particularly where you’re going to want a record of the consent and a clear record of what was consented to. 

This principle also says you cannot require that an individual consent to a collection, use, or disclosure of personal information that's not necessary to fulfill the explicitly stated and legitimate purposes. 

Individuals can withdraw consent. This is similar to the European "right of erasure" but not identical. So an individual can withdraw consent at any time, but the organization has the obligation of telling the individual what are the consequences of that withdrawal of consent. For example, the organization might not be able to provide services to the individual if the individual does not consent to the collection, use, and disclosure of personal information that's necessary for the provision of those services. 

And the consent of an individual is only valid if it is reasonable to expect that the individual would understand the nature, purposes, and consequences of the collection, use, or disclosure of the personal information to which they're consenting. This highlights the importance of being clear to the individual what those purposes are and having confidence that the individual does in fact understand what those purposes are.


Principle 4: Limiting Collection

Principle 4 is closely aligned with Principle 5, and both of them link back to Principle 2 of identifying the purposes. So Principle 4 says the collection of personal information shall be limited to that which is necessary for the purposes identified by the organization. 

So you can only collect personal information that's reasonably necessary for the purposes that you've identified. You cannot collect any more personal information if it's not reasonably necessary for those purposes. And information shall be collected by fair and lawful means, so no use of deceit or trickery or anything else like that. 

Note again, this loops back to the purposes identified in Principle 2. Those purposes set the guardrails. 

Principle 5: Limiting Use, Disclosure, and Retention

And then Principle 5 leads us to: “you can only use personal information or disclose personal information for the purposes that have been identified.” Again, so much of this comes back to clearly identifying the purposes to the individual. And those purposes create significant guardrails around that information. That information cannot be used for any other purpose unless you go back to the individual, you identify the new purposes, and you get new consent for that.

There's also a requirement to limit the retention of personal information. Personal information shall only be retained as long as is necessary for the fulfillment of those purposes. So the organization needs to clearly document what the purposes are and what the lifecycle of the data is. 

The law doesn’t specifically say you need a written document retention plan, but you really should have one. When it is no longer necessary for the purposes that are identified, that information has to be destroyed. Notably, it also says it can be made anonymous; if it's made anonymous, then it's no longer personal information and no longer subject to the legislation.

Principle 6: Accuracy

Principle 6 is the accuracy principle, and this says that personal information shall be as accurate, complete, and up-to-date as is necessary for the purposes for which it is to be used. And so again, it ties back to the purposes that have been identified to the individual. 

This principle really only comes into play when personal information is used to make a decision about somebody. And so an organization needs to make sure that the information is as accurate as it needs to be for those purposes, probably taking into account what are the consequences of that decision to the individual. But information should not routinely be updated "just because". 

Principle 7: Safeguards

Principle 7, entitled "Safeguards", is a key principle. Personal information shall be protected by security safeguards appropriate to the sensitivity of the information. And it goes on to say that personal information must be protected from many threats: loss, theft, unauthorized access, unauthorized disclosure, copying, use, and modification. This obligation exists regardless of the format in which the information is held.


Now you'll note that this is principles-based. This requires an organization to use safeguards that are reasonable and appropriate in light of the sensitivity of the information. So we don't have prescriptive rules that say this sort of information must be encrypted or this sort of information must be kept under lock and key. 

This is designed to be technologically neutral so that it would survive over time. It was written in the late 1990s and became law in 2001, and what counts as “reasonable safeguards” now differs substantially from what would have been reasonable safeguards in 2001. It's intended to be flexible and fluid.

What I generally tell my clients is that you need to implement at least the "state of the art" of security safeguards that are prevalent in your industry—not just in Canada, but also look internationally. And try to do one better than that. 

This doesn't require a standard of perfection. The safeguards need to be reasonable and appropriate in the circumstances. A company is NOT expected to spend a million dollars to protect a hundred dollars worth of personal information. And as information technology systems get more complicated, safeguarding that information gets more complicated and more difficult.

Principle 8: Openness

Principle 8 is called openness. An organization shall make readily available to individuals specific information about its policies and practices related to its management of personal information. So this essentially means the organization has to have a privacy policy. The privacy policy is not about identifying the purposes in order to get consent; the privacy policy is in order for the organization to be open and transparent.

That privacy policy has to have contact information for the privacy officer—doesn't have to name them, but has to have the contact information. It has to tell the individual how they can exercise their access rights. 

It has to educate the individual with the general account of what personal information the organization routinely collects, uses, and discloses, and how it is used. This can be done through brochures or through the website or other things like that. And the organization also has to let the consumer know what personal information is made available to related organizations. 

The Privacy Commissioner of Canada has also said the privacy statement should include information about what personal information may be stored outside of Canada, transferred outside of Canada, or accessed from outside of Canada. That is not in the statute, but it certainly is a best practice. The Alberta and Quebec privacy laws make those disclosures mandatory. 

Principle 9: Individual Access

Principle 9 is individual access. So upon request, an individual shall be informed of the existence, use, and disclosure of his or her personal information and shall be given access to that information. In that process, an individual shall be able to challenge the accuracy and completeness of the information and have it amended as appropriate. So this is a data subject access right. The organization has to respond within 30 days.

And the organization needs to let the individual know to whom their information may have been disclosed. So organizations effectively have to keep a record of how they use personal information and to whom it's been disclosed. 

This access should be at minimal or at no charge, and the information provided needs to be comprehensible to the individual, so abbreviations and technical terms may need to be explained. 

There are some limitations and some exceptions to this access right, such as confidential business information, third party personal information and information that is privileged. 

What is interesting is that this right is not exercised as often as you might think in Canada.


Principle 10: Challenging Compliance

The final principle is called challenging compliance. And this says an individual shall be able to address a challenge concerning compliance with the above principles to the designated individual or individuals who are accountable for the organization's compliance. 

This is just common sense. The organization will want to hear complaints first, before the individual goes to the regulator. The organization will probably want an opportunity to address and fix problems before an individual chooses a more formal path of recourse. Organizations must have a method to receive complaints and address them properly, and they need to let individuals know that they have a right to complain to the appropriate authority.


Enforcement Powers

So now I'm going to talk about enforcement powers under Canadian privacy laws. The Personal Information Protection and Electronic Documents Act is overseen by the Privacy Commissioner of Canada or the Office of the Privacy Commissioner of Canada, sometimes referred to as the OPC.

The Privacy Commissioner of Canada is an ombudsman. The Commissioner doesn't have the ability to levy fines or issue orders. Only the Federal Court of Canada can issue orders or award damages. What the Commissioner does is the Commissioner deals with complaints first and foremost. Any individual can send a written complaint to the Privacy Commissioner of Canada. The Commissioner can also initiate complaints of his own accord.

I should note that the Alberta, British Columbia and Quebec Privacy Commissioners can issue orders, and the Quebec commissioner also has considerable financial penalty powers. 


But back to the federal Commissioner: After a complaint is received, the Commissioner investigates the complaint, and there's minimal involvement on the part of the complainant in most cases. 

During that investigation, the Commissioner has very strong powers. So for example, the Commissioner can compel evidence, can issue essentially subpoenas, can administer oaths, and accept evidence under oath. The Commissioner can also accept and review evidence that ordinarily would not be admissible in court. The Commissioner can also enter any premises other than a dwelling and review any documents in there.

So far we've never had any "dawn raids" by the Privacy Commissioner of Canada. I don't think any of these particularly intrusive powers had been used until relatively recently. It's been my experience, and that of colleagues I've spoken with, that those who are the subject of a complaint tend to cooperate, at least in the course of the investigation.

The end product of the investigation is a report. It's called a Report of Findings. The Commissioner has to issue a Report of Findings with respect to an investigation within one year from the day the complaint is filed. Now in my experience, that's seldom the case; they usually take more than a year. But that may reflect the complexity of cases that I generally deal with. 

The Report of Findings essentially says: here's what the person complained about, here's what I investigated, and here's what I found.

If the Commissioner found non-compliance, the report will include recommendations, and those recommendations will generally be communicated to the organization in the course of the investigation, so the organization can implement those prior to the conclusion of the investigation. 

Though the Commissioner does not have order-making powers, nor can he levy penalties, the "naming and shaming" is a significant incentive for businesses to cooperate. Some of the findings are published—but not all. And for high-profile investigations, particularly those involving large American tech companies, there tends to be a lot of fanfare that goes along with the issuance of a report of findings, including press conferences and things like that. 

Many organizations do not want to be the subject of naming and shaming like this, so they will do what they can to be compliant and to ultimately resolve the complaint to the satisfaction of the complainant and the Commissioner.

Those findings will fit into a number of categories:

  • Not well-founded: which means that the complaint was not made out, the Commissioner did not find any violations of Canadian privacy laws.

  • Well-founded and resolved: meaning that ultimately there was an issue, but it was resolved in the course of the investigation.

  • Well-founded and conditionally resolved: so the organization has been asked to report back with changes that it has made over the medium or longer term.

  • Well-founded and unresolved: and those are relatively rare.

Organizations tend to want to resolve the matter during the investigation stage. And if it's unresolved, then the Commissioner can in fact take the organization to court, or the complainant can.


Court Hearings

Court hearings are essentially where the enforcement rubber hits the road. Some people suggest that the Commissioner's lack of an ability to issue fines or issue orders is a bug with the legislation, and the process of going to court is somewhat cumbersome. I tend to think it's more of a feature that, when it comes to these sorts of measures, it's best reserved to a court, particularly where the resolution turns on the interpretation of the statute.

In these court hearings, a complainant—but not the organization—can start an application in our federal court for a hearing. And it is notable that the organization does not have any automatic ability to take the Commissioner to court to have the Commissioner’s report reviewed or appealed or overturned.


In fact, what happens in court is not an appeal at all; it's what's called a de novo proceeding. The court starts from scratch. The Commissioner might be a party with the cooperation of the complainant. It may in fact be the Commissioner who's carrying the bag on all of it in going to court, but it's not a review of the Commissioner's finding; they start from scratch. And this can only be done once the report from the Privacy Commissioner has been finalized and delivered. 

There is a way to get into court in the course of an investigation on something called a "judicial review" if there are jurisdictional issues or other things that might need to be considered by the court, but generally, it's only after the report of findings is issued.

Perhaps not surprisingly, the court has pretty broad remedial powers—that's what courts do. The courts are empowered to order the organization to correct its practices in order to comply with the provisions of the act. The court can also require the organization to publish a notice of the actions it has taken to correct its practices—so, I guess, a "double naming and shaming". And finally, the court can award damages, including damages for humiliation that the complainant might have suffered. 


It should be noted that there is no mechanism through PIPEDA for a class action to be brought within this process. You have an individual complainant, you have the Privacy Commissioner, and you have a case before a judge. 


Commissioner Audits

The Commissioner also has the power to audit organizations. 

The Commissioner can initiate one of these if, on reasonable grounds, the Commissioner believes the organization is contravening a provision of Division 1 or Schedule 1 of the act. And during the course of an audit, the Commissioner has pretty well the same powers as in an investigation: the Commissioner can take evidence, enforce attendance, and exercise the powers of a superior court of record. He can enter any premises other than a dwelling house and examine any records or extracts of records.


To my knowledge, the federal Privacy Commissioner of Canada has not initiated any audits of any private businesses. The Commissioner has, at least on one occasion, requested that the organization obtain a third-party audit and provide the report of that audit to the Commissioner. But the Commissioner would not be able to order that. 

As I understand it, the Commissioner doesn't feel that their office has sufficient resources in order to go about auditing organizations. One thing that they have asked Parliament for is a power to order audits of organizations and their information handling practices.


So the key “stick” that the Commissioner actually has is this power of publicity. Because within the act, the Commissioner is specifically empowered to make public any information related to the personal information management practices of an organization if the Commissioner considers that it's in the public interest to do so. 


Data Breach Notification

In 2015, Parliament amended PIPEDA to bring in data breach notification requirements. 

We now have data breach reporting to the Commissioner, data breach notification to the affected individuals, and a record-keeping requirement embedded in these amendments. 

It should also be noted that there may be a common law duty to notify affected individuals if their personal information has been compromised in a way that could affect them, particularly if giving them notice and warning would give them an opportunity to mitigate harm that could happen to them. 

But we're going to focus on the statutory requirements.

As with any data breach law, you always have to be very careful about the definition of what is a "breach". So what triggers this whole process? In PIPEDA, it is a "breach of security safeguards", which means the loss of, unauthorized access to, or unauthorized disclosure of personal information resulting from a breach of an organization's security safeguards that are referred to in Clause 4.7 of Schedule 1 (so that's Principle 7) or from a failure to establish those safeguards.

The notice and reporting obligations become triggered if there is a breach of security safeguards where it is reasonable in the circumstances to believe that the breach creates a real risk of significant harm to an individual. This particular provision talks about the personal information being under the control of an organization. So this says to me that the obligation to report to the Commissioner is only on the part of a data controller, not a data processor. 

As between any data processor and the controller, there should be a clear contract that says the processor will notify the controller so that the controller can report any data breach that they have to the Privacy Commissioner, and so that they can notify affected individuals.

Subsection 2 talks about what has to be in the report, and I'll get into that in just a moment. And Subsection 3 talks about notification to affected individuals. 

Again, the definition—what is a breach of security safeguards—refers back to Principle 7, “Safeguards”. And so what this principle requires is that an organization implement reasonable security safeguards to protect against a list of risks that is appropriate and commensurate with the sensitivity of the information at issue. So it's not unduly prescriptive; it's what's reasonable in the circumstances.

And again, this comes back to the concept of sensitivity. So we don't have strictly defined categories of what is sensitive personal information. Personal information can be more sensitive or it could be less sensitive depending upon the circumstances, depending upon the context in which the information is collected.

We do have some helpful guidance or wording in the CSA model code to help determine what information is more sensitive or less sensitive. Certainly information about somebody's private life, their intimate life, their family life, information about their race, ethnicity, religion, those sorts of things, financial information, health information would all be considered to be at the more sensitive end of the spectrum.


But somebody's name can be less sensitive or more sensitive depending upon the circumstances. So if your name appears on a list of people who attended a hockey game, for example, that's not particularly sensitive. If your name appears on a list of people who have upcoming appointments with a psychiatrist, that would be sensitive information, because the context in which that information appears tells you information about that person's private life, their mental life, their health conditions, or things like that.


Real Risk of Significant Harm

The triggers of notification and reporting relate to "real risk of significant harm". 


This is a two-part test: you look at the real risk and then you look at the possible significant harm. And real risk depends upon the sensitivity of the personal information involved and the probability that the personal information has been, is being, or will be misused. There may also be other prescribed factors, but none have been prescribed to date.

So you're looking at what's the likelihood that mischief will take place; what are the circumstances in which the breach took place? 

One example may be a lost hard drive, where there's no information to suggest that it was stolen by a bad guy. It was just misplaced. You don't have any real sense that mischief is afoot. That seems like a low risk of harm.

But if somebody breaks into your network and exfiltrates information, you already know that there's a bad guy involved, or a "threat actor" as the cool kids say. That tells you there’s a high risk that bad things are likely to happen. Or at least bad things are more likely to happen in a scenario like that.


The second part of the analysis is “significant harm”, and that requires you to ask “what could go wrong?” You ask “What could this information be used for? How could this information be abused?” 

The legislation specifically talks about certain kinds of harm being significant: “bodily harm, humiliation, damage to reputation or relationships, loss of employment, business, or professional opportunities, financial loss, identity theft, negative effects on the person's credit record, and damage to or loss of property.” 

It ties pretty closely to the concept of sensitivity. 

In some jurisdictions, reporting is based simply on the type of data involved – more often tied to risk of fraud and impersonation. 

The significant harms that are at play and have to be considered in Canadian privacy legislation are much broader than that, and relate to kind of “softer elements” of privacy and personal life.
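The two-part analysis above can be sketched as a simple triage helper. This is purely illustrative: the factor names, the three-level sensitivity scale, and the boolean logic are my own assumptions for the sketch, not anything prescribed by PIPEDA or the Commissioner, and a real assessment is a contextual judgment call.

```python
from dataclasses import dataclass

# The enumerated significant harms from the statute, as quoted above.
SIGNIFICANT_HARMS = {
    "bodily harm", "humiliation", "damage to reputation or relationships",
    "loss of employment, business or professional opportunities",
    "financial loss", "identity theft",
    "negative effects on credit record", "damage to or loss of property",
}

@dataclass
class BreachAssessment:
    sensitivity: str           # "low", "medium", or "high", judged in context
    likely_misuse: bool        # e.g. exfiltration by a threat actor vs. a lost drive
    potential_harms: set[str]  # which enumerated harms could plausibly result

def real_risk_of_significant_harm(a: BreachAssessment) -> bool:
    # Part 1: "real risk" turns on sensitivity and probability of misuse.
    real_risk = a.likely_misuse or a.sensitivity == "high"
    # Part 2: could the breach cause any of the enumerated significant harms?
    significant = bool(a.potential_harms & SIGNIFICANT_HARMS)
    return real_risk and significant
```

On this sketch, the misplaced hard drive with no sign of misuse comes out as no real risk of significant harm, while the network exfiltration of financial data comes out as reportable.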


Reporting Requirements

For a report to the Commissioner, the legislation prescribes what has to be contained in that report. Not surprisingly, the Privacy Commissioner of Canada has a form on his website that organizations can fill out to provide this information. 

They generally want to know: 

  • who was the organization, 

  • what was the nature of the information, 

  • what were the circumstances of the breach, 

  • when was it discovered, 

  • how many people are affected, 

  • what steps have you taken to stop the breach and to mitigate the risk of harm, and 

  • who is able to be a point of contact for the Privacy Commissioner.
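The prescribed contents listed above map naturally onto a simple record type. The field names below are my own shorthand for the sketch; the Commissioner's actual reporting form on the OPC website is the authoritative source for what must be provided.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class BreachReportToCommissioner:
    organization: str                # who was the organization
    nature_of_information: str       # what was the nature of the information
    circumstances: str               # circumstances of the breach
    date_discovered: date            # when was it discovered
    individuals_affected: int        # how many people are affected
    mitigation_steps: list[str] = field(default_factory=list)  # steps taken
    contact_person: str = ""         # point of contact for the Commissioner
```

A notice to affected individuals would carry much the same information, plus any steps the individual can take to mitigate harm to themselves.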


The Commissioner can initiate an investigation based on a report, but most of these are just received with thanks and that's largely the end of it. The notice to individuals is generally quite similar to the information that has to be provided to the Commissioner, though the organization is also required to tell the individual if there are steps that that individual could take to mitigate any harm to themselves. 

Record-Keeping Requirements

Now one additional thing that's notable is that there's also a “record-keeping” requirement. Regardless of whether there's a real risk of significant harm to the individual, every organization must create a record of every breach of security safeguards, no matter how trivial.


That record has to contain essentially the same sort of information that you would include in a report to the Commissioner. It should also include information to substantiate the conclusion that there was not a real risk of significant harm to the affected individuals, so that no report was required.


These records have to be kept by the organization for two years, and they have to be provided to the Privacy Commissioner of Canada on request. So this does create a discoverable paper trail in the event of litigation. 

It should also be noted that the Privacy Commissioner has in fact, of his own accord, conducted surveys of organizations requiring them to provide to his office and his investigators all of these breach records in order to make sure that they are being created and maintained appropriately.

Importantly, it's an offense to not create these records, and to not maintain them for the period of two years. It’s also an offense to not provide them to the Commissioner.
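The record-keeping duty described above, including the two-year retention period and the rationale for not reporting, can be sketched as follows. The class and field names are illustrative assumptions, and the day-count retention check is a simplification of how the two-year period would actually be measured.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Simplified two-year retention period (ignores leap-year edge cases).
RETENTION = timedelta(days=365 * 2)

@dataclass
class BreachRecord:
    created: date
    description: str
    reported_to_commissioner: bool
    # Where no report was made, the record should substantiate the conclusion
    # that there was no real risk of significant harm.
    no_rrosh_rationale: str = ""

def still_retained(record: BreachRecord, today: date) -> bool:
    # Records must be kept, and producible to the Commissioner on request,
    # for two years from creation.
    return today - record.created <= RETENTION
```

Keeping the no-report rationale in the record matters because, as noted above, the Commissioner can demand these records, and failing to create or keep them is an offense.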


Conclusion

So, I hope this has been a useful, informative overview about Canadian privacy law. As I said, it was mainly intended for a general audience of folks who may have a need to know the basics of Canadian privacy laws.