Sunday, January 18, 2026

BC Privacy Commissioner finds city's use of public surveillance cameras unlawful ... off to court

The Information and Privacy Commissioner of British Columbia just found that the City of Richmond in BC's Lower Mainland broke the law when it installed ultra-high-definition cameras in public places that capture faces, licence plates, and other identifiers. The Commissioner recommended that the City take down the cameras and delete all the recordings. The City said “nope”, so the Commissioner issued a binding order requiring it to stop collection, delete recordings, and disband the system.

This is definitely going to court. The City of Richmond issued a statement saying it considers the system lawful and appropriate, and is looking to have the legality of all of this determined by the courts. I think that’s a good thing … the more clarity we have from the superior courts on the interpretation of our privacy laws, the better.

I should note that while these laws are generally consistent from province to province, there is big variation in how police services are delivered. Not all of the conclusions of this finding will necessarily be applicable in all other provinces or municipalities.

The City of Richmond in British Columbia began field testing its “Public Safety Camera System” – or PSCS – in early 2025 at the intersection of Minoru Boulevard and Granville Avenue.

The City’s stated sole purpose was to collect and disclose video footage to the RCMP to assist in identifying criminal suspects. That point—sole purpose—is central to the Commissioner’s analysis. There was no other rationale for the City of Richmond to put up these cameras in these locations. 

Operationally, the system involved multiple high-resolution cameras capturing:

  • licence plate numbers,
  • high-definition images of vehicle occupants,
  • pedestrians,
  • vehicle identifying features, and
  • location/time information tied to the intersection.

The cameras recorded continuously, and the City retained footage for 48 hours before deletion.

The field test included capabilities like licence plate recognition, pan-tilt-zoom variants, panoramic/multi-sensor configurations, and other detection features; the City confirmed it did not use facial recognition or built-in audio recording during field testing, though some cameras had those capabilities.

The City’s goal for the field test was essentially procurement-and-design: evaluate camera tech, decide numbers and placement, assess performance in different conditions, and confirm the PSCS could generate “usable” footage for law enforcement use later.

Under BC FIPPA, public bodies can’t collect personal information just because it seems useful. Collection has to fit within a listed authorization—most importantly here, s. 26.

The Commissioner situates that within a broader privacy-protective approach: privacy rights are treated as quasi-constitutional, and public bodies should only compromise privacy where there’s a compelling state interest.

Richmond relied on three possible authorities:

  • s. 26(b) (law enforcement),
  • s. 26(c) (authorized program/activity + necessity),
  • s. 26(e) (planning/evaluating a program/activity).

The Commissioner rejected all three, finding there simply was not legal authority for the collection of personal information – and without legal authority, there’s no lawful collection.

Richmond first said they were authorized under s. 26(b):

26          A public body may collect personal information only if

(b)          the information is collected for the purposes of law enforcement,

Note the use of the word “only”. Unless section 26 permits it, a public body cannot collect personal information.

Richmond’s theory was straightforward: the definition of “law enforcement” includes policing, and the PSCS was meant to support policing by helping identify suspects—so it’s “for law enforcement.” That was their alleged purpose.

The Commissioner accepted there’s a connection: the information might be used by the RCMP in policing. But the Commissioner says that’s not the end of the inquiry, because the collector is the City—and the City must have a law enforcement mandate of its own to rely on s. 26(b).

This is a recurring theme in Canadian privacy oversight: a public body can’t bootstrap a law-enforcement collection power merely because another entity with a law-enforcement mandate might find the data useful.

The City may pay for law enforcement, and it may provide resources to law enforcement, but it does not have a law enforcement mandate of its own.

The report describes three arguments Richmond advanced:

  1. RCMP mandate should be imputed to the City (because the City “provides” policing by contracting with the RCMP to do it).
  2. The City has a mandate to collect information for the RCMP.
  3. The City has its own independent mandate to police through the cameras.

The Commissioner’s response is pretty technical: under the Police Act and the Municipal Police Unit Agreement framework, municipalities fund and resource policing, but policing authority and law enforcement functions remain with the police, operating independently of the municipality.

He underscores that the Police Act sets out specific ways a municipality provides policing—such as establishing a municipal force or contracting with the RCMP—and “running a surveillance camera system for the police to use” is not among those statutory options.

He also points to the RCMP’s peace-officer functions and the Municipal Police Unit Agreement structure as vesting law enforcement responsibilities in the RCMP, not the City, and he reads the legislative set-up as intentionally keeping policing independent from municipal control.

So this argument advanced by the City failed: the City lacked the necessary law-enforcement mandate, so it could not collect under s. 26(b)—even if the police might later use the footage.

Section 26(c) is the classic “public body operational authority” provision: even if a statute doesn’t explicitly say “collect this kind of personal information,” a public body can collect personal information if it is both:

  • directly related to an authorized program or activity, and
  • necessary for that program or activity.

Richmond framed its program as essentially: an intersection camera program to identify criminal suspects following criminal incidents, pointing to broad service powers under its Community Charter.

But the Commissioner rejected that program characterization as “authorized,” because—again—of the Police Act structure. In the Commissioner’s view, “collecting evidence to identify criminals that the RCMP may rely on” isn’t part of how the City is authorized to provide policing services or resources under the Police Act framework.

So, the analysis fails at the first step: if the underlying “program” isn’t authorized, 26(c) can’t save the collection.

The report goes further and addresses necessity. The Commissioner emphasizes that the City’s record was limited in establishing that: (a) unresolved crime was “real, substantial, and pressing,” (b) existing measures were ineffective, or (c) less intrusive means had been seriously examined.

He characterizes the intrusion into privacy as “vast,” relative to the limited evidentiary foundation offered to justify necessity.

The net effect was that the Commissioner was not satisfied that the City demonstrated that mass capture of high-definition identifying footage from “tens of thousands of people each day” who had nothing to do with any sort of crime was necessary for the purported municipal activity.

Richmond also argued: the field test is just planning and evaluation, and s. 26(e) specifically authorizes collection necessary for planning/evaluating a program.

The Commissioner’s treatment of 26(e) is crisp: 26(e) presupposes that the program being planned or evaluated is otherwise authorized. You can plan or evaluate an authorized program, but if the program isn’t authorized, you can’t collect personal information to plan or evaluate it. Richmond itself largely accepted that proposition, and the Commissioner agreed.

Because the Commissioner had already found the PSCS was not authorized under 26(b) or 26(c), Richmond could not rely on 26(e) to do “planning” for an unauthorized program.

It makes sense that you can’t use the planning/evaluation clause as an end-run around the core requirement of lawful authority. Otherwise, everything under the sun could be said to be for planning or evaluation. 

FIPPA generally requires notice of purpose and authority when collecting personal information. Richmond tried to avoid notice by invoking s. 27(3)(a)—the idea that a notice is not required where the information is “about law enforcement.”

The Commissioner gives two responses.

First: the City couldn’t rely on law enforcement as its underlying authorization in the first place—so that alone undermined the attempt to rely on the exception.

Second, and more fact-specific: during the field testing phase, the City had confirmed it was not using the information for actual public safety or enforcement purposes—only to test and evaluate camera technical capabilities.

So even reading “about law enforcement” broadly, the Commissioner questioned whether the testing-phase collection qualified as “about law enforcement,” because it would not be used to enforce any laws, and there was no compelling enforcement purpose weighing against notice.

Richmond did install signs, but the Commissioner describes them as a “courtesy” and finds them legally inadequate.

The sign said “PUBLIC SAFETY CAMERA TESTING / FIELD TESTING IN PROGRESS AT THIS INTERSECTION” with contact information for the City’s Director of Transportation.

The Commissioner’s critique is twofold:

  1. Content deficiency: the signs did not clearly notify people that cameras were recording and collecting personal information, and did not include the purposes and legal authority for collection as required by s. 27(2).
  2. Placement deficiency: signage was vehicle-focused, placed for eastbound and westbound approaches, but did not address entries from other directions and did not notify pedestrians—despite the system’s capacity to capture pedestrians and pan widely, including multi-direction recording.

The Commissioner’s conclusion is direct: the City did not adequately notify individuals when it collected their personal information during field testing.

The report notes that disclosure under s. 33(2) generally depends on lawful collection in the first place, and because the collection lacked authority, the City could not rely on “consistent purpose” disclosure to the RCMP for evaluation.

On security, the Commissioner acknowledges the City described a reasonably robust set of safeguards, and that even where collection is unlawful, the City still has a duty under s. 30 to protect personal information in its custody or control.

But safeguards don’t cure lack of authority. They are necessary, not sufficient.

The OIPC’s recommendations were blunt:

  1. stop collecting personal information through the PSCS,
  2. delete all recordings, and
  3. dismantle the equipment.

Richmond advised it would not comply, and the Commissioner issued Order F26-01, requiring immediate compliance and written evidence of compliance by a specific date.

My takeaway is that the Commissioner’s reasoning is primarily structural and jurisdictional: the City tried to create a surveillance-for-police capability, but the Commissioner reads BC’s legal framework as drawing a hard line between municipal services and police law-enforcement authority—particularly when the activity is mass surveillance in public space.

If you’re a public body contemplating “pilot projects” with high-capability cameras, the report is a reminder that planning provisions don’t let you pilot an unauthorized program, and that “law enforcement adjacent” doesn’t equal “law enforcement authorized.”

For a public body, every collection of personal information has to be directly authorized by law. It’s worth noting that the “law enforcement” provision in most public sector privacy laws is wide enough to drive a truck through. The RCMP in Richmond could have paid for and put up those cameras all over the place, since they have a law enforcement mandate. 

Criminal courts are pretty adept at dealing with privacy invasions on a case-by-case basis using section 8 of the Charter, but we actually need a better way to evaluate proportionality, necessity and appropriateness when it comes to proposed police programs that hoover up data on hundreds, thousands or maybe millions of innocent people in the name of “law enforcement”.

It’ll be interesting to see how the courts deal with this.

 

Sunday, January 11, 2026

Canada's new proposed law to outlaw explicit deepfakes: Bill C-16

A number of years ago, the Parliament of Canada amended our Criminal Code to create a criminal offence related to the non-consensual distribution of intimate images. Last month, the Government of Canada proposed to further amend the Criminal Code to include so-called deepfake intimate images, and to create an offence of threatening to disclose intimate images, deepfake or not.

Section 162.1, which was added to the Criminal Code in 2014, makes it an offence to publish, distribute, transmit, sell, make available or advertise an intimate image without the consent of the individual depicted in the image.


And a number of provinces have put in place laws that create civil remedies for the non-consensual distribution of intimate images. 


With some variation, they generally have the same definition of “intimate image”, but they really haven’t kept up with the explosion of synthetic, AI-generated intimate imagery. Synthetic images are created by generative AI systems that can “learn” what a person looks like and use that information to create new images that resemble that person.


If you look at the definition of what is an intimate image, it clearly presupposes that it is a recording of an actual person and that the actual person was involved, or at least present at its recording.


Criminal Code – 2014 Amendments

Definition of intimate image

(2) In this section, intimate image means a visual recording of a person made by any means including a photographic, film or video recording,

(a) in which the person is nude, is exposing his or her genital organs or anal region or her breasts or is engaged in explicit sexual activity;

(b) in respect of which, at the time of the recording, there were circumstances that gave rise to a reasonable expectation of privacy; and

(c) in respect of which the person depicted retains a reasonable expectation of privacy at the time the offence is committed.


It refers to an image or recording where the person “is exposing” certain body parts or “is engaging” in explicit sexual activity. It talks about “reasonable expectations of privacy” at the time the image is recorded and at the time the offence is committed.


This definition would not capture synthetic, “deep fake” intimate images.


The province of British Columbia has the newest provincial statute creating a civil framework for remedies for the non-consensual distribution of intimate images. The definition there is clearly modelled on the definition from the Criminal Code of Canada, but it does include images where the person “is depicted as” engaging in a particular activity, regardless of whether the image has been altered. So the BC law would cover a situation where an actual image of a person has been altered, in any way, to depict the person as engaging in certain acts or as nude.


Intimate Images Protection Act (British Columbia)

“intimate image” means a visual recording or visual simultaneous representation of an individual, whether or not the individual is identifiable and whether or not the image has been altered in any way, in which the individual is or is depicted as

(a) engaging in a sexual act,

(b) nude or nearly nude, or

(c) exposing the individual's genital organs, anal region or breasts,

and in relation to which the individual had a reasonable expectation of privacy at,

(d) in the case of a recording, the time the recording was made and, if distributed, the time of the distribution, and

(e) in the case of a simultaneous representation, the time the simultaneous representation occurred;

But this updated definition does not cover purely synthetic images, meaning images that are original and are not simply alterations of existing images. You may recall a little while ago when AI generated sexualized images of superstar Taylor Swift were posted online. If I recall correctly, these were images that were not alterations of existing images but were rather the result of the AI image generator having ingested many, many images of Taylor Swift and “knowing” what she looks like. Those images would not have been captured by the current Criminal Code or even the newer definition in the British Columbia intimate images law. 

In December, the Government of Canada introduced Bill C-16, called the “Protecting Victims Act”, which makes a number of amendments to Canadian criminal and related laws. Included in Bill C-16 are proposed amendments that will expand the existing definition of “intimate image” to include synthetic deepfakes.


So here’s the new definition from Bill C-16, but it’s more helpful to compare it to the existing language of the Criminal Code. I’ve marked what’s being removed and what’s being added. So we see in subsection (2)(a)(i), where it deals with what has to be in an image or recording to be considered an “intimate image” – they’ve removed “his or her genital organs or anal region or her breasts” and have replaced it with “their sexual organs”.


Bill C-16 Proposed amendments (redline)

Definition of intimate image
(2) In this section, intimate image means

(a) a visual recording of a person made by any means including a photographic, film or video recording,

(i) in which the person is nude, is exposing [removed: his or her genital organs or anal region or her breasts] [added: their sexual organs] or is engaged in explicit sexual activity,

(ii) in respect of which, at the time of the recording, there were circumstances that gave rise to a reasonable expectation of privacy, and

(iii) in respect of which the person depicted retains a reasonable expectation of privacy at the time the offence is committed; or

(b) a visual representation that is made by any electronic or mechanical means and that shows an identifiable person who is depicted as nude, as exposing their sexual organs or as engaged in explicit sexual activity, if the depiction is likely to be mistaken for a visual recording of that person.

That change doesn’t really do what it appears to do, because they’ve added a new defined term in section 150 of the Code, which defines specific terms for Part V of the Code, the part dealing with sexual offences.

“sexual organs” include breasts that are or appear to be female breasts and the anal region; 


So this isn’t really a material change, as far as I can see. 


Subsection (2)(b) is where they scope in deepfakes:


(b) a visual representation that is made by any electronic or mechanical means and that shows an identifiable person who is depicted as nude, as exposing their sexual organs or as engaged in explicit sexual activity, if the depiction is likely to be mistaken for a visual recording of that person.


So this part doesn’t depend on the reasonable expectation of privacy in the image or recording. Which makes sense. An actual image of an actual person will be associated with that actual person’s expectations of what would happen with that image. A purely made-up image doesn’t have that. 


The key parts are that it is a visual representation that depicts the same sorts of body parts or conduct as in subsection (2)(a)(i), and that it has to be sufficiently realistic that the depiction “is likely to be mistaken for a visual recording of that person.”


It can’t be cartoon-ish or of such poor quality that you’d know immediately that it is not really that person. 


The scope of what could be an intimate image could be broader, but we have to be mindful of freedom of expression. Unfortunately, as of January 10 when I’m recording this, no Charter statement related to Bill C-16 has been released by the Canadian Department of Justice. (It’s been more than a month since the Bill was tabled in Parliament, so one should have been released by now.)


The creation and distribution of intimate images is an expressive act and would be protected by the freedom of expression provision in section 2(b) of the Charter of Rights and Freedoms. But protected expression can be subject to “such reasonable limits prescribed by law as can be demonstrably justified in a free and democratic society”. In order to justify the limitation, the goal of the legislature has to be pressing and substantial: is the objective sufficiently important to justify limiting a Charter right? And then there has to be proportionality between the objective and the means used to achieve it.


This has three parts: first, the limit must be rationally connected to the objective. There must be a causal link between the measure and the pressing and substantial objective.


Second, the limit must impair the right or freedom no more than is reasonably necessary to accomplish the objective. The government will be required to show that there are no less rights-impairing means of achieving the objective “in a real and substantial manner”. 


Third, there must be proportionality between the deleterious and salutary effects of the law.


I think there is some risk that this expanded definition of “intimate images” may be vulnerable to being struck down as an unjustified infringement of freedom of expression. The law doesn’t create an offence of creating explicit deepfakes for “personal use”, so that’s not an issue. Though there is a defence related to “serving the public good” in section 162.1(3), I don’t think it’s broad enough to address the potential use of deepfakes in political satire and commentary.


Whether you like it or not, and regardless of whether you think it’s tasteful, AI generated imagery is being used to produce political commentary and satire. And yes, some of it does veer into depicting body parts and activities that can be captured in the new definition of “intimate image.” And you generally can’t outlaw expression just because it’s tasteless. At the end of the day, I don’t think the existing defence of “serving the public good” shields such political expression, which leaves this provision vulnerable to a successful Charter challenge.


Before I wrap up, I should note that the Protecting Victims Act also proposes to create an offence of threatening to publish or distribute an intimate image. This is the new section 162.1(1.1):


Everyone who, with the intent to intimidate or to be taken seriously, knowingly threatens to publish, distribute, transmit, sell, make available or advertise an intimate image of a person knowing that the person depicted in the image would not give their consent to that conduct, or being reckless as to whether or not that person would give their consent to that conduct, is guilty of an offence.


This goes beyond what is typically described as “sextortion”, where a bad guy threatens to release intimate images in exchange for more such images or money. “Sextortion” is captured in the general offence of extortion. This new offence would capture a threat even where the person making the threat doesn't expect or demand anything in return. It’s a reasonable addition to the criminal law.


Sunday, December 14, 2025

When student data is hacked & stolen: Regulators’ lessons from the PowerSchool data breach


You may recall hearing about a significant cybersecurity breach affecting school boards from the end of last year and the beginning of this year: the PowerSchool cybersecurity incident. In the past little while, the Information and Privacy Commissioners of Ontario and Alberta have released their reports of findings into the incident. (Ontario, Alberta) There is some interesting stuff in there that I think is worth chatting about. I’ll note that the Information and Privacy Commissioner of Saskatchewan also released a report of findings in August of this year.


This incident affected millions of students, parents, and educators across the country, involved sensitive personal information, and raised questions about outsourcing, cybersecurity, and accountability in the public sector. But many of these issues will be relevant for the private sector. You simply can’t outsource accountability for protecting data. 


One thing to be sensitive to is that school boards are chronically under-resourced and have a very hard time meeting their privacy and security obligations under existing budgets. Personally, I think the provinces should take a much more active role in working with school boards and their contractors to ensure the highest levels of cybersecurity. We’re seeing that with health information systems, and should expect it for student information systems.


Before I get into the main point of this episode, one digression … At least in Canada, we always have to ask “what privacy law applies?” When the incident came to light, it was completely clear that at least in Canada, public school boards and their students were affected. Every school board is subject to a provincial public sector privacy law. So there’d be no doubt that a provincial Information and Privacy Commissioner would have jurisdiction to investigate the incident. 


It was interesting that the federal commissioner jumped in there. The federal commissioner has jurisdiction under the federal Personal Information Protection and Electronic Documents Act – or PIPEDA – where there is a collection, use and disclosure of personal information in the course of commercial activity. 


In this case, the collection, use and disclosure of personal information was in the course of the school boards’ non-commercial activities. The fact that the contractor – in this case, PowerSchool – is doing this for commercial purposes should not give the federal commissioner jurisdiction. While both public and private sector privacy laws contain obligations to safeguard data, they work in very different ways. Having a public sector privacy law apply to the school board while the private sector law applies to the contractor with respect to the same information is unworkable. The two categories of laws are simply not compatible.


Regardless, the federal Office of the Privacy Commissioner of Canada also started making inquiries with PowerSchool, first announced on January 20. On February 11, the federal Commissioner announced they had launched an investigation and noted that they’d remain in close contact with provincial and territorial counterparts on the incident. There was no mention of the basis for the Commissioner’s jurisdiction to investigate.


In July, the federal Commissioner announced that they’d negotiated a number of commitments from PowerSchool regarding cybersecurity upgrades, certification and monitoring. It’s worth noting that the letter of commitment specifically says that the Commissioner was of the view that PIPEDA applied in this case, that PowerSchool did not agree, and that PowerSchool reserved all of its rights. And rightly so. At some point, we really need a court to step in to clearly lay down the lines between privacy laws in Canada.


Thanks for indulging me for this digression. Now onto the main part of this episode, where I plan to cover four things:


  1. The background to PowerSchool and how schools use it

  2. What happened in the cyberattack

  3. What the Ontario and Alberta regulators investigated and concluded

  4. Where their findings align — and where they differ


PowerSchool is a major education technology provider. Across Canada, school boards use PowerSchool’s Student Information System, or SIS, to manage day-to-day education operations. That includes:


  • Student enrollment and attendance

  • Grades and academic records

  • Contact information for students and parents

  • Medical alerts, accommodations, and special needs

  • Staff and educator information


In many provinces, PowerSchool hosts this data in cloud-based environments that are largely operated and managed by PowerSchool itself, not the school boards. Of course, it’s done on the school boards’ behalf. 


Crucially, under Canadian privacy laws, school boards remain legally responsible for the personal information — even when a third-party service provider is handling it. That legal principle becomes very important once something goes wrong.


THE INCIDENT: WHAT HAPPENED?


The cyberattack was discovered in late December 2024.


Here’s what investigators from Ontario and Alberta determined happened. A threat actor obtained valid credentials belonging to a PowerSchool support contractor. These credentials had elevated privileges, meaning they could access PowerSchool’s internal support portal called PowerSource. PowerSource exists so that PowerSchool staff can provide remote technical support to customer school boards.


Once inside PowerSource with these credentials, the attacker was able to access multiple school boards’ Student Information System environments — effectively stepping through the front door.


From there, the attacker accessed student and educator databases and exfiltrated large volumes of personal information, copying data rather than encrypting systems. This was data theft, not ransomware in the traditional “systems locked” sense that we often see.


The compromised data included:


  • Names, dates of birth, and contact details

  • Student ID numbers

  • Medical alert fields and accommodations

  • Guardianship or custody indicators

  • Educator contact and employment details


In Alberta, some school boards reported that social insurance numbers were also involved.


After the breach was discovered, PowerSchool paid a ransom, reportedly believing that the data would be deleted. Months later, a second extortion attempt occurred involving the same stolen data — a reminder that once data is taken, control is largely lost.


Paying the ransom might have been a very sensible thing to do in the circumstances, but it’s no guarantee that the data’s been deleted and will never re-surface.


THE REGULATORY RESPONSE


Because public bodies were involved, this triggered investigations by provincial privacy regulators.


  • In Ontario, the Information and Privacy Commissioner investigated 20 school boards and the Ministry of Education.

  • In Alberta, the Information and Privacy Commissioner investigated 33 school boards, charter schools, and a francophone authority.


In both provinces, the regulators focused on a central legal question: Did the public bodies take reasonable measures to protect personal information, as required by their respective privacy statutes?



ONTARIO FINDINGS


The Ontario Commissioner concluded that, as a group, the institutions did not meet their statutory obligations under FIPPA and MFIPPA. That’s the Freedom of Information and Protection of Privacy Act and the Municipal Freedom of Information and Protection of Privacy Act. 


There were three major themes in the Ontario findings: (1) Inadequate Security Safeguards, (2) Weak Contracts and Oversight, and (3) Data Minimization and Retention Failures.  


1. Inadequate Security Safeguards


The Commissioner identified multiple weaknesses in the security safeguards:


  • PowerSchool accounts with excessive privileges - The rationale for the principle of least privilege is to reduce security and privacy risk by limiting the damage that can result from human error, malicious insiders, or compromised accounts. It should be implemented by granting users, systems, and applications only the specific permissions required to perform defined tasks, using restrictive defaults, role-based or task-based access controls, time-limited elevation of privileges, and regular access reviews to remove unnecessary or outdated permissions.


  • No mandatory multi-factor authentication for PowerSource access - This is one of the most important and effective measures for preventing unauthorized use of purloined credentials; see the sketch after this list for how least privilege and mandatory MFA can work together.


  • “Always-on” remote maintenance access - This meant that a bad guy with the credentials could get access to the maintenance tools at any time, rather than only at the invitation of individual school boards.


  • Short log-retention periods, which limited detection of earlier suspicious activity
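
To make the first two of those points a bit more concrete, here is a minimal sketch of how least-privilege, role-based access checks and mandatory multi-factor authentication can fit together for a remote support portal. It is purely illustrative: the role names, permissions and accounts are invented, and nothing here reflects how PowerSource actually works.

```python
# Hypothetical sketch only: least-privilege, role-based access checks plus
# mandatory MFA for a remote support portal. Role names, permissions and
# accounts are invented for illustration; this is not PowerSource code.
from dataclasses import dataclass

# Each role is granted only the specific permissions it needs (least privilege).
ROLE_PERMISSIONS = {
    "support_readonly": {"view_ticket", "view_system_status"},
    "support_engineer": {"view_ticket", "view_system_status", "open_remote_session"},
    "board_admin": {"view_ticket", "manage_board_users"},
}

@dataclass
class Account:
    username: str
    role: str
    mfa_verified: bool = False  # set True only after a second factor succeeds

def is_authorized(account: Account, permission: str) -> bool:
    """Allow an action only if the role grants it AND MFA has been completed."""
    allowed = ROLE_PERMISSIONS.get(account.role, set())
    if permission not in allowed:
        return False  # restrictive default: if the role doesn't grant it, deny
    if not account.mfa_verified:
        return False  # a stolen password alone is never enough
    return True

if __name__ == "__main__":
    contractor = Account(username="support_contractor_01", role="support_readonly")
    # Password-only login, read-only role: no remote session.
    print(is_authorized(contractor, "open_remote_session"))  # False
    contractor.mfa_verified = True
    # Even with MFA, the role still doesn't include remote sessions.
    print(is_authorized(contractor, "open_remote_session"))  # False
    engineer = Account(username="support_eng_07", role="support_engineer", mfa_verified=True)
    print(is_authorized(engineer, "open_remote_session"))  # True
```

The point isn't the specific code; it's that the controls the regulators flagged as missing (narrow roles, restrictive defaults, and a second factor before any elevated access) are simple to state and to enforce.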


While PowerSchool operated the systems, Ontario emphasized that the school boards were still responsible for ensuring reasonable protections were in place.


2. Weak Contracts and Oversight


Ontario was particularly critical of how school boards managed their contracts with PowerSchool.

Many agreements:


  • Lacked meaningful audit rights

  • Did not require detailed security reporting

  • Had limited enforcement mechanisms

  • Did not clearly address subcontractors


Even more importantly from the OIPC’s point of view, the boards did not actively monitor PowerSchool’s compliance with those contracts. In other words, contractual promises existed — but verification did not.


3. Data Minimization and Retention Failures


The Ontario Commissioner also focused on data minimization and retention failures. The Commissioner found that many institutions simply collected more data than necessary and retained data far longer than required.


That significantly amplified the harm when the breach occurred. If you don’t need it, don’t collect it. If you no longer need it, don’t retain it. If you fail on either one of those – or both! – you have more data that you have to protect and more data that’s affected if things go wrong.
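
As a purely illustrative sketch of the “don’t retain it” half of that principle, a scheduled purge along these lines is one common way retention limits get operationalized. The seven-year period and the record fields are assumptions of mine, not drawn from any board’s actual retention schedule.

```python
# Hypothetical sketch: periodically purge records that have passed an assumed
# retention period. The period and fields are invented for illustration and
# are not taken from any school board's actual retention schedule.
from datetime import datetime, timedelta, timezone

RETENTION_PERIOD = timedelta(days=7 * 365)  # assumed seven-year retention period

def within_retention(records: list[dict], now: datetime) -> list[dict]:
    """Keep only the records that are still within the retention period."""
    cutoff = now - RETENTION_PERIOD
    return [r for r in records if r["last_needed"] >= cutoff]

if __name__ == "__main__":
    records = [
        {"student_id": "A1", "last_needed": datetime(2015, 6, 30, tzinfo=timezone.utc)},
        {"student_id": "B2", "last_needed": datetime(2025, 9, 1, tzinfo=timezone.utc)},
    ]
    kept = within_retention(records, now=datetime(2026, 1, 1, tzinfo=timezone.utc))
    print([r["student_id"] for r in kept])  # ['B2'] - the stale record is purged
```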


The Ontario Commissioner also found that breach response planning was inconsistent and, in some cases, inadequate.


ALBERTA FINDINGS


Alberta reached a similar conclusion, but approached the analysis somewhat differently.


The Alberta Commissioner found that the educational bodies failed to comply with section 38 of the FOIP Act, which requires reasonable security arrangements.


Key aspects of Alberta’s findings included (1) a lack of internal policies and governance, (2) PowerSchool being treated as an “employee”, and (3) a strong emphasis on the sensitivity of children’s data.


1. Lack of Internal Policies and Governance


Alberta placed strong emphasis on the fact that many educational bodies did not have adequate privacy or vendor-management policies, could not point to documented procedures for assessing or monitoring service providers, and simply relied heavily on PowerSchool’s assurances.


From the Alberta OIPC’s perspective, privacy compliance begins with governance.


2. PowerSchool Treated as an “Employee”


One notable legal point in Alberta’s report is that, under FOIP, a service provider performing services for a public body is legally treated as an “employee”. That meant PowerSchool’s actions were attributed directly to the school boards themselves. This reinforces the idea that outsourcing does not reduce accountability.


3. Strong Emphasis on Sensitivity of Children’s Data


Alberta was particularly explicit in recognizing that children’s personal information is inherently highly sensitive, especially medical and accommodation data.


That sensitivity raised the expected standard of protection — and Alberta concluded that PowerSchool’s safeguards fell below that standard.


KEY DIFFERENCES BETWEEN ONTARIO AND ALBERTA 


The conclusions in Ontario and Alberta were broadly aligned, but there are some differences in emphasis.


1. Governance vs. Contracting Focus


  • Ontario focused heavily on contracts, oversight, and vendor management failures.

  • Alberta focused more on internal policies, governance frameworks, and statutory accountability.


2. Sensitivity of Information


  • Alberta placed stronger, more explicit weight on the heightened sensitivity of children’s data.

  • Ontario addressed sensitivity, but framed much of the analysis around risk amplification through retention and over-collection.


Despite these differences, both regulators reached the same core conclusion: The public bodies did not meet their legal obligations, and outsourcing did not excuse that failure.


BROADER LESSONS


There are several broader takeaways from these investigations.


First, outsourcing does not outsource accountability. Public bodies remain legally responsible for personal information, regardless of who hosts it. The same is true for outsourcing in the private sector: accountability does not shift under Canadian privacy laws.


Second, contracts alone are not enough: Oversight, auditing, and verification matter.


Third, data minimization is a security control: Retaining unnecessary data simply increases breach impact.


And finally, children’s data demands higher standards. Regulators are very clear on that point.


CONCLUSION 


The PowerSchool incident may be just another cybersecurity story, but like most such stories there are lessons to be learned or reminders of things we should already know.


It’s a case study in public-sector procurement, privacy governance, and risk management.


Ontario and Alberta both sent a clear message: If you rely on third-party platforms to manage sensitive data — especially data about children — you must actively govern those relationships, not simply trust them.


In the background of all of this is the simple fact that most school boards are chronically under-resourced and have a very hard time meeting their privacy and security obligations under existing budgets. This is particularly the case for smaller – often rural – school boards. The same can be said for smaller municipalities. Personally, I think the provinces should take a much more active role in working with school boards and their contractors to ensure the highest levels of cybersecurity. For a system as widely used as PowerSchool, provincial departments of education should enter into master services agreements with all the appropriate security terms, and they should actively oversee at least the security and audit portions of the delivery of services.


One final thing to note – just because school boards are 100% accountable to their students for personal information they collect, use and disclose doesn’t mean that PowerSchool is necessarily off the hook. PowerSchool – and any contractor for that matter – can be liable to their customers for any contractual failings when it comes to safeguarding personal information. And depending on the contract terms, the contractor may be liable for the cost of any lawsuits that students and parents might bring against the school boards. And I can imagine some more extreme cases where students, parents and teachers could have a viable claim directly against PowerSchool. I understand there is one putative class action pending, started by a Calgary law firm. And this would be in addition to the at least 55 class action lawsuits filed in the United States by American plaintiffs.