Sunday, September 21, 2025

The words “use” and “loss” in privacy laws may not mean what you think in a cyber-security incident



I want to talk about a recent decision from the Ontario Divisional Court that affirms the Information and Privacy Commissioner’s very expansive view of what counts as a “use” or “loss” of personal information under Ontario’s privacy laws. Spoiler alert: it probably doesn’t mean what you think it means.


This case came out of ransomware attacks on two organizations: the Hospital for Sick Children in Toronto, known as SickKids, and the Halton Children’s Aid Society. Neither organization’s investigation found that hackers had actually looked at, copied, or stolen personal information. But both were still found by the Information and Privacy Commissioner of Ontario—the IPC—to have breached their obligations to notify individuals. And when the case went to court, the judges deferred to the regulator. Let’s look at what happened.


In 2022, both SickKids and Halton were hit by separate ransomware attacks. If you’re not familiar, ransomware is malicious software that encrypts systems and data so that they can’t be accessed unless a ransom is paid to get the decryption key.


Here, the attackers encrypted the systems at the container level—think of it like changing the lock on a filing cabinet. The files inside were untouched, unviewed, and un-exfiltrated, but temporarily unavailable.
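To make the filing-cabinet analogy a bit more concrete, here is a minimal sketch of what container-level encryption looks like in code. It's a toy illustration using Python's cryptography library, not what the attackers actually ran, and the file name is invented:

    # Toy illustration of container-level encryption, assuming the
    # "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # the attacker keeps this key
    locker = Fernet(key)

    # The "container" (say, a disk image or archive) is read as one opaque
    # blob of bytes; the records inside are never opened, parsed or viewed.
    with open("records_volume.img", "rb") as f:
        container = f.read()

    with open("records_volume.img", "wb") as f:
        f.write(locker.encrypt(container))  # the whole cabinet is now locked

    # Until the key is applied with locker.decrypt(), authorized users can't
    # read anything, but the files inside were never individually touched.

Nothing in that sketch ever looks at what is inside the container, which is essentially the point SickKids and Halton made.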


Both SickKids and Halton promptly investigated, brought in cybersecurity experts, and concluded that there was no evidence of any data being accessed or stolen. They even notified the IPC, though they argued this was just a courtesy because the legal requirement to notify individuals wasn’t triggered. SickKids went further, posting public updates on its website and social media. But they didn’t include the mandatory line about the right to complain to the Information and Privacy Commissioner.


The IPC saw things differently. In 2024, it issued two decisions (SickKids, Halton CAS). It found that both organizations had experienced a privacy breach involving an unauthorized “use” and “loss” of personal information, which are the triggers for notification under the statutes. Because the information was “used” and “lost” in an unauthorized manner, the organizations were required to report to the Commissioner, to notify affected individuals, and to advise those individuals of their right to complain to the Commissioner.


Why? The IPC reasoned that encrypting the containers “handled” or “dealt with” the personal information inside them, making it inaccessible to authorized users. That, it said, was enough to count as a “use.” And because the information was unavailable for a period of time, that was also a “loss.”


It should be noted that encryption at the container level did not expose any personal information and did not create any sort of risk to the affected individuals once remedied.


For Halton, the IPC ordered notice to affected individuals—though by way of a website posting rather than direct notification. For SickKids, since it had already gone public, no remedial order was made.


Both SickKids and Halton challenged the IPC’s decisions in court. The Ontario Hospital Association even intervened to support them, arguing that this interpretation of “use” and “loss” would lead to pointless over-notification and compliance burdens.


Now, this is where what we lawyers call the “standard of review” becomes important. When a court reviews an administrative decision, like one from the IPC, it doesn’t just substitute its own view of the law. Under a framework established by the Supreme Court of Canada in a case called Vavilov, the default standard is “reasonableness.” That means the court will defer to the regulator’s decision so long as it is “reasonable”, meaning it is internally coherent, justified, and within the bounds of the law.


In other words, unless the regulator really went off the rails, the court won’t step in.


The Divisional Court—Justices Sachs, Lococo, and Kurke—dismissed both the judicial reviews and Halton’s appeal.


They held that the IPC had reasonably interpreted “use” to include encryption that denied authorized users access to information, even if no one else ever looked at it. They also upheld the IPC’s finding that this was a “loss” of information, again because of the temporary unavailability.


The Applicants had argued that notification should only be required where individuals’ privacy interests were actually affected—where there’s a real risk of harm, like theft or misuse. The Court rejected that. Ontario’s Personal Health Information Protection Act and Child, Youth and Family Services Act, 2017 don’t contain a “risk of significant harm” threshold. The statutes just say notify if information is “used” or “lost.” That’s the threshold.


The Court emphasized that words like “use” don’t necessarily carry their ordinary, common-sense or dictionary meaning. Instead, they take on the meaning given by the regulator, so long as that interpretation is reasonable.


I’ll be blunt: I don’t agree with this outcome. I understand why the Court deferred to the IPC, but I don’t agree with the IPC’s interpretation of those words. Encrypting a server at the container level is not, in any meaningful sense, a “use” of personal information. In any ordinary sense of the word, it was not “used”. Nobody viewed it, nobody copied it, and nobody exfiltrated it. The information was never actually touched. Ones and zeroes are moved around hard drives every minute of every day, and we don’t think of that as data being “used”. 


And calling this a “loss”? At best, it was a temporary disruption. To me, that’s not what “loss” means. Putting it on a thumb drive and misplacing it would be a “loss”. If there was a temporary power cut to their data centre and the information was not accessible for an hour, we would not think that there’s any real unauthorized “loss” of the data. There was no risk of identity theft, no misuse, no real risk of harm to the individuals involved.


Here’s where I think the problem lies: Ontario’s PHIPA and the CYFSA don’t have a risk-based threshold. They require notification if there’s a “use” or a “loss,” regardless of whether there’s any actual risk to the individual. Compare that to the federal private sector law, PIPEDA. Under PIPEDA, an organization has to notify affected individuals and report to the federal Privacy Commissioner only if there’s been a “breach of security safeguards” that creates a “real risk of significant harm”.


That’s a sensible threshold. It filters out situations like this one, where the systems were disrupted but no one’s privacy was actually at risk. In my view, the PIPEDA standard is better. It focuses on the individual’s actual risk, rather than forcing organizations to notify just because a breach happened. Without a risk filter, you end up with over-notification, unnecessary costs, and notice fatigue, which ultimately makes people take these notices less seriously.


Because Ontario’s statutes don’t include a “real risk of significant harm” threshold, regulators like the IPC are free to take a very broad approach to words like “use” and “loss.” And courts, applying the deferential reasonableness standard, are not going to interfere.


So what does this mean for organizations in Ontario? It means that a word like “use” doesn’t always mean what you think it means. Regulators may adopt broader, purposive interpretations—especially in the context of cyberattacks. And courts, applying the reasonableness standard, will generally defer to those interpretations.


It also reinforces to me that privacy law is not really a practice area that one can just dabble in. Words in the statutes don’t necessarily mean what you’d think they mean. They have meanings given to them by the regulators, and the courts will generally defer to that interpretation. 


The lesson is this: don’t rely on common-sense definitions of terms like “use,” “loss,” or “disclosure.” And don’t assume that the risk-based federal standard applies provincially. Look at how regulators are interpreting these terms in practice, because that’s what will stand up in court.


Sunday, September 14, 2025

Recording conversations -- using AI gadgets and otherwise -- and the law in Canada


One of the most common questions I get is about recording conversations. Can you do it? Is it legal? And maybe just as importantly … is it a good idea?


The answer is … complicated. And sometimes, even if it’s legal, it can be hostile or problematic.


A quick production note: I started a privacy law blog in 2004, and then started a YouTube channel at the end of 2021. To make this content as accessible as possible across multiple media, I’ve started a podcast that takes the audio and makes it available via Apple Podcasts, Spotify and the others. If you’d like privacy content while in the car or mowing the lawn, just look for “privacylawyer” in your favourite podcast app.


Now back to recording conversations and the law in Canada … 


I’ll try to break it down.


Before we get into the traditional scenarios, let’s start with something very new: AI wearables.


You might have heard of something called the “Humane Pin”. The Humane AI Pin was a screenless, AI-powered wearable device designed by the American startup Humane. They somehow thought it could replace smartphones. After shipping in April 2024 to overwhelmingly negative reviews, Humane was acquired by HP, which discontinued the device's service in February 2025. Famously, Marques Brownlee, an incredibly influential YouTuber and product reviewer, called it the worst product he’d reviewed. The Humane Pin flopped, but that wasn’t the end of “AI wearables.”





A more recent device is a thing called “Bee”. It’s a small wrist-worn gadget with microphones built in. The idea is kind of simple and a logical extension of a lot of what generative AI has to offer: You slap it on your wrist and it listens to what’s going on, it transcribes, and it helps you keep track of what’s said throughout your day. Think of it as a memory assistant. You can review conversations later, get reminders of “to-dos,” or even have it summarize meetings.





That sounds useful for productivity and accessibility. Imagine if English isn’t your first language, if you’re hard of hearing, if you have a bad memory, or if you simply want a perfect record of a complex meeting.


I’ve had relatives dealing with dementia, and something like this could be helpful, assistive technology when memories are fading and failing. 


The catch is that they’re “always listening.” They’re not just catching your thoughts — they’re catching the people around you, likely without their knowledge. And that can raise privacy concerns.


Now, the law hasn’t changed because of gadgets like these. The same rules apply (which I’ll get into in greater detail): if you’re a party to the conversation, recording isn’t automatically illegal. But the scale and permanence are different. Instead of someone taking really detailed notes, now you have a verbatim transcript — stored in the cloud, maybe analyzed by AI, and potentially vulnerable to misuse or breach.


You may recall Google Glass, originally launched in 2014. It was pretty cool and likely ahead of its time. What caused privacy regulators heartburn was that it had an integrated camera. Though it was not recording all the time, the regulators really wanted it to have a red light on the front so that people around would at least be aware of whether it was recording. These new wearables are even less conspicuous, and people whose voices can be captured likely have no idea that they’re being picked up.


Let’s dig into the law that applies to recording conversations in Canada, whether you do so on an old-timey reel-to-reel recorder, your smartphone or an AI wearable. And these rules are the same whether you’re face-to-face, on a phone call or in a Teams meeting.


If we’re talking about conversations that begin and end in Canada, the first place to look is the Criminal Code of Canada. Part VI of the Code is actually titled “Invasion of Privacy,” and it makes it illegal to intercept a private communication unless you have authorization — like a warrant — or unless one of the legitimate parties to the conversation consents.


The Criminal Code makes it a hybrid offence (meaning that it can be prosecuted either as an indictable offence or a summary offence) to “knowingly intercept a private communication”. The maximum penalty is up to five years in prison. There’s a saving provision which says the offence does not apply to “a person who has the consent to intercept, express or implied, of the originator of the private communication or of the person intended by the originator thereof to receive it”.


This is often called “one-party consent.” In simple terms, if you’re part of the conversation, you can record it. But if you’re not part of the conversation, you can’t secretly bug the room, leave a phone recording on the table, and walk away. That would be illegal eavesdropping.


You’ll note that consent can be implied. I haven’t seen any cases on this point, but I’d think having a loud conversation in a public place within earshot of others may be “implied consent” for the conversation to be “intercepted.” But I would not want to be the test case.


While you might see CCTV surveillance cameras all over the place, they should NOT be recording audio. That would likely be an illegal “interception of a private communication,” and I don’t think a posted sign will get the requisite consent. Many consumer-grade surveillance cameras that we’re now seeing all over the place also have the capability to record audio. If you’re using one of these cameras and it’s positioned where someone might be having a conversation, disable the audio collection.





So, if you’re a lawful participant in the conversation, the Criminal Code is not triggered. But if it’s someone else’s conversation, you can’t intercept it or record it. 


But that’s not the end of the story. In Canada, we also have privacy laws: PIPEDA federally, plus provincial laws in Alberta, BC, and Quebec.


Here’s the key: these laws don’t apply to purely personal or domestic activities. So if you’re recording a conversation for your own memory, or for journalistic purposes, or to make a record of something for your own personal purposes, you’re not subject to PIPEDA when you’re doing that. The same applies for the provincial privacy laws of Alberta, BC and Quebec. Those laws generally apply to businesses and “organizations”.


But if you’re recording for commercial purposes — say, recording customer service calls — then privacy law kicks in. In those cases, you generally need to tell the person and get their consent. You’ll notice most companies start their customer service lines with: “This call may be recorded for quality assurance and record keeping purposes.” That’s why. The idea is that you’re on notice that it will be recorded and if you stay on the line, your consent to the recording is implied.


(Technically, the company has to list all the purposes for the recording and I think many are not doing a full job. For example, you can’t just say it’s for “quality assurance” purposes when you’re also keeping the recordings for record keeping purposes.)


And there’s more: even if a recording doesn’t violate the Criminal Code or privacy statutes, you may still face claims under provincial privacy torts, or common law actions for unreasonable invasion of privacy. This is a bit of a stretch for a conversation that the recorder is lawfully a part of, but I can certainly see a possible claim if the conversation was clearly of a private nature and the recording is made public.


Now let’s shift to the workplace. This is where the issue gets interesting — and frankly, tricky.


I was at a labour and employment law conference not long ago, and almost everyone in the room had a story about employees secretly recording conversations. Sometimes they recorded meetings with their supervisors, sometimes with colleagues. And in every anecdote I heard, it was a case where the other party to that conversation would not have agreed to the recording and people got really upset when the recording became known.


If the employee is a lawful party to the conversation, it’s not illegal under the Criminal Code. But does that make it okay? Not really.


Secretly recording a conversation is almost always seen as a hostile act. It signals distrust, it poisons the relationship, and it creates a “gotcha” culture.


Employers are within their rights to regulate this. I’ve heard of cases where an employee steps out of a meeting, but leaves their phone in the room, recording. The employee may be wondering if their colleagues talk about them when they’re not around. Well, that’s eavesdropping and a crime. If they secretly record meetings they’re attending, it may not be criminal — but it can still be problematic, and it may be against workplace policy. Employers should have policies about this. 


Beyond ordinary workplaces, I’ve advised hospitals and health authorities about audio recording. Doctors and psychologists often feel uneasy when patients pull out a recorder. It can feel adversarial.


But sometimes recording is legitimate — even helpful. I remember when my father was diagnosed with cancer, my mother took detailed notes at every doctor’s appointment. There was so much information and all of it was overwhelming. If smartphones had been as common then as they are now, I would have suggested that she record these conversations, just to make sure she captured all the important information in such a stressful moment.


I’ve also spoken with psychologists whose patients wanted to record therapy sessions. At first, practitioners felt uneasy. But when we explored it, recording actually improved therapy in some cases: patients could revisit the conversation, reinforce insights, and strengthen the therapeutic relationship. Once this was understood, the psychologists were concerned about whether the patients would adequately protect the recordings of these very sensitive conversations. Once the client walks out, that’s not really on the psychologist, but they can talk to their clients about this. I think in this scenario, it’s important for everyone to be on the same page.


So it’s not always hostile. Sometimes it’s accommodation. Sometimes it’s simply practical.


There’s also a new one that’s come up a lot recently: AI-enabled recording and transcription services that are built into or added onto video calls. You’ve probably seen them in Zoom or Microsoft Teams — a little box pops up saying “Recording and transcription is on.” I’ve seen people send their little AI companions to calls that they can’t attend personally.


These tools can be fantastic. They create a really good record of meetings, which can help with minutes, accountability, or accessibility — for example, if someone in the meeting is hard of hearing, or if English isn’t their first language. I’ve used automatic captions in a number of cases because they can be very helpful, and this is enabled by AI “interception.” Automatic transcription can also let people go back and confirm exactly what was said.
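To give a sense of how little effort a verbatim record now takes, here’s a minimal sketch using the open-source Whisper speech-to-text model. It’s purely illustrative: the built-in Teams and Zoom features and the wearables discussed above use their own pipelines, and the file name here is made up.

    # Toy transcription sketch, assuming the open-source "openai-whisper"
    # package (pip install openai-whisper; it also needs ffmpeg installed).
    import whisper

    model = whisper.load_model("base")              # small model, runs locally
    result = model.transcribe("team_meeting.m4a")   # hypothetical recording

    print(result["text"])                           # the full verbatim transcript
    for segment in result["segments"]:              # timestamped, searchable pieces
        print(f'{segment["start"]:.1f}s: {segment["text"]}')

A few lines of code produce a timestamped, searchable transcript that can be stored and shared indefinitely, which is exactly the scale-and-permanence point.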


But they can also make people nervous. Suddenly, everything you say in a meeting is not just heard in the moment — it’s captured, stored, maybe even analyzed. That can change the vibe and how people participate.


It also creates a very detailed record that can be subject to discovery in litigation, which is its own risk.


From a legal standpoint, the rules haven’t really changed. If you’re part of the conversation, recording or transcribing isn’t illegal. In many ways, it’s not that different from someone taking very detailed and accurate notes. The real difference is scale and permanence: instead of one person’s notes, it’s a verbatim transcript that might live on a server indefinitely. It also creates a reliable record that is likely more credible in a hearing or a trial than any one person’s recollection or notes may be.


I think it’s a best practice for organizations to have a clear policy about the use of these tools. Decide when it’s appropriate, make sure everyone in the meeting knows what’s happening, and have rules around how those recordings and transcripts will be used, stored, and eventually deleted. I’m on the board of one volunteer organization, and it was decided that recording and AI transcription could be used but only to help the meeting’s secretary prepare the final minutes. Once the minutes were final, the recording and the transcript were deleted. The minutes are the official record.


And be careful about confidentiality. You may be fine with recording most of a meeting, but want to turn it off during any “in camera” period. And you’ll want to make sure that the recordings are securely stored in accordance with the company’s record-keeping policies.


Before I wrap up, I’ll mention two additional scenarios that are related to the legal system itself. First, under the rules of professional conduct for lawyers in Canada, there are requirements for a lawyer to notify a client or another legal practitioner of their intent to record a conversation. Rule 7.2-3 of the Law Society of Ontario’s Rules of Professional Conduct says:

“A lawyer shall not use any device to record a conversation between the lawyer and a client or another legal practitioner, even if lawful, without first informing the other person of the intention to do so.”

So this requires notice, not consent. Essentially, you can’t do it secretly. 


The second scenario related to the legal system is court hearings. As a general rule, you cannot record a court hearing without the permission of the presiding judge. I’ve been at hearings where reporters present are allowed to record, but the recordings can only be used to check the accuracy of their notes, and the recordings cannot be further disseminated or broadcast.


Monday, September 08, 2025

Privacylawyer content now available as a podcast

I'm a longtime podcast listener and I watch a lot of YouTube. For some time, I've wanted to be sure that anyone who may be interested in my original content can get it wherever they want it. (That's one reason why I generally post the text of my YouTube videos here on the blog. Some people like to read words rather than watch a talking head.)

From now on, my YouTube content will also be available as a podcast, so you can just subscribe in your podcast app of choice.

The standalone page for the podcast can be found here: Privacylawyer - Canadian privacy and technology law with David Fraser.


Ontario privacy finding: Hidden biometrics in on-campus vending machines


On August 27, 2025, the Information and Privacy Commissioner of Ontario released a revised finding against the University of Waterloo. The initial report was issued in June this year and I should have done an episode on it then. The case involved what looked like a pretty ordinary thing on campus — vending machines. Except these weren’t just any vending machines. They were “intelligent vending machines,” installed by a third-party service provider, and they secretly used biometric face detection technology.


That sounds creepy, and the University was found to have violated Ontario’s public sector privacy law. But it’s not as cut and dried as it sounds, and there are some interesting takeaways from the decision.


Nobody on campus was aware that these vending machines used face detection technology until one of the machines malfunctioned and flashed an error message on its screen — basically outing itself as running “FacialRecognition.App.exe.” Understandably, students complained. It got a lot of media coverage and some buzz on Reddit.


[Photo of a display showing an error message]



The Information and Privacy Commissioner of Ontario investigated.


At the outset, the University of Waterloo challenged whether the Commissioner even had jurisdiction here. The University argued that this wasn’t really about Ontario’s Freedom of Information and Protection of Privacy Act — instead, they said it was governed by the federal Personal Information Protection and Electronic Documents Act or PIPEDA. Their reasoning? Selling snacks through vending machines is a commercial activity. And PIPEDA applies to the collection, use and disclosure of personal information in the course of commercial activity. And that meant the federal law applied, not the provincial law.


They also argued that if the vending machines didn’t actually capture personal information — as the manufacturer claimed — then there was nothing for the Commissioner to investigate. And finally, Waterloo tried to limit its responsibility by pointing out that it never contracted for biometric collection in the first place. In their view, if the vendor went off and deployed face detection technology, that wasn’t their doing, they didn’t ask for it, and they should not be on the hook for it.


The Commissioner rejected all of those jurisdictional arguments. The decision emphasized that under FIPPA, Ontario institutions like universities are responsible for personal information collected by vendors operating on their behalf — even when those vendors are engaged in activities with a commercial character. The Commissioner leaned on the “double aspect” doctrine in our constitutional jurisprudence: both federal and provincial laws can apply at the same time. In other words, even if PIPEDA could cover some of the activity, that doesn’t oust FIPPA.


So the bottom line on the jurisdiction question was that the University of Waterloo couldn’t escape the Commissioner’s oversight just by pointing to federal law or saying “we didn’t know.” Once personal information was being collected on its campus by machines it authorized, the University was on the hook under FIPPA.


On the merits, the Commissioner concluded that the machines were capturing facial images, even if only for milliseconds. Not surprisingly, these facial images qualify as “personal information” under Ontario’s Freedom of Information and Protection of Privacy Act (FIPPA).
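For a sense of how a “detection only” system still handles personal information, here’s a generic sketch of the technique using OpenCV’s bundled face detector. It’s purely illustrative of how these systems typically work, not the vendor’s actual software.

    # Generic sketch of real-time face detection, assuming the
    # "opencv-python" package. Illustrative only; not the vendor's code.
    import cv2

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    camera = cv2.VideoCapture(0)          # the machine's built-in camera

    for _ in range(300):                  # a few seconds' worth of frames
        ok, frame = camera.read()         # a full facial image now sits in memory
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        # Downstream code would estimate age and gender from each detected face
        # and then move on. Nothing is written to disk, but a full image was
        # still captured, however briefly.

    camera.release()

Even though each frame is discarded milliseconds later, it exists in memory, and that momentary capture is what the Commissioner treated as a collection.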


The collection wasn’t authorized by law, wasn’t necessary for selling chips and chocolate bars, and no notice was given.


Therefore, in the IPC’s view, Waterloo had violated FIPPA.


In order to find Waterloo at fault, or in violation of FIPPA, the IPC asked and answered three questions:


The IPC asked: “Did Waterloo ‘collect’ personal information?” The Commissioner said yes. Even though the vendor claimed the system only processed images in real time, the machines captured full facial images in memory to estimate age and gender. That’s enough to count as a collection of personal information.


But really? Was it really Waterloo who “collected” personal information? Legally, yes. They had a vendor who was supplying goods and services on their behalf and the University is responsible for that. 


Then the IPC asked: “Was the collection compliant with FIPPA?” No. Section 38(2) of FIPPA says you can only collect personal information if it’s expressly authorized, needed for law enforcement, or necessary to carry out a lawful activity. Selling snacks doesn’t need biometric data. It might be “helpful” for marketing — but helpful isn’t the same as “necessary.” And also, no notice was given that personal information was being collected and why.


Finally, the IPC asked: “Did Waterloo have reasonable measures to protect personal information?” The Commissioner said they had decent contract clauses, but they fell down in procurement. They didn’t do the privacy risk assessment that could have flagged the biometric capability. That failure meant they didn’t exercise enough due diligence, and so they’re responsible.


Here’s where I think the finding is problematic. Waterloo had no knowledge of the biometric functionality. They weren’t using it, they didn’t ask for it, and their contract didn’t mention it. The vendor who responded to the RFP for vending machines apparently wasn’t aware of this functionality in some of the machines they provided. The manufacturer that supplied those machines had embedded this capability, and at the time nobody was aware of it.


Due diligence is usually assessed with reference to what a reasonably prudent person would have done in the same circumstances. Without the benefit of hindsight, I think the University met that standard. But they could have done better, so the University is still on the hook for a privacy violation. The finding seems to hold them to a higher standard, based on what we know now.


It could have been enough to just give them a gentle slap upside the head, saying it’s 2025 and we need to assume that anything that uses electricity – and particularly if it’s a “connected device” – has the potential to collect personal information. You need to check. Even vending machines. 


Think about what this means in practice:


Does every university, hospital, or government office now need to disassemble or reverse-engineer every piece of technology it procures? Almost. 


Do they need to anticipate hidden biometric features in a vending machine?


Or test for surveillance capabilities in every piece of software?


That’s a pretty heavy burden — one that goes far beyond what most organizations reasonably do. I guess the standard for reasonable diligence has to be raised.


Yes, we want institutions to take privacy seriously. Yes, procurement processes should involve risk assessments. But here, it feels like the University is being faulted for not uncovering something that was essentially hidden. I’m not sure we can fault them for not asking at the time whether a vending machine used biometrics. We know now, but I don’t think they should be expected to have known to ask back then. 


While the vendor was not in the cross-hairs of the IPC’s investigation, vendors need to be mindful. If you build a product with biometric capabilities, you should have to disclose it — clearly and up front. If it’s an “internet of things” connected thing, it should be clearly identified as such. There probably is a boilerplate term in contracts that puts the vendor on the hook if they cause the customer to violate any applicable law.


In the end, a finding of having violated FIPPA isn’t like a criminal charge. The IPC issued two recommendations, which the university agreed to implement. First was to review their policies to make sure that future collection of personal information complies with FIPPA. Second was to implement practices to carry out necessary due diligence to identify, assess and mitigate any potential risks to personal information throughout the entire procurement process, including during the planning, tendering, vendor selection, agreement management and termination phases.


There’s a lesson here for everyone: I guess it’s time to update all your procurement and vendor documentation to ask about any connected or biometric features. Ask detailed questions about every bit of gear being installed and fully understand its capabilities. And I’d include reps and warranties in my contracts allowing for the termination of agreements if there has been any misrepresentation about the possible collection of personal information.


One other thing to note: I think this would have gone differently for the university if the vendor hadn’t been the university’s service provider. As I mentioned before, the university is on the hook for all personal information collected by its service providers, whether or not it wanted the information collected in the first place. But if the university had structured the arrangement differently, it likely would have avoided that direct responsibility. For example, if the agreement was more like the bare rental of space for the placement of vending machines on campus, the element of custody or control of the data likely would not have been there. Imagine the university enters into a lease with Starbucks to put a coffee shop in the library atrium. In such a scenario, you wouldn’t really see the University as being responsible for Starbucks’ collection of personal information as part of the Starbucks Rewards loyalty program. Or maybe the privacy commissioner would take a different view? I kind of hope not.


In any event, there are more than a few lessons to learn from this finding.