Sunday, October 19, 2025

Canada's Privacy Regulators vs. TikTok: A critical overview


(This post is largely a transcript of the YouTube and podcast episode above.)

On September 23, 2025, the Federal Privacy Commissioner and his provincial counterparts in British Columbia, Alberta and Quebec issued a joint report of findings into TikTok.  This is a big one. It raises some interesting — and troubling — questions about jurisdiction, children’s privacy, reasonableness, consent, and what it actually means to protect privacy.

In my view, the Commissioners have imposed an almost impossible standard on TikTok — one that, ironically, could actually reduce privacy for users. Let’s unpack what they found, and why I think they may have gone too far.

I’ll note that the finding is more than thirty pages long, with almost two hundred paragraphs. This should be treated as an overview and not a deep dive into all of the minutiae. 

TikTok Pte. Ltd., a Singapore-based company owned by ByteDance, operates one of the most popular social-media platforms in the world. In Canada alone, about 14 million monthly users scroll, post, and engage on TikTok.

The investigation examined whether TikTok’s collection, use, and disclosure of personal information complied with PIPEDA, Quebec’s Private Sector Act, and the provincial privacy statutes of Alberta and B.C.

A key preliminary issue was jurisdiction.

The British Columbia Personal Information Protection Act is a bit quirky. It says:

Application

3    (1)    Subject to this section, this Act applies to every organization.

    (2)    This Act does not apply to the following:

        …

        (c)    the collection, use or disclosure of personal information, if the federal Act applies to the collection, use or disclosure of the personal information;

TikTok argued that, because of this, either the Federal Act or the British Columbia Act could apply, but not both.

In my view, the response to this argument by the Commissioners is facile. They said: 

[22] Privacy regulation is a matter of concurrent jurisdiction and an exercise of cooperative federalism, which is a core principle of modern division of powers jurisprudence that favours, where possible, the concurrent operation of statutes enacted by the federal and provincial levels of government. PIPA BC has been “designed to dovetail with federal laws” in its protection of quasi-constitutional privacy rights of British Columbians. The legislative history of the enactment of PIPEDA and PIPA BC and their interlocking structure support the interpretation that PIPEDA and PIPA BC operate together seamlessly.

[23] PIPA BC operates where PIPEDA does not, and vice versa. In cases such as the present, which involve a single organization operating across both jurisdictions with complex collection, use, and disclosure of personal information, both acts operate with an airtight seal to leave no gaps. An interpretation of s. 3(2)(c) that would deprive the OIPC BC of its authority in any circumstance the OPC also exercises authority is inconsistent with the interlocking schemes and offends the principle of cooperative federalism.

In my view, this has nothing to do with “cooperative federalism”. In this case, they’re waving their hands instead of engaging in helpful legal analysis. The British Columbia legislature chose to say that if PIPEDA applies, PIPA will not. This is not about constitutional law. The Commissioners could have articulated a much clearer and more straightforward response to this argument: TikTok collects personal information across Canada, in BC and elsewhere. PIPA applies to “the collection, use and disclosure of personal information that occurs within the Province of British Columbia” (that phrase comes from the federal regulation dealing with PIPEDA’s application in British Columbia). So in this joint investigation, BC’s PIPA applies to the personal information of British Columbians and PIPEDA applies to the personal information of individuals outside of British Columbia. They could have said that, but they didn’t. They did say it was about “overlapping protections” and not “silos”. I think this is incorrect. The British Columbia Act and the federal regulation clearly say: this is “the BC Commissioner’s silo”, and this is “the Federal Commissioner’s silo.”

So, the investigation moved forward jointly, setting the stage for three major questions:

  1. Were TikTok’s purposes appropriate?

  2. Was user consent valid and meaningful?

  3. Did TikTok meet its transparency obligations — especially in Quebec?

The first issue asked whether TikTok was collecting and using personal information — particularly from children — for an appropriate and legitimate purpose.

TikTok’s terms forbid users under 13 (14 in Quebec), but the Commissioners found its age-assurance tools were largely ineffective. The platform relied mainly on a simple birth-date gate at signup, plus moderation for accounts flagged by other users or automated scans.
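
To make concrete what that kind of birth-date gate amounts to, here is a minimal sketch in Python. This is my own illustration, not TikTok’s actual code, and the function name and the 13-year threshold are assumptions for the example. It also shows why the check is so easy to defeat: it relies entirely on whatever date the user chooses to type.

    from datetime import date

    def passes_birthdate_gate(claimed_birth_date: date, minimum_age: int = 13) -> bool:
        """Return True if the self-reported birth date meets the minimum age."""
        today = date.today()
        age = today.year - claimed_birth_date.year - (
            (today.month, today.day) < (claimed_birth_date.month, claimed_birth_date.day)
        )
        return age >= minimum_age

    # A child who truthfully enters a 2016 birth date is turned away (as of this writing)...
    print(passes_birthdate_gate(date(2016, 1, 15)))   # False
    # ...but nothing stops that same child from simply typing an earlier year.
    print(passes_birthdate_gate(date(2000, 1, 15)))   # True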

Through these measures, TikTok said that it removes around half a million underage Canadian accounts each year — but regulators concluded that many more likely go undetected.

It seems to me that terminating half a million accounts a year because they think the user may be underage is a pretty strong sign that the company is sincere in its desire to NOT have kids on its platform.

They also noted TikTok already uses sophisticated facial- and voice-analytics tools for other purposes, like moderating live streams or estimating audience demographics, but not to keep kids off the platform. The regulators want TikTok to re-purpose these tools for age estimation. 

The Commissioners found that TikTok was collecting sensitive information from children — including behavioral data and inferred interests — without a legitimate business need. In their view, that violates the “reasonable person” standard under PIPEDA s. 5(3) and the comparable provisions in the provincial laws.

This part makes my head hurt a bit. The regulators said:

[67] In light of the above (as summarized in paragraphs 64 to 66), we determined that TikTok has no legitimate need or bona fide business interest for its collection and use of the sensitive personal information of these underage users (in the context of PIPEDA, PIPA AB and PIPA BC), nor is this collection and use in support of a legitimate issue (in the context of Quebec’s Privacy Sector Act). It is therefore our finding, irrespective of TikTok’s assertion that this collection and use is unintentional, that TikTok’s purposes for collection and use of personal information of underage users are inappropriate, unreasonable, and illegitimate, and that TikTok contravened subsection 5(3) of the PIPEDA, section 4 of Quebec’s Private Sector Act, sections 11 and 14 of PIPA BC and sections 11 and 16 of PIPA AB.

It’s clear that TikTok does not want children on its platform and takes active steps to keep children off it. The regulators were clear that they didn’t think the measures taken were adequate, but I didn’t see them say that TikTok was insincere about this. So they found that TikTok’s purposes for collecting personal information from children were not reasonable.

But TikTok had no purpose for collecting personal information from children. If kids make it through the age-gate and don’t have their accounts deleted, TikTok still does not want that data. They essentially said: “Your collection of personal information that you do not want and do not try to get is unreasonable.” OK. I guess that’s their view.

The second issue focused on consent — whether TikTok obtained valid and meaningful consent for tracking, profiling, targeting, and content personalization.

The Commissioners said it did not.

They found that TikTok’s privacy policy and consent flows were too complex, too long, and lacked the up-front clarity needed for meaningful understanding. In particular:

  • Key information about what data was being collected and how it was used wasn’t presented prominently.

  • Important details were buried in linked documents.

  • The privacy policy was not available in French until the investigation began.

  • And users were never clearly told how their biometric information — facial and voice analytics — was used to infer characteristics like age and gender.

Even for adults, the Commissioners said consent wasn’t meaningful because users couldn’t reasonably understand the nature and consequences of TikTok’s data practices.

And for youth 13–17, TikTok mostly relied on the same communications used for adults — no simplified, age-appropriate explanations of how data is collected, used, or shared.

Under the Commissioners’ reasoning, because the data involved is often sensitive — revealing health, sexuality, or political views — TikTok needed express consent. They found the platform failed that standard.

[81] Additionally, while users might reasonably expect TikTok to track them while on the platform, which they can use for “free”, it is our determination that they would not reasonably expect that TikTok collects the wide array of specific data elements outlined earlier in this report or the many ways in which it uses that information to deliver targeted ads and personalize the content they are shown on the platform. Many of these practices are invisible to the user. They take place in the background, via complex technological tools such as computer vision and TikTok’s own machine learning algorithms, as the user engages with the platform. Where the collection or use of personal information falls outside of the reasonable expectations of an individual or what they would reasonably provide voluntarily, then the organization generally cannot rely upon implied or deemed consent.

The Commissioners’ reasoning is generally coherent, but I’m not sure that it directly leads to a requirement for express consent. Consent can be implied where the individual understands what information is being collected and how it will be used, and it makes sense to take into account whether the individual expects the collection and use.  The main issue here is that there was collection and use of information outside the reasonable expectations of the individual. TikTok’s data practices are part of its “secret sauce” that has led to its success. Following the reasoning of the Commissioners … if TikTok had better calibrated the expectations of its users, it could have relied on implied consent. 

The Quebec Commissioner took things even further. Under Quebec’s Private Sector Act, organizations must inform the person concerned before collecting personal information.

The CAI found TikTok failed to highlight key elements of its practices and was using technologies like computer vision and audio analytics to infer users’ demographics and interests without adequate disclosure.

The CAI also found that TikTok allowed features that could locate or profile users without an active opt-in action, violating Quebec’s rule that privacy settings must offer the highest level of privacy by default.

Now here’s where I think the Commissioners overreached.

They’re effectively holding TikTok — and by extension, every global digital platform — to a near-impossible standard.

First, on age verification: to exclude all under-13 users, TikTok would need to collect more information from everyone — things like government-issued ID or facial-age scans. That’s exactly the kind of sensitive biometric data that privacy regulators have previously warned against.

So in demanding “better” age assurance, the Commissioners are actually requiring more surveillance and more data collection from all users — adults and teens alike. It may be framed as “protecting the children”, but like so many age-assurance tools, it is actually privacy-invasive.

Second, on consent and transparency: privacy regulators have long said privacy policies are too long, too legalistic, and too hard to read. Yet here, they criticize TikTok for not providing enough detail — for not being even longer and more comprehensive.

So which is it? We can’t reasonably expect the average user to read a novel-length privacy policy, yet that’s what these findings effectively require.

And third, the Commissioners’ reasoning conflates complexity with opacity. TikTok’s algorithms and personalization systems are complex — that’s the nature of modern machine learning. Explaining them “in plain language” is a noble goal, but demanding a full technical manual risks burying users in noise.

In my view, this decision reflects a growing tension in privacy regulation: between idealism — the desire for perfect transparency and perfect protection — and pragmatism — the need for solutions that actually enhance user privacy without breaking the internet.

The regulators seem to be demanding a standard of perfection in a messy and complicated world. These laws can be applied reasonably and flexibly.

One final thing to note: The regulators say that information provided to support consent from young people (over the age of 13 or 14) has to be tailored to the cognitive level of those young people. That means it has to be subjective, in light of the individual. But the Privacy Commissioner of Canada is arguing in the Supreme Court of Canada against Facebook that consent is entirely objective, based on the fictional “reasonable person” (who is NOT a young person). They should pick a lane. 

So, where does this leave us? TikTok has agreed to implement many of the Commissioners’ recommendations — stronger age-assurance tools, better explanations, new teen-friendly materials, and improved consent flows.

But whether these measures will truly protect privacy — or simply demand more data from more users — is a question regulators and platforms alike still need to grapple with.

Sunday, September 21, 2025

The words “use” and “loss” in privacy laws may not mean what you think in a cyber-security incident



I want to talk about a recent decision from the Ontario Divisional Court that affirms the Information and Privacy Commissioner’s very expansive view of what counts as a “use” or “loss” of personal information under Ontario’s privacy laws. Spoiler alert: it probably doesn’t mean what you think it means.


This case came out of ransomware attacks on two organizations: the Hospital for Sick Children in Toronto, known as SickKids, and the Halton Children’s Aid Society. Neither organization’s investigation found that hackers had actually looked at, copied, or stolen personal information. But both were still found by the Information and Privacy Commissioner of Ontario—the IPC—to have breached their obligations to notify individuals. And when the case went to court, the judges deferred to the regulator. Let’s look at what happened.


In 2022, both SickKids and Halton were hit by separate ransomware attacks. If you’re not familiar, ransomware is malicious software that encrypts systems and data so that they can’t be accessed unless a ransom is paid to get the decryption key.


Here, the attackers encrypted the systems at the container level—think of it like changing the lock on a filing cabinet. The files inside were untouched, unviewed, and un-exfiltrated, but temporarily unavailable.
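
To put the filing-cabinet analogy into code, here is a small Python sketch. It is my own simplified illustration, not the attackers’ technique or either organization’s systems; the record names and contents are invented, and it assumes the third-party “cryptography” package is installed. A couple of records are bundled into a single container, the container alone is encrypted, and once access is restored the records are shown to be byte-for-byte untouched.

    import hashlib
    import io
    import tarfile
    from cryptography.fernet import Fernet

    # Two invented "records" standing in for files of personal information.
    records = {
        "chart_one.txt": b"Patient A, example record",
        "chart_two.txt": b"Patient B, example record",
    }
    fingerprints = {name: hashlib.sha256(data).hexdigest() for name, data in records.items()}

    # Bundle the records into one container -- the "filing cabinet".
    container = io.BytesIO()
    with tarfile.open(fileobj=container, mode="w") as tar:
        for name, data in records.items():
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))

    # Encrypt the container as a whole: the lock is changed, but no individual
    # record is read, copied, altered, or sent anywhere.
    key = Fernet.generate_key()
    locked_container = Fernet(key).encrypt(container.getvalue())

    # Once access is restored (here, by decrypting with the key), every record
    # is byte-for-byte identical to what it was before.
    unlocked = io.BytesIO(Fernet(key).decrypt(locked_container))
    with tarfile.open(fileobj=unlocked, mode="r") as tar:
        for name in records:
            recovered = tar.extractfile(name).read()
            assert hashlib.sha256(recovered).hexdigest() == fingerprints[name]
    print("Records untouched; only the container's lock changed.")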


Both SickKids and Halton promptly investigated, brought in cybersecurity experts, and concluded that there was no evidence of any data being accessed or stolen. They even notified the IPC, though they argued this was just a courtesy because the legal requirement to notify individuals wasn’t triggered. SickKids went further, posting public updates on its website and social media. But they didn’t include the mandatory line about the right to complain to the Information and Privacy Commissioner.


The IPC saw things differently. In 2024, it issued two decisions (SickKids, Halton CAS). It found that both organizations had experienced a privacy breach: the statutory trigger is an unauthorized “use” or an unauthorized “loss” of personal information, and the IPC concluded that the information had been “used” and “lost” in an unauthorized manner. That triggered the requirements to report to the Commissioner, to notify affected individuals, and to advise those individuals of their right to complain to the Commissioner.


Why? The IPC reasoned that encrypting the containers “handled” or “dealt with” the personal information inside them, making it inaccessible to authorized users. That, it said, was enough to count as a “use.” And because the information was unavailable for a period of time, that was also a “loss.”


It should be noted that encryption at the container level did not expose any personal information and, once remedied, did not create any sort of risk to the affected individuals.


For Halton, the IPC ordered notice to affected individuals—though by way of a website posting rather than direct notification. For SickKids, since it had already gone public, no remedial order was made.


Both SickKids and Halton challenged the IPC’s decisions in court. The Ontario Hospital Association even intervened to support them, arguing that this interpretation of “use” and “loss” would lead to pointless over-notification and compliance burdens.


Now, this is where what we lawyers call the “standard of review” becomes important. When a court reviews an administrative decision, like one from the IPC, it doesn’t just substitute its own view of the law. Under a framework established by the Supreme Court of Canada in a case called Vavilov, the default standard is “reasonableness.” That means the court will defer to the regulator’s decision so long as it is “reasonable”, meaning it is internally coherent, justified, and within the bounds of the law.


In other words, unless the regulator really went off the rails, the court won’t step in.


The Divisional Court—Judges Sachs, Lococo, and Kurke—dismissed both the judicial reviews and Halton’s appeal.


They held that the IPC had reasonably interpreted “use” to include encryption that denied authorized users access to information, even if no one else ever looked at it. They also upheld the IPC’s finding that this was a “loss” of information, again because of the temporary unavailability.


The Applicants had argued that notification should only be required where individuals’ privacy interests were actually affected—where there’s a real risk of harm, like theft or misuse. The Court rejected that. Ontario’s Personal Health Information Protection Act and Child, Youth and Family Services Act, 2017 don’t contain a “risk of significant harm” threshold. The statutes just say notify if information is “used” or “lost.” That’s the threshold.


The Court emphasized that words like “use” don’t necessarily carry their ordinary, common-sense or dictionary meaning. Instead, they take on the meaning given by the regulator, so long as that interpretation is reasonable.


I’ll be blunt: I don’t agree with this outcome. I understand why the Court deferred to the IPC, but I don’t agree with the IPC’s interpretation of those words. Encrypting a server at the container level is not, in any meaningful sense, a “use” of personal information. In any ordinary sense of the word, it was not “used”. Nobody viewed it, nobody copied it, and nobody exfiltrated it. The information was never actually touched. Ones and zeroes are moved around hard drives every minute of every day, and we don’t think of that as data being “used”. 


And calling this a “loss”? At best, it was a temporary disruption. To me, that’s not what “loss” means. Putting it on a thumb drive and misplacing it would be a “loss”. If there was a temporary power cut to their data centre and the information was not accessible for an hour, we would not think that there’s any real unauthorized “loss” of the data. There was no risk of identity theft, no misuse, no real risk of harm to the individuals involved.


Here’s where I think the problem lies: Ontario’s PHIPA and the CYFSA don’t have a risk-based threshold. They require notification if there’s a “use” or a “loss,” regardless of whether there’s any actual risk to the individual. Compare that to the federal private sector law, PIPEDA. Under PIPEDA, an organization has to notify affected individuals and report to the federal Privacy Commissioner only if there’s been a “breach of security safeguards” that creates a “real risk of significant harm”.


That’s a sensible threshold. It filters out situations like this one, where the systems were disrupted but no one’s privacy was actually at risk. In my view, the PIPEDA standard is better. It focuses on the individual’s actual risk, rather than forcing organizations to notify just because a breach happened. Without a risk filter, you end up with over-notification, unnecessary costs, and notice fatigue, which ultimately makes people take these notices less seriously.
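
For what it’s worth, the contrast between the two triggers can be reduced to a couple of lines of pseudo-logic. This is my own simplification in Python, not statutory language, and the function and parameter names are mine.

    # A rough sketch of the two notification triggers contrasted above.

    def ontario_notification_required(unauthorized_use: bool, loss: bool) -> bool:
        # PHIPA / CYFSA, as the IPC interprets them: any unauthorized "use" or
        # "loss" triggers notification, with no harm threshold at all.
        return unauthorized_use or loss

    def pipeda_notification_required(breach_of_safeguards: bool,
                                     real_risk_of_significant_harm: bool) -> bool:
        # PIPEDA: notification only where a breach of security safeguards
        # creates a real risk of significant harm to an individual.
        return breach_of_safeguards and real_risk_of_significant_harm

    # The SickKids / Halton fact pattern, roughly: container-level encryption,
    # no evidence anyone accessed the data, no real risk of harm.
    print(ontario_notification_required(unauthorized_use=True, loss=True))      # True
    print(pipeda_notification_required(breach_of_safeguards=True,
                                       real_risk_of_significant_harm=False))    # False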


Because Ontario’s statutes don’t include a “real risk of significant harm” threshold, regulators like the IPC are free to take a very broad approach to words like “use” and “loss.” And courts, applying the deferential reasonableness standard, are not going to interfere.


So what does this mean for organizations in Ontario? It means that a word like “use” doesn’t always mean what you think it means. Regulators may adopt broader, purposive interpretations—especially in the context of cyberattacks. And courts, applying the reasonableness standard, will generally defer to those interpretations.


It also reinforces to me that privacy law is not really a practice area that one can just dabble in. Words in the statutes don’t necessarily mean what you’d think they mean. They have meanings given to them by the regulators, and the courts will generally defer to that interpretation. 


The lesson is this: don’t rely on common-sense definitions of terms like “use,” “loss,” or “disclosure.” And don’t assume that the risk-based federal standard applies provincially. Look at how regulators are interpreting these terms in practice, because that’s what will stand up in court.


Sunday, September 14, 2025

Recording conversations -- using AI gadgets and otherwise -- and the law in Canada


One of the most common questions I get is about recording conversations. Can you do it? Is it legal? And maybe just as importantly … is it a good idea?


The answer is … complicated. And sometimes, even if it’s legal, it can be hostile or problematic.


A quick production note: I started a privacy law blog in 2004, and then started a YouTube channel at the end of 2021. In order to make this content as accessible as possible across multiple media, I’ve started a podcast that takes the audio and makes it available via Apple Podcasts, Spotify and others. If you’d like privacy content while in the car or mowing the lawn, just look for “privacylawyer” in your favourite podcast app.


Now back to recording conversations and the law in Canada … 


I’ll try to break it down.


Before we get into the traditional scenarios, let’s start with something very new: AI wearables.


You might have heard of something called the “Humane Pin”. The Humane AI Pin was a screenless, AI-powered wearable device designed by the American startup Humane. They somehow thought it could replace smartphones. It shipped in April 2024 to overwhelmingly negative reviews, and Humane was later acquired by HP, which discontinued the device’s service in February 2025. Famously, Marques Brownlee, an incredibly influential YouTuber and product reviewer, called it the worst product he’d ever reviewed. The Humane Pin flopped, but that wasn’t the end of “AI wearables.”





A more recent device is a thing called “Bee”. It’s a small wrist-worn gadget with microphones built in. The idea is kind of simple and a logical extension of a lot of what generative AI has to offer: You slap it on your wrist and it listens to what’s going on, it transcribes, and it helps you keep track of what’s said throughout your day. Think of it as a memory assistant. You can review conversations later, get reminders of “to-dos,” or even have it summarize meetings.





That sounds useful for productivity and accessibility. Imagine if English isn’t your first language, if you’re hard of hearing, if you have a bad memory, or if you simply want a perfect record of a complex meeting.


I’ve had relatives dealing with dementia, and something like this could be helpful, assistive technology when memories are fading and failing. 


The catch is that they’re “always listening.” They’re not just catching your thoughts — they’re catching the people around you, likely without their knowledge. And that can raise privacy concerns.


Now, the law hasn’t changed because of gadgets like these. The same rules apply (which I’ll get into in greater detail): if you’re a party to the conversation, recording isn’t automatically illegal. But the scale and permanence are different. Instead of someone taking really detailed notes, now you have a verbatim transcript — stored in the cloud, maybe analyzed by AI, and potentially vulnerable to misuse or breach.


You may recall Google Glass, originally launched in 2014. It was pretty cool and likely ahead of its time. What caused privacy regulators heartburn was that it had an integrated camera. Though it was not recording all the time, the regulators really wanted it to have a red light on the front so that people around would at least be aware of whether it was recording. These new wearables are even less conspicuous, and people whose voices can be captured likely have no knowledge that they’re being picked up.


Let’s dig into the law that applies to recording conversations in Canada, whether you do so on an old timey reel-to-reel recorder, your smartphone or an AI wearable. And these rules are the same whether you’re face-to-face, on a phone call or in a Teams meeting.


If we’re talking about conversations that begin and end in Canada, the first place to look is the Criminal Code of Canada. Part VI of the Code is actually titled “Invasion of Privacy,” and it makes it illegal to intercept a private communication unless you have authorization — like a warrant — or unless one of the legitimate parties to the conversation consents.


The Criminal Code makes it a hybrid offence (meaning that it can be prosecuted either as an indictable offence or a summary offence) to “knowingly intercept a private communication”. The maximum penalty is up to five years in prison. There’s a saving provision which says the offence does not apply to “a person who has the consent to intercept, express or implied, of the originator of the private communication or of the person intended by the originator thereof to receive it”.


This is often called “one-party consent.” In simple terms, if you’re part of the conversation, you can record it. But if you’re not part of the conversation, you can’t secretly bug the room, leave a phone recording on the table, and walk away. That would be illegal eavesdropping.


You’ll note that consent can be implied. I haven’t seen any cases on this point, but I’d think having a loud conversation in a public place within earshot of others may be “implied consent” for the conversation to be “intercepted.” But I would not want to be the test case.


While you might see CCTV surveillance cameras all over the place, they should NOT be recording audio. That would likely be an illegal “interception of a private communication”, and I don’t think a posted sign warning that audio is being recorded will get the requisite consent. Many consumer-grade surveillance cameras that we’re now seeing all over the place also have the capability to record audio. If you’re using one of these cameras and it’s positioned where someone might be having a conversation, disable the audio collection.





So, if you’re a lawful participant in the conversation, the Criminal Code is not triggered. But if it’s someone else’s conversation, you can’t intercept it or record it. 


But that’s not the end of the story. In Canada, we also have privacy laws: PIPEDA federally, plus provincial laws in Alberta, BC, and Quebec.


Here’s the key: these laws don’t apply to purely personal or domestic activities. So if you’re recording a conversation for your own memory, or for journalistic purposes, or to make a record of something for your own personal purposes, you’re not subject to PIPEDA when you’re doing that. The same applies for the provincial privacy laws of Alberta, BC and Quebec. Those laws generally apply to businesses and “organizations”.


But if you’re recording for commercial purposes — say, recording customer service calls — then privacy law kicks in. In those cases, you generally need to tell the person and get their consent. You’ll notice most companies start their customer service lines with: “This call may be recorded for quality assurance and record keeping purposes.” That’s why. The idea is that you’re on notice that it will be recorded and if you stay on the line, your consent to the recording is implied.


(Technically, the company has to list all the purposes for the recording and I think many are not doing a full job. For example, you can’t just say it’s for “quality assurance” purposes when you’re also keeping the recordings for record keeping purposes.)


And there’s more: even if a recording doesn’t violate the Criminal Code or privacy statutes, you may still face claims under provincial privacy torts, or common law actions for unreasonable invasion of privacy. This is a bit of a stretch for a conversation that the recorder is lawfully a part of, but I can certainly see a possible claim if the conversation was clearly of a private nature and the recording is made public.


Now let’s shift to the workplace. This is where the issue gets interesting — and frankly, tricky.


I was at a labour and employment law conference not long ago, and almost everyone in the room had a story about employees secretly recording conversations. Sometimes they recorded meetings with their supervisors, sometimes with colleagues. And in every anecdote I heard, it was a case where the other party to that conversation would not have agreed to the recording and people got really upset when the recording became known.


If the employee is a lawful party to the conversation, it’s not illegal under the Criminal Code. But does that make it okay? Not really.


Secretly recording a conversation is almost always seen as a hostile act. It signals distrust, it poisons the relationship, and it creates a “gotcha” culture.


Employers are within their rights to regulate this. I’ve heard of cases where an employee steps out of a meeting, but leaves their phone in the room, recording. The employee may be wondering if their colleagues talk about them when they’re not around. Well, that’s eavesdropping and a crime. If they secretly record meetings they’re attending, it may not be criminal — but it can still be problematic, and it may be against workplace policy. Employers should have policies about this. 


Beyond ordinary workplaces, I’ve advised hospitals and health authorities about audio recording. Doctors and psychologists often feel uneasy when patients pull out a recorder. It can feel adversarial.


But sometimes recording is legitimate — even helpful. I remember when my father was diagnosed with cancer, my mother took detailed notes at every doctor’s appointment. There was so much information and all of it was overwhelming. If smartphones had been as common then as they are now, I would have suggested that she record these conversations, just to make sure she captured all the important information in such a stressful moment.


I’ve also spoken with psychologists where patients wanted to record therapy sessions. At first, practitioners felt uneasy. But when we explored it, recording actually improved therapy in some cases: patients could revisit the conversation, reinforce insights, and strengthen the therapeutic relationship. Once this was understood, the psychologists were concerned about whether the patients would adequately protect the recordings of these very sensitive conversations. Once the client walks out, that’s not really on the psychologist, but they can talk to their clients about this. I think in this scenario, it’s important for everyone to be on the same page.


So it’s not always hostile. Sometimes it’s accommodation. Sometimes it’s simply practical.


There’s also a new one that’s come up a lot recently: AI-enabled recording and transcription services that are built into or added onto video calls. You’ve probably seen them in Zoom or Microsoft Teams — a little box pops up saying “Recording and transcription is on.” I’ve seen people send their little AI companions to calls that they can’t attend personally.


These tools can be fantastic. They create a really good record of meetings, which can help with minutes, accountability, or accessibility — for example, if someone in the meeting is hard of hearing, or if English isn’t their first language. I’ve used automatic captions in a number of cases because they can be very helpful, and this is enabled by AI “interception.” Automatic transcription can also let people go back and confirm exactly what was said.


But they can also make people nervous. Suddenly, everything you say in a meeting is not just heard in the moment — it’s captured, stored, maybe even analyzed. That can change the vibe and how people participate.


It also creates a very detailed record that can be subject to discovery in litigation, which is its own risk.


From a legal standpoint, the rules haven’t really changed. If you’re part of the conversation, recording or transcribing isn’t illegal. In many ways, it’s not that different from someone taking very detailed and accurate notes. The real difference is scale and permanence: instead of one person’s notes, it’s a verbatim transcript that might live on a server indefinitely. It also creates a reliable record that is likely more credible in a hearing or a trial than any one person’s recollection or notes may be.


I think it’s a best practice for organizations to have a clear policy about the use of these tools. Decide when it’s appropriate, make sure everyone in the meeting knows what’s happening, and have rules around how those recordings and transcripts will be used, stored, and eventually deleted. I’m on the board of one volunteer organization, and it was decided that recording and AI transcription could be used but only to help the meeting’s secretary prepare the final minutes. Once the minutes were final, the recording and the transcript were deleted. The minutes are the official record.


And be careful about confidentiality. You may be fine with recording most of a meeting, but want to turn it off during any “in camera” period. And you’ll want to make sure that the recordings are securely stored in accordance with the company’s record-keeping policies.


Before I wrap up, I’ll mention two additional scenarios that are related to the legal system itself. First, under the rules of professional conduct for lawyers in Canada, there are requirements for a lawyer to notify a client or another legal practitioner of their intent to record a conversation. Rule 7.2-3 from the Law Society of Ontario Rules of Professional Conduct says:

“A lawyer shall not use any device to record a conversation between the lawyer and a client or another legal practitioner, even if lawful, without first informing the other person of the intention to do so.”

So this requires notice, not consent. Essentially, you can’t do it secretly. 


The second scenario related to the legal system is court hearings. As a general rule, you cannot record a court hearing without the permission of the presiding judge. I’ve been at hearings where reporters present are allowed to record, but the recordings can only be used to check the accuracy of their notes, and the recordings cannot be further disseminated or broadcast.