Sunday, February 01, 2026

Privacy, Online Fraud, and What You Can Do About It

This past week, I was invited to speak with a client’s employees for International Data Privacy Day about “Privacy, Online Fraud, and What You Can Do About It”. There were a few hundred people on the call and I’m told it was well-received. So I’ve decided to take that presentation and turn it into an episode for this channel / podcast. 

In my practice, I get to do some really awesome things with really great people who bring innovative products to consumers and business customers. But I also see some pretty shady, horrible stuff that takes place online. 

I don’t know what the proportion is between people who are awesome and innovative, and people who are horrible and innovative. There are a lot of horrible people out there who are really crafty, and have found the internet and digital tech to be a great avenue to take your money from you. 

So what I want to do today is raise awareness about privacy, explain how it connects directly to online fraud, and walk through the kinds of scams and misuse of personal information I’m seeing most often. I’ll also spend some time on practical, concrete steps individuals can take to protect themselves.

What Is Privacy — and Why Does It Matter?

Privacy is a weird thing. It’s very personal, so it varies from person to person, and it’s also culturally informed. At the end of the day, privacy expectations vary enormously.

Different countries — and even different generations — have very different norms around personal information.

You’ll often hear people say that “young people don’t care about privacy”. That hasn’t been my experience at all.

Young people care deeply about privacy — but they’re very intentional about “audience”. I often point to examples like people having multiple social media accounts on the same platform: one Instagram account for close friends, another that’s more public and curated. That’s not a lack of concern for privacy; it’s a sophisticated understanding of it.

Privacy also depends on context. People post different things on LinkedIn than they do on Facebook, and different things again on Instagram or in a private group chat. The audience matters, and expectations matter.

Privacy as a Legal and Compliance Issue

In workplaces, privacy most often shows up as a legal and compliance issue.

In Canada, privacy laws differ by jurisdiction. In this context, jurisdiction can mean from province to province, between the provinces and the federal government, or between the health sector and other sectors. These laws generally share a common structure, though. Today I’ll focus on the privacy laws – federal and provincial – that govern what personal information businesses can collect, use or disclose, and the parameters around that.

Very broadly, these laws say that organizations may only collect, use, or disclose personal information:

  • for purposes that are reasonable;
  • that have been explained to the individual;
  • that the individual understands; and
  • that the individual has consented to, subject to limited exceptions.

Those purposes are critical. They are the thread that runs through privacy law.

Organizations can only collect information that is necessary for the stated purposes. They can only use it for those purposes. If they want to use it for some other purpose, they generally have to go back to the individual and obtain new consent.

And once the information is no longer needed, it should not be kept indefinitely. Retention has to be tied to legitimate purposes, such as legal requirements or risk management. If you don’t need it anymore for the “purposes”, get rid of it. 

Privacy laws also require organizations to protect personal information using safeguards appropriate to its sensitivity.

The more sensitive the information, the higher the expectation of protection.

A lot of privacy complaints and mistrust come down to expectations. People feel unsettled or “creeped out” when information is used in ways they didn’t expect, disclosed to people they didn’t expect, or not protected to the level they expected.

The law doesn’t talk about being “creeped out,” but that reaction is often a sign that expectations were not properly set or respected. It means you haven’t clearly identified the purposes and gotten the individual’s OK.

Privacy Harms

Canadian privacy law now explicitly recognizes a range of harms that can result from misuse of personal information, including:

  • bodily harm;
  • humiliation or embarrassment;
  • damage to reputation or relationships;
  • loss of employment, business or professional opportunities;
  • financial loss; 
  • identity theft;
  • negative impacts on credit records; and
  • damage to or loss of property.

Even information that seems relatively innocuous — like an email address — can create real risk when taken out of context.

For example, if someone obtains an email address from a particular organization, they know the individual has a relationship with that organization. That makes phishing attacks far more convincing. Say a bad guy gets a customer list for a business. The bad guy can send emails to those customers pretending to be someone from the business, asking them to “update their billing information” or something. Because the message looks like it comes from someone they know, the recipients are far more likely to act on it.

The Scale of Online Fraud

Online fraud is enormous in scale. The Canadian Anti-Fraud Centre received more than 33,000 reports in the first three quarters of last year, with more than half a billion dollars lost — and that’s almost certainly an understatement, because many victims never report what happened.

Fraud affects individuals, families, businesses, schools, hospitals, and governments. While large organizations often make headlines, individuals frequently suffer the most direct harm.

The Canadian Anti-Fraud Centre has an enormous catalog of the types of fraud that get reported and it’s worth taking a look at it to help understand all the different varieties of scams and frauds that are out there. 

As I said, the catalog is enormous, but I’ll go through some of the most common fraud types that I’m seeing and then provide some pointers on how to protect yourself.

Common Fraud Scenarios I’m Seeing

Email Account Intrusions and Business Email Compromise

One of the most common starting points is an email account compromise.

If someone gains access to your email, they often gain access to much more: documents, shared drives, financial systems, and internal platforms. There’s a lot in your email inbox that a bad guy can use to cause harm. 

In many cases, the harm comes from impersonating the person whose email account they’ve taken over. I’ve seen far too many cases where attackers simply watch — waiting for the right opportunity to inject themselves into a conversation.

I’ve seen situations where attackers impersonate trusted employees and send emails redirecting payments or requesting urgent action. Because the email comes from a real, trusted account, it’s very convincing.

Funds Transfer and Payroll Fraud

A classic example is funds transfer fraud. An attacker impersonates a vendor or employee and provides “updated” banking information. Payments or payroll deposits are quietly redirected to fraudulent accounts, sometimes for weeks before anyone notices.

I’ve seen many cases where a company is about to make a big sale, and some bad guy lurking in their system impersonates the salesperson or someone from finance and tells the customer that the payments for the widgets should be made to a particular bank account. That’s not the company’s actual bank account, but one that the bad guy has access to.

Another, smaller-scale example is a bad guy who knows that a person is employed with a particular company and gets the contact information for that company’s payroll department. One convincing email to HR, apparently from the employee, says “I’ve switched banks, so please have my direct deposit go to this new account ….” In the grand scheme of online fraud, that’s relatively small potatoes, but a bad guy who does that A LOT will make a lot of money. And leave a lot of frustrated employees in their wake.

Tech Support Scams

Many people have received calls claiming to be from Microsoft or their internet provider, warning about suspicious activity.

The goal is to convince the victim that they have to make changes to their computer, which really means installing remote access software. Once that happens, the attacker might as well be sitting at your computer. They can block you from using it, control it, access saved passwords, log into online banking, and move money.

I’ve seen cases where victims were locked out of their own computers while attackers logged into online banking and emptied accounts in real time. 

I’ve also seen cases where bad guys have used remote access software to just watch everything the person was doing on the computer, waiting until they can extract the most cash.

Grandparent and Family Emergency Scams

This increasingly common scam is one of the most heartless, reprehensible scams out there: it targets pensioners and exploits the best intentions of its victims.

Attackers impersonate grandchildren or other family members using information found on social media, claiming they’ve been injured, arrested, or stranded. They create urgency and demand immediate payment.

In some cases, AI is now being used to mimic actual voices, making these scams even more convincing. In other cases, the scammer pretends to be a lawyer, telling the grandparent or family member that a loved one has been arrested and requires immediate bail money. 

Fake Renewals, Refunds, and Overpayments

These include fake subscription renewals, refund scams, and overpayment schemes on online marketplaces.

In some cases, you’ll get a text message or an email saying that some service is about to renew for a huge sum, and “click here” to cancel the renewal. That click takes you to a fake site that is looking for your Amazon, Netflix or other online credentials. With that information, they can impersonate you and perhaps get at your payment information.

In an overpayment scam, for example, a buyer sends a cheque or bank draft for more than the agreed amount. They say it was a mistake or was intended to cover processing charges, and then ask the seller to refund the difference. By the time the seller’s bank discovers the cheque or bank draft is fake, the seller has already sent actual, non-refundable funds to the scammer.

Fraudulent Legal Notices

There’s a pretty common scam, usually via text message or email, that purports to be a legal notice saying that you have an outstanding fine or some other payment that needs to be made to a government authority. Last year I got one that purported to be from the “Ministry of Transportation of Canada” that said my licence would be revoked, my vehicle registrations would be blocked and there could be further action if I didn’t pay a parking ticket using the link below.

Some of them will refer to overdue taxes and penalties. Yeah, it’s just fraudulent. 

Ransomware and Data Theft

Ransomware attacks lock people and organizations out of their systems and often involve theft of sensitive data. Using a number of means, including malware-infected email attachments or the remote access software I discussed before, a bad guy gets into a computer system and installs software that encrypts all the data on the system or the network.

They will then blackmail the victim into paying some amount in bitcoin to get the decryption key.

Once companies realized that having good backups out of reach of the bad guys would mean they didn’t have to pay for the decryption key, the bad guys started to download all the data they could get their hands on before encrypting it. 

So even organizations with good backups may feel pressure to pay to prevent stolen data from being leaked or misused.

So many of the cybercrime stories that hit the headlines are ransomware, as these attacks will often shut down a business for days or even weeks before things get sorted out.

Sextortion Targeting Young People

In my book, if you go after pensioners and whatever savings they have, you’re an absolutely horrible person. But words fail me in describing the grotesque and vile people who target young people with sextortion.

In this type of crime, fraudsters create fake profiles on social media, discussion boards and dating websites. Impersonating the persona they’ve adopted, they reach out to people – often young people – and lure them into a relationship. Using a whole range of manipulative tactics, they coerce them into taking intimate images of themselves or performing sexual acts on camera. The victims sincerely believe that they are in a relationship with the bad guy. Then the fraudster records the session and threatens to send the image or video to other people – like family members or friends – unless the victim pays or provides more sexual content.

It preys upon young people’s vulnerability and exploits shame. Many victims have died by suicide while the horrible perpetrators move on to the next victim.

So What Can You Do to Protect Yourself?

There is no such thing as perfect security, but there are practical steps that can significantly reduce risk.

Try to Slow Down

Scammers rely on urgency. If someone is pushing you to act immediately, that alone should raise red flags. The bad guys want you to act immediately so you don’t have a chance to reflect on what’s really going on. Take a deep breath, step back and remember that very few things require an immediate decision – particularly for a situation that comes out of the blue. 

Verify Things Independently

Never rely on contact information provided in a suspicious email or call. Use a trusted number or address you already have.

For example, if your “bank” calls you and asks for information, hang up and call the number on the back of your bank card.

Never Let a Stranger Tell You to Do Anything on Your Computer or Your Phone

No legitimate company will cold call you and tell you to do anything on your computer or phone, or tell you to install software. If that happens, hang up.

Use Two-Factor Authentication

Two-factor authentication adds a critical layer of protection. Even if someone gets your password, they still can’t log in without the second factor. Some forms of two-factor authentication, like SMS codes, are not perfect, but any second factor is better than none.
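If you’re curious what that “second factor” actually is under the hood, here’s a minimal sketch in Python of how the rotating six-digit codes from an authenticator app are typically computed under the TOTP scheme (RFC 6238). The shared secret below is a made-up demo value; a real deployment relies on a vetted library and securely provisioned secrets.

import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32, interval=30, digits=6):
    # Your phone and the server hold the same shared secret, provisioned
    # once (usually via a QR code) when you enrol.
    key = base64.b32decode(shared_secret_b32)
    # Both sides count the current 30-second time step...
    counter = struct.pack(">Q", int(time.time()) // interval)
    # ...and mix it with the secret using an HMAC.
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # "Dynamic truncation" reduces the 20-byte HMAC to a short numeric code.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical demo secret

The point is that the code is derived from a secret that never travels with your password, and it expires every 30 seconds, which is why a phished password alone won’t get the bad guy in.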

Never Reuse Passwords

Credential theft is widespread. Reusing passwords means a low-risk breach can quickly turn into access to your bank or email.

A lot of companies are hacked on a regular basis, with the bad guys going after customer login information. If you use the same password to order a pizza as you use for your online banking, and that pizza place is hacked, bad guys will likely try that username and password in other places. A lot of the emails and texts you may get saying that your Netflix has expired are hoping that the login information you put into their fake website will also work on your bank.
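The practical fix is a unique, random password for every site, which in practice means using a password manager. As a rough sketch of what a manager does for you when it generates credentials, here are a few lines of Python using the standard library’s cryptographically secure random number generator; the site names are hypothetical.

import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def new_password(length=20):
    # secrets (unlike the random module) draws from the OS's secure RNG.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# A breach at one site exposes one password that works nowhere else.
for site in ["pizza-place.example", "bank.example"]:
    print(site, new_password())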

Be Careful About What You Share Publicly

Be mindful of what you post on social media, especially travel plans and family details. Police report that burglars use vacation posts to choose houses to break into. And the grandparent scams I mentioned before often rely on determining relationships between people from social media sites. 

Use a Family Verification Question

For family emergency scams, have a simple verification question that only real family members would know. I’ve told the seniors in my family that if they ever get a call purporting to be from any of my kids, they should ask them for the name of a particular animal that was important to them when they were growing up and that they’d never forget. That name is not on any social media site and anyone who can’t answer that question immediately is an impersonator. 

Never Buy Gift Cards at Someone Else’s Direction

One of the most common ways that scammers try to get “money” from victims is having them purchase gift cards. Once the cards are bought and the scammer gets the numbers from the back of the cards, they can use the value from those cards. Actual government agencies will never, ever, ever ask for payment via iTunes or Amazon gift cards. If anyone mentions any sort of a gift card, red flags should go up and alarm bells should start ringing. 

Set Alerts and Limits

You should set alerts on your financial accounts so you’re notified when money moves. Someone may have picked your wallet out of your pocket, or taken your credit card number. If you get alerted as soon as a transaction happens, you can immediately contact your bank to have it addressed.

And lower your daily transaction limits if you don’t need higher ones. Scammers who get into your online banking will use money transfer services to send money to other accounts. If you rarely Interac e-transfer more than a couple of hundred dollars per day, set your limit that low. If you have an unusually large payment to make, you can contact your bank to temporarily increase that limit. 

Closing

I think it’s worth taking some time to go into the “spam folder” in your email and your text messages to see some examples of the scam messages that were sent to you but that you never saw. It’ll help, I think, raise your awareness of and sensitivity to what is sketchy and should raise red flags in the future.

We live in a world where personal information is incredibly valuable and increasingly easy to misuse.

Unfortunately, there are a lot of really horrible people who are very creative in trying to separate you from your money.  Awareness, skepticism, and a few practical habits can reduce the risk of becoming a victim.

Sunday, January 18, 2026

BC Privacy Commissioner finds city's use of public surveillance cameras unlawful ... off to court

The Information and Privacy Commissioner of British Columbia just found that the City of Richmond in the BC lower mainland broke the law when it installed ultra-high-definition cameras in public places that capture faces, licence plates, and other identifiers. The Commissioner recommended that the City take down the cameras and delete all the recordings. The City said “nope”, so the Commissioner issued a binding order for them to stop collection, delete recordings, and disband the system.

This is definitely going to court. The City of Richmond issued a statement saying they think it is lawful and appropriate, and are looking to have the legality of all of this determined by the Courts. I think that’s a good thing … the more clarity we have from the superior courts on the interpretation of our privacy laws, the better.

I should note that while these laws are generally consistent from province to province, there is a big variation on how police services are delivered. Not all of the conclusions of this finding will necessarily be applicable in all other provinces or municipalities.

The City of Richmond in British Columbia began field testing its “Public Safety Camera System” – or PSCS – in early 2025 at the intersection of Minoru Boulevard and Granville Avenue.

The City’s stated sole purpose was to collect and disclose video footage to the RCMP to assist in identifying criminal suspects. That point—sole purpose—is central to the Commissioner’s analysis. There was no other rationale for the City of Richmond to put up these cameras in these locations. 

Operationally, the system involved multiple high-resolution cameras capturing:

  • licence plate numbers,
  • high-definition images of vehicle occupants,
  • pedestrians,
  • vehicle identifying features, and
  • location/time information tied to the intersection.

The cameras recorded continuously, and the City retained footage for 48 hours before deletion.

The field test included capabilities like licence plate recognition, pan-tilt-zoom variants, panoramic/multi-sensor configurations, and other detection features; the City confirmed it did not use facial recognition or built-in audio recording during field testing, though some cameras had those capabilities.

The City’s goal for the field test was essentially procurement-and-design: evaluate camera tech, decide numbers and placement, assess performance in different conditions, and confirm the PSCS could generate “usable” footage for law enforcement use later.

Under BC FIPPA, public bodies can’t collect personal information just because it seems useful. Collection has to fit within a listed authorization—most importantly here, s. 26.

The Commissioner situates that within a broader privacy-protective approach: privacy rights are treated as quasi-constitutional, and public bodies should only compromise privacy where there’s a compelling state interest.

Richmond relied on three possible authorities:

  • s. 26(b) (law enforcement),
  • s. 26(c) (authorized program/activity + necessity),
  • s. 26(e) (planning/evaluating a program/activity).

The Commissioner rejected all three, finding there simply was not legal authority for the collection of personal information – and without legal authority, there’s no lawful collection.

Richmond first said they were authorized under s. 26(b):

26          A public body may collect personal information only if

(b)          the information is collected for the purposes of law enforcement,

Note the use of the word “only”. Unless section 26 permits it, a public body cannot collect personal information.

Richmond’s theory was straightforward: the definition of “law enforcement” includes policing, and the PSCS was meant to support policing by helping identify suspects—so it’s “for law enforcement.” That was their alleged purpose.

The Commissioner accepted there’s a connection: the information might be used by the RCMP in policing. But the Commissioner says that’s not the end of the inquiry, because the collector is the City—and the City must have a law enforcement mandate of its own to rely on s. 26(b).

This is a recurring theme in Canadian privacy oversight: a public body can’t bootstrap a law-enforcement collection power merely because another entity with a law-enforcement mandate might find the data useful.

The City may pay for law enforcement, and it may provide resources to law enforcement, but it does not have a law enforcement mandate of its own.

The report describes three arguments Richmond advanced:

  1. RCMP mandate should be imputed to the City (because the City “provides” policing by contracting with the RCMP to do it).
  2. The City has a mandate to collect information for the RCMP.
  3. The City has its own independent mandate to police through the cameras.

The Commissioner’s response is pretty technical: under the Police Act and the Municipal Police Unit Agreement framework, municipalities fund and resource policing, but policing authority and law enforcement functions remain with the police, operating independently of the municipality.

He underscores that the Police Act sets out specific ways a municipality provides policing—such as establishing a municipal force or contracting with the RCMP—and “running a surveillance camera system for the police to use” is not among those statutory options.

He also points to the RCMP’s peace-officer functions and the Municipal Police Unit Agreement structure as vesting law enforcement responsibilities in the RCMP, not the City, and he reads the legislative set-up as intentionally keeping policing independent from municipal control.

So this argument advanced by the City failed: the City lacked the necessary law-enforcement mandate, so it could not collect under s. 26(b)—even if the police might later use the footage.

Section 26(c) is the classic “public body operational authority” provision: even if a statute doesn’t explicitly say “collect this kind of personal information,” a public body can collect personal information if it is both:

  • directly related to an authorized program or activity, and
  • necessary for that program or activity.

Richmond framed its program as essentially: an intersection camera program to identify criminal suspects following criminal incidents, pointing to broad service powers under its Community Charter.

But the Commissioner rejected that program characterization as “authorized,” because—again—of the Police Act structure. In the Commissioner’s view, “collecting evidence to identify criminals that the RCMP may rely on” isn’t part of how the City is authorized to provide policing services or resources under the Police Act framework.

So, the analysis fails at the first step: if the underlying “program” isn’t authorized, 26(c) can’t save the collection.

The report goes further and addresses necessity. The Commissioner emphasizes that the City’s record was limited in establishing that: (a) unresolved crime was “real, substantial, and pressing,” (b) existing measures were ineffective, or (c) less intrusive means had been seriously examined.

He characterizes the intrusion into privacy as “vast,” relative to the limited evidentiary foundation offered to justify necessity.

The net effect was that the Commissioner was not satisfied that the City demonstrated that mass capture of high-definition identifying footage from “tens of thousands of people each day” who had nothing to do with any sort of crime was necessary for the purported municipal activity.

Richmond also argued: the field test is just planning and evaluation, and s. 26(e) specifically authorizes collection necessary for planning/evaluating a program.

The Commissioner’s treatment of 26(e) is crisp: 26(e) presupposes that the program being planned or evaluated is otherwise authorized. You can plan or evaluate an authorized program, but if the program ain’t authorized, you can’t collect personal information to plan or evaluate it. Richmond itself largely accepted that proposition, and the Commissioner agreed.

Because the Commissioner had already found the PSCS was not authorized under 26(b) or 26(c), Richmond could not rely on 26(e) to do “planning” for an unauthorized program.

It makes sense that you can’t use the planning/evaluation clause as an end-run around the core requirement of lawful authority. Otherwise, everything under the sun could be said to be for planning or evaluation. 

FIPPA generally requires notice of purpose and authority when collecting personal information. Richmond tried to avoid notice by invoking s. 27(3)(a)—the idea that a notice is not required where the information is “about law enforcement.”

The Commissioner gives two responses.

First: the City couldn’t rely on law enforcement as its underlying authorization in the first place—so that alone undermined the attempt to rely on the exception.

Second, and more fact-specific: during the field testing phase, the City had confirmed it was not using the information for actual public safety or enforcement purposes—only to test and evaluate camera technical capabilities.

So even reading “about law enforcement” broadly, the Commissioner questioned whether the testing-phase collection qualified as “about law enforcement,” because it would not be used to enforce any laws, and there was no compelling enforcement purpose weighing against notice.

Richmond did install signs, but the Commissioner describes them as a “courtesy” and finds them legally inadequate.

The sign said “PUBLIC SAFETY CAMERA TESTING / FIELD TESTING IN PROGRESS AT THIS INTERSECTION” with contact information for the City’s Director of Transportation.

The Commissioner’s critique is twofold:

  1. Content deficiency: the signs did not clearly notify people that cameras were recording and collecting personal information, and did not include the purposes and legal authority for collection as required by s. 27(2).
  2. Placement deficiency: signage was vehicle-focused, placed for eastbound and westbound approaches, but did not address entries from other directions and did not notify pedestrians — despite the system’s capacity to capture pedestrians and pan widely, including multi-direction recording.

The Commissioner’s conclusion is direct: the City did not adequately notify individuals when it collected their personal information during field testing.

The report notes that disclosure under s. 33(2) generally depends on lawful collection in the first place, and because the collection lacked authority, the City could not rely on “consistent purpose” disclosure to the RCMP for evaluation.

On security, the Commissioner acknowledges the City described a reasonably robust set of safeguards, and that even where collection is unlawful, the City still has a duty under s. 30 to protect personal information in its custody or control.

But safeguards don’t cure lack of authority. They are necessary, not sufficient.

The OIPC’s recommendations were blunt:

  1. stop collecting personal information through the PSCS,
  2. delete all recordings, and
  3. disband the equipment.

Richmond advised it would not comply, and the Commissioner issued Order F26-01, requiring immediate compliance and written evidence of compliance by a specific date.

My takeaway is that the Commissioner’s reasoning is primarily structural and jurisdictional: the City tried to create a surveillance-for-police capability, but the Commissioner reads BC’s legal framework as drawing a hard line between municipal services and police law-enforcement authority—particularly when the activity is mass surveillance in public space.

If you’re a public body contemplating “pilot projects” with high-capability cameras, the report is a reminder that planning provisions don’t let you pilot an unauthorized program, and that “law enforcement adjacent” doesn’t equal “law enforcement authorized.”

For a public body, every collection of personal information has to be directly authorized by law. It’s worth noting that the “law enforcement” provision in most public sector privacy laws is wide enough to drive a truck through. The RCMP in Richmond could have paid for and put up those cameras all over the place, since they have a law enforcement mandate. 

Criminal courts are pretty adept at dealing with privacy invasions on a case-by-case basis using section 8 of the Charter, but we actually need a better way to evaluate proportionality, necessity and appropriateness when it comes to proposed police programs that hoover up data on hundreds, thousands or maybe millions of innocent people in the name of “law enforcement”.

It’ll be interesting to see how the courts deal with this.

 

Sunday, January 11, 2026

Canada's new proposed law to outlaw explicit deepfakes: Bill C-16

A number of years ago, the Parliament of Canada amended our Criminal Code to create a criminal offence related to the non-consensual distribution of intimate images. Last month, the Government of Canada proposed to further amend the Criminal Code to include so-called deepfake intimate images, and to create an offence of threatening to disclose intimate images, deepfake or not.

Section 162.1, which was added to the Criminal Code in 2014, makes it an offence to publish, distribute, transmit, sell, make available or advertise an intimate image without the consent of the individual depicted in the image.


And a number of provinces have put in place laws that create civil remedies for the non-consensual distribution of intimate images. 


With some variation, they generally have the same definition of “intimate image”, but they really haven’t kept up with the explosion of synthetic, AI-generated intimate imagery. Synthetic images are created by generative AI systems that “learn” what a person looks like and use that information to create new images that resemble that person.


If you look at the definition of what is an intimate image, it clearly presupposes that it is a recording of an actual person and that the actual person was involved, or at least present at its recording.


Criminal Code – 2014 Amendments

Definition of intimate image

(2) In this section, intimate image means a visual recording of a person made by any means including a photographic, film or video recording,

(a) in which the person is nude, is exposing his or her genital organs or anal region or her breasts or is engaged in explicit sexual activity;

(b) in respect of which, at the time of the recording, there were circumstances that gave rise to a reasonable expectation of privacy; and

(c) in respect of which the person depicted retains a reasonable expectation of privacy at the time the offence is committed.


It refers to an image or recording where the person “is exposing” certain body parts or “is engaging” in explicit sexual activity. It talks about “reasonable expectations of privacy” at the time the image is recorded and at the time the offence is committed.


This definition would not capture synthetic, “deep fake” intimate images.


The province of British Columbia has the newest provincial statute creating a civil framework for remedies for the non-consensual distribution of intimate images. The definition there is clearly modeled on the definition from the Criminal Code of Canada, but it also includes images where the person is depicted as engaging in a particular activity, regardless of whether the image has been altered. So the BC law would cover a situation where an actual image of a person has been altered, in any way, to depict the person as engaging in certain acts or nude.


Intimate Images Protection Act (British Columbia)

“intimate image” means a visual recording or visual simultaneous representation of an individual, whether or not the individual is identifiable and whether or not the image has been altered in any way, in which the individual is or is depicted as

(a) engaging in a sexual act,

(b) nude or nearly nude, or

(c) exposing the individual's genital organs, anal region or breasts,

and in relation to which the individual had a reasonable expectation of privacy at,

(d) in the case of a recording, the time the recording was made and, if distributed, the time of the distribution, and

(e) in the case of a simultaneous representation, the time the simultaneous representation occurred;

But this updated definition does not cover purely synthetic images, meaning images that are original and are not simply alterations of existing images. You may recall when AI-generated sexualized images of superstar Taylor Swift were posted online a little while ago. If I recall correctly, those were images that were not alterations of existing images but were rather the result of the AI image generator having ingested many, many images of Taylor Swift and “knowing” what she looks like. Those images would not have been captured by the current Criminal Code or even the newer definition in the British Columbia intimate images law.

In December, the Government of Canada introduced Bill C-16, called the “Protecting Victims Act”, that makes a number of amendments to Canadian criminal and related laws. Included in Bill C-16 are proposed amendments that will expand the existing definition of “intimate image” to include synthetic deepfakes. 


So here’s the new definition from Bill C-16, but it’s more helpful to compare it to the existing language of the Criminal Code. In the redline below, I’ve marked what’s being removed as [removed: …] and what’s being added as [added: …]. So we see in subsection (2)(a)(i), where it deals with what has to be in an image or recording to be considered an “intimate image” – they’ve removed “his or her genital organs or anal region or her breasts” and have replaced it with “their sexual organs”.


Bill C-16 Proposed amendments (redline)

Definition of intimate image
(2) In this section, intimate image means

(a) a visual recording of a person made by any means including a photographic, film or video recording,

(i) in which the person is nude, is exposing [removed: his or her genital organs or anal region or her breasts] [added: their sexual organs] or is engaged in explicit sexual activity,

(ii) in respect of which, at the time of the recording, there were circumstances that gave rise to a reasonable expectation of privacy, and

(iii) in respect of which the person depicted retains a reasonable expectation of privacy at the time the offence is committed; or

(b) [added: a visual representation that is made by any electronic or mechanical means and that shows an identifiable person who is depicted as nude, as exposing their sexual organs or as engaged in explicit sexual activity, if the depiction is likely to be mistaken for a visual recording of that person.]

That change to (2)(a)(i) doesn’t really do what it appears it will do, because they’ve also added a new defined term in section 150 of the Code, which defines specific terms for Part V of the Code, the Part that deals with sexual offences:

“sexual organs” include breasts that are or appear to be female breasts and the anal region; 


So this isn’t really a material change, as far as I can see. 


Subsection (2)(b) is where they scope in deepfakes:


(b) a visual representation that is made by any electronic or mechanical means and that shows an identifiable person who is depicted as nude, as exposing their sexual organs or as engaged in explicit sexual activity, if the depiction is likely to be mistaken for a visual recording of that person.


So this part doesn’t depend on the reasonable expectation of privacy in the image or recording. Which makes sense. An actual image of an actual person will be associated with that actual person’s expectations of what would happen with that image. A purely made-up image doesn’t have that. 


The key parts are that it is a visual representation that depicts the same sorts of body parts or conduct as in subsection (2)(a)(i), and that it has to be sufficiently realistic that the depiction “is likely to be mistaken for a visual recording of that person.”


It can’t be cartoon-ish or of such poor quality that you’d know immediately that it is not really that person. 


The scope of what counts as an intimate image could be broader, but we have to be mindful of freedom of expression. Unfortunately, as of January 10 when I’m recording this, no Charter statement related to Bill C-16 has been released by the Canadian Department of Justice. (It’s been more than a month since the Bill was tabled in Parliament, so it should have been released by now.)


The creation and distribution of intimate images is an expressive act and would be protected by the freedom of expression provision in section 2(b) of the Charter of Rights and Freedoms. But protected expression can be subject to “reasonable limits prescribed by law as can be demonstrably justified in a free and democratic society”. In order to justify the limitation, the goal of the legislature has to be pressing and substantial – i.e., is the objective sufficiently important to justify limiting a Charter right? And then there has to be proportionality between the objective and the means used to achieve it.


This has three parts: first, the limit must be rationally connected to the objective. There must be a causal link between the measure and the pressing and substantial objective.


Second, the limit must impair the right or freedom no more than is reasonably necessary to accomplish the objective. The government will be required to show that there are no less rights-impairing means of achieving the objective “in a real and substantial manner”. 


Third, there must be proportionality between the deleterious and salutary effects of the law.


I think there is some risk that this expanded definition of “intimate images” may be vulnerable to being struck down as an unjustified infringement of freedom of expression. The law doesn’t create an offence of creating explicit deepfakes for “personal use”, so that’s not an issue. Though there is a defence related to “serving the public good” in section 162.1(3), I don’t think it’s broad enough to address the potential use of deepfakes in political satire and commentary.


Whether you like it or not, and regardless of whether you think it’s tasteful, AI-generated imagery is being used to produce political commentary and satire. And yes, some of it does veer into depicting body parts and activities that would be captured by the new definition of “intimate image.” And you generally can’t outlaw expression just because it’s tasteless. At the end of the day, I don’t think the existing defence of “serving the public good” shields such political expression, which leaves this provision vulnerable to a successful Charter challenge.


Before I wrap up, I should note that the Protecting Victims Act also proposes to create an offence of threatening to publish or distribute an intimate image. This is the new section 162.1(1.1):


Everyone who, with the intent to intimidate or to be taken seriously, knowingly threatens to publish, distribute, transmit, sell, make available or advertise an intimate image of a person knowing that the person depicted in the image would not give their consent to that conduct, or being reckless as to whether or not that person would give their consent to that conduct, is guilty of an offence.


This goes beyond what is typically described as “sextortion”, where a bad guy threatens to release intimate images unless the victim provides more such images or money. “Sextortion” is captured by the general offence of extortion. This new offence would capture a threat even where the person making the threat doesn't expect or demand anything in return. It’s a reasonable addition to the criminal law.