Monday, April 18, 2022

Video: Canada's Anti-Spam Law and the installation of software

Canada’s anti-spam law is about much more than just spam. It also regulates the installation of software. Like the rest of the law, it is complicated and convoluted and has significant penalties. If you’re a software developer or an IT admin, you definitely need to know about this.

So we’re talking about Canada’s anti-spam law. The official title is much longer, and it also includes two sets of regulations to make it more complicated.

Despite the snappy title that most of us use – Canada’s Anti-Spam Law – it is about more than just spam. It has often-overlooked provisions that make it illegal to install software on another person’s computer – or cause it to be installed – without their consent.

It was clearly put into the law to go after bad stuff like malware, viruses, rootkits, trojans, malware bundled with legitimate software and botnets. But it is not just limited to malevolent software. It potentially affects a huge range of software.

So here is the general rule from Section 8 of the Act:

8. (1) A person must not, in the course of a commercial activity, install or cause to be installed a computer program on any other person’s computer system or, having so installed or caused to be installed a computer program, cause an electronic message to be sent from that computer system, unless

(a) the person has obtained the express consent of the owner or an authorized user of the computer system and complies with subsection 11(5); or

(b) the person is acting in accordance with a court order.

Let’s break that down. The first part is that it has to be part of a commercial activity. I’m not sure they meant to let people off the hook if they’re doing it for fun and giggles. The “commercial activity” part is likely there so that the government can say this is justified under the federal “general trade and commerce power”.

They could have used the federal criminal law jurisdiction, but then they’d be subject to the full due process and fairness requirements of the Canadian Charter of Rights and Freedoms, and the government did not want to do that. They’d rather it be regulatory and subject to much lower scrutiny.

Then it says you can’t install – or cause to be installed – a computer program on another person’s computer without the express consent of the owner or an authorized user of the computer. (The definition of “computer system” would include desktops, laptops, smartphones, routers and appliances.)

The express consent has to be obtained in the manner set out in the Act, and I’ll discuss that later.

It additionally prohibits installing a computer program on another’s computer and then causing it to send electronic messages. This makes the creation of botnets for sending spam extra bad.

The definition of the term “computer program” is taken from the Criminal Code of Canada:

“computer program” means computer data representing instructions or statements that, when executed in a computer system, causes the computer system to perform a function; (programme d’ordinateur)

In addition to defined terms, there are some key ideas and terms in the Act that are not well-understood.

It talks about “installing” a computer program, but what that is has not been defined in the legislation and the CRTC hasn’t provided any helpful guidance.

I wouldn’t think that running malware once on someone’s system for a malevolent purpose would be captured in the definition of “install”, though it likely is the criminal offence of mischief in relation to data.

What about downloading source code that is not yet compiled? Or then compiling it?

It is certainly possible to load up software and have it ready to execute without being conventionally “installed”. Does it have to be permanent? Or show up in your installed applications directory?

I don’t know.

There’s also the question of who is an owner or an authorized user of a computer system.

If a computer is leased, the leasing company likely owns it – and we’ve certainly seen reports and investigations of spyware and intrusive software installed on rented and leased laptops.

My internet service provider owns my cable modem, so it’s apparently ok if they install malware on it.

For authorized users, it means any person who is authorized by the owner to use the computer system. Interestingly, it is not limited by the scope of the authorization. It seems to be binary. Either you are authorized or you are not.

There are some scenarios to think about when considering owners and authorized users.

For example, if a company pre-installs software on a device at the factory or before ownership transfers to the end customer, that company is the owner of the device and can install whatever they like on it.

Many companies issue devices like laptops and smartphones to employees. Those employers own the devices and can install any software on them.

But increasingly, employees are using devices that they own for work-related purposes, and employers may have a legitimate interest in installing mobile device management and security software on those devices. Unless there’s a clear agreement that the employer gets to do so, they may find themselves to be offside the law.

So, in short, it is prohibited to do any of these things without the express consent of the owner or authorized user:

  • (a) install a computer program of any kind;
  • (b) cause a computer program of any kind to be installed, such as hiding or bundling additional software in an installer that the owner or authorized user has installed. We sometimes see this when downloading freeware or shareware, and the installer includes other software that the user didn’t ask for; or
  • (c) cause such a program that has been installed to send electronic messages after installation.

Of course, someone who is the owner or authorized user of the particular device can put whatever software they want on the device. This only covers installation by people who are not the owner or the authorized user of the device.

There are some exceptions that people should be aware of.

It is common to install software and to have it automatically update. This is ok if the user consents to the auto updates. But that probably doesn't apply if the update results in software that does very different things compared to when it was first installed.

There are some cases where consent is deemed or implied.

CASL deems users to consent to the installation of the following list of computer programs if the user’s conduct shows it is reasonable to believe they consented to it. It is a weird list.

At the top of the list are “cookies”. To start with, anyone who knows what cookies are knows they are not computer programs. They are text files, and including them on this list tells me that the people who wrote this law may not know as much as you may hope about this subject.

It then includes HTML code. HTML is hypertext markup language. I suppose it is data that represents instructions to a computer on how to display text and other elements. I guess the next question is whether this includes variations of HTML, like XHTML. I don’t know. But if HTML is a computer program, then so are fonts and Unicode character codes.

Next it refers to “Java Scripts”. Yup. That’s what it says. We are told by Industry Canada that this is meant to refer to JavaScript, which is different from a Java script. Not only could they have avoided such a sloppy mistake, but maybe they could also have been clear about whether they were referring to JavaScript run in a browser (with its attendant sandbox) or something else.

Next on the list are “operating systems”, which seems very perverse to include. The operating system is the mostly invisible layer that lies between the computer hardware and the software that runs on top of it. Changes to the operating system can have a huge impact on the security and privacy of the user, and much of what it does happens out of the user’s view. And there is no clarity about whether an “operating system” on this list includes the software that often comes bundled with it. When I replace the operating system on my Windows PC, I get a new version of a whole bunch of the standard software that comes with it, like File Explorer and Notepad. It would make sense for deemed consent to cover a user who upgrades from one version of macOS or Windows to another, but I could also make an open-source operating system distro that’s full of appalling stuff in addition to the operating system itself.

Finally, it includes any program that is executable only through the use of another computer program whose installation the user has already consented to. Does this include macros embedded in Word documents? Not sure.

It makes sense to have deemed consent situations or implied consent, but we could have used a LOT more clarity.

There are some exceptions to the general rule of getting consent: two are reserved exclusively for telecommunications service providers, and a final one relates to programs that exclusively correct failures in a computer system or a computer program.

This is understandable, but this would mean that a telco can install software on my computer without my knowledge or consent if it’s to upgrade their network.

So how do you get express consent? It’s like the cumbersome express consent for commercial electronic messages, but with more.

When seeking express consent, the installer has to set out:

  • the reason consent is being sought;
  • their full business name;
  • their mailing address, and one of: a telephone number, an email address, or a web address;
  • if consent is sought on behalf of another person, a statement indicating who is seeking consent and on whose behalf consent is being sought;
  • a statement that the user may withdraw consent for the computer program’s installation at any time; and
  • a clear and simple description, in general terms, of the computer program’s function and purposes.
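If you’re a developer building an install-time consent prompt, it can help to treat these elements as a checklist. Here is a minimal sketch of what that might look like in code – the field names, company and program are my own invented examples, not language from the Act or its regulations:

```typescript
// Illustrative only: these fields are my shorthand for the elements listed above.
interface ConsentRequestNotice {
  purpose: string;                 // why consent is being sought
  businessName: string;            // full business name of the person seeking consent
  mailingAddress: string;
  contact: { telephone?: string; email?: string; website?: string }; // at least one required
  onBehalfOf?: string;             // if seeking consent for someone else, identify both parties
  withdrawalStatement: string;     // tell the user they can withdraw consent at any time
  programDescription: string;      // clear, simple, general description of function and purpose
}

// A hypothetical example; the company and program are invented.
const notice: ConsentRequestNotice = {
  purpose: "To install the ExampleSync desktop client on your computer",
  businessName: "Example Software Inc.",
  mailingAddress: "123 Example Street, Halifax, NS",
  contact: { email: "privacy@example.com" },
  withdrawalStatement: "You may withdraw your consent to this installation at any time.",
  programDescription:
    "ExampleSync keeps files in a folder you choose synchronized with your ExampleSync account.",
};
```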

But if an installer “knows and intends” that a computer program will cause a computer system to operate in a way its owner doesn’t reasonably expect, the installer must provide a higher level of disclosure and acknowledgement to get the user’s express consent.

This specifically includes the following functions, all of which largely make sense:

  • collecting personal information stored on the computer system;
  • interfering with the user’s control of the computer system;
  • changing or interfering with settings, preferences, or commands already installed or stored on the computer system without the user’s knowledge;
  • changing or interfering with data stored on the computer system in a way that obstructs, interrupts or interferes with lawful access to or use of that data by the user;
  • causing the computer system to communicate with another computer system, or other device, without the user’s authorization;
  • installing a computer program that may be activated by a third party without the user’s knowledge; and
  • performing any other function CASL specifies (there are none as yet).
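For developers, a simple way to think about this is to check whether any of these functions is present before choosing which consent flow to show. This is only a sketch – the flag names are my paraphrase of the statutory list, not official terms:

```typescript
// Illustrative flags paraphrasing the functions listed above; not official terms.
interface FunctionDisclosure {
  collectsStoredPersonalInfo: boolean;
  interferesWithUserControl: boolean;
  changesSettingsWithoutKnowledge: boolean;
  interferesWithStoredData: boolean;
  communicatesWithoutAuthorization: boolean;
  installsThirdPartyActivatedProgram: boolean;
}

// If any listed function is present, the higher level of disclosure and
// acknowledgement described above is needed before relying on express consent.
function needsEnhancedDisclosure(d: FunctionDisclosure): boolean {
  return (
    d.collectsStoredPersonalInfo ||
    d.interferesWithUserControl ||
    d.changesSettingsWithoutKnowledge ||
    d.interferesWithStoredData ||
    d.communicatesWithoutAuthorization ||
    d.installsThirdPartyActivatedProgram
  );
}
```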

Like the unsubscribe mechanism for commercial electronic messages, anyone who installs software that meets this higher threshold has to include an electronic address, valid for at least one year, that the user can use to ask the installer to remove or disable the program.

A user can make this request if she believes the installer didn’t accurately describe the “function, purpose, or impact” of the computer program when the installer requested consent to install it. If the installer gets a removal request within one year of installation, and consent was based on an inaccurate description of the program’s material elements, then the installer must assist the user in removing or disabling the program as soon as feasible – and at no cost to the user.

So how is this enforced? CASL is largely overseen by the enforcement team at the Canadian Radio-television and Telecommunications Commission.

Overall, I see them at least making more noise about their enforcement activities in the software arena than the spam arena.

In doing this work, the CRTC has some pretty gnarly enforcement tools.

First of all, they can issue “notices to produce” which are essentially similar to Criminal Code production orders except they do not require judicial authorization. These can require the recipient of the order to hand over just about any records or information, and unlike Criminal Code production orders, they can be issued without any suspicion of unlawful conduct. They can be issued just to check compliance. I should do a whole episode on these things, since they really are something else in the whole panoply of law enforcement tools.

They can also seek and obtain search warrants, which at least are overseen and have to be approved by a judge.

Before CASL, I imagine the CRTC was entirely populated by guys in suits and now they get to put on raid jackets, tactical boots and a badge.

I mentioned before that there can be some significant penalties for infractions of CASL’s software rules.

It needs to be noted that contraventions attract “administrative monetary penalties” – not a “punishment”, but a measure intended to ensure compliance. These are not fines per se and are not criminal penalties. That’s because if they were criminal or truly quasi-criminal, they’d have to follow the Charter’s much stricter standards for criminal offences.

The maximum administrative monetary penalties are steep: up to $1M for an individual offender and $10M for a corporation.

The legislation sets out a bunch of factors to be considered in determining the amount of penalty, including the ability of the offender to pay.

There is a mechanism similar to a US consent decree where the offender can give an “undertaking” that halts enforcement, but likely imposes a whole bunch of conditions that will last for a while.

Officers and directors of companies need to know they may be personally liable for penalties and of course the CRTC can name and shame violators.

There is a due diligence defence, but this is a pretty high bar to reach.

We have seen at least three reported enforcement actions under the software provisions of CASL.

The first, in 2018, involved two companies called Datablocks and Sunlight Media. The CRTC found they were providing a service that enabled others to inject exploits onto users’ computers through online ads. They were hit with penalties amounting to $100K and $150K, respectively.

The second was in 2019 and involved a company called Orcus Technologies, which was said to be marketing a remote access trojan. They marketed it as a legitimate tool, but the CRTC concluded this was to give a veneer of respectability to a shady undertaking. They were hit with a penalty of $115K.

The most recent one, in 2020, involved a company called Notesolution Inc. doing business as OneClass. They were involved in a shady installation of a Chrome extension that collected personal information on users’ systems without their knowledge or expectation. They entered into an undertaking, and agreed to pay $100K.

I hope this has been of interest. The discussion was obviously at a pretty high level, and there is a lot that is unknown about how some of the key terms and concepts are being interpreted by the regulator.

If you have any questions or comments, please feel free to leave them below. I read them all and try to reply to them all as well. If your company needs help in this area, please reach out. My contact info is in the notes, as well.

Wednesday, April 13, 2022

Video: Privacy and start-ups ... what founders need to know

In my legal practice, I have seen businesses fail because they did not take privacy into account. I’ve seen customers walk away from deals because of privacy issues and I’ve seen acquisitions fail due diligence because of privacy.

Today, I’m going to be talking about privacy by design for start-ups, to help embed privacy into growing and high-growth businesses.

Episode 2 of Season 4 of HBO’s “Silicon Valley” provides a good case study on the possible consequences of not getting privacy compliance right.

Privacy means different things to different people. And people have wildly variable feelings about privacy. As a founder, you need to understand that and take that into account.

In some ways, privacy is about being left alone, not observed and surveilled.

It is about giving people meaningful choices and control. They need to understand what is happening with their personal information and they should have control over it. What they share and how it is used. They should get to choose whether something is widely disseminated or not.

Privacy is also about regulatory compliance. As a founder you need to make sure your company complies with the regulatory obligations imposed on it. If you are in the business to business space, you will need to understand the regulatory obligations imposed on your customers. I can guarantee you that your customers will look very, very closely at whether your product affects their compliance with their legal obligations. And they’ll walk away if there’s any realistic chance that using your product puts their compliance at risk.

Privacy is about trust in a number of ways. If you are in the business to consumer space, your end-users will only embrace your product if they trust it – if they know what the product is doing with their information and they trust you to keep it consistent. If you are in the business to business space, your customers will only use your product or service if they trust you. If you’re a start-up, you don’t yet have a track record or wide adoption to speak on your behalf. A deal with a start-up is always a leap of faith, and trust has to be built. And there are a bunch of indicators of trustworthiness. I have advised clients to walk away from deals where the documentation and responses to questions don’t suggest privacy maturity. If you have just cut and pasted your privacy policy from someone else, we can tell.

Privacy is not just security, but security is critical to privacy. Diligent security is table stakes. And a lack of security is the highest risk area. We seldom see class-action lawsuits for getting the wrong kind of consent, but most privacy/security breaches are followed by class-action lawsuits. Your customers will expect you to safeguard their data with the same degree of diligence as they would do it themselves. In the b2b space, they should be able to expect you to do it better.

You need to make sure there are no surprises. Set expectations and meet them.

In my 20+ years working with companies on privacy, one thing is clear: people don’t like it when something is “creepy”. Usually this is a useless word, since the creepy line is drawn very differently by different people. But what I’ve learned is that where the creepy line sits depends on expectations – things are always creepy or off-putting when something happens with your personal information that you did not expect.

As a founder, you really have to realize that regardless of whether or not you care about privacy yourself, your end users care about privacy. Don’t believe the hype: privacy is far from dead.

If you are in the business to business arena, your customers are going to care very deeply about the privacy and security of the information that they entrust you with. If you have a competitor with greater privacy diligence or a track record, you have important ground to make up.

And, of course, for founders getting investment is critical to the success of their business. The investors in your friends-and-family round or even your seed round might not be particularly sophisticated when it comes to privacy. But mark my words, sophisticated funds carry out due diligence and know that privacy failures can often equal business failures. I have seen investments go completely sideways because of privacy liabilities that are hidden in the business. And when it comes time to make an exit via acquisition, every single due diligence questionnaire has an entire section, if not a chapter, on privacy and security matters. The weeks leading up to a transaction are not the time to be slapping Band-Aids on privacy problems that were built into the business or the product from the very first days. As a founder, you want to make sure that potential privacy issues are, at least, identified and managed long before that point.

The borderless world

I once worked with a founder and CEO of a company who often said that if you are a startup in Canada, and your ambition is the Canadian market, you have set your sights too low and you are likely to fail. The world is global, and digital is more global than any other sector. You might launch your minimally viable product or experiment with product market fit in the local marketplace, but your prospective customers are around the world. This also means that privacy laws around the world are going to affect your business.

If your product or services are directed at consumers, you will have to think about being exposed to and complying with the privacy laws of every single jurisdiction where your end users reside. That is just the nature of the beast.

If you're selling to other businesses, each of those businesses is going to be subject to local privacy laws that may differ significantly from what you're used to. Once you get into particular niches, such as processing personal health information or educational technology, the complexity and the stakes rise significantly.

You definitely want to consult with somebody who is familiar with the alphabet soup of PIPEDA, PIPA, CASL, PHIA, GDPR, COPPA, CCPA, CPRA, HIPAA.

You're going to want to talk carefully and deeply with your customers to find out what their regulatory requirements are, which they need to push down onto their suppliers.

The consequences of getting it wrong can be significant. You can end up with a useless product or service, one that cannot be sold or that cannot be used by your target customers. I’ve seen that happen.

A privacy incident can cause significant reputational harm, which can be disastrous as a newcomer in a marketplace trying to attract customers.

Fixing issues after the fact is often very expensive. Some privacy and security requirements may mandate a particular way to architect your back-end systems. Some rules may require localization for certain customers, and if you did not anticipate that out of the gate, implementing those requirements can be time and resource intensive.

Of course, there's always the possibility of regulatory action resulting in fines and penalties. Few things stand out on a due diligence checklist like having to disclose an ongoing regulatory investigation or a hit to your balance sheet caused by penalties.

All of these, individually or taken together, can be a significant impediment to closing an investment deal or a financing, and can be completely fatal to a possible exit by acquisition.

So what's the way to manage this? It's something called privacy by design, which is a methodology that was originally created in Canada by Dr Ann Cavoukian, the former information and privacy commissioner of Ontario.

Here's what it requires at a relatively high level.

First of all, you need to be proactive about privacy and not reactive. You want to think deeply about privacy, anticipate issues and address them up front rather than reacting to issues or problems as they come up.

Second, you need to make privacy the default. You need to think about privacy holistically, focusing particularly on consumers and user choice, and setting your defaults to be privacy protective so that end users get to choose whether or not they deviate from those privacy protective defaults.

Third, you need to embed privacy into your design and coding process. Privacy should be a topic at every project management meeting. I'll talk about the methodology for that in a couple minutes.

Fourth, you need to think about privacy as a positive-sum game rather than a zero-sum game. Too often, people think about privacy versus efficiency, or privacy versus innovation, or privacy versus security. You need to be creative and think about privacy as a driver of efficiency, innovation and security.

Fifth, you need to build in end-to-end security. As I mentioned before, security may in fact be the highest-risk area, given the possibility of liability and penalties. You need to think about protecting end users from themselves, from their carelessness, and from all possible adversaries.

Sixth, you need to build in visibility and transparency. Just about every single privacy law out there requires that an organization be open and transparent about its practices. In my experience, an organization that is proactive in talking about privacy and security, and how it addresses them, has a significant “leg up” over anybody who is not.

Seventh, and finally, you need to always be aware that end users are human beings who have a strong interest in their own privacy. They might make individual choices that differ from your own privacy comfort levels, but that is human. Always look at your product and all of your choices through the eyes of your human end users. Think about how you would explain your product and services to an end user, and whether the choices you have made in its design can be justified to them.

A key tool to implement this is to document your privacy process and build it iteratively into your product development process. For every single product or feature of a product, you need to document what data from or about users is collected. What data is generated? What inferences are made? You will want to get very detailed, knowing every single data field that is collected or generated in connection with your product.

Next, you need to carefully document how each data element is used. Why do you need that data? How do you propose to use it? Is it necessary for that product or feature? If it is not “must have” but merely “good to have”, how do you build that choice into your product?

You need to ask: is this data ever externally exposed? Does it go to a third party to be processed on your behalf? Is it ever publicly surfaced? Are there any ways the data might be exposed to a bad guy or adversary?

In most places, privacy regulations require that you give individual users notice about the purposes for which personal information is collected, used or disclosed. You need to give users control over this. How are the obligations for notice and control built into your product from day one? When a user clicks a button, is it obvious to them what happens next?

You will then need to ask “where is the data”? Is it stored locally on a device or server managed by the user or the customer? Is it on servers that you control? Is it a combination of the two? Is the data safe, wherever it resides? To some people, local on device storage and processing is seen as being more privacy protective than storage with the service provider. But in some cases, those endpoints are less secure than a data center environment which may have different risks.

Finally, think about life cycle management for the data. How long is it retained? How long do you or the end user actually need that information for? If it's no longer needed for the purpose identified to the end user, it should be securely deleted. You'll also want to think about giving the end user control over deleting their information. In some jurisdictions, this is a legal requirement.
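One practical way to work through all of these questions is to keep a per-field data inventory alongside your feature specs, covering collection, purpose, necessity, exposure, storage, retention and user controls. Here is a minimal sketch of what a single entry might look like – the structure and sample values are invented for illustration:

```typescript
// A hypothetical per-field inventory entry; the field names and sample values are mine.
interface DataInventoryEntry {
  element: string;                                    // the data field collected or generated
  source: "user-provided" | "generated" | "inferred";
  purpose: string;                                    // why this element is needed
  necessity: "must-have" | "nice-to-have";
  externalExposure: string[];                         // processors, public surfaces, other recipients
  storage: "on-device" | "our-servers" | "both";
  retention: string;                                  // how long it is kept and what triggers deletion
  userControls: string[];                             // notice, opt-outs and deletion options offered
}

const exampleEntry: DataInventoryEntry = {
  element: "coarse device location",
  source: "generated",
  purpose: "Suggest nearby events in the app",
  necessity: "nice-to-have",
  externalExposure: ["maps provider (processing on our behalf)"],
  storage: "our-servers",
  retention: "Deleted automatically after 30 days",
  userControls: ["opt-in prompt at first use", "toggle in settings", "deleted with the account"],
};
```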

Everybody on your team needs to understand privacy as a concept and how privacy relates to their work function. Not everybody will become a subject matter expert, but a pervasive level of awareness is critical. Making sure that you do have subject matter expertise properly deployed in your company is important.

You also have to understand that it is an iterative process. Modern development environments can sometimes be likened to building or upgrading an aircraft while it is in flight. You need to be thinking of flight worthiness at every stage.

When a product or service is initially designed, you need to go through that privacy design process to identify and mitigate all of the privacy issues. No product should be launched, even in beta, until those issues have been identified and addressed. And then any add-ons or enhancements to that product or service need to go through the exact same scrutiny, to make sure that no new issues are introduced without having been carefully thought through and managed.

I have seen too many interesting and innovative product ideas fail because privacy and compliance simply were not on the founder’s radar until it was too late. I have seen financing deals derailed and acquisitions tanked for similar reasons.

Understandably, founders are often most focused on product market fit and a minimally viable product to launch. But you need to realize that a product that cannot be used by your customers or that has significant regulatory and compliance risk is not a viable product.

I hope this has been of interest. The discussion was obviously at a pretty high level, but my colleagues and I are always happy to talk with startup founders to help assess the impact of privacy and compliance on their businesses.

If you have any questions or comments, please feel free to leave them below. I read them all and try to reply to them all as well. If your company needs help in this area, please reach out.

And, of course, feel free to share this with anybody in the startup community for whom it may be useful.