Recently I’ve taken a fairly deep dive into U.S. privacy laws, including newer ones like the California Consumer Privacy Act (CCPA) and the upcoming California Privacy Rights Act (CPRA). It’s really interesting stuff.

The first class action lawsuits alleging violations of the CCPA started soon after it came into effect on Jan. 1, 2020.

The lawsuits are largely about:

  • whether or not violations (data breaches) occurred
  • whether the breaches affected people who are covered under the Act
  • what injuries, if any, resulted from the violations

The bigger impact of these lawsuits will be to produce clarity, precedent and concrete interpretations of various parts of the CCPA.

For example, what, precisely, constitutes a data breach? You may conjure up a mental image of hoodie-clad teenagers in dark basements in foreign lands, hacking their way into companies’ systems. Shadowy bad guys making off with the sensitive personal information of millions of people, which is then sold to even more shadowy, badder guys on the dark web.

Or could it mean a company collecting and selling customers’ data without asking or telling them? What if some of that data came from children? (This is the case in several CCPA lawsuits.)

But I’ve noticed something is missing. Or, perhaps more accurately, I’ve noticed something about the particular way these laws are framed. All of them, even the Big Boss of privacy laws, the European Union’s General Data Protection Regulation (GDPR).

Everything is us versus them. Companies versus consumers. Mainly protecting consumers from companies that just can’t seem to help themselves where our bright, shiny, lucrative data is concerned.

While this is the obvious framing for these laws given the capitalist world we live in, I think it unfortunately limits how much protection these laws can actually provide. The definitions of what harm is, and who can be harmed, tend to be interpreted fairly narrowly. And I doubt that the settlements of those lawsuits will change or expand them much.

I’m also inclined to agree with the Electronic Frontier Foundation that privacy laws with an opt-out model, like the CCPA, are not best for consumers. They put the onus on us to prevent companies that already have our personal information from doing things with it that we don’t want.

Then I saw this CBC article about potential issues with a virtual passport application platform. Certainly cause for concern, but what snagged my attention was this:

Benoît Dupont, a criminology professor at l'Université de Montréal and Canada Research Chair in cybersecurity, said the passport app will likely be a major target for fraudsters eager to get their hands on Canadian passports and the mobility that comes with them.

"That's very attractive for organized crime groups who specialize in human trafficking," Dupont said in French. "They will attempt to exploit the program very quickly, very intensely to obtain the most fraudulent passports they can in the least amount of time."

It comes back to ideas of injury or harm. If I applied for a passport and the personal information I provided in the application was stolen, how much am I harmed? Perhaps less than if my actual passport were stolen, which could be a bit of a bureaucratic nightmare, plus the small issue of identity theft.

But if my stolen passport is used to fake an identity for a human-trafficking victim, is that person harmed? Not by having a fake identity she didn’t seek out, per se, but by being trafficked, absolutely. Substantially more than I am.

With laws designed to punish data breaches, when trying to prove “injury” from a breach, is harm “once removed” considered? Is the human-trafficking victim considered? Under the law, is that person a relevant “consumer”?

What are the odds that I, the original passport owner, would even find out how my personal information had been used, so that I could inform my lawyer should I choose to join a class action lawsuit over the original violation?

Same if my credit card information is stolen. Credit card companies have strong fraud monitoring, so the theft would likely be detected quickly. The company isn’t going to keep me on the hook for the fraudulent spending. The incident isn’t going to wreck my credit rating. And I’m likely to have a new card in hand within a few days. How much harm have I suffered?

But what if my credit card is used to pay for hotel rooms where human-trafficking victims are held captive? If it pays for car rentals or gas for vehicles to move victims around? Is that considered injury resulting from the data breach wherein my credit card details were stolen?

Now, identity theft can be a years-long nightmare for those victimized by it. Identity theft resulting from a data breach is absolutely a form of harm.

So is the time and stress of cleaning up even relatively minor issues from a data breach, all things you wouldn’t have had to deal with if the violation hadn’t happened. Not to mention the likely permanently eroded trust in the company that suffered the breach.

What if victimization weren’t so encapsulated? What if potential victims weren’t just categorized as a company’s direct users or customers? What if we considered the societal harm done by misuse of personal information: the harm done not just to those whose personal data is stolen, but to secondary and tertiary victims, as many as can be traced to the violation?

But how do you do that? If a company is hacked, I don’t see courts ruling that it is responsible for all degrees of harm caused by the data breach.

What about those aforementioned companies, collecting and selling customers’ data without telling them, and without getting opt-in or opt-out? Or potentially selling personal information about young children to third parties in other countries (also part of some of the CCPA lawsuits)? Are they more responsible for harm caused? It’s not an unreasonable argument.

They not only collected and used the data without authorization; they also failed to keep it secure. And not by having insufficient security and being hacked, but by chasing the almighty dollar and selling it. Proactively, they were their own shadowy bad guys.

Now, I don’t expect the CCPA would ever be interpreted this way. Even the CPRA, which expands and amends the CCPA, isn’t going to be that kind of law, because we don’t have that kind of society. (And yes, U.S. laws do affect Canadians and others.)

However, these laws are in many ways Version 1. We are realizing how complex this digital world we’ve built is, and how much it’s been the Wild West to date. Yes, the horses are already out of the barn, but that doesn’t mean they’re gone forever.

Consumers are starting to pay more attention. We’re starting to understand just how much we’ve been letting companies take from us for nothing and without our input. And once we’ve asked the big questions, we will be part of defining big answers.

So who knows? Who knows who “us” will become, and “them”? Who knows if we will evolve past reactivity, beyond just sometimes punishing those that have failed (or actively done wrong) and caused harm?

Perhaps we’ll move toward a proactive model, designed to do better from the beginning, to protect and prevent harm. Not only for consumers, but, to the greatest degree we can, for everyone.

M-Theory is an opinion column by Melanie Baker. Opinions expressed are those of the author and do not necessarily reflect the views of Communitech. Melle can be reached on Twitter at @melle or by email at me@melle.ca.