The Information Economy
At Facebook’s initial public offering in 2012, Mark Zuckerberg shared a motto: “Move fast and break things.” Though Facebook later abandoned it, the catchphrase endures as a call for disruptive innovation. It’s invoked by tech executives who insist they must “break eggs to make an omelet,” and in policy circles to condemn the tech industry’s recklessness. But the harsh truth is that we gave the tech industry no incentive to do anything other than move fast and break things. In the information economy, as long as companies comply with their promises and regulatory checklists, the consequences of their actions don’t count. If we want a system where tech companies don’t follow this mantra, we should bring our moral outrage to bear on their business models by making them pay for what they break.
The information economy refers to a commercial system where companies profit not only from the money we give them but also from the information they take from us. That means people are often harmed in ways they can’t anticipate. Take deepfakes, for example. In 2023, an Australian man used AI to make a host of deepfake pornographic images of real women, including local teachers and students, breaching court orders to stop. This year, an underage girl from New Jersey had to sue a classmate who made and shared explicit deepfake pictures of her. Years ago, courts would have dismissed these cases, blaming the victim for sharing information online and saying that nothing could be done until someone proved “real” harm. What courts failed to consider then is something many fail to consider now: people’s interest in their information is tied to dignity and goes beyond material consequences. These victims were harmed because they were pulled into the information economy. Bad actors who spread abusive content, while blameworthy, are enabled by services that, for profit, provide the tools to create it and the platforms to disseminate it.
Most harms of the information economy don’t just target one person but affect many. The Cambridge Analytica scandal, for example, involved harm to democracy that affected millions. And these harms have uneven impacts. AI systems used by the police, for example, harm marginalized groups disproportionately when they have disparate error rates. One doesn’t have to be famous or important to be part of the information economy. We are all exposed to its risks because data is indispensable to our social and economic lives. This pervasiveness makes data harms impossible for people to predict and guard against.
Power to Harm
AI has changed how the information economy functions. But privacy and data protection laws rest on a consumer protection model designed for a different reality: one person interacting with one company, where the customer needs choices to discern which practices are good for them. These assumptions have ceased to be true for three reasons.
The first reason is AI inferences. Uber makes real-time inferences about demand for rides from data on the number of people opening the app, their location, the weather, and potentially your battery level. Spotify uses data from your listening habits to infer your personality and emotions. Health insurers make inferences about people’s health risks from wearable-device data, which could affect premiums and coverage. The U.S. Department of Homeland Security collects social media data to infer potential security threats, which can lead to people being placed on watchlists. Facial recognition systems use pictures to infer your identity and, sometimes, your emotions by analyzing facial expressions. Deepfakes, too, are products of inference. Consequently, current protection mechanisms, which treat data subjects as consumers, are ineffective because it’s impossible to predict what will be learned from our data. Each piece of data collected is a single piece of a large puzzle. With AI, pieces of information from us are combined to learn things we didn’t agree to disclose. We may know what information we shared, but we don’t know about the puzzle that formed about us on the back end. In standard consumer relationships, interactions start and end at the cash register, and they happen one by one. But in the information economy, our interactions happen second by second and follow us forever because of the inferences created at each step of the way. Having control over one’s inferred data is impossible.
The second reason is that personal data is interrelated. Not having TikTok or Instagram doesn’t mean those companies know nothing about us, because they hold troves of information from people similar to us. Inferences made from data reveal shared groups, behaviors, and trends that go beyond the people who initially shared the data. A company needs only a few pieces of demographic information – such as gender, age, or residence – to infer a great deal about one’s behavior or preferences from others with similar characteristics. Under the current model, sharing one’s own information amounts to consenting to the disclosure of information about others. This breaks even the thinnest notions of consent.
The third reason is power. AI tools, which are often biased against members of vulnerable groups, make significant decisions about our lives. Systems like COMPAS, for example, estimate the likelihood of recidivism, meaning they can be the reason someone is denied or granted parole or bail. Other AI algorithms make important decisions about us when they’re used to screen our CVs or determine what content we see on social media. These dynamics shift relationships of power. Thirty years ago, to decide whether someone should get a mortgage loan, a loan officer made an individual assessment based on the person’s income and budget. Today, that decision is made, in part or entirely, by credit scoring. The discretion of thousands of loan officers is replaced by the discretion of a few AI designers. This shift concentrates power, making our data choices less relevant and the inferences made about us more significant.
Meaningful Accountability
For these reasons, frameworks for harm in the information economy are outdated. Data protection is often viewed as a gradient. On one end is a free-market system where obligations originate in agreements between parties: companies make commitments in their privacy policies and people accept or reject them. On the other end is a direct regulation system, where obligations arise from what legislators prohibit in advance, like using facial recognition for emotion detection. But shifting from a contract system to a direct regulation system preserves the basic problem. Just as individuals can’t possibly anticipate all harms, neither can legislators and regulators. That’s because most privacy harms don’t depend on a flatly unfair practice or technology that can easily be prohibited; they depend on what’s inferred and how the data will be used down the line.
Most data protection systems around the world trigger responsibility only when a company breaches a contractual promise or a regulatory provision. Under these systems, companies must honor what they promise in their privacy policies and follow procedures mandated in advance, such as conducting privacy impact assessments and appointing a Data Protection Officer. But the negative consequences of for-profit data practices that fall outside those categories have no corresponding forms of responsibility. This means that if companies draft their privacy policies carefully and follow regulatory checklists, they may escape responsibility for data harm. We need a system of accountability for those consequences.
Separating accountability from regulatory violations and corporate promises would be new for data protection, but it’s normal in many other areas of the law. Think about driving. When you drive, regulations say you must have working lights and mirrors and respect the speed limit. Legislators direct us this way because it reduces many risks. But legislators also know there are risks these measures don’t prevent. So, beyond the rules about lights, mirrors, and speed limits, if you crash and a judge determines that you were insufficiently careful, you’re still responsible for the accident’s consequences. The law creates a safety net to catch the harms that regulations couldn’t anticipate.
What we need to achieve meaningful accountability in the information economy is to overcome the privacy fallacy. Politicians and the tech industry often say that privacy is worth protecting – perhaps even a fundamental right. The implication is that privacy can be harmed even if we prevent the material consequences of privacy violations, like identity theft. Falling for the privacy fallacy means thinking that if someone has nothing to hide, they have nothing to fear because, under this mistaken view, privacy is only valuable for the negative consequences it prevents. Overcoming the privacy fallacy means embracing the consequences of stating that people’s privacy holds value in itself. In a world where data is the lifeblood of the economy, figuring out how to protect people from data harms is paramount.