Is (just) some privacy worth anything?
Two recent stories made me dust off an old draft about crypto-anarchy1: (1) my family discussing recent reporting on China’s “social credit system” and (2) Diar’s reporting on US government spending on blockchain analysis services. We’ve reached a point where companies and states can understand a person better than that person understands themself. And using cryptocurrencies adds to the data available to those doing the surveilling2.
Most people don’t care about this (“I’ve got nothing to hide”3), but what they don’t realize is that even if it doesn’t impact them directly today, it impacts the society they live in.
In this post, I explore whether pure privacy (vs some privacy) is required to protect individual freedoms. It makes the following arguments.
Our goal is to increase individual freedom and equality
Surveillance leads to marginalization, which decreases freedom and equality
Even mostly (but not completely) private systems unravel into effectively unprivate systems
Therefore, mostly private systems lead to marginalization
Marginalization decreases individual freedom
All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood4.
By and large, people agree that increasing individual freedom and equality is a good thing.
As a society, we should seek systems that–among other things–increase personal freedoms and equality of opportunity. In the language of crypto: reduce censorship and reduce the marginalization of any individual or group. These are the desired outcomes in this piece.
Surveillance leads to marginalization
Unprivate systems are surveillable, and surveillance marginalizes groups. Marginalization decreases individual freedom and equality.
Surveillance is an underappreciated cost of networks that aggregate data on individuals. Most people are not part of the marginalized group (by definition), but if society cares about freedom and equality, we should care that the mere possibility of a marginalized group exists.
As Sarah Jamie Lewis (my favorite writer on the topic) says of P2P tech (emphasis mine):
The point of P2P tech is to distribute trust. You can’t distribute trust without consent. You can’t consent without meaningful privacy. Privacy should be a foundational element in any P2P stack, not a challenge, or a footnote, or a “maybe we will get to this in the future”.
[…]
Privacy is not an optional design element. When you refuse to build privacy into a system you are further marginalizing populations, enforcing censorship and encouraging surveillance.
When you refuse to build privacy into a system you are stating that you believe that only certain types of people should be able to use your system, and only for certain things. You might not intend that, but that is fundamentally the result.
To use a toy example: imagine a society of people who are either red or blue. If society has a bias against blue people, a service–say a social network–may try to prevent blue people from creating accounts, or it may decrease the discoverability of blue people’s content. If, instead, the social network cannot classify its users as red or blue–they’re all colorless–it is unable to marginalize either group.
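To make the mechanism concrete, here is a minimal sketch (hypothetical code, not any real network’s ranking system): a feed-ranking rule can only bury blue content if it can observe each user’s color; strip the attribute out and the same rule has nothing to act on.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    author: str
    color: Optional[str]   # "red", "blue", or None if the system cannot observe it
    engagement: float

def rank_score(post: Post) -> float:
    """Hypothetical feed-ranking rule for the red/blue toy example."""
    score = post.engagement
    if post.color == "blue":   # discrimination requires the attribute to be visible
        score *= 0.1           # quietly bury blue content
    return score

print(rank_score(Post("alice", "blue", 100.0)))   # 10.0  -- marginalized
print(rank_score(Post("alice", None,  100.0)))    # 100.0 -- colorless, nothing to key on
```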
That is an unprivate system. But marginalization occurs in mostly private systems as well.
Mostly private systems can unravel, leading to surveillance
In “Unraveling privacy: the personal prospectus and the threat of a full-disclosure future”, Scott R. Peppet argues that for systems with mostly private data, an economic incentive for some individuals to disclose their data leads to the marginalization of those who do not wish to disclose.
In a world of verifiable information and low-cost signaling, the game-theoretic “unraveling effect” kicks in, leading self-interested actors to fully disclose their personal information for economic gain. Although at first consumers may receive a discount for using a driving or health monitor, privacy may unravel as those who refuse to disclose are assumed to be withholding negative information and therefore stigmatized and penalized.
He uses the example of a buyer inspecting a crate of oranges (apparently the classic example):
A buyer wants to purchase a crate of oranges but cannot open it to count them, because the oranges would rot before transport
The seller is not required to disclose how many oranges are in the crate, but faces stiff penalties for lying
The quantity of oranges is private data held by each seller. Unraveling says that self-interest will push every seller to disclose it anyway. Here’s how it works:
A seller with a full crate (say 100 oranges), will say they have 100 oranges
A seller with an almost full crate (say 99 oranges) will say they have 99 oranges and accept a price 1% lower than they would otherwise have received
And so on down the line, because each seller does not want to be treated as if they have even fewer oranges than they actually do.
As Peppet quotes, “silence cannot be sustained because high-value sellers will distinguish themselves from low-value sellers through voluntary disclosure.”
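A toy simulation of this cascade (my own sketch, not from Peppet’s paper) makes the incentive explicit. Assume crates hold between 90 and 100 oranges, and the buyer prices any silent seller at the average count of whoever remains silent; each round, any seller who would earn more by disclosing does so.

```python
def unravel(counts):
    """Simulate the unraveling of seller silence, round by round."""
    disclosed = set()
    while True:
        silent = [i for i in range(len(counts)) if i not in disclosed]
        if not silent:
            break
        # The buyer prices every silent crate at the average of the remaining silent sellers.
        silent_price = sum(counts[i] for i in silent) / len(silent)
        movers = [i for i in silent if counts[i] > silent_price]
        if not movers:          # nobody gains by disclosing; the cascade stops
            break
        disclosed.update(movers)
        print(f"silent price {silent_price:.1f} -> sellers with "
              f"{sorted(counts[i] for i in movers)} oranges disclose")
    return disclosed

unravel(list(range(90, 101)))   # one seller per count, 90..100 oranges
# Round by round the best remaining sellers peel off, until only the 90-orange
# seller stays silent -- and at that point silence itself reveals the count.
```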
The theory of unraveling helps us understand the societal impact of surveillance and mostly-but-not-quite-100% private systems.
Consider the transactions of a privacy coin. If one can disclose their transactions (via something like Zcash viewkeys5) for a reward or to avoid a penalty, those with “nothing to hide” would likely disclose, leading to the marginalization of those that do not.
Here’s how unraveling might occur:
A government says that citizens who disclose all their privacy coin transactions will receive a tax break
Citizens that feel like they have nothing to hide provide the government with their view key (let’s say this is 10% of the population)
Citizens with a little to hide, but nothing serious, disclose to avoid being perceived as having something serious to hide (let’s say this is 40% of the population)
Those that do not disclose are suddenly subject to increasing scrutiny. They are searched more frequently when they go through airport security. Cops start to pull them over for no apparent reason
Disclosures continue until only persecuted groups and criminals are left undisclosed
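Here’s a small sketch of that cascade (hypothetical numbers, not data): each citizen has some personal cost to disclosing, the government offers a fixed tax break, and the effective penalty for staying private grows as the silent group shrinks and attracts more scrutiny.

```python
def disclosure_cascade(privacy_costs, tax_break=3.0, scrutiny_weight=10.0):
    """Simulate rounds of view-key disclosure under growing scrutiny of the silent."""
    disclosed = [False] * len(privacy_costs)
    while True:
        disclosed_share = sum(disclosed) / len(disclosed)
        # The fewer people left silent, the more scrutiny each of them attracts.
        penalty = tax_break + scrutiny_weight * disclosed_share
        movers = [i for i, done in enumerate(disclosed)
                  if not done and privacy_costs[i] < penalty]
        if not movers:
            return disclosed
        for i in movers:
            disclosed[i] = True
        print(f"penalty {penalty:.1f} pushes disclosures to "
              f"{sum(disclosed)}/{len(disclosed)}")

# 100 citizens: half value privacy a little, 40 somewhat more, and a
# persecuted minority of 10 values it enormously.
disclosure_cascade([2.0] * 50 + [5.0] * 40 + [50.0] * 10)
# The "nothing to hide" half goes first, the middle follows as scrutiny rises,
# and only the group that needs privacy most is left marked out by its silence.
```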
Private systems that allow disclosure can lead to marginalization via unraveling. Marginalization decreases individual freedoms.
All or nothing?
The crypto-anarchists saw this coming.
“[Anonymous communication] will alter completely the nature of government regulation, the ability to tax and control economic interactions, the ability to keep information secret, and will even alter the nature of trust and reputation.”
–Timothy May, The Crypto Anarchist Manifesto (1988)
For crypto-anarchists, anonymous6 communication enabled infrastructure that cannot comply with authoritarian demands to break the secrecy of its participants’ correspondence. Such infrastructure limits the control of governments and other centralized entities.
If we return to the example of a government trying to unravel the transactions for a privacy coin, crypto-anarchists imagine the following:
A government says that citizens who disclose all their privacy coin transactions will receive a tax break
Nobody is able to disclose anything
Crypto-anarchists believed in an infrastructure that by design could not comply with authoritarian requests. Pure, 100% anonymity and privacy was the requirement for such a system. The concept of unraveling supports their perspective: in a privacy-preserving system that allows individuals to disclose, individuals can be incentivized to do so, and a marginalized group emerges as a result. Therefore, any system that isn’t purely private and anonymous can lead to marginalization.
Replacing every system with purely private alternatives doesn’t seem very probable.
Still, there are reasons to be optimistic. The majority of the internet is protected by HTTPS–which encrypts the traffic between users and the sites they visit–despite a history of government attempts to control strong cryptography7. We might enjoy the same security in messaging apps if everybody follows the lead of WhatsApp and iMessage and implements end-to-end encryption. These are examples of privacy and security winning against the interests of authoritarians.
DuckDuckGo’s hockey-stick growth curve suggests a growing interest in privacy
Last week, DuckDuckGo–a search engine that doesn’t track its users–released traffic stats revealing a very convincing hockey stick. In the same week, Facebook announced a security breach compromising the personal information of 50M users. Increased usage of privacy-preserving services and privacy fumbles by internet giants could keep this trend going.
Yet we operate in the least private, least anonymous time in human history. One private and secure layer (like HTTPS) is not enough to protect users from marginalization if they can be identified at another layer. Which brings us to the critical question: is some privacy any good, or do we need complete privacy to increase freedoms and prevent marginalization?
I worry the crypto industry is taking too many half measures. A privacy coin that is able to comply with regulators is highly vulnerable to unraveling. A web3 stack that has privacy layers, but identifies users or their actions at other layers doesn’t seem private at all.
Unlike the ideal of “decentralization”, privacy and anonymity are measurable: are my actions or identity identifiable or not? Any technology that appears private but eventually unravels is worse for individual freedoms than a technology that is explicitly public. Users that need privacy most might adopt the seemingly private technology and ultimately be subject to marginalization8.
I’m curious how privacy experts think about these trade-offs. Is starting semi-private to appease regulators compatible with full privacy in the future? Or should projects start and stay 100% private?
Thanks to Ainsley Sutherland and Elena Nadolinski for their input on this piece
Though after a few rounds of edits, almost nothing remains of that original draft. ↩
Bitcoin has extremely poor privacy. Every transaction is recorded in an immutable ledger with the sender, receiver, and amount, so once an address is tied to Alice or Bob it’s trivially easy to follow their activity. Two federal agents who stole money during the Silk Road probe learned that the hard way. Ethereum has the same problem for its smart contract transactions. And even if you create new addresses for every interaction, if any of your addresses interact with each other, you reduce the pseudonymity. This is called “linkability.” ↩
This was my attitude until relatively recently. ↩
We should probably revise this to say siblinghood or something. ↩
View keys allow individuals to “selectively disclose the data from a specific transfer […or] all transactions for a given shielded address.” ↩
Both privacy and anonymity are used in crypto-anarchist documents. A quick detour to disambiguate them: Privacy: the ability to keep some information or actions to yourself. Like shutting the door to use the toilet. Anonymity: the ability to present some information or take some action without anybody being able to identify you. Like calling the anonymous tip line at a police station or whistleblowing. In the context of surveillance and censorship, privacy and anonymity get blurry. Even if an authoritarian does not know your passport number, if they can tell you are a “blue person”, they can still marginalize you. Disclosing your “blueness” could be a failure of privacy, and not anonymity, depending on the system. For example, “blueness” could be the observation that you supported a specific cause. So to achieve the anonymity desired by crypto-anarchists, you also need privacy. ↩
Read the history of DES for some examples. ↩
As I wrote a couple weeks ago, users are subject to the same type of risk with regulated stablecoins that might seem like they have the desirable properties of cryptocurrencies but are actually even more censorable and seizable than paper money. ↩