Does Crypto Create Value or Just Redistribute It?
Exploring the Far-Reaching Implications of Open Data
[Note that this essay was originally published on Medium]
Tl;dr:
The web3 debate often overlooks a fundamental question: what can crypto do better than other technologies and will this create value for a lot of people or not?
The pro-web3 narrative doesn’t adequately address this question because it emphasizes how crypto redistributes value, not how it creates it.
The short answer is that the most significant and far-reaching effects of crypto will likely come from open data.
To show why, it’s helpful to draw a parallel with the Internet. The Internet created enormous value for billions of people by unbundling data from physical infrastructure, enabled by packet switching. This broke the data communications logjam at the network layer, causing the cost of information to plummet.
The Internet as we know it today, however, has migrated to a client-server architecture, and data is stored on proprietary servers by default. The data logjam has moved up to the application layer.
On public blockchains, data is open, by default. Crypto effectively unbundles data from applications, breaking this new logjam. Less friction promises even cheaper information and even larger network effects.
It will take a long time to understand the real-world implications of a more open web. But we can already start to flesh out a few that have the potential to create enormous value for a lot of people: elimination of intermediaries, code composability, data composability, self-sovereign data, data persistence, coordination and financialization/securitization.
The promise of web3 is not about decentralization for decentralization’s sake. It’s also not primarily about value redistribution. It is decentralization as a means toward openness and the efficiency, innovation and utility this will create.
Web3 is the most divisive topic in technology.
It either heralds a new, egalitarian Internet and an inflection point in human history or it’s a Ponzi scheme with no practical application.
If you follow the space closely, you’ve seen a rising crypto/web3 backlash in recent months. A chorus of prominent CEOs, including Jack Dorsey, Elon Musk, Aaron Levie and Brian Chesky (h/t Austin Rief), have publicly criticized the pro-web3 narrative on Twitter. Former Signal CEO Moxie Marlinspike posted a widely-read essay pointing out that web3 is not as decentralized as its proponents argue. Scott Galloway piled on, with his usual stew of wit, insight and sensationalism.
The inevitable backlash-to-the-backlash followed, with a host of responses, from Vitalik Buterin, Chris Dixon, Packy McCormick and more.
I write because I’m trying to figure out the answers, not because I have them. I also have no dog in the fight. But it seems to me that a lot of this debate is overlooking the most fundamental question: what can crypto do better than other technologies and will that create value for a lot of people or not?
In this essay, I try to answer it. But first, it’s worth exploring why this is an important question to answer now and why the prevailing web3 narrative doesn’t adequately address it.
The Debate Matters
A fair place to start: why should we care? Does it really matter if a bunch of CEOs and VCs are trolling each other on Twitter?
Crypto’s success will ultimately be measured by the effect it has on people’s lives, which will depend on the breadth of adoption and use cases. Technologies do not get installed and adopted overnight, so that will take time to play out.
In the meantime, the debate matters. It influences the flow of capital and talent. It matters even more in the court of public opinion, which will affect the rate of adoption and perhaps the biggest wild card, regulation.
The Web3 Narrative Emphasizes Value Redistribution, Not Value Creation
No one is in charge of the “web3/crypto narrative;” there is no Chief Communications Officer of crypto or crypto PR agency of record. It is an emergent product of a lot of different opinions and articulations. But the pro-web3 narrative most often reads something like this:
Centralization is bad, because powerful centralized institutions extract usurious rents, can censor or de-platform users at will and fail to protect user privacy.
Cryptonetworks lack centralized institutions, enabling users and creators to retain ownership of what they create. Therefore decentralization will redistribute value and power to creators and users.
In other words, decentralization = distributed ownership = redistribution. This is also sometimes referred to as The Ownership Economy and the progression of web1 ➡️ web2 ➡️ web3 is often summarized as:
web1: read
web2: read / write
web3: read / write / own
There are many examples of this narrative. There are elements of it in Chris Dixon’s seminal Why Decentralization Matters, Jesse Walden’s description of the ownership economy, recent congressional testimony from Bitfury CEO Brian Brooks or this clip from Cooper Turley.
To be clear, I agree with these positions, which I think are well reasoned and articulated. Redistribution of value and power from centralized institutions is a very important potential benefit of web3. No one disputes that in web2 tremendous power is concentrated in just a few hands and people are increasingly disillusioned with centralized institutions in general. Neither concentration of power and wealth nor lack of trust is socially beneficial.
It’s also understandable why this narrative has caught on. It has been shaped by and targeted towards the most important early constituencies. It appeals to investors, who think a lot about how value flows within ecosystems. It has been presented to regulators as a way to counterbalance the power of “Big Tech.” And it has fueled a lot of the ideology and energy behind the early crypto movement. There’s nothing like a common enemy to unify a group. The subtext of WAGMI is that “we” are gonna make it, but “they” (centralized institutions) are not.
But I also think this narrative has a lot of drawbacks:
Redistribution of power and value to users is only one potential benefit of web3, among many. More on this below.
It fails to refute the bear case. Most criticisms of crypto basically boil down to the idea that it does not have much or any practical utility and is therefore a pyramid scheme that only redistributes wealth. A narrative focused so much on value redistribution, not value creation, inadvertently plays right into this criticism.
People adopt products because of utility, not ideology.
Redistribution won’t attract the next billion users. Products achieve mass adoption because they meet a need in people’s lives better than the alternatives, not because of ideology. The Internet itself was first billed as a tool for free expression, but people ultimately coalesced around shopping, bingeing TV, keeping up with friends, watching people fall down and sending unsuspecting victims videos of Rick Astley. It is very much unclear that many people consider “redistribute value and power away from centralized institutions” an unmet need.
The vast majority of people don’t create, they consume. There is already evidence that web3 redistributes value from platforms to creators. There are countless stories of fine artists and photographers who struggled for years and are now making a living wage, or more, selling their work as NFTs. We are starting to see a similar phenomenon for musicians on services like sound.xyz. But most people don’t care about getting compensated for what they create because they don’t create, they consume. YouTube has ~37MM channels, which sounds like a lot until you compare it to 2B+ MAUs. Only 10% of Twitter users generate over 90% of tweets.
Redistribution is too abstract. It’s very hard to explain to Sally Mass Adopter how she will benefit from this redistribution because the business models of web3 are in their infancy. Let’s introduce Faceledger, the web3 alternative to Facebook. It will share its revenue with users! Great, but how will it make money? Will it have ads? Will it charge users? How much will I make? Uh, don’t know. But did I mention it will have a mechanism for community governance and content moderation! Cool, how will that work? Also don’t know. Anyway, since you own all your friend lists and posts, if you don’t like it you will be able to move to a new network with a few clicks! Great, but how do I coordinate this initial move to Faceledger with all my friends? Also don’t know. So why would I go through the trouble? Um…
Redistribution is being conflated with equal distribution, which will not happen “because power laws.” The promise of redistribution is being twisted into one web3 can’t keep. Many have interpreted “redistribution” to mean far more equal distribution in web3 than is currently evident on the Internet. (We can see that in the criticisms from Dorsey and Galloway mentioned above.) That will not happen because the way information, including popularity and value, propagates through a network is dictated by power laws, not normal distributions.
In normal distributions, the vast majority of observations fall relatively close to the average. These are common in many natural phenomena, but only when the observations are independent of each other.
In a power law distribution extremely large and extremely small observations are far more common. Power law-like distributions are common in networks because network phenomena tend to be dependent. Each node influences, and is influenced by, other nodes. For example, popularity tends to follow power law-like distributions, particularly when there is asymmetric information and search and research costs are high (i.e., there are a lot of choices and it’s hard to figure it out yourself). In those cases, people are much more likely to base their choices on the signals they get from the network. This leads to the “rich-get-richer effect,” whereby popular things tend to get more popular and unpopular things stay unpopular (like my bi-monthly tweets). Other than cases where recommendation engines expressly ignore what other people choose (e.g., Pandora’s Music Genome Project), almost any measure of popularity, like Twitter followers, box office, book sales, YouTube subscribers and views, viewership of Netflix shows, earnings on Patreon, the value of NFT collections, the market cap of alt coins, etc., exhibits power law-like distributions. The distribution of popularity and value on web3 will almost invariably follow these highly skewed distributions.
Web3 has the capability to push the entire power-law distribution up and to the right for creators, meaning that everyone in the distribution will be able to make more than they previously did. This can happen for two reasons: the lack of centralized natural monopolies that take outsize “rakes”; and the ability of creators to extract consumer surplus from ardent fans. (Or what Packy McCormick calls the “dead space under the demand curve.”) But the distribution will likely still look like a power law.
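The rich-get-richer mechanism is easy to simulate. Here is a minimal sketch in Python; the 90/10 split between imitating the crowd and independent discovery is an illustrative assumption, not an empirical estimate:

```python
import random

def simulate_popularity(creators=300, fans=10_000, copy_prob=0.9, seed=7):
    """Rich-get-richer: each new fan usually follows the crowd's signal,
    choosing a creator with probability proportional to existing followers."""
    rng = random.Random(seed)
    followers = [1] * creators  # everyone starts with one follower
    for _ in range(fans):
        if rng.random() < copy_prob:
            # preferential attachment: weight choices by current popularity
            i = rng.choices(range(creators), weights=followers)[0]
        else:
            i = rng.randrange(creators)  # independent discovery
        followers[i] += 1
    return sorted(followers, reverse=True)

counts = simulate_popularity()
top_decile_share = sum(counts[:30]) / sum(counts)
assert top_decile_share > 0.2  # far above the 10% an even split would give
```

Even starting from perfect equality, a small minority of creators ends up with an outsized share of all followers, purely because each choice depends on prior choices.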
The most controversial take: absolute decentralization is neither possible nor, for many people, desirable. The decentralization = distributed ownership = redistribution narrative makes absolute decentralization (whatever that means) the be-all and end-all. This is also fragile, because absolute decentralization may be neither possible nor desirable.
Among the recent criticisms of web3, Marlinspike’s post My First Impressions of Web3 got the most notice because he pointed out real issues in the current design of many web3 applications (plus, he pulled a sneaky trick: he wrote in a balanced tone). His primary point is that it is currently difficult to run a full blockchain node and therefore most web3 applications (including household names like MetaMask, OpenSea and Axie Infinity and all the biggest DeFi apps), rely on a few third party indexers (namely Alchemy, Infura and Quicknode) to interact with the blockchain rather than run nodes themselves. Just a couple of centralized providers are potentially single points of failure for the vast majority of supposedly decentralized apps.
What is most striking about this criticism is not that it’s true, but that it’s well understood within the developer community and has been for years (as can be seen in this article from 2018, which makes many of the same points). That this is still an issue (and that Alchemy just did a raise at more than a $10 billion valuation) is telling.
For one thing, it may be very difficult to fix. Running a blockchain node requires expensive equipment and dedicated IT staff and, because the blockchain is in constant flux, at any given millisecond a node may not have the most current version of “state.” If you’re running an app that needs to continually verify state, managing your own node may introduce unacceptable latency. These challenges will only increase as the number of transactions scales and as more applications want to build on multiple blockchains, both of which require running even more nodes. The Alchemy valuation underscores that many teams, which often start small and struggle even to hire enough web3 software developers, will continue to opt for blockchain indexers rather than staff up to run their own nodes.
More important is that “absolute decentralization” may not be desirable in all cases. Many developers and consumers just want services to work, they don’t necessarily care about decentralization for decentralization’s sake. Let’s focus on a much simpler area of centralization than blockchain indexers: custodial services. Most mainstream consumers are probably going to want custodial solutions to manage their crypto assets, particularly as it becomes possible to participate in DeFi or NFTs with custodial wallets. Coinbase has ~90MM customers and these people aren’t just converting fiat to crypto and then moving it all to a non-custodial wallet; as of its last quarterly report it had assets of $278 billion on the platform (which excludes the Coinbase Wallet). More broadly, last year supposedly $14 trillion traded on centralized exchanges vs. $1 trillion on decentralized exchanges. Contrary to the “not your keys, not your crypto” ethos of crypto maxis, it is likely that most mainstream consumers are not going to want the risk or hassle of managing their own keys.
The reality is that there will likely be significant centralized players in the web3 value chain for the foreseeable future.
So, if decentralization = distributed ownership = redistribution is not the best narrative, what is? Crypto needs to articulate how it can create value for a lot of people, not just redistribute it. But first off: can it?
What Does it Mean for a Technology to “Create Value”?
The answer might seem obvious: a technology (or suite of technologies) creates value if its economic benefits exceed its economic costs. It must enable something new, better or cheaper than other technologies, and these benefits must outweigh producers’ costs to use the technology and consumers’ costs to adopt it (including the opportunity cost of using some other solution, the cost to abandon existing solutions and direct implementation costs). Hence the most obvious set of equations ever:
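Spelled out in symbols (a rough sketch of the idea, not a precise model):

```latex
\text{Value created} \;=\; \underbrace{\text{Economic benefits}}_{\text{new / better / cheaper}}
\;-\; \underbrace{\big(\text{Producer costs} + \text{Consumer adoption costs}\big)}_{\text{incl. opportunity, switching and implementation costs}}
```

```latex
\text{A technology creates value when } \text{Value created} > 0
```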
It’s important to note what is excluded from these equations.
They don’t require that value is equally distributed between producers and consumers or, for that matter, among consumers.
A technology also need not visibly “solve a problem” to create value, because sometimes the problem it solves is far downstream of where the technology is implemented, or it enables the creation of something new that no one knew they needed.
They also don’t require consumer-facing use cases; sometimes the most valuable applications never touch consumers directly.
How the Internet Created Value: Unbundling Data from Infrastructure
The best way to explore how web3 may create value, and how broadly, is to draw a parallel with “the Internet” (or what we could collectively call web1/web2). Clearly the Internet has created tremendous economic value for billions of people.
There are the obvious consumer Internet applications, literally staring us in the face. There are also the less obvious (and largely non-consumer facing) ways that the Internet has affected almost every “non-tech” industry, or what you could call digital transformation.
The former category of consumer applications has created trillions of dollars of value (as measured by market cap), plus there’s the unmeasured but massive consumer surplus that consumers get but don’t pay for (Packy’s “dead space” again).
The latter category of (mostly) non-consumer facing benefits has probably created even more value. According to IDC, aggregate private and public spending on digital transformation will approach $7 trillion by 2023. Is there a positive return on this spend or have these institutions been forced to make these investments for competitive reasons? Who knows. If we split the difference, it’s a blunt proxy for value creation. On top of that is the enormous and probably unmeasurable value that has been created for consumers by greater competition and price discovery.
What is it about the Internet that enabled this value creation? The Internet comprises many, ever-evolving, technologies. But if you were to cite one transformative technology, it is TCP/IP, the protocols that underlie packet switching. TCP/IP breaks all information down into packets and provides a mechanism for those packets to recombine correctly at the intended recipient, even if they arrive in the wrong order and take different physical paths.
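A toy sketch in Python (illustrative only; real TCP adds acknowledgements, retransmission and flow control) captures the core mechanism: number the chunks, let them travel independently, reassemble by sequence number:

```python
import random

def packetize(message: str, size: int = 4):
    """Split a message into (sequence_number, chunk) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Recombine packets in order, regardless of arrival order."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = packetize("unbundle data from infrastructure")
random.shuffle(packets)  # packets may take different physical paths and arrive out of order
assert reassemble(packets) == "unbundle data from infrastructure"
```

Because no dedicated circuit is required, any infrastructure that can carry a packet can carry part of any conversation.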
Why was packet switching such a big deal? Digitization set the cost to store and transport information on a path to asymptotically approach zero. The print collection of the Library of Congress is reportedly equivalent to ~200 terabytes. It has a staff of over 3,000 and an annual budget of $800 million. And while not all of those people are maintaining buildings or re-stocking shelves, some non-trivial percentage are supporting the physical collection. By contrast, the current cost of 200 TB of external hard drives is less than $5,000. Data itself is something close to free. What makes data expensive is when there are gatekeepers that have sufficient market power to impede the flow of data and extract economic rent.
Prior to the adoption of TCP/IP in 1983, telecommunications providers were the gatekeepers in data communications, because they needed to allocate expensive and scarce bandwidth and dedicated circuits, and charged accordingly. Packet switching broke this logjam by unbundling information from infrastructure. This unleashed a torrent of innovation in hardware, software and physical infrastructure, leading to the connected, fast, mobile, rich media world we all take for granted.
Almost all of the value created on the Internet happened because TCP/IP reduced the friction of moving data around.
In 1983 (or 1993 or even 2003), you probably couldn’t have predicted many of the most valuable use cases that would ultimately emerge far downstream from TCP/IP. It is notoriously hard to predict use cases because they are emergent phenomena that result from the complex interaction of new technologies, ecosystems, business models and consumer behaviors. But what you could foresee were the broad outlines of the economic benefits of TCP/IP.
By unbundling information from infrastructure, packet switching made it orders of magnitude cheaper to obtain, use and distribute data/information. (Cloud-based storage and compute would eventually make it even cheaper.) It also made it possible to deliver information anywhere in the networked world, breaking down geographical boundaries. It blurred traditional boundaries between industries too, because so many businesses became some variant of moving bits back and forth. The advent of always-on fixed and wireless broadband networks, greater upstream bandwidth and increasingly powerful mobile devices also led to the development of networks of unprecedented scale.
All of that led to lower entry barriers (and greater competition), businesses of larger scale and scope, massive networks (with massive network effects), elimination of intermediaries, more efficient business processes, improved consumer price discovery and an explosion of cheap or free content. These dynamics ultimately yielded lower prices and better experiences for consumers, creating tremendous value for billions of people.
How Crypto Creates Value: Unbundling Data from Applications
The most significant and far-reaching effects of crypto are likely to come from open data.
Web2: Many Siloed Networks
TCP/IP broke the data communications logjam at the physical network layer, but that bottleneck has migrated up the stack to the application layer.
Web2 applications are often relatively simple or even open-source code written on top of proprietary data. This data is proprietary, by default, because the Internet as we know it today is predominantly a client-server architecture. It didn’t start this way. As the Internet moved toward mass adoption, it proved impractical for most consumers to operate their own servers due to technical complexity, asymmetric consumer bandwidth (i.e., cable and DSL connections that provided far more downstream than upstream throughput) and the advent of dynamic IP addresses. It also became more lucrative for commercial enterprises to control proprietary servers and data.
Today, almost all data flow online is mediated by servers. When users (clients) login to a website, they enter an ID and password that are authenticated by a server. Then all the data they input or generate on the site is associated with their IDs and stored on servers. That’s why data is proprietary, by default.
Applications use this proprietary data to enhance the utility of their services, monetize their platforms (usually through advertising) and increase consumer lock in/switching costs. The source of their moats and value is their data. That’s why many of these applications gather so much information about users. It is also why they guard their data so closely and strictly limit what data can flow beyond the boundaries of their networks.
For web2 applications, the source of their value is their data.
I just casually threw out the word networks, but what makes these applications distinct networks? A network simply describes the relationships of interconnected nodes (things or people). For information networks, the boundaries of the network depend on context, defined by what information can pass between nodes. (For instance, a Twitter user might embed a YouTube video in a tweet, but since Twitter users can’t access the full array of YouTube content without leaving Twitter and accessing the YouTube domain, Twitter and YouTube are usually considered different networks.) The web as we know it comprises many such independent networks.
Web3: One Big Network
Public blockchains break the data logjam that now sits at the application layer.
To see why, it’s helpful to revisit what a public blockchain is and how it works. At inception, the Internet was an open, global, peer-to-peer communications network. A public blockchain, by comparison, is an open, global, peer-to-peer communications network with an open, consensus database. The Internet is “stateless” (it doesn’t have a database); blockchains are “stateful” (they do).
Turns out that maintaining a consensus database makes a big difference. For crypto, the analogy to the invention of TCP/IP was Satoshi Nakamoto’s bitcoin white paper. Nakamoto’s stroke of genius was in figuring out how a network anyone can join could also agree on a database without a trusted central (risk-assuming, guarantee-providing) party. He/she/they recognized that there must be a way for peers to send data to each other securely (cryptographically-secured public/private key pairs); a mechanism to enable everyone to agree on the consensus state as well as prevent bad actors from corrupting it (consensus algorithms and cryptographic hash functions); and an incentive for participants to maintain and update the database (tokens). Vitalik Buterin and his Ethereum co-founders generalized this concept further, creating a consensus database not just of debits and credits, but of any type of computation.
Just as most web2 applications are closed, by default, public blockchains are open, by default. The blockchain can only reach consensus if every node can audit the entire history of transactions — therefore all transactions must be public. On smart contract blockchains, such as Ethereum, each full node must execute the code in each smart contract — therefore all code must be public. And because all transactions must be initiated by users (with no server verifying the authenticity of communications), all user data must be self-authenticated (signed by a private key). This doesn’t mean that all user data is open, but it does mean that users will have the ability to control their own data and therefore will be able to provide selective access to that data (in a cryptographically-secure way).
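None of the real machinery (consensus algorithms, proof-of-work or proof-of-stake, signature schemes) fits in a few lines, but the structural openness can be sketched in Python: every block commits to a hash of its predecessor, so any node can audit the entire public history, and tampering is immediately detectable:

```python
import hashlib, json

def block_hash(block: dict) -> str:
    """Deterministic hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    """Each new block commits to the hash of the previous one."""
    prev = block_hash(chain[-1]) if chain else "genesis"
    chain.append({"prev_hash": prev, "transactions": transactions})

def audit(chain: list) -> bool:
    """Any node can verify the entire history from public data alone."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
assert audit(chain)
chain[0]["transactions"][0]["amount"] = 500  # tamper with history
assert not audit(chain)  # every later hash link now breaks
```

The point is that auditability requires publicity: the ledger only works because every node can see, and re-verify, everything.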
In web3, unlike in web2, data is structurally unmediated by central servers, making it functionally much more like one giant network — which will further reduce the friction of moving data around.
As described above, the foundational value unlock of the Internet was cheap information. However, all this value has been siloed within discrete networks. The great value unlock of web3 is that, by design, it avoids the gatekeepers who currently prevent open data flow. Removing this friction will mean even cheaper information and even larger network effects.
One more angle to put this idea in perspective. TCP/IP was originally created by DARPA (the U.S. Defense Advanced Research Projects Agency), in part to enable communications even if nuclear war destroyed part of the physical telecommunications infrastructure. Its implications are obviously much broader. Bitcoin was originally created as a way to send electronic cash without an intermediary. It too could have much broader implications.
What are the Practical Implications of Open Data?
Conceptually, the idea of free(r) flowing data on “one network” sounds good, but what does that mean in practice? It is such a fundamental change that it is hard to comprehend all the implications. How do applications establish competitive moats when there is no consumer lock in? What is the basis of competition — and, for that matter, what is corporate strategy and what is antitrust — if code (perhaps including pre-programmed competitive responses) is publicly accessible? It will take a long time to figure out these kinds of questions. But we can already start to sketch out some of the economic benefits of greater data access and transparency. Each of these could warrant its own essay:
Elimination of intermediaries
Rent-seeking intermediaries exist for many reasons. Sometimes, for instance, there is insufficient competition in one part of the supply chain because of high entry barriers or regulation. Often, however, there are intermediaries because transacting requires specialized knowledge (technical, legal or regulatory), there is information asymmetry or they assume risk. On one big network, with open smart contracts, each of these diminishes or goes away. Since smart contracts are self-executing, much of the specialized knowledge is embedded in the smart contract itself. When information is publicly accessible, there is far less asymmetry to exploit. And when smart contracts are auditable on a blockchain, it may no longer be necessary for a third party to assume risk.
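As a concrete (and deliberately simplified) illustration, here is a toy escrow in Python standing in for a smart contract: the rules a specialist intermediary would normally apply are embedded in code, and no third party ever holds the funds:

```python
class Escrow:
    """Toy self-executing escrow: the contract's rules, not a broker,
    decide when funds move. Illustrative sketch, not a real contract."""
    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.deposited = False
        self.balances = {buyer: 0, seller: 0}

    def deposit(self, sender: str, amount: int) -> None:
        # only the buyer's exact payment is accepted into escrow
        if sender == self.buyer and amount == self.amount:
            self.deposited = True

    def confirm_delivery(self, sender: str) -> None:
        # only the buyer can release, and only if funds were deposited
        if sender == self.buyer and self.deposited:
            self.balances[self.seller] += self.amount
            self.deposited = False

escrow = Escrow("alice", "bob", 10)
escrow.deposit("alice", 10)
escrow.confirm_delivery("alice")
assert escrow.balances["bob"] == 10
```

Because the logic is public and auditable, neither party needs to trust an escrow agent to apply it faithfully.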
Much of the traditional financial industry, many legal services, wholesalers, brokers and agents will be radically affected or rendered obsolete when formerly proprietary information is publicly accessible and smart contracts execute transactions. This is, of course, the whole point of DeFi. (Beyond banks, consider the potential impact on credit card providers, trust companies, music performance rights organizations, title companies, escrow services, digital ad networks and factoring providers.) Removing intermediaries from value chains doesn’t sound as sexy as the decline of all centralized institutions, but the value of this alone is likely measured in the hundreds of billions of dollars.
Code composability
Code composability means that applications can be built on top of other applications, without permission, simply by pointing to existing smart contracts. Composability is common in the world of DeFi, as developers continuously launch new projects that utilize other applications to offer better yield (these are referred to as “money legos”). See here for an excellent description by Linda Xie. And while cross-chain composability is not currently possible (i.e., smart contracts on a rollup like, say, Arbitrum, accessing a smart contract written on Ethereum), there are technical solutions in the works, especially for Ethereum Layer 2s and other Layer 1s that are compatible with the Ethereum Virtual Machine, like Avalanche. Code composability promises to dramatically speed the pace and efficiency of software development and innovation.
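The “money legos” pattern can be sketched in Python (all names here are hypothetical; real aggregators call on-chain contracts): a new application composes existing, already-deployed ones simply by calling them:

```python
class LendingPool:
    """Stand-in for an existing, publicly callable lending contract."""
    def __init__(self, rate: float):
        self.rate = rate

    def deposit(self, amount: float) -> float:
        return amount * (1 + self.rate)  # principal plus yield

class YieldAggregator:
    """A new app built on top, without permission, by pointing at existing pools."""
    def __init__(self, pools):
        self.pools = pools

    def best_deposit(self, amount: float) -> float:
        best = max(self.pools, key=lambda p: p.rate)
        return best.deposit(amount)

pools = [LendingPool(0.02), LendingPool(0.05), LendingPool(0.03)]
aggregator = YieldAggregator(pools)
assert round(aggregator.best_deposit(100), 6) == 105.0
```

The aggregator wrote no lending logic of its own; it just routed to whichever existing contract offered the best terms.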
Data composability
Closely related to the idea of code composability is the concept of data composability. Code composability refers to accessible code; data composability refers to accessible databases. Here’s a highly approachable description of the implications of data composability by Danny Zuckerman, co-founder of Ceramic (an open, decentralized network to store, modify and access non-financial web3 data). As he explains, truly open databases also have the potential to radically lower the entry barriers to create new or improve existing applications.
Data self-sovereignty
As described before, the idea that users and creators will own their own data — data self-sovereignty — is the benefit that is most closely associated with web3 and most discussed. Because users must self-authenticate the data they input on the client side, in web3 user data is predisposed to be owned and controlled by users themselves.
How would this work? In theory, each user will have one or more unique decentralized identifier(s) (DID) (a W3C specification), that can be associated with any kind of data you can envision: social posts, photos and videos, likes, playlists, purchase behavior, professional and academic credentials, government IDs, memberships, financial information, calendars, medical information, contact lists, etc. All this data may be stored in different places, including decentralized file storage networks like IPFS, cloud storage, locally or even on multiple blockchains. But it would be possible to manage all these “DID documents” from one UX, such as an Ethereum-compatible wallet, alongside financial assets like cryptocurrencies and NFTs.
Then users would be able to grant read and/or write access to any of the data in this virtual personal “data repository” to any application they choose. This is sometimes referred to as data portability, but that’s a bit of a misnomer. Moving from app to app will not require porting anything, but rather granting permission to (sets of records within) the same pool of data. A good illustration of this concept comes from Ruben Verborgh. Eventually, consumers would even be able to prove information without revealing it, using zero-knowledge proofs (e.g., verifying a certain income threshold to take out a mortgage, without disclosing the specific amount).
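The permission-grant model can be sketched in Python (with hypothetical names; real implementations use DIDs, verifiable credentials and cryptographic access control): the data never moves, and apps get scoped read access solely at the user's discretion:

```python
class DataRepository:
    """Toy user-controlled data store keyed by a decentralized identifier (DID)."""
    def __init__(self, did: str):
        self.did = did
        self.records = {}   # e.g. "medical", "playlists", "purchases"
        self.grants = {}    # app -> set of record keys it may read

    def write(self, key: str, value) -> None:
        self.records[key] = value

    def grant(self, app: str, keys: set) -> None:
        self.grants.setdefault(app, set()).update(keys)

    def revoke(self, app: str) -> None:
        self.grants.pop(app, None)

    def read(self, app: str, key: str):
        # access is enforced by the repository, not by the app
        if key not in self.grants.get(app, set()):
            raise PermissionError(f"{app} has no access to '{key}'")
        return self.records[key]

repo = DataRepository("did:example:alice")
repo.write("playlists", ["jazz", "ambient"])
repo.write("medical", {"allergies": ["penicillin"]})
repo.grant("musicapp", {"playlists"})
assert repo.read("musicapp", "playlists") == ["jazz", "ambient"]
try:
    repo.read("musicapp", "medical")  # never granted
except PermissionError:
    pass
```

Switching apps then means revoking one grant and issuing another; the underlying data stays put, under the user's keys.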
There are many important potential benefits of data self-sovereignty:
It could resolve the consumer privacy problem. Like a lot of concepts in this section, it is impossible to do it justice here. Suffice it to say that it is abundantly clear that consumers want more control over their data. (That has been evident from the impact of Apple’s App Tracking Transparency initiative on mobile advertising and reports that as many as 96% of users opt out of tracking on iOS.) There are several ways to improve consumer privacy, but the most secure approach is for users to have complete and granular control of their data, with end-to-end encryption.
Less consumer lock-in, and therefore a redistribution of power and value from centralized platforms to users — the most often-cited benefit of web3. As described above, I think this benefit has become too prominent, because it’s only one of many, but it’s still an important one. If consumers control their own data, the switching costs of moving between applications should plummet. This will force application providers to compete on the basis of consumer loyalty, not consumer lock-in, something I discussed in detail here.
It will enable unimaginable new consumer applications. Data composability refers to the potential innovation when applications can access existing databases rather than build their own. Flipping this on its head: what kinds of consumer applications will be available when it is possible for individual consumers to selectively share their personal information? (A lot of this may sound highly invasive, but the idea here is that this data would be shared solely at the consumer’s discretion.) For instance, let’s say you aren’t feeling well. Rather than go to WebMD (where a crude symptom checker almost always leaves you convinced your hangnail is terminal), imagine that you were able to provide controlled (including anonymized) access to your medical history — and maybe even the feed of biometric stats from your wearable. What kind of diagnostic precision, treatment and/or preventive steps would be possible? Similarly, actuarial tables are a relatively blunt instrument — what if you could obtain life or driver’s insurance based on your actual behaviors? Or how might Netflix’s recommendations improve if it could see all the TV shows you watch on Disney+ and HBO Max and, for that matter, everything you listen to on Spotify? What if you were sent targeted offers that reflected all your purchase behavior? Or imagine the LendingTree concept — pitting service providers against each other — with more granular data, but for a far wider range of services? It is not clear how valuable any one individual’s data is. But data self-sovereignty also raises the prospect of consumers being able to monetize their own data, the as-yet unrealized promise of personal data exchanges.
It enables better creator economics. This is another familiar idea, namely that giving individual creators the ability to own their own work will spur a “creative renaissance,” as Li Jin describes here.
Data persistence
One of the byproducts of data self-sovereignty is data persistence, namely that consumer data will, by definition, be consistent from application to application (because every DApp would be accessing components of the same virtual data repository). This also has important practical implications:
One obvious application is single sign-on/identity. Navigating the web today requires juggling passwords or consenting to allow Google or Facebook to handle authentication for you. If instead you control a personal data repository, this will enable you to effectively sign in once, with complete control of what is exposed and to whom. This would be vastly more efficient than memorizing umpteen different passwords and re-entering the same information umpteen times.
It will greatly enhance the utility of digital assets. Data persistence means that all of your digital assets are discoverable and consistent anywhere you go. As a result, it will be possible for avatars, art, collectibles, digital clothing, in-game assets (weapons, vehicles, skins, currency) or digital proof of physical ownership to have utility in any digital environment that chooses to give them utility. That’s a precondition to a vibrant, open metaverse. Why would digital platforms enable this cross-application interoperability? They may have to competitively. For example, in the future, new games may be highly incentivized to recognize (or remix) the utility of other games’ assets as a way of attracting new players. (Looking forward to the first P2E vampire attack…)
Verifiable credentials and reputation management. As mentioned, these personal data repositories could hold any conceivable digital asset, including academic or professional credentials and perhaps even some measure of professional or personal reputation or work product. This could make all kinds of transactions more efficient, even if it sounds like a Black Mirror episode. Think of it as an Uber or eBay rating that travels with you. It could also change the nature of discourse online — rather than infer reputation on a social network from follower counts and mutual friends or puzzle over how many Yelp or Amazon reviews are legitimate, it might be possible to see measures of reputation directly. It will also make it far easier for unknown parties to coordinate at scale, as discussed next.
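The single sign-on idea above can be sketched as a challenge-response flow. Real wallets would sign the challenge with an asymmetric key (e.g. ECDSA over secp256k1) and the site would verify the signature against the public key in the user’s DID document; the HMAC below is a simplified stand-in used purely to show the shape of the flow:

```python
import hashlib
import hmac
import secrets

# Toy challenge-response sketch of wallet-based sign-on. A shared HMAC
# key stands in for a real wallet's asymmetric keypair, purely to show
# the flow: no password, no personal data exposed, replay-resistant.

wallet_key = secrets.token_bytes(32)  # stands in for the wallet's key

def sign_in_challenge() -> bytes:
    """The site issues a fresh one-time nonce to prevent replay attacks."""
    return secrets.token_bytes(16)

def wallet_sign(key: bytes, challenge: bytes) -> bytes:
    """The wallet signs the challenge; nothing else is revealed."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def site_verify(key: bytes, challenge: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(wallet_sign(key, challenge), signature)

nonce = sign_in_challenge()
sig = wallet_sign(wallet_key, nonce)
assert site_verify(wallet_key, nonce, sig)  # signed in, no password stored
```

Because the site stores no password and the user re-enters nothing, the same flow works identically on every app that accepts the wallet.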
Coordination
A profound topic deserves a profound starting point. Why have humans taken over the planet? According to Yuval Harari, it’s due to our ability to coordinate at scale. But people will only contribute their resources toward a shared goal when they trust the other people with whom they are coordinating. How much they must trust them — what you could call minimum viable trust, or MVT — is correlated to the value of the resources they’re committing to the collective effort.
Dunbar’s number, named after anthropologist Robin Dunbar, describes the maximum number of stable relationships that humans can have based on the size of our brains, and is about 150. This creates a problem. How can hundreds, thousands, millions or even billions of people trust each other? When the stakes are low, our trust may exceed MVT based solely on our belief that others share our beliefs, or what Harari calls “shared myths” (such as religion, patriotism, values, a shared social contract, etc.). But as the stakes go up, the costs of surpassing MVT also go up. These costs could include research (such as discovering the reputation of people or organizations); legal contracts (which require an entire apparatus to support them, like law firms and courts); and laws (which are ultimately supported by federal, state and local governments, the judicial system, police, prisons and the military). And even then, sometimes these measures aren’t sufficient to surpass MVT.
The cost to secure trust and foregone coordination from insufficient trust are both inefficient allocations of societal resources. Economists call this “deadweight loss.”
That’s why the combination of publicly verifiable credentials and reputation, open smart contracts (which provide transparency for all parties and are self-executing) and tokens (which align incentives) could be so important and socially beneficial: they make it easier to secure sufficient trust between parties, lowering the cost of coordination and enabling coordination that might otherwise have not occurred.
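As a toy illustration of “self-executing,” consider an escrow where payout happens automatically once a condition both parties can verify is met, so no intermediary can withhold it. Real smart contracts run on-chain (e.g. written in Solidity); this Python sketch, with invented names, only shows the logic:

```python
# Toy sketch of a self-executing agreement: funds release automatically
# once a publicly verifiable condition is met. Real smart contracts run
# on-chain; this stands in to show why no trusted intermediary is needed.

class Escrow:
    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False
        self.paid_out = False

    def confirm_delivery(self) -> None:
        self.delivered = True
        self._settle()  # self-executing: payout cannot be withheld

    def _settle(self) -> None:
        # The rule is transparent to both parties before they commit
        if self.delivered and not self.paid_out:
            self.paid_out = True

deal = Escrow(buyer="alice", seller="bob", amount=100)
deal.confirm_delivery()
assert deal.paid_out  # no third party decided this; the rule did
```

The trust-lowering property is that both parties can read the settlement rule before committing resources, which is exactly the transparency the paragraph above describes.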
The most visible example of this concept is the DAO. Many current DAOs are relatively rudimentary, sometimes dismissed as “a group chat with a bank account.” Many are also still trying to figure out the right governance, decision rights and way to sustain member engagement. But they also could represent a path toward far more efficient labor utilization. Today, there is a lot of friction and cost to onboard new employees (by one estimate, ~$600 billion per year), spent on advertising, internal and external recruiters, pre-hiring assessment, background checks and preparing employment contracts. There are also high costs for prospective employees (time spent interviewing, etc.). These high onboarding costs also mean there are high switching costs, because if employee and employer separate, both will have to go through the whole rigamarole again. As a result, employers are more likely to retain subpar employees and employees are more likely to stick with uninspiring work. Easier reputation discovery, combined with the cultural acceptance of more transitory work, could result in much more fluid and efficient labor markets.
But the coordination enabled by blockchains could extend well beyond an alternative to traditional corporations. It could enable much more efficient sharing of resources, like compute (which is effectively what PoW blockchains like Bitcoin and Ethereum are), storage (Filecoin, Arweave) or bandwidth (Helium). It could also enable coordination on big societal problems, like scientific research, drug discovery, political actions or climate change.
Financialization/securitization
Being able to verify the authenticity, ownership and provenance of any digital asset (including digital representations of physical assets) will enable any digital asset to be “financialized.” Some may understandably balk at this. The term financialization is often equated with activities that purely move money around, but don’t actually create anything. However, there are numerous potential efficiencies from the “financialization of everything.”
Efficient markets are pro-social because they enable the efficient allocation of resources. When information flows freely about the authenticity, ownership and provenance of assets, markets are likely to be more liquid and there will be easier price discovery, lower transactional costs and less risk. This makes it much more likely that assets trade to a buyer who can use them most productively or who ascribes the most utility to them. Financial laws and regulations exist both to protect investors from fraud and to make sure there is sufficient trust in the capital markets to enable the efficient flow of capital. Greater transparency would have a similar effect.
Free information flow also means that new financial tools are possible for those assets. If you own a valuable NFT, because your ownership is easily transferable and publicly verifiable, you can sell, trade, rent, lease, collateralize or fractionalize that NFT, with a potentially unlimited number of service providers competing for your business. (Services like NFTfi and Arcade enable collateralized lending of NFTs today.) Over time, it will be possible to do the same for many physical assets too. For instance, imagine a service that verifies the provenance of wine and stores and insures it. Let’s also assume it issues an NFT for each bottle it stores (and the owner of the NFT can exchange it for the physical bottle at any time). This would make it easy to sell into a liquid market (sorry), trade, collateralize or even fractionalize ownership of that bottle, things that are costly or impossible to do today. That might sound far-fetched, but Stockx is already doing something similar with sneakers.
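As a toy illustration of fractionalization, imagine the wine-bottle NFT above split into fungible shares tracked in a simple ledger. In practice this would live in a smart contract; all names here are invented:

```python
from collections import Counter

# Hypothetical sketch of fractionalizing one asset (the wine-bottle NFT
# above) into fungible shares tracked in a simple ledger. A real
# implementation would be an on-chain contract; names are invented.

class FractionalAsset:
    def __init__(self, owner: str, total_shares: int):
        self.total_shares = total_shares
        self.ledger = Counter({owner: total_shares})

    def transfer(self, sender: str, receiver: str, shares: int) -> None:
        if self.ledger[sender] < shares:
            raise ValueError("insufficient shares")
        self.ledger[sender] -= shares
        self.ledger[receiver] += shares

    def ownership(self, holder: str) -> float:
        """Holder's fractional ownership of the underlying bottle."""
        return self.ledger[holder] / self.total_shares

bottle = FractionalAsset(owner="alice", total_shares=1000)
bottle.transfer("alice", "bob", 250)   # bob buys a quarter of the bottle
assert bottle.ownership("bob") == 0.25
assert bottle.ownership("alice") == 0.75
```

Because every share balance is publicly verifiable, any number of marketplaces could compete to trade, lend against or further subdivide the same asset.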
This will also open up new sources of financing. This is also already happening today. Think of Kickstarter, but with liquid equity exchanged instead of goods and much greater transparency. Many DAOs have pooled capital to invest in or even buy entire businesses. Over time, this type of funding could upend traditional private equity and venture capital, especially if the rules around accredited investors change. On Mirror, it is already possible to crowdsource journalism or even an entire novel and other forms of media aren’t far behind. Hollywood DAO and NFT Studios, for instance, seek to enable DAO financing for film and TV projects.
These potential benefits of web3 — elimination of intermediaries, data composability, code composability, data self-sovereignty, data persistence, coordination and financialization — are all made possible by more open data. Similar to the downstream effects of TCP/IP, they could lower entry barriers, make business processes more efficient, spur innovation and enable new consumer features and applications. That’s how crypto creates value.
The Elephant in the Room: How Can Web3 be Efficient?
All this may seem naïve or quixotic. Web3 has a long way to go. The personal data repository I mentioned above doesn’t even exist yet. Wallets are not even close to ready for the masses. It is very difficult to transact across chains. Ethereum is prohibitively expensive for all but the largest transactions. But, most troubling, I used the word efficient a lot in the prior section. Many may reject the idea that web3 could be more efficient for anything.
Let’s put aside the inefficiency of proof-of-work consensus algorithms and assume that Ethereum will successfully make the transition to more efficient proof-of-stake (PoS) and more applications will be built on cheaper layer 2s or competing smart contract PoS layer 1s. Even PoS decentralized networks are still inherently less efficient than centralized ones. As mentioned, on Ethereum and other Ethereum Virtual Machine (EVM)-compatible blockchains, the entire history of all transactions must be stored and every smart contract must be executed on every full node. That means thousands of times the storage and computational costs, not to mention the network costs to sync up all these transactions between nodes. How can this possibly be efficient?
An “inefficient” technology will be adopted if its benefits outweigh this inefficiency.
Going back to my Nobel-prize-worthy equations above, technologies create value — and therefore are adopted — when their benefits exceed their costs. Often, less efficient technologies prevail when their other benefits are sufficiently large.
Take TCP/IP again. Circuit switched networks are efficient and reliable because they send the signal once over dedicated bandwidth. By contrast, TCP/IP breaks all transmissions into packets, adding packet header overhead; it doesn’t necessarily use the shortest route; and it re-sends packets repeatedly when it doesn’t receive confirmation of receipt. It can be less efficient and it is certainly less reliable. But it also made the Internet possible, so there’s that.
There are many other examples when “inferior” technology was adopted because its other benefits exceeded this cost (VHS vs. Betamax; streaming video over IP vs. traditional pay TV; internal combustion vehicles vs. electric; etc.).
So the question is not whether public blockchains are inherently inefficient relative to a client-server architecture. The question is whether the benefits of an open Internet outweigh this inefficiency.
Another Elephant: How Important is Decentralization?
I should directly address a question I implicitly raised earlier: can web3 create value even in the absence of absolute decentralization?
I think the simple answer is yes, for the reasons cited above: the chief value unlock of web3 is not decentralization per se, but the ability of decentralization to enable the free flow of data. Try this thought experiment: imagine if all your favorite web2 platforms were completely open. What if Facebook opened its social graph (albeit on an anonymized basis)? What if Twitter users owned and controlled all of their posts, likes and friend lists and all Fortnite gamers owned their skins, gliders and V-bucks, in some sort of standardized formats? What if any developer could build on any platform or leverage its code base? Many of the benefits cited before would be possible without any decentralization at all.
Some minimum level of decentralization is necessary, of course — such as a minimum number of miners or validators on a blockchain — to achieve sufficient security. And decentralization as an ethos is critical, because it holds everyone accountable to openness. But, in practice, it probably does not matter if most apps use Alchemy to index blockchains or if Coinbase manages most users’ keys — what matters is that any centralized entities do not throttle the flow of data. It is more important to be an open-data maxi than a decentralization maxi.
From The Ownership Economy to The Open Web
Above I argued that the prevailing pro-web3 narrative has a lot of drawbacks. What’s a better approach? As I mentioned, no one is empowered to decide; it’s the collective decision of the broader community. It’s also presumptuous to claim I have the answer. But since it isn’t great form to criticize something without proposing an alternative, here’s a modest proposal. Web3 is The Open Web: an Internet of open data that is more efficient and innovative, controlled by users.
Web3 is the Natural Evolution of the Internet
There is always a tension between centralized and decentralized systems in technology (which mirrors the tension between order and disorder in complex adaptive systems more generally). Centralization optimizes for control and risk management at the expense of experimentation and innovation; decentralization optimizes for experimentation and innovation at the expense of order.
When the needle swings too far in either direction, the tension builds. Web2 has created tremendous value for people, but the needle has swung too far toward centralization. It is societally suboptimal for data to be controlled by too few hands. With web3, the needle is swinging back.