Written by Jarrod Dicker, Jonathan Glick, Brian Flynn, Matt Stephenson and Tal Shachar
Implicit in the name ‘Web3’ is the expectation of replacement. In technology, versions are successors. When the new model comes out, we upgrade and the older one is replaced and forgotten.
Is this what we mean when we refer to Web3? Will it replace the set of principles, technologies, and behaviors that we’ve come to know as Web 2.0? (Is this even the way that phases of the Internet work?)
Many blockchain enthusiasts seem to suggest as much. As the term Web 2.0 has become closely associated with centralized social networks like Twitter and Facebook, and as those networks have become increasingly politically powerful -- with arbitrary, capricious or simply incompetent moderation policies not to mention potentially societally harmful business practices -- it is common to hear crypto folks imply that decentralized applications would do a better job. Community-owned networks would be fairer, in that the users who created the content and attracted new members would fully participate, as token holders, in the value they created. The distribution of content could be based on meaningful values like reputation and expertise, rather than gamified outrage and algorithmically rewarded hype. And governance, like moderation policies, would be driven by the entire community, rather than a small club of executives and their handpicked oversight committees made up of insiders looking to maximize their own wealth or security. The attractiveness of this superior model, it is proposed, will lure in key influencers, provide a better user experience, and propel a shift from the old social platforms to the new.
And yet, it’s difficult to see how that happens. If anything, it seems like crypto has made the ‘old’ centralized social networks, especially Twitter, more central than ever. How many influencers have used their followers to make an easy leap into crypto sales? How many important projects have used social media to find and recruit contributors? Where else would advocates for new coins and tokens state their cases and share their memes? Where else could we show off our CryptoPunks, Hashmasks and Bored Apes? And how could a Bitcoin day trader not keep his eyes glued to Elon’s Twitter feed over the past few months? Social media is so essential to the day-to-day reality of DeFi that it is hard to see where one ends and the other begins. And it is arguably precisely the gamified virality of centralized platforms that makes them so valuable to crypto marketing and community.
Given this dynamic, how realistic is such a transition? Or have we actually trapped ourselves deeper in a centralized system? And what is actually wrong with Web 2.0 anyway?
What is actually wrong with Web 2.0?
In 2004 “Web 2.0” was popularized at a conference that sought to ask the very same question: “What is wrong?” The organizers of the first Web 2.0 conference felt that the industry “had lost its way” after the dotcom bubble burst and needed an injection of confidence. In particular, there was a sense that the vertically integrated portals, like America Online and Yahoo, had created unsustainable monoliths that ‘evilly’ trapped users and their data, prevented creative innovation from smaller startups, and led to the bubble and bust. Web 2.0 at its outset was about understanding and emulating the ideals and design concepts of the open-source movement, but also the success of companies that were still expanding despite the drought, like Google and Amazon.
In those days, Google especially was seen as a role model, in the way that it sent searching users away to the best resources. Rather than try to seize 100% of the value, the virtuous approach was also the best for the ecosystem. The symbolically central concept was that of the ‘open API.’ Everyone should and would build on everyone else. In fact, one of the most important lessons gleaned from the surviving companies was that user-generated content and data could be a source of sustainable competitive advantage while improving the user experience and the web for all. Users would from here on be first-class contributors, no longer relegated to bottom-of-the-page comments sections, but rather the stars of these new open systems. Developers would vie to attract their usage and contributions by offering ever greater freedoms and flexibility. The best companies would be the best platforms, creating more value for the entire ecosystem.
But the catch was that this was never a sustainable long-term strategy, at least not for the purposes of promoting an open ecosystem. User-generated data is vulnerable to a family of sub-optimizations captured by Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure. If you’re Google building PageRank on natural weblinks, you’ll quickly generate SEO schemes that seek to game your measure. If you’re Amazon relying on user-generated reviews, you’ll soon be flooded with fakes. And if you’re Facebook building recommendation algorithms, you’ll soon be serving users content that entrenches their existing ideologies and filter bubbles.
These measure-based competitive advantages erode as companies enshrine them as metrics to be gamed and optimized against. The result is that these companies, naturally seeking to maintain an advantage, are forced to develop strong capabilities in user management and manipulation themselves. More charitably, it turns out that the data was mostly a means to reduce friction, improve experience and aggregate demand centrally, recreating most of the negatives of the old portal system, just with the friendly promise that everything is one click away. These successful companies in most cases weren’t true platforms but aggregators, locking in demand and exerting central control, though admittedly the fist wore a velvet glove of superior user experience, at least at first.
And this necessary lock-in is arguably the source of today’s ire. Whereas 2004's Web 2.0 was reckoning with marketplace failure, 2021's Web3 is reckoning with user failure. We are a community of people failing the marshmallow test, always optimizing for less friction, more dopamine, and shorter feedback cycles. We claim to want richer experiences, to support independent actors, to despise centralization, and to demand more meaningful connections and higher quality content. But then we unlock our phones. And so Google, Facebook, and Twitter’s success with data-based ad monetization and engagement optimized user experience is now the problem, not the solution.
As Web 2.0 companies began to deeply understand the powerful potential of their innovations — the game-ish reaction buttons, the follower graph, the algorithmic ‘newsfeed’ — they became adept at a clever stratagem. Rather than committing to perpetual openness, they could offer new user tools or developer APIs and encourage the community to use them to create. If the results were beneficial to the company, they might allow this creativity to flourish for a while, but soon enough, they would either charge the most successful creations for prominence or buy or build their own.
It was hard to blame them for this approach. It worked! Twitter, Facebook and Google were for-profit companies and the dynamic was not new for ‘platforms.’ Microsoft, the boogeyman of the open source community, had employed similar tactics in its heyday. But it was the development and explosive growth of the smartphone that made it so ferocious.
Mobile devices, with their app stores, emojis, cameras, notifications, and most importantly, constant tactile presence, were the perfect match for ‘Web 2.0’ social networks. Every new phone buyer was a potential heavy social user, and the dense relationship networks that quickly formed made it almost impossible for people to resist buying phones to access them.
This loop triggered a flood of new and highly engaged users unlike anything seen in the history of technology adoption. All over the world, people who recently had only limited access to computers were now checking their newsfeeds throughout the day. Every new family or friend each of them followed increased the value of the experience, the depth of their connection, and the virality of the network. Mobile and social became a second brain, “notifications” a sixth sense. So, even if content creators and app developers knew the ‘game’ was ‘rigged,’ how could they not take advantage of this massive opportunity? Maybe, they reasoned, if their brand or invention was the first, if it was the best, it might be one of the few that got so big so fast they’d achieve some kind of escape velocity from the gravity well of the social networks. After all, it had happened. Social apps or content with just the right combination of uniqueness, design genius, and luck had hit the jackpot and acquired millions of customers within hours. And so they poured their ideas, talent, and cash into these platforms, adding even more appealing media and interactivity, fostering even deeper engagement without any additional cost to the network. And despite the grumbles about unfair practices or addictive behaviors, all this just made the networks more essential. The flood of users kept pouring in.
We now live in a world of this flood. It’s a world overflowing with social activity, a world where every institution is dragged around by the currents of a new wildly unpredictable environment. Our children dream of growing up to be Instagram ‘influencers.’ Our pop charts are a direct output of the most popular backing tracks on TikTok. A Facebook-inspired political cult attacked the US Capitol. Even the social platforms are buffeted by the storms unleashed by their technology. Their efforts to assert control have little effect, other than making their position even more essential and central. So, it is in the context of this flooded world that the notion of Web3 as a successor paradigm has such alluring appeal. What, concretely, does Web3 promise to do differently?
Platform wealth and economic sharing: Today, creators & consumers contribute to a platform to gain social or economic status. But this status is rented (status on platform only, revenue shared directly with platform). In Web3 this status is owned. Not only is social reputation portable, but the relationship with IP created on platform can be owned and portable as well.
Content ownership and rights: Web3 introduces ownership as a process. As IP is put out on the web, it is minted on chain to show provenance and to give the creator control over its distribution and usage rights. This happens at the individual level rather than the platform level.
The happy accident of NFTs: it seems increasingly evident that certain objects on an open, interoperable world computer will be treated and valued like, well… real objects. This is not yet fully understood, but insofar as NFTs are unique to blockchain-style architectures this would represent quite an advantage.
Lies, hype and bots: Actors can still choose to create as many personalities, handles, keys, etc. as they’d like. However, the incentive for building reputation is to accumulate as much good faith on a single identity as possible. This will enable better positioning and recruiting into more lucrative and socially beneficial opportunities than those handles that lack reputation.
Governance, individual influence and moderation: As we discussed in previous articles, ownership is less about financial upside and more about social influence. In Web3, individuals are influential within the organizations whose tokens they hold, giving them the ability to drive decision making that will bring better health and effectiveness to the community.
Permissionless development and composability: Today’s platforms control what can and cannot be published on their networks, and they start every user from the same starting line. In Web3, development is unrestricted because permission is tied directly to an individual’s reputation. Every decision individuals make is tied to their identity, so it is their choice whether to carry those decisions with them throughout the web. Everything built on a protocol is foundational to what others can build on top of it. So instead of everyone acting independently and starting from scratch, everyone works in collaboration, building on the foundational legos laid down by the members before them.
New models of work and collaboration: The business models of Web3 encourage collaboration. In Web 2.0, revenue streams reward the output (advertising against what’s already published, subscriptions to a finished piece of work). In Web3, there is now a business model for the input: crowdfunding and social tokens are an investment in an idea before any output is generated, encouraging strong collaboration at the outset and rewarding all participants throughout the creative process.
This is a rich set of imagined capabilities that directly answers the most pernicious problems of Web 2.0 as it has crystallized nearly twenty years after its inception: ownership, governance, openness, incentives, reputation. In many ways, the solutions Web3 proposes are the direct opposite of those offered by social networks, a far ‘better deal’ for users, creators and developers, an approach intentionally weighted to reward the individual or team on the basis of her, his or their input, rather than be drowned amid ever-increasing control and returns to the owners and operators of networks. Which leads us to the question: If it’s better for more people, will Web3 replace Web 2.0? Why wouldn’t it?
There are three possible answers:
1) No -- Web3 is best understood as a sort of ‘economic’ extension of Web 2.0, especially of social media, and it will grow as Web 2.0 grows.
2) Sort of -- key aspects of Web 2.0 will be better performed by Web3 systems and there will be a symbiotic relationship between two parallel but similarly powerful models.
3) Yes -- over time, all of the key aspects of Web 2.0, including centralized social networks, will be replaced by Web3-style user-owned and governed protocols.
We can think of these being three horizons: Web3 as testing ground (1), as complement (2), or as critique (3). As for the first, it may be that Web3 is a sort of open lab of experimentation whose successes will eventually be absorbed by existing structures. This testing ground outcome emerges if the natural inefficiencies of, say, Web3 database structures don’t have any compensating advantages over the long term. The second outcome of complementarity occurs as aspects of Web3 thrive alongside, and even enhance the value of, Web 2.0. This seems to be the case with Twitter, which may be the greatest consumer onramp to Web3 and clearly plays a major role in the ecosystem. The third outcome, the most extreme, is probably best thought of as Web3 being an inherent critique of Web 2.0. Replacement will seem necessary insofar as Twitter, or Facebook, or Paypal’s success is thought to be Web3’s failure.
Perhaps one difference between today’s context and 2004’s -- when we roughly transitioned to the social/mobile Web 2.0 -- is that the large tech companies that have mastered their strategies are broadly and firmly entrenched, and it isn’t as clear how a newcomer can emulate or even improve on those strategies while maintaining defensibility. Take Twitter: for crypto, its ‘purpose’ today is to serve as a portal to scarcity games and NFT drops, spreading narratives and generating FOMO for Web3 markets. But what happens as the number of markets in Web3 exponentially increases? Web3 says it provides an escape, but it’s not certain whether that escape leads away from scarcity-driven financial games that capture attention or merely to a new fairground with more even rules. We were trained to game the system rather than to play fairly.
Right now, and for the foreseeable future, follower graphs on centralized social networks are a source of significant and enduring value. We see in every dimension of our society how efficiently that influence can be translated into financial, political and cultural gain. So to ask our question another way: Will there be a time when most users’ on-chain identities and reputations are more valuable to them than their centralized social network follower count and composition? The three answer options above represent different degrees of optimism.
It is easy, given the trajectory of this most recent era of tech, to be pessimistic. Many of the idealistic outcomes people argued were ‘inevitable’ in 2004 now look laughably pollyannaish. But our current situation, strangely both centralized and chaotic at once, is neither all bad nor intractable. The mobile and social transition that flooded — and still floods — billions of people into Web 2.0 is providing a historic opening for human creativity and connection. For all of the algorithmic manipulation and clever incentives of social networks, these are billions of human beings with free will. They aren’t pre-programmed robots. It is factual, not fanciful, to recognize that they will determine the future. But they won’t do it based on vague ideals of decentralization. Only if and to the extent that Web3 truly benefits their lives will they push past our legacy networks into a next phase.