How Web3’s ‘programmable commerce layer’ will transform the global economy

World Economic Forum, 28 November 2022. Originally published here. With Justin Banon, Jason Potts and Sinclair Davidson.

The world economy is in the early stages of a profound transition from an industrial to a digital economy.

The industrial revolution began in a seemingly unpromising corner of northwest Europe in the early 1800s. It substituted machine power for animal and human power and organized economic production around the factory system. Soon, it created the conditions to lift millions of humans from a subsistence economy into a world of abundance.

The digital economy began with similarly unpromising origins when Satoshi Nakamoto published his Bitcoin white paper to an obscure corner of the internet in late 2008. We now call this the origin of Web3 – with the first blockchain – but the revolution traces back decades, to the slow economic application of scientific and military technologies of digital communication. The first wave of innovation was in computers, cryptography and inter-networking – Web1.

By the late 1990s, so-called “e-commerce” emerged as new companies, which soon became global platforms, built technologies that enabled people to find products, services and each other through new digital markets. That was Web2, the dot-com age of social media and tech giants.

But the real arrival of the digital economy was not down to these advances in information and communications technologies. It came from a very different type of innovation: the manufacture of trust. And blockchains industrialize trust.

Industrial economies industrialized production by combining physical innovations, such as the steam engine, with institutional innovations, such as the factory, that organize people and machines for large-scale production. What the steam engine did for industry, the trust engine will do for society. The fundamental factor of production that a digital economy economizes on is trust.

Blockchain is not a new tool. It is a new economic infrastructure that enables anyone, anywhere, to trust the underlying facts recorded in a blockchain, including identity, ownership and promises represented in smart contracts.

These economic facts are the base layer of any economy. They generally work well in small groups – a family, village or small firm – but the verification of these facts, and the monitoring of how they change, become increasingly costly as economic activity scales up.

Layers of institutional solutions to trust problems have evolved over perhaps thousands of years. These are deep institutional layers – the rule of law, principles of democratic governance, independence of bureaucracy etc. Next, there are administrative layers containing organizational structures – the public corporation, non-profits, NGOs and similar technologies of cooperation. Then we have markets – institutions that facilitate exchange between humans.

It has been the ability to “truck, barter and exchange” over ever-larger markets that has catapulted prosperity to the levels now seen around the world.

Information technology augments our ability to interact with other people at all levels – economic, social and political. It has expanded our horizons. In the mid-1990s, retail went onto the internet. The late 1990s saw advertising move online. The mid-2000s saw news, information and friendship groups migrate to the internet. Since their advent in 2008, cryptocurrencies and natively digital financial assets have also come onto the internet. The last remaining challenge is to put real-world (physical) assets onto the internet.

The technology to do so already exists. Too many people think of non-fungible tokens (NFTs) as trivial JPEGs. But NFTs are not just collectable artworks; they are an ongoing experiment in the evolution of digital property rights. They can represent a certificate of ownership or be a digital twin of a real-world asset. They enable unique capital assets to become “computable,” that is, searchable, auditable and verifiable. In other words, they can be transacted in a digital market environment with a low cost of trust.

The internet of things can track real-world assets in real-time. Oracles can update blockchains regarding the whereabouts of physical assets being traded on digital markets. For example, anyone who has used parcel tracking over the past two years has seen an early version of this technology at work.

Over the past few years, people have been hard at work building all that is necessary to replicate real-world social infrastructure in a digital world. We now have money (stablecoins), assets (cryptocurrencies e.g. Bitcoin), property rights (NFTs) and general-purpose organizational forms (decentralized autonomous organizations (DAOs)). Intelligent people are designing dispute-resolution mechanisms using smart contracts. Others are developing mechanisms to link the physical and digital worlds (more) closely.

When will all this happen? The first-mover disadvantage associated with technological adoption has been overcome, mostly by everyone having to adopt new practices and technology simultaneously. Working, shopping and even entertaining online is now a well-understood concept. Digital connectedness is already an integral part of our lives. A technology that enhances that connectedness will have no difficulty in being accepted by most users.

It is easy to imagine that, before the decade concludes, citizens, consumers, investors and workers will live in an interconnected world, seamlessly transitioning between physical and digital planes at will.

Such an economy is usefully described as a digital economy because that is the main technological innovation. And the source of economic value created is rightly thought of as the industrialization of trust, which Web3 technologies bring. But when the physical and digital parts of the economy are completely and seamlessly joined, this might well be better described as a “computable economy.” A computable economy has low-cost trust operating at global market scale.

The last part of this system that needs to fall into place is “computable capital.”

Now that we can tokenize all the world’s physical products and services into a common, interoperable format; list them within a single, public ledger; and enable market transactions with a low cost of trust, governed by rules encoded within and enforced by the underlying substrate – what then?

Then, computable capital enables “programmable commerce” – and more than that, it enables what we might call a “Turing-complete economy.”

Why a US crypto crackdown threatens all digital commerce

Australian Financial Review, 10 August 2022

The US government’s action against the blockchain privacy protocol Tornado Cash is an epoch-defining moment, not only for cryptocurrency but for the digital economy.

On Tuesday, the US Treasury Department placed sanctions on Tornado Cash, accusing it of facilitating the laundering of cryptocurrency worth $US7 billion ($10.06 billion) since 2019. Some $455 million of that is connected to a North Korean state-sponsored hacking group.

Even before I explain what Tornado Cash does, let’s make it clear: this is an extraordinary move by the US government. Sanctions of this kind are usually put on people – dictators, drug lords, terrorists and the like – or specific things owned by those people. (The US Treasury also sanctioned a number of individual cryptocurrency accounts, in just the same way as they do with bank accounts.)

But Tornado Cash isn’t a person. It is a piece of open-source software. The US government is sanctioning a tool, an algorithm, and penalising anyone who uses it, regardless of what they are using it for.

Tornado Cash is a privacy application built on top of the ethereum blockchain. It is useful because ethereum transactions are public and transparent; any observer can trace funds through the network. Blockchain explorer websites such as Etherscan make this possible for amateur sleuths, but there are big “chain analysis” firms that work with law enforcement that can link users and transactions incredibly easily.

Tornado Cash severs these links. Users can send their cryptocurrency tokens to Tornado Cash, where they are mixed with the tokens of other Tornado Cash users and hidden behind a state-of-the-art cryptographic technique called “zero-knowledge proofs”. The user can then withdraw their funds to a clean ethereum account that cannot be traced to their original account.
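For readers curious about the mechanics, here is a deliberately simplified sketch of the deposit-and-withdraw pattern a mixer of this kind relies on. The names are mine and, in this toy version, the withdrawer reveals their secret; the real contract replaces that step with a zero-knowledge proof over a Merkle tree of commitments, which is what breaks the public link between deposit and withdrawal.

```python
# Toy sketch of the deposit/withdraw pattern behind a mixer like Tornado Cash.
# Illustrative assumption: here the withdrawer reveals (secret, nullifier); the
# real contract replaces that with a zk-SNARK proving the same facts without
# revealing which deposit in its Merkle tree is being withdrawn.
import hashlib, secrets

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

class ToyMixer:
    def __init__(self):
        self.commitments = set()       # stands in for the on-chain tree of deposits
        self.spent_nullifiers = set()  # prevents withdrawing the same note twice

    def deposit(self, commitment: bytes) -> None:
        self.commitments.add(commitment)

    def withdraw(self, secret: bytes, nullifier: bytes) -> bool:
        if h(secret, nullifier) not in self.commitments:
            return False               # no matching deposit
        if h(nullifier) in self.spent_nullifiers:
            return False               # note already spent
        self.spent_nullifiers.add(h(nullifier))
        return True                    # funds paid to a fresh, unlinked address

# Depositor keeps (secret, nullifier) offline and publishes only the commitment.
mixer = ToyMixer()
secret, nullifier = secrets.token_bytes(32), secrets.token_bytes(32)
mixer.deposit(h(secret, nullifier))
assert mixer.withdraw(secret, nullifier)        # first withdrawal succeeds
assert not mixer.withdraw(secret, nullifier)    # replay is rejected
```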

Obviously, as the US government argues, there are bad reasons that people might want to use such a service. But there are also very good reasons why cryptocurrency users might want to protect their financial privacy – commercial reasons, political reasons, personal security, or even medical reasons. One mundane reason that investment firms used Tornado Cash was to prevent observers from copying their trades. A more serious reason is personal security. Wealthy cryptocurrency users need to be able to obscure their token holdings from hackers and extortionists.

Tornado Cash is a tool that can make these otherwise transparent blockchains more secure and more usable. No permission has to be sought from anyone to use Tornado Cash. The Treasury Department has accused Tornado Cash of “laundering” more than $US7 billion, but that appears to be the total amount of funds that have passed through the service, not the funds connected to unlawful activity. There is no reason to believe that the Tornado Cash developers or community solicited the business of money launderers or North Korean hackers.

Now American citizens are banned from interacting with this open-source software at all. It is a clear statement from the world’s biggest economy that online privacy tools – not just specific users of those tools, but the tools themselves – are the targets of the state.

We’ve been here before. Cryptography was once a state monopoly, the exclusive domain of spies, diplomats and code breakers. Governments were alarmed when academics and computer scientists started building cryptography for public use. Martin Hellman, one of the inventors of public key cryptography in the 1970s (along with Whitfield Diffie and Ralph Merkle), was warned by friends in the intelligence community that his life was in danger as a result of his invention. In the so-called “crypto wars” of the 1990s, the US government tried to enforce export controls on cryptographic algorithms.

One of the arguments made during those political contests was that code is speech: as software is just text and lines of code, it should enjoy the same constitutional protections as other forms of expression.

GitHub, owned by Microsoft, is a global repository for open-source software. Almost immediately after the Treasury sanctions were introduced this week, GitHub closed the accounts of Tornado Cash developers. Not only did this remove the project’s source code from the internet, it meant GitHub and Microsoft were implicitly abandoning the long-fought principle that code needs to be protected as a form of free expression.

An underappreciated fact about the crypto wars is that if the US government had been able to successfully restrict or suppress the use of high-quality encryption, then the subsequent two decades of global digital commerce could not have occurred. Internet services simply would not have been secure enough. People such as Hellman, Diffie and Merkle are now celebrated for making online shopping possible.

We cannot have secure commerce without the ability to hide information with cryptography. By treating privacy tools as if they are prohibited weapons, the US Treasury is threatening the next generation of commercial and financial digital innovation.

Reliable systems out of unreliable parts

Amsterdam Law & Technology Institute Forum, 27 July 2022. Originally published here.

How we understand where something comes from shapes where we take it, and I’m now convinced we’re thinking about the origins of blockchain wrong.

The typical introduction to blockchain and crypto for beginners – particularly non-technical beginners – gives Bitcoin a sort of immaculate conception. Satoshi Nakamoto suddenly appears with a fully formed protocol and disappears almost as suddenly. More sophisticated introductions will observe that Bitcoin is an assemblage of already-existing technologies and mechanisms – peer-to-peer networking, public-key cryptography, the principle of database immutability, the hashcash proof-of-work mechanism, some hand-wavy notion of game theory – put together in a novel way. Still more sophisticated introductions will walk through the excellent ‘Bitcoin’s academic pedigree’ paper by Arvind Narayanan and Jeremy Clark, which guides readers through the scholarship that underpins those technologies.

This approach has many weaknesses. It makes it hard to explain proof-of-stake systems, for one. But what it really misses – what we fail to pass on to students and users of blockchain technology – is the sense of blockchain as a technology for social systems and economic coordination. Instead, it comes across much more like an example of clever engineering that gave us magic internet money. We cannot expect every new entrant or observer of the industry to be fully signed up to the vision of those that came before them. But it is our responsibility to explain that vision better.

Blockchains and crypto are the heirs of a long intellectual tradition of building fault-tolerant distributed systems using economic incentives. The problem this tradition seeks to solve is: how can we create reliable systems out of unreliable parts? In that simply stated form, this question serves not just as a mission statement for distributed systems engineering but for all of social science. In economics, for example, Peter Boettke and Peter Leeson have called for a ‘robust political economy’ – a political-economic system robust to the problems of information and incentives. In blockchain we see computer engineering converge with the frontiers of political economy. The two fields are built on radically different assumptions but have come to the same answers.

So how can we tell an alternative origin story that takes beginners where they need to go? I see at least two historical strands, each of which takes us down key moments in the history of computing.

The first starts with the design of fault-tolerant systems shortly after the Second World War. Once electronic components and computers began to be deployed in environments with high reliability needs (say, for fly-by-wire aircraft or the Apollo program), researchers turned their minds to how to ensure that the failure of parts of a machine did not lead to critical failure of the whole machine. The answer was instinctively obvious: add backups (that is, multiple redundant components) and have what John von Neumann in 1956 called a ‘restoring organ’ combine their multiple outputs into a single output that can be used for decision-making.

But this creates a whole new problem: how should the restoring organ reconcile those components’ data if they start to diverge from each other? How will the restoring organ know which component failed? One solution was to have the restoring organ treat each component’s output as a ‘vote’ about the true state of the world. Here, already, we can see the social science and computer science working in parallel: Duncan Black’s classic study of voting in democracies, The Theory of Committees and Elections was published just two years after von Neumann’s presentation of the restoring organ tallying up the votes of its constituents.
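As a rough illustration of the idea (the names and the threshold check here are mine, not von Neumann's formulation), a restoring organ can be sketched as a majority vote over the outputs of redundant components:

```python
# Minimal sketch of a "restoring organ": a majority vote over the outputs of
# redundant components. With 2f+1 components, a central voter can mask up to f
# faulty outputs. The distributed version of the problem, with no central voter
# and components that can lie differently to different peers, is what the
# Byzantine generals literature discussed below takes up.
from collections import Counter
from typing import Callable, Sequence

def restoring_organ(outputs: Sequence[object]) -> object:
    """Return the value reported by a strict majority of components, else raise."""
    value, count = Counter(outputs).most_common(1)[0]
    if count * 2 <= len(outputs):
        raise RuntimeError("no majority: too many components have failed")
    return value

# Triple modular redundancy: three replicas of the same computation, one faulty.
replicas: list[Callable[[int], int]] = [
    lambda x: x + 1,          # healthy
    lambda x: x + 1,          # healthy
    lambda x: x - 99,         # faulty component reporting garbage
]
print(restoring_organ([f(41) for f in replicas]))   # -> 42, the fault is masked
```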

The restoring organ was a single, central entity that collated the votes and produced an answer. But in the distributed systems that started to dominate the research on fault tolerance through the 1970s and 1980s there could not be a single restoring organ – the system had to come to consensus as a whole. The famous 1982 paper ‘The Byzantine Generals Problem’ by Leslie Lamport, Robert Shostak and Marshall Pease (another of the half-taught and quarter-understood parts of the origins-of-blockchain canon) addresses this research agenda by asking how many voting components are needed for consensus in the presence of faulty – even malicious – components. One of their insights was that cryptographically unforgeable signatures on the communicated information (the ‘orders’) greatly simplify the problem.

The generation of fault-tolerant distributed consensus algorithms that emerged from the 1990s – most prominently Lamport’s Paxos and the later Raft – now underpins much of global internet and commerce infrastructure.

Satoshi’s innovation was to make the distributed agreement system permissionless – more precisely, to join the network as a message-passer or validator (miner) does not require the agreement of all other validators. To use the Byzantine generals’ metaphor, now anyone can become a general.

That permissionlessness gives it a resilience against attack that the byzantine fault tolerant systems of the 1990s and 2000s were never built for. Google’s distributed system is resilient against a natural disaster, but not a state attack that targets the permissioning system that Google as a corporate entity oversees. Modern proof-of-stake systems such as Tendermint and Ethereum’s Casper are an evolutionary step that connects Bitcoin’s permissionlessness with decades of knowledge of fault tolerant distributed systems.

This is only a partial story. We still need the second strand: the introduction of economics and markets into computer science and engineering.

Returning to the history of computing’s earliest days, the institutions that hosted the large expensive machines of the 1950s and 1960s needed to manage the demand for those machines. Many institutions used sign-up sheets, some even had dedicated human dispatchers to coordinate and manage a queue. Timesharing systems tried to spread the load on the machine so multiple users could work at the same time.

It was not long before some researchers realised that sharing time on a machine was fundamentally a resource allocation problem that could be tackled with relative prices. By the late 1960s Harvard University was using a daily auction to reserve space on its PDP-1 machine, using a local funny money that was issued and reissued each day.
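By way of illustration only (the numbers, names and rules below are assumptions, not a reconstruction of Harvard's actual scheme), the core of such a funny-money auction is very small:

```python
# Sketch of a funny-money auction for machine time, in the spirit of the scheme
# described above. Each user is re-issued a daily budget, bids for slots, the
# highest bids win, and unspent funny money has no value tomorrow.
def allocate_slots(bids: dict[str, float], slots: int) -> list[str]:
    """bids maps user -> bid in funny money; returns the winners of `slots` slots."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    return [user for user, _ in ranked[:slots]]

daily_budget = 100.0
bids = {"alice": 60.0, "bob": 25.0, "carol": 90.0}
assert all(b <= daily_budget for b in bids.values())   # bids bounded by the daily issue
print(allocate_slots(bids, slots=2))                    # -> ['carol', 'alice']
```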

As the industry shifted from a many-users, one-computer structure to a many-users, many-distributed-computers structure, the computer science literature started to investigate the allocation of resources between machines. Researchers reached for the appropriate metaphor: were distributed systems like organisations? Or were they like separate entities tied together by contracts? Or were they like markets?

In the 1988 Agoric Open Systems papers, Mark S. Miller and K. Eric Drexler argued not simply for the use of prices in computational resource allocation but for reimagining distributed systems as a full-blown Hayekian catallaxy, where computational objects have ‘property rights’ and compensate each other for access to resources. (Full disclosure: I am an advisor to Agoric, Miller’s current project.) As they noted, one missing but necessary piece for the realisation of this vision was exchange infrastructure that would provide an accounting and currency layer without the need for a third party such as a bank. This, obviously, is what Bitcoin (and indeed its immediate predecessors) sought to provide.

We sometimes call Bitcoin the first successful fully-native, fully-digital money, but skip over why that is important. Cryptocurrencies don’t just allow for censorship-free exchange. They radically expand the number of exchanges that can occur – not just between people but between machines. Every object in a distributed system, all the way up and down the technology stack, has an economic role and can form distinctly economic relationships. We see this vision in its maturity in the complex economics of resource allocation within blockchain networks.

Any origin story is necessarily simplified, and the origin story I have proposed here skips over many key sources of the technology that is now blockchain: cryptography, the history and pre-history of smart contracts, and of course the cypherpunk community from which Bitcoin itself emerged. But I believe this narrative places us on a much sounder footing to talk about the long-term social and economic relevance of blockchain.

As Sinclair Davidson, Jason Potts and I have argued elsewhere, blockchains are an institutional technology. They allow us to coordinate economic activity in radically different ways, taking advantage of the global-first, trust-minimised nature of this distributed system to create new types of contracts, exchanges, organisations, and communities. The scale of this vision is clearest when we compare it with what came before.

Consider, for instance, the use of prices for allocating computer time. The early uses of prices were either to recoup the cost of operation for machines, or as an alternative to queuing, allowing users to signal the highest value use of scarce resources. But prices in real-world markets do a lot more than that. By concentrating dispersed information about preferences they inspire creation – they incentivise people to bring more resources to market, and to invent new services and methods of production that might earn super-normal returns. Prices helped ration access to Harvard’s PDP-1, but could not inspire the PDP-1 to grow itself more capacity.

The Austrian economist Ludwig von Mises wrote that “the capitalist system is not a managerial system; it is an entrepreneurial system”. The market that is blockchain does not merely allocate resources efficiently across a distributed system; it has propelled an explosion of entrepreneurial energy that is speculative and chaotic but above all innovative. The blockchain economy grows and contracts, shaping and reshaping just like a real economy. It is not simply a fixed network with nodes and connections. It is a market: it evolves.

We’ve of course seen evolving networks in computation before. The internet itself is a network – a web that is constantly changing. And you could argue that the ecosystem of open-source software that allows developers to layer and combine small, shared software components into complex systems looks a lot like an evolutionary system. Neither of these directly uses the price system for coordination. They are poorer for it. The economics of internet growth have encouraged concentration into a small number of large firms, while open-source software is chronically under-supplied. To realise the potential of distributed computational networks we need the tools of an economy: property rights and a native means of exchange.

Networks can fail for many reasons: nodes might crash, might fail to send or receive messages correctly, their responses might be delayed longer than the network can tolerate, they might report incorrect information to the rest of the network. Human social systems can fail when information is not available where and when it is needed, or if incentive structures favour anti-social rather than pro-social behaviours.

As a 1971 survey of the field of fault-tolerant computing noted, “The discipline of fault-tolerant computing would be unnecessary if computer hardware and programs would always behave in perfect agreement with the designer’s or programmer’s intentions”. Blockchains make the joint mission of economics and computer science stark: how to build reliable systems out of unreliable parts.

A better design for defi grant programs

With Darcy WE Allen

The blockchain and defi sector should understand more about how real-world grant-giving bodies function. Nowhere is this clearer than in the recent debate about Uniswap and its new $20 million DeFi Education Fund.

In the real world, grant giving is a lot like venture finance. It is an entrepreneurial activity involving the discovery of new information, new opportunities, and new ideas. It helps realise those opportunities and ideas and is rewarded for doing so.

The fact that grants are made with a for-purpose goal while venture finance is done with a for-profit goal only makes a difference at the margin. The best grant-giving bodies in the world work very hard to ensure that the custodians of funds have incentives tightly aligned to the overall objectives of the body. Some even use external independent auditors to check whether grants align to objectives, and penalise the program’s management if they do not. These rules bind the grant makers, allowing the grant seekers to innovate and discover how best to achieve the program’s objectives.

Admittedly, it can be sometimes hard to see the entrepreneurial and discovery nature of grant programs. Academic research grants tend to be highly bureaucratic processes with layers of committees and appointed experts collating and judging grant proposals at arms-length from the funders.

But ultimately this bureaucracy has a purpose. Those systems of rules might seem inefficient, but they have been designed to align the dispersal of funds with the objectives of the fund. In the case of the Australian Research Council, all those committees are intended to fulfil the objectives of the Department of Education’s scientific mandate through discovery and investment. (Let’s not get hung up about how effective these government programs are.)

At the other end of the spectrum is Tyler Cowen’s Emergent Ventures grant program, where almost all decision-making is Cowen’s judgement. But this too is a structure designed to align objectives with fund dispersal. The objectives of the fund are to allow Cowen to use his knowledge to support “high-risk, high-reward ideas that advance prosperity, opportunity, and wellbeing” — and by all accounts the program is an incredible success.

Two approaches to defi grants

Right now we broadly have two models of grant giving in the defi space. The first is small centralised grant committees. These tend to be small groups of authoritative community leaders with near absolute control of large treasuries assessing and granting funds to desirable projects. These leaders may be elected or appointed, but either way they are using their authority in the community to legitimate their decisions. They may have a deep understanding of their ecosystem and its funding needs. An obvious problem with this is the risk that committee leaders opportunistically fund projects based on personal relationships, rather than ecosystem value.

The alternative model — and the most common one — is putting all grant proposals up to a vote of all relevant stakeholders, that is, holders of a governance token. Designing structures for effective collective decision-making is one of the hardest problems in political science. It is no surprise that some decisions in the nascent blockchain governance world have been controversial.

But there’s a fundamental problem with this democratic model of grant making: it makes very little sense to believe that a fully distributed democratic community can make the sort of entrepreneurial decisions that we expect from both venture finance and grant-giving bodies. Why would we expect a diverse, pseudonymous community of governance token holders to coordinate around extremely uncertain entrepreneurial decisions?

Throwing every proposal to a mass vote is the worst of all worlds. First, every proposal ultimately becomes a public vote about the objectives of the program itself. Should the treasury’s funds be used for marketing, or research, or to build new infrastructure? Grant recipients, and the ecosystem that relies on them, are left with inconsistency and unpredictability.

Second, there is little reason to believe that a mass vote will reveal the best investments. Highly decentralised voting may protect against opportunism, but it isn’t likely to surface information about entrepreneurial investment opportunities — exactly what is needed for successful grant-giving. This precise information-revelation problem is the motivation and intuition behind mechanisms such as quadratic funding, futarchy, and commitment voting.
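To make the contrast concrete, here is a sketch of the quadratic funding rule (the figures are illustrative): the mechanism is designed so that breadth of support, not just depth, moves the matching funds.

```python
# Sketch of the quadratic funding rule referenced above: a project's total funding
# is the square of the sum of the square roots of individual contributions, so
# many small contributors move more matching funds than one large whale.
# (Numbers are illustrative; real implementations also cap the match to the
# size of the matching pool.)
from math import sqrt

def quadratic_match(contributions: list[float]) -> float:
    """Matching amount paid on top of the raw contributions."""
    total = sum(sqrt(c) for c in contributions) ** 2
    return total - sum(contributions)

broad_support = [1.0] * 100       # 100 people giving 1 token each
whale_support = [100.0]           # 1 person giving 100 tokens
print(quadratic_match(broad_support))  # -> 9900.0: strong signal of dispersed demand
print(quadratic_match(whale_support))  # -> 0.0: a single donor earns no match
```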

A better grant program design

This is a solvable problem. Treasuries should give budgets to individual ‘philanthropists’, who then make entrepreneurial grant investments, and the philanthropists’ compensation should be tied to the success of the projects they fund.

The full set of tokenholders sets the objective of the grant program, or an individual round. These objectives would shift as a given ecosystem and the broader industry develops — for instance from funding oracle feeds, to bridging infrastructure, to policy change. Grants are broken into funding rounds. The length of those rounds, say a year or two, must be long enough that there are observable outcomes from grant projects. Rounds could be sequential or overlap.

Each round, a set of philanthropists (say, five) are chosen (elected or appointed) and given discrete budgets. The number of philanthropists for a given round could also be decided by all tokenholders.

Once the funds are disbursed to each philanthropist, they run separate and independent grant programs. They must have credible autonomy: their own rules, their own application processes, and their own interpretation of the objectives of the overall grant program.

At the end of the round, the full set of tokenholders rank each of the five philanthropists according to how successful (how much value was added, how closely they aligned to objectives) their grants were. The philanthropists are compensated for their work based on that ranking, with the top-ranked getting the most reward.

In this way the grants program is designed both to fund projects and to incentivise the decision-making philanthropists to do a good job.

Our proposal drives the same sort of competitive, entrepreneurial energy that we see in venture finance into defi grant distribution.

Through grant program design we can encourage effective decision-making through feedback loops, while maintaining decentralisation (the risk that philanthropists will behave badly is limited to the length of a grant round) and giving philanthropists a personal stake in the success of the grants they have distributed (encouraging them to support and shepherd those projects to fruition).
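A minimal sketch of the round structure we have in mind appears below; the budget figures, payout weights and names are illustrative assumptions rather than part of the proposal.

```python
# Sketch of the proposed round structure: philanthropists receive budgets, make
# grants independently, and are then paid according to a tokenholder ranking.
from dataclasses import dataclass, field

@dataclass
class Philanthropist:
    name: str
    budget: float
    grants: dict[str, float] = field(default_factory=dict)  # project -> amount

    def grant(self, project: str, amount: float) -> None:
        assert amount <= self.budget, "cannot grant more than the round budget"
        self.budget -= amount
        self.grants[project] = self.grants.get(project, 0.0) + amount

def settle_round(ranking: list[Philanthropist], reward_pool: float) -> dict[str, float]:
    """Tokenholders rank philanthropists best-to-worst; rewards are paid by rank."""
    weights = [len(ranking) - i for i in range(len(ranking))]   # e.g. 3, 2, 1
    total = sum(weights)
    return {p.name: reward_pool * w / total for p, w in zip(ranking, weights)}

# One round with three philanthropists and a 30-token reward pool.
alice, bob, carol = (Philanthropist(n, budget=1000.0) for n in ("alice", "bob", "carol"))
alice.grant("oracle feeds", 600.0)
bob.grant("bridge infrastructure", 900.0)
carol.grant("policy work", 400.0)
print(settle_round([bob, alice, carol], reward_pool=30.0))
# -> {'bob': 15.0, 'alice': 10.0, 'carol': 5.0}
```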

Grant program design matters a lot

It might be easy to dismiss grant program design as a sideshow in the blockchain industry, marginally interesting but ultimately not a central part of the success of any particular protocol. It would be wrong to do so.

Analogies in blockchain are difficult. But if DAOs are like corporations, then grant programs are how they do internal capital allocation — and as Alfred D. Chandler Jr. has shown, internal capital allocation has determined the shape of global capitalism. Alternatively, if blockchain ecosystems are like countries with governments, then when we talk about grant programs we’re talking about public finance — they are how we pay for public goods and deploy scarce resources in a democratic context.

Ultimately, the sustainability and robustness of blockchain ecosystems require the effective use of resources. The success of grant programs will form a critical part of the success of blockchain and dapp protocols. They should seek to harness the same entrepreneurial energy and effort that has driven the rest of the blockchain industry.

Towards a Digital CBD

With Darcy Allen and Jason Potts

The COVID-19 pandemic is both a public health crisis, and a digital technology accelerant. Pre-pandemic, our economic and social activities were done predominantly in cities. We connected and we innovated in these centralised locations.

But then a global pandemic struck. We were forced to shop, study and socialise in a distributed way online. This shock had an immediate impact on our cities, with visceral images of closed businesses and silent streets.

Even after COVID-19 dissipates, the widespread digital adoption that the pandemic brought about means that we are not snapping back to pre-pandemic life.

The world we are entering is hybrid. It is both analogue and digital, existing in both regions and cities. Understanding the transition is critical because cities are one of our truly great inventions. They enable us to trade, to collaborate, and to innovate. In other words, cities aggregate economic activity.

The Digital CBD project is a large-scale research project that asks: what happens when that activity suddenly disaggregates? What happens to the city and its suburbs? What happens to the businesses that have clustered around the CBD? What infrastructure do we need for a hybrid digital city? What policy changes will be needed to enable firms and citizens to adapt?

Forced digital adoption

This global pandemic happened at a critical time. Many economies were already transitioning from an industrial to a digital economy. Communications technologies had touched almost every business. Digital platforms were commonly used to engage socially and commercially. But the use of these technologies was not yet at the core of our businesses; it sat on the sidelines. We were only on the cusp of a digital economy.

Then COVID-19 forced deep, coordinated, multi-sector and rapid adoption of digital technologies. The coordination failures and regulatory barriers that had previously held us back were wiped away. We swapped meeting rooms for conference calls, cash for credit cards, pens-and-paper for digital signatures. There had been a desire for these changes for a long time.

These changes bring even more frontier technologies suddenly into view. Blockchains, artificial intelligence, smart contracts, the internet of things and cybersecurity technologies are now more viable because of this base-level digital adoption.

Importantly, this suite of new technologies doesn’t just augment and improve the productivity of existing organisations; it makes new organisational forms possible. It changes the structure of the economy itself.

Discovering our digital CBD

Post-pandemic, parts of our life and work will return to past practices. Some offices will reopen, requiring staff to return to rebuild morale and culture. And those people will also flood back into CBD shops, bars and restaurants. They will, as all flourishing cities encourage, meet and innovate.

But of course some businesses will relish their new-found productivity benefits – and some workers will guard the lifestyle benefits of working from home. Many firms will never fully reopen their offices and will brag about their dynamic remote-work culture.

The potential implications for cities, however, are more complex. Cities will have fundamentally different patterns of specialisation and trade than in the pre-pandemic economy. Those new patterns are enabled by a suite of decentralised technologies, including blockchains and smart contracts, that were already disrupting how we organise our society.

We can now organise economic activity in new ways. CBDs have historically housed large, hierarchical industrial-era companies. As we have written elsewhere, decentralised infrastructure enables new types of organisational forms to emerge. Blockchains industrialise trust and shift economic activities towards decentralised networks.

How do these new types of industrial organisation change the way that we work, and the location of physical infrastructure? What are the policy changes necessary to enable these new organisations to flourish in particular jurisdictions?

Economies and cities are fundamentally networks of supply chains, and that infrastructure is turning digital too. The pandemic has accelerated the transition to digital trade infrastructure that provides more trusted and granular information about goods as they move. How can we ensure that these digital supply chains are resilient to future shocks? What opportunity is there for regions to become digital trade hubs?

Another impact of digital technology is that labour markets just became more global. The acquisition of talented labour is no longer bounded by physical distance. Our collaborations are structured around timezones, rather than geography.

Labour market dynamism presents unique opportunities, but will also require secure infrastructure both to validate credentials and to facilitate ongoing productivity. How can Melbourne, a world-class cluster of universities, position itself for this new environment?

A research and a policy problem

Building a digital CBD is fundamentally an entrepreneurial problem—a problem of discovering what these new digital ways of coordinating and collaborating look like. Our Digital CBD research program contributes to this challenge with insights from economics, law, political science, finance, accounting and more. We aim to use this interdisciplinary research base to make policy recommendations that help our digital CBD to flourish.

Building a grammar of blockchain governance

With Darcy Allen, Sinclair Davidson, Trent MacDonald and Jason Potts. Originally a Medium post.

Blockchains are institutional technologies made of rules (e.g. consensus mechanisms, issuance schedules). Different rule combinations are entrepreneurially created to achieve some objectives (e.g. security, composability). But the design of blockchains, like all institutions, must occur under ongoing uncertainty. Perhaps a protocol bug is discovered, a dapp is hacked, treasury is stolen, or transaction volumes surge because of digital collectible cats. What then? Blockchain communities evolve and adapt. They must change their rules (e.g. protocol security upgrades, rolling back the chain) and make other collective decisions (e.g. changing parameters such as interest rates, voting for validators, or allocating treasury funds).

Blockchain governance mechanisms exist to aid decentralised evolution. Governance mechanisms include online forums, informal polls, formal improvement processes, and on-chain voting mechanisms. Each of these individual mechanisms — let alone their interactions — are poorly understood. They are often described through sometimes-useful but imperfect analogies to other institutional systems with deeper histories (e.g. representative democracy). This is not a robust way to design the decentralised digital economy. It is necessary to develop a shared language, and understanding, of blockchain governance. That is, a grammar of rules that can describe the entire possible scope of blockchain governance rules, and their relationships, in an analytically consistent way.

A starting point for the development of this shared language and understanding is a methodology and rule classification system developed by 2009 economics Nobel Laureate Elinor Ostrom to study other complex, nested institutional systems. We propose an empirical project that seeks conceptual clarity in blockchain governance rules and how they interact. We call this project Ostrom-Complete Governance.

The common approach to blockchain governance design has been highly experimental — relying very much on trial and error. This is a feature, not a bug. Blockchains are not only ecosystems that require governance, but the technology itself can open new ways to make group decisions. While being in need of governance, blockchain technology can also disrupt governance. Through lower costs of institutional entrepreneurship, blockchains enable rapid testing of new types of governance — such as quadratic voting, commitment voting and conviction voting — that were previously too costly to implement at scale. We aren’t just trying to govern fast-paced decentralised technology ecosystems, we are using that same technology for its own governance.

This experimental design challenge has been compounded by an ethos of, and commitment to, decentralisation. That decentralisation suggests the need for a wide range of stakeholders with different decision rights and inputs into collective choices. The lifecycle of a blockchain exacerbates this problem: through bootstrapping, a blockchain ecosystem can see a rapidly shifting stakeholder group with different incentives and desires. Different blockchain governance mechanisms are variously effective in different stages of blockchain development. Blockchains, and their governance, begin relatively centralised (with small teams of developers), but projects commonly attempt to credibly commit to rule changes towards a system of decentralised governance.

Many of these governance experiments and efforts have been developed through analogy or reference to existing organisational forms. We have sought to explain and design this curious new technology by looking at institutional forms we know well, such as representative democracy or corporate governance. Scholars have looked to existing familiar literature such as corporate governance, information technology governance, information governance, and of course political constitutional governance. But blockchains are not easily categorised as nation states, commons, clubs, or firms. They are a new institutional species that has features of each of these well-known institutional forms.

An analogising approach might be effective to design the very first experiments in blockchain governance. But as the industry matures, a new and more effective and robust approach is necessary. We now have vast empirical data of blockchain governance. We have hundreds, if not thousands, of blockchain governance mechanisms, and some evidence of their outcomes and effects. These are the empirical foundations for a deeper understanding of blockchain governance — one that embraces the institutional diversity of blockchain ecosystems, and dissects its parts using a rigorous and consistent methodology.

Embracing blockchain institutional diversity

Our understanding of blockchain governance should not flatten or obscure its complexity. Blockchains are polycentric systems, with many overlapping and nested centres of decision-making. Even in equally-weighted one-token-one-vote blockchain systems, those votes are nested within other processes, such as a GitHub proposal process and the subsequent execution of upgrades. It is a mistake to flatten these nested layers, or to assume some layers are static.

Economics Nobel Laureate Elinor Ostrom and her colleagues studied thousands of complex polycentric systems of community governance. Their focus was on understanding how groups come together to collectively manage shared resources (e.g. fisheries and irrigation systems) through systems of rules. This research program has since studied a wide range of commons, including culture, knowledge and innovation. This research has been somewhat popular with blockchain entrepreneurs, in particular through using the succinct design principles (e.g. ‘clearly defined boundaries’ and ‘graduated sanctions’) of robust commons to inform blockchain design. Commons design principles can help us to analyse blockchain governance — including whether blockchains are “Ostrom-Compliant” — or at least give us some points of reference to begin our search for better designs.

But beginning with the commons design principles has some limitations. It means we are once again beginning blockchain governance design by analogy (that blockchains are commons), rather than understanding blockchains as a novel institutional form. In some key respects blockchains resemble commons — perhaps we can understand, for instance, the security of the network as a common pool resource — but they also have features of states, firms, and clubs. We should therefore not expect that the design principles developed for common pool resources and common property regimes are directly transferable to blockchain governance.

Beginning with Ostrom’s design principles also means starting from the output of that research program, rather than applying the underlying methodology that produced it. The principles were discovered through a meta-analysis of thousands of different institutional rule systems. A deep blockchain-specific understanding must emerge from empirical analysis of existing systems.

We propose that while Ostrom’s design principles may not be applicable, a less-appreciated underlying methodology developed in her research is. In her empirical journey, Ostrom and colleagues at the Bloomington School developed a detailed methodological approach and rule classification system. While that system was developed to dissect the institutional complexity of the commons, it can also be used to study and achieve conceptual clarity in blockchain governance.

The Institutional Analysis and Development (IAD) framework, and its corresponding rule classification system, is an effective method for deep observation and classification of blockchain governance. Utilising this approach we can understand blockchains as a series of different nested and related ‘action arenas’ (e.g. the consensus process, a protocol upgrade, a DAO vote) where different actors engage, coordinate and compete under sets of rules. Each of these action arenas has different participants (e.g. token holders), different positions (e.g. delegated node), and different incentives (e.g. the risk of being slashed), all constrained and enabled by rules.

Once we have identified the action arenas of a blockchain we can start to dissect the rules of each arena. Ostrom’s 2005 book, Understanding Institutional Diversity, provides a detailed classification of rules that we can use for blockchain governance, including:

  • position rules on what different positions participants can hold in a given governance choice (e.g. governance token holder, core developer, founder, investor)
  • boundary rules on how participants can or cannot take part in governance (e.g. staked tokens required to vote, transaction fees, delegated rights)
  • choice rules on the different options available to different positions (e.g. proposing an upgrade, voting yes or no, delegating or selling votes)
  • aggregation rules on how inputs to governance are aggregated into a collective choice (e.g. one-token-one-vote, quadratic voting, weighting for different classes of nodes).

These rules matter because they change the way that participants interact (e.g. how or whether they vote) and therefore change the patterns that emerge from repeated governance processes (e.g. low voter turnout, voting deadlocks, wild token fluctuations). There have been some studies that have applied the broad IAD framework and commons research insights to blockchain governance, but there has been no deep empirical analysis of the rule systems of blockchains using the underlying classification system.
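To show what this empirical classification could look like in practice, here is a minimal sketch of an action arena encoded in the rule grammar described above; the arena, field names and example rules are illustrative assumptions, not observations of any particular chain.

```python
# Minimal sketch of encoding an action arena in the IAD-style rule grammar.
# Field names and example values are illustrative, not observed data.
from dataclasses import dataclass

@dataclass
class ActionArena:
    name: str                      # e.g. "protocol upgrade vote"
    position_rules: list[str]      # who can hold which positions
    boundary_rules: list[str]      # how participants enter or exit
    choice_rules: list[str]        # options available to each position
    aggregation_rules: list[str]   # how inputs become a collective choice

upgrade_vote = ActionArena(
    name="protocol upgrade vote",
    position_rules=["governance token holder", "core developer", "delegate"],
    boundary_rules=["tokens must be staked to vote", "proposal bond required"],
    choice_rules=["propose upgrade", "vote yes/no/abstain", "delegate votes"],
    aggregation_rules=["one-token-one-vote", "quorum of 4% of supply"],
)

# With arenas encoded in a common grammar, comparison across ecosystems becomes
# a query rather than a reading exercise, e.g. find arenas with a quorum rule:
def uses(arenas: list[ActionArena], rule_fragment: str) -> list[str]:
    return [a.name for a in arenas if any(rule_fragment in r for r in a.aggregation_rules)]

print(uses([upgrade_vote], "quorum"))   # -> ['protocol upgrade vote']
```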

The opportunity

Today the key constraint in advancing blockchain governance is the lack of a standard language of rules with which to describe and map governance. In blockchain whitepapers these rules are described in a vast array of different formats, with different underlying meanings. That hinders our capacity to compare and analyse blockchain governance systems, but it can be remedied by adopting the same foundational grammar. Developing a blockchain governance grammar is fundamentally an empirical exercise of observing and classifying blockchain ecosystems as they are, rather than imposing external design rules onto them. This approach doesn’t rely on analogy to other institutions, and is robust to new blockchain ecosystem-specific language and new experimental governance structures.

Rather than broadly describing classes of blockchain governance (e.g., proof-of-work versus proof-of-stake versus delegated-proof-of-stake), our approach begins with a common set of rules. All consensus processes have sets of boundary rules (who can propose a block? how is the block-proposer selected?), choice rules (what decisions do block-proposers make, such as the ordering of transactions?), incentives (what is the cost of proposing a bad block? what is the reward for proposing a good one?), and so on. For voting structures, we can also examine boundary rules (who can vote?), position rules (how can a voter get a governance token?), choice rules (can voters delegate? to whom?) and aggregation rules (are vote weights symmetrical? is there a quorum?).

We can begin to map and compare different blockchain governance systems utilising this common language. All blockchain governance has this underlying language, even if today that grammar isn’t explicitly discussed. The output of this exercise is not simply a series of detailed case studies of blockchain governance, it is detailed case studies in a consistent grammar. That grammar — an Ostrom-Complete Grammar — enables us to define and describe any possible blockchain governance structure. This can ultimately be leveraged to build new complete governance toolkits, as the basis for simulations, and to design and describe blockchain governance innovations.

DeFi governance needs better tokenomics

With Sinclair Davidson, published in CoinDesk, 13 April 2021

The controversy surrounding the launch of the Fei stablecoin protocol last week reveals a lot about DeFi’s problems with tokenomics. We know what a governance token offers its holders – the right to vote on changes to fees, and the protocol itself. But what should these rights be worth? 

The Fei protocol is engineered to maintain stability against the U.S. dollar by charging a penalty for selling, and paying a bonus for buying, the Fei token when it is below the $1 peg. It is an innovative design, albeit highly experimental. But as Fei has drifted further and further from the peg since launch, early buyers have found themselves in the unfortunate position of being unable to liquidate their positions without taking a substantial loss.
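For readers unfamiliar with the design, a stylised sketch of a "direct incentive" peg mechanism of this kind is below. The scaling functions are illustrative assumptions, not Fei's published formulas; the point is simply that the further the price drifts below the peg, the harsher the penalty for selling.

```python
# Stylised sketch of a "direct incentive" stablecoin mechanism: below the peg,
# sellers pay a burn penalty and buyers receive a mint bonus, both growing with
# the distance from $1. The scaling here is an illustrative assumption.
def direct_incentives(price: float, penalty_scale: float = 1.0, bonus_scale: float = 0.5):
    """Return (sell_penalty, buy_bonus) as fractions of trade size at a given price."""
    distance = max(0.0, 1.0 - price)            # only applies below the $1 peg
    sell_penalty = min(1.0, penalty_scale * distance ** 2)
    buy_bonus = bonus_scale * distance
    return sell_penalty, buy_bonus

for price in (1.00, 0.98, 0.90, 0.80):
    penalty, bonus = direct_incentives(price)
    print(f"price ${price:.2f}: sell penalty {penalty:.1%}, buy bonus {bonus:.1%}")
# The further the drift, the harsher the exit penalty -- exactly when holders
# most want out, which is the article's point.
```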

By the end of the week, Fei suspended the penalties and rewards to try to stabilize the protocol. Until then, these mechanisms were functioning exactly as intended. Careful investors would have seen everything spelled out in the Fei white paper.

We might say this is a simple “buyer beware” story. But it is complicated by the simultaneous airdrop and distribution of Fei’s governance token, TRIBE, that was intended to allocate control rights over the protocol itself. In practice, buyers were trading an appreciating asset (ETH) for a stablecoin (FEI) to get access to the real prize: TRIBE.

In the crypto and DeFi industry many think that governance is just about voting. Voting is important of course – it is the governing part of governance. But it is only a part. In the traditional corporate world, governance rights come with a complex and coherent set of rights and obligations clearly tied to the underlying value of the firm. 

Share ownership represents a right to the cash flow of the company, and a residual claim over the company’s assets if, for whatever reason, it is wound up. The structure of these rights is the result of hundreds of years of evolution in corporate governance.

If voting rights and the rights over the cash flow and the assets of the firm are misaligned, there can be perverse results. In crypto, we shouldn’t just want governance token holders to vote. We should want them to vote well – making governance choices that are shaped by their interest in increasing the value produced by the protocol, and their knowledge that they will benefit directly from those choices.

The initial “investors” in Fei are not really investors in FEI at all. They are customers who spent ETH to buy FEI. And there is an important difference between being a customer and an owner. The difference between being able to complain – to Tweet about how you’ve been wronged – and the ability to do something to recover your money. Because of the design of Fei’s “protocol controlled value” pool of ETH, FEI holders have no residual ownership claim over the ETH, just the right to sell their new FEI on a secondary market.

What governance rights FEI holders do have come only from being airdropped TRIBE, a fork of Compound’s COMP token. Like COMP and many other DeFi governance tokens, TRIBE gives voting rights, but does not allocate cash flow rights.

True, TRIBE holders might in the future vote for protocol amendments that allocate those rights. Even so, the token at best represents an option to participate in unspecified governance that might result in cash flow, but might not. 

The crisis happened because an unexpectedly large number of people bought into FEI to get TRIBE, and then tried to sell out of FEI. That’s understandable: nobody wants to hold a stablecoin in a bull market. This rush for the exits triggered Fei’s penalty and reward mechanism and sent the price into a nosedive.

There is a subtle but critical lesson here. If the unique selling proposition of your crypto-economic system is predictability and stability – as it must be for a stablecoin – having the initial demand for that coin driven by a highly speculative governance token that will offer ambiguous future rights is asking for trouble. 

Indeed, it is a lesson that ought to be considered by all token designers in the DeFi world, not just stablecoins. The decision not to specify how value accrues to governance tokens is not just risky for investors. It is risky for the protocol itself.

For example, online chatter suggests that if Fei’s future had been put to a governance vote over the course of the week, there would have been substantial support for distributing its enormous ETH treasury back to FEI buyers. This would have recouped individual losses, but probably also have wound the protocol up entirely.

The Fei protocol is trying to do a lot of innovative work at once. If it turns out to be a success, it won’t have been the only successful protocol that had a rocky bootstrapping phase. But it should offer future protocols a critical lesson in tokenomics. 

Governance tokens are one of the most interesting innovations in DeFi. They seem to offer a fast path to decentralization, handing over control from entrepreneurs to a distributed community as quickly as possible, at, after, or even before launch. But the role of governance cannot be an afterthought – a bolt-on that can be pushed to a governance token and left to unknown future decision-makers.

Governance is the philosophical and economic heart of the blockchain and cryptocurrency industry. After all, decentralization is nothing if not the decentralization of governance. As Fei shows, dumping protocol governance onto a speculative token with unclear cash flow and ownership rights introduces a lot of instability into already ambitious protocols.   

Social media has huge problems with free speech and moderation. Could decentralised platforms fix this?

Published at The Conversation. With Marta Poblet and Elizabeth Morton

Over the past few months, Twitter took down the account of the then-President of the United States and Facebook temporarily stopped users from sharing Australian media content. This raises the question: do social media platforms wield too much power?

Whatever your personal view, a variety of “decentralised” social media networks now promise to be custodians of open, censorship-resistant and crowd-curated content, free of corporate and political interference.

But do they live up to this promise?

Cooperatively governed platforms

In “decentralised” social media networks, control is actively shared across many servers and users, rather than a single corporate entity such as Google or Facebook.

This can make a network more resilient, as there is no central point of failure. But it also means no single arbiter is in charge of moderating content or banning problematic users.

Some of the most prominent decentralised systems use blockchain (the technology best known for underpinning Bitcoin). A blockchain system is a kind of distributed online ledger hosted and updated by thousands of computers and servers around the world.

And all of these plugged-in entities must agree on the contents of the ledger. Thus, it’s almost impossible for any single node in the network to meddle with the ledger without the updates being rejected.

Gathering ‘Steem’

One of the most famous blockchain social media networks is Steemit, a decentralised application that runs on the Steem blockchain.

Because the Steem blockchain has its own cryptocurrency, popular posters can be rewarded by readers through micropayments. Once content is posted on the Steem blockchain, it can never be removed.

Not all decentralised social media networks are built on blockchains, however. The Fediverse is an ecosystem of many servers that are independently owned, but which can communicate with one another and share data.

Mastodon is the most popular part of the Fediverse. Currently with close to three million users across more than 3,000 servers, this open-source platform is made up of a network of communities, similar to Reddit or Tumblr.

Users can create their own “instances” of Mastodon — with many separate instances forming the wider network — and share content by posting 500-character-limit “toots” (yes, toots). Each instance is privately operated and moderated, but its users can still communicate with other servers if they want to.

What do we gain?

A lot of concern around social media involves what content is being monetised and who benefits. Decentralised platforms often seek to shift the point of monetisation.

Platforms such as Steemit, Minds and DTube (another platform built on the Steem social blockchain) claim to flip this relationship by rewarding users when their content is shared.

Another purported benefit of decentralised social media is freedom of speech, as there’s no central point of censorship. In fact, many decentralised networks in recent years have been developed in response to moderation practices.

Mastodon provides a set of guidelines for user conduct and has moderators within particular servers (or communities). They have the power to disable, silence or suspend user access and even to apply server-wide moderation.

As such, each server sets its own rules. However, if a server is “misbehaving”, the entire server can be put under a domain block, with varying degrees of severity. Mastodon publicly lists the moderated servers and the reason for restriction, such as spreading conspiracy theories or hate speech.

Some systems are harder to moderate. Blockchain-based social network Minds claims to base its content policy on the First Amendment of the US constitution. The platform attracted controversy for hosting neo-Nazi groups.

Users who violate a rule receive a “strike”. Where the violation relates to “not safe for work” (NSFW) content, three strikes may result in the user being tagged under an NSFW filter. If this happens, other users must opt in to view the NSFW content, for “total control” of their feed.

Minds’ content policy states that NSFW content excludes posts of an illegal nature; illegal posts result in an immediate user ban and removal of the content. If a user wants to appeal a decision, the verdict comes from a randomly selected jury of users.

Even blockchain-based social media networks have content moderation systems. For example, Peepeth has a code of conduct adapted from a speech by Vietnamese Thiền Buddhist monk and peace activist Thích Nhất Hạnh.

“Peeps” falling foul of the code are removed from the main feed accessible from the Peepeth website. But since all content is recorded on the blockchain, it continues to be accessible to those with the technical know-how to retrieve it.

Steemit will also delete illegal or harmful content from its user-accessible feed, but the content remains on the Steem blockchain indefinitely.

The search for open and safe platforms continues

While some decentralised platforms may claim to offer a free-for-all, the reality of using them shows us some level of moderation is both inevitable and necessary for even the most censorship-resistant networks. There are a host of moral and legal obligations which are unavoidable.

Traditional platforms including Twitter and Facebook rely on the moral responsibility of a central authority. At the same time, they are the target of political and social pressure.

Decentralised platforms have had to come up with more complex, and in some ways less satisfying, moderation techniques. But despite being innovative, they don’t really resolve the tension between moderating those who wish to cause harm and maximising free speech.

Financial rules for the algorithm age

Published in the Australian Financial Review

A lot has changed in cryptocurrency since the last bull run in 2017. And these changes have made the regulatory regime that has emerged in Australia since the invention of bitcoin look decidedly creaky – if not completely incoherent – and a serious barrier to fintech innovation and investment.

For the most part, Australian policymakers have preferred to squeeze digital assets into existing regulatory frameworks rather than create new frameworks.

For tax purposes, cryptocurrency has been treated as a traditional financial asset subject to capital gains tax – unless it is used in regular transactions, in which case it is treated like currency. An initial coin offering, where tokens are sold to early investors and users, is generally treated as a share offering or managed investment scheme.

This was the right approach. Entrepreneurs may not have loved the heavy compliance burdens, but at least those burdens were well understood. And we have avoided regulatory disasters like New York’s “BitLicense”, which led to cryptocurrency firms fleeing the state almost the moment it was introduced.

But where in 2017 cryptocurrency users and investors were limited to a relatively small number of digital assets trading on a couple of centralised exchanges, a new class of decentralised finance (DeFi) products has enabled the development of complex financial products and services that are completely decentralised. DeFi undermines Australia’s regulatory approach to cryptocurrency and blockchain.

Everything from loans to derivatives to exchanges is being rebuilt as autonomous digital products outside the traditional finance system. These are not niche innovations. Some estimates put upwards of $50 billion locked up in DeFi products right now.

Consider one of the most innovative financial services in the DeFi space: automated market makers. These AMMs allow users to trade one digital asset for another without going through a traditional central orderbook. Investors – “liquidity providers” – put assets into a pool. People who wish to trade one asset for another make exchanges with the pool, which reprices each asset automatically to keep the pool in balance. Investors earn fees and bear the risk that the external prices of the assets change.
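
To give a rough sense of how that automatic repricing works, here is a minimal sketch of the “constant product” design popularised by AMMs such as Uniswap. It is simplified to the point of caricature – one flat fee, no slippage protection, no liquidity-provider accounting – but it shows how the pool quotes a new price after every trade.

```python
class ConstantProductAMM:
    """Toy x * y = k automated market maker (illustration only)."""

    def __init__(self, reserve_a: float, reserve_b: float, fee: float = 0.003):
        self.reserve_a = reserve_a   # deposited by liquidity providers
        self.reserve_b = reserve_b
        self.fee = fee               # trading fee retained by the pool

    def price_a_in_b(self) -> float:
        """Implied spot price of asset A, denominated in asset B."""
        return self.reserve_b / self.reserve_a

    def swap_a_for_b(self, amount_a: float) -> float:
        """Sell amount_a of asset A into the pool, receive asset B out."""
        a_in = amount_a * (1 - self.fee)
        k = self.reserve_a * self.reserve_b            # the invariant
        b_out = self.reserve_b - k / (self.reserve_a + a_in)
        self.reserve_a += amount_a
        self.reserve_b -= b_out
        return b_out

# e.g. a pool seeded with 100 ETH and 200,000 DAI
pool = ConstantProductAMM(reserve_a=100.0, reserve_b=200_000.0)
print(pool.price_a_in_b())      # 2000.0
print(pool.swap_a_for_b(1.0))   # ~1974.5: large trades move the price against the trader
print(pool.price_a_in_b())      # ~1960.6: selling A into the pool pushes A's price down
```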

AMMs are a brilliant innovation and a regulatory nightmare. Let us start with tax. The Australian Taxation Office treats any token-to-token exchange as a capital gains event, where profits and losses incur a tax liability, just like a normal exchange of financial assets. This regime makes sense for traditional finance. But it creates huge burdens for DeFi.

Imagine a relatively simple DeFi investment – putting bitcoin in an AMM. First, you have to bring your bitcoin onto a smart contract network like ethereum. Bitcoin can only truly exist on the bitcoin blockchain, so you entrust your coins to a provider, who then mints a digital representation of your bitcoin on the ethereum network. You deposit this “wrapped bitcoin” token (and usually another token) into the AMM. You get a receipt – just another token – that represents your share of the pool.

Each of these exchanges is a capital gains event. None of them are denominated in Australian dollars. Even the most diligent DeFi user will inevitably make mistakes when trying to account for the capital gains and losses. Few users even realise they are performing a token-to-token exchange when they make AMM investments. It is hard to describe the capital gains treatment of DeFi as a functioning part of the tax system at all.
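
To see how quickly the accounting piles up, here is a hypothetical walk-through of the investment just described. The dollar figures and the three-step sequence are invented purely to illustrate the arithmetic; the point is that every token-to-token step is a separate disposal that has to be valued in Australian dollars at the moment of the trade.

```python
# Hypothetical illustration of capital gains events in the wrapped-bitcoin
# example above. All cost bases and proceeds are invented for the arithmetic.

def cgt_event(disposal: str, cost_base_aud: float, proceeds_aud: float) -> float:
    """Record one capital gains event and return the gain (or loss) in AUD."""
    gain = proceeds_aud - cost_base_aud
    print(f"{disposal}: cost base ${cost_base_aud:,.0f}, "
          f"proceeds ${proceeds_aud:,.0f}, gain ${gain:,.0f}")
    return gain

gains = []

# 1. Bitcoin is exchanged for "wrapped" bitcoin on ethereum -- a disposal of BTC.
gains.append(cgt_event("BTC -> wrapped BTC", cost_base_aud=40_000, proceeds_aud=55_000))

# 2. The wrapped bitcoin (plus another token) goes into the AMM in exchange
#    for a pool-share receipt token -- a disposal of the wrapped bitcoin.
gains.append(cgt_event("wrapped BTC -> pool receipt", cost_base_aud=55_000, proceeds_aud=56_500))

# 3. Months later the receipt is redeemed for the underlying assets -- a
#    disposal of the receipt token.
gains.append(cgt_event("pool receipt -> wrapped BTC", cost_base_aud=56_500, proceeds_aud=60_000))

print(f"Taxable events: {len(gains)}; net capital gain: ${sum(gains):,.0f}")
# Three taxable events before the investor has touched an Australian dollar.
```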

The tax regime may be a compliance nightmare, but at least it is navigable. There are even harder compliance questions in our imagined DeFi investment. For instance, what actually is an AMM, in law? It looks a lot like a managed investment scheme – that is probably what ASIC will think. Like a traditional managed investment scheme, investors pool money in return for profits and don’t have day-to-day control of the investments. But if an AMM is a managed investment scheme … well, it doesn’t have a manager. Algorithms can’t hold financial licences. Nor on a censorship-resistant blockchain can they be shut down.

There are solutions to these problems. Capital gains events should be limited to when cryptocurrency is converted to fiat or used to buy goods or services. My colleagues Darcy Allen, Aaron Lane and I have called for a new exemption to the managed investment scheme framework – what we call “autonomous investment products”. Where a product is entirely algorithmic, has no ongoing responsible party, and is completely open source and auditable by investors, the heavy compliance burdens of a managed investment scheme don’t make sense.

But these solutions will almost certainly require legislative change. Until now, Australia’s cryptocurrency policy has been made via regulatory guidance. That approach has reached its use-by date. Fintech innovation can’t be left to suffocate under regulatory uncertainty and incoherence.

After GameStop, the rise of Dogecoin shows us how memes can move markets

Published in The Conversation with Jason Potts

One of the most difficult problems in finance right now is figuring out the fundamental economic value of cryptocurrencies. And the past week has complicated this further.

For many cryptocurrency investors, the value of Bitcoin is based on the fact it is artificially scarce. A hard cap on “minting” new coins means there will only ever be 21 million Bitcoin in existence. And unlike national currencies such as the Australian dollar, the rate of release for new Bitcoin is slowing down over time.

Dogecoin, a cryptocurrency that takes its name and logo from a Shiba Inu meme that was popular several years ago, has no such cap. Since its launch in 2013, more than 100 billion Dogecoin have been created, with as many as five billion new coins minted each year.
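
The contrast between the two supply schedules is easy to make explicit with some back-of-the-envelope arithmetic (the block rewards and block times below are approximate):

```python
# Rough sketch of the two issuance schedules; figures are approximate.

# Bitcoin: the block reward started at 50 BTC and halves every 210,000 blocks,
# a geometric series that converges on (just under) 21 million coins.
btc_total = sum(210_000 * 50 / 2**i for i in range(64))
print(f"Bitcoin supply converges on ~{btc_total:,.0f} BTC")

# Dogecoin: the reward has been fixed at 10,000 DOGE per (roughly one-minute)
# block since 2015 -- about five billion new coins every year, with no end date.
doge_per_year = 10_000 * 60 * 24 * 365
print(f"Dogecoin issuance: ~{doge_per_year / 1e9:.1f} billion DOGE per year")

# Dogecoin's inflation *rate* falls over time (five billion coins on an ever
# larger base), but its total supply never stops growing.
```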

But how can a currency with a seemingly unlimited supply have any value at all? And why did Dogecoin’s price suddenly surge more than 800 per cent in 24 hours on January 29?

At the time of publication, the “memecoin” had a total market capitalisation of about $5.6 billion.

Dogecoin is one of the original “altcoins”: cryptocurrencies released in the few years after the pseudonymous Satoshi Nakamoto first released Bitcoin into the wild.

From a technical perspective, Dogecoin isn’t very innovative. Like many early altcoins, it’s based on the original source code of Bitcoin.

Or more technically, it’s based on Litecoin, which in turn was based on Bitcoin — but with some small modifications such as faster transactions and the removal of the supply cap. But Dogecoin is much more interesting when seen through a cultural lens.

The cryptocurrency was created by software engineers Billy Markus and Jackson Palmer — although Palmer, an Australian, has since walked away from the project. They branded it with the Doge meme partly to be funny, but also to distance it from Bitcoin’s then questionable reputation as a currency for illicit transactions.

Now, Dogecoin has outlasted almost all the early derivative altcoins and has a thriving community of investors. In 2014, Dogecoin holders sponsored the Jamaican bobsled team. Soon after, they sponsored a NASCAR driver.

Elon Musk, the world’s richest man, is among the cryptocurrency’s high-profile advocates. In December last year, a tweet from Musk sent Dogecoin’s price soaring.

Reddit threads proclaim Dogecoin’s value as a new global currency. Musk himself shared a similar sentiment a few days ago. Speaking on the app Clubhouse, he said:

Dogecoin was made as a joke to make fun of cryptocurrencies, but fate loves irony. The most ironic outcome would be that Dogecoin becomes the currency of Earth in the future.

But Dogecoin is best thought of as a cultural product, rather than a financial asset. The reality is few cryptocurrency users hold it as a serious investment or to use in regular transactions. Instead, to own Dogecoin is to participate in a culture.

People buy it because it’s fun to have, is inherently amusing and comes with a welcoming and enjoyable community experience.

If we start thinking of the cryptocurrency as a cultural product, last week’s sudden jump in Dogecoin’s price makes sense. The boost came just after a meme-centric community managed to drive the share price of videogame retailer GameStop from US$20 to US$350 in mere days.

This swarm behaviour was unlike anything seen before — and it frightened global financial markets.

One particularly interesting aspect of the Reddit forum r/WallStreetBets — which coordinated the attack on the hedge fund that had effectively bet on GameStop’s share price falling — was how many users were having fun.

It’s no surprise activity surrounding Dogecoin has a similar vibe; it was designed to be fun right from the start.

Some people participate in financial markets as a form of consumption — meaning for entertainment, leisure and to experience community — just as much as they do for investment.

Cultural assets such as Dogecoin are hard to systematically value when compared to financial assets, a bit like how we don’t have a fundamental theorem for pricing art.

Almost by definition, the demand for a memecoin will fluctuate as wildly as internet culture itself does, turning cultural bubbles into financial bubbles. RMIT professor and crypto-ethnographer Ellie Rennie calls these “playful infrastructures”.

By inspecting Dogecoin closely, we can learn a lot about the interplay of technology, culture and economics.

Moreover, cryptocurrencies are extraordinarily diverse. Some are built for small payments or to be resilient stores of value. Others protect financial privacy or act as an internal token to manage smart contracts, supply chains or electricity networks.

Under the hood, Bitcoin and Dogecoin look almost exactly the same. Their code differs in only a few parameters. But their economic functions are almost entirely opposite.
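
For the technically curious, the handful of parameters that differ look roughly like this (an approximate, illustrative comparison rather than an exhaustive diff of the two codebases):

```python
# Approximate, illustrative comparison -- not an exhaustive diff of the codebases.
PARAMETERS = {
    "Bitcoin":  {"hashing": "SHA-256", "block time": "~10 minutes",
                 "block reward": "halves every ~4 years", "supply cap": "21 million"},
    "Dogecoin": {"hashing": "Scrypt",  "block time": "~1 minute",
                 "block reward": "fixed at 10,000 DOGE", "supply cap": "none"},
}
for coin, params in PARAMETERS.items():
    print(coin, params)
```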

Bitcoin is a kind of “digital gold” adopted as a secure hedge against political and economic uncertainty. Dogecoin, on the other hand, is a meme people add to their digital wallet because they think it’s funny.

But in an open digital economy, memes move markets.