Why a US crypto crackdown threatens all digital commerce

Australian Financial Review, 10 August 2022

The US government’s action against the blockchain privacy protocol Tornado Cash is an epoch-defining moment, not only for cryptocurrency but for the digital economy.

On Tuesday, the US Treasury Department placed sanctions on Tornado Cash, accusing it of facilitating the laundering of cryptocurrency worth $US7 billion ($10.06 billion) since 2019. Some $455 million of that is connected to a North Korean state-sponsored hacking group.

Even before I explain what Tornado Cash does, let’s make it clear: this is an extraordinary move by the US government. Sanctions of this kind are usually put on people – dictators, drug lords, terrorists and the like – or specific things owned by those people. (The US Treasury also sanctioned a number of individual cryptocurrency accounts, in just the same way as they do with bank accounts.)

But Tornado Cash isn’t a person. It is a piece of open-source software. The US government is sanctioning a tool, an algorithm, and penalising anyone who uses it, regardless of what they are using it for.

Tornado Cash is a privacy application built on top of the ethereum blockchain. It is useful because ethereum transactions are public and transparent: any observer can trace funds through the network. Blockchain explorer websites such as Etherscan make this possible for amateur sleuths, and large “chain analysis” firms working with law enforcement can link users and transactions with remarkable ease.

Tornado Cash severs these links. Users send their cryptocurrency tokens to Tornado Cash, where they are mixed with the tokens of other users and hidden behind a state-of-the-art cryptographic technique called “zero-knowledge proofs”. The user can then withdraw their funds to a clean ethereum account that cannot be traced back to their original account.
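To make that deposit-and-withdraw pattern concrete, here is a deliberately simplified sketch in Python. The real protocol stores note commitments in an on-chain Merkle tree and verifies a zk-SNARK at withdrawal; the “proof” below is simulated with a plain hash check, and all names and amounts are illustrative only.

```python
import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    # Stand-in hash; the real protocol uses SNARK-friendly primitives.
    return hashlib.sha256(b"".join(parts)).digest()

class ToyMixer:
    def __init__(self):
        self.commitments = set()        # public set of deposited notes
        self.spent_nullifiers = set()   # public, prevents double-withdrawal

    def deposit(self, commitment: bytes) -> None:
        # The depositor publishes hash(nullifier, secret); the only link to
        # their address is the deposit transaction itself.
        self.commitments.add(commitment)

    def withdraw(self, nullifier: bytes, secret: bytes, recipient: str) -> bool:
        # Real Tornado Cash verifies a zero-knowledge proof that SOME
        # commitment in the set opens to (nullifier, secret) without revealing
        # which one. Here we check it directly, which would leak the link;
        # this is for illustration only.
        if h(nullifier, secret) not in self.commitments:
            return False
        if h(nullifier) in self.spent_nullifiers:
            return False                # note already spent
        self.spent_nullifiers.add(h(nullifier))
        print(f"pay 1 ETH to {recipient}")
        return True

# Usage: create a note, deposit it, later withdraw to a fresh address.
mixer = ToyMixer()
nullifier, secret = secrets.token_bytes(31), secrets.token_bytes(31)
mixer.deposit(h(nullifier, secret))
assert mixer.withdraw(nullifier, secret, "0xFreshAddress")
```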

Obviously, as the US government argues, there are bad reasons that people might want to use such a service. But there are also very good reasons why cryptocurrency users might want to protect their financial privacy – commercial reasons, political reasons, personal security, or even medical reasons. One mundane reason that investment firms used Tornado Cash was to prevent observers from copying their trades. A more serious reason is personal security. Wealthy cryptocurrency users need to be able to obscure their token holdings from hackers and extortionists.

Tornado Cash is a tool that can make these otherwise transparent blockchains more secure and more usable. No permission has to be sought from anyone to use it. The Treasury Department has accused Tornado Cash of “laundering” more than $US7 billion, but that appears to be the total value of funds that have passed through the service, not the value connected to unlawful activity. There is no reason to believe that the Tornado Cash developers or community solicited the business of money launderers or North Korean hackers.

Now American citizens are banned from interacting with this open-source software at all. It is a clear statement from the world’s biggest economy that online privacy tools – not just specific users of those tools, but the tools themselves – are the targets of the state.

We’ve been here before. Cryptography was once a state monopoly, the exclusive domain of spies, diplomats and code breakers. Governments were alarmed when academics and computer scientists started building cryptography for public use. Martin Hellman, one of those who invented public key cryptography in the 1970s (along with Whitfield Diffie and Ralph Merkle), was warned by friends in the intelligence community his life was in danger as a result of his invention. In the so-called “crypto wars” of the 1990s, the US government tried to enforce export controls on cryptographic algorithms.

One of the arguments made during those political contests was that code is speech: because software is just text and lines of code, it should receive the same constitutional protections as any other form of expression.

GitHub, owned by Microsoft, is a global repository for open-source software. Almost immediately after the Treasury sanctions were introduced this week, GitHub closed the accounts of Tornado Cash developers. Not only did this remove the project’s source code from the internet; it also meant GitHub and Microsoft were implicitly abandoning the long-fought principle that code deserves protection as a form of free expression.

An underappreciated fact about the crypto wars is that if the US government had been able to successfully restrict or suppress the use of high-quality encryption, then the subsequent two decades of global digital commerce could not have occurred. Internet services simply would not have been secure enough. People such as Hellman, Diffie and Merkle are now celebrated for making online shopping possible.

We cannot have secure commerce without the ability to hide information with cryptography. By treating privacy tools as if they are prohibited weapons, the US Treasury is threatening the next generation of commercial and financial digital innovation.

Reliable systems out of unreliable parts

Amsterdam Law & Technology Institute Forum, 27 July 2022.

How we understand where something comes from shapes where we take it, and I’m now convinced we’re thinking about the origins of blockchain wrong.

The typical introduction to blockchain and crypto for beginners – particularly non-technical beginners – gives Bitcoin a sort of immaculate conception. Satoshi Nakamoto suddenly appears with a fully formed protocol and disappears almost as suddenly. More sophisticated introductions will observe that Bitcoin is an assemblage of already-existing technologies and mechanics – peer-to-peer networking, public-key cryptography, the principle of database immutability, the hashcash proof-of-work mechanism, some hand-wavey notion of game theory – put together in a novel way. More sophisticated introductions again will walk through the excellent ‘Bitcoin’s academic pedigree’ paper by Arvind Narayanan and Jeremy Clark, which guides readers through the scholarship that underpins those technologies.

This approach has many weaknesses. It makes it hard to explain proof-of-stake systems, for one. But what it really misses – what we fail to pass on to students and users of blockchain technology – is the sense of blockchain as a technology for social systems and economic coordination. Instead, it comes across much more like an example of clever engineering that gave us magic internet money. We cannot expect every new entrant or observer of the industry to be fully signed up to the vision of those that came before them. But it is our responsibility to explain that vision better.

Blockchains and crypto are the heirs of a long intellectual tradition of building fault-tolerant distributed systems using economic incentives. The problem this tradition seeks to solve is: how can we create reliable systems out of unreliable parts? In that simply stated form, the question serves not just as a mission statement for distributed systems engineering but for all of social science. In economics, for example, Peter Boettke and Peter Leeson have called for a ‘robust political economy’: the creation of a political-economic system robust to problems of information and incentives. In blockchain we see computer engineering converge with the frontiers of political economy; the two fields are built on radically different assumptions but have arrived at the same answers.

So how can we tell an alternative origin story that takes beginners where they need to go? I see at least two historical strands, each of which take us down key moments in the history of computing.

The first starts with the design of fault-tolerant systems shortly after the Second World War. Once electronic components and computers began to be deployed in environments with high reliability needs (say, fly-by-wire aircraft or the Apollo program), researchers turned their minds to how to ensure that the failure of parts of a machine did not lead to critical failure of the whole machine. The answer was instinctively obvious: add backups (that is, multiple redundant components) and have what John von Neumann in 1956 called a ‘restoring organ’ combine their multiple outputs into a single output that can be used for decision-making.

But this creates a whole new problem: how should the restoring organ reconcile the components’ outputs if they start to diverge from each other? How will it know which component has failed? One solution was to have the restoring organ treat each component’s output as a ‘vote’ about the true state of the world. Here, already, we can see social science and computer science working in parallel: Duncan Black’s classic study of voting in democracies, The Theory of Committees and Elections, was published just two years after von Neumann’s presentation of the restoring organ tallying up the votes of its constituents.
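A minimal sketch of that voting idea in Python, assuming each redundant component reports a single value and the ‘restoring organ’ simply takes a majority vote (the failure model and names here are illustrative, not von Neumann’s original formulation):

```python
import random
from collections import Counter

def restoring_organ(outputs):
    """Collapse redundant component outputs into one value by majority vote."""
    value, votes = Counter(outputs).most_common(1)[0]
    # With 2f+1 replicas, up to f arbitrary failures can be outvoted.
    return value if votes > len(outputs) // 2 else None   # None: no majority

def flaky_component(true_value, failure_rate=0.2):
    """A component that occasionally reports garbage."""
    return random.choice([0, 1, 99]) if random.random() < failure_rate else true_value

readings = [flaky_component(true_value=1) for _ in range(5)]
print(readings, "->", restoring_organ(readings))
```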

The restoring organ was a single, central entity that collated the votes and produced an answer. But in the distributed systems that came to dominate fault-tolerance research through the 1970s and 1980s there could be no single restoring organ – the system would have to come to consensus as a whole. The famous 1982 paper ‘The Byzantine Generals’ Problem’ by Leslie Lamport, Robert Shostak and Marshall Pease (another of the half-taught and quarter-understood parts of the blockchain origins canon) addresses this research agenda by asking how many voting components are needed to reach consensus in the presence of faulty – even malicious – components. One of their insights was that cryptographically unforgeable signatures make the communication of information (‘orders’) far more reliable, greatly simplifying the problem.
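For reference, the paper’s central quantitative result (paraphrased here, so treat the exact statement as a gloss rather than a quotation): with only ‘oral’, unsigned messages, tolerating m traitorous generals requires at least

n ≥ 3m + 1

generals in total, whereas unforgeable signed messages relax that bound and make the problem solvable for essentially any number of generals – which is why signatures simplify the problem so dramatically.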

The generation of fault-tolerant distributed consensus algorithms built during the 1990s – most prominently Lamport’s Paxos and the later Raft – now underpins much of global internet and commerce infrastructure.

Satoshi’s innovation was to make the distributed agreement system permissionless – more precisely, joining the network as a message-passer or validator (miner) does not require the agreement of all other validators. To use the Byzantine generals’ metaphor: now anyone can become a general.

That permissionlessness gives it a resilience against attack that the byzantine fault tolerant systems of the 1990s and 2000s were never built for. Google’s distributed system is resilient against a natural disaster, but not a state attack that targets the permissioning system that Google as a corporate entity oversees. Modern proof-of-stake systems such as Tendermint and Ethereum’s Casper are an evolutionary step that connects Bitcoin’s permissionlessness with decades of knowledge of fault tolerant distributed systems.

This is only a partial story. We still need the second strand: the introduction of economics and markets into computer science and engineering.

Returning to the history of computing’s earliest days, the institutions that hosted the large expensive machines of the 1950s and 1960s needed to manage the demand for those machines. Many institutions used sign-up sheets, some even had dedicated human dispatchers to coordinate and manage a queue. Timesharing systems tried to spread the load on the machine so multiple users could work at the same time.

It was not long before some researchers realised that sharing time on a machine was fundamentally a resource allocation problem that could be tackled with relative prices. By the late 1960s Harvard University was running a daily auction to reserve space on its PDP-1 machine, denominated in a local funny money that was issued and reissued each day.
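As an illustration of the allocation problem (not a description of Harvard’s actual mechanism, whose details are not given here), the core idea fits in a few lines: give each user a budget of funny money each day and let sealed bids ration the available machine slots.

```python
def daily_machine_auction(bids, slots_available):
    """Allocate machine-time slots to the highest sealed bids.

    bids: dict of user -> bid in that day's funny money (budgets are
    reissued each day, so there is little incentive to hoard).
    Returns the winning users and a simple uniform clearing price
    (the lowest winning bid).
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winners = [user for user, _ in ranked[:slots_available]]
    clearing_price = ranked[len(winners) - 1][1] if winners else 0
    return winners, clearing_price

bids = {"alice": 40, "bob": 25, "carol": 60, "dave": 10}
print(daily_machine_auction(bids, slots_available=2))   # (['carol', 'alice'], 40)
```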

As the industry shifted from a many-users, one-computer structure to a many-users, many-distributed-computers structure, the computer science literature started to investigate the allocation of resources between machines. Researchers stretched for the appropriate metaphor: were distributed systems like organisations? Or were they like separate entities tied together by contracts? Or were they like markets?

In the 1988 Agoric Open Systems papers, Mark S. Miller and K. Eric Drexler argued not simply for the use of prices in computational resource allocation but for reimagining distributed systems as a full-blown Hayekian catallaxy, in which computational objects hold ‘property rights’ and compensate each other for access to resources. (Full disclosure: I am an advisor to Agoric, Miller’s current project.) As they noted, one missing but necessary piece for realising this vision was an exchange infrastructure that could provide an accounting and currency layer without the need for a third party such as a bank. This, obviously, is what Bitcoin (and indeed its immediate predecessors) sought to provide.

We sometimes call Bitcoin the first successful fully-native, fully-digital money, but skip over why that is important. Cryptocurrencies don’t just allow for censorship-free exchange. They radically expand the number of exchanges that can occur – not just between people but between machines. Every object in a distributed system, all the way up and down the technology stack, has an economic role and can form distinctly economic relationships. We see this vision in its maturity in the complex economics of resource allocation within blockchain networks.

Any origin story is necessarily simplified, and the one I have proposed here skips over many key sources of the technology that is now blockchain: cryptography, the history and pre-history of smart contracts, and of course the cypherpunk community from which Bitcoin itself emerged. But I believe this narrative places us on a much sounder footing to talk about the long-term social and economic relevance of blockchain.

As Sinclair Davidson, Jason Potts and I have argued elsewhere, blockchains are an institutional technology. They allow us to coordinate economic activity in radically different ways, taking advantage of the global-first, trust-minimised nature of this distributed system to create new types of contracts, exchanges, organisations, and communities. The scale of this vision is clearest when we compare it with what came before.

Consider, for instance, the use of prices for allocating computer time. The early uses of prices were either to recoup the cost of operation for machines, or as an alternative to queuing, allowing users to signal the highest value use of scarce resources. But prices in real-world markets do a lot more than that. By concentrating dispersed information about preferences they inspire creation – they incentivise people to bring more resources to market, and to invent new services and methods of production that might earn super-normal returns. Prices helped ration access to Harvard’s PDP-1, but could not inspire the PDP-1 to grow itself more capacity.

The Austrian economist Ludwig von Mises wrote that “the capitalist system is not a managerial system; it is an entrepreneurial system”. The market that is blockchain does not simply allocate resources efficiently across a distributed system; instead it has propelled an explosion of entrepreneurial energy that is speculative and chaotic but above all innovative. The blockchain economy grows and contracts, shaping and reshaping just like a real economy. It is not simply a fixed network of nodes and connections. It is a market: it evolves.

We’ve of course seen evolving networks in computation before. The internet itself is a network – a web that is constantly changing. And you could argue that the ecosystem of open-source software, which allows developers to layer and combine small, shared components into complex systems, looks a lot like an evolutionary system. But neither directly uses the price system for coordination, and they are poorer for it. The economics of internet growth has encouraged the emergence of a small number of concentrated firms, while the economic needs of open-source are chronically unmet. To realise the potential of distributed computational networks we need the tools of an economy: property rights and a native means of exchange.

Networks can fail for many reasons: nodes might crash, might fail to send or receive messages correctly, their responses might be delayed longer than the network can tolerate, they might report incorrect information to the rest of the network. Human social systems can fail when information is not available where and when it is needed, or if incentive structures favour anti-social rather than pro-social behaviours.

As a 1971 survey of fault-tolerant computing noted: “The discipline of fault-tolerant computing would be unnecessary if computer hardware and programs would always behave in perfect agreement with the designer’s or programmer’s intentions.” Blockchains make the joint mission of economics and computer science stark: how to build reliable systems out of unreliable parts.

On Coase and COVID-19

Abstract: From the epidemiological perspective, the COVID-19 pandemic is a public health crisis. From the economic perspective, it is an externality and a social cost. Strikingly, almost all economic policy to address the infection externality has been formulated within a Pigovian analysis of implicit taxes and subsidies directed by a social planner drawing on social cost-benefit analysis. In this paper, we draw on Coase (1960) to examine an alternative economic methodology of the externality, seeking to understand how an exchange-focused analysis might give us a better understanding of how to minimise social cost. Our Coasean framework allows us to then further develop a comparative institutional analysis as well as a public choice theory analysis of the pandemic response.

Author(s): Darcy W. E. Allen, Chris Berg, Sinclair Davidson, Jason Potts

Journal: European Journal of Law and Economics

Vol: 54 Year: 2022 Pages: 107–125

DOI: 10.1007/s10657-022-09741-w

Cite: Allen, Darcy W. E., Chris Berg, Sinclair Davidson, and Jason Potts. “On Coase and COVID-19.” European Journal of Law and Economics, vol. 54, 2022, pp. 107–125.

1 Introduction

Government responses to the COVID-19 public health pandemic have rested on the notion that governments can intervene to mitigate the externalities of the virus. The dominant policy response has been to impose ‘social distancing’ on much of the economy to mitigate transmission externalities. Recent literature argues that those externalities might be less prevalent and less costly than otherwise assumed (see Leeson & Rouanet, 2021) and that we must consider the realistic information and incentive assumptions that underpin government responses to those externalities (see Coyne et al., 2021; Powell, 2021). Our contribution in this paper is, like that of Ronald Coase’s work, a methodological one. We explore and contrast the standard Pigovian analysis of the pandemic with a Coasean and comparative institutional approach (focused on bargaining and exchange over externalities).

Many economists responded to the COVID-19 pandemic by seeking a unified epidemiological-economic analytic framework that might yield insight into optimal lockdown policies. Economists combined the canonical susceptible-infected-recovered (SIR) epidemic model of the dynamics of a virus through a population with canonical macroeconomic models, including Dynamic Stochastic General Equilibrium (DSGE) models (Alvarez et al., 2020; Eichenbaum et al., 2020; Gonzalez-Eiras & Niepelt, 2020), with extensions to learning-by-doing models (Jones et al., 2020) and search-and-match models (Garibaldi et al., 2020). This family of unified SIR-DSGE models shows that the decentralized equilibrium is inefficient because of the contagion externality, and why a social welfare maximizing social planner will frontload mitigation and favour an earlier and harder lockdown than would decentralized agents.

A major limitation of this social planner perspective, however, is that the health shock and policy response are both assumed to be exogenous. This is particularly the case in epidemiological models that tend to use historical data to calibrate agent behaviour, rather than assuming agents will respond to incentives or themselves form rational expectations-type models to guide their decentralised actions. In a Lucas-critique style analysis, Chang and Velasco (2020) develop an economic theory of pandemics with forward-looking agents that shows how individual decisions about whether to go to work affect transmission dynamics, yet these decisions are endogenous to economic expectations of policy actions that affect the consequences of going to work (e.g. trust in government, expected stimulus, expected behaviour of other agents given the expected stimulus, etc.). The extent of endogeneity in externalities and expectations and the complex feedback between public health diagnostics and economic policy treatments has been a major revelation as both public health experts and economic policy-makers have scrambled to deal with the COVID-19 pandemic (see also Born et al., 2021).

Another limitation of the social planner models is their reliance on representative-agent or uniform models. They treat populations, and the policies imposed on them, as homogeneous. This simplified population analysis is largely for modelling convenience and due to severe real-time data limitations. Elaborations of the unified SIR-DSGE social choice models have introduced targeted-versus-uniform lockdown policy in a multi-risk model, reporting “the qualitative finding that semi-targeted policies significantly outperform uniform policies” (Acemoglu et al., 2020, p. 4). In that model, social welfare was maximized with targeted policies that focused on protecting high-risk subpopulations rather than the uniform policies favoured by epidemiologists. This analysis is interesting because it shows how, in a pandemic context, a social planner can trade off the costs and benefits to different groups in order to maximize a social welfare function. Social planners internalise externalities through taxation of work and consumption to affect containment (the stick) and redistribution to subsidise containment (the carrot).

This conception of social distancing is particularly Pigovian, where a mandatory social distancing policy is the equivalent of a 100 per cent tax rate on that activity. This approach forms the dominant modelling assumption so far used in analysis of optimal policy response to COVID-19. But once we start thinking about pandemics from an economic policy perspective, and then extend that to multiple agents, a new analytic framework comes into view: a Coasean analysis built around Ronald Coase’s famous theorem (Coase, 1959, 1960). A Coasean approach to the pandemic variously focuses on the reciprocal nature of externalities and the institutional conditions under which those externalities may or may not be bargained away through exchange (e.g. see Williamson, 2020; Coyne et al., 2021; Leeson & Rouanet, 2021; Boettke & Powell, 2021; Paniagua & Rayamajhee, 2021). This paper builds these insights together with comparative institutional analysis to explore and understand public policy responses to the pandemic.

In the Pigovian model of externalities, a social planner intervenes to reallocate economic resources in order to internalise the externality (in this case the contagion and congestion externality, Jones et al., 2020). In the Coasean approach, parties bargain their way to a solution that is resolved with an exchange that internalizes the externality. In the uniform SIR-DSGE model there is no possibility of Coasean bargaining because there is effectively just one agent (the susceptible population), which then experiences probabilities of transition to states of infection and recovery. It is the social planner who chooses in this formulation, and the policy choices then update the parameters of the model society.

In a model with individual decentralized agents, each agent’s choices impose externalities on others. These impositions propagate and fall unevenly across the economy. When someone decides to leave their home when infected, they may be incentivized to do so for leisure or other economic benefits, but there is no way for those harmed by that action (through the increased risk of infection for others) to offer them an incentive to stay home. There are missing property rights and missing markets that would enable all the third parties to pay them to stay at home. It is important to note here that, as argued in Leeson and Rouanet (2021, p. 1113), the externalities imposed on private property are of a different nature to those on public property: “residual infection risk that visitors face from the on-site behaviours of other visitors is infection risk that they face contractually and thus risk that does not impose on-site external costs” (see also Boettke & Powell, 2021). That is, externalities on private property, such as on-site transmissions that occur at a cafe, are internalized through contracts with higher or lower prices. Nevertheless, in economic theory the transaction costs of each agent contracting with every other agent to stay at home, or not go shopping, would swamp the expected benefits of the trade. It would simply be too costly for those mutually beneficial trades to be discovered, negotiated and enforced. So a decentralised model will not be able to internalise the externality through trade. This is why the unified SIR-DSGE models recognize both the existence of the COVID-19 externality and the social welfare suboptimality of a decentralized solution.

Now consider the Coasean analysis between coalitions sorted by risk, as in the multi-risk Acemoglu et al. (2020) model. Suppose for simplicity there are just two self-identified and self-sorting groups: (1) high risk of death from the disease; and (2) low risk of death. (We consider below the implications of strategic deception about identity in these groups, and the effects of uncertainty.) Around the world, governments overwhelmingly adopted uniform lockdown policies, regardless of cohort risk. This has in effect imposed different statistical costs and benefits on the different risk groups. The low-risk groups are paying a large price in terms of lost utility from work and consumption to benefit a different group in terms of changed risk of mortality. But these are statistical, not absolute, risk groups. A uniform policy means that governments do not need to identify, target or differentially enforce policies: they apply to all citizens. Nevertheless, as the Acemoglu et al. (2020) analysis indicates, if the information and enforcement cost assumptions of the model are true, a targeted approach would be superior in terms of minimizing deaths and economic losses.

A Coasean analysis asks a different question: could the groups themselves bargain their way to the same Pareto superior equilibrium? Or is a social planner necessary to get to good equilibria? To address this, we need to think about which group is imposing externalities on whom. If the low-risk group is freely moving about, it is imposing a contagion externality on the high-risk group, owing to its lower expected cost of infection (the low-risk group expects to recover, not to die). But if the high-risk group prohibits the low-risk group from moving freely about and making a living, then the externality is being imposed by the high-risk group on the low-risk group. (There is also a congestion externality imposed inter-group, as each person who goes to work or to market imposes an increased risk on everyone else who also decides to go, but we ignore that here.) Just as in Coase (1960), both groups are imposing costs on the other group – that is, the externalities are reciprocal. The Coasean analysis asks not who is causing the harm (a question perhaps of morality or justice), but rather who can avoid the harm at lowest cost (a question of economic efficiency). Under institutional conditions of clear property rights and low transaction costs we expect a bargaining solution to arrive at the most economically efficient outcome (see McChesney, 2006; Fox, 2007). Indeed, as Boettke and Powell (2021, p. 1095) describe, in the standard law and economics approach this would involve “assigning rights such that the least cost mitigator bears the burden of adjusting to the externality.”
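A stylised numerical example of that logic (the figures are invented purely for illustration): suppose confining the low-risk group costs it 100 in lost income, leaving everyone unrestricted imposes an expected health cost of 60 on the high-risk group, and the high-risk group could instead shield itself at a cost of 30. Total social cost under the three options is 100, 60 and 30 respectively, so the cost-minimising arrangement has the high-risk group, as the least-cost avoider, do the adjusting; and with clear entitlements and low transaction costs the groups could bargain to that outcome whichever group initially held the ‘right’.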

Whether such conditions hold depends upon a range of cultural, social and political factors. For instance, the high-risk groups (typically those over 65) may bargain with low-risk groups (working-age populations) through political-democratic brokering: they might send messages to politicians that use voting blocs as rewards or threats. High-risk groups could use political power to force a uniform policy. In turn, low-risk groups may agree to that bargain in return for expected wealth transfers, such as raids on fixed-income pension commitments through inflation. As such, we can think of the policy decision not through the additive utility lens of a social welfare function (a Pigovian lens) but rather as political brokering of coalitional exchanges across different risk groups in society (a Coasean lens). In this way we can shift from what James Buchanan (1964) referred to as an ‘allocation’ lens of economic inquiry towards an ‘exchange’ paradigm. The former emphasizes the allocation of resources by a social planner with relevant information, while the latter “focuses on the process of interaction between people within a context-specific, and varying, institutional environments”, with more realistic assumptions about information and incentives (Coyne et al., 2021, p. 1122).

If transaction costs were zero, then “…each person would strike a deal with every other person whose infection risk their behavior might affect or whose behavior might affect their infection risk. All costs of such behavior would be internalised” (Leeson & Rouanet, 2021, p. 1109). Of course, even that simple two-group politically-mediated Coasean bargaining prospect is conditional upon secure property rights (in this case well-formed and credible voting coalitions) as well as good and easily identifiable information about which risk group each individual belongs to. As further groups are identified, the epistemic challenges compound. As Coyne et al. (2021, p. 1119) explore, the political economy of state responses to COVID-19 must consider both policymakers’ “epistemic constraints they face in trying to solve that problem” and those policymakers’ incentives. As Williamson (2020) argues in developing a Coasean social contract model of the pandemic, the “…large variations in individual trade-offs and private information about such trade-offs” suggest solutions based on individual choices and incentives rather than mandates. Early in the pandemic, however, there was a great deal of ambiguity about information such as risk profiles, and while growing evidence does seem to confirm that specific factors do characterise distinct risk groups (e.g. age, comorbidity), significant epidemiological uncertainty remains. Nevertheless, even a simple or ‘naive Coasean’ analysis of the exchange approach to COVID-19 policy is likely to yield valuable insight into important economic issues and considerations that are largely or entirely ignored by the Pigovian or social welfare economic analysis.

A public health crisis involving an infectious disease is clearly a negative externality. Infected individuals who encounter non-infected, healthy individuals can pass on the disease, resulting in subsequent illness or even death. In the first instance, this can be described as a ‘health externality’, which is typically negative. Further distinctions between the types of externality in a pandemic have been outlined, such as ‘on-site externalities’ (where people impose externalities on others at a given site) and ‘off-site externalities’ (where the externalities are the effect of increasing the infection risk of others at different sites) (see Leeson & Rouanet, 2021). Further complicating the nature of pandemic externalities, Rayamajhee et al. (2021) argue that pandemic externalities, rather than being global as is often assumed, are in reality “nested externalities at multiple scales”, where different actions at different scales have costs or benefits. For our purposes, we distinguish simply between a ‘health externality’ and a ‘behavioural externality’.

One of the challenges facing decision makers in relation to the COVID-19 pandemic is the lack of information about the virus itself, including the ‘health externality’ associated with it. Initially there was no knowledge of the characteristics of the virus: how much time there was between infection and symptoms, how contagious it might be and under what circumstances, what the fatality rate was, and so on. The social cost associated with spreading COVID-19 was unknown or highly uncertain through February and March 2020, when policy choices were being made. For the most part, policymakers in most countries assumed the social cost would be very high, and the unprecedented global policy responses (relative to viruses such as the seasonal flu) reflect that assumption.

The second externality (the ‘behavioural externality’) caused by COVID-19 may be either negative or positive. The behavioural response to the pandemic resulted in individuals voluntarily self-isolating in order to prevent themselves from contracting COVID-19. This ‘behavioural externality’ has similarities to a pecuniary externality in a market. To the extent that individuals withdraw from economic activity and reduce their consumption, this imposes costs on others and is a negative externality. It could also be the case, however, that these individuals, by following their own self-interest, inhibit the spread of the virus. If this were the case, then their behavioural response is a positive externality. On balance, the net externality could be positive or negative. For reasons that we explain below, policy makers acted in a way that suggests they judged the net effect of this behavioural externality to be negative.

While previous studies have examined the nature of externalities in the pandemic (e.g. Leeson & Rouanet, 2021; Rayamajhee et al., 2021), the political economy of state responses (e.g. Boettke & Powell, 2021; Coyne et al., 2021), the complexity of pandemic policy (e.g. Pennington, 2021) and the economic consequences of the pandemic (Allen et al., 2020), this paper draws theoretical insights from Coasean economic theory, integrating these findings into a broader comparative institutional and public choice perspective. In Sect. 2 we discuss the origin and various meanings of the Coase theorem. In Sect. 3 we apply these to the COVID-19 pandemic using the framework of comparative institutional analysis. Section 4 considers pandemic management as a transaction cost problem. Section 5 examines some public choice theory considerations. Conclusions are offered in Sect. 6.

2 Beyond vulgar Coaseanism

One challenge for economists when approaching the Coase theorem is that there are many interpretations of what that theorem might be. Some economists, like Paul Samuelson, suggested that the Coase theorem was not a theorem at all. Other economists, like George Stigler, conflated it with other insights (on Stigler and Coase see Marciano, 2018). Unfortunately, Ronald Coase himself gave some credence to the Stigler interpretation, while insisting that he had made more of a methodological contribution as opposed to a hard and fast insight.

Coase (1988, p. 157) reports that he first expressed his theorem in a 1959 paper that had appeared in the Journal of Law and Economics. There he had made use of the example of a newly discovered cave. The argument was that the initial ownership of the cave and the ultimate use of the cave were independent of each other. The cave would be put to its most valuable use. The more famous 1960 article was an elaboration of that principle. As Coase (1988) notes, in the absence of transaction costs there can be no deviation between private and social costs.

Deirdre McCloskey (1988, p. 368) argues that this version of the Coase theorem is really ‘Adam Smith’s theorem’ – that resources will gravitate into the hands of those who value them the most – if transaction costs are zero. According to McCloskey, the Coase theorem tells us that transaction costs do matter. While this is correct, it does seem to abstract from Coase’s other important contributions in his 1960 paper. Rather than express a theorem, Coase was attempting to make a methodological point – that how economists thought about social cost suffered from basic defects.

Coase recognised and emphasised that social costs problems (i.e. externalities) were reciprocal. In many of his examples the individuals were imposing harm upon each other. The question in Coase’s mind was: who should harm whom? The answer that he kept returning to was the arrangement which maximised the value of production. By contrast, the Pigovian solution to externalities would be to determine who had injured whom, and then require the advantaged party to compensate the injured party or levy a tax on the advantaged party.

Coase (1960, p. 131) also suggested, unkindly but not incorrectly, that economists did not carefully think through the problems at hand, leading them “to declaim about the disadvantages of private enterprise and the need for Government regulation.” He also argued that economists did not explore the full set of possible solutions to any problem of social cost. When confronted by a social cost, rather than immediately consider government regulation, there are other solutions that should be carefully evaluated. The immediate and obvious option, he argued, is to do nothing – very often the costs of doing something will be greater than the benefits of that action. He then suggests that markets could be deployed to resolve social costs. In a world of zero transaction costs the Adam Smith principle applies. Coase (1960), however, recognised that transaction costs may not be zero, or even low, and that markets would not always be able to resolve negative externalities.

It is important to dwell on that point. Very often Coasean solutions to externalities suggest that all that needs to be done is for property rights to be allocated to a party and then leave the market to allocate use rights. This may be a Coasean solution to the problem of social cost, but it is not the Coasean solution. This ‘let winners compensate losers’ or ‘losers bribe winners’ approach to resolving problems of social costs can be described as ‘vulgar Coaseanism’.

In the presence of market failure, given transaction costs, Coase (1960, p. 115) points to his 1937 paper on the nature of the firm. There he had argued that hierarchical costs within the firm could be lower than transaction costs within the market, and that firms existed where administrative decision-making costs were lower than market transaction costs. It is possible that some social costs can be privatised through vertical integration.

Only after the relative costs and benefits of doing nothing, of relying on market forces, and of vertical integration have been weighed should government intervention be considered. Importantly, Coase suggests that government interventions – Pigovian solutions – have costs and benefits of their own and may fail to resolve problems of social cost just as markets do. Coase suggests that government intervention is likely to be more effective when coordination costs are high. Government does not need to incur the same coordination costs as private actors – it can simply deploy its police power to impose solutions, including in situations where “… a large number of people are involved and in which therefore the costs of handling the problem through the market or the firm may be high” (Coase 1960, p. 118).

Coase’s (1960) contribution was not to demonstrate that transaction costs do or do not matter, or that market solutions require property rights, or that government intervention can fail too. His contribution was that economists should think carefully about potential solutions to the problem of social cost and evaluate real-world alternatives. Indeed, “satisfactory views on policy can only come from a patient study of how, in practice, the market, firms and governments handle the problem of harmful effects” (Coase 1960, p. 118). Harold Demsetz (1969) described decision making that compares an idealized alternative with a real-world alternative as ‘nirvana economics’. Coase advocates what Demsetz (1969) labels ‘comparative institutional’ analysis between real-world alternatives. We apply such an approach to the COVID-19 policy responses in the following sections.

3 An institutional choice framework for the COVID-19 pandemic

To understand the various responses to the externalities generated by the COVID-19 pandemic, we draw on this Coasean lens and combine it with the institutional possibilities frontier framework first proposed by Djankov et al. (2003). Djankov et al. (2003) were interested in explaining the growth of regulation over the course of the twentieth century and why regulation seemed more prevalent in high-income economies. The frontier itself traces the trade-off between (private) disorder costs and (public) dictatorship costs. Following the Coasean insight, the costs of using market-based regulatory mechanisms are traded off against the costs of using government-based regulatory mechanisms. In this context, disorder is defined as being “… the risk to individuals and their property of private expropriation in such forms as banditry, murder, theft, violation of agreements, torts, or monopoly pricing” (Djankov et al., 2003, p. 598). Dictatorship is defined as being “… the risk to individuals and their property of expropriation by the state and its agents in such forms as murder, taxation, or violation of property” (Djankov et al., 2003, p. 598).

Djankov et al. (2003) then use this framework to examine four broad governance strategies that can be used to achieve some regulatory objective: ‘market discipline’, ‘private litigation’, ‘public regulatory enforcement’, and ‘state ownership’. In the analysis that follows we define disorder costs as the negative externality imposed on other individuals through infection and the voluntary behavioural response to the pandemic. Dictatorship costs are the costs imposed by the government in response to the pandemic, such as enforcement of quarantine, loss of civil liberties, and the like. Dictatorship costs include the loss of economic opportunity that results from quarantine policies. They do not, however, include the costs of ‘hibernating’ the economy and the costs incurred in restarting the economy after the quarantine period ends.
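In the Djankov et al. (2003) setup this trade-off can be written compactly (our notation, paraphrasing their diagrammatic exposition rather than reproducing it): each feasible institutional arrangement j implies a pair of disorder costs D_j and dictatorship costs T_j lying on the institutional possibilities frontier, and the efficient choice minimises total social losses,

min_j (D_j + T_j),

which graphically is the point where the frontier is tangent to a line along which total losses are constant.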

With that background, it is possible to set out a series of responses and policy approaches to the COVID-19 pandemic. For the sake of completeness, we include a ‘Do-nothing’ response. In this response, nobody does anything in response to the pandemic. Individuals do not modify their behaviour in any way, nor do governments respond in any way. Under this response, individuals go about their lives and infect other individuals. Currently the medical understanding of COVID-19 is that some infected individuals will not develop any symptoms of the disease and will not feel unwell at all. Asymptomatic individuals may still be infectious. Other infected individuals will become ill but will recover. Yet others will become very ill, and some will die. In this response the disorder costs are very high. The virus simply transmits through the population and the costs associated with the health externality are maximised. An epidemiological model of this process can be calibrated with the standard SIR model.
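For readers unfamiliar with the SIR dynamics referred to here and in the unified SIR-DSGE literature above, a minimal sketch is below: a plain daily Euler discretisation with purely illustrative parameter values, not a calibrated model of COVID-19.

```python
def sir_path(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=180):
    """Simulate the canonical SIR model with a daily Euler step.

    beta is the transmission rate and gamma the recovery rate (both values
    here are illustrative only). s, i, r are shares of the population.
    """
    s, i, r = s0, i0, 0.0
    path = []
    for _ in range(days):
        new_infections = beta * s * i    # the contagion externality term
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        path.append((s, i, r))
    return path

peak_share_infected = max(i for _, i, _ in sir_path())
print(f"Peak share infected at once: {peak_share_infected:.1%}")
```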

This ‘Do-nothing’ scenario is extremely unlikely and did not occur. Individuals respond to medical crises. For example, individuals who become ill may take sick leave from work. Those individuals who are vulnerable to infection may self-isolate. Others may withdraw their children from school or stop visiting crowded places such as cinemas, clubs, gyms, and the like. We label this scenario ‘voluntary individual self-isolation’. In this response we see some reduction in the costs due to the health externality, but the introduction of a net-negative behavioural externality. Some service providers may experience financial loss due to consumers reducing their purchases and otherwise changing their behaviour. Related to this, Leeson and Rouanet (2021) argue that the externality context of COVID-19 suggests that the externalities are somewhat self-limiting.

The next response level we describe as being ‘voluntary corporate self-isolation’. Employers may voluntarily reduce the scale of their operations or even cease operations to protect their staff. This could entail reduced working hours, or fewer staff working during each shift. Schools could adopt distance learning models and some employees could work from home. Note that this is distinct from government mandating employers to restrict their staff, which occurred in many jurisdictions.

The responses we have described so far are voluntary. The social costs that are being imposed are disorder costs. Government may have provided public information and/or made recommendations in the scenarios and responses that we have described, but as yet there are no dictatorship costs in the composition of social costs being incurred. What is important to note is that as each scenario emerges, the social costs due to the health externality are likely to be falling, while the social costs due to the behavioural externality are likely to be rising. The behavioural externality will result in disorder costs such as reduced economic activity and consequent job losses. It could (and did) result in panic buying and hoarding. Many countries experienced toilet paper shortages, for example, prior to the imposition of formal and mandatory quotas. Very few governments appear to have relied on a voluntary response to the pandemic: Sweden and some Swiss cantons appear to have adopted this approach, while the United Kingdom initially indicated that it would adopt a voluntary approach before quickly changing tack.

Government responses to the COVID-19 pandemic have focussed on the health externality. Individual responses that were based on voluntary self-isolation were transformed by government fiat into involuntary quarantine policies. For the purposes of illustration three versions of involuntary quarantine policy can be described.

Mild quarantine consists of the government requiring that most people stay at home, with only essential workers going to work. Essential workers here can be broadly defined. Under mild quarantine individuals might be allowed out of their homes for shopping and exercise at their own discretion. The police, however, do enforce the quarantine and issue fines for violations. Strict quarantine consists of more restrictive definitions of essential workers and fewer exemptions to home quarantine. Individuals may be restricted in what they may buy (e.g. some countries have closed non-essential retail stores) or when they may leave their homes (e.g. only one adult may leave the home every three days, or an overnight curfew). Absolute quarantine – also included for completeness – is a situation where no one is permitted to leave their home for any reason. This form of quarantine is viable only for very short periods of time.

In these scenarios the behavioural externality that previously existed is now replaced with a dictatorship cost. Those individuals who would have self-isolated anyway under the same conditions as the government imposes are no better or worse off than they were before. Those individuals who would have self-isolated to a lesser extent or not at all are worse off than they were.

The important question, however, is which response or scenario results in a minimisation of social costs (from both disorder and dictatorship).

The first point to make is that the existence of a negative externality is not in itself a necessary and sufficient condition for a response. As Coase pointed out, doing nothing is an option. As we know, however, people do not do nothing in the face of a medical emergency. There is a response: people both self-isolate and change their consumption and productive behaviour. For there to be a justification for policy intervention, an externality must persist in equilibrium (Buchanan & Stubblebine, 1962). In disequilibrium, social costs and private costs may diverge from each other. As externalities are internalised through behavioural responses, the divergence between social and private costs will fall. If that differential falls to zero before equilibrium, then there is no market failure. In the Buchanan and Stubblebine terminology, the externality is not Pareto relevant. It may be the case, however, that the externality persists in equilibrium – that is, it is Pareto relevant. At that point market failure has occurred.
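Loosely restating the Buchanan and Stubblebine (1962) condition in our own notation (a sketch of their idea rather than their exact formulation): an externality exists when A’s utility depends on an activity y controlled by B,

U^A = U^A(x_1, …, x_m; y),

and it is Pareto relevant in equilibrium only if A values a marginal change in y by more than B would require in compensation to make that change – that is, only if a compensated adjustment of y could leave both parties at least as well off. If voluntary behavioural responses close that gap before equilibrium is reached, the externality may persist in a technical sense but there is no market failure left to correct.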

In the case of COVID-19 market failure occurs when individuals, despite their voluntary behavioural responses, are still imposing costs upon each other. Given that there are two externalities at work, this is very likely to be the case. The health externality and the behavioural externality work in opposite directions to each other. As more people choose to voluntarily self-isolate to avoid contracting the virus, they impose greater behavioural costs on others.

4 Pandemic management as a transaction cost problem

Setting out the institutional choices in such a way requires us to ask several uncomfortable and unavoidable questions. What is the optimal rate of infection from a public policy perspective? How does that compare to a private perspective? Arrow’s impossibility theorem indicates that the policy choice made by the government is not going to be some aggregate of private preferences.

Most developed-world governments have sought to slow the rate of infection so as not to overwhelm the medical capacity available to deal with the pandemic. This is the ‘flatten the curve’ model, most influential in March 2020 when many governments were making their institutional choices, in which the total number of individuals who are eventually infected is fixed (see Allen et al., 2020). The goal of flattening the curve is to space infections through time to prevent a sudden influx of COVID-19 cases from overburdening health care systems.

The choice of this objective function introduces another consideration into the debate: individuals are not just imposing costs upon each other; they are imposing a cost on the health system. This healthcare system may or may not be public, or may have complex public/private entanglements (on entanglement see Smith et al., 2011; Wagner, 2016). It may well be the case that no externality exists in equilibrium from a health perspective but for the health system. This is a law and economics, or public choice, problem, and we defer discussion to a later section.

Irrespective of why governments chose a particular objective function, the net effect of intervention was to assume that a health externality persisted in equilibrium, and to substitute the private disorder costs of the behavioural response to the pandemic with dictatorship costs. This cannot be an equilibrium solution. Government intervention in response to an externality is meant to restore the equilibrium that would exist but for the shock.

Market failure is usually due to one of four factors: monopoly problems, missing markets, asymmetric information, or transaction costs. The COVID-19 pandemic is not obviously a monopoly problem. It would be too easy and glib to suggest that the COVID-19 market failure was due to missing markets. To suggest that individuals vulnerable to the virus be given property rights to their continued health and be paid to self-isolate would be a ‘vulgar Coasean’ solution. So too would be the notion that vulnerable individuals pay everyone else to remain in quarantine.

It is useful, however, to think about the ‘rights’ that individuals may have in the face of a pandemic. Vulnerable people can claim a right to life that, in this instance, includes the right not to become infected. Others may argue that they have a right to a livelihood or a right to choose. Reconciling competing rights, especially in the absence of cash payments, is difficult at the best of times. It does, however, go to the question of who should be quarantined: just the vulnerable, or everyone? Almost uniformly, governments have chosen to quarantine everyone (although in many jurisdictions vulnerable people have been subject to stricter rules, such as restrictions on access to aged care). Garzarelli et al. (2022, p. 1) explore this uniformity through the lens of Rawlsianism, arguing that “lockdown by fiat is a policy that is closer to a maximin equity criterion rather than to a utilitarian one”.

What many governments have also chosen to do is make payments to those individuals who have either lost their jobs (beyond the usual unemployment benefit that might normally be paid in such situations) or been temporarily stood down or furloughed. Details vary across jurisdictions, but the principle is broadly similar: employers who have been affected by the quarantine policy can apply for a wage subsidy to be paid to their employees. This may strike some readers as a Coasean payment for lost wages. But this money is not being transferred from winners to losers. Instead, it is being transferred through time, from future generations to current generations. The money is being borrowed (or printed – the macroeconomic consequences of the quarantine policy will be debated for decades and are beyond the scope of this article) and will be repaid from future tax revenues, budget cuts or inflation.

It is likely that a market failure exists due to transaction cost and information cost problems. A lack-of-information problem is distinct from an asymmetric information problem. Asymmetric information is possible but not likely to be significant for this analysis. For example, it is possible for an individual to be knowingly infected, or to believe they are likely to be infected, yet remain outwardly asymptomatic and infect others.

Before we proceed to discuss transaction cost and information cost problems, it is useful to point out that the market failure is not due to individuals simply being selfish. It is easy to argue that markets could fail to clear because individuals are selfish – the welfare of others simply does not enter into their utility function, and they are indifferent to other individuals’ premature or preventable deaths. This argument has been made by authorities when justifying authoritarian regulation or enforcement of quarantine. But the health externality is reciprocal: individuals may either infect others or become infected themselves. Indeed, Williamson (2020, p. 157) proposes a Coasean social contract model that “recognizes the reciprocal nature of the problem.” While the virus does tend to be more fatal to older and immuno-compromised individuals, it is infectious and has a non-zero fatality rate for all humans. To the extent that individuals have no voluntary behavioural response to the COVID-19 pandemic, this is very likely due to information asymmetry or direct economic incentive. Responding to direct economic incentives may be anti-social, but it is not a market failure.

The direct cause of the market failure – assuming the market has failed – is the existence of radical uncertainty. Mainstream economics tends to make strong information assumptions to drive its results. When those assumptions are relaxed, it is usually to allow that information is costly (i.e. the information exists but must be acquired at a price) or asymmetrically distributed. One of the features of the COVID-19 pandemic is that information either did not exist, was highly uncertain, or was contested. Behaviour must be conditioned by expectations, which are in turn conditioned on information. Bounded rationality – first proposed by Herbert Simon and popularised by Oliver Williamson (1985) – results in individuals making and using heuristics, rules of thumb, and various mental short-cuts when decision-making.

In the months in which the COVID-19 disease emerged and spread globally, there was a high degree of this sort of radical uncertainty around almost all epidemiologically relevant aspects of the disease. It is possible that individuals under-estimated the COVID-19 infection rate or severity and subsequently self-isolated too little, resulting in a health social cost in equilibrium. It is also possible that the government over-estimated the COVID-19 infection rate and imposed high dictatorship costs on the economy when there was no social cost in equilibrium. Given the breadth of these uncertainties, and the sensitivity of comparative institutional analysis to those uncertain factors, it is implausible to suggest that the policy choices made between February and March 2020 came anywhere close to optimal.

But epistemic issues facing the epidemiology of the virus itself are only the “first layer of complexity that policymakers must contend with” (Pennington, 2021, p. 204). Drawing on Hayek’s distinction between simple and complex phenomena, Pennington notes that even while government action might be warranted in response to the externalities, the complex nature of the problem means that determining an effective policy response is difficult. Indeed, the health effects of the virus spread are interacting with further complex phenomena of “political, economic, cultural and institutional arrangements” (Pennington, 2021, p. 208).

Radical uncertainty, however, does not directly explain why different governments imposed various degrees of strictness on quarantine conditions. While information about COVID-19 was uncertain, and the medical science preliminary, given the extraordinary effort made by public authorities and researchers around the world to investigate the characteristics of the disease, the information was highly accessible. International coordinating organisations, such as the World Health Organisation, also sought to provide governments with consistent responses.

One explanation of differing policies is different levels of ‘trust’ or ‘civic capital’ in different jurisdictions, as identified in the Djankov et al. (2003) framework. For instance, jurisdictions with higher levels of ‘civic capital’ might (1) be more confident that populations will voluntarily comply; and (2) have less tolerance of high dictatorship costs because of their democratic ideals, leading them to have fewer restrictions. As Rayamajhee et al. (2021, p. 12) argue, because ‘social distancing’ is co-produced between citizens and governments, “a provincial or national authority with a history of betraying public trust is unlikely to effectively implement social distancing guidelines/policies”. Related to this, Paniagua and Rayamajhee (2021) draw on Elinor Ostrom’s work to frame the challenge of the pandemic as one of “nested externalities that are organized in multiple, overlapping scales”. This suggests the need for a polycentric approach to pandemic governance challenges, acknowledging the need for institutional diversity and flexibility in response (see also Allen et al., 2020).

The stark differences in cross-jurisdictional approaches can also be explained from an epistemic perspective. The various costs of dictatorship and disorder relating to pandemic policies are subjective. As Allen and Berg (2017) argue, different societies view the trade-offs between different regulatory regimes in different ways. Similarly focusing on the epistemic challenges of pandemic policies – with an emphasis on the complexity of the problem – Coyne et al. (2021) point to the need to match externalities with the lowest feasible level of decision-making. From this perspective, the different policy approaches across jurisdictions have some benefit, in revealing or discovering information about effective policy responses. This policy learning process, however, is limited by a “‘signal extraction problem’ in deciphering what the results of various policy experiments may mean and whether any lessons can be applied elsewhere” (Pennington, 2021, p. 213).

5 The law and economics of the COVID-19 pandemic

Pigovian approaches to policy are made more fraught by the fact that the government is not a disinterested actor. Public choice theory – and law and economics more broadly – is the study of how government (and the politicians and bureaucrats that comprise it) is a distinct actor within an economic system with its own economic incentives (Mueller, 1976). One theoretical foundation is Arrow’s (1950, 1951) critique of social choice functions, which showed how it was impossible to aggregate private utility into a social utility function without violating some desirable conditions, one being the no-dictatorship rule (i.e. that no single agent’s preferences dominate all other agents’ preferences). Yet in the context of public policy to address COVID-19, exactly this situation has arisen, in which the ‘dictator’s’ preferences for resource allocation may depart from the preferences of individual citizens, however aggregated.3 Note this does not depend on citizens having different preferences: the wedge between the incentives of the state and the sum of the incentives of citizens will hold even with identical preferences across all citizens.

As we introduced in Sect. 3 above, when individuals move from susceptible to infectious they are not just imposing costs upon each other as a contagion externality on private individuals; they are also imposing a cost on the health system, as an externality on the state. For instance, individual citizens will have private preferences not to become infected with and die from COVID-19, and these preferences will extend to social preferences4 for this fate not to befall others too. Governments, on the other hand, have preferences focused on the public health system, which they seek to protect, not on individual citizens. This is not a cynical point: the UK government, for instance, directly explained this point in public communication, namely that the strategy was to protect the National Health Service (NHS). The ‘flatten the curve’ diagrams were expressly designed, and communicated, as a strategy to protect the capacity of the public hospital systems. To put this succinctly, from the government’s perspective, it obviously does not want its citizens to die; but should they die, it is better that they do so without harming treatment capacity in public hospitals. This is not a heartless statement, but an expression of the margin of concern for the government supplying public healthcare during a pandemic.

Another way of seeing this same point is to look at it from a dynamic planning perspective, recognising that it is extremely costly to ramp up or quickly substitute one type of health service for another due to asset specificity in medical equipment, hospitals, and skilled labour. Medical equipment cannot be quickly repurposed. At time \(t=0\), governments allocate funding \(X\) to public health on the assumption that it will need to provide services \(Y\) at \(t=1\). Any demand above \(Y\) at \(t=1\) (or a different configuration of demand) creates rationing, which is a cost borne by citizens. This is politically costly, as those rationed citizens will punish the incumbent government: the excess demand signals that the \(t=0\) government failed to properly plan for \(t=1\) scenarios.
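As a stylised numerical sketch (our illustration, not a model from the paper) of this planning problem: capacity is fixed by the \(t=0\) budget, and any excess demand at \(t=1\) is rationed, a shortfall borne by citizens. The figures are hypothetical.

```python
# Toy illustration of the t=0 planning / t=1 rationing logic described above.
# Capacity is fixed when the budget is set; demand above that capacity at t=1
# cannot be served and is rationed, a cost borne by citizens (and, politically,
# by the incumbent government).

def rationing_shortfall(planned_capacity: float, realised_demand: float) -> float:
    """Units of service demanded at t=1 that the t=0 budget cannot supply."""
    return max(0.0, realised_demand - planned_capacity)

if __name__ == "__main__":
    # Hypothetical numbers: capacity planned for 100 units of service,
    # a pandemic shock pushes realised demand to 140.
    shortfall = rationing_shortfall(planned_capacity=100, realised_demand=140)
    print(f"Rationed demand borne by citizens: {shortfall} units")
```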

But this sort of bureaucratic forward planning and budget allocation is destined to fail because of the poor information and incentives of bureaucratic agents (Mises, 1944) or, worse, will be fully captured to pursue the agents’ own ends (Niskanen, 1971). Nevertheless, the legitimacy of the modern welfare system relies very heavily on the state delivering public goods to the population. Having the public health system collapse under the weight of a pandemic would do major damage to perceptions of a government’s competence and legitimacy.

Once a health budget has been allocated at \(t=0\) (including a spending level and an allocation across services), a government experiences what Williamson (1985) calls a ‘fundamental transformation’: it no longer has a wide set of options going forward, but a narrow set of capabilities that can only deal with a predetermined range of events. Any events falling outside that planning window will overwhelm the health system, which means blowing out the budget. Governments will therefore be incentivised to order events so that they fall within the health sector’s capability, even when that means imposing an externality back on citizens by, for instance, shutting down all elective surgery or banning any activity that could place demand on the health system, such as driving or sport. Similar incentives extend to preferences over subsidising employment (in Australia, the JobKeeper program) to avoid overwhelming the unemployment provisions and budgets allocated to welfare.

The COVID-19 pandemic and the government response also raise further questions that can be analysed through law and economics (and public choice theory), which we flag here as topics for subsequent inquiry. Most broadly, as Boettke and Powell (2021, p. 1090) argue, we must examine the “incentives and information that confront policymakers and voters and the institutional environments that shape their incentives and information”. As Coyne et al. (2021) point out, the nature of political competition can lead to rent seeking in response to a public health crisis, where individuals exert influence on the public policy process to allocate resources that benefit themselves.

First, following Downs’ (1957) theory of log-rolling and the economics of political parties, we predict that the urgency to enact legislation to address the pandemic will significantly lower the bargaining costs associated with vote trading (Buchanan & Tullock, 1962), leading to an increased number of back-room deals being made in order for the party in power to be able to present a single coronavirus emergency response omnibus bill before parliament or congress for expedited approval under some manner of restricted debate or sitting period.

A further prediction is that these lowered bargaining costs – due to the higher opportunity cost of failing to reach consensus and deliver effective emergency measures legislation – reduce the efficiency of a multi-party system, as there effectively needs to be only one party, with all special interests able to deal behind the scenes. In times of crisis there is a common tendency to rally around the flag (Mueller, 1985) and to support incumbent leaders and their party. Knowing this, rational opposition parties will put effort into political bargaining (vote-trading or log-rolling) toward a consensus bill rather than seeking to present an alternative legislative agenda.

This collapse in multi-party competition, driven by falling political bargaining costs (because of the emergency response), and the resultant omnibus legislative bundle rushed through the political process (to economise on political costs) – which will therefore be complex and far less scrutinised than in normal times – are then predicted to have a further behavioural effect. The legislative act will be difficult for individual voters to understand, and voters will have no incentive to understand the details (they will be rationally ignorant, Caplan, 2007). But it will also give rise to expressive voting (Brennan & Buchanan, 1984), or conspicuous signalling of support for the consensus bill, and the use of social mechanisms to enforce compliance (shaming in public or on media, rallies of support, expressions of anger and even violence). In the COVID-19 pandemic, this process of aggressive public consent soon targeted any counter-narrative about the value of opening the economy back up.

We could also consider the long-run effect examined by Olson (1982) on the economies of Germany and Japan after the Second World War, in which one of the benefits of losing the war was the institutional destruction of rent-seeking regulations and legislation, and the clearing away of thickets of special deals between pre-war elites that had accumulated over long periods of peaceful prosperity but were a significant drag on efficient competition and resource allocation. The urgent removal of unnecessary regulations and licensing regimes during the pandemic, particularly in relation to urgently needed production and innovation in health and other essential industries, provides a similar opportunity for a constitutional or institutional reset. From this perspective, a pandemic may have similar effects on long-run economic growth as losing a war, due to the opportunities for institutional creative destruction.

6 Conclusion

Negative externalities arising from an economic activity impose a social cost. This can be dealt with through government intervention targeting that activity: directly through regulation, or indirectly through market interventions (e.g. through taxation or subsidy to internalise the externality and minimise the social cost of an economic activity). This is called the Pigovian approach, after A.C. Pigou, who developed the foundations of modern welfare economics. But Ronald Coase recognised that this was not the only solution to the problem of social cost, because it failed to recognise the symmetry in any situation of externalities, and therefore failed to focus on the problem of maximising economic efficiency. Provided transaction costs are low and property rights are clear, Coase explained, parties can bargain their way to an efficient solution to externality problems.

We expect the coming months and years to feature heated retrospective debate about what policies were most effective in limiting the spread of COVID-19, and the relative trade-offs of those policies vis-a-vis their effect on economic activity. Much of that work will be empirical. But this paper has argued that there is a higher-level debate to be had about the policy framework that was adopted.

COVID-19 is a viral pandemic, causing a global public health crisis, but from an economic perspective it can be understood as a negative externality, thus presenting two policy pathways forward. To date, and globally, almost all public economic policy response has gone down the Pigovian path. However, from an economic theory perspective, there are arguments as to why a Coasean perspective could on some margins be a superior basis for public policy. We have sought to set those arguments out here. As governments prepare for economies to unfreeze (Allen et al., 2020), or prepare for future pandemics, this analysis urges policymakers to better understand the scope and limitations of policy responses available to them.

References

  • Acemoglu, D., Chernozhukov, V., Werning, I., & Whinston, M. (2020). A multi-risk sir model with optimally targeted lockdown. NBER working papers w27102.
  • Allen, D.W.E., Berg, C., Davidson, S., Lane, A.M., & Potts. J. (2020). Unfreeze how to create a high growth economy after the pandemic. American Institute for Economic Research.
  • Allen, D. W. E., & Berg, C. (2017). Subjective political economy. New Perspectives on Political Economy, 13(1-2), 19-40.
  • Alvarez, F., Argente, D., & Lippi, F. (2020). A simple planning problem for COVID-19 lockdown. NBER Working Paper w26981.
  • Arrow, K. (1950). A difficulty in the concept of social welfare. Journal of Political Economy, 58(4), 328-346.
  • Arrow, K. (1951). Social choice and individual values. Wiley.
  • Boettke, P., & Powell, B. (2021). The political economy of the COVID-19 pandemic. Southern Economic Journal, 87(4), 1090-1106.
  • Born, B., Dietrich, A., & Müller, G. (2021). The lockdown effect: A counterfactual for Sweden. PLoS ONE, 16(4), e0249732.
  • Brennan, G., & Buchanan, J. (1984). Voter choice: Evaluating political alternatives. American Behavioral Scientist, 28, 185-201.
  • Buchanan, J., & Tullock, G. (1962). The calculus of consent. University of Michigan Press.
  • Buchanan, J. (1964). What should economists do? Southern Economic Journal, 30(3), 213-222.
  • Buchanan, J., & Stubblebine, C. (1962). Externality. Economica, 29, 371-384.
  • Caplan, B. (2007). The myth of the rational voter. Princeton University Press.
  • Chang, R., & Velasco, A. (2020). Economic policy incentives to preserve lives and livelihoods. Covid Economics, 14, 33-56.
  • Coase, R. (1960). The problem of social cost. Journal of Law and Economics, 3, 1-44. Reproduced in R. Coase (1988), The firm, the market and the law. University of Chicago Press.
  • Coase, R. (1988). Notes on the problem of social cost. In R. Coase (Ed.), The firm, the market and the law. University of Chicago Press.
  • Coase, R. (1937). The nature of the firm. Economica, 4, 386-405.
  • Coase, R. (1959). The federal communications commission. Journal of Law and Economics, 2, 1-40.
  • Coyne, C. J., Duncan, T. K., & Hall, A. R. (2021). The political economy of state responses to infectious disease. Southern Economic Journal, 87(4), 1119-1137.
  • Demsetz, H. (1969). Information and efficiency: Another viewpoint. Journal of Law and Economics, 12, 1-22.
  • Djankov, S., Glaeser, E., La Porta, R., Lopez-de-Silanes, F., & Shleifer, A. (2003). The new comparative economics. Journal of Comparative Economics, 31, 595-619.
  • Downs, A. (1957). An economic theory of democracy. Harper & Row.
  • Eichenbaum, M., Rebelo, S., & Trabandt, M. (2020). The macroeconomics of epidemics. NBER Working Papers w26882.
  • Fehr, E., & Fischbacher, U. (2002). Why social preferences matter: The impact of non-selfish motives on competition, cooperation and incentives. Economic Journal, 112(478), 1-33.
  • Fox, G. (2007). The real coase theorems. Cato Journal, 27(3), 373-398.
  • Garibaldi, P., Moen, E., & Pissarides, C. (2020). Modelling contacts and transitions in the SIR epidemics model. Covid Economics, 5, 1-20.
  • Garzarelli, G., Keeton, L., & Sitoe, A. A. (2022). Rights redistribution and COVID-19 lockdown policy. European Journal of Law and Economics. Available online.
  • Gonzalez-Eiras, M., & Niepelt, D. (2020). On the optimal lockdown during an epidemic. Covid Economics, 7, 68-87.
  • Jones, C., Philippon, T., & Venkateswaran, V. (2020). Optimal mitigation strategies in a pandemic: Social distancing and work from home. NBER Working Papers w26984.
  • Leeson, P. T., & Rouanet, L. (2021). Externality and COVID-19. Southern Economic Journal, 87(4), 1107-1118.
  • Marciano, A. (2018). Why is “Stigler’s Coase theorem” Stiglerian? A methodological explanation. In Including a symposium on Bruce Caldwell’s Beyond positivism after 35 years. Emerald Publishing Limited.
  • McChesney, F. S. (2006). Coase, Demsetz, and the unending externality debate. Cato Journal, 26(1), 179-200.
  • McCloskey, D. (1988). The so-called Coase theorem. Eastern Economic Journal, 24, 367-371.
  • Mises, L. (1944). Bureaucracy. Republished by Mises Institute https://mises.org/library/bureaucracy.
  • Mueller, D. (1976). Public choice: A survey. Journal of Economic Literature, 14(2), 395-433.
  • Mueller, J. (1985). War, presidents, and public opinion. University Press of America.
  • Niskanen, W. (1971). Bureaucracy and representative government. Transaction Publishers.
  • Olson, M. (1982). The rise and decline of nations. Yale University Press.
  • Paniagua, P., & Rayamajhee, V. (2021). A polycentric approach for pandemic governance: Nested externalities and co-production challenges. Journal of Institutional Economics. https://doi.org/10.1017/S1744137421000795
  • Pennington, M. (2021). Hayek on complexity, uncertainty and pandemic response. The Review of Austrian Economics, 34(2), 203-220.
  • Powell, B. (2021). Government failure vs the market process during the COVID-19 pandemic. Available at SSRN 3919790.
  • Rayamajhee, V., Shrestha, S., & Paniagua, P. (2021). Governing nested externalities during a pandemic: Social distancing as a coproduction problem. Cosmos and Taxis, 9(5-6), 64-80.
  • Smith, A., Wagner, R. E., & Yandle, B. (2011). A theory of entangled political economy, with application to TARP and NRA. Public Choice, 148, 45-66.
  • Wagner, R. E. (2016). Politics as a peculiar business: Insights from a theory of entangled political economy. Edward Elgar.
  • Williamson, O. (1985). The economic institutions of capitalism. The Free Press.
  • Williamson, B. (2020). Beyond COVID-19 lockdown: A coasean approach with optionality. Economic Affairs, 40(2), 155-161.

Footnotes

  1. Also see Boettke and Powell (2021, p. 1095), whose analysis includes sorting society “into two discrete groups of young/healthy and old/infirm”. ↩︎
  2. For instance, Leeson and Rouanet (2021, p. 1109) argue that it is also possible that behaviours that increase others’ risks of infection can confer positive externalities on those who are voluntarily locking themselves away, for instance by reaching ‘herd immunity’ faster. ↩︎
  3. We use the term ‘dictator’ in the technical sense to refer to a government’s largely suspending normal democratic or parliamentary processes in order to impose choices made by a select insider expert group on a civilian population, which is then strictly enforced. ↩︎
  4. Social preferences are defined as other people’s utility functions appearing as arguments in an individual’s utility function (Fehr & Fischbacher, 2002). ↩︎

So, you run a university

This essay is authored by Darcy W.E. Allen, Chris Berg, Sinclair Davidson, Leon Gettler, Ethan Kane, Aaron M. Lane and Jason Potts. It was originally published on Substack.

The COVID-19 pandemic threatens the global university sector like the internet threatened journalism two decades ago. Both faced shocks disrupting long established and highly successful business models that defined the industrial landscape of the twentieth century. The internet tore down some of the largest, most historic, and most high-profile media businesses in the world. The media business model changed forever. So too will the university sector. And many universities will not emerge from this disruption intact.

The pandemic won’t simply shrink the size of the university sector for a few years, before a rebound when borders open again. The effect is much deeper, more dramatic and more frightening than that. It requires a re-think of the fundamental business model of the university. The pandemic undermines the complex, hidden, and mostly obscure cross-subsidies that keep universities functioning. Universities are phenomenally complicated platform organisations. They are platforms because their primary role is to match different stakeholder groups together so that they can trade with each other. The university is a platform that matches teachers with students, researchers with industry, graduates with employers, donors with social ventures, and on and on and on.

Many businesses are platforms. Platforms typically use one side of the market to cross-subsidize the other—the goal being to bring as many people onto the platform as they can. Platforms have network effects: the more readers a newspaper has, the more attractive that newspaper is for advertisers. More advertising money supports more journalism, bringing in more readers.

But as the media sector learned, these relationships are vulnerable to disruption. That disruption can pull down an entire sector. Old media platforms fell apart, disrupted by platforms with vastly different business models. For universities, that once-in-a-century disruption has just happened.

This eight-part essay offers a guide to how universities can survive in a post-pandemic world.


Trust and Governance in Collective Blockchain Treasuries

With Darcy WE Allen and Aaron M Lane. Available at SSRN

Abstract: Blockchain treasuries are pools of digital assets earmarked for funding goods and services within a blockchain ecosystem that have some public purpose, such as protocol upgrades. Ecosystem participants face a trust problem in ensuring that the treasury is robust to opportunism, such as theft or misappropriation. Treasury governance tools, such as expert committees or stakeholder voting, can bolster trust in treasury functions. In this paper we use new comparative economics to examine how treasury governance mechanisms minimise different types of costs, thereby bolstering trust. We interpret case studies of innovative treasury governance within this framework, finding that the costs shift throughout the lifecycle of an ecosystem, and that those subjective costs are revealed through crisis. These changes lead ecosystem participants to choose and innovate on treasury governance.

A better design for defi grant programs

With Darcy WE Allen

The blockchain and defi sector should understand more about how real-world grant giving bodies function. Nowhere is this clearer than in the recent debate about Uniswap and its new $20 million Defi Education Fund.

In the real world, grant giving is a lot like venture finance. It is an entrepreneurial activity involving the discovery of new information, new opportunities, and new ideas. It helps realise those opportunities and ideas and is rewarded for doing so.

The fact that grants are done with a for-purpose goal while venture finance is done with a for-profit goal only makes a difference at the margin. The best grant giving bodies in the world work very hard to ensure that the custodians of funds have incentives tightly aligned to the overall objectives of the body. Some even use external independent auditors to see whether grants align to objectives, and penalise the program’s management if they do not. These rules bind the grant makers, allowing the grant seekers to innovate and discover how best to achieve the program’s objectives.

Admittedly, it can be sometimes hard to see the entrepreneurial and discovery nature of grant programs. Academic research grants tend to be highly bureaucratic processes with layers of committees and appointed experts collating and judging grant proposals at arms-length from the funders.

But ultimately this bureaucracy has a purpose. Those systems of rules might seem inefficient, but they have been designed to align the dispersal of funds with the objectives of the fund. In the case of the Australian Research Council, all those committees are intended to fulfil the objectives of the Department of Education’s scientific mandate through discovery and investment. (Let’s not get hung up about how effective these government programs are.)

At the other end of the spectrum is Tyler Cowen’s Emergent Ventures grant program, where almost all decision-making rests on Cowen’s own judgement. But this too is a structure designed to align objectives with fund dispersal. The objectives of the fund are to allow Cowen to use his knowledge to support “high-risk, high-reward ideas that advance prosperity, opportunity, and wellbeing” — and by all accounts the program is an incredible success.

Two approaches to defi grants

Right now we broadly have two models of grant giving in the defi space. The first is small centralised grant committees. These tend to be small groups of authoritative community leaders with near absolute control of large treasuries assessing and granting funds to desirable projects. These leaders may be elected or appointed, but either way they are using their authority in the community to legitimate their decisions. They may have a deep understanding of their ecosystem and its funding needs. An obvious problem with this is the risk that committee leaders opportunistically fund projects based on personal relationships, rather than ecosystem value.

The alternative model — and the most common one — is putting all grant proposals up to a vote of all relevant stakeholders, that is, holders of a governance token. Designing structures for effective collective decision-making is one of the hardest problems in political science. It is no surprise that some decision-making in the nascent blockchain governance world has been controversial.

But there’s a fundamental problem with this democratic model of grant making: it makes very little sense to believe that a fully distributed democratic community can make the sort of entrepreneurial decisions that we expect from both venture finance and grant giving bodies themselves. Why would we expect a diverse, pseudonymous community of governance token holders to coordinate around extremely uncertain entrepreneurial decisions?

Throwing every proposal to a mass vote is the worst of all worlds. First, every proposal ultimately becomes a public vote about the objectives of the program itself. Should the treasury’s funds be used for marketing, or research, or to build new infrastructure? Grant recipients, and the ecosystem that relies on them, are left with inconsistency and unpredictability.

Second, there is little reason to believe that a mass vote will reveal the best investments. Highly decentralised voting may protect against opportunism, but it isn’t likely to surface information about entrepreneurial investment opportunities — exactly what is needed for successful grant-giving. This precise information-revelation problem is the motivation and intuition behind mechanisms such as quadratic funding, futarchy, and commitment voting.
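To give a flavour of why mechanism designers reach for these tools, here is a minimal sketch of the standard quadratic funding matching rule (the match for a project is the square of the sum of square-rooted contributions, less the raw amount contributed). The projects and amounts are hypothetical, and real implementations scale the match to a fixed matching pool.

```python
import math

def quadratic_funding_match(contributions: list[float]) -> float:
    """Unconstrained quadratic funding match for one project:
    (sum of sqrt of each contribution)^2 minus the raw contributions."""
    raw = sum(contributions)
    ideal_total = sum(math.sqrt(c) for c in contributions) ** 2
    return ideal_total - raw

# Hypothetical comparison: broad support vs one concentrated donor.
broad_support = [1.0] * 100   # 100 donors giving 1 token each
concentrated = [100.0]        # 1 donor giving 100 tokens

print(quadratic_funding_match(broad_support))  # 9900.0 -> a large match
print(quadratic_funding_match(concentrated))   # 0.0    -> no extra match
```

The mechanism deliberately rewards breadth of support over the raw size of contributions, which is one way of surfacing dispersed information about which projects a community values.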

A better grant program design

This is a solvable problem. Treasuries should give budgets to individual ‘philanthropists’. Those philanthropists then make entrepreneurial investments, and the program aligns their compensation with the success of the projects they fund.

The full set of tokenholders sets the objective of the grant program, or an individual round. These objectives would shift as a given ecosystem and the broader industry develops — for instance from funding oracle feeds, to bridging infrastructure, to policy change. Grants are broken into funding rounds. The length of those rounds, say a year or two, must be long enough that there are observable outcomes from grant projects. Rounds could be sequential or overlap.

Each round, a set of philanthropists (say, five) are chosen (elected or appointed) and given discrete budgets. The number of philanthropists for a given round could also be decided by all tokenholders.

Once the funds are dispersed to each philanthropist, they run separate and independent grant programs. They must have credible autonomy: with their own rules, their own application processes, and their own interpretation of the objectives of the overall grant program.

At the end of the round, the full set of tokenholders rank each of the five philanthropists according to how successful (how much value was added, how closely they aligned to objectives) their grants were. The philanthropists are compensated for their work based on that ranking, with the top-ranked getting the most reward.

In this way the grants program is designed both to fund projects and to incentivise the decision-making philanthropists to do a good job.
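As a minimal sketch of how the final compensation step could be implemented, assuming (our assumption, not part of the proposal itself) simple linearly declining weights over the tokenholders’ ranking:

```python
def rank_based_rewards(ranking: list[str], reward_pool: float) -> dict[str, float]:
    """Split a reward pool across philanthropists by their final ranking,
    with linearly declining weights so the top rank receives the largest share."""
    n = len(ranking)
    weights = [n - i for i in range(n)]  # e.g. 5, 4, 3, 2, 1 for five philanthropists
    total = sum(weights)
    return {name: reward_pool * w / total for name, w in zip(ranking, weights)}

# Hypothetical round: five philanthropists ranked by tokenholders, 50-token pool.
final_ranking = ["P1", "P4", "P2", "P5", "P3"]
print(rank_based_rewards(final_ranking, reward_pool=50.0))
# Top-ranked P1 receives the largest share (~16.7 tokens); bottom-ranked P3 the smallest (~3.3).
```

Any monotone reward schedule preserves the incentive; how steep the schedule is could itself be a parameter the tokenholders vote on.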

Our proposal drives the same sort of competitive, entrepreneurial energy that we see in venture finance into defi grant distribution.

Through grant program design we can encourage effective decision-making through feedback loops, while maintaining decentralisation (the risk that philanthropists will behave badly is limited to the length of a grant round) and giving philanthropists a personal stake in the success of the grants that they have distributed (encouraging them to support funded projects and shepherd them to fruition).

Grant program design matters a lot

It might be easy to dismiss grant program design as a sideshow in the blockchain industry, marginally interesting but ultimately not a central part of the success of any particular protocol. It would be wrong to do so.

Analogies in blockchain are difficult. But if DAOs are like corporations, then grant programs are how they do internal capital allocation — and as Alfred D. Chandler Jr. has shown, internal capital allocation has determined the shape of global capitalism. Alternatively, if blockchain ecosystems are like countries with governments, then when we talk about grant programs we’re talking about public finance — they are how we pay for public goods and deploy scarce resources in a democratic context.

Ultimately, the sustainability and robustness of blockchain ecosystems require the effective use of resources. The success of grant programs will form a critical part of the success of blockchain and dapp protocols. They should seek to harness the same entrepreneurial energy and effort that has driven the rest of the blockchain industry.

Towards a Digital CBD

With Darcy Allen and Jason Potts

The COVID-19 pandemic is both a public health crisis, and a digital technology accelerant. Pre-pandemic, our economic and social activities were done predominantly in cities. We connected and we innovated in these centralised locations.

But then a global pandemic struck. We were forced to shop, study and socialise in a distributed way online. This shock had an immediate impact on our cities, with visceral images of closed businesses and silent streets.

Even after COVID-19 dissipates, the widespread digital adoption that the pandemic brought about means that we are not snapping back to pre-pandemic life.

The world we are entering is hybrid. It is both analogue and digital, existing in both regions and cities. Understanding the transition is critical because cities are one of our truly great inventions. They enable us to trade, to collaborate, and to innovate. In other words, cities aggregate economic activity.

The Digital CBD project is a large-scale research project that asks: what happens when that activity suddenly disaggregates? What happens to the city and its suburbs? What happens to the businesses that have clustered around the CBD? What infrastructure do we need for a hybrid digital city? What policy changes will be needed to enable firms and citizens to adapt?

Forced digital adoption

This global pandemic happened at a critical time. Many economies were already transitioning from an industrial to a digital economy. Communications technologies had touched almost every business. Digital platforms were commonly used to engage socially and commercially. But the use of these technologies was not yet at the core of our businesses; it sat on the sidelines. We were only on the cusp of a digital economy.

Then COVID-19 forced deep, coordinated, multi-sector and rapid adoption of digital technologies. The coordination failures and regulatory barriers that had previously held us back were wiped away. We swapped meeting rooms for conference calls, cash for credit cards, pens-and-paper for digital signatures. There had been a desire for these changes for a long time.

These changes bring even more frontier technologies suddenly into view. Blockchains, artificial intelligence, smart contracts, the internet of things and cybersecurity technologies are now more viable because of this base-level digital adoption.

Importantly, this suite of new technologies doesn’t just augment and improve the productivity of existing organisations; it makes new organisational forms possible. It changes the structure of the economy itself.

Discovering our digital CBD

Post-pandemic, parts of our life and work will return to past practices. Some offices will reopen, requiring staff to return to rebuild morale and culture. And those people will also flood back into CBD shops, bars and restaurants. They will, as all flourishing cities encourage, meet and innovate.

But of course some businesses will relish their new-found productivity benefits – and some workers will guard the lifestyle benefits of working from home. Many firms will never fully reopen their offices and will brag about their dynamic remote-work culture.

The potential implications for cities, however, are more complex. Cities will have fundamentally different patterns of specialisation and trade than in the pre-pandemic economy. Those new patterns are enabled by a suite of decentralised technologies, including blockchains and smart contracts, that were already disrupting how we organise our society.

We can now organise economic activity in new ways. CBDs have historically housed large, hierarchical industrial-era companies. As we have written elsewhere, decentralised infrastructure enables new types of organisational forms to emerge. Blockchains industrialise trust and shift economic activities towards decentralised networks.

How do these new types of industrial organisation change the way that we work, and the location of physical infrastructure? What are the policy changes necessary to enable these new organisations to flourish in particular jurisdictions?

Economies and cities are fundamentally networks of supply chains, and that infrastructure is turning digital too. The pandemic has accelerated the transition to digital trade infrastructure that provides more trusted and granular information about goods as they move. How can we ensure that these digital supply chains are resilient to future shocks? What opportunity is there for regions to become a digital trade hub?

Another impact of digital technology is that labour markets just became more global. The acquisition of talented labour is no longer bounded by physical distance. Our collaborations are structured around timezones, rather than geography.

Labour market dynamism presents unique opportunities, but will also require secure infrastructure both to validate credentials and to facilitate ongoing productivity. How can Melbourne, with its world-class cluster of universities, position itself for this new environment?

A research and a policy problem

Building a digital CBD is fundamentally an entrepreneurial problem—a problem of discovering what these new digital ways of coordinating and collaborating look like. Our Digital CBD research program contributes to this challenge with insights from economics, law, political science, finance, accounting and more. We aim to use this interdisciplinary research base to make policy recommendations that help our digital CBD to flourish.

Building a grammar of blockchain governance

With Darcy Allen, Sinclair Davidson, Trent MacDonald and Jason Potts. Originally a Medium post.

Blockchains are institutional technologies made of rules (e.g. consensus mechanisms, issuance schedules). Different rule combinations are entrepreneurially created to achieve some objectives (e.g. security, composability). But the design of blockchains, like all institutions, must occur under ongoing uncertainty. Perhaps a protocol bug is discovered, a dapp is hacked, treasury is stolen, or transaction volumes surge because of digital collectible cats. What then? Blockchain communities evolve and adapt. They must change their rules (e.g. protocol security upgrades, rolling back the chain) and make other collective decisions (e.g. changing parameters such as interest rates, voting for validators, or allocating treasury funds).

Blockchain governance mechanisms exist to aid decentralised evolution. Governance mechanisms include online forums, informal polls, formal improvement processes, and on-chain voting mechanisms. Each of these individual mechanisms — let alone their interactions — are poorly understood. They are often described through sometimes-useful but imperfect analogies to other institutional systems with deeper histories (e.g. representative democracy). This is not a robust way to design the decentralised digital economy. It is necessary to develop a shared language, and understanding, of blockchain governance. That is, a grammar of rules that can describe the entire possible scope of blockchain governance rules, and their relationships, in an analytically consistent way.

A starting point for the development of this shared language and understanding is a methodology and rule classification system developed by 2009 economics Nobel Laureate Elinor Ostrom to study other complex, nested institutional systems. We propose an empirical project that seeks conceptual clarity in blockchain governance rules and how they interact. We call this project Ostrom-Complete Governance.

The common approach to blockchain governance design has been highly experimental — relying very much on trial and error. This is a feature, not a bug. Blockchains are not only ecosystems that require governance, but the technology itself can open new ways to make group decisions. While being in need of governance, blockchain technology can also disrupt governance. Through lower costs of institutional entrepreneurship, blockchains enable rapid testing of new types of governance — such as quadratic voting, commitment voting and conviction voting — that were previously too costly to implement at scale. We aren’t just trying to govern fast-paced decentralised technology ecosystems, we are using that same technology for its own governance.
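To give a concrete flavour of one of these mechanisms: under quadratic voting, casting v votes on an issue costs v² voice credits, so expressing a more intense preference becomes progressively more expensive. A minimal sketch with hypothetical numbers (not any particular protocol’s implementation):

```python
def quadratic_vote_cost(votes: int) -> int:
    """Under quadratic voting, casting v votes on one issue costs v^2 voice credits."""
    return votes ** 2

# A hypothetical voter with a budget of 100 voice credits.
budget = 100
affordable = max(v for v in range(1, budget + 1) if quadratic_vote_cost(v) <= budget)
print(f"Maximum votes affordable on a single issue: {affordable}")  # 10 (costs 100 credits)
```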

This experimental design challenge has been compounded by an ethos of, and commitment to, decentralisation. That decentralisation suggests the need for a wide range of stakeholders with different decision rights and inputs into collective choices. The lifecycle of a blockchain exacerbates this problem: through bootstrapping, a blockchain ecosystem can see a rapidly shifting stakeholder group with different incentives and desires. Different blockchain governance mechanisms are variously effective at different stages of blockchain development. Blockchains, and their governance, begin relatively centralised (with small teams of developers), but projects commonly attempt to credibly commit to rule changes towards a system of decentralised governance.

Many of these governance experiments and efforts have been developed through analogy or reference to existing organisational forms. We have sought to explain and design this curious new technology by looking at institutional forms we know well, such as representative democracy or corporate governance. Scholars have looked to existing familiar literature such as corporate governance, information technology governance, information governance, and of course political constitutional governance. But blockchains are not easily categorised as nation states, commons, clubs, or firms. They are a new institutional species that has features of each of these well-known institutional forms.

An analogising approach might be effective to design the very first experiments in blockchain governance. But as the industry matures, a new and more effective and robust approach is necessary. We now have vast empirical data of blockchain governance. We have hundreds, if not thousands, of blockchain governance mechanisms, and some evidence of their outcomes and effects. These are the empirical foundations for a deeper understanding of blockchain governance — one that embraces the institutional diversity of blockchain ecosystems, and dissects its parts using a rigorous and consistent methodology.

Embracing blockchain institutional diversity

Our understanding of blockchain governance should not flatten or abstract away its complexity. Blockchains are polycentric systems, with many overlapping and nested centres of decision making. Even with equally-weighted one-token-one-vote blockchain systems, those systems are nested within other processes, such as a GitHub proposal process and the subsequent execution of upgrades. It is a mistake to flatten these nested layers, or to assume some layers are static.

Economics Nobel Laureate Elinor Ostrom and her colleagues studied thousands of complex polycentric systems of community governance. Their focus was on understanding how groups come together to collectively manage shared resources (e.g. fisheries and irrigation systems) through systems of rules. This research program has since studied a wide range of commons including culture, knowledge and innovation. This research has been somewhat popular with blockchain entrepreneurs, in particular through using the succinct design principles (e.g. ‘clearly defined boundaries’ and ‘graduated sanctions’) of robust commons to inform blockchain design. Commons design principles can help us to analyse blockchain governance (including whether blockchains are “Ostrom-Compliant”), or at least give us some points of reference to begin our search for better designs.

But beginning with the commons design principles has some limitations. It means we are once again beginning blockchain governance design by analogy (that blockchains are commons), rather than understanding blockchains as a novel institutional form. In some key respects blockchains resemble commons — perhaps we can understand, for instance, the security of the network as a common pool resource — but they also have features of states, firms, and clubs. We should therefore not expect that the design principles developed for common pool resources and common property regimes are directly transferable to blockchain governance.

Starting with Ostrom’s design principles means starting with the output of that research program, rather than applying the underlying methodology that led to that output. The principles were discovered through a meta-analysis of the study of thousands of different institutional rule systems. A deep blockchain-specific understanding must emerge from empirical analysis of existing systems.

We propose that while Ostrom’s design principles may not be applicable, a less-appreciated underlying methodology developed in her research is. In her empirical journey, Ostrom and colleagues at the Bloomington School developed a detailed methodological approach and rule classification system. While that system was developed to dissect the institutional complexity of the commons, it can also be used to study and achieve conceptual clarity in blockchain governance.

The Institutional Analysis and Development (IAD) framework, and the corresponding rule classification system, is an effective method for deep observation and classification of blockchain governance. Utilising this approach we can understand blockchains as a series of different nested and related ‘action arenas’ (e.g. the consensus process, a protocol upgrade, a DAO vote) where different actors engage, coordinate and compete under sets of rules. Each of these action arenas has different participants (e.g. token holders), different positions (e.g. delegated node), and different incentives (e.g. the risk of being slashed), which are constrained and enabled by rules.

Once we have identified the action arenas of a blockchain we can start to dissect the rules of each action arena. Ostrom’s 2005 book, Understanding Institutional Diversity, provides a detailed classification of rules that we can use for blockchain governance, including:

  • position rules on what different positions participants can hold in a given governance choice (e.g. governance token holder, core developer, founder, investor)
  • boundary rules on how participants can or cannot take part in governance (e.g. staked tokens required to vote, transaction fees, delegated rights)
  • choice rules on the different options available to different positions (e.g. proposing an upgrade, voting yes or no, delegating or selling votes)
  • aggregation rules on how inputs to governance are aggregated into a collective choice (e.g. one-token-one-vote, quadratic voting, weighting for different classes of nodes).

These rules matter because they change the way that participants interact (e.g. how or whether they vote) and therefore change the patterns that emerge from repeated governance processes (e.g. low voter turnout, voting deadlocks, wild token fluctuations). There have been some studies that have utilised the broad IAD framework and commons research insights for blockchain governance, but there has been no deep empirical analysis of the rule systems of blockchains using the underlying classification system.
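To illustrate what this classification can look like in practice, here is a minimal sketch (our own illustrative encoding, with hypothetical rules not drawn from any specific ecosystem) of a single action arena described by the four rule types listed above:

```python
from dataclasses import dataclass, field

@dataclass
class ActionArena:
    """One arena of blockchain governance, described by Ostrom-style rule types."""
    name: str
    position_rules: list[str] = field(default_factory=list)     # roles participants can hold
    boundary_rules: list[str] = field(default_factory=list)     # who may take part, and how
    choice_rules: list[str] = field(default_factory=list)       # options open to each position
    aggregation_rules: list[str] = field(default_factory=list)  # how inputs become a decision

# Hypothetical example: a DAO treasury vote.
treasury_vote = ActionArena(
    name="DAO treasury vote",
    position_rules=["governance token holder", "core developer", "delegate"],
    boundary_rules=["tokens must be staked to vote", "a proposal bond is required"],
    choice_rules=["propose a spend", "vote yes/no/abstain", "delegate voting power"],
    aggregation_rules=["one-token-one-vote", "quorum of 4% of token supply"],
)
print(treasury_vote.name, treasury_vote.aggregation_rules)
```

Encoding many arenas in a consistent schema like this is what makes cross-ecosystem comparison, and eventually simulation, possible.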

The opportunity

Today the key constraint in advancing blockchain governance is the lack of a standard language of rules with which to describe and map governance. Today in blockchain whitepapers these necessary rules are described in a vast array of different formats, with different underlying meanings. That hinders our capacity to compare and analyse blockchain governance systems, but can be remedied through applying and adopting the same foundational grammar. Developing a blockchain governance grammar is fundamentally an empirical exercise of observing and classifying blockchain ecosystems as they are, rather than imposing external design rules onto them. This approach doesn’t rely on analogy to other institutions, and is robust to new blockchain ecosystem-specific language and new experimental governance structures.

Rather than broadly describing classes of blockchain governance (e.g., proof-of-work versus proof-of-stake versus delegated-proof-of-stake), our approach begins with a common set of rules. All consensus processes have sets of boundary rules (who can propose a block? how is the block-proposer selected?), choice rules (what decisions do block-proposers make, such as the ordering of transactions?), incentives (what is the cost of proposing a bad block? what is the reward for proposing a block?), and so on. For voting structures, we can also examine boundary rules (who can vote?), position rules (how can a voter get a governance token?), choice rules (can voters delegate? who can they delegate to?) and aggregation rules (are vote weights symmetrical? is there a quorum?).

We can begin to map and compare different blockchain governance systems utilising this common language. All blockchain governance has this underlying language, even if today that grammar isn’t explicitly discussed. The output of this exercise is not simply a series of detailed case studies of blockchain governance, it is detailed case studies in a consistent grammar. That grammar — an Ostrom-Complete Grammar — enables us to define and describe any possible blockchain governance structure. This can ultimately be leveraged to build new complete governance toolkits, as the basis for simulations, and to design and describe blockchain governance innovations.

Setting the reserve price for the Tracer DAO Gnosis auction

With Peyman Khezr

Introduction: Selling multiple units of a homogeneous good in an auction is one way of determining the market price. Uniform-price auctions have been used in many real-world markets because of their price discovery property: all winning bidders pay the same price (either the highest losing bid or the lowest winning bid). The question is how a seller could compute an optimal reserve price in a uniform-price auction. First we should note that the literature suggests a positive reserve price is usually better than no reserve price, as it reduces the chance of underbidding by bidders. However, there are no clear criteria for computing the reserve price in a uniform-price auction. In this note we follow the criteria given for the second-price auction as the best approximation to the uniform-price auction.
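As a rough sketch of that second-price criterion: under the standard independent-private-values result, the optimal reserve r* solves r* - (1 - F(r*))/f(r*) = v0, where F is the distribution of bidder values and v0 is the seller's value for an unsold unit. The code below is our illustration only, using a hypothetical uniform value distribution rather than anything calibrated to the Tracer DAO auction, and solves this condition numerically by bisection.

```python
# Numerically solve the textbook second-price reserve condition
#   r - (1 - F(r)) / f(r) = v0
# by bisection, assuming the left-hand side crosses v0 once on [lo, hi].

def optimal_reserve(F, f, v0: float, lo: float, hi: float, tol: float = 1e-9) -> float:
    def g(r: float) -> float:
        return r - (1.0 - F(r)) / f(r) - v0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Hypothetical example: bidder values uniform on [0, 1], seller value 0.
F = lambda x: x      # CDF of U(0, 1)
f = lambda x: 1.0    # density of U(0, 1)
print(optimal_reserve(F, f, v0=0.0, lo=0.0, hi=1.0))  # ~0.5
```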

PDF available here

An economic theory of blockchain foundations

With Jason Potts, Darcy WE Allen, Sinclair Davidson and Trent MacDonald

Abstract: Blockchain (or crypto) foundations are nonprofit organizations that supply public goods to a crypto-economy. The standard theory of crypto foundations is that they are like governments with respect to a national or regional economy, i.e. raising a public treasury and allocating resources to blockchain-specific capital works, education, R&D, etc., to benefit the community and develop the ecosystem. We propose an alternative theory of what foundations do, namely that the treasury they manage is a moat to raise the cost of exit or forking, because the benefit of the fund is only available to those who stay with the chain. Furthermore, building and maintaining a large treasury is a costly signal that only a high-quality chain can afford (Spence 1973). We review these two models of the economic function of a blockchain foundation – (1) as a private government supplying local public goods, and (2) as a moat to raise the opportunity costs of exit. We outline the empirical predictions each theory makes, and examine the implications for optimal foundation design. We conclude that foundations should be funded by a pre-mine of tokens, and work best when large, visible, transparent, rigorously managed, and with a low burn rate.

Available at SSRN.