Dutton is losing the debate over nuclear energy right when we need it for AI

Published in Crikey

Peter Dutton is losing the debate over nuclear power. Even the pro-nuclear Financial Review, which ran an editorial last week wondering where the Coalition’s details were, agrees. And the Coalition’s proposal for the government to own the nuclear industry has made it look more like an election boondoggle than visionary economic reform.

It is starting to look like a big missed opportunity. 

Because in 2024, the question facing Australian governments is not only how to transition from polluting energy sources to non-polluting sources. It is also how to set up an economic and regulatory framework to service what is likely to be massive growth in electricity demand over the next decade.

The electrification revolution is part of that demand, with, for instance, the growing adoption of electric vehicles. But the real shadow on the horizon is artificial intelligence. The entire global economy is embedding powerful, power-hungry AI systems into every platform and every device. To the best of our knowledge, the current generation of AI follows a simple scaling law: the more data and the more powerful the computers processing that data, the better the AI. 

We should be excited about AI. It is the first significant and positive productivity shock we’ve had in decades. But the industry needs more compute, and more compute needs more energy.

That’s why Microsoft is working to reopen Three Mile Island — yes, that Three Mile Island — and has committed to purchasing all the electricity from the revived reactor to supply its AI and data infrastructure needs. Oracle plans to use three small nuclear reactors to power a massive new data centre. Amazon Web Services is buying, and plans to significantly grow, a data centre next to a nuclear plant in Pennsylvania.

Then there’s OpenAI. The New York Times reports that one of the big hurdles for OpenAI in opening US data centres is a lack of adequate electricity supply. The company is reportedly planning to build half a dozen data centres that would each consume as much electricity as the entire city of Miami. It is no coincidence that OpenAI chief Sam Altman has also invested in nuclear startups.

One estimate suggests that data centres could consume 9% of US electricity by 2030.

Dutton, to his credit, appears to understand this. His speech to the Committee for Economic Development of Australia (CEDA) last week noted that nuclear would help “accommodate energy intensive data centres and greater use of AI”. 

But the Coalition’s mistake has been to present nuclear (alongside a mixture of renewables) as the one big hairy audacious plan to solve our energy challenge. They’ve even selected the sites! Weird to do that before you’ve even figured out how to pay for the whole thing.

Nuclear is not a panacea. It is only appealing if it makes economic sense. Our productivity ambitions demand that energy be abundant, available and cheap. There has been fantastic progress in solar technology, for instance. But it makes no sense to eliminate nuclear as an option for the future. When the Howard government banned nuclear power generation in 1998, it accidentally excluded us from competing in the global AI data centre gold rush 26 years later.

Legalising nuclear power in a way that makes it cost-effective is the sort of generational economic reform Australian politicians have been seeking for decades. I say “in a way that makes it cost-effective” because it is the regulatory superstructure laid on top of nuclear energy globally that accounts for many of the claims that nuclear is uneconomic relative to renewable energy sources.

A Dutton government would have to not only amend the two pieces of legislation that specifically exclude nuclear power plants from being approved, but also establish dedicated regulatory commissions, frameworks and licensing schemes to govern the new industry — and in a way that encouraged nuclear power to be developed, not blocked. And all of this would have to be pushed through a presumably sceptical Parliament.

That would be a lot of work, and it would take time. But I’ve been hearing that nuclear power is “at least 10 to 20 years away” for the past two decades. Allowing (not imposing) nuclear as an option in Australia’s energy mix would be our first reckoning with the demands of the digital economy.

Albo’s reckless and draconian misinformation legislation completely undermines itself

Published in Crikey

The Albanese government’s misinformation legislation — a new draft of which was introduced in Parliament late last week — is one of the most extraordinary and draconian pieces of legislation proposed in Australia in the past few decades. It is so obviously misconceived, recklessly drafted and wilfully counterproductive that it undermines the entire argument against political misinformation.

The bill would grant the Australian Communications and Media Authority (ACMA) a vast regulatory authority over digital platforms such as Facebook and X, roughly similar to the sort of controls it imposes on broadcast television and radio. 

On the surface, these new powers seem modest. ACMA would have the ability to approve “codes” and make “standards” for the platforms’ anti-misinformation policies. It would impose record-keeping requirements and transparency obligations for fact-checking.

The government says the bill does not provide ACMA with the power to directly censor any particular internet content or any particular users. And that’s exactly right! Instead, the bill empowers ACMA to write codes of conduct and standards that require digital platforms to conduct censorship on its behalf.

Censorship done at arm’s length is still censorship. The point of the legislation is to make codes that are legally enforceable. We already have a voluntary disinformation code. The government is trying to launder the radicalism of this legislation through the complexities of delegation and regulatory outsourcing.

Anybody with a passing familiarity with the evolution of Australian policy can guess what happens next. When the (children’s) eSafety commissioner was established under the Abbott government, it was meant to target cyberbullying against children in response to specific requests. A decade later, the commissioner is trying to take content down from X globally because of the risk that some (adult) Australians may be using virtual private networks. We’re a long way from the original intent of the Parliament in 2015.

I’m not trying to make a slippery slope argument here (“this bill seems reasonable, but it’ll lead to something outrageous later on”). The misinformation bill is outrageous already. 

It targets misinformation as content that can cause serious harm: harm to the electoral process, harm to public health (and to the efficacy of public health measures), vilification, damage to critical infrastructure, and imminent harm to the Australian economy, including to financial markets or the banking system.

These categories are ripe for abuse. It is trivially easy to imagine how the concept of “serious harm” could be manipulated by this government or a future one. Let’s say we have a debate over voter ID at polling booths in the coming years. Would we really be better off having that debate mediated for us by Commonwealth regulators and Meta’s compliance department? If anything, that would be more damaging to trust in the electoral system than leaving the discussion unbridled.

Digital platform fact-checkers can be skittish and they tend to overreact, particularly when they have regulators breathing down their necks and when political tensions are elevated. Mark Zuckerberg admitted as much in August, saying Meta’s moderation had gone too far during the pandemic and the 2020 presidential election. But moments of high tension are when censorship does the most damage to trust in institutions and the political system. High tension is when we need free speech the most.

The inclusion of banking and financial market harm as regulated misinformation is bizarre. What’s the most generous interpretation of this provision? That Facebook might be able to stave off a bank run through judicious content deletion? There is no credible economic theory that says suppressing public discussion about the financial system makes it more resilient.

The inclusion of public health, too, is galling if we see it in the current political context. The Albanese government has declined to institute a full inquiry into the COVID-19 policy responses of state and federal governments. Something seems backwards here. We’re not getting a proper audit of what was true and what was not true during the pandemic, but we are getting laws that would prevent untruths from being shared?

The government has been incensed by misinformation since it lost the Voice referendum, convinced that its opponents were being misleading. But it is often a mistake to turn political arguments into concrete legislation. Imagine if the misinformation law had passed before the referendum. It would have been an absolute gift to the No campaign — what are they hiding from you? The Albanese government has not thought this through.

The fact that the misinformation bill excludes the mainstream press and government speech from its definition of misinformation is obviously self-interested. But more critically, this legislation reveals the incoherence of the anti-misinformation crusade. By trying to be precise about what speech is out of bounds, the government is asserting an authority over information it does not, and could not, have. We will absolutely regret putting the government in charge of public debate about the government.

Dutton’s anti-immigration stance is a symptom of a deep failure in Australian public policy

Published in Crikey

It wasn’t long after it lost the 2022 election that we started to hear how the Coalition would be focusing its attacks on immigration.

Anti-immigration is a crutch, one that political parties use to avoid facing up to Australia’s actual economic problems. Ramping up the rhetoric against migrants is not honest and courageous. It is evasive and cowardly.

By far the most galling example of this is housing. Reducing house prices is what passes as the respectable centrepiece of the Coalition’s argument for reducing immigration. Opposition Leader Peter Dutton put it this way in his May budget reply: “By getting the migration policy settings right, the Coalition can free up more houses for Australians.”

Dutton’s description of the problem is revealing. And weird. We don’t need to free up more houses, as if the policy question is how to shuffle around a fixed stock of houses until they are allocated to their most virtuous occupants. We need to create more houses.

We cannot build enough homes to support our growing population because we wrap building up in an absurd mesh of regulatory burdens that slow construction and raise prices. Despite a growing population, our home-building approvals have been virtually flat. Australia had fewer new houses approved in July 2024 than it did in July 2014.

The guilty regulations aren’t federal regulations, sure, but that makes it worse. It means the Coalition is attacking migrants and international students because it lacks the courage to stand up to local governments and planning bureaucrats.

Of course many of these same charges have to be levied against the Albanese government: its caps on international students will do nothing to slow house price growth. 

Dutton and his colleagues have also cited the burden of immigrants on infrastructure: roads, hospitals, public transport. Again, an alternative to cutting migration then could be to build more infrastructure to cope with a growing population, regardless of the origin of that growth. 

But infrastructure development in Australia is very expensive and very slow. One cause of this is high construction labour costs that are not justified by high productivity. Another is the high regulatory burden imposed on projects — particularly environmental regulation. And the Reserve Bank’s struggle to bring inflation back to target is making the infrastructure cost problem worse.

However, the Coalition is still shellshocked from its efforts under John Howard to introduce industrial relations reform. It is bruised from the on-again, off-again Australian Building and Construction Commission saga, reintroduced every time there is a Coalition government and eliminated every time Labor returns. 

It’s true Dutton has promised to reintroduce the ABCC if elected. On Wednesday this week he said he also wants to try — again — to reform section 487 of the Environment Protection and Biodiversity Conservation Act (which expands legal standing to activists so they can object to major projects) — a policy the Coalition had to give up in government because it couldn’t get it through the Senate.

But these policies aren’t going to boost infrastructure building in any serious way. We’ve tried the ABCC before, and section 487 is a convenient scapegoat for a much deeper problem. A 2013 Productivity Commission report found that development approvals for major projects were rife with “unnecessary complexity and duplicative processes”, “lengthy approval timeframes” and a “lack of regulatory certainty and transparency in decision making”. Reforming section 487 is tinkering. It is not the root-and-branch regulatory reform required to build the needed infrastructure at scale. That would be hard. Pointing at migrants is easy.

In truth, the level of migration we’re experiencing is not unexpected or surprising. We’re on roughly the same trajectory of increasing permanent and long-term arrivals that we have been since the 1990s. Remember that our migration numbers dropped to virtually zero in 2020 and 2021. You’d expect there was a lot of deferred migration as a result. But we are only back on the pre-pandemic trend.

And while permanent and long-term arrival numbers are larger than ever before, so is our economy. And so is the need in the economy for workers. The fact the Australian political class cannot get the settings right for economic growth — and so has to shunt the blame for its own failure onto migrants and students — is damning. Dutton’s anti-immigration stance is a symptom of a very deep failure in Australian public policy.

Jim Chalmers’ spray at the RBA is embarrassing

Published in Crikey

There’s nothing in the Reserve Bank Act, or in the concept of central bank independence more generally, that says the treasurer can’t be as critical of the RBA as he likes. There’s a lot of silly hand-wringing about “inappropriateness” every time this happens. Our economists should not be so delicate. A government at war with its own money printer is a sign of the bank’s independence, rather than a lack of it.

But Jim Chalmers’ salvo against the RBA this close to an election is embarrassing and desperate. Foreshadowing the anaemic GDP growth figures released yesterday, the treasurer declared that the fault is all with the RBA: it is “smashing the economy” by keeping interest rates high to slow inflation. 

It’s not unusual for governments to be frustrated when monetary policy contradicts their political strategy. It is unusual for a treasurer to so aggressively try to offload blame for sluggish growth onto a central bank whose governor he appointed and whose mandate and approach he endorsed less than a year ago.

The problem for the Albanese government is simple. There is a fundamental tension between the government’s election strategy (to relieve the pressure of inflation on household budgets through fiscal transfers and try to prop up the economy with government spending) and the RBA’s requirement to get inflation down — inflation that is exacerbated by the government’s fiscal transfers and expenditure. So we have had higher interest rates for longer while the Albanese government has tried to shield voters from the impact of those higher rates while keeping spending high.

Chalmers knows full well that monetary and fiscal policy can work against each other. Back during the global financial crisis, an internal government meeting between treasurer Wayne Swan, prime minister Kevin Rudd, treasury secretary Ken Henry, and “senior staff” specifically discussed how, if government spending increased, the RBA would likely keep interest rates higher than it would otherwise (I wrote about the implications of this meeting for ABC’s The Drum here). Chalmers was Swan’s principal adviser when that meeting occurred. 

If only the government and the RBA could row in the same direction. But the blame for policy divergence has to rest entirely with the government. RBA policy choices are strictly bounded by its legislative objectives and its extremely limited set of tools. Chalmers has a lot more discretion.

We might have some sympathy for Chalmers’ predicament. It must be galling to see other central banks starting to reduce rates. Voters always blame the elected government for a poor economy. They are right to. Ultimately it is Parliament that has the most tools to boost productivity and through that economic growth. 

But there’s no time before the election to turn private sector growth around and there’s seemingly no appetite within the government to resolve the fiscal-monetary contradiction. Chalmers’ comments on Sunday came immediately after Anthony Albanese’s Saturday announcement of further “cost of living” relief in the form of increased rent assistance payments.

After the economic data this week, there’s a good chance that the RBA will change tack soon. But you can see what Chalmers is trying to do: shift blame onto the bank for the economy’s poor performance generally.

I started by observing that there’s nothing wrong, in principle, with the treasurer complaining about RBA policy. Yet this is a sensitive moment for the central bank. At the same time as Chalmers is accusing the bank of economic recklessness, he is also trying to finalise the overhaul of its governance, splitting the board into separate monetary policy and governance boards. The treasurer wants this reform to be bipartisan. After this week’s events, the Coalition should insist that any reform and associated personnel choices wait until election season is over, whoever wins.

Telegram founder’s arrest is radical — if it’s a crime to build privacy tools, there will be no privacy

Published in Crikey

The arrest of the Telegram CEO Pavel Durov in France this week is extremely significant. It confirms that we are deep into the second crypto war, where governments are systematically seeking to prosecute developers of digital encryption tools because encryption frustrates state surveillance and control. While the first crypto war in the 1990s was led by the United States, this one is being led jointly with the European Union — now a regulatory superpower in its own right.

What these governments are insisting on, one criminal case at a time, is no less than unfettered surveillance over our entire digital lives.

Durov, a former Russian, now French citizen, was arrested in Paris on Saturday and has now been indicted. You can read the French accusations here. They include complicity in drug possession and sale, fraud, child pornography and money laundering. These are extremely serious crimes — but note that the charge is complicity, not participation. The meaning of that word “complicity” seems to be revealed by the last three charges: Telegram has been providing users with a “cryptology tool” unauthorised by French regulators.

In other words, the French claim is that Durov developed a tool — a chat program that allowed users to turn on some privacy features — used by millions of people, and some small fraction of those millions used the tool for evil purposes. Durov is therefore complicit in that evil, not just morally but legally. This is an incredibly radical position. It is a charge we could level at almost every piece of digital infrastructure developed over the past half century, from Cloudflare to Microsoft Word to TCP/IP.

There have been suggestions (for example by the “disinformation analysts” cited by The New York Times this week) that Telegram’s lack of “content moderation” is the issue. There are enormous practical difficulties with having humans or even AI effectively moderate millions of private and small group chats. But the implication here seems to be that we ought to accept — even expect — that our devices and software are built for surveillance and control from the ground up: both the “responsible technology” crowd and law enforcement believe there ought to be a cop in every conversation. 

It is true that Telegram has not always been a good actor in the privacy space, denigrating genuinely secure-by-design platforms like Signal while granting its own users only limited privacy protection. Telegram chats are not fully or always encrypted, which leaves users exposed to both state surveillance and non-state criminals. Wired magazine has documented how the Russian government has been able to track users down for their apparently private Telegram conversations. For that matter, it would not be surprising to learn that there are complex geopolitical games going on here between France and Russia.

But it would be easier to dismiss the claims made against Durov as particular to Telegram, or dependent on some specific action of Durov as an individual, if he were alone in being targeted as an accomplice for criminal acts simply because he developed privacy features for the digital economy.

The Netherlands has imprisoned the developer Alexey Pertsev, holding him responsible for the malicious use of a cryptocurrency privacy tool he developed, Tornado Cash. Again, Pertsev was not laundering money; he built a tool to protect every user’s privacy. The United States has arrested the developers of a Bitcoin privacy product, Samourai Wallet, also for facilitating money laundering.

The arrest of Durov suggests that the law enforcement dragnet is being widened from private financial transactions to private speech. If it is a crime to build privacy tools, there will be no privacy.

Taxpayers should not bail out journalism. They do so already!

Published in Crikey. Part of a debate about whether taxpayers should fund journalism.

The case for subsidising journalism is weak. The case for subsidising journalism more than we already do is incredibly weak.

The government already directly pays for journalism through the ABC ($1.1 billion in the 2022-23 budget) and SBS ($316 million). With my colleague Sinclair Davidson I am famously sceptical that public broadcasting is a good idea. (Maybe infamously.) But put the argument for privatising the ABC and SBS aside. Policy choices do not exist in a vacuum. Any case for journalism subsidies should first explain why our already significant expenditure has failed, and whether there are any ways to reform our public broadcasters to more directly align with our policy goals. There is a lot the ABC and SBS do that isn’t journalism — would some of it be better redirected?

It is true that democracy relies on a thriving public sphere, of which news and journalism are critical parts. But on this count, Australian democracy doesn’t seem to be doing too badly. In the digital age, our problem as citizens and voters is not an information deficit but an information surplus — there is an enormous amount of online and offline content about the actions of the Australian government and civil society that we can consume. Digging through that content is the real challenge. Usually, we say that governments should subsidise things if the market underprovides for them. What is underprovided here? How should we measure it?

The real struggle is within media firms. Having lost their monopoly over advertising to a richer, more diverse, and more complex digital ecosystem, they find themselves competing to produce an extremely low-margin product while carrying legacy cost structures built on high labour and production costs. I understand that the media industry has gone through 20 years of industrial pessimism. But at the same time, there are now senior journalists who have experienced nothing but disruption and have thrived within it. Too often policymakers confuse protecting established companies with supporting what they produce.

Practical considerations also undermine the case for journalism subsidies.

Almost any policy framework to subsidise journalism favours the large players that already dominate the Australian institutional media. Crikey has been arguing for a long time that News Corp pays less tax than it ought to. Guess who the biggest private beneficiaries of subsidised journalism are?

Maybe we can imagine a way to only favour the journalism we want, or to only favour smaller firms. But a policy framework that tried to discriminate against (say) the conservative talking shop ADH TV to only fund a left-leaning equivalent would merely invite the same government interference that the ABC labours under. A government unhappy with coverage could threaten to take away a media outlet’s privileges.

Government-subsidised journalism — whether through public broadcasting, tax breaks or direct subsidies — is fundamentally misconceived. It makes civil society the handmaiden of the state, rather than the other way around.

But in an important sense, the sort of policy rationalism I’m presenting here is beside the point. The question before policymakers is not whether subsidising journalism is a good use of taxpayer funds. The question is what to do with the Morrison government’s News Media Bargaining Code now that Meta is refusing to play ball. 

The code is one of the most outrageous examples of rent-seeking in the history of Australian public policy. It is simply one sector using the government to directly extort money from another sector of the economy. And on the flimsiest pretence too: we have been asked to believe that allowing users to share news links with friends is somehow a violation of intellectual property.

The only “bargaining” that is going on here is between the media giants and the government. Meta and Google are the objects of the bargaining, not the participants. 

The irony is that, if anything, the digital firms being targeted have been responsible for the sharpest expansion of the public sphere since the Gutenberg press. If democracy is first and foremost about citizen engagement, then they have been great for democracy.

Scratch the whole thing and start over. Media companies never had a natural right to advertising dollars and they have absolutely no right to funds forcibly extracted from companies in another sector. If we think the market is underproviding journalism then let’s see if our public broadcasters can spend their budgets better. At the very least, it is time to draw a line under this shameful, rent-seeking episode.

The institutional economics of quantum computing

With Jason Potts, first published on Medium

What happens when quantum computing is added to the digital economy?

The economics of quantum computing starts from a simple observation: in a world where search is cheaper, more search will be consumed. Quantum computing offers potentially dramatic increases in the ability to search through data. Searching through an unstructured list of 1,000,000 entries, a ‘classical’ computer would take up to 1,000,000 steps. For a mature quantum computer, the search would require only about 1,000 steps.
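To make the scale of that claim concrete, here is a back-of-the-envelope sketch (ours, not from the original essay) of the quadratic speedup behind it: classical unstructured search scales roughly with the number of entries, while Grover-style quantum search scales roughly with its square root. Constants and error-correction overheads are deliberately ignored.

```python
# Rough illustration of the quadratic speedup for unstructured search.
# This compares scaling only; real-world constants are ignored.
import math

def classical_search_steps(n_entries: int) -> int:
    # Worst case for a classical scan: check every entry.
    return n_entries

def grover_search_steps(n_entries: int) -> int:
    # Grover-style quantum search needs on the order of sqrt(N) queries.
    return math.isqrt(n_entries)

n = 1_000_000
print(classical_search_steps(n))  # 1000000
print(grover_search_steps(n))     # 1000
```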

Bova et al. (2021) describe this capability generally as a potential advantage at solving combinatorics problems. The goal in combinatorics problems is often to search through all possible arrangements of a set of items to find a specific arrangement that meets certain criteria. While the cost of error correction or quantum architecture might erode the advantage quantum computers have in search, this is more likely to be an engineering hurdle to be overcome than a permanent constraint.

Economics focuses on exchange. To our knowledge, no analysis of the economic impact of quantum computing has focused on the effect that quantum computing has on the practice and process of exchange. Where there have been estimates of the economic benefits of quantum computing, those analyses have focused on the possibility that this technology might increase production through scientific discovery or by making production processes more efficient (for example, by solving optimisation problems). So what impact will more search have on exchange?

In economics, search is a transaction cost (Stigler 1961, Roth 1982) that raises the cost of mutually beneficial exchange. Buyers have to search for potential sellers and vice versa. Unsurprisingly, much economic organisation is structured around reducing search costs. Indeed, it is the reduction of search costs that structures the digital platform economy. Multi-sided markets like eBay match buyers with sellers at global scale, allowing for trades to occur that would not be possible otherwise due to the high cost of search.

Quantum computing offers a massive reduction in this form of transaction cost. And all else being equal, we can expect that a massive reduction in search costs would have a correspondingly large effect on the structure of economic activity. For example, search costs are one reason that firms (and sub-firm economic agents like individuals) prefer to own resources rather than access them over the market. When you have your own asset, it is quicker to utilise that asset than to seek out a market counterpart who will rent it to you.

Lowering search costs favours outsourcing rather than ownership (‘buy’ in the market, rather than ‘make’ in-house). Lower search costs also have a globalising effect — they allow economic actors to do more search, that is, to explore a wider space for potential exchange. This has the effect of increasing the size of the market, which (as Adam Smith tells us) increases specialisation and the gains from trade. In this way, quantum computing powers economic growth.

Typically, specialisation and globalisation increase the winner-take-all effect — outsized gains to economic actors at the top of their professions. However, a countervailing mechanism is that cheaper search also widens the opportunities to undercut superstar actors. This suggests an important implication of greater search for global inequality: it is easier to identify resources outside a local area. That should reduce rents and result in more producers (i.e. workers) receiving the marginal product of their labour as determined by global prices, rather than local prices. In this way, quantum computing drives economic efficiency.

Quantum and the digital stack

Of course, other transaction costs (the cost of making the exchange, the cost of contract enforcement, etc.) can reduce the opportunities for faster search to disrupt existing patterns of economic activity. Here we argue that quantum is particularly effective in an environment of digital (or digitised) trade and production — in the domain of the information economy.

The process of digitisation is the process of creating more economic objects and, through the use of distributed ledgers and digital twins, forming more and more precise property rights regimes. In Berg et al (2018), we explored one of the implications of this explosion in objects with precisely defined property rights. We argued that increasingly precise and secure digital property rights over objects would allow artificially intelligent agents to trade assets on behalf of their users, facilitating barter-like exchanges and allowing a greater variety of assets to be used as ‘money’. Key to achieving this goal is deep search across a vast matrix of assets, where the optimal path between two assets has to be calculated according to the pre-defined preferences not only of the agents making the exchange, but of each of the holders of the assets that form the path.
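To picture the path-finding problem at stake, here is a deliberately simplified, classical sketch (our illustration; the asset names and the swap graph are invented, and a plain shortest-path search stands in for whatever deep-search machinery an agent would actually use) of an agent looking for a chain of swaps from the asset its user holds to the asset its user wants.

```python
# Toy model: find a chain of pairwise swaps linking two assets, given which
# swaps counterparties are currently willing to make.
from collections import deque

def exchange_path(swap_graph, have, want):
    """Breadth-first search for a shortest chain of pairwise swaps."""
    queue = deque([[have]])
    seen = {have}
    while queue:
        path = queue.popleft()
        if path[-1] == want:
            return path
        for nxt in swap_graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no chain of counterparties links the two assets

# Hypothetical willingness-to-swap graph: asset -> assets it can be swapped for.
swaps = {
    "concert_ticket": ["stablecoin"],
    "stablecoin": ["concert_ticket", "car_lease_token"],
    "car_lease_token": ["apartment_bond_token"],
}
print(exchange_path(swaps, "concert_ticket", "apartment_bond_token"))
# -> ['concert_ticket', 'stablecoin', 'car_lease_token', 'apartment_bond_token']
```

In practice each hop would also have to respect the preferences of the asset holders along the path, which is what makes the search space so large and cheap search so valuable.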

This illuminates one of the ways in which quantum interacts with the web3 tech stack. While some quantum computation scientists have identified the opportunity for quantum to be used in AI training, we see the opportunity for quantum to be used by AI agents to search for exchange with other AI agents: an exchange-theoretic rather than production-centric understanding of quantum’s contribution to the economy. The massive technological change we are experiencing is both cumulative and non-sequential — rapid developments in other parts of the tech stack further drive demand for quantum compute. This is the digital quantum flywheel effect.

Compute as a commodity

Compute is a commodity and follows the rules of commodity economics. Just as buyers of coal or electricity are ultimately buying the energy embodied in those goods, buyers of compute are ultimately buying a reduction in the time it takes to perform a computational task (Davies 2004). There are computational tasks where classical computers are superior (either cheaper or faster), tasks where quantum computers are (or could be) superior, and tasks where both quantum and classical computers can satisfy demand. Users of compute should be indifferent as to the origin of the compute they consume: they have specific computational tasks they wish to perform, subject to budget and time constraints, and they should be indifferent to the mixture of classical and quantum computing that best satisfies those needs and constraints.
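A stylised sketch of that indifference (our construction; the provider names, prices and runtimes are invented): a user with a deadline and a budget simply picks whichever backend, classical or quantum, completes the task within both constraints at the lowest cost.

```python
# Toy model of backend choice: the user cares about time and money, not about
# whether the underlying machine is classical or quantum.
def cheapest_backend(offers, deadline_hours, budget):
    """offers: list of (name, hours, cost). Return the cheapest feasible offer, or None."""
    feasible = [o for o in offers if o[1] <= deadline_hours and o[2] <= budget]
    return min(feasible, key=lambda o: o[2], default=None)

offers = [
    ("classical_cluster", 6.0, 400.0),   # slower but cheap
    ("quantum_service", 1.5, 900.0),     # fast but expensive
]
print(cheapest_backend(offers, deadline_hours=8, budget=1000))  # classical wins on cost
print(cheapest_backend(offers, deadline_hours=2, budget=1000))  # only quantum meets the deadline
```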

This indifference between classical and quantum has significant consequences for how quantum computing is distributed between firms in the economy — and, indeed, between geopolitical powers. At this stage in the development of quantum computing, the major open question is how large the space of computational tasks best suited to classical computing is relative to the space best suited to quantum computing.

For computational tasks where classical computers dominate, compute is already massively decentralised — not just with multiple large cloud services (AWS, Google etc) but in the devices on our desks and in our pockets. There is no barrier to competition in classical compute, nor any risk of one geopolitical actor dominating. Where bottlenecks in classical compute do emerge is in the production networks for semiconductor chips — a known problem with a known menu of policy stances and responses. Similarly, no such risk emerges around computational tasks where classical and quantum systems are equally suited.

The salient question is whether a natural monopoly will arise in quantum compute. This could happen as a result of bottlenecks (say, of scarce minerals, or caused by market structure as in the semiconductor chip industry), or as an outcome of competition in quantum computing development. As an example, one argument might be that, because quantum compute power scales exponentially with the number of qubits, a geopolitical or economic actor that establishes a lead in qubit deployment could maintain that lead indefinitely due to compounding effects. This is a quantum takeoff analogous to the hypothesised ‘AI takeoff’ (see Bostrom 2014).

Several factors militate against this. The diversity of architectures for quantum computing being built suggests that the future is likely to be highly competitive, not merely between individual quantum compute systems but between classes of architectures (e.g. superconducting, ion trap, photonics). While quantum compute research and development is very high cost, it is proceeding widely and with significant geographical dispersion. There are at least eight distinct major systems or architectures for quantum computing, seven of which have successfully performed basic computational tasks such as the control of qubits (see the survey by Bremner et al 2024).

Nor is there any obvious concern that first-mover advantage implies competitive lock-in. Quantum compute is quite unlike AI safety scenarios, where ‘superintelligence’ or ‘foom’ is hypothesised to lead to a single monopolistic AI as a result of the superintelligence using its capabilities to 1) develop itself exponentially and 2) act to prevent competitors emerging. Quantum computing is and will be, for the foreseeable future, a highly specialised toolset for particular tasks, not a general program that could pursue world domination either autonomously or under the direction of a bad actor.

One significant caveat to this analysis is that the capabilities of quantum compute might have serious downstream consequences for the rest of the economy. The exponential speedup at factoring offered by quantum compute could undermine much of the cryptography that protects global commerce, and underlines the need for the development and deployment of post-quantum cryptography. We have argued elsewhere that the signals for the emergence of quantum supremacy in code breaking will emerge in the market prices of cryptocurrency (Rohde et al 2021). There is a significant risk mitigation task ahead of us to adopt post-quantum cryptography. It is a particularly difficult task because, while the danger is concrete, the timeline for a quantum breakthrough is highly uncertain. Nonetheless, the task of migrating between cryptographic standards is akin to many other cybersecurity mitigations that have been performed in the digital economy, and while challenging should not be seen as existential.

Instead, the institutional economic view of quantum computing emphasises the possibilities of this new technology to radically grow the space for market exchange — particularly when we understand the possibility of quantum computing as co-developing alongside distributed ledgers, smart contracts (that is, decentralised digital assets) and artificial intelligence. Quantum computing lowers the cost and increases the performance of economic exchange across an exponentially growing ecosystem of digital property rights. It will be an important source of future economic value from better economic institutions.

References

Berg, Chris, Sinclair Davidson, and Jason Potts. ‘Beyond Money: Cryptocurrencies, Machine-Mediated Transactions and High-Frequency Hyperbarter’, 2018, 8.

Bremner, Michael, Simon Devitt, and Eser Zerenturk. ‘Quantum Algorithms and Applications’. Office of the NSW Chief Scientist & Engineer, March 2024.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Reprint edition. OUP Oxford, 2014.

Bova, Francesco, Avi Goldfarb, and Roger G. Melko. ‘Commercial Applications of Quantum Computing’. EPJ Quantum Technology 8, no. 1 (December 2021): 1–13. https://doi.org/10.1140/epjqt/s40507-021-00091-1.

Davies, Antony. ‘Computational Intermediation and the Evolution of Computation as a Commodity’. Applied Economics, June 20, 2004. https://www.tandfonline.com/doi/abs/10.1080/0003684042000247334.

Rohde, Peter P., Vijay Mohan, Sinclair Davidson, Chris Berg, Darcy W. E. Allen, Gavin Brennen, and Jason Potts. ‘Quantum Crypto-Economics: Blockchain Prediction Markets for the Evolution of Quantum Technology’, 2021, 12.

Roth, Alvin E. ‘The Economics of Matching: Stability and Incentives’. Mathematics of Operations Research 7, no. 4 (1982): 617–28.

Stigler, George J. ‘The Economics of Information’. Journal of Political Economy 69, no. 3 (1961): 213–25.

How Web3’s ‘programmable commerce layer’ will transform the global economy

World Economic Forum, 28 November 2022. Originally published here. With Justin Banon, Jason Potts and Sinclair Davidson.

The world economy is in the early stages of a profound transition from an industrial to a digital economy.

The industrial revolution began in a seemingly unpromising corner of northwest Europe in the early 1800s. It substituted machine power for animal and human power, organized around the factory system of economic production. Soon, it created the conditions to lift millions of humans from a subsistence economy into a world of abundance.

The digital economy began with similarly unpromising origins when Satoshi Nakamoto published his Bitcoin white paper to an obscure corner of the internet in late 2008. We call this the origin of Web3 now – with the first blockchain – but this revolution traces back decades as the slow economic application of scientific and military technologies of digital communication. The first wave of innovation was in computers, cryptography and inter-networking – Web1.

By the late 1990s, so-called “e-commerce” emerged as new companies, which soon became global platforms, built technologies that enabled people to find products, services and each other through new digital markets. That was Web2, the dot-com age of social media and tech giants.

But the actual age of digital economies was not down to these advances in information and communications technologies but to a very different type of innovation: the manufacture of trust. And blockchains industrialize trust.

Industrial economies industrialized economic production by combining physical innovations, such as steam engines, with institutional technologies, such as the factory system, that organize people and machines for large-scale production. What the steam engine did for industry, the trust engine will do for society. The fundamental factor of production that a digital economy economizes on is trust.

Blockchain is not a new tool. It is a new economic infrastructure that enables anyone, anywhere, to trust the underlying facts recorded in a blockchain, including identity, ownership and promises represented in smart contracts.

These economic facts are the base layer of any economy. They generally work well in small groups – a family, village or small firm – but the verification of these facts and monitoring of how they change becomes increasingly costly as economic activity scales up.

Layers of institutional solutions to trust problems have evolved over perhaps thousands of years. These are deep institutional layers – the rule of law, principles of democratic governance, independence of bureaucracy etc. Next, there are administrative layers containing organizational structures – the public corporation, non-profits, NGOs and similar technologies of cooperation. Then we have markets – institutions that facilitate exchange between humans.

It has been the ability to “truck, barter and exchange” over increasingly larger markets that has catapulted prosperity to the levels now seen around the world.

Information technology augments our ability to interact with other people at all levels – economic, social and political. It has expanded our horizons. In the mid-1990s, retail went onto the internet. The late 1990s saw advertising on the internet. The mid-2000s saw news, information and friendship groups migrate to the internet. Since their advent in 2008, cryptocurrencies and natively digital financial assets have also come onto the internet. The last remaining challenge is to put real-world (physical) assets onto the internet.

The technology to do so already exists. Too many people think of non-fungible tokens (NFTs) as trivial JPEGs. But NFTs are not just collectable artworks; they are an ongoing experiment in the evolution of digital property rights. They can represent a certificate of ownership or be a digital twin of a real-world asset. They enable unique capital assets to become “computable,” that is, searchable, auditable and verifiable. In other words, they can be transacted in a digital market environment with a low cost of trust.

The internet of things can track real-world assets in real-time. Oracles can update blockchains regarding the whereabouts of physical assets being traded on digital markets. For example, anyone who has used parcel tracking over the past two years has seen an early version of this technology at work.

Over the past few years, people have been hard at work building all that is necessary to replicate real-world social infrastructure in a digital world. We now have money (stablecoins), assets (cryptocurrencies e.g. Bitcoin), property rights (NFTs) and general-purpose organizational forms (decentralized autonomous organizations (DAOs)). Intelligent people are designing dispute-resolution mechanisms using smart contracts. Others are developing mechanisms to link the physical and digital worlds (more) closely.

When will all this happen? The first-mover disadvantage associated with technological adoption has been overcome, mostly by everyone having had to adopt new practices and technology simultaneously. Working, shopping and even entertaining online are now well-understood concepts. Digital connectedness is already an integral part of our lives. A technology that enhances that connectedness will have no difficulty in being accepted by most users.

It is very easy to imagine an interconnected world where citizens, consumers, investors and workers seamlessly live their lives transitioning between physical and digital planes at will before the decade concludes.

Such an economy is usefully described as a digital economy because that is the main technological innovation. And the source of economic value created is rightly thought of as the industrialization of trust, which Web3 technologies bring. But when the physical parts of the economy and the digital parts become completely and seamlessly joined, this might well be better described as a “computable economy.” A computable economy has low-cost trust operating at global market scale.

The last part of this system that needs to fall into place is “computable capital.”

Now that we can tokenize all the world’s physical products and services into a common, interoperable format; list them within a single, public ledger; and enable market transactions with low cost of trust, which are governed by rules encoded within and enforced by the underlying substrate, what then?

Then, computable capital enables “programmable commerce,” but more than that – it enables what we might call a “Turing-complete economy.”

Why a US crypto crackdown threatens all digital commerce

Australian Financial Review, 10 August 2022

The US government’s action against the blockchain privacy protocol Tornado Cash is an epoch-defining moment, not only for cryptocurrency but for the digital economy.

On Tuesday, the US Treasury Department placed sanctions on Tornado Cash, accusing it of facilitating the laundering of cryptocurrency worth $US7 billion ($10.06 billion) since 2019. Some $455 million of that is connected to a North Korean state-sponsored hacking group.

Even before I explain what Tornado Cash does, let’s make it clear: this is an extraordinary move by the US government. Sanctions of this kind are usually put on people – dictators, drug lords, terrorists and the like – or specific things owned by those people. (The US Treasury also sanctioned a number of individual cryptocurrency accounts, in just the same way as they do with bank accounts.)

But Tornado Cash isn’t a person. It is a piece of open-source software. The US government is sanctioning a tool, an algorithm, and penalising anyone who uses it, regardless of what they are using it for.

Tornado Cash is a privacy application built on top of the ethereum blockchain. It is useful because ethereum transactions are public and transparent; any observer can trace funds through the network. Blockchain explorer websites such as Etherscan make this possible for amateur sleuths, but there are big “chain analysis” firms that work with law enforcement that can link users and transactions incredibly easily.

Tornado Cash severs these links. Users can send their cryptocurrency tokens to Tornado Cash, where they are mixed with the tokens of other Tornado Cash users and hidden behind a state-of-the-art encryption technique called “zero knowledge proofs”. The user can then withdraw their funds to a clean ethereum account that cannot be traced to their original account.
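For readers who want a feel for the mechanics, the sketch below is a toy model of the deposit-and-withdraw pattern described above. It is our simplification, not Tornado Cash’s actual contracts: real deposits go into a Merkle tree and withdrawals carry a zero-knowledge proof, whereas here a plain hash lookup stands in for that proof.

```python
# Toy mixer: deposits publish only a commitment; withdrawals prove knowledge of
# the underlying secret and burn a "nullifier" so each note is spent only once.
import secrets
from hashlib import sha256

def h(*parts: bytes) -> bytes:
    return sha256(b"".join(parts)).digest()

class ToyMixer:
    def __init__(self):
        self.commitments = set()        # published at deposit time
        self.spent_nullifiers = set()   # prevents withdrawing the same note twice

    def deposit(self):
        # The depositor keeps (nullifier, secret) private; only the commitment is public.
        nullifier, secret = secrets.token_bytes(32), secrets.token_bytes(32)
        self.commitments.add(h(nullifier, secret))
        return nullifier, secret

    def withdraw(self, nullifier: bytes, secret: bytes, recipient: str) -> bool:
        # In the real protocol a zero-knowledge proof shows the commitment is in
        # the set without revealing which one; this toy check reveals it directly.
        if h(nullifier, secret) not in self.commitments:
            return False
        if h(nullifier) in self.spent_nullifiers:
            return False  # note already withdrawn
        self.spent_nullifiers.add(h(nullifier))
        print(f"pay 1 unit to fresh address {recipient}")
        return True

mixer = ToyMixer()
note = mixer.deposit()
assert mixer.withdraw(*note, recipient="0xNEW_ADDRESS")
```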

Obviously, as the US government argues, there are bad reasons that people might want to use such a service. But there are also very good reasons why cryptocurrency users might want to protect their financial privacy – commercial reasons, political reasons, personal security, or even medical reasons. One mundane reason that investment firms used Tornado Cash was to prevent observers from copying their trades. A more serious reason is personal security. Wealthy cryptocurrency users need to be able to obscure their token holdings from hackers and extortionists.

Tornado Cash is a tool that can make these otherwise transparent blockchains more secure and more usable. No permission has to be sought from anyone to use Tornado Cash. The Treasury Department has accused Tornado Cash of “laundering” more than $US7 billion, but that seems to be the total amount of funds that have passed through the service, not the funds connected to unlawful activity. There is no reason to believe that the Tornado Cash developers or community solicited the business of money launderers or North Korean hackers.

Now American citizens are banned from interacting with this open-source software at all. It is a clear statement from the world’s biggest economy that online privacy tools – not just specific users of those tools, but the tools themselves – are the targets of the state.

We’ve been here before. Cryptography was once a state monopoly, the exclusive domain of spies, diplomats and code breakers. Governments were alarmed when academics and computer scientists started building cryptography for public use. Martin Hellman, one of those who invented public key cryptography in the 1970s (along with Whitfield Diffie and Ralph Merkle), was warned by friends in the intelligence community his life was in danger as a result of his invention. In the so-called “crypto wars” of the 1990s, the US government tried to enforce export controls on cryptographic algorithms.

One of the arguments made during those political contests was that code was speech; as software is just text and lines of code, it should be protected by the same constitutional protections as other speech.

GitHub, which is owned by Microsoft, is a global repository for open-source software. Almost immediately after the Treasury sanctions were introduced this week, GitHub closed the accounts of Tornado Cash developers. Not only did this remove the project’s source code from the internet, it meant GitHub and Microsoft were implicitly abandoning the long-fought principle that code needs to be protected as a form of free expression.

An underappreciated fact about the crypto wars is that if the US government had been able to successfully restrict or suppress the use of high-quality encryption, then the subsequent two decades of global digital commerce could not have occurred. Internet services simply would not have been secure enough. People such as Hellman, Diffie and Merkle are now celebrated for making online shopping possible.

We cannot have secure commerce without the ability to hide information with cryptography. By treating privacy tools as if they are prohibited weapons, the US Treasury is threatening the next generation of commercial and financial digital innovation.

Reliable systems out of unreliable parts

Amsterdam Law & Technology Institute Forum, 27 July 2022. Originally published here.

How we understand where something comes from shapes where we take it, and I’m now convinced we’re thinking about the origins of blockchain wrong.

The typical introduction to blockchain and crypto for beginners – particularly non-technical beginners – gives Bitcoin a sort of immaculate conception. Satoshi Nakamoto suddenly appears with a fully formed protocol and disappears almost as suddenly. More sophisticated introductions will observe that Bitcoin is an assemblage of already-existing technologies and mechanics – peer to peer networking, public-key cryptography, the principle of database immutability, the hashcash proof of work mechanism, some hand-wavey notion of game theory – put together in a novel way. More sophisticated introductions again will walk through the excellent ‘Bitcoin’s academic pedigree’ paper by Arvind Narayanan and Jeremy Clark that guides readers through the scholarship that underpins those technologies.

This approach has many weaknesses. It makes it hard to explain proof-of-stake systems, for one. But what it really misses – what we fail to pass on to students and users of blockchain technology – is the sense of blockchain as a technology for social systems and economic coordination. Instead, it comes across much more like an example of clever engineering that gave us magic internet money. We cannot expect every new entrant or observer of the industry to be fully signed up to the vision of those that came before them. But it is our responsibility to explain that vision better.

Blockchains and crypto are the heirs of a long intellectual tradition of building fault-tolerant distributed systems using economic incentives. The problem this tradition seeks to solve is: how can we create reliable systems out of unreliable parts? In that simply stated form, this question serves as a mission statement not just for distributed systems engineering but for all of social science. In economics, for example, Peter Boettke and Peter Leeson have called for a ‘robust political economy’, or the creation of a political-economic system robust to the problems of information and incentives. In blockchain we see computer engineering converge with the frontiers of political economy. Each field is built on radically different assumptions but they have come to the same answers.

So how can we tell an alternative origin story that takes beginners where they need to go? I see at least two historical strands, each of which take us down key moments in the history of computing.

The first starts with the design of fault tolerant systems shortly after the Second World War. Once electronic components and computers began to be deployed in environments with high needs for reliability (say, for fly-by-wire aircraft or the Apollo program) researchers turned their mind to how to ensure the failure of parts of a machine did not lead to critical failure of the whole machine. The answer was instinctively obvious: add backups (that is, multiple redundant components) and have what John von Neumann in 1956 called a ‘restoring organ’ combine their multiple outputs into a single output that can be used for decision-making.

But this creates a whole new problem: how should the restoring organ reconcile those components’ data if they start to diverge from each other? How will the restoring organ know which component failed? One solution was to have the restoring organ treat each component’s output as a ‘vote’ about the true state of the world. Here, already, we can see the social science and computer science working in parallel: Duncan Black’s classic study of voting in democracies, The Theory of Committees and Elections was published just two years after von Neumann’s presentation of the restoring organ tallying up the votes of its constituents.
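A minimal sketch of von Neumann’s idea (ours, not drawn from the historical papers): run the same computation on redundant components and let the restoring organ treat each output as a vote, taking the majority as the system’s answer.

```python
# The "restoring organ" as a majority vote over redundant component outputs.
from collections import Counter

def restoring_organ(outputs):
    """Return the majority output of the redundant components (ties broken arbitrarily)."""
    return Counter(outputs).most_common(1)[0][0]

# Three redundant components compute 2 + 2; one has failed and returns garbage.
component_outputs = [4, 4, 7]
print(restoring_organ(component_outputs))  # -> 4: the failed component is outvoted
```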

The restoring organ was a single, central entity that collated the votes and produced an answer. But in the distributed systems that started to dominate the research on fault tolerance through the 1970s and 1980s there could not be a single restoring organ – the system would have to come to consensus as a whole. The famous 1982 paper ‘The Byzantine Generals Problem’ by Leslie Lamport, Robert Shostak and Marshall Pease (another of the half-taught and quarter-understood parts of the origins-of-blockchain canon) addresses this research agenda by asking how many voting components are needed for consensus in the presence of faulty – malicious – components. One of their insights was that cryptographically unforgeable signatures on the information being communicated (the ‘orders’) greatly simplify the problem.

The generation of fault-tolerant distributed consensus algorithms built from the 1990s onwards – most prominently Lamport’s Paxos and the later Raft – now underpins much of global internet and commerce infrastructure.

Satoshi’s innovation was to make the distributed agreement system permissionless – more precisely, joining the network as a message-passer or validator (miner) does not require the agreement of all other validators. To use the Byzantine generals’ metaphor, now anyone can become a general.

That permissionlessness gives it a resilience against attack that the byzantine fault tolerant systems of the 1990s and 2000s were never built for. Google’s distributed system is resilient against a natural disaster, but not a state attack that targets the permissioning system that Google as a corporate entity oversees. Modern proof-of-stake systems such as Tendermint and Ethereum’s Casper are an evolutionary step that connects Bitcoin’s permissionlessness with decades of knowledge of fault tolerant distributed systems.

This is only a partial story. We still need the second strand: the introduction of economics and markets into computer science and engineering.

Returning to the history of computing’s earliest days, the institutions that hosted the large expensive machines of the 1950s and 1960s needed to manage the demand for those machines. Many institutions used sign-up sheets, some even had dedicated human dispatchers to coordinate and manage a queue. Timesharing systems tried to spread the load on the machine so multiple users could work at the same time.

It was not long before some researchers realised that sharing time on a machine was fundamentally a resource allocation problem that could be tackled with relative prices. By the late 1960s Harvard University was using a daily auction to reserve time on its PDP-1 machine, priced in a local funny money that was issued and reissued each day.
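The mechanism was simple enough to sketch in a few lines (the bids and numbers below are invented for illustration): each user offers scrip per machine hour, and the day’s hours go to the highest bidders until capacity runs out.

```python
# Toy daily auction for machine time, allocated to the highest per-hour bids.
def allocate_machine_time(bids, hours_available):
    """bids: dict of user -> (scrip offered per hour, hours wanted)."""
    schedule = {}
    # Highest per-hour bids win first.
    for user, (price, hours_wanted) in sorted(bids.items(), key=lambda kv: -kv[1][0]):
        granted = min(hours_wanted, hours_available)
        if granted:
            schedule[user] = granted
            hours_available -= granted
    return schedule

daily_bids = {"lab_a": (12, 3), "lab_b": (8, 4), "grad_student": (15, 1)}
print(allocate_machine_time(daily_bids, hours_available=5))
# -> {'grad_student': 1, 'lab_a': 3, 'lab_b': 1}
```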

As the industry shifted from a many-users, one-computer structure to a many-users, many-distributed-computers structure, the computer science literature started to investigate the allocation of resources between machines. Researchers stretched for the appropriate metaphor: were distributed systems like organisations? Or were they like separate entities tied together by contracts? Or were they like markets?

In the 1988 Agoric Open Systems papers, Mark S. Miller and K. Eric Drexler argued not simply for the use of prices in computational resource allocation but for reimagining distributed systems as a full-blown Hayekian catallaxy, where computational objects have ‘property rights’ and compensate each other for access to resources. (Full disclosure: I am an advisor to Agoric, Miller’s current project.) As they noted, one missing but necessary piece for the realisation of this vision was the exchange infrastructure that would provide an accounting and currency layer without the need for a third party such as a bank. This, obviously, is what Bitcoin (and indeed its immediate predecessors) sought to provide.

We sometimes call Bitcoin the first successful fully-native, fully-digital money, but skip over why that is important. Cryptocurrencies don’t just allow for censorship-free exchange. They radically expand the number of exchanges that can occur – not just between people but between machines. Every object in a distributed system, all the way up and down the technology stack, has an economic role and can form distinctly economic relationships. We see this vision in its maturity in the complex economics of resource allocation within blockchain networks.

Any origin story is necessarily simplified, and the origin story I have proposed here skips over many key sources of the technology that is now blockchain: cryptography, the history and pre-history of smart contracts, and of course the cypherpunk community from which Bitcoin itself emerged. But I believe this narrative places us on a much sounder footing to talk about the long-term social and economic relevance of blockchain.

As Sinclair Davidson, Jason Potts and I have argued elsewhere, blockchains are an institutional technology. They allow us to coordinate economic activity in radically different ways, taking advantage of the global-first, trust-minimised nature of this distributed system to create new types of contracts, exchanges, organisations, and communities. The scale of this vision is clearest when we compare it with what came before.

Consider, for instance, the use of prices for allocating computer time. The early uses of prices were either to recoup the cost of operation for machines, or as an alternative to queuing, allowing users to signal the highest value use of scarce resources. But prices in real-world markets do a lot more than that. By concentrating dispersed information about preferences they inspire creation – they incentivise people to bring more resources to market, and to invent new services and methods of production that might earn super-normal returns. Prices helped ration access to Harvard’s PDP-1, but could not inspire the PDP-1 to grow itself more capacity.

The Austrian economist Ludwig von Mises wrote that “the capitalist system is not a managerial system; it is an entrepreneurial system”. The market that is blockchain does not just efficiently allocate resources across a distributed system; it has propelled an explosion of entrepreneurial energy that is speculative and chaotic but above all innovative. The blockchain economy grows and contracts, shaping and reshaping just like a real economy. It is not simply a fixed network with nodes and connections. It is a market: it evolves.

We’ve of course seen evolving networks in computation before. The internet itself is a network – a web that is constantly changing. And you could argue that the ecosystem of open-source software that allows developers to layer and combine small, shared software components into complex systems looks a lot like an evolutionary system. Neither of these directly uses the price system for coordination. They are poorer for it. The economic needs of internet growth have encouraged the development of a small number of concentrated firms, while the economic needs of open-source are chronically under-supplied. To realise the potential of distributed computational networks we need the tools of an economy: property rights and a native means of exchange.

Networks can fail for many reasons: nodes might crash, might fail to send or receive messages correctly, their responses might be delayed longer than the network can tolerate, they might report incorrect information to the rest of the network. Human social systems can fail when information is not available where and when it is needed, or if incentive structures favour anti-social rather than pro-social behaviours.

As a 1971 survey of the domain of fault-tolerant computing noted, “The discipline of fault-tolerant computing would be unnecessary if computer hardware and programs would always behave in perfect agreement with the designer’s or programmer’s intentions”. Blockchains make the joint mission of economics and computer science stark: how to build reliable systems out of unreliable parts.