Jim Chalmers’ spray at the RBA is embarrassing

Published in Crikey

There’s nothing in the Reserve Bank Act, or in the concept of central bank independence more generally, that says the treasurer can’t be as critical of the RBA as he likes. There’s a lot of silly hand-wringing about “inappropriateness” every time this happens. Our economists should not be so delicate. A government at war with its own money printer is a sign of the bank’s independence, rather than a lack of it.

But Jim Chalmers’ salvo against the RBA this close to an election is embarrassing and desperate. Foreshadowing the anaemic GDP growth figures released yesterday, the treasurer declared that the fault is all with the RBA: it is “smashing the economy” by keeping interest rates high to slow inflation. 

It’s not unusual for governments to be frustrated when monetary policy contradicts their political strategy. It is unusual for a treasurer to so aggressively try to offload blame for sluggish growth onto a central bank whose governor he appointed and whose mandate and approach he endorsed less than a year ago.

The problem for the Albanese government is simple. There is a fundamental tension between the government’s election strategy (to relieve the pressure of inflation on household budgets through fiscal transfers and to prop up the economy with government spending) and the RBA’s requirement to get inflation down — inflation that is exacerbated by those same fiscal transfers and expenditure. So we have had higher interest rates for longer, while the Albanese government tries to shield voters from the impact of those higher rates and keeps spending high.

Chalmers knows full well that monetary and fiscal policy can work against each other. Back during the global financial crisis, an internal government meeting between treasurer Wayne Swan, prime minister Kevin Rudd, treasury secretary Ken Henry, and “senior staff” specifically discussed how, if government spending increased, the RBA would likely keep interest rates higher than it would otherwise (I wrote about the implications of this meeting for ABC’s The Drum here). Chalmers was Swan’s principal adviser when that meeting occurred. 

If only the government and the RBA could row in the same direction. But the blame for policy divergence has to rest entirely with the government. RBA policy choices are strictly bounded by its legislative objectives and its extremely limited set of tools. Chalmers has a lot more discretion.

We might have some sympathy for Chalmers’ predicament. It must be galling to see other central banks starting to reduce rates. Voters always blame the elected government for a poor economy. They are right to. Ultimately it is Parliament that has the most tools to boost productivity and, through that, economic growth.

But there’s no time before the election to turn private sector growth around, and there’s seemingly no appetite within the government to resolve the fiscal-monetary contradiction. Chalmers’ comments on Sunday came immediately after Anthony Albanese’s Saturday announcement of further “cost of living” relief in the form of increased rent assistance payments.

After the economic data this week, there’s a good chance that the RBA will change tack soon. But you can see what Chalmers is trying to do: shift blame onto the bank for the economy’s poor performance generally.

I started by observing that there’s nothing wrong, in principle, with the treasurer complaining about RBA policy. Yet this is a sensitive moment for the central bank. At the same time as Chalmers is accusing the bank of economic recklessness, he is also trying to finalise the overhaul of its governance, splitting the board into separate monetary policy and governance boards. The treasurer wants this reform to be bipartisan. After this week’s events, the Coalition should insist that any reform, and the associated personnel choices, wait until election season is over, whoever wins.

Telegram founder’s arrest is radical — if it’s a crime to build privacy tools, there will be no privacy

Published in Crikey

The arrest of the Telegram CEO Pavel Durov in France this week is extremely significant. It confirms that we are deep into the second crypto war, in which governments systematically seek to prosecute the developers of digital encryption tools because encryption frustrates state surveillance and control. While the first crypto war in the 1990s was led by the United States, this one is being led by the European Union — now a regulatory superpower in its own right.

What these governments are insisting on, one criminal case at a time, is no less than unfettered surveillance over our entire digital lives.

Durov, a Russian-born French citizen, was arrested in Paris on Saturday and has now been indicted. You can read the French accusations here. They include complicity in drug possession and sale, fraud, child pornography and money laundering. These are extremely serious crimes — but note that the charge is complicity, not participation. The meaning of that word “complicity” seems to be revealed by the last three charges: Telegram has been providing users a “cryptology tool” unauthorised by French regulators.

In other words, the French claim is that Durov developed a tool — a chat program that allowed users to turn on some privacy features — used by millions of people, and some small fraction of those millions used the tool for evil purposes. Durov is therefore complicit in that evil, not just morally but legally. This is an incredibly radical position. It is a charge we could lay at almost every piece of digital infrastructure that has been developed over the past half century, from Cloudflare to Microsoft Word to TCP/IP. 

There have been suggestions (for example by the “disinformation analysts” cited by The New York Times this week) that Telegram’s lack of “content moderation” is the issue. There are enormous practical difficulties with having humans or even AI effectively moderate millions of private and small group chats. But the implication here seems to be that we ought to accept — even expect — that our devices and software are built for surveillance and control from the ground up: both the “responsible technology” crowd and law enforcement believe there ought to be a cop in every conversation. 

It is true that Telegram has not always been a good actor in the privacy space, denigrating genuinely secure-by-design platforms like Signal while granting its own users only limited privacy protection. Telegram chats are not fully or always encrypted, which leaves users exposed to both state surveillance and non-state criminals. Wired magazine has documented how the Russian government has been able to track users down for their apparently private Telegram conversations. For that matter, it would not be surprising to learn that there are complex geopolitical games going on here between France and Russia.

But it would be easier to dismiss the claims made against Durov as particular to Telegram, or as dependent on some specific action of Durov as an individual, if he were alone in being targeted as an accomplice to criminal acts simply because he developed privacy features for the digital economy.

The Netherlands has imprisoned the developer Alexey Pertsev, holding him responsible for the malicious use of Tornado Cash, a cryptocurrency privacy tool he built. Again, Pertsev was not laundering money; he built a tool to protect every user’s privacy. The United States has arrested the developers of a Bitcoin privacy product, Samourai Wallet, also for facilitating money laundering.

The arrest of Durov suggests that the law enforcement dragnet is being widened from private financial transactions to private speech. If it is a crime to build privacy tools, there will be no privacy.

Taxpayers should not bail out journalism. They do so already!

Published in Crikey. Part of a debate about whether taxpayers should fund journalism.

The case for subsidising journalism is weak. The case for subsidising journalism more than we already do is incredibly weak.

The government already directly pays for journalism through the ABC ($1.1 billion in the 2022-23 budget) and SBS ($316 million). With my colleague Sinclair Davidson I am famously sceptical that public broadcasting is a good idea. (Maybe infamously.) But put the argument for privatising the ABC and SBS aside. Policy choices do not exist in a vacuum. Any case for journalism subsidies should first explain why our already significant expenditure has failed, and whether there are ways to reform our public broadcasters to align them more directly with our policy goals. There is a lot the ABC and SBS do that isn’t journalism — would some of it be better redirected?

It is true that democracy relies on a thriving public sphere, of which news and journalism are critical parts. But on this count, Australian democracy doesn’t seem to be doing too badly. In the digital age, our problem as citizens and voters is not an information deficit but an information surplus — there is an enormous amount of online and offline content about the actions of the Australian government and civil society that we can consume. Digging through that content is the real challenge. Usually, we say that governments should subsidise things if the market underprovides for them. What is underprovided here? How should we measure it?

The real struggle is within media firms. Having lost their monopoly over advertising to a richer, more diverse, and more complex digital ecosystem, they find themselves competing to produce an extremely low-margin product while carrying the high labour and production costs of their legacy operations. I understand that the media industry has gone through 20 years of industrial pessimism. But at the same time, there are now senior journalists who have experienced nothing but disruption and have thrived within it. Too often policymakers confuse protecting established companies with supporting what they produce.

Practical considerations also undermine the case for journalism subsidies.

Almost any policy framework to subsidise journalism favours the large players that already dominate the Australian institutional media. Crikey has been arguing for a long time that News Corp pays less tax than it ought to. Guess who the biggest private beneficiaries of subsidised journalism are?

Maybe we can imagine a way to only favour the journalism we want, or to only favour smaller firms. But a policy framework that tried to discriminate against (say) the conservative talking shop ADH TV to only fund a left-leaning equivalent would merely invite the same government interference that the ABC labours under. A government unhappy with coverage could threaten to take away a media outlet’s privileges.

Government-subsidised journalism — whether through public broadcasting, tax breaks or direct subsidies — is fundamentally misconceived. It makes civil society the handmaiden of the state, rather than the other way around.

But in an important sense, the sort of policy rationalism I’m presenting here is beside the point. The question before policymakers is not whether subsidising journalism is a good use of taxpayer funds. The question is what to do with the Morrison government’s News Media Bargaining Code now that Meta is refusing to play ball. 

The code is a legendarily outrageous example of rent-seeking in the history of Australian public policy. It is simply one sector using the government to directly extort money from another sector of the economy. And on the flimsiest pretence too: we have been asked to believe that allowing users to share news links with friends is somehow a violation of intellectual property. 

The only “bargaining” that is going on here is between the media giants and the government. Meta and Google are the objects of the bargaining, not the participants. 

The irony is that, if anything, the digital firms that are being targeted have been responsible for what has historically been the sharpest growth in the public sphere since the Gutenberg press. If democracy is first and foremost about citizen engagement, then they have been great for democracy.

Scratch the whole thing and start over. Media companies never had a natural right to advertising dollars and they have absolutely no right to funds forcibly extracted from companies in another sector. If we think the market is underproviding journalism then let’s see if our public broadcasters can spend their budgets better. At the very least, it is time to draw a line under this shameful, rent-seeking episode.

Trade integration through digital infrastructure

Submission to House of Representatives Inquiry into Australian Agriculture in Southeast Asian Markets, with Darcy WE Allen and Aaron M Lane

The core of our submission is to emphasise the importance of digital economic infrastructure (e.g. identity systems, payments, traceability) for trade and economic development. This digital infrastructure not only lowers costs, facilitating more trade, but is also a critical mechanism by which Australian agriculture can continue to develop a trusted, premium market positioning in the region.

View the full submission in PDF here.

Open problems in DAOs

Available at arXiv. With Joshua Tan, Tara Merk, Sarah Hubbard, Eliza R. Oak, Helena Rong, Joni Pirovich, Ellie Rennie, Rolf Hoefer, Michael Zargham, Jason Potts, Reuben Youngblom, Primavera De Filippi, Seth Frey, Jeff Strnad, Morshed Mannan, Kelsie Nabben, Silke Noa Elrifai, Jake Hartnell, Benjamin Mako Hill, Tobin South, Ryan L. Thomas, Jonathan Dotan, Ariana Spring, Alexia Maddox, Woojin Lim, Kevin Owocki, Ari Juels, and Dan Boneh.

Abstract: Decentralized autonomous organizations (DAOs) are a new, rapidly growing class of organizations governed by smart contracts. Here we describe how researchers can contribute to the emerging science of DAOs and other digitally-constituted organizations. From granular privacy primitives to mechanism designs to model laws, we identify high-impact problems in the DAO ecosystem where existing gaps might be tackled through a new data set or by applying tools and ideas from existing research fields such as political science, computer science, economics, law, and organizational science. Our recommendations encompass exciting research questions as well as promising business opportunities. We call on the wider research community to join the global effort to invent the next generation of organizations.

The institutional economics of quantum computing

With Jason Potts, first published on Medium

What happens when quantum computing is added to the digital economy?

The economics of quantum computing starts from a simple observation: in a world where search is cheaper, more search will be consumed. Quantum computing offers potentially dramatic increases in the ability to search through data. To search through an unstructured list of 1,000,000 entries, a ‘classical’ computer would take up to 1,000,000 steps. A mature quantum computer would require only about 1,000 steps.
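The source of that quadratic speedup is Grover’s algorithm, which needs on the order of √N queries to search an unstructured list of N entries, against up to N classical queries. A minimal sketch of the scaling (illustrative only; real quantum hardware adds error-correction and other overheads):

```python
import math

def classical_search_steps(n: int) -> int:
    """Worst-case queries for unstructured classical search: check every entry."""
    return n

def grover_search_steps(n: int) -> int:
    """Order-of-magnitude query count under Grover's algorithm: about sqrt(n)."""
    return math.isqrt(n)

for n in (10_000, 1_000_000, 10**12):
    print(f"{n:>16,} entries: classical ~{classical_search_steps(n):,} steps, "
          f"quantum ~{grover_search_steps(n):,} steps")
```

At a million entries the quantum count is about a thousand, matching the figures above; the absolute advantage grows with the size of the search space.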

Bova et al. (2021) describe this capability generally as a potential advantage at solving combinatorics problems. The goal in combinatorics problems is often to search through all possible arrangements of a set of items to find a specific arrangement that meets certain criteria. While the cost of error correction or quantum architecture might erode the advantage quantum computers have in search, this is more likely to be an engineering hurdle to be overcome than a permanent constraint.

Economics focuses on exchange. To our knowledge, no analysis of the economic impact of quantum computing has focused on the effect that quantum computing has on the practice and process of exchange. Where there have been estimates of the economic benefits of quantum computing, those analyses have focused on the possibility that the technology might increase production through scientific discovery or by making production processes more efficient (for example, by solving optimisation problems). So what impact will more search have on exchange?

In economics, search is a transaction cost (Stigler 1961, Roth 1982) that raises the cost of mutually beneficial exchange. Buyers have to search for potential sellers and vice versa. Unsurprisingly, much economic organisation is structured around reducing search costs. Indeed, it is the reduction of search costs that structures the digital platform economy. Multi-sided markets like eBay match buyers with sellers at global scale, allowing for trades to occur that would not be possible otherwise due to the high cost of search.

Quantum computing offers a massive reduction in this form of transaction cost. And all else being equal, we can expect that a massive reduction in search costs would have a correspondingly large effect on the structure of economic activity. For example, search costs are one reason that firms (and sub-firm economic agents like individuals) prefer to own resources rather than access them over the market. When you have your own asset, it is quicker to utilise that asset than to seek a market counterpart who will rent it to you.

Lowering search costs favours outsourcing over ownership (‘buy’ in the market rather than ‘make’ in-house). Lower search costs also have a globalising effect: they allow economic actors to do more search — that is, to explore a wider space for potential exchange. This increases the size of the market, which, as Adam Smith tells us, increases specialisation and the gains from trade. In this way, quantum computing powers economic growth.

Typically, specialisation and globalisation increase winner-take-all effects — outsized gains to economic actors at the top of their professions. However, a countervailing mechanism is that cheaper search also widens the opportunities to undercut superstar actors. This suggests an important implication of cheaper search for global inequality: it is easier to identify resources outside a local area. That should reduce rents and result in more producers (i.e. workers) receiving the marginal product of their labour as determined by global prices rather than local prices. In this way, quantum computing drives economic efficiency.

Quantum and the digital stack

Of course, other transaction costs (the cost of making the exchange, the cost of contract enforcement, and so on) can reduce the opportunities for faster search to disrupt existing patterns of economic activity. Here we argue that quantum is particularly effective in an environment of digital (or digitised) trade and production — in the domain of the information economy.

The process of digitisation is the process of creating more economic objects and, through the use of distributed ledgers and digital twins, forming ever more precise property rights regimes. In Berg et al. (2018) we explored one implication of this explosion in objects with precisely defined property rights. We argued that increasingly precise and secure digital property rights over objects would allow artificially intelligent agents to trade assets on behalf of their users, facilitating barter-like exchanges and allowing a greater variety of assets to be used as ‘money’. Key to achieving this is deep search across a vast matrix of assets, where the optimal path between two assets has to be calculated according to the pre-defined preferences not only of the agents making the exchange but of each of the holders of the assets that form the path.
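The path-finding problem described here is, at its core, a shortest-path search over a graph of assets. A toy sketch (asset names and swap costs are invented for illustration; a real implementation would encode each holder’s pre-defined preferences as edge costs):

```python
from heapq import heappush, heappop

# Hypothetical asset-exchange graph: edges are permitted swaps, weighted by a
# cost reflecting each holder's willingness to trade. All values are invented.
SWAP_COSTS = {
    "concert_tickets": {"stablecoin": 2.0, "loyalty_points": 1.0},
    "loyalty_points": {"stablecoin": 0.5, "game_items": 1.5},
    "stablecoin": {"game_items": 0.2},
    "game_items": {},
}

def cheapest_exchange_path(start, goal):
    """Dijkstra's algorithm over the asset graph: the least-cost chain of
    swaps that converts `start` into `goal`, or None if no chain exists."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, asset, path = heappop(queue)
        if asset == goal:
            return cost, path
        if asset in seen:
            continue
        seen.add(asset)
        for nxt, edge_cost in SWAP_COSTS.get(asset, {}).items():
            if nxt not in seen:
                heappush(queue, (cost + edge_cost, nxt, path + [nxt]))
    return None

print(cheapest_exchange_path("concert_tickets", "game_items"))
```

Here the cheapest route is the indirect barter chain through loyalty points and a stablecoin, not the direct swap — exactly the kind of multi-hop exchange path an AI agent would search for, and where larger asset matrices make search the binding cost.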

This illuminates one of the ways in which quantum interacts with the web3 tech stack. While some quantum computation scientists have identified the opportunity for quantum to be used in AI training, we see the opportunity for quantum to be used by AI agents to search for exchange with other AI agents — an exchange-theoretic rather than production-centric understanding of quantum’s contribution to the economy. The massive technological change we are experiencing is both cumulative and non-sequential: rapid developments in other parts of the tech stack further drive demand for quantum compute. This is the digital quantum flywheel effect.

Compute as a commodity

Compute is a commodity and follows the rules of commodity economics. Just as buyers of coal or electricity are ultimately buying the energy embodied in those goods, buyers of compute are ultimately buying a reduction in the time it takes to perform a computational task (Davies 2004). There are computational tasks where classical computers are superior (either cheaper or faster), tasks where quantum computers are (or could be) superior, and tasks where both can satisfy demand. Users of compute should be indifferent to the origin of the compute they consume: what they have are specific computational tasks to perform, subject to budget and time constraints. They should likewise be indifferent to the mixture of classical and quantum computing that best suits those needs and constraints.
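That indifference can be expressed as a toy cost-minimisation: for each task, the buyer picks whichever backend meets the deadline at the lowest cost. Backend names, prices and runtimes below are invented assumptions, not real market data:

```python
# Toy model of compute-as-commodity: the buyer cares only about completing
# each task within its time constraint at minimum cost, not about whether
# the backend is classical or quantum. All numbers are illustrative.

def choose_backend(task, backends):
    """Return the name of the cheapest backend that meets the task's
    deadline, or None if no backend can."""
    feasible = [
        (b["price_per_hour"] * b["hours"][task["name"]], b["name"])
        for b in backends
        if b["hours"].get(task["name"], float("inf")) <= task["deadline_hours"]
    ]
    return min(feasible)[1] if feasible else None

backends = [
    {"name": "classical", "price_per_hour": 1.0,
     "hours": {"payroll": 1, "portfolio_opt": 100}},
    {"name": "quantum", "price_per_hour": 50.0,
     "hours": {"payroll": 1, "portfolio_opt": 1}},
]

print(choose_backend({"name": "payroll", "deadline_hours": 10}, backends))
print(choose_backend({"name": "portfolio_opt", "deadline_hours": 10}, backends))
```

On these invented numbers the routine task goes to cheap classical compute, while the deadline-bound optimisation task is only feasible on quantum — the buyer’s indifference to origin is what makes the two substitutes within each task class.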

This indifference between classical and quantum has significant consequences for how quantum computing is distributed between firms in the economy — and, indeed, between geopolitical powers. At this stage in the development of quantum computing, the major open question is the relative size of the space of computational tasks best suited to classical computing versus the space best suited to quantum computing.

For computational tasks where classical computers dominate, compute is already massively decentralised — not just across multiple large cloud services (AWS, Google etc.) but in the devices on our desks and in our pockets. There is no barrier to competition in classical compute, nor any risk of one geopolitical actor dominating. Bottlenecks in classical compute emerge instead in the production networks for semiconductor chips — a known problem with a known menu of policy stances and responses. Similarly, no such risk arises for computational tasks where classical and quantum systems are equally suited.

The salient question is whether a natural monopoly will arise in quantum compute. This could happen as a result of bottlenecks (say, of scarce minerals, or of market structure as in the semiconductor chip industry), or as an outcome of competition in quantum computing development. One argument might be that because quantum compute power scales exponentially with the number of qubits, a geopolitical or economic actor that establishes a lead in qubit deployment could maintain that lead indefinitely through compounding effects. This is a quantum takeoff analogous to the hypothesised ‘AI takeoff’ (see Bostrom 2014).

Several factors weigh against this. The diversity of quantum computing architectures being built suggests that the future is likely to be highly competitive — not merely between individual quantum compute systems but between classes of architectures (e.g. superconducting, ion trap, photonic). While quantum compute research and development is very high cost, it is proceeding widely and with significant geographic dispersion. There are at least eight distinct major systems or architectures for quantum computing, seven of which have successfully performed basic computational tasks such as the control of qubits (see the survey by Bremner et al. 2024).

Nor is there any obvious concern that first-mover advantage implies competitive lock-in. Quantum compute is quite unlike AI safety scenarios, where ‘superintelligence’ or ‘foom’ is hypothesised to lead to a single monopolistic AI because the superintelligence uses its capabilities to (1) develop itself exponentially and (2) prevent competitors from emerging. Quantum computing is, and will be for the foreseeable future, a highly specialised toolset for particular tasks, not a general program that could pursue world domination either autonomously or under the direction of a bad actor.

One significant caveat to this analysis is that the capabilities of quantum compute might have severe downstream consequences for the security of the economy. The exponential speedup at factoring provided by quantum compute could undermine much of the cryptography that protects global commerce, which underlines the need for the development and deployment of post-quantum cryptography. We have argued elsewhere that signals of the emergence of quantum supremacy in code-breaking will appear in the market prices of cryptocurrencies (Rohde et al. 2021). There is a significant risk-mitigation task ahead of us in adopting post-quantum cryptography. It is a particularly difficult task because, while the danger is concrete, the timeline for a quantum breakthrough is highly uncertain. Nonetheless, migrating between cryptographic standards is akin to many other cybersecurity mitigations that have been performed in the digital economy, and while challenging it should not be seen as existential.

Instead, the institutional economic view of quantum computing emphasises the possibilities of this new technology to radically grow the space for market exchange — particularly when we understand the possibility of quantum computing as co-developing alongside distributed ledgers, smart contracts (that is, decentralised digital assets) and artificial intelligence. Quantum computing lowers the cost and increases the performance of economic exchange across an exponentially growing ecosystem of digital property rights. It will be an important source of future economic value from better economic institutions.

References

Berg, Chris, Sinclair Davidson, and Jason Potts. ‘Beyond Money: Cryptocurrencies, Machine-Mediated Transactions and High-Frequency Hyperbarter’, 2018, 8.

Bremner, Michael, Simon Devitt, and Eser Zerenturk. ‘Quantum Algorithms and Applications’. Office of the NSW Chief Scientist & Engineer, March 2024.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Reprint edition. OUP Oxford, 2014.

Bova, Francesco, Avi Goldfarb, and Roger G. Melko. ‘Commercial Applications of Quantum Computing’. EPJ Quantum Technology 8, no. 1 (December 2021): 1–13. https://doi.org/10.1140/epjqt/s40507-021-00091-1.

Davies, Antony. ‘Computational Intermediation and the Evolution of Computation as a Commodity’. Applied Economics, June 20, 2004. https://www.tandfonline.com/doi/abs/10.1080/0003684042000247334.

Rohde, Peter P., Vijay Mohan, Sinclair Davidson, Chris Berg, Darcy W. E. Allen, Gavin Brennen, and Jason Potts. ‘Quantum Crypto-Economics: Blockchain Prediction Markets for the Evolution of Quantum Technology’, 2021, 12.

Roth, Alvin E. ‘The Economics of Matching: Stability and Incentives’. Mathematics of Operations Research 7, no. 4 (1982): 617–28.

Stigler, George J. ‘The Economics of Information’. Journal of Political Economy 69, no. 3 (1961): 213–25.

Towards legal recognition of Decentralised Autonomous Organisations

Abstract: Decentralised Autonomous Organisations (DAOs) are a typical organisational form in the Web3 economy. DAOs are internet-native organisations that are coordinated and governed by pseudonymous community members through a nexus of blockchain-based digital assets and smart contracts. There is over US$26 billion locked in over 2,300 active DAOs globally. This article examines the legal recognition of DAOs in an Australian context. A recent Australian Senate inquiry recommended that DAOs be recognised as a distinct business structure. This article makes three contributions towards this goal: (1) a critical evaluation of the options for DAO recognition under Australian law; (2) a comparative analysis of United States DAO laws; and (3) an analytical outline of the key design features of an Australian DAO law.

Author(s): Aaron M. Lane, Darcy W. E. Allen, Chris Berg

Journal: Australian Business Law Review

Vol: 52 Year: 2024 Pages: 96–116

Available at: Australian Business Law Review, June 2024 and working paper at SSRN.

Cite: Lane, Aaron M., Darcy W. E. Allen, and Chris Berg. “Towards Legal Recognition of Decentralised Autonomous Organisations.” Australian Business Law Review, vol. 52, 2024, pp. 96–116.

Continue reading “Towards legal recognition of Decentralised Autonomous Organisations”

Common knowledge theory of stablecoins

With Chloe White and Jason Potts. Available at SSRN.

Abstract: We propose a new theory of stablecoins based on common knowledge. We contrast this with the ‘better money’ theory of stablecoins, which emphasises marginal improvements over the standard origin of money theory as: medium of exchange, unit of account, store of value.

Managing Generative AI in Firms: The Theory of Shadow User Innovation

With Julian Waters-Lynch, Darcy WE Allen, and Jason Potts. Available at SSRN.

Abstract: This paper explores the management challenge posed by pervasive and unsupervised use of generative AI (GenAI) applications in firms. Employees are covertly experimenting with these tools to discover and capture value from their use, without the express direction or visibility of organisational leaders or managers. We call this phenomenon shadow user innovation. Our analysis integrates literature on user innovation, general purpose technologies and the evolution of firm capabilities. We define shadow user innovation as employee-led user innovation inside firms that is opaque to management. We explain how this opacity obstructs a firm’s ability to translate the use of GenAI into visible improvements in productivity and profitability, because employees can currently privately capture these benefits. We discuss potential management responses to this challenge, outline a research program, and offer practical guidance for managers.

Voting with time commitment for decentralized governance: Bond voting as a Sybil-resistant mechanism

With Vijay Mohan and Peyman Khezr. Published in Management Science, online March 2024. Early version available at SSRN

Abstract: In this paper, we examine the usefulness of time commitment as a voting resource for decentralized governance when the identity of voters cannot be verified. In order to do so, we take a closer look at two issues that confront token-based voting systems used by blockchain communities and organizations: voter fraud through the creation of multiple identities (Sybil attack) and concentration of voting power in the hands of the wealthy (plutocracy). Our contribution is threefold: first, we lay analytical foundations for the formal modeling of the necessary and sufficient conditions for a voting system to be resistant to a Sybil attack; second, we show that tokens as the only instrument for weighting votes cannot simultaneously achieve resistance to both Sybil attacks and a plutocracy in the voting process; and third, we design a voting mechanism, bond voting, that is Sybil resistant and offers a second instrument (time commitment) that is effective for countering plutocracy when large token holders also have a relatively high opportunity cost of locking tokens for a vote. Overall, our paper emphasizes the importance of time-based suffrage in decentralized governance.

Not on this website yet. I have a simple explainer of the bond voting mechanism at Substack.