Dutton is losing the debate over nuclear energy right when we need it for AI

Published in Crikey

Peter Dutton is losing the debate over nuclear power. Even the pro-nuclear Financial Review agrees: it ran an editorial last week wondering where the Coalition’s details were. And the Coalition’s proposal for the government to own the nuclear industry has made it look more like an election boondoggle than a visionary economic reform.

It is starting to look like a big missed opportunity. 

Because in 2024, the question facing Australian governments is not only how to transition from polluting energy sources to non-polluting sources. It is also how to set up an economic and regulatory framework to service what is likely to be massive growth in electricity demand over the next decade.

The electrification revolution is part of that demand, with, for instance, the growing adoption of electric vehicles. But the real shadow on the horizon is artificial intelligence. The entire global economy is embedding powerful, power-hungry AI systems into every platform and every device. To the best of our knowledge, the current generation of AI follows a simple scaling law: the more data and the more powerful the computers processing that data, the better the AI. 

We should be excited about AI. It is the first significant and positive productivity shock we’ve had in decades. But the industry needs more compute, and more compute needs more energy.

That’s why Microsoft is working to reopen Three Mile Island — yes, that Three Mile Island — and has committed to purchasing all the electricity from the revived reactor to supply its AI and data infrastructure needs. Oracle plans to use three small nuclear reactors to power a massive new data centre. And Amazon Web Services has bought a data centre next to a nuclear plant in Pennsylvania, which it plans to expand significantly.

Then there’s OpenAI. The New York Times reports that one of the big hurdles for OpenAI in opening US data centres is a lack of adequate electricity supply. The company is reportedly planning to build half a dozen data centres that would each consume as much electricity as the entire city of Miami. It is no coincidence that OpenAI chief Sam Altman has also invested in nuclear startups.

One estimate suggests that data centres could consume 9% of US electricity by 2030.

Dutton, to his credit, appears to understand this. His speech to the Committee for Economic Development of Australia (CEDA) last week noted that nuclear would help “accommodate energy intensive data centres and greater use of AI”. 

But the Coalition’s mistake has been to present nuclear (alongside a mixture of renewables) as the one big hairy audacious plan to solve our energy challenge. They’ve even selected the sites! Weird to do that before you’ve even figured out how to pay for the whole thing.

Nuclear is not a panacea. It is only appealing if it makes economic sense. Our productivity ambitions demand that energy is abundant, available and cheap. There has been fantastic progress in solar technology, for instance. But it makes no sense to eliminate nuclear as an option for the future. When the Howard government banned nuclear power generation in 1998, it accidentally excluded us from competing in the global AI data centre gold rush 26 years later.

Legalising nuclear power in a way that makes it cost effective is the sort of generational economic reform Australian politicians have been seeking for decades. I say “in a way that makes it cost effective” because it is the regulatory superstructure laid on top of nuclear energy globally that accounts for many of the claims that nuclear is uneconomic relative to renewable energy sources.

A Dutton government would have to not only amend the two pieces of legislation that specifically exclude nuclear power plants from being approved, but also establish dedicated regulatory commissions, frameworks and licensing schemes to govern the new industry — and in a way that encouraged nuclear power to be developed, not blocked. And all of this would have to be pushed through a presumably sceptical Parliament.

That would be a lot of work, and it would take time. But I’ve been hearing that nuclear power is “at least 10 to 20 years away” for the past two decades. Allowing (not imposing) nuclear as an option in Australia’s energy mix would be our first reckoning with the demands of the digital economy.

Managing Generative AI in Firms: The Theory of Shadow User Innovation

With Julian Waters-Lynch, Darcy W.E. Allen, and Jason Potts. Available at SSRN.

Abstract: This paper explores the management challenge posed by pervasive and unsupervised use of generative AI (GenAI) applications in firms. Employees are covertly experimenting with these tools to discover and capture value from their use, without the express direction or visibility of organisational leaders or managers. We call this phenomenon shadow user innovation. Our analysis integrates literature on user innovation, general-purpose technologies and the evolution of firm capabilities. We define shadow user innovation as employee-led user innovation inside firms that is opaque to management. We explain how this opacity obstructs a firm’s ability to translate the use of GenAI into visible improvements in productivity and profitability, because employees can currently capture these benefits privately. We discuss potential management responses to this challenge, outline a research program, and offer practical guidance for managers.

Institutions to constrain chaotic robots: why generative AI needs blockchain

With Sinclair Davidson and Jason Potts. Available at SSRN.

Abstract: Generative AI is a very powerful new computing technology, but its economic usefulness (Alice: “hello LLM, please send an email to Bob”) is limited by its inherent unpredictability. It might send the email, but it might do something else too. As a consequence, the large language models that underpin generative AI are not safe to use for most economically useful and valuable interactions with the world. This is the ‘economic alignment’ problem between the AI as an ‘agent’ and the human ‘principal’ who wants the LLM to interact in the world on their behalf. The answer we propose is smart contracts that can take LLM outputs and filter them through deterministic constraints. With smart contracts, LLMs can interact safely in the real world, and can unlock the vast economic opportunity of economically aligned and artificially intelligent agents.
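To make the mechanism concrete, here is a minimal Python sketch of the idea — not the paper’s implementation: the LLM’s proposed action is treated as untrusted input, and a deterministic filter (the role a smart contract plays on-chain) decides whether it executes. All names and rules here are hypothetical.

```python
# A minimal sketch (not the paper's implementation) of a deterministic
# constraint layer sitting between an unpredictable LLM and the real world.
# All names, rules and actions are hypothetical.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    kind: str          # e.g. "send_email", "transfer_funds"
    recipient: str
    amount: float = 0.0

# Deterministic rules, fixed in advance by the principal (Alice).
ALLOWED_KINDS = {"send_email"}
ALLOWED_RECIPIENTS = {"bob@example.com"}
MAX_AMOUNT = 0.0  # no funds may move

def constraint_filter(action: ProposedAction) -> bool:
    """Deterministically accept or reject whatever the LLM proposes."""
    return (
        action.kind in ALLOWED_KINDS
        and action.recipient in ALLOWED_RECIPIENTS
        and action.amount <= MAX_AMOUNT
    )

# The LLM's output is untrusted: it might propose the email Alice asked
# for, or something else entirely. Only proposals that pass the filter run.
for proposal in [
    ProposedAction("send_email", "bob@example.com"),
    ProposedAction("transfer_funds", "mallory@example.com", 500.0),
]:
    if constraint_filter(proposal):
        print(f"Executing {proposal.kind} to {proposal.recipient}")
    else:
        print(f"Rejected {proposal.kind} to {proposal.recipient}")
```

However the LLM misbehaves, the filter’s behaviour is fixed and auditable in advance — which is the property the abstract attributes to smart contracts.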

Large language models reduce agency costs

With Jason Potts, Darcy W.E. Allen, and Nataliya Ilyushina. Available at SSRN.

Abstract: Large Language Models (LLMs), or generative AI, have emerged as a new general-purpose technology in applied machine learning. These models are increasingly employed within firms to support a range of economic tasks. This paper investigates the economic value generated by the adoption and use of LLMs, which often occurs on an experimental basis, through two main channels. The first channel, already explored in the literature (e.g. Eloundou et al. 2023, Noy and Wang 2023), involves LLMs providing productive support akin to other capital investments or tools. The second, less examined channel concerns the reduction or elimination of agency costs in economic organisation due to the enhanced ability of economic actors to insource more tasks. This is particularly relevant for tasks that previously required contracting within or outside a firm. With LLMs enabling workers to perform tasks in which they have less specialisation, the costs associated with managing relationships and contracts decrease. This paper focuses on this second path of value creation through the adoption of this new general-purpose technology. Furthermore, we examine the wider implications of the lower agency costs pathway for innovation, entrepreneurship and competition.

The problem of ubiquitous computing for regulatory costs

Working paper, available at SSRN.

Abstract: The benefits of regulation should exceed the cost of regulating. This paper investigates the impact of widespread general-purpose computing on the cost of enforcing regulations on generative artificial intelligence (AI) and decentralised finance (DeFi). We present a simple model illustrating regulators’ preferences for minimising enforcement costs and discuss the implications of those preferences for the number and size of regulated firms. Regulators would rather regulate a small number of large firms than a large number of small firms. General-purpose computing radically expands the number of potentially regulated entities. For DeFi, the decentralised nature of blockchain technology, the global scale of transactions, and decentralised hosting increase the number of potentially regulated entities by an order of magnitude. Likewise, locally deployed open-source generative AI models make regulating AI safety extremely difficult. This creates a regulatory dilemma that forces regulators to reassess the social harm of targeted economic activity. The paper draws a historical comparison with attempts to reduce copyright infringement through file sharing in the early 2000s in order to present strategic options for regulators in addressing the challenges of AI safety and DeFi compliance.
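To see the intuition behind the enforcement-cost argument, here is a toy illustration — mine, not the paper’s model, with invented numbers: if each regulated entity costs the regulator a roughly fixed amount to monitor, total enforcement cost grows linearly with the number of entities, so fragmenting the same activity across many small actors makes enforcement dramatically more expensive.

```python
# Toy illustration only -- not the paper's model; numbers are invented.
# Assume each regulated entity costs the regulator a fixed amount to monitor.
PER_ENTITY_COST = 100_000  # hypothetical annual monitoring cost per entity

# The same economic activity, concentrated vs fragmented:
for n_entities in (10, 100_000):
    total = PER_ENTITY_COST * n_entities
    print(f"{n_entities:>7} entities -> total enforcement cost ${total:,}")

# 10 entities cost $1,000,000 to monitor; 100,000 cost $10,000,000,000.
# Regulation that was worth its cost against a few large firms may not be
# against millions of locally deployed models or DeFi participants.
```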

The Case for Generative AI in Scholarly Practice

Available at SSRN.

Abstract: This paper defends the use of generative artificial intelligence (AI) in scholarship and argues for its legitimacy as a valuable tool for contemporary research practice. It uses an emergent property rights model of writing to shed light on the evolution of scholarly norms and practices. The paper argues that generative AI extends the capital-intensive nature of modern academic writing. The paper discusses three potential uses for AI models in research practice: AI as a mentor, AI as an analytic tool, and AI as a writing tool. The paper considers how the use of generative AI interacts with two critical norms in scholarship: norms around authorship attribution and credit for contributions, and the norm against plagiarism. It concludes that the effective use of generative AI is a legitimate research practice for scholars seeking to experiment with new technologies that might enhance their productivity.