The Minns AI disaster

Published in the Spectator Australia, 14 August 2025

Last week, with almost no fanfare, the Minns government introduced legislation to regulate the use of artificial intelligence in the workplace.

This would be one of Australia’s first AI laws. Unfortunately, it is a lesson about how laws on frontier technology can sound reasonable but be unworkable and counterproductive in practice. The bill is a recklessly broad bid for union control over workplaces, and, if passed, would be a serious brake on business productivity growth in New South Wales.

The bill is a revised version of the workers compensation bill stalled in the upper house. Unlike the earlier bill, it also creates a new health and safety duty for employers that use “digital work systems” to ensure that the way these tools allocate or monitor work does not create health and safety risks.

The idea is to prevent digital systems from being used to push workers into unreasonable workloads, to prevent businesses from imposing excessive worker surveillance, and to provide protection from discrimination. Union officials will be empowered to inspect the digital work systems if they suspect a violation.

You might think all of this is fair: the idea of being digitally managed and remotely monitored by our employers is pretty dystopian. But the bill is wildly over-drafted, and would give unions power to interfere with almost every aspect of the workplace.

On suspicion that a computer is “unreasonably” being used to allocate, coordinate, or monitor work, union inspectors would be able to access and inspect the software platforms and data that power every organisation.

The bill defines a “digital work system” as an “algorithm, artificial intelligence, automation, online platform or software”. This definition reads like the government is just throwing everything at the wall to see what sticks. It is redundant, for one. Artificial intelligence, automation, online platforms, and software are all made of algorithms.

But more importantly, this definition covers basically any way a business uses computers for work allocation. Everything from Microsoft Teams to Slack would fall under this umbrella. Email is a digital work system – so routine task allocation through a calendar invite could be grounds for union inspection if it is deemed “unreasonable”.

And this bill will present a serious disincentive to use modern AI platforms like ChatGPT in business. Explaining why a prompt returned one output and not another output is an unsolved problem in AI research. Any use of these AI models for management would be begging for union scrutiny under this new regime.

Businesses that don’t want to hand unions leverage over their basic operations will either sever the digital work allocation systems from other systems, or avoid using them altogether.

That may, of course, be the goal. Workplace law already targets psychosocial risks like excessive workloads and demands. The NSW Workplace Surveillance Act already regulates worker monitoring, and discrimination is the subject of a vast array of state and Commonwealth law. The novelty of the NSW bill is that it targets digital technology directly.

At the Commonwealth level, the Albanese government is currently in the middle of an internal debate about whether to regulate AI with a big, economy-wide bill or just address problems as they come. The government is reluctant to do the former because it is desperate for the productivity boost that many economists believe AI will spark. Productivity is meant to be the theme for the second term of the Albanese government. The union movement disagrees with this strategy. They want unions to have a veto over technologies that might threaten jobs. The Minns government’s proposal goes a long way towards achieving the unions’ goal: giving unions the right to inspect digital technologies that manage work.

We’ve been here before. During the Fraser government in the 1970s there was an energetic debate about the impact of computers on work. Then as now, many feared that the emerging frontier of digital systems might cause profound disruptions to the organisation of work. The union movement wanted businesses to be forced to consult with unions before they introduced computers.

Once again, the Australian economy is suffering through a severe productivity crisis. Once again, we have a suite of technologies that promise massive productivity gains. And once again, this technological revolution is being used as a ploy for union control over business.

AI tapping copyrighted content to learn from it is not piracy

Published in the Australian Financial Review, 9 August 2025

The Productivity Commission announced this week that it was investigating how artificial intelligence models could be more easily trained on Australian copyrighted content. The backlash from our creative industry has been severe and instant.

For the past few years, AI labs have been accused in Australia and elsewhere of large-scale piracy. It is, we are told, outrageous that the PC would be providing moral cover for this theft.

But the PC is right to probe here. We need a copyright regime that reflects how AI models actually work, and our policymakers need to understand the full economic and geopolitical stakes that the AI revolution represents.

The PC’s report into data and digital technology is more modest than you would expect from the reaction of the creative industry. It is “seeking feedback about whether reforms are needed to better facilitate the use of copyrighted materials” for AI training.

But we need to be clear about how AI training actually works. AI models do not copy the content they are trained on. They learn from that content. Specifically, when they “read” a text, they identify patterns in it and relate those to patterns they’ve learnt from other texts.
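
To make that concrete, here is a deliberately toy sketch, in Python, of what a single training step does. It is illustrative only and is not drawn from any lab's actual code; the point is simply that the text passes through, some numeric weights get nudged, and the text itself is not retained.

```python
# Toy illustration only (not any lab's real pipeline): one "training step"
# for a tiny next-word predictor. The text flows through and is discarded;
# the only thing that persists is the table of numeric weights.

import numpy as np

rng = np.random.default_rng(0)
vocab = {"the": 0, "cat": 1, "sat": 2}
weights = rng.normal(size=(3, 3))  # the whole "model": one row of scores per word

def train_step(context_word, next_word, lr=0.1):
    """Nudge the weights so next_word becomes slightly more likely after context_word."""
    scores = weights[vocab[context_word]]
    probs = np.exp(scores) / np.exp(scores).sum()   # softmax over the vocabulary
    grad = probs.copy()
    grad[vocab[next_word]] -= 1.0                   # gradient of the cross-entropy loss
    weights[vocab[context_word]] -= lr * grad       # update the weights; the words are not stored

train_step("the", "cat")  # after this call, only the numbers in `weights` have changed
```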

If a person reads a book and learns from it – updating the weights in their own neural network – we do not accuse them of piracy. What we do when we learn, and what AI labs do when they train their models, is quite different from copying. There are some legal subtleties here.

In the US, courts have distinguished between how the models are trained and how the training data is collected.

Meta and Anthropic are accused of downloading large quantities of copyrighted books and papers from piracy websites to feed them into the training process.

If they were to do so in Australia, that would probably be a violation of our copyright laws. But that doesn’t mean the training itself would necessarily be.

The PC notes that the process of AI training necessarily involves temporarily copying content onto the labs’ servers. But that proves too much. We do the same when we read anything on the internet. The moment we browse to a website, our computer downloads that website into a cache folder. But that downloading is a technical necessity; it is not economically meaningful, and we don’t treat it as a violation of intellectual property.

All these subtleties around AI training were, of course, completely unforeseen by the parliaments that created our copyright regime decades ago.

We don’t need to review the economic upside of AI here. It has been interesting to watch the Albanese government over the past year realise that AI could be a Hail Mary pass.

We might be able to fix our deep productivity problems without the need for tedious reform. AI presents the best chance we have right now to bring about a surge in economic growth.

But there are also real geopolitical reasons not to hamper AI development in Australia and the rest of the free world. We are in the middle of a great global technological contest around AI capability. The contest is of a larger scale and is more economically consequential than the space race of the 1950s and 1960s.

The Western world dominates AI chip development. This domination allows the US to exert a degree of influence over Chinese AI capabilities through export controls. But there is, almost certainly, a moment coming when Chinese chips will be competitive, and China will have full sovereign capability over the complete stack necessary for state-of-the-art AI.

Mark it: this will be a political shock in the West, much greater than when DeepSeek R1 was released in January. When it happens, I hope it will finally pop the sense of complacency that has allowed us to indulge the idea that US tech firms are the bad guys.

Some in the creative industry would like AI training to be a matter of negotiation between rights holders and AI labs, book by book, photo by photo.

The Chinese AI labs do not share the same view. In a statement published this year, one website hosting pirated books – they call themselves “shadow-libraries” – said that while most US firms have shied away, “Chinese firms have enthusiastically embraced our collection, apparently untroubled by its legality”.

The more data a model is trained upon, the better the model. We should not be trying to cripple AI in Australia while others rush ahead.

There are good reasons that authors and other creatives should want their work to be part of AI training sets. What writer would wish their work to be unknown by the first superintelligence?

But policymakers have a choice here. If they want Australia to shape the future of AI, they need to develop a policy regime that adapts to innovation, not a stagnant one that gives our geopolitical rivals an advantage.

Dutton is losing the debate over nuclear energy right when we need it for AI

Published in Crikey

Peter Dutton is losing the debate over nuclear power. Even the pro-nuclear Financial Review agrees: it ran an editorial last week wondering where the Coalition’s details were. And the Coalition’s proposal for the government to own the nuclear industry has made it look more like an election boondoggle than visionary economic reform.

It is starting to look like a big missed opportunity. 

Because in 2024, the question facing Australian governments is not only how to transition from polluting energy sources to non-polluting sources. It is also how to set up an economic and regulatory framework to service what is likely to be massive growth in electricity demand over the next decade.

The electrification revolution is part of that demand, with, for instance, the growing adoption of electric vehicles. But the real shadow on the horizon is artificial intelligence. The entire global economy is embedding powerful, power-hungry AI systems into every platform and every device. To the best of our knowledge, the current generation of AI follows a simple scaling law: the more data and the more powerful the computers processing that data, the better the AI. 
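
For readers who want that relationship written down, a stylised version runs as follows. This particular power-law form is the one fitted in DeepMind's 2022 “Chinchilla” paper (Hoffmann et al.), not something stated in the article itself:

```latex
% Stylised neural scaling law (Chinchilla form, Hoffmann et al. 2022):
% predicted loss L falls as a power law in model size and training data.
%   N = number of model parameters
%   D = number of training tokens
%   E = irreducible loss; A, B, alpha, beta are fitted constants
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
% More compute lets you grow both N and D, so the loss keeps falling;
% that is the sense in which more data and more powerful computers help.
```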

We should be excited for AI. It is the first significant and positive productivity shock we’ve had in decades. But the industry needs more compute, and more compute needs more energy.

That’s why Microsoft is working to reopen Three Mile Island — yes, that Three Mile Island — and has committed to purchasing all the electricity from the revived reactor to supply its AI and data infrastructure needs. Oracle plans to use three small nuclear reactors to power a massive new data centre. Amazon Web Services is buying and plans to significantly grow a data centre next to a nuclear plant in Pennsylvania.

Then there’s OpenAI. The New York Times reports that one of the big hurdles for OpenAI in opening US data centres is a lack of adequate electricity supply. The company is reportedly planning to build half a dozen data centres that would each consume as much electricity as the entire city of Miami. It is no coincidence that OpenAI chief Sam Altman has also invested in nuclear startups.

One estimate suggests that data centres could consume 9% of US electricity by 2030.

Dutton, to his credit, appears to understand this. His speech to the Committee for Economic Development of Australia (CEDA) last week noted that nuclear would help “accommodate energy intensive data centres and greater use of AI”. 

But the Coalition’s mistake has been to present nuclear (alongside a mixture of renewables) as the one big hairy audacious plan to solve our energy challenge. They’ve even selected the sites! Weird to do that before you’ve figured out how to pay for the whole thing.

Nuclear is not a panacea. It is only appealing if it makes economic sense. Our productivity ambitions demand that energy is abundant, available and cheap. There has been fantastic progress in solar technology, for instance. But it makes no sense to eliminate nuclear as an option for the future. When the Howard government banned nuclear power generation in 1998, it accidentally excluded us from competing in the global AI data centre gold rush 26 years later.

Legalising nuclear power in a way that makes it cost-effective is the sort of generational economic reform Australian politicians have been seeking for decades. I say “in a way that makes it cost-effective” because it is the regulatory superstructure laid on top of nuclear energy globally that accounts for many of the claims that nuclear is uneconomic relative to renewable energy sources.

A Dutton government would have to not only amend the two pieces of legislation that specifically exclude nuclear power plants from being approved, but also establish dedicated regulatory commissions, frameworks and licensing schemes to govern the new industry — and in a way that encouraged nuclear power to be developed, not blocked. And all of this would have to be pushed through a presumably sceptical Parliament.

That would be a lot of work, and it would take time. But I’ve been hearing that nuclear power is “at least 10 to 20 years away” for the past two decades. Allowing (not imposing) nuclear as an option in Australia’s energy mix would be our first reckoning with the demands of the digital economy.

Managing Generative AI in Firms: The Theory of Shadow User Innovation

With Julian Waters-Lynch, Darcy WE Allen, and Jason Potts. Available at SSRN.

Abstract: This paper explores the management challenge posed by pervasive and unsupervised use of generative AI (GenAI) applications in firms. Employees are covertly experimenting with these tools to discover and capture value from their use, without the express direction or visibility of organisational leaders or managers. We call this phenomenon shadow user innovation. Our analysis integrates literature on user innovation, general purpose technologies and the evolution of firm capabilities. We define shadow user innovation as employee-led user innovation inside firms that is opaque to management. We explain how this opacity obstructs a firm’s ability to translate the use of GenAI into visible improvements in productivity and profitability, because employees can currently privately capture these benefits. We discuss potential management responses to this challenge, outline a research program, and offer practical guidance for managers.

Institutions to constrain chaotic robots: why generative AI needs blockchain

With Sinclair Davidson and Jason Potts. Available at SSRN.

Abstract: Generative AI is a very powerful new computing technology, but the problem of how to make it economically useful (Alice: “hello LLM, please send an email to Bob”) is limited by its inherent unpredictability. It might send the email, but it might do something else too. As a consequence, the large language models that underpin generative AI are not safe to use for most economically useful and valuable interactions with the world. This is the ‘economic alignment’ problem between the AI as an ‘agent’ and the human ‘principal’ who wants the LLM to interact in the world on their behalf. The answer we propose is smart contracts that can take LLM outputs and filter them through deterministic constraints. With smart contracts, LLMs can interact safely in the real world, and can unlock the vast economic opportunity of economically aligned and artificially intelligent agents.
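
A minimal sketch of the idea may help. It is illustrative only: the paper itself works with smart contracts, whereas the stand-in below is a hypothetical Python validator, but the logic is the same. The LLM proposes an action (“send an email to Bob”), and a deterministic rule set decides whether it may execute.

```python
# Illustrative sketch only: the paper proposes smart contracts as the filter;
# this ordinary Python validator is a hypothetical stand-in for the same idea.
# The LLM proposes an action, and a deterministic rule set decides whether
# it is allowed to execute. Same inputs always give the same verdict.

ALLOWED_RECIPIENTS = {"bob@example.com"}   # assumption made up for the example
MAX_EMAILS_PER_DAY = 5                     # likewise an invented limit

def constrain(action: dict, sent_today: int) -> bool:
    """Deterministic filter over an LLM-proposed action."""
    return (
        action.get("type") == "send_email"
        and action.get("to") in ALLOWED_RECIPIENTS
        and sent_today < MAX_EMAILS_PER_DAY
    )

proposed = {"type": "send_email", "to": "bob@example.com", "body": "Hello Bob"}
if constrain(proposed, sent_today=0):
    pass  # only now hand the action to the email client (or smart contract) to execute
```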

Large language models reduce agency costs

With Jason Potts, Darcy W E Allen, and Nataliya Ilyushina. Available on SSRN.

Large Language Models (LLMs) or generative AI have emerged as a new general-purpose technology in applied machine learning. These models are increasingly employed within firms to support a range of economic tasks. This paper investigates the economic value generated by the adoption and use of LLMs, which often occurs on an experimental basis, through two main channels. The first channel, already explored in the literature (e.g. Eloundou et al. 2023, Noy and Wang 2023), involves LLMs providing productive support akin to other capital investments or tools. The second, less examined channel concerns the reduction or elimination of agency costs in economic organisation due to the enhanced ability of economic actors to insource more tasks. This is particularly relevant for tasks that previously required contracting within or outside a firm. With LLMs enabling workers to perform tasks in which they had less specialisation, the costs associated with managing relationships and contracts decrease. This paper focuses on this second path of value creation through adoption of this innovative new general purpose technology. Furthermore, we examine the wider implications of the lower agency costs pathway on innovation, entrepreneurship and competition.

The problem of ubiquitous computing for regulatory costs

Working paper on SSRN.

The benefits of regulation should exceed the cost of regulating. This paper investigates the impact of widespread general-purpose computing on the cost of enforcing regulations on generative artificial intelligence (AI) and decentralised finance (DeFi). We present a simple model illustrating regulators’ preferences for minimising enforcement costs and discuss the implications of regulatory preferences for the number and size of regulated firms. Regulators would rather regulate a small number of large firms than a large number of small firms. General-purpose computing radically expands the number of potentially regulated entities. For DeFi, the decentralised nature of blockchain technology, the global scale of transactions, and decentralised hosting increase the number of potentially regulated entities by an order of magnitude. Likewise, locally deployed open-source generative AI models make regulating AI safety extremely difficult. This creates a regulatory dilemma that forces regulators to reassess the social harm of targeted economic activity. The paper draws a historical comparison with the attempts to reduce copyright infringement through file sharing in the early 2000s in order to present strategic options for regulators in addressing the challenges of AI safety and DeFi compliance.

The Case for Generative AI in Scholarly Practice

Available at SSRN.

Abstract: This paper defends the use of generative artificial intelligence (AI) in scholarship and argues for its legitimacy as a valuable tool for contemporary research practice. It uses an emergent property rights model of writing to shed light on the evolution of scholarly norms and practices. The paper argues that generative AI extends the capital-intensive nature of modern academic writing. It discusses three potential uses for AI models in research practice: AI as a mentor, AI as an analytic tool, and AI as a writing tool. The paper considers how the use of generative AI interacts with two critical norms in scholarship: norms around authorship attribution and credit for contributions, and the norm against plagiarism. It concludes that the effective use of generative AI is a legitimate research practice for scholars seeking to experiment with new technologies that might enhance their productivity.