The Minns AI disaster

Published in the Spectator Australia, 14 August 2025

Last week, with almost no fanfare, the Minns government introduced legislation to regulate the use of artificial intelligence in the workplace.

This would be one of Australia’s first AI laws. Unfortunately, it is a lesson about how laws on frontier technology can sound reasonable but be unworkable and counterproductive in practice. The bill is a recklessly broad bid for union control over workplaces, and, if passed, would be a serious brake on business productivity growth in New South Wales.

The bill is a revised version of the workers compensation bill stalled in the upper house. Unlike the earlier bill, it also creates a new health and safety duty for employers that use “digital work systems” to ensure that the way these tools allocate or monitor work does not create health and safety risks.

The idea is to prevent digital systems from being used to push workers into unreasonable workloads, to prevent businesses from imposing excessive worker surveillance, and to provide protection from discrimination. Union officials will be empowered to inspect the digital work systems if they suspect a violation.

You might think all of this is fair: the idea of being digitally managed and remotely monitored by our employers is pretty dystopian. But the bill is wildly over-drafted, and would give unions power to interfere with almost every aspect of the workplace.

On suspicion that a computer is “unreasonably” being used to allocate, coordinate, or monitor work, union inspectors would be able to access and inspect the software platforms and data that power every organisation.

The bill defines a “digital work system” as an “algorithm, artificial intelligence, automation, online platform or software”. This definition reads like the government is just throwing everything at the wall to see what sticks. It is redundant, for one. Artificial intelligence, automation, online platforms, and software are all made of algorithms.

But more importantly, this definition covers basically any way a business uses computers for work allocation. Everything from Microsoft Teams to Slack would fall under this umbrella. Email is a digital work system – so routine task allocation through a calendar invite could be grounds for union inspection if it is deemed “unreasonable”.

And this bill will present a serious disincentive to use modern AI platforms like ChatGPT in business. Explaining why a prompt returned one output and not another output is an unsolved problem in AI research. Any use of these AI models for management would be begging for union scrutiny under this new regime.

Businesses that don’t want to hand unions leverage over their basic operations will either sever the digital work allocation systems from other systems, or avoid using them altogether.

That may, of course, be the goal. Workplace law already targets psychosocial risks like excessive workloads and demands. The NSW Workplace Surveillance Act already regulates worker monitoring, and discrimination is the subject of a vast array of state and Commonwealth law. The novelty of the NSW bill is that it targets digital technology directly.

At the Commonwealth level, the Albanese government is currently in the middle of an internal debate about whether to regulate AI with a big, economy-wide bill or just address problems as they come. The government is reluctant to do the former because it is desperate for the productivity boost that many economists believe AI will spark. Productivity is meant to be the theme for the second term of the Albanese government. The union movement disagrees with this strategy. They want unions to have a veto over technologies that might threaten jobs. The Minns government’s proposal goes a long way towards achieving the unions’ goal: giving unions the right to inspect digital technologies that manage work.

We’ve been here before. During the Fraser government in the 1970s there was an energetic debate about the impact of computers on work. Then as now, many feared that the emerging frontier of digital systems might cause profound disruptions to the organisation of work. The union movement wanted businesses to be forced to consult with unions before they introduced computers.

Once again, the Australian economy is suffering through a severe productivity crisis. Once again, we have a suite of technologies that promise massive productivity gains. And once again, this technological revolution is being used as a ploy for union control over business.

AI tapping copyrighted content to learn from it is not piracy

Published in the Australian Financial Review, 9 August 2025

The Productivity Commission announced this week that it was investigating how artificial intelligence models could be more easily trained on Australian copyrighted content. The backlash from our creative industry has been severe and instant.

For the past few years, AI labs have been accused in Australia and elsewhere of large-scale piracy. It is, we are told, outrageous that the PC would be providing moral cover for this theft.

But the PC is right to probe here. We need a copyright regime that reflects how AI models actually work, and our policymakers need to understand the full economic and geopolitical stakes that the AI revolution represents.

The PC’s report into data and digital technology is more modest than you would expect from the reaction of the creative industry. It is “seeking feedback about whether reforms are needed to better facilitate the use of copyrighted materials” for AI training.

But we need to be clear about how AI training actually works. AI models do not copy the content they are trained on. They learn from that content. Specifically, when they “read” a text, they identify patterns in it and relate those to patterns they’ve learnt from other texts.

If a person reads a book and learns from it – updating the weights in their own neural network – we do not accuse them of piracy. What we do when we learn, and what AI labs do when they train their models, is quite different from copying. There are some legal subtleties here.

In the US, courts have distinguished between how the models are trained and how the training data is collected.

Meta and Anthropic are accused of downloading large quantities of copyrighted books and papers from piracy websites to feed them into the training process.

If they were to do so in Australia that would probably be a violation of our copyright laws. But that doesn’t mean the training itself would necessarily be.

The PC notes that the process of AI training necessarily involves temporarily copying content onto the labs’ servers. But that proves too much. We do the same when we read anything on the internet. The moment we browse to a website, our computer downloads that website into a cache folder. But that downloading is a technical necessity, is not economically meaningful, and we don’t treat it as a violation of intellectual property.

All these subtleties around AI training were, of course, completely unforeseen by the parliaments that created our copyright regime decades ago. We don’t need to review the economic upside of AI here. It has been interesting to watch the Albanese government over the past year realise that AI could be a Hail Mary pass.

We might be able to fix our deep productivity problems without the need for tedious reform. AI presents the best chance we have right now to bring about a surge in economic growth.

But there are also real geopolitical reasons not to hamper AI development in Australia and the rest of the free world. We are in the middle of a great global technological contest around AI capability. The contest is larger in scale and more economically consequential than the space race of the 1950s and 1960s.

The Western world dominates AI chip development. This domination allows the US to exert a degree of influence over Chinese AI capabilities through export controls. But there is, almost certainly, a moment coming when Chinese chips will be competitive, and China will have full sovereign capability over the complete stack necessary for state-of-the-art AI.

Mark it: this will be a political shock in the West, much greater than when DeepSeek R1 was released in January. When it happens, I hope it will finally pop the sense of complacency that has allowed us to indulge the idea that US tech firms are the bad guys.

Some in the creative industry would like AI training to be a matter of negotiation between rights holders and AI labs, book by book, photo by photo.

The Chinese AI labs do not share the same view. In a statement published this year, one website hosting pirated books – they call themselves “shadow-libraries” – stated that while most US firms have shied away, “Chinese firms have enthusiastically embraced our collection, apparently untroubled by its legality”.

The more data a model is trained upon, the better the model. We should not be trying to cripple AI in Australia while others rush ahead.

There are good reasons that authors and other creatives should want their work to be part of AI training sets. What writer would wish their work to be unknown by the first superintelligence?

But policymakers have a choice here. If they want Australia to shape the future of AI, they need to develop a policy regime that adapts to innovation, not a stagnant one that gives our geopolitical rivals an advantage.