A Nanny State On IR Policy Is The Liberal Choice

In politics, sometimes it’s best not to go into detail. This is the lesson Eric Abetz learned after he explained part of the Coalition’s industrial relations policy last Thursday.

Abetz told the Australian that, under an Abbott government, the Fair Work Commission would not approve workplace agreements that raised real wages unless there had been “appropriate discussion and consideration of productivity” (paywall).

Why? So “lazy companies don’t just give wage increases because it’s the easiest thing to do.”

It is one of the founding assumptions of Australia’s system of industrial relations that workers are unable to negotiate with bosses in their own best interest.

Around this paternalistic assumption we have built a superstructure of industrial relations law, tribunals, and controlled wages unique in the developed world.

Now the Coalition seems to think some bosses are just as incapable of looking after their interests. And that a government regulator knows how to run a business better than the business itself.

If true, one wonders how the labour market functions at all.

The Coalition’s policy is patronising, illiberal, and fundamentally anti-market.

Now we know the true legacy of WorkChoices.

The fight over WorkChoices represents the moment the Coalition turned its mind from liberalising industrial relations to regulating it.

More on that in a moment. On Friday poor old Senator Abetz was accused by his colleagues of “freelancing” – that is, speaking only for himself – and advised to avoid interviews for the next few weeks.

But the Coalition’s official workplace policy document does, in fact, say “before an enterprise agreement is approved, the Fair Work Commission will have to be satisfied that the parties have at least discussed productivity as part of their negotiation process.”

If anything, Abetz softened the policy, suggesting Fair Work will only second-guess agreements if they give pay increases above inflation.

That’s how sensitive the Coalition is to the WorkChoices tag – even talking about its own policy is off-message.

Industrial relations has a special place in the Australian political compact. It is Labor’s raison d’être; the world’s oldest labour party was born as the political wing of the union movement. Obviously it has a deep interest in wages policy.

In the Liberal Party there have always been free traders and protectionists, conservatives and liberals, fans of both big government and small. But one thing has bound the party together since its inception – an antipathy to union power and prominence.

So Labor supporters recount our political history as a contest between employees (labour) and employers (capital). For Liberal supporters our history is a contest between sectional interests (union thugs) and the mainstream (Forgotten People).

Yet eight decades of the Australian Settlement concealed a few subtleties in the Liberal view.

For all that time, opposing union power and supporting greater market control over wage setting were, effectively, synonymous.

When, during the Hawke, Keating, and Howard eras, labour law was slowly liberalised, this equivalence was superficially reinforced. As labour markets became freer, unions declined.

But then Kevin Rudd repealed WorkChoices. Rudd’s move was the first time since the reform era began that a liberalisation – in any sector of the economy – had been reversed. In 2007 Australia hit the market reform wall. This was very disorientating.

(I’ve described WorkChoices here as “liberalisation” because that’s what all sides of politics imagine John Howard’s policy was. In fact it was a complex regulatory takeover of workplace relations by the federal government. Still, perception is what matters.)

Now the Liberal Party has to figure out what its industrial relations priority is: to pursue a free market in labour, or to battle the unions.

Put another way, is Australia’s industrial relations dilemma that it is too highly regulated? Or is the dilemma that unions are too prominent?

After the 2007 defeat, there are many on the Liberal side who say the latter; many who imagine they are fighting a guerrilla war against the union movement. There are hints of this attitude in the Australian article. Abetz says the Coalition’s policy was developed “in response to unions ‘bragging’ that they had secured productivity-free pay increases.”

The Coalition’s solution to such hubris? Increase workplace regulation. If the government has to nanny lazy companies to reduce union power, then so be it.

Never mind that both sides of a mutually beneficial exchange should be “bragging” about the great deal they got.

It’s worth pointing out that unions would exist in a free society. They would have no privileged position in the law, and no coercive power, but, as Friedrich Hayek once wrote, everybody “ought to have the right to join a trade union.”

The dust from WorkChoices has settled. Now that the Coalition is preparing to form government again, what does it really want for industrial relations? Labour market freedom, or just the defeat of the union movement?

Sport And Betting Have Always Been Teammates

Victorian Greens senator Richard Di Natale has drafted a bill to ban betting odds being aired during sports broadcasts.

No, let’s rewrite that. Senator Di Natale has drafted a bill to kick Tom Waterhouse off the television.

Of course, Di Natale’s bill is no more likely to go anywhere than the few dozen other bills that have been introduced to the Parliament by minor parties. They are really just written for symbolic purposes.

And appropriately enough, in this case. Banning betting odds during broadcasts is the ultimate symbolic gesture – arbitrary feel-goodism masquerading as social policy.

The backlash against sports betting exposes the flimsy edifice that Australian culture has built around sport. On the one hand, we know sport is a multimillion-dollar corporate business where young and athletic men are split into groups, churned through training regimes, and paid to compete for our amusement. It is a vast money-making ecosystem.

Sport is like Hollywood, but much less risky: investors don’t have to worry about whether the creative types will come up with new and exciting stuff.

This industry is the world of Tom Waterhouse and government subsidies for stadiums and the Australian Crime Commission’s report into sports doping and the $1.2 billion the Seven Network and Foxtel paid for AFL television broadcast rights. It is a world where behaviour standards are written into player employment contracts to “protect the brand”. People get rich, people get sacked, people get sued. In other words, sport is an industry like any other.

And that is all great. Industries are great. Yet onto this particular industry we impose a web of mythology and fantasy that tries to lift sport above a business to a quasi-religious undertaking. Nobody works themselves into a moral fervour about drug use in investment banking, or in motion pictures. But they do in sports. The sporting world is obsessed with honour and sportsmanship. And purity. It is no coincidence people keep calling for sporting codes to be “cleaned up”, or say a game was played “clean”.

The ideologists of sport proclaim it can bring communities together. In past eras – especially before the violent 20th century – they thought sport could replace warfare. These days, it is mostly about children and vague feelings of social cohesion. The federal government offers funding for a Multicultural Youth Sports Partnership Program. AFL clubs eagerly promote Harmony Day. It’s all very … romantic.

Yes, apparently there are still people who believe sport reduces social tension; people who are able to ignore the decades of violence and nationalistic politics that have swirled around domestic and international sport. And many of these romanticists appear to view the industry of sport with horror.

By now, everybody who is not a first-year arts student has come to terms with the fact that sport involves money. An older debate along these lines – about whether sport should remain amateur or go professional – looks very quaint from the vantage of the 21st century.

Sports betting is just the latest bogeyman – yet another threat to that romantic vision. Yet betting on sport is as old as sport itself. One British sports historian, Wray Vamplew, says that much of the strict codification of the rules of sport in the 19th century was driven by the needs of gambling. Early punters found it hard to bet when the rules weren’t codified.

So the sudden panic about odds being broadcast on television is a bit precious – a triumph of the mythology of sport over the reality of sport. It is indicative that most critics of sports betting say they are not worried about the betting so much as seeing the odds on television. They don’t want to break the fantasy. They don’t want to see the revenue streams behind the curtain.

For all the hyperbole and hand-wringing, sports betting is a tiny sliver of gambling in Australia.

The Queensland government keeps national gambling statistics. In 2009-10 (the latest year for which comparable figures are available), Australians spent a total of $18.5 billion on all gambling. This number includes everything from racetrack betting to pokies to TattsLotto. They only spent $303 million on sports betting – just over 1.5 per cent of the total.

Yet one academic proclaimed on The Conversation website last week that sports betting represented the steady “gamblification” of everyday life – that Tom Waterhouse is a sign that Australia is being buried by gambling.

The evidence suggests quite the opposite. Total expenditure on gambling has remained steady over the past decade. And if we take population growth into account, then in recent years gambling has begun to decline. Nothing here screams “impending social problem”.

Instead, the Greens’ Richard Di Natale falls back on an old standard. “It’s becoming increasingly hard for young kids to know where the sport ends and the gambling begins,” he said in a press release announcing his bill.

That’s the think-of-the-children argument, a favourite of censors, wowsers and reactionaries for two centuries.

It is fine to view sport through a romantic lens. But that lens won’t survive if it requires deliberate ignorance.

An Assault On Diet

When the National Health and Medical Research Council released its official new dietary guidelines this week, it helpfully included a sample daily meal plan.

This was a mistake. The meal plan inadvertently demonstrates how ridiculously austere the NHMRC’s ideal diet is. It’s almost comic. We’re being recommended the culinary equivalent of sexual abstinence.

For an average man, the hypothetical day begins with toast (wholemeal, two slices), baked beans (half a can), a tomato (medium size), and a glass of milk (250ml, reduced fat).

Breakfast is as good as it gets. Lunch is a sandwich (wholemeal) with 65 grams of sliced roast beef, 20 grams of reduced fat cheese and some salad. Two small coffees may be consumed at your discretion. For dinner, look forward to a tiny piece of fish – 100 grams maximum – rice, and a small, boiled potato. End your day with a glass of water. (Dinner for women: a cup of pasta, 65 grams of beef mince, kidney beans and half an onion.)

Pity those who try to follow the government’s new diet. This is self-denial pretending to be cuisine.

According to the NHMRC you mustn’t even use salt – that mineral essential to the human practice of cooking. It’s no exaggeration to say the desire for salt has shaped civilisation. To eliminate salt is to reject thousands of years of food wisdom.

Official dietary guidelines have been steadily reducing any pleasure we might draw from food. The government-endorsed diet is getting worse; more ascetic, more brutal, more surreal. It’s entirely divorced from human taste.

The CSIRO’s bestselling 2005 Total Wellbeing Diet was positively decadent compared to the NHMRC’s new rules. Male dieters were permitted between 2½ and four times as much meat for their dinner. Salt was allowed, in moderation. And the entire point of the CSIRO’s recommended diet was to help people lose weight. The spartan new guidelines are for people who already have a healthy weight.

Dietary guidelines are highly political. There are many special interests with a special interest in what we eat. Industries that find their products downgraded protest loudly.

Meat and livestock producers don’t like the idea we should eat less meat. In the United States, dietary recommendations have been forever shaped by lobbyists. The subsidised sugar industry has political clout.

But there’s a deeper ideological battle going on around nutrition.

After all, what is the point of providing “guidelines” that are so far removed from the experiences of Australian eaters? Surely health tips should be not simply scientifically accurate, but also socially plausible.

Advice is pointless if it’s going to be ignored. If our best medical minds have decided that drawing any pleasure from food is too risky, perhaps they should rethink their goals.

In 2008, the NHMRC decided any more than two glasses of wine in a single session constituted “binge drinking”. This decision turned the previously benign cultural practice of sharing a bottle of wine into dangerous hedonism.

But “binge” is a moral concept rather than a scientific one – it’s just a synonym for “bad”. Since risky behaviour exists on a continuum, this redefinition was little more than an attempt to berate people into changing their behaviour.

That was five years ago. Now public health activists are pushing the message “there is no safe level of alcohol consumption”. Another banality pretending to be insight. There’s no totally safe level of doing anything. But expect to find “no alcohol” on official recommendations soon.

Food and drink are deeply intertwined with cultural identity. No wonder our palate is a political plaything. Environmentalists are frustrated the NHMRC didn’t focus on sustainability. Social-justice types want more attention on equity and fairness.

In Bold Palates: Australia’s Gastronomic Heritage, the historian Barbara Santich relates the story of a Sydney doctor who in 1893 proposed a national dish in the lead-up to Federation: perhaps a “vegetable curry”, he thought, “or some well-concocted salad”. Such a delicate, health-focused dish was never likely to be embraced in a land of mutton, damper, and kangaroo-tail soup.

In 2013 we still don’t have a consensus national dish (why would we want one?) but the success of MasterChef and My Kitchen Rules suggests a cultural change in food and dining. Australia is the perfect combination: a rich, immigrant, and agricultural nation. Our cuisine is starting to reflect that holy trinity.

The government’s health guidelines are directly opposed to this new culinary culture. They would strip away the pleasure and meaning of food.

Indeed, there’s something symbolic in the way the NHMRC has offered different menus for men and women. Sharing a meal with the opposite sex is getting in the way of kilojoule management.

Our new health guidelines are more utopian than honest. They may be theoretically ideal – nutritionists can argue the details – but they’re also unrealistic, implausible, and unappealing.

Maybe culinary abstinence is the healthy choice. But replacing the joys of cooking and eating with a tightly engineered formula of self-denial is unlikely to be the happy choice.

Videogame Blame Distracts From The Real Gun Debate

Before Adam Lanza murdered 20 children and six adults at Sandy Hook Elementary School, he smashed the hard drive on his computer.

That act may frustrate investigators trying to figure out his motives, but it has proved to be no obstacle for amateur psychologists.

Indeed, the closest the National Rifle Association’s Wayne LaPierre came to coherence in his bizarre press conference last week was when he blamed “vicious, violent” games like Grand Theft Auto, Mortal Kombat, and an obscure browser game called Kindergarten Killer.

There are now two competing lessons about the Sandy Hook school shooting. One focuses on Lanza’s access to guns, and one focuses on Lanza’s fondness for videogames.

A plumber who worked on his home says Lanza was “obsessed” by games. This psychological issue was apparently diagnosed in the time it took to work on the Lanza household’s pipes.

A high-school classmate says his preferred weapon in videogames was an assault rifle; a damning assessment only if you’ve never played any games at all. (Assault rifles tend to be the best in-game weapons.)

Police investigators report there were “thousands of dollars” of games in the home: another tidbit which is superficially compelling if you don’t know that a game can cost about $100 and that most gamers acquire dozens of them.

Lanza “played videogames for hours”, breathlessly reported the New York Daily News – which is not remarkable, considering most games last around 10 hours.

Take these little factoids with a grain of salt. The Sandy Hook shooting was nearly two weeks ago but, as with all tragedies, almost every piece of information is still provisional. The world’s press has swept through Connecticut trying to find new angles and dig up tales about the killer.

Falsehoods become embedded in our minds when they tell a compelling story. At his press conference Wayne LaPierre listed the most violent-sounding games his researchers could find, but – as far as we know – Lanza’s favourite game was StarCraft, a science fiction strategy game. This Washington Post story says he was particularly good at Dance Dance Revolution. Not many assault rifles in that game.

The charge that videogames cause violence is easy to refute. There is a large amount of research on the question and it’s compelling. To give just a taste: game sales have skyrocketed in the last decade in the United States, but the rate of violent crime is heading towards historic lows. There’s no obvious relationship between videogame usage and gun-related murder, as this ten-country comparison demonstrates. A study published in August this year (PDF) found videogames don’t seem to have any effect – negative or positive – on adolescent aggression in the short or long term. Virtual violence doesn’t desensitise gamers to real-world violence.

On the more particular topic of school shootings, a joint report by the US Secret Service and Department of Education in 2002 found only 12 per cent of school shooters had expressed an interest in violent videogames.

Obviously, Wayne LaPierre mentioned videogames to muddy the policy waters. Far from being principled defenders of the American constitution, the gun lobby is happy to attack the First Amendment to protect the Second.

But targeting videogames allowed him to make this obscene claim: “does anybody really believe that the next Adam Lanza isn’t planning his attack on a school he’s already identified at this very moment?”

This is both logical and stupid: millions of people play videogames, so millions of people are potential mass murderers. But how LaPierre thought mass panic would serve the interests of gun owners is difficult to understand. (Although you can see why gun manufacturers might be pleased.)

In a way, it’s too late. The American education system descended into security paranoia long ago.

After the Columbine shootings in 1999, many states rigidly enforced zero-tolerance approaches to violent or threatening behaviour in schools. Zero-tolerance made sense at the time. One of the Columbine killers, Dylan Klebold, had written violent essays, and it was tempting to think the massacre could have been averted if his teachers were on guard.

But in practice zero-tolerance was highly repressive. There are countless stories of children being expelled or suspended for simply drawing pictures of guns, for playing cops and robbers, for bringing a paring knife in their lunchbox to cut fruit. These tales would be laughable if they weren’t so cruel.

On top of unthinking zero-tolerance policies, we can add metal detectors at schools, massive arrays of CCTV cameras, random locker and car searches, and armed police. This recollection of education in Virginia on BoingBoing offers a glimpse of the security madness which now characterises the American public school system.

It will only get worse. In response to the Sandy Hook shooting, all 4,000 elementary schools in Ontario (yes, the Canadian province, that Ontario) will be implementing a “locked door” policy during school hours.

The NRA’s plan was to use fear – fear of videogames, of violent culture, of “the next Adam Lanza” – as a distraction from the gun debate. Perhaps they needn’t have bothered: that fear and paranoia was already there.

Grandstanding about mobiles won’t reduce the road toll

It’s an old principle of policing – if you can’t enforce the laws on the books, demand more laws.

More than 55,000 people in Victoria were booked for using their mobile phones while driving last year. That’s around 150 people a day.

So on Monday, the front page of the Herald Sun reported that Victoria’s chief highway patrol cop wanted the government to force drivers to switch their phones off in cars.

Never mind that a ban on phones in cars would be completely unenforceable.

Victorian road rules are clear. The Road Safety Act bans mobile phone use while a car is running. The only exception is receiving calls or using navigation functions with a commercially fitted holder. Even then, the driver cannot touch the phone at any time. The fine is $300 and three demerit points. New South Wales enacted similar laws last week.

Yet one survey suggests around 60 per cent of Victorians still use their phone while driving. Those 55,000 bookings aren’t a lot, considering more than two million of the state’s 3.7 million licensed drivers are breaking the rules.

The Herald Sun article said “thousands of rogue motorists flout the law”. No – millions do.

First things first: it is incredibly stupid to use a mobile phone while travelling at speed. Driving is a complex task. Sending a text message increases the risk of an accident by up to 23 times. That much is easy to demonstrate in simulations and in-car experiments.

But things get less certain from there.

The “while driving” data is a bit misleading. It includes a lot of circumstances we wouldn’t usually call driving – like checking your phone while stopped at a traffic light. But if the engine is running, it counts.

The NSW government commissioned a study into the extent of the problem earlier this year as part of a parliamentary inquiry. The results were striking and counterintuitive.

Seven per cent of accidents in NSW in the last decade involved driver distraction. And within that 7 per cent, only 1 per cent involved a handheld phone.

Don’t get too hung up on the specific numbers. There are many complicated definitional issues. There’s a large body of academic research on driver distraction but it’s not all comparable. And, obviously, the ideal number of accidents is zero, whether related to phones or anything else.

Yet it still remains that mobile phones account for an extremely small proportion of the driver distractions involved in accidents. The majority of distractions come from outside the car. Then there are those within the car – like fellow passengers, grooming, or eating and drinking.

There are even three times as many accidents involving police pursuit as mobile phones.

The overwhelming majority of accidents involve exactly what you’d expect: speed, fatigue, and drink. Mobile phones hardly rate.

But you wouldn’t know that from the press. Phones dominate the popular discussion of car accidents. Using a phone while driving seems to be the ultimate in recklessness. It is terrifying to imagine there are people speeding down the freeway while tapping out text messages.

Smart phones are a novelty, and novelty makes news. Stories about how mobile phones cause accidents have all the characteristics of a moral panic – a disproportionate reaction to a small problem. Drivers face worse distractions. There are more disconcerting risks on the road.

For instance, one 2005 study found in-car entertainment systems are a far bigger real-world distraction than phones. You have to take your eyes off the road to change a CD or radio station. Handheld phones are problematic not because they impair drivers physically, but because talking while driving takes extra mental effort. It’s the conversation which is dangerous, not the phone. (This explains why some studies have found hands-free phone systems are no safer than hand-held ones.)

These are uncomfortable findings. No politician wants to challenge the right of drivers to chat with passengers or listen to the radio. Anyway, that’s why we have careless driving laws, and take recklessness and negligence into account in criminal accident proceedings.

Nevertheless, there has been a remarkable decline in car fatalities over the past few decades. The Commonwealth government has been tracking road deaths since 1925. Deaths have fallen from 30 per 100,000 population in 1970 to seven in 2008. If anything, that understates the decline: we’re driving twice as much as we did 40 years ago. And the death toll is still going down, even as more people buy more complicated phones.

A society should try not to have too many unenforceable laws. They breed contempt for the law as an institution. If people get used to disobeying one law, they may become comfortable with disobeying others.

As the American writer Radley Balko has argued, calls to increase restrictions on mobile phones in cars aren’t about safety; they’re about symbolism.

It’s already illegal to use phones in the car. Lots of people do it anyway. But political grandstanding about mobiles is not the same as reducing the road toll.

The puritanical public health movement

For eight weeks in 2011, four public health researchers – three from the Cancer Council, one from the University of Western Australia – watched 792 music videos aired on Australian television. They recorded all the mentions of alcohol, tobacco or illegal drugs.

The results were published in the journal Alcohol and Alcoholism in September this year. About one-third of the music videos referenced drugs. The vast majority of those references were to alcohol.

Here the full horror is unveiled: “references to alcohol generally associated it with fun and humour”. Only 7 per cent of the music videos that referred to booze presented alcohol in a neutral or negative light.

The paper argued music videos mentioning alcohol positively should be classified differently and regulated out of the morning timeslot.

But more broadly, the implicit claim of this research is there is something wrong with our culture: not just “the culture of drinking”, but culture in general.

Society associates alcohol consumption with fun, humour and celebration. According to the researchers, that association is “insidious”. One might add: pretty accurate.

This minor paper tells us a lot about the spreading ambition of public health activism.

The modern field of public health started with campaigns against ignorance. Educational programs were designed to inform the citizenry of the health consequences of their choices. The messages were simple. Smoking is bad for you. Keep fit. Eat more vegetables.

Such benign information provision is a thing of the past. Now public health is a great social project. It desires nothing less than a complete rewiring of our preferences, and of the culture which it assumes formed them.

It’s not just that the study of public health is deeply paternalistic and patronising. Nanny state accusations have pursued the field for decades. And no wonder: the Rudd government’s Preventative Health Taskforce even recommended the Government regulate the portion size of restaurant food.

But nanny state doesn’t quite capture it. Public health is an imperial discipline, dragging in everything from cultural studies to urban planning. And it does so all in the service of an increasingly ambitious program to reshape society and prioritise health above all other moral values.

Take the most fashionable adjective in public health right now: “obesogenic”. This pseudo-medical term describes an environment – usually physical, but sometimes social and cultural – which encourages over-eating and under-exercising.

Under the obesogenic flag, public health activists seek to colonise debates over housing sprawl, economic policy, public transport, childcare, house size, telecommuting, infrastructure spending, consumerism, and sustainability. Even law and order has been dragged into the public health domain: high crime rates mean parents don’t let their children walk to school, which means those children get fat.

Here public health becomes less a medical concern and more an umbrella social critique. As one book, Obesogenic Environments, puts it, obesity is first and foremost a social problem. Certain obesity-encouraging practices have become culturally embedded. We eat out more. We drive instead of walk. It is the self-appointed task of public health activists to change those embedded practices; that is, “promote healthier choices”. Town planning has to change. Tax policy has to change. Infrastructure spending has to be reprioritised. Our preferences have to be redirected.

With its grand social crusade, the public health movement has come full circle.

Temperance activists in the late nineteenth and early twentieth centuries talked as much about social practices as about alcohol consumption. The major American temperance lobby was called the Anti-Saloon League. Saloons weren’t just bad because they were where the drinking happened. They kept men away from their families, and encouraged other sinful behaviour.

In Australia temperance activists lashed out at everything. In 1896 the South Australian politician King O’Malley described barmaids as “the polished fangs of the stagger-juice rattlesnake… angels of mercy luring men to their own destruction”. Several states banned barmaids. One major avenue for female employment – and the economic independence it brought – was closed. Poor old barmaids were merely collateral damage for the monomaniacs obsessed with stamping out booze.

In the same way, today’s public health movement is willing to jettison many other values in its quest to rewire society.

The hard-won conveniences of modern life – cars, restaurants – are obstacles to a better world. Popular culture is “insidious”, simply because it reflects our own beliefs back at us. Choosing what we eat and drink is not a right, it is a prison.

Public health is groping towards a full-blown political philosophy. Sure, it speaks the language of medicine. But it is more ambitious and vague than that modest field. The paper on music videos is a ham-fisted attempt to give cultural studies a scientific patina.

Like the puritans of the past, the public health movement is flailing against a society and economy it believes are deeply unwell.

Behavioural Economics: An Excuse To Tax And Regulate

Few areas of study are as fashionable as behavioural economics – the integration of psychological factors into economic analysis.

No wonder. Behavioural economics seems tailor-made for public policy. If people do not act rationally and do not pursue their own best interests, then perhaps markets aren’t that good. From there, the case for government intervention seems pretty obvious.

Two of Australia’s left-wing think tanks, the Centre for Policy Development and Per Capita, have released reports specifically on the implications of behavioural economics. And it is a rare paper from the Australia Institute which doesn’t discuss how market actors are riddled with biases, psychological flaws, and irrationalities. Therefore, they all conclude, governments need more power. There’s hardly a regulation or tax that hasn’t been justified by reference to the behavioural economics literature.

But the public policy implications of behavioural economics are more interesting than that.

The study of behavioural economics has largely focused on the irrationality of participants in the market. Yet there are two sides to policymaking. Regulators, bureaucrats, and politicians are just as affected by psychological tics as consumers and businesses.

A newly published paper in the Journal of Regulatory Economics, “Behavioural economics: implications for regulatory behaviour”, makes the obvious point: if the claims made by this field are right, then it should make us think just as sceptically about government action as about consumer action.

After all, it would be no good to destroy the myth of Homo Economicus just to replace it with an equally pernicious myth of Homo Bureaucratus – a clearheaded and efficient policy designer.

There is no reason to believe that someone moving from the private sector to the public sector suddenly becomes more rational and unbiased. The dispassionate, rational economic actor might be a convenient fiction dreamt up by modellers and theoreticians, but then so is the dispassionate, rational, unbiased policymaker.

The paper’s authors, James C Cooper and William E Kovacic, look specifically at anti-trust law, where behavioural economics is commonly used to study business decisions to enter or exit markets, to merge with other firms, or to split up. Cooper and Kovacic argue that the bureaucrats who regulate those decisions are likely to have biases that undermine the effectiveness of government intervention.

Regulators are like the rest of us. They are over-confident, thinking they can understand complex behaviour. Hindsight bias leads them to believe events are more predictable than they are. And, unsurprisingly, they are driven by action bias – a tendency to favour interventionist solutions when faced with a problem.

In fact, regulatory biases could be worse than market ones. Behavioural economics tells us that irrationality is everywhere. But the marketplace provides firms and consumers with instant or near-instant feedback. In a competitive market, psychological bias can lead to failure or loss of market-share. With such feedback, market participants will change their actions. Make a mistake, lose money… do better next time.

By contrast, regulators receive little, if any, feedback. They operate in a political world, not an economic one. Regulatory or bureaucratic error is hard to pin down. It’s harder to allocate blame for errors. It’s even harder to quantify the costs of those errors.

Market participants learn from their mistakes. But regulators are completely isolated from the consequences of their decisions, so it’s much harder for them to learn.

Compounding that, confirmation bias – where the introduction of new, ambiguous information leads to the unjustified hardening of previous conclusions – may steer regulators and their political masters into believing a policy has been a triumph when it has not.

Indeed, even what constitutes success or failure in the public sector is debatable. Few policies have defined criteria whereby we can determine if they have succeeded or not.

In the Centre for Policy Development’s 2008 paper, You Can See a Lot by Just Looking: Understanding human judgement in financial decision-making, Ian McAuley rightly points out that humans are susceptible to the fallacy of sunk costs.

“We find it very difficult,” McAuley writes, “to make decisions solely on the basis of future costs and benefits, particularly if it means implicitly admitting that we have made poor decisions in the past”.

This is true for private actors, but is especially true for governments. Old bureaucracies never die – they just get renamed. Subsidies survive long past their use-by date. And taxes are stubborn things.

So far, the policy debate around behavioural economics has led with ideological conclusions – apparently offering those who believe governments should tax and regulate more a cutting-edge reason for doing so.

But if we want to fully understand the implications of behavioural economics, we’ll have to recognise that the field offers an even harsher critique of government than it does of markets. And the safe money says policy makers and bureaucrats will not enjoy the spotlight on them.

The True Origins Of Anti-Paternalism

Opposition to government paternalism wasn’t always a conservative or libertarian thing. Indeed, the use of the word “nanny” to describe state interference in individual choices originally came from the left.

In a 1960 article in the New Statesman, the magazine set up by members of the Fabian Society, nanny was deployed to attack the British Board of Film Censors. “Novels and the Press get along, not too calamitously, without this Nanny; why shouldn’t films?” asked the New Statesman columnist William Whitebait. Nanny “exercises a crippling drag on the growth of a serious and healthy British cinema.”

Eight years earlier, the American journalist Dorothy Thompson (and one-time wife of Sinclair Lewis, the Nobel-winning socialist writer) was using nanny to describe British imperialism in the Middle East.

Western empires, Thompson wrote in her syndicated column, have “filled the role of headmaster, or Nanny-governess”. The West does not treat the inhabitants of its colonies as equals. She continued:

It is an amusing notion that comes to me that, with the retreat of empire, Britons are turning Britain itself into a Nanny-state, perhaps out of a long habit in persuading or coercing natives to do what is good for them.

Anti-censorship and anti-empire. These are not typical conservative positions. But both were drawn from the same anti-paternalism that drives the modern resistance to public health regulation – a belief that a powerful class should not impose their own values on the rest of society.

Colonial masters instructed their subjects in the best way to live their lives – lessons given force by military domination. And 20th century censors claimed to be protecting the less refined from the crude excesses of popular culture – judgements only moral superiors could make. Whitebait made much of the fact the British censors were aging aristocrats. Sir Sidney Harris, 83, was being replaced by Lord Morrison of Lambeth, 72. Who were they to tell Britons what they could or could not watch?

Of course, this is not how public health activists record the history of anti-paternalism. I gullibly took their claims at face value in May last year when I wrote in The Drum that “nanny state” is first found in the Spectator in 1965. This is more than a decade after Thompson used it.

According to this story – told by the Australian public health luminary Mike Daube in a 2008 paper in Tobacco Control – it was coined by the former Conservative minister of health Iain Macleod, who later died of a heart attack. (Macleod was a deeply ill man, suffering from an inherited weakness for gout, a war wound, and a chronic inflammatory disease. But Daube and his co-authors imply it was just smoking that did him in.)

Does nanny’s origin matter? Yes, insofar as it demonstrates that anti-paternalism is not – or at least was not – the exclusive preserve of the right.

How would the readers of those words in the New Statesman have responded to the claim by the British Labour leader Ed Miliband last week that the Tories had failed to stop the sale of discount chocolate oranges to the masses? Yes, Cameron complained about the same thing when he was in opposition. But, as our New Statesman readers might say, Cameron is a Tory. You’d expect a bit of Tory paternalism from him.

Or how would those who nodded along with Dorothy Thompson’s distaste for imperial paternalism feel about the recent Australian complaints that Aldi is selling cheap alcohol?

Rejecting eight out of Aldi’s 20 liquor license applications in New South Wales, the chairman of the state liquor regulator said last week that “I don’t know if there are areas that have too many bottle shops but certainly there are areas that have enough”. Are there? That seems a call best made by Aldi and its customers.

One could even go so far as to say that cheap is a good thing. Recall that it was Aldi which consistently won the Rudd government’s Grocery Choice competition. Aldi is at once a hero for selling consumers goods they want cheaply (food), and a pariah for selling consumers goods they want cheaply (alcohol). This doesn’t have just a whiff of paternalism. It has a stench.

And somehow such paternalism is even more obnoxious when it is petty, as it is with the Aldi licence rejection. Sure, reports suggest Aldi’s cheapest wines are rubbish. This is no surprise at $2.49 a bottle. (By contrast, a critic in The Australian suggests that the 83c beer is “gluggable”.) But so what? Cut-price bottom-shelf alcohol is already available at other stores. Geographic limits on bottle shop numbers are designed to do nothing more than frustrate purchasers. “All this of course for our good,” as Whitebait sarcastically told his New Statesman readers.

Those who seek to limit our choices usually have good intentions.

The film censors who banned Battleship Potemkin for three decades believed they did so in the British people’s best interests. The colonialists believed the same about the third world. And those who would limit bottleshops in New South Wales also believe they are doing the right thing.

But the underlying philosophy is the same: a deep paternalist belief that people must not be trusted to look after themselves.

Fat chance of cutting calories

“Given the lack of evidence that calorie posting reduces calorie intake, why is the enthusiasm for the policy so pervasive?” asked an editorial in the American Journal of Clinical Nutrition in February this year.

It’s a good question, considering that mandatory nutrition menu labelling in chain restaurants is being rolled out in New South Wales and Victoria this year. Major fast food outlets, pizzerias, cafes, bakeries, and juice bars will have to show the amount of kilojoules in each item on their menu.

Mandatory menu labelling is a useful case study in the lack of good evidence behind much public health paternalism. “Useful” because it has a stronger evidentiary base than most. But that’s not saying much.

New York City began mandating nutrition labels in chain restaurants in 2008. The New York Department of Health predicted big things – 150,000 people would be saved from obesity within five years, preventing “more than” 30,000 cases of diabetes.

Preventative health policy is contagious. Menu labelling was picked up by Australia’s Preventative Health Taskforce, and included in its paper on obesity released in October 2008. The New York Health Department’s ambitious but well-publicised claims were quoted.

The critical piece of scholarly evidence that backers of this policy have relied upon is a 2008 study published in the American Journal of Public Health (it predates New York’s mandatory measures).

The study’s authors looked at a sample of customers from 11 fast food chains across New York’s five boroughs. All provided nutritional information to customers. But only one provided that information at the point of sale. The rest simply displayed it on a website or tucked away in the store.

No shock, then, that only 4 per cent of customers reported seeing it in those latter stores, as opposed to 32 per cent who saw it when displayed on the menu.

A third of those who reported seeing the information purchased items with fewer calories.

Pretty conclusive, you’d think. Except for the fact that the restaurant which displayed the calories was Subway – a chain which has deliberately marketed itself to the health conscious, not least of all by displaying calories on its menus.

That’s no incidental detail. The big issue in obesity policy is that many interventions require a pre-existing preference to eat healthy. Subway regulars are likely pickier about their food intake than, say, McDonald’s regulars. There’s a reason their mascot is a guy who lost a lot of weight.

Even then, the overwhelming majority of Subway customers did not reduce calories even after they saw the information.

Still, as far as evidence goes the paper isn’t bad – equivocal in the details, but better than nothing. So Kevin Rudd’s Preventative Health Taskforce cited the paper, and recommended menu labelling. (Of course, the taskforce didn’t mention the chain in question was Subway.)

But evidence for the effectiveness of menu labelling has slid backwards fast since the policy was made mandatory for all New York chains in 2008.

A 2009 paper in Health Affairs found that while 27 per cent of people self-reported that the calorie labelling influenced their purchase decisions, analysis of actual purchase receipts did not bear this out. People were saying one thing, but doing another. Just one more example of why we shouldn’t put much stock in social science surveys.

The Health Affairs study looked at New York’s low income communities. Calories consumed had actually gone up slightly in some restaurants. Some academics have supposed that menu labelling helps a subset of consumers to calculate the most calories they can purchase for their buck.

A similar phenomenon was detected in a 2009 study in the American Economic Review, which found that “providing calorie information may have small effects on food choices, but may also produce perverse effects.”

A study published this year in BMJ also found that, while chain by chain the response could vary significantly, mandatory labelling led to no overall decline in calories purchased.

Of course, the New York City Health Department now says it has new – and unpublished – evidence which apparently proves the regulation works in four of the 13 chains it surveyed (alternatively: it doesn’t work in nine).

So it’s fair to say that in 2011 there is no good reason to suppose that mandatory menu labelling will have a discernible effect on obesity rates.

That the one real-world example of mandatory labelling appears to have failed should give pause.

But that failure has not stopped public health lobbyists from working hard to impose the policy. Or governments from pressing ahead.

The Australian Obesity Policy Coalition’s policy brief on menu labelling cites the Subway research, without naming Subway, and without mentioning the easily accessible recent studies of mandatory menu labelling’s real-world effectiveness. The brief was written this year, so there is no excuse.

Newspaper articles announcing the Victorian decision also only referred to the Subway research (again, without naming the chain), but not the subsequent studies. Politicians leant on the once-over-lightly – and now outdated – Preventative Health Taskforce report, which, as we’ve seen, also relied on this 2008 paper.

The Heart Foundation, to its credit, did a proper literature review up to 2010. And having done so, it could conclude no more than mandatory menu labelling should be trialled to see if more concrete evidence could be found.

Nevertheless, joining John Brumby at the policy’s announcement, the co-chair of the Heart Foundation claimed implementing menu labelling permanently was a “fantastic initiative” and “definitely will raise the awareness around what people are eating”. No calls for a limited trial. No equivocation. No scholarly dispassion there.

It’s almost as if the evidence is beside the point. The public health community has its mind fixed on mandatory menu labelling.

Menu labelling is now going to be rolled out across the United States. In Australia, activists are talking about the need for a “national approach”.

Sure, as far as regulations go, it’s a relatively minor one. Yes, it is expensive to implement. But many restaurants are doing it anyway (Subway was merely first). The legislation is limited to large chains which can spread the cost. And if it saves just one calorie…

But the published data can be very inaccurate. Sixteen-year-old pizza chefs don’t exactly measure pepperoni by the gram. The McDonald’s auditors can’t control exactly how many fries constitute “medium” fries. There’s a lot of variation in even the most regimented cooking.

It should be concerning that labelling could result in poor people consuming more calories rather than fewer – a result which nobody predicted before the New York regulations were imposed. Unintended consequences are like that.

Nevertheless, as a case study in public health paternalism, it should be more concerning how policies which have little evidence to support them gather an unstoppable inertia.

The Nanny State Is Coming…For Your Democratic Soul

Is saying “nanny state” just a cheap slur?

The term was coined in The Spectator in 1965 and clearly bears the marks of that publication and that era.

I have in my collection half a dozen academic papers published in public health journals decrying its use: “The nanny state fallacy”; “No need for nanny”; “Nanny or Steward?”; “Medical police and the nanny state: Public health or private autonomy?”; and so on.

One, co-written by a prominent member of the Rudd government’s Preventative Health Taskforce, Mike Daube, compares “nanny state” to the phrase “health Nazi”. Daube and his colleagues argue the latter is needlessly offensive and the former is “an easy phrase in the same tradition”.

Daube’s artless comparison reminds us that one can protest too much. The problem some in the public health community have with the nanny state appellation isn’t that it’s unfair. It’s that it resonates with the public.

And we rely on “nanny state” because it describes something very specific – an observable and concrete change in the way government relates to individuals. It has no obvious or elegant alternative.

Where more “traditional” regulatory interventions try to protect individuals against the adverse consequences of the decisions of others, the nanny state primarily seeks to protect individuals from themselves. As with all public policy, supporters deploy a wide variety of justifications, but this is what makes nanny unique.

And the attitude which underpins such paternalistic policy has implications far beyond alcohol and fast food. It is, in a very real way, profoundly undemocratic.

If we can’t trust people to choose their poison, then how can we trust them with a vote?

That question may sound glib. But democratic legitimacy rests on a positive belief that while not all citizens may be equally intelligent or informed, they are equally sovereign, and as a consequence have the right to have a say about the country’s future. In their own small way.

The systematic chipping away by nanny state activists at this assumption – that people are competent to make such portentous choices themselves – presents more of a challenge to democratic legitimacy than the public health community may recognise.

Nobody is suggesting a cadre of experts should guide citizens to make the correct political choices. They never will. (The public health community would no doubt be horrified at the thought.) But it is hard to see why not, given this paternalist philosophical stance.

After all, we have undermined the notion of individual responsibility to such a degree that some government advisors no longer even trust people to, say, manage their own food serving size (the Preventative Health Taskforce suggested regulators standardise portions in restaurants).

Basic notions of political equality should compel us to leave those food portion decisions in the hands of individuals, not state-appointed experts. We cannot pretend to have a legitimate democracy if the government operates under a presumption that voters are idiots.

So dismissing individual responsibility has consequences. Once you’ve accepted that the government should not treat people as autonomous, all sorts of authoritarian policy results.

The increasing centrality of income management for welfare recipients is driven by the same philosophy. To support nanny regulations yet oppose income management is incoherent.

That’s a hypocrisy found on the left. On the right, drug-warriors against the nanny state are just as contradictory.

This is why the strongest objections to nanny interventions have always been philosophical, not practical or economic. The nanny state is a radical reworking of the relationship between individual and state.

Certainly, some nanny interventions have been going on for a long time, and, in retrospect, few find them objectionable. Seatbelt laws protect people from the consequences of their own decisions and potentially the behaviour of others.

The difference is of degree, not kind. But there is a difference nonetheless.

Where we can isolate one or two regulations of the past with similar purpose, the nanny state of the 21st century is expansive and insatiable. It’s a volume thing.

The Preventative Health Taskforce proposed 122 separate recommendations to clamp down on alcohol, tobacco, and weight-gain. It recommended seven entirely new bureaucracies be set up. It suggested twenty-six new laws and regulations.

Some critics have begun to describe paternalist interventions as indicative of a “bully state” mentality; a graduation from nudging to shoving. While this is true for some new and proposed regulations, it’s not clear that, say, imposing new, simplified food labelling laws is really bullying.

The Nobel-winning economist James Buchanan tried to rename the nanny state “parental socialism”. Buchanan’s alternative has an appeal, but its second word is louder than the first. And nanny is the better metaphor. In a democratic system, government is the hired help whom we delegate to perform collective tasks – not actually our progenitors or superiors.

We could invent other phrases.

Public health activists are clearly frustrated by the nanny state critique. So they should be. They do not understand how substantial a challenge their ideas are to the philosophical assumptions which underpin liberal democracy.