Repeat After Me: All Tyranny Is Evil And Wrong

Simon Winchester is a best-selling author. You’ve probably seen his popular histories, in particular, The Surgeon of Crowthorne. He received an OBE from the Queen in 2006 for services to journalism and literature. He is apparently witty, charming and intelligent. And he thinks we’ve all been a bit unfair to North Korea.

In the London Times shortly after the death of Kim Jong-il, Winchester argued that, sure, the Hermit Kingdom has its “flaws”, but life there is “not nearly as bad as is supposed”. The restaurants are few, but the medical clinics are clean. The bars sell imported beer, and the hairdressers are friendly.

But for Winchester, the great thing about North Korea is that it isn’t South Korea. The North hasn’t been “utterly submerged in neon, hip-hop and every imaginable American influence”. It is “a place uniquely representative of an ancient and rather remarkable Asian culture. And that, in a world otherwise rendered so bland, is perhaps no bad thing.”

Never mind the poverty. The tyranny. Or that Winchester visited at the tail end of a famine that killed about 10 per cent of the population – a famine caused deliberately by a hereditary dictatorship. The real issue is Western consumerism. North Korea is desperately poor, but let’s focus on how crass America is.

Winchester is not alone. Writing in Crikey, Guy Rundle argued that yes, North Korea is in a state of oppression, but (don’t you know?) neoliberalism and globalisation are really bad too.

There is a long history of left-wing intellectuals apologising for communist dictatorships. And it’s always been less about the places they’ve venerated, and more about the intellectuals themselves: their deep, unshakable dislike of capitalism, and their belief that Western liberties just result in vulgarity. In his 1991 book The Wilder Shores of Marx, the English psychiatrist Anthony Daniels wrote about returning from a visit to North Korea a few years before. He recalls describing to a colleague, a professor of medicine, the pervasive propaganda and brainwashing in Kim Il-sung’s regime. The professor responded: “But have you considered how much power Rupert Murdoch wields in this country?”

Sure, no 20th-century dictatorship has been without its defenders. Stalin’s Russia, Mao’s China, Pol Pot’s Cambodia, Castro’s Cuba, Ho Chi Minh’s Vietnam: they’ve all been praised by Western socialists looking for a model of the good society. And their “flaws”, the tyranny and terror and poverty, have been downplayed.

Charlie Chaplin’s The Great Dictator is one of the great anti-dictatorship films, but his opposition to tyranny was selective. Chaplin thought “the only people who object to Communism [are] Nazi agents”. When he heard about Stalin’s purges, Chaplin said they were “wonderful” and were needed in America.

When Mao died in 1976, Gough Whitlam couldn’t praise the dictator enough: Mao “was the inspiration to the Chinese people” and made China “secure, stable and self-confident”. Of course, he killed 45 million people doing so. One would have hoped the wistful romance of tyranny would have disappeared after the fall of the Berlin Wall, and its final, conclusive, unavoidable demonstration of the dangers of communism.

Not so. The romanticisation of communism survives. When the former Czech president and anti-communist dissident Vaclav Havel died last month, Guardian columnist Neil Clark complained Havel had never talked about communism’s good side. Communism offered welfare, education, and women’s rights. So “the question which needs to be asked”, intoned Clark, is did Havel’s “political campaigning [make] his country and the world a better place?”

Havel had been repeatedly tortured by the Czech police. He was punished for demanding democracy and human rights. But perhaps Havel’s experience of torture and imprisonment blinded him to how great life under Marxist dictatorship actually was.

Or perhaps many Western writers are so desperate to blame capitalism for the world’s problems that they’re willing to forgive, even support, non-capitalist tyranny.

Someone is always saying something nice about the worst totalitarian states. After Margaret Chan, Director-General of the World Health Organisation, visited North Korea in 2010, she told the media she’d seen few signs of malnutrition. Mind you, she only visited Pyongyang and had been escorted the whole time by North Korean officials.

Even better, Chan had seen no signs of obesity. North Koreans, the Director-General approvingly noted, do a lot of walking. People in affluent Westernised Asian countries do not. I guess the upside to material deprivation is how it encourages physical exercise. And there is, Simon Winchester might say, a real “authenticity” in that.

What’s Wrong With ‘No’?

The Government and its supporters apparently believe Tony Abbott’s big character flaw is that he opposes their policies. They make up cute little jokes about it. Noalition. Mr Negativity. There is apparently an ALP pamphlet called The Little Book of Dr No.

But who is this critique supposed to convince? Apart from people who believe the Government can do no wrong, that is.

Every government accuses those on the other side of parliament of being too negative. When they were in power, the Coalition did the exact same thing.

“It’s not the job of an opposition to be negative when it’s against the national interests to be negative,” said John Howard in 2005. Government press releases said Kim Beazley opposed Coalition legislation “for the sake of opposing”. So did Simon Crean. And Mark Latham.

While he was the workplace relations minister, Tony Abbott complained bitterly about Craig Emerson’s “feral” opposition to any workplace reform.

Presumably Emerson genuinely thought workplace reform was a bad idea. If so, opposing it seems to be exactly what a person in his position should have done. And Abbott, presumably, thinks that the carbon price is a bad idea. (At least, he does now. He’s definitely more convinced of its badness than Julia Gillard is convinced of its goodness.)

On the surface, the attack on Tony Abbott’s “relentless negativity” is a claim that the Opposition has no values, that it is only driven by political considerations.

But the attack is guilty of the same. It has nothing to do with the content of any of the Government’s policies – just an assertion that the Opposition is getting in the way… of what?

Abbott is a conservative, and one who understands what conservatism means. His book Battlelines heavily cites Edmund Burke and Roger Scruton. No fair reading would dismiss his discussion of conservatism as lightweight. Abbott has tried to shape the Liberal Party as not simply a conservative party, but a philosophically conservative party.

For a philosophical conservative, change should be incremental and hesitant. Even reluctant. In Battlelines, Abbott approvingly quotes the philosopher Michael Oakeshott’s observation that the “modern mindset” is “in love with change … fascination of what is new is felt far more keenly than the comfort of what is familiar.”

Abbott responds that the known status quo, with all its flaws, is preferable to a hypothetical ideal.

“Conservatism prefers facts to theory… To a conservative, intuition is as important as reasoning, instinct as important as intellect.”

This may seem like a deeply anti-intellectual approach, but it isn’t. Conservatism is a recognition that very smart economists working with very smart legislators can sometimes get things wrong – even though they’re very, very smart, and their intentions are very, very good.

So a conservative would absolutely oppose the majority of new government programs – especially reforms on the scale of the carbon price or the mining tax.

And the No Strategy certainly is better than the aimlessness of most oppositions. Shadow ministers under Brendan Nelson and Malcolm Turnbull would criticise the Government for being “all talk and no action”. A program is behind schedule! An election promise hasn’t been kept! But a bad promise that hasn’t been kept is a good thing, surely? Before Abbott, the Coalition seemed to be arguing that the Rudd government was not being the best Rudd government it could be.

The real problem with Abbott’s No Strategy isn’t that it is negative. The problem is that his negativity might not be consistent.

There’s no point saying no in opposition if you’re going to say yes in government. As prime minister, Abbott will be lobbied by ministers trying to fund their pet projects. Some of those ministers will be puppets of their departments, for whom prestige and expenditure are the same thing. Other ministers will just want to spend their time in government divvying out cash. Abbott knows all about the latter ministers. He was one of them.

Unfortunately, even in opposition Abbott’s negativity is selective. When he doesn’t say no – that is, when he comes up with positive policies of his own – they don’t tend to be very good. Take his expanded anti-dumping program, which requires importers to prove they’re not selling in Australia below cost. Or take his parental leave scheme and the vast increase in middle-class welfare that it represents.

Just last week Joe Hockey proposed a crackdown on foreign investment in agriculture. This is a prime candidate for no. But many in the National Party don’t like it when Chinese companies buy farms, and Abbott doesn’t want to be negative to his Coalition partners.

There’s nothing wrong with no. Abbott just needs to say it more often.

And You, Sir, Are No Libertarian

Last week, the New York Times editorialised that the Ron Paul newsletters could “leave a lasting stain” on libertarianism and the libertarian movement. Who knew the Newspaper of Record was so concerned about the reputation of small government and individual liberty?

Yes, the newsletters matter. Libertarians mustn’t pretend they don’t reflect poorly on Ron Paul himself. But, no, the actions, or even views, of a presidential candidate 20 years ago don’t discredit a philosophy of government.

Published in the late 1980s and 1990s under titles like The Ron Paul Survival Report, they purported to be written by their namesake. Some of them are foul. They’re racist and homophobic and shrill. Like “We can safely assume that 95 per cent of the black males in [Washington DC] are semi-criminal or entirely criminal”. Or “I miss the closet. Homosexuals, not to speak of the rest of society, were far better off when social pressure forced them to hide their activities”.

There’s a lot more of those quotes. They’re no better – often worse – when read in context.

Until recently libertarianism was a niche philosophy. Its tenets have been held by few, and professed by even fewer. The content of Paul’s old newsletters reflects a strategy by some libertarians in the 1990s to build a coalition with cultural conservatives – to bring those cultural conservatives to radical free market economics.

This paleolibertarian strategy was formulated by Lew Rockwell and the economist Murray Rothbard, both friends of Ron Paul. Writing in 1990, Rockwell said the “Woodstockian flavour” of contemporary libertarianism was off-putting. It was time for libertarians to abandon the “Age of Aquarius” and ground themselves in religion, the family, Western culture, and the middle class. Libertarianism and libertinism are not the same thing; paleolibertarianism was supposed to sever whatever connection there was between the two completely.

That was the theory. The practice was the reactionary racism seen in the newsletters. Many libertarian commentators believe Rockwell ghost-wrote the most incendiary material. It is fairly well established that Paul did not, and few believe he holds – or held – any of the views attributed to him by the newsletters.

Yet that isn’t quite an excuse. Paul may not have written the newsletters personally, but he signed off on the paleo strategy, gave his name to it, and handed his byline to its advocates. He didn’t write them, sure, but they went out with his blessing.

Strictly, libertarianism is only a philosophy of government. It does not offer a vision of the good or moral life. A libertarian can, in theory, hold any social belief they like. All they have to do is oppose the government forcing those beliefs on others.

Yet it is hard to square the racism of the newsletters with the understanding of human worth that underpins libertarian philosophy. It does not make sense to place individuals at the centre of your politics and then denigrate people based on their group membership. Ayn Rand famously said racism is the most primitive form of collectivism.

And libertarians have always emphasised the role of markets in promoting toleration, because toleration is a good thing.

So the paleo fusion was awkward. Re-read the quotes at the start of this column. They’re not remotely libertarian. Society was better off when all gay people were in the closet? How do you reconcile that claim with a belief in the moral autonomy of free individuals?

And as a political strategy, it didn’t work. Libertarianism isn’t now popular because Lew Rockwell decided to play on white resentment. It is popular because it is the only philosophy of government that takes individual liberty seriously across all policy areas.

The issues animating the libertarian resurgence in 2012 are “cosmopolitan”, not paleo. Keynesian economics and the response to the global financial crisis. Drug law reform. The expansion of presidential power. Just last week Barack Obama authorised the indefinite detention without trial of US citizens suspected of terrorism. Not even George W Bush did anything so draconian, yet Ron Paul is the only presidential candidate opposed.

As Ed Crane, president of the Cato Institute, wrote in the Wall Street Journal on Saturday, Paul is the sole contender who believes in constitutionally limited government. All candidates have talked big about small government; only Ron Paul means it.

And Paul’s arguments on civil liberties (on, for example, the Bradley Manning case) have meant that his political base is now college students, not survivalists.

Paul’s most questionable policy views are the ones calibrated to the latter audience. His opposition to the North American Free Trade Agreement is less about the virtues of bilateral trade deals and more about “sovereignty”. In 2006 he claimed NAFTA was a stalking horse for a North American Union that would have a common currency, international bureaucracy, and “virtually borderless travel”. This is conspiracy theory stuff. The newsletters are full of it.

Views like those detract from the seriousness of Paul’s other messages. A recent Daily Beast feature, “10 Outrageous Ron Paul Quotes”, lumped his NAFTA allegations in with his support for property rights and drug reform, as if they were all equally crazy.

It’s disappointing that Paul’s run crowded former New Mexico governor Gary Johnson out of the Republican debates. Johnson doesn’t have a paleo bone in his body. He has none of Paul’s baggage. In many ways he is also more libertarian. Johnson switched to the Libertarian Party in December.

I wrote earlier that the past actions or views of a candidate don’t discredit a political philosophy. That’s not quite true. It’s never the scandal that hurts, it’s the cover-up.

Yes, many progressives have jumped on the newsletters to attack Paul while excusing the appalling civil liberties record of the Obama administration. Such is politics.

But if those who support Ron Paul in 2012 pretend the newsletters are no big deal (or worse, try to rationalise them) it will suggest to others intrigued by the philosophy of freedom that there is a relationship between racism and libertarianism.

There isn’t. And libertarianism is about more than Ron Paul.

The Good News Is, The News Is Getting Better All The Time

Europe is on the brink of collapse, the Arab Spring has taken an illiberal turn, Chinese growth might be faltering, and North Korea’s future is questionable. But there are a lot of reasons to be optimistic about 2012.

Bad geopolitical news and grim economic indicators cloud the good news and good omens. Disasters – whether economic, social or environmental – are rarely as bad as predicted. And there is no reason to believe that our long-term trajectory of higher living standards, better wellbeing, and greater wealth will change.

Better health alone should give us reason to be optimistic. The World Health Organisation says global life expectancy has increased by two years for men and three for women in the past decade. The poorest countries have seen the greatest increase.

Fewer newborns are dying: from 29 per 1000 live births in 2000 to 24 in 2009. Maternal deaths have decreased from 3.4 per 1000 to 2.6 in the same time.

There is less tuberculosis and HIV. Malaria has taken a dive. Recession or not, things will improve – we now have seven billion people who can help develop cures and preventions.

Wars are getting rarer and less violent. Actually, it’s better than that. In his book The Better Angels of Our Nature, Steven Pinker makes a bolder claim: globally, violence itself is declining. There is less corporal and capital punishment. Less torture. Less crime targeting gender or sexuality.

In 1993, there were 55 countries classed by Freedom House as “not free”. Today there are 47.

Since 2005, 19 million people have migrated from one country to another. This is great. They’ve found new opportunities and more work.

Half of the world’s population now lives in cities. Urban environments offer stronger job prospects, more accessible services, cheaper transport, and more varied human relationships.

Humans are getting better at coping with natural disasters. Food prices are going down after recent highs. The prices of rare earth metals – used in virtually every electronic device – have fallen too.

And in 2011, people had more of what they want. Ignore the prattling about consumerism. Surely the fulfilment of human desire is a good sign.

Small medical clinics can now buy a hand-held ultrasound machine, connected to a smart phone, for about $7000. A team at Stanford University has learned to “program” the cells in living organisms to act as computers. Implications range from fighting cancer to managing fertility.

Google is testing a self-driving car. Fifty million people now use Twitter every day, forging bonds across the world. E-books have made reading cheaper and easier.

Even economic crises have their upsides. Productive people kicked out of jobs are forced to be creative. Sure, a cloud with a silver lining is still a cloud. But economic turmoil can spark innovation from those compelled by circumstance to take risks. Less foreign aid has spurred economic reform. Africa is now the world’s fastest growing continent; Ethiopia grew by 7.5 per cent last year. Many analysts claimed the developing world would suffer most from the economic crisis. They were wrong.

Humans tend to equate short-term problems with long-term ones. If there is an economic downturn, we imagine it indicates a deeper, systemic problem with society and capitalism. If food prices go up, we imagine it’s the end of cheap food.

In the early 1990s recession, the United Nations held a conference in Rio de Janeiro where it was claimed that “humanity stands at a defining moment in history”, and the world is “confronted with a perpetuation of disparities within and between nations, a worsening of poverty, hunger, ill health and illiteracy”.

Yet, as Matt Ridley writes in his recent book The Rational Optimist, the next decade “saw the sharpest decrease in poverty, hunger, ill health and illiteracy in human history”.

So, yes, the euro is stuffed and China might sneeze. Both would be bad. But there’s still every reason to be optimistic.

The Socialist Calculation Debate

The great debate in political economy isn’t between Friedrich Hayek and John Maynard Keynes, but between Friedrich Hayek and Oskar Lange.

This debate began in the 1920s and focussed on whether it was theoretically possible for a socialist country to plan its economy, as advocates of socialism suggested.

Could a socialist planner allocate scarce resources efficiently? How would they decide whether to send rubber to Tyre Factory 12 or Hose Factory 7? In a market economy, the factory that needed the rubber most would be willing to pay the highest price. But there is no natural price system in socialism – consumer prices are decided by the planner, and rubber allocated according to their diktat.

Hayek thought socialist planning was practically impossible – the information needed to choose without prices was too hard to get. His mentor, Ludwig von Mises, went further: planning was theoretically impossible – without market prices, the necessary information simply wouldn’t exist.

On the other side was a group of socialist economists, led by the Polish economist Oskar Lange. Lange argued all the information buried in prices would be accessible to socialist planners: they could carefully watch inventories to ascertain supply and demand and therefore where goods should be allocated. Lange’s views firmed in later life as he recognised the power of computers, writing just before his death in 1965 that the market process was just “a computing device of the pre-electronic age” and therefore “old-fashioned”. Computers could do everything a market does, and do it more fairly.

Such was the theoretical debate. Lange and his contemporaries lost twice: first with the fall of the Soviet Empire, and second with the left’s apparent embrace of the market. The same Australian Labor Party that damned Hayek during the global financial crisis can’t stop praising the virtues of market pricing when talking about its emissions trading scheme.

But both sides of the socialist calculation debate spoke in ideal terms. Lange and his contemporaries imagined a unified and purposeful socialist commonwealth. And, however critical they were, Hayek and Mises assumed the same. The question was whether socialism could work in theory – not whether socialism worked.

Actually existing socialism was nothing like Lange’s ideal model. In a 2004 book, The Political Economy of Stalinism, the economic historian Paul R Gregory dug through the Russian archives to see how socialist planning worked in practice.

It was chaotic. At best, resources were allocated throughout the Soviet Union “by feel and intuition”. Stalin had an enormous bureaucracy at his command but there was little delegation. Even the smallest economic decisions were pushed up to the Politburo, and to the dictator himself. Should the state buy an oil tanker? Should they sell 200 trucks to Mongolia? Should steel pipes be imported or produced domestically? It was not disinterested planners working towards efficiency, but Stalin and his senior colleagues who decided such things.

This extreme centralisation was not some Stalinist aberration, as Gregory points out. Any political order that wishes to plan for national uniformity inevitably has to concentrate power. Stalin was not overworked because he was a totalitarian leader, but because he was a socialist one.

There was a great irony in Oskar Lange’s faith in the power of computers to resolve the calculation problem. The development of a Soviet computer was itself a case study in the inability of socialism to efficiently produce and innovate. While the West had pushed ahead with the development of computers after the Second World War, the Soviet hierarchy apparently saw little need to do so until the mid-1960s.

When a native computer was finally mass-produced, it was so poorly built it was virtually worthless. And the central plan called for the production of lots of computers, not for them to be well maintained or integrated. The USSR was never able to implement the technocratic ideal. It was constitutionally unable to do so.

Central planning failed not because it was logically impossible, but because it couldn’t deal with the ignorance and self-interest that characterises all human activity.

This is still the basic problem in public policy. Governments no longer comprehensively “plan” their economies. Yet they now try to supervise them. We are often told the debate between Hayek and Keynes is the great question in political economy. But whether governments can spend their way out of recessions is just one element of the larger debate about the coordinating power of markets.

Lange believed economic calculation was just a matter of throwing enough computing power at the problem. Today’s regulators believe the same thing – extensive risk models purport to give regulators enough information to manage the private sector. The global financial crisis demonstrated that those models are elaborate fictions. Yet the response from regulators has been to double down and insist on greater powers and more complex models.

The calculation problem is endemic in highly regulated sectors like health, where it is not the price system that coordinates resources, but bureaucrats and politicians. Every government proposes to reform health but the sector will remain unreformable until this basic problem is recognised. In the meantime health will continue to be dominated by rent-seekers and rife with inefficiency.

And the calculation problem shapes the debate over tax and spending. Social democrats claim in certain circumstances governments can spend money more rationally, more efficiently than taxpayers could. But this claim relies on a belief that bureaucracies have enough knowledge to do so, and can surmount the political and commercial interests which flock to the centres of power.

Much has changed since the 1920s but the basic problem in political economy has not – ignorance. We should not be talking about Hayek versus Keynes, but Hayek versus Lange.

Convergence Review: Complete, Spectacular Failure

The Convergence Review “has assembled what could be a workable model for regulating the converged media environment,” said Greens Senator Scott Ludlam last week.

Really? The review’s interim report, released on Thursday, is a lot of things but “workable” isn’t one of them.

The Convergence Review’s purpose is to reshape communications and media law in light of the rapid technological changes over the last decade.

And if its interim report is anything to go by, the review has completely, spectacularly failed.

Just take one of the most prominent examples of its entirely unworkable suggestions. The report recommends imposing minimum Australian content requirements on all “Content Service Enterprises” that provide audio-visual services. Those Content Service Enterprises include websites.

The extent of regulatory intervention to do so would be extraordinary. The effort to distinguish Australian websites from international websites would be significant. The incentives to avoid these new regulations would be enormous.

Certainly, the Interim Report says “emerging services, start-up businesses and individuals should not be captured by unnecessary requirements and obligations”.

Yet that one-sentence caveat raises more questions than it answers. Who draws the line between an “emerging service” and an established one, and according to what principle? And why, exactly, is it that start-ups and individuals should be excluded? What theory of media regulation distinguishes between old and new companies, between companies run by one person and companies run by two, between companies doing innovative things and those which are not?

So that caveat, rather than suggesting the Convergence Review has thoughtfully engaged with the complexities of its task, reveals it has been unable to devise a coherent model of communications regulation which makes sense in an online world.

This failure is a particular disappointment because the Convergence Review was supposed to be the real game. Yes, it is just one of a bunch of reviews into media law. But only the deliberately naïve think the Independent Media Inquiry is anything but a political attack on hostile newspapers, and the National Classification Scheme Review is too constrained by its limited brief to recommend any serious reform.

The Convergence Review, by contrast, had scope and ambition. Scope: it was to look at all media from broadcast television to blogs to newspapers. Ambition: it was to take the communications revolution seriously and construct a regulatory framework which could last 20 years.

And it asked the right question. Now that you can listen to the radio on your computer, browse the internet on your TV, and read newspapers on your phone, why should the law treat each service and each technology differently? Forget whether News Limited gave the stimulus package a fair go, or whether Rob Oakeshott is being quoted accurately. This is the most important media policy question right now.

In The Drum in September I argued media convergence necessarily implies deregulation.

It is impossible to impose on the internet the same complex, technocratic, micromanaging regulations which have governed Australian broadcasters.

And even if it were possible, it would not be desirable. Any limit or imposition on what an organisation can publish or broadcast is a restriction on freedom of speech. In Crikey last week, Bernard Keane wrote the Convergence Review “represents the most far-reaching proposals for internet regulation since the Howard government banned online gambling” – much more substantial and threatening than the internet filter ever was.

It follows that if we are to have a new framework regulating all services consistently, broadcasting regulations should be lowered, not internet regulations raised.

Yet such genuine reform would require challenging the obsolete content regulations which have built up over the last half-century. The idea that “Australian voices” need to be protected and subsidised is anachronistic – since the rise of home video, neither television networks nor regulators have been able to dictate what media content we watch. More than ever our media consumption is about choice. If Australians want Australian content they will seek it out. If they don’t, they won’t.

The Convergence Review goes boldly in the other direction. Drawing on a “wide range of views”, the report concludes there “is an ongoing need for government intervention to support the production and distribution of Australian content”. This claim makes it impossible for the review to meet its brief.

Not to say they haven’t tried. One option for Content Service Enterprises, if they can’t produce Australian content themselves, is to support “a converged content production fund”. In practice, that seems to be a tax on websites to fund Australian television production companies.

Not quite the radical, principled rethink of media regulation we were hoping for from the Convergence Review.

But a sad reminder of how hard it will be for regulators and legislators to ever come to grips with the communications revolution.

How The Red Cross Virtually Lost The Plot

As long as human beings have been creating fictional worlds, moralists have been denouncing their creations. But the news that the Red Cross might prosecute 600 million video gamers for war crimes was still pretty ground-breaking. A daily bulletin of the organisation’s annual conference two weeks ago recorded an “overall consensus and motivation” to act “against violations of international human rights law in video games”.

The conflicts simulated in games like Call of Duty, Battlefield and Metal Gear don’t rigorously comply with the Geneva Conventions. Game developers are understandably more interested in playability than legal realism.

But the bulletin had been written ambiguously. A week later, the Red Cross clarified that “serious violations of the laws of war can only be committed in real-life situations”. It just wants to “engage in a dialogue with the video gaming industry”. So we can all breathe a sigh of relief. Log back on to Xbox Live. Reinstall the iPhone games. Plug the PlayStation into the TV again. But the very fact that the Red Cross decided to investigate video games is deeply, almost incomprehensibly, absurd. It is about as sensible as objecting to slasher movies because murder is against the law.

This has been one of the most important years for human rights in decades. Yet the supreme deliberative body of the biggest human rights organisation in the world thought now would be a good time to discuss how international law is portrayed in entirely fictional settings. This suggests that some human rights activists are animated not just by an admirable defence of individual rights around the world, but by an all-encompassing moral crusade. Sure, the Red Cross does a lot of great work, but does it really think fictional violence, in games played mostly by those who will never enter a combat zone, is an urgent problem?

The liberal philosopher Richard Flathman talks about the pervasive tendency in politics towards moralism. Handwringing, showy and excessive moral judgmentalism infects democratic debate around the world. It’s driven by politicians and professional moral activists. They’re extremely confident in the rightness of their cause. They’re deeply earnest. They have a belief in an ideal world – they’re on a quest for purity. And they believe that to achieve the pure goals stipulated by their moral vision, they need to force change on the rest of society.

For those stirred by such moral fervour, even fictional depictions of the world – in video games, movies, novels – are a challenge to their vision and an opportunity for action.

It was this sort of moral activism which gave us the famous film codes in the mid-20th century. These insisted married couples could not be seen in the same bed, and no evil could be depicted as “attractive” or “alluring”.

And in our century, the same passion motivates the public health activists trying to ban cigarettes in movies, anti-consumerists denouncing product placement in television shows, and religious groups picketing Harry Potter book launches. Sometimes they want the offending material banned. Other times they just want to “work with” the transgressing filmmakers and artists. Either way, moralists believe that society should be engineered to make it more moral, more ethical, more clean. And they appear to have infiltrated the otherwise clear-headed and respected Red Cross.

There’s hardly any better example of this moral self-seriousness than the 2009 research report which sparked the Red Cross’ video games discussion. Playing by the Rules, produced by a Geneva-based advocacy group, pedantically scrutinises popular games against a strict set of legal criteria.

For example, in 24: The Game, a terrorist is killed after he surrenders. The report concludes that this is a violation of Article 3(1) of the Geneva Conventions, and Article 8(2)(b)(vi) of the Rome Statute of the International Criminal Court. Then one of the terrorists – sorry, ”alleged” terrorists – takes a hostage. This is a clear breach of the 1979 International Convention against the Taking of Hostages. Of course, there is no cause to believe the game developers approve of terrorists taking hostages. Or that gamers will be convinced hostage-taking is an admirable thing to do.

In one edition of the Call of Duty franchise, set during the Second World War, players can use flamethrowers. Such weapons were used in that conflict, but were technically illegal according to the 1907 Hague Conventions. So, the report meticulously points out that this too is a human rights violation.

Such absurdities are apparently enough to get the world’s peak human rights watchdog in a flurry. Certainly, the Red Cross has a remit to “promote respect” of the rules of war. But the elimination of war crimes will not be furthered one bit by changing video game content. No person has ever believed that Castle Wolfenstein is a guide to just or unjust behaviour. Yet the Red Cross still solemnly claimed that “600 million gamers” may be “virtually violating” international human rights law. If this is not an attempt to stoke a moral panic, then nothing deserves that title.

New Technology And The Call For Censorship

The first recorded call for press censorship wasn’t for reasons of politics, or heresy, or public morality. It was to police “quality”. The gatekeeper mentality is a very old one indeed.

Printing spread rapidly after Gutenberg’s first Bible went on sale in 1454. Following the Bible and legal documents, one market priority for early printers was ancient texts. The first edition of Pliny the Elder’s Natural History produced in Italy was printed in 1469. It was riddled with errors and was in some parts incomprehensible. A second edition was printed the next year by a printer in Rome; its editor was a bishop named Giovanni Andrea Bussi.

Bussi’s edition also had problems. Lots of them. Demand for books at their now much lower prices was enormous, and Pliny was not the only book the editor was working on at the time. (Bussi blamed “technical reasons” for errors in his work – an excuse no more convincing then than it is today.)

The print industry was already highly competitive, and Bussi’s rivals played dirty. One of those rivals was Niccolò Perotti, an archbishop and author of one of the earliest guides to Latin grammar.

Perotti wrote a letter to Pope Paul II. Bussi’s corrupt version of Pliny, Perotti complained, was one of many corrupt versions of Roman and Greek books being pushed around Italy. Editors who “set themselves up as correctors and masters of antique books… pervert what is correctly written”. They do not understand what they are editing. They interfere and impose their own views on the classical masters.

Perotti’s solution was two-fold. First, there should be a common standard for editors – a code of practice, we would say. But no doubt some editors would violate the standard. So Perotti asked the Pope to set up a bureau to regulate the quality of books. This bureau would “prescribe to the printers regulations governing the printing of books” and “examine and emend” each book. “Reckless advertisement” of the editor’s views would be limited. The performance of this task “calls for intelligence, singular erudition, incredible zeal, and the highest vigilance”.

The Pope did not take up Perotti’s proposal. Censorship in the decades to come focused on banning heretical and Protestant books, and regulating obscenity.

But this early peculiarity in the history of censorship looks conspicuously like a debate we are having five and a half centuries later.

It took a few decades for Church and secular authorities to understand the revolutionary potential of mass printing. But they got there. The institutions to censor and restrict bad books were being developed half a century before Martin Luther posted his 95 Theses against Rome. The medium necessitated censorship more than the message.

Perotti’s argument is almost an exact parallel of one made today. Online media is out of control. In the print media, editorialising is crowding out description. The pressure of competition is undermining quality everywhere. New technology is bringing out the worst in the journalist and reader alike.

Niccolò Perotti welcomed the printing press yet said it was being abused and needed to be regulated. The head of the Press Council Julian Disney told the Independent Media Inquiry last month that the internet is “a cacophony” and that “serious bloggers and serious websites” should submit to Press Council regulations. The council has written that bloggers exist in a “regulatory void” and “print or post material before facts have been adequately checked”.

One academic submission to the Media Inquiry decried “blog troll chatter”. Another group of academics suggested that the Media Entertainment and Arts Alliance’s union code of ethics was vital for blogs (even though they are not bound by it) because the code’s “standard is one against which their actions can be judged”. Ken McKinnon, a former Press Council chair, argued “news-type” blogs should be dragged into the council’s jurisdiction.

The internet is to these advocates what the printing press was to Perotti – something that, unless judiciously tamed, will lead to the coarsening of public debate. According to this mindset, new technology has to be brought under old frameworks. It is too anarchic to be left to itself. Online debate is wild and uncontrolled.

“Cacophony” is an evocative word. It doesn’t mean simply too many loud voices. It means too many loud, discordant, clashing, harsh voices. Online debate is not coordinated by a body like the Press Council; the implication is that it is meaningless until regulators tame it. Julian Disney’s complaint seems like an aesthetic one on the surface, but it masks a deeper objection to the nature of democracy. When everybody can have a say, everybody will have a say.

You would think this is a good thing.

But just as Perotti’s vehement attack on Bussi was driven by rivalry, so too is the backlash against online media being driven by those who see it as a threat to the established order.

Perotti eventually took Bussi’s job. He produced his own version of Pliny’s Natural History in 1473 – which was promptly denounced by another scholar for being even more error-ridden.

And his proposal was ridiculous – Perotti obviously did not foresee the explosion of book production in the subsequent decades, let alone centuries. Obviously the Church had no moral issue with censorship. But even if the papacy had wanted to enforce quality in the press, how could it do so?

We will remember complaints about the “cacophony” of the internet as just as foolish.

Every new media technology is met with earnest concern that it undermines standards or is out of control.

The ‘Right’ Morally Culpable For Breivik’s Actions, Really?

Serial killers and terrorists often claim to be making political statements through violence. But we don’t immediately have to take their word for it.

Last week Norwegian psychiatrists declared that Anders Behring Breivik, who killed 77 people in Oslo and on the island of Utøya in July, is insane.

Breivik disagrees. Through lawyers he told a Norwegian newspaper that the psychiatrists “do not have enough knowledge of political ideologies”.

The psychiatrists’ 243-page report will be reviewed by the Norwegian Board of Forensic Medicine – the assessment may then be changed – and then presented to the court, which may not accept it anyway.

Perhaps Breivik is clinically insane, perhaps he is not.

But a surprising amount seems to rest on the diagnosis.

On Utøya: Anders Breivik, Right Terror, Racism And Europe was launched by Lee Rhiannon in October. Edited by Elizabeth Humphrys, Guy Rundle and Tad Tietze, the book is an unapologetic attempt to make “the Right” morally culpable for Anders Breivik’s actions.

They argue “the significance of Utøya has been demoted, obscured and ignored” by “hard right commentators”. Calling Breivik insane is a furphy used to downplay his political significance (Tietze also argued this on The Drum last week). Breivik executed terror “in the name of the West, against those too ‘tolerant’ of Islam”. The Utøya massacre was “an unambiguous attack on the Left” and now “[t]he task for the Left is … to ruthlessly expose the true nature of the Right and its authoritarian project”.

If the shape of this argument seems familiar, no wonder: it is an almost exact inversion of that made by some conservatives in response to terror attacks carried out by Muslims.

The conservative thesis is that terror conducted by Muslims reflects something intrinsically violent in Islam itself. The thesis of Humphrys, Rundle, Tietze, and their contributors is terror conducted by someone who cites John Howard and claims to be of “the Right” reflects the dark heart of mainstream conservatism.

It is no more convincing when the protagonists have been reversed.

Mainstream Muslims exist in the same “general ideological framework” as Osama bin Laden, insofar as they share a religion. Yet Muslims who condemn violence are in no way responsible for violence perpetrated by others. It is obscene to suggest otherwise. So surely conservatives, who loudly condemned Breivik, are not responsible for his actions either.

One could draw other parallels which would be equally damning and equally hollow. All supporters of the carbon price have some moral relationship to eco-terrorism. Stalin’s Great Terror means mainstream social democrats need to have a good hard think about themselves. Scientists are at all times one step away from fascist eugenicists. This makes good polemic, and it’s idiotic.

There is an enormous moral leap between believing multiculturalism is a bad policy and systematically slaughtering 77 members of the Norwegian Labour Party, some as young as 14 years old. To suggest they are on the same continuum is to obscure how anybody could make that leap.

And to suggest so in order to make a domestic political point (Andrew Bolt is not mentioned once in Breivik’s manifesto, but is mentioned 21 times in On Utøya) is opportunistic and petty.

The authors argue Anders Breivik is a leading indicator of the rise of a violent far right in Europe: the massacre “marked the transition of a section of the current European far Right to lethal violence against political enemies, characteristic of the fascist era.”

If that were true, Breivik’s actions would take on a greater significance, On Utøya’s cheap political digs aside.

But the data on politically motivated violence does not bear this claim out.

The latest report of the European Police Office on domestic terror within EU member states documents 249 separate terror attacks in 2010. Of those, three attacks were conducted by Islamist organisations. The vast bulk were separatist (160 attacks). There were no “right-wing” terrorist attacks. But there were 45 “left-wing and anarchist” attacks. The Europol report cites the “increased violence” and “increased transnational coordination between terrorist and extremist left-wing and anarchist groups”.

If we are simply looking for trends, the data suggests we should watch our left, not our right.

In fact, Europol concluded right-wing terrorism was “on the wane”.

Obviously, that assessment was tragically inaccurate. Europol’s analysis may well be very different next year – that is, if they determine the Norway massacre was not an isolated incident.

But, while we wait, the authors of On Utøya do not offer much evidence Breivik is part of a newly violent movement, rather than a shocking outlier. Right-wing terrorism deserves study, certainly. Guy Rundle’s contribution on the history of right-wing terror confidently reaches back to Julius Caesar’s Gallic campaigns, but stops in Italy in 1980.

Commentators are sickeningly eager to pin extremist violence on their ideological opponents.

The attempts to characterise Jared Loughner (the definitely mad person who tried to kill a Democratic congresswoman earlier this year) as a child of the Tea Party are just the most farcical illustration. There was, and still is, no reason to believe Loughner had strong political views.

But the problem with On Utøya is deeper than that.

One of the fundamental mistakes in American strategy in the War on Terror has been feeding the egos of the terrorists. Trials by military commission of terrorists confirm their self-image as soldiers of God, where trials in civilian courts would classify them more accurately and mundanely as criminals.

On Utøya does something similar, but does it deliberately. Breivik fantasised that he acted and spoke on behalf of critics of multiculturalism. Those critics have uniformly rejected him. Yet On Utøya seeks, bizarrely, to legitimise Breivik – and to claim violence is a logical extension of political debate. (There is a striking parallel with the Marxist philosopher Slavoj Žižek’s argument that terror is a justifiable weapon against liberal democracy.)

The contributors to On Utøya say Anders Breivik’s actions have been depoliticised. They seek to “repoliticise” them.

But by opportunistically trying to get conservatives to own the Norwegian massacre, they break down the moral barriers between democratic debate and evil.

Phoney Food Fears Ignore Nimble Market Solutions

Nothing brings out the hyperbole like “food security”. Paul R. Ehrlich – of The Population Bomb fame – appeared on ABC radio in October to declare that “civilisation is going to collapse” because we are farming land our ancestors could not, and we are no longer drinking our water “right out of the rivers”.

The fear of the moment is that population growth might outstrip food supply. The United Nations says the planet’s population reached 7 billion in November. And the past few years have seen a surprising uptick in food prices. The 20th century saw a decline in the price of food basics, but we’ve had price spikes in 2008 and 2011.

This new food crisis has something for everyone. Tim Flannery’s Climate Change Commission blames climate change. Population panickers blame too many people. Oxfam’s latest campaign attributes higher food prices to “speculation”, following the “when in doubt, blame Gordon Gekko” rule.

Two hundred years ago, Thomas Malthus argued population grows at a faster rate than food production. Malthus was wrong then. And his followers are wrong now.

Certainly, high food prices are bad, particularly for those on subsistence income. But our data here is extremely patchy.

Those headline figures trotted out by activists about the millions of people going to bed hungry are so ad hoc as to be quite meaningless.

There is no reason to believe we’re about to enter an era of global hunger. Markets balance themselves. High prices attract new producers into the market, seeking the profits on offer. Those prices also make marginal land more viable. The result? Production goes up, prices go down.

Between its June and November food market reports this year, the UN Food and Agriculture Organization revised its production forecasts significantly upward. Wheat prices have plummeted. Analysts now talk of a wheat glut. We can thank Oxfam’s hated “speculators” for that. Of course, in 2004, before the price spikes, the UN was fretting that food prices were too low and farmers weren’t making money.

On climate change, too, the future is far more complex than the doomsayers would have us believe. The Intergovernmental Panel on Climate Change itself says increasing carbon dioxide levels can have a positive effect on agricultural productivity. The 2007 report concluded up to 3 degrees of warming will increase crop yields.

Certainly, above 3 degrees of warming yields could decline. But if we factor in inevitable but unpredictable advances in agricultural technologies, then the outlook for food under climate change is good.

If temperatures and carbon dioxide have been rising throughout the 20th century, as the IPCC’s report emphatically stated, then so too have agricultural efficiency and crop yields. And quietly, away from the terrible prophecies we read in the press, agricultural innovation is happening.

The Borlaug Global Rust Initiative announced in June that scientists were close to developing “super varieties” of wheat which would boost crop yields by 15 per cent.

A landmark study by the American National Research Council found last year that farmers who adopted genetically modified crops increased their productivity. We’ve been manipulating plants since the dawn of agriculture. Genetic modification is just the most recent technique.

The real threat to the future of food isn’t population or climate change or stock traders. It’s ideology. Greenpeace claims to be worried about food production. But they are unrelentingly hostile to GM crops. Greenpeace activists destroyed a CSIRO crop of experimental GM wheat this year.

No wonder Greenpeace thinks food is going to be a problem in the future. They’re trying to stop the technological solutions designed to fix it. We’ll need scientific progress to feed 7 billion people.

Resistance to that progress is the biggest menace to future food security. And what about once-fashionable green policies such as biofuels, which convert crops like corn or sugar cane into fuel to replace petrol? Al Gore admits biofuels are a catastrophe. Americans are now burning one-sixth of the world’s food in their cars.

Yet short-term price instability and spikes are only a problem if you are poor. In the Third World, food insecurity is a symptom of economic underdevelopment. In the First World, the food problem is not scarcity but abundance.

It’s perhaps understandable ideologues are using the recent food price spikes to push their agendas – against globalisation, against population growth, against consumer capitalism. Yet it’s truly amazing that 177 years after Malthus died, we’re still falling for the old food scarcity myth.