How The Red Cross Virtually Lost The Plot

As long as human beings have been creating fictional worlds, moralists have been denouncing their creations. But the news that the Red Cross might prosecute 600 million video gamers for war crimes was still pretty ground-breaking. A daily bulletin of the organisation’s annual conference two weeks ago recorded an “overall consensus and motivation” to act “against violations of international human rights law in video games”.

The conflicts simulated in games like Call of Duty, Battlefield and Metal Gear don’t rigorously comply with the Geneva Conventions. Game developers are understandably more interested in playability than legal realism.

But the bulletin had been written ambiguously. A week later, the Red Cross clarified that “serious violations of the laws of war can only be committed in real-life situations”. It just wants to “engage in a dialogue with the video gaming industry”. So we can all breathe a sigh of relief. Log back on to Xbox Live. Reinstall the iPhone games. Plug the PlayStation into the TV again. But the very fact that the Red Cross decided to investigate video games is deeply, almost incomprehensibly, absurd. It is about as sensible as objecting to slasher movies because murder is against the law.

This year has been one of the most important for human rights in decades. Yet the supreme deliberative body of the biggest human rights organisation in the world thought now would be a good time to discuss how international law is portrayed in entirely fictional settings. This suggests that some human rights activists are animated not just by an admirable defence of individual rights around the world, but by an all-encompassing moral crusade. Sure, the Red Cross does a lot of great work, but does it really think fictional violence, in games played mostly by those who will never enter a combat zone, is an urgent problem?

The liberal philosopher Richard Flathman talks about the pervasive tendency in politics towards moralism. Handwringing, showy and excessive moral judgmentalism infects democratic debate around the world. It’s driven by politicians and professional moral activists. They’re extremely confident in the rightness of their cause. They’re deeply earnest. They have a belief in an ideal world – they’re on a quest for purity. And they believe that to achieve the pure goals stipulated by their moral vision, they need to force change on the rest of society.

For those stirred by such moral fervour, even fictional depictions of the world – in video games, movies, novels – are a challenge to their vision and an opportunity for action.

It was this sort of moral activism which gave us the famous film codes in the mid-20th century. These insisted married couples could not be seen in the same bed, and no evil could be depicted as “attractive” or “alluring”.

And in our century, the same passion motivates the public health activists trying to ban cigarettes in movies, anti-consumerists denouncing product placement in television shows, and religious groups picketing Harry Potter book launches. Sometimes they want the offending material banned. Other times they just want to “work with” the transgressing filmmakers and artists. Either way, moralists believe that society should be engineered to make it more moral, more ethical, more clean. And they appear to have infiltrated the otherwise clear-headed and respected Red Cross.

There’s hardly any better example of this moral self-seriousness than the 2009 research report which sparked the Red Cross’ video games discussion. Playing by the Rules, produced by a Geneva-based advocacy group, pedantically scrutinises popular games according to strict legal criteria.

For example, in 24: The Game, a terrorist is killed after he surrenders. The report concludes that this is a violation of Article 3(1) of the Geneva Conventions, and Article 8(2)(b)(vi) of the Rome Statute of the International Criminal Court. Then one of the terrorists – sorry, “alleged” terrorists – takes a hostage. This is a clear breach of the 1979 International Convention against the Taking of Hostages. Of course, there is no cause to believe the game developers approve of terrorists taking hostages. Or that gamers will be convinced hostage-taking is an admirable thing to do.

In one edition of the Call of Duty franchise, set during the Second World War, players can use flamethrowers. Such weapons were used in that conflict, but were technically illegal according to the 1907 Hague Conventions. So, the report meticulously points out that this too is a human rights violation.

Such absurdities are apparently enough to get the world’s peak human rights watchdog into a flurry. Certainly, the Red Cross has a remit to “promote respect” for the rules of war. But the elimination of war crimes will not be furthered one bit by changing video game content. No person has ever believed that Castle Wolfenstein is a guide to just or unjust behaviour. Yet the Red Cross still solemnly claimed that “600 million gamers” may be “virtually violating” international human rights law. If this is not an attempt to stoke a moral panic, then nothing deserves that title.

Dig In, Don’t Wait. Our Slow Food Nostalgia Is Misplaced

We want food to be simple and honest, local and seasonal. We want it to be organic, “natural”, free of preservatives and homemade. This, at least, is the message from food journalists and critics, celebrity cooks, recipe books and MasterChef.

It’s a vision of food-before-the-fall, when people had a relationship with what they ate. A lovely dream, but dream it in moderation.

For the most part, when it comes to food and agriculture, industrial is good. Corporate farming is good. Even processed is good. Natural food is an illusion. We wouldn’t want it if we had it. Our ancestors had natural food. It was awful.

The history of eating is the history of shaping, manipulating, preserving and trading our food into digestible shape. Only since the development of modern agriculture, reliable transportation and refrigeration – in other words, industrial society – has food been cheap, plentiful and safe.

In the 17th century, fruit was dismissed as “unwholesome” and blamed for the plague. It was hard to grow and extremely susceptible to pests and the weather. Today, even the most organic, locally sourced, seasonal tomato is the result of hundreds of years of human manipulation.

And even the most dedicated foodie’s pantry is stuffed with items that are industrial.

Like soy sauce. Nobody makes it from scratch. One recipe warns: “If you get bored easily … this project might not be the best for you. It can take up to six months to see the finished product.”

You can just buy half a litre for $2, shipped in great quantities from China and available from a corporate supermarket. Not local, not bought at a farmers market, but indispensable.

By far the biggest benefit of industrial food has been the labour it saves. The only groups who practise “slow food” (regional cuisines cooked from scratch with local ingredients) are the extremely well-off, who have the luxury of time, and the desperately poor, who have no alternative. The rest of us can buy our way out of dreary kitchen work.

As the food historian Rachel Laudan has pointed out, Japanese women in the 20th century embraced white manufactured bread because serving that was a lot easier than getting up early to make rice. Prior to the 1950s, Mexican women spent up to five hours a day making tortillas. And when they became available, Italians eagerly bought dehydrated pasta and canned tomatoes. The potential for gender equality was immeasurably enhanced when women were freed from the kitchen.

Even much-maligned processed food is an advance on the past. Processing has not only made bread safer and healthier; the result also keeps longer and is more nutritious than much of the food eaten by our ancestors.

The nostalgia for a lost world of pure food is nostalgia for a world of nutritional poverty. Laudan describes it as “culinary Luddism”. And increasingly it has policy implications.

The recent debate over cheap milk was at its heart a debate over how we think about food. Should governments protect family farms? Or accept that in most cases the cheapest and most reliable way to feed the nation is industrial agriculture?

Yes, agribusiness is less romantic than the small farm that’s been worked by a single family for generations. But it’s economically viable. The Senate inquiry into dairy pricing heard stories of independent farmers toiling 12 hours a day, seven days a week, to earn less than they could get from unemployment benefits. That’s no pastoral ideal.

Specialisation and economies of scale are just as necessary in agriculture as any other industry. No wonder most organic food sold in Australia is grown by large agribusiness rather than small family farms.

Throughout history, and for all but the rich, the production of slow, natural food has been an arduous necessity. Making food from scratch was the marker of a life of subsistence. Eating local was a requirement. The family farm was no Arcadian idyll. It’s long been a site of hard labour.

So let’s embrace the higher standards of living offered by commercial, industrial food.

Hey Mr Garrett! Time To Get Off Our Arts And Do Nothing

If everything goes to plan, soon Australia will have its very own national cultural policy.

This is great news if you have been concerned that Australian literature, TV, music, film, theatre, painting and performance art are a bit, well, aimless. Sure, cultural products inform and reflect our views of ourselves – but so what? What’s the end game? Think of what our culture could achieve if it had a policy!

Announced recently by Peter Garrett, what the national cultural policy lacks in ambition, it more than makes up for in discussion points.

Right now it’s just a website, described pompously as a “national conversation”. But the publicly funded arts community has wanted some sort of grandiose policy for a long time. They have always assumed that “national policy” is code for “buckets of cash”. They’re probably right.

According to the Arts Minister, culture does pretty much everything – it creates jobs, attracts tourists, harnesses “understandings” (yeah, I’m not sure what that is either) and lifts our fragile economy. So in Garrett’s opinion, it should be co-ordinated by him.

But when government mates with culture, it breeds bureaucracy. Unless there is a big change in direction, a national cultural policy could easily make this worse, filtering Australia’s artistic output through yet another mesh of subsidy and red tape.

The Commonwealth Arts Council talks about culture as if it can be reduced to key performance indicators – “strategic priorities”, “aims”, “outcomes” and “outputs”. Let’s say you want a few grand for your interpretative dance version of An Inconvenient Truth. I suspect the government would quite like that idea. And once you slog through the 11-stage grant application, provide the dozens of pages of supporting material, CVs and letters of support, you’ll find out if they do. After you successfully defend your idea at an assessment panel meeting, of course.

Certainly, if we’re going to give money to artists, we might want to run a background check on who we are giving it away to. But government policy seems to be aimed at taming our wild culture, burying it in a pile of red tape, and keeping it alive with taxpayers’ money fed through a tube.

After all, it isn’t just bad luck that Australian movies are routinely commercial failures. Filmmakers have realised it’s more important to please funding bodies with depictions of the hollowness of contemporary society than it is to please audiences. (I mean, come on, not every movie has to expose the “dark undercurrents of suburbia”.)

But there is an alternative. If Peter Garrett really wants his national cultural policy to make a difference, he should adopt just one principle: Australia’s culture can look after itself.

Which culture would you consider more vibrant: one in which artists are entrepreneurs, testing their work against an audience in a competitive marketplace, or one that shepherds them into a departmental grant application process?

The entrepreneurial spirit should be as central to the art world as it is to the economy.

It’s not like the marketplace can’t produce culture. Even high culture can be popular. Nearly 40,000 people came to see Andre Rieu’s Docklands show last year. The National Gallery of Victoria puts on exhibitions all Melbourne lines up to see. And while the largest share of Arts Council funding is spent on expensive things such as orchestras, there are privately funded orchestras around the world. Profit-making culture just takes an entrepreneurial passion.

Anyway, there has never been a more futile time to try to define and direct a national culture. The very idea of an “Australian” culture seems outdated. The internet has put the globalisation of culture into hyperdrive. Most importantly, it has allowed us to choose cultural products that are important to us as individuals, not as a “nation”.

Culture comes from the meanings that individuals derive from art, dance, theatre or film, not from a departmental funding matrix that allocates money to politically favoured art forms. So let’s scrap the idea of a national cultural policy, and embrace our 21 million individual cultural policies. A vibrant culture will come from what people want, not what the Commonwealth funds.

Big Government: A Love Story

Michael Moore’s Capitalism: A Love Story “takes aim” at the capitalist system, as a few dozen supportive reviewers have mindlessly written. But that’s a tough metaphor to uphold. It’s easy to aim when you don’t care what you hit.

Moore is interested in Big-C Capitalism. So after a few stories of families having their homes foreclosed, Moore reveals his thesis.

“Capitalism is a sin”, he gets a series of priests to say darkly into the camera; it’s “obscene” and it’s “radically evil”. Capitalism is a secular “crime” and spiritually “immoral”.

Another priest reflects that he is “really in awe of (pro-capitalism) propaganda”, which is funny to hear from a minister of religion. And a bit rich: one sequence in Moore’s film describes the somewhat icky practice of firms taking out life insurance for their employees, which he tastefully illustrates with lingering shots of a grieving family, as if insurance policies cause cancer.

Moore has always been an awkwardly self-conscious working-class man. In this instalment, he is also God-fearing. And his NASCAR-chic populism is now littered with calls to “people power”, which, coming from a multimillionaire, are as authentic as the Spice Girls’ “girl power”. It’s all so laden that there’s a good chance he wants to run for office.

In a bizarrely misdirected appeal to authority, Moore quizzes the off-Broadway actor Wallace Shawn, who has “studied history and a bit of economics”, about what he reckons is the problem with capitalism. (The audience Moore hopes will see his film knows Shawn from The Princess Bride. But those who will actually see it know Shawn from My Dinner With Andre.) Shawn’s answer isn’t the point: what possible value could his view add?

But Moore’s argument is even more misdirected. He’s justifiably outraged at the bailouts and the way they were pushed through Congress. Who isn’t? He’s angry about the favour-trading relationship between Wall Street and Washington. Again, who isn’t?

But that’s not capitalism. It’s corporatism – a system with a veneer of free enterprise, in which a network of lobbyists, bureaucrats and politicians uses the political system to achieve private goals. Moore would like to add a fourth movement to this symphony – the unions. But unless you think of unions as omniscient and beneficent guardians of the public good, doing so wouldn’t change the corporatist dynamic.

So when he describes a real outrage – like the case in Pennsylvania where a corrupt judge funnelled innocent kids into a privately run juvenile detention centre – he doesn’t quite understand who the bad guy actually is: the politicians and administrators who let it happen. (After this case, two judges face charges of racketeering, fraud, money laundering, extortion, bribery, and federal tax violations. Corruption is, after all, against the law.)

And who to blame for the bailouts? The firms that ask for them, or the politicians that grant them?

For Moore, Barack Obama’s election is a spiritual catharsis, an explosion of people power, and a sudden break with the capitalist nightmare. But the outrages he spent 90 minutes detailing have, if anything, gotten worse under the Obama administration. The employment pipeline between Goldman Sachs and Treasury is even busier. And Obama has graduated from bailing out banks to bailing out car companies. For Moore, when Bush did this sort of thing, it was capitalism. When Obama does it, it’s democracy.

In Capitalism: A Love Story, Moore can’t quite get himself to the problem. If he did, he’d have to admit that the big activist government of his dreams is actually the cause of his nightmares.

Bring On The Acid Bath

Australian public debate is usually sober and routine. Policies are proposed, criticised and eventually watered down. One person calls another person a “neo-liberal” and everybody goes home at a quarter past five.

So when novelist Peter Carey claims that a technical legislative change affecting the publishing industry will encourage the growth of “a new species that can swim in acid”, it is at least an entertaining break from the normal banalities.

The Productivity Commission is investigating the removal of the ban on parallel importing, which makes it illegal to import for sale any book that has already been published in Australia. It seems that any proposal to lift this ban is like kryptonite straight to the groin of Australia’s publishing fraternity.

If the ban is lifted, Carey imagines a very bleak future: “long-term devastation” and “cultural self-suicide”; Australian book editors will be “reduced to nothing, to become marketers and publicists for Paris Hilton”. And according to Carey, treacherous – and apparently acid-resistant – global retailers will take over. They plan to rob Aussie publishers “blind”.

Also chiming in, Tim Winton was slightly less surreal but more poetic, predicting a “great bitterness” would wash through the Australian literary community.

And Matthew Reilly, whose books have sold more than 4 million copies, compared the possible influx of popular books if the ban is lifted to the introduction of McDonald’s.

Our novelists are bringing a whole new strategy to the debate over microeconomic reform: emotional blackmail. As a general rule, if a law needs a lot of exceptions to avoid being idiotic, it’s probably not a very good law. And there are a lot of exceptions to the ban on parallel importing.

To ensure Australian readers aren’t shut out of the worldwide book market altogether, if a new book hasn’t found an Australian publisher within 30 days, importers are free to bring it in. Other regulatory exceptions ensure that overseas travellers don’t get arrested for bringing in the Dan Brown novel they picked up at Heathrow, and that booksellers aren’t jailed for ordering books that are out of stock in Australia.

The hardest thing in retail is trying to figure out how much consumers are willing to pay for your product.

Australians might be willing to pay a relatively high price for books, but for the less affluent Indian market, authors and publishers might have to sell at a lower price. Clever capitalists try to segment their market as much as possible – rich people pay more, poor people pay less.

So if parallel importing is legalised, Winton, Carey and a lot of publishers are worried that bookshops will be able to import those cheaper copies.

Well, hey – cheaper books for everyone! And if authors really want to keep selling their books at different prices in different markets, they should be able to use private contracts to prevent their own retailers from undercutting them. Like all protectionist laws, the ban on parallel importing privileges producers over the consumers they are supposed to serve – novelists no more deserve to be insulated from competition and consumer demand than farmers, computer programmers or line workers.

In an era when everything is available on the internet, segmenting a market is getting harder and harder. Over time, the whole issue of parallel importing may become obsolete – call it the Amazon effect. The debate shows how successfully Australian cultural producers have created the impression that our culture is only possible with government protection.

But strong and vibrant culture doesn’t usually come from a bureaucratically orchestrated jumble of subsidies, regulations and writers’ workshops. Culture shouldn’t need a legislative umbrella to protect it.

Peter Carey may believe that parallel importing will silence Australian authors, but there’s something anachronistic and nationalist about the crusade to encourage specifically Australian voices, Australian stories and Australian images. It is peculiar that while we might believe that modern Australia is a cultural collage of backgrounds and value systems, culture warriors on both sides of politics are not able to admit that this makes the deliberate encouragement of a uniquely Australian culture a sham. Many Australian Muslims might find Islamic authors published overseas more personally enriching than Tim Winton’s descriptions of surf in Western Australia.

Unique voices will continue to find their way in a marketplace no matter how globalised that marketplace is – globalisation may spread McDonald’s outlets across the world, but it also makes far-away Peruvian cultural products easily accessible to punters in Narre Warren.

Yet Australia’s cultural legislation protects and subsidises authors with the aim of constructing some sort of universal story that can be shared by the 21 million people living within the territorial limits of Australia. Apart from being futile, this attitude imagines that Australia is a solitary island, rather than deeply integrated in cultures spanning the globe.

Culture evolves in the wild, battered and shaped by the elements, and by the pressure of competitors. It is more likely to stagnate or starve when protected in an artificial environment. The more Australian authors have to compete, the more rewarding our cultures will become.

Tinseltown Ideology Reflects Our Cultural Obsessions

It’s no surprise that when Hollywood decided to remake the 1951 sci-fi classic The Day the Earth Stood Still for modern audiences, the theme would change from nuclear war to the now much more popular fear of environmental collapse. There is a long tradition of movies with political messages.

But the strikingly different approach of each film speaks volumes about a shift in green philosophy over the last few years. It is apparently now unremarkable to believe that humanity should be sacrificed on the altar of Gaia.

The plot of both The Day the Earth Stood Still films is very simple – an alien named “Klaatu” visits Earth to teach humanity a lesson about its bad ways.

In the 1951 film, Klaatu is a sort of Christ-like figure, whose extraterrestrial intervention into human affairs brings about an age of peace. This original Klaatu is a charming alien who firmly but gently convinces mankind to abandon politics and warfare. Humanity obediently pulls back from the nuclear precipice. Peace and good times are then had by all.

In 2009, the filmmakers have changed Klaatu into a dictatorial environmentalist with a penchant for genocide. Keanu Reeves plays a Klaatu who fairly quickly decides that all humans need to be immediately eliminated for the sake of the earth. The new film is sort of like an episode of Doctor Who where the Daleks are the good guys.

Indeed, the alien civilisations of 2009 appear to be everything that the alien civilisations of 1951 were trying to stop. When the 1951 Klaatu steps out of his space ship, he immediately states that he has come to visit the earth “in peace and with good will”. By contrast, it seems that Keanu Reeves steps out of his space ship only to briefly survey the species he plans to destroy. This film has to be one of the most deeply anti-human movies in a long time.

So what does it say about our collective mental health that, when we try to imagine a “good” race of aliens, we also imagine that they would want to systematically slaughter us? If we’re lucky, the next bunch of extraterrestrial visitors will bring us the anti-depressants we so obviously need.

The extraordinary ideological change between the original The Day the Earth Stood Still and its remake shows how mainstream apocalyptic environmentalism has become. Obviously, the vast majority of those who care for the environment also think that the human race is probably worth keeping alive. But what was, just a few years ago, the harmless spluttering of Malthusian academics certain that the Earth needs to halve its population is now being repackaged approvingly as infotainment.

Movies have always both reflected and distorted our cultural obsessions. Filmmakers aren’t stupid – they want movies to sell to as wide an audience as possible, so they try to mimic as best they can the attitudes and interests of the population at large.

But at the same time, the political views of most filmmakers hardly reflect the political views of that audience. You could fit all the conservatives and libertarians in Hollywood into a small Prius, and still have enough room for their pets, or their guns – or whatever profit-loving, environment-hating, worker-oppressing things they like to carry around with them.

So every year, Hollywood produces a couple of films that are little more than vehicles for Tinseltown’s latest trendy ideology. Last year’s otherwise charming Wall-E depicted humanity not just as destructive, but as morbidly obese morons encased in hover-chairs. And the global warming disaster movie The Day After Tomorrow implied that the world was just one bonfire away from a climate implosion.

Presumably the next iteration of Godzilla will be born because of an aggregate global temperature change of 3 degrees spread over a century.

Of course, if you’re too quick to jump at the latest popular cause, it’s easy to make mistakes. Hollywood can get it spectacularly wrong. Remember the overpopulation crisis of the 1970s? A steady stream of films like Logan’s Run, Soylent Green, The Last Child and Z.P.G. (which stood for the environmental movement’s aim of “zero population growth”) tried to popularise the bizarre idea that the number of people the world had in 1978 was exactly the maximum population the world could hold.

But despite Hollywood’s best efforts at convincing us not to, we kept on breeding. At the time of writing, we have not yet had to resort to turning our dead into basic foodstuffs.

For decades, the film industry churned out films about the need for love, peace and just generally getting along. What made them stop? Bring back the original, kindly Klaatu, who wants to help humanity, not destroy it.

Have bad movies edged out good?

A review of Sleaze Artists: Cinema at the Margins of Taste, Style, and Politics.

It may not come as a surprise that Hostel: Part II, the 2007 movie which depicts nearly an hour and a half of brutal, explicit and uninterrupted torture, is part of a rich cultural lineage. Hostel II belongs to a new movement of neo-exploitation cinema, and its direct artistic ancestors date back nearly half a century.

So have ‘bad’ movies like these edged out ‘good’ movies?

Few cultural fields illustrate the blurring between ‘highbrow’ art and ‘lowbrow’ craft more than the movies. As Jeffrey Sconce points out in Sleaze Artists: Cinema at the Margins of Taste, Style and Politics, a new edited collection of essays on trash cinema, movies were never an elite art, condemned to be practised and enjoyed only by the cultured few. Instead, movies have always existed only to entertain and, as such, have always been a ‘vulgar medium’ designed to appeal to the unwashed masses.

But there is vulgar, and then there is vulgar. Sleaze Artists explores the depths of trash, exploitation and grindhouse cinema of the last forty years. Not only do the films discussed in Sleaze Artists have no artistic pretensions; they barely even have entertainment pretensions. For the cinema underground, the first priority is to titillate.

The essays in Sleaze Artists are diverse, as is typical for an academic collection, with contributions covering gay military films, boredom as a motif in the Italian underground, the quasi-documentary elements of the postwar nudie film, and an account of the production and distribution of a gothic horror movie that couldn’t find an obvious market. The authors are an assortment of professors and cultural studies academics from the United States; if they were Australians, our first reaction would be to decry a university system that redistributes taxpayers’ money to tenured lecturers just so that they can watch all eleven Friday the 13th films, but as they are Americans we can just marvel in amusement. So it is easy to write that many of the essays in Sleaze Artists are fascinating. After all, it’s not our taxes.

As an example, an interesting chapter by Kay Dickinson looks at the strange partnership between the Italian horror films of the 1970s and early 1980s and the often very beautiful soundtracks which accompanied them. Here the archetypal example is the infamous 1980 film Cannibal Holocaust. The gruesome violence of this film – the director, Ruggero Deodato, was forced to prove in an Italian court that he had not actually killed anybody during filming, and the film shows the actual slaughter of half a dozen live animals – is matched with an unexpectedly lush synthesiser jazz score by the composer Riz Ortolani. Dickinson identifies the dissociative and unnatural quality of the synthesiser itself as a conscious artistic decision by the filmmaker to unnerve the viewer – as if seeing a live turtle dissected on screen was not unnerving enough.

Tania Modleski’s chapter on the 1960s director Doris Wishman is one of the few in Sleaze Artists that shows the necessarily ambiguous relationship modern audiences have with exploitation cinema. Modleski, a Californian academic with an interest in feminist film criticism, is deeply ambivalent about her subject. Doris Wishman produced some brutal films. Her female protagonists get raped, abused and forced to murder. Every bruise is carefully, fetishistically recorded for the silent male audience.

For Modleski, the fact that a female director produced the most misogynistic films of the genre is a distinct challenge. Most of the essays in Sleaze Artists seek to normalise their films and their audiences – to make the unusual seem pedestrian. Furthermore, the cultural studies movement of the last few decades has sought not just to make marginalia a subject of legitimate academic study; it has made a conscious effort to detect ‘transgressive’ artistry and politics in the cultural underground. Movies are carefully parsed and examined to discover ironic visions worthy of the twenty-first century arts faculty in even the most forgettable cookie-cutter exploitation genres. If you pick up a copy of any schlock horror film in a bargain DVD bin, the advertising on its case will proclaim its ‘subversive’ nature. In most cases, this subversiveness is little more than wishful thinking. After all, modern audiences, trained on Quentin Tarantino-esque postmodernism, like to think everything is ironic.

But Wishman’s ‘roughie’ films are too grotesque to support such a reading; there are no self-conscious, knowing winks in her depictions of female abuse. Her protagonists may have lesbian encounters, but Modleski is unable to interpret these as in any way ‘feminist’ – instead, they are shown as just more abusive relationships down the rabbit hole of female degradation. Some of Wishman’s films simply cannot be reformed under the banner of irony and subversiveness – they are too repulsive to be squeezed into the feminist narrative, despite Wishman’s gender. (This has not, however, stopped some critics from trying.) Modleski concludes mundanely that Wishman needed the money, and simply adhered to the conventions of the genre she worked in.

The American movie critic Pauline Kael once provocatively wrote that she found Wild in the Streets, an unassuming and cheaply made film about hippy teens taking over the American government, far more interesting than Stanley Kubrick’s achingly important and serious 2001: A Space Odyssey, made in the same year. The final essay, ‘Movies: A Century of Failure’, takes this observation as its jumping-off point and tries to work out just what the appeal of underground or otherwise unsuccessful films is. How have embarrassingly bad movies – like Jennifer Lopez and Ben Affleck’s wildly unpopular 2002 romantic comedy Gigli, or 2004’s Catwoman, which reduced the Oscar winner Halle Berry to a lifeless, latex-wearing sex object – managed to ascend the cultural ladder and gain cult status? How has the 1950s director Ed Wood, whose films are barely able to sustain a timeline, let alone a plot, become a modern film legend? Whenever Wood’s Plan 9 From Outer Space is again nominated as the worst film ever made, it ensures that he will be watched and discussed for far longer than some of the middle-of-the-road directors of today. And it is likely that Showgirls, the 1995 film that was little more than an excuse to display the former teen actress Elizabeth Berkley naked, will, having now achieved cult status, be seen for decades.

Jeffrey Sconce argues that film-going, at least for those who ask for great things from the movies, is almost always an experience of disappointment – rarely do movies live up to expectations. Films are always too formulaic, characters are always too poorly drawn, and direction is always too flat to maintain our interest. And so the pleasure of unexpectedly finding an inexplicably bizarre film on late-night SBS or buried at the rental store becomes a far greater thrill than can be provided by the majority of material produced by the Hollywood machine. The frustration with ‘bad’ cinema became a search for ‘so bad it’s good’ cinema.

But, as Sconce writes, disappointment is never too far away, even if we are actively searching out movies that are cringe-inducingly sub-par. After all, how could a film with the title of Satan’s Cheerleaders (the poster for which adorns the cover of Sleaze Artists) ever live up to the expectations encouraged by its title? Ditto for Zombie Holocaust, Santa Claus Conquers the Martians, Two Thousand Maniacs! or Nude for Satan. Could Death Bed: The Bed That Eats ever be as good as it sounds?

It would be easy to conclude that the cinema described in Sleaze Artists is no longer on the cultural margins, but has now firmly entered the mainstream. Quentin Tarantino and Robert Rodriguez self-consciously replicated the underground aesthetic in Grindhouse – their double-billed feature which included the road-revenge flick Death Proof and the Texas zombie homage Planet Terror. The video store clerk, proudly schooled in the most obscure exploitation and horror films, is a nearly extinct cliché, displaced by online forums dedicated to bad cinema and the steady archiving of cinema’s miscellany onto DVD.

And our relationship with underground films has itself changed in the meantime. In the early 1990s, the American television show Mystery Science Theater 3000 specialised in uncovering some of these B-grade science fiction films and subjecting them to relentless ridicule. Nearly two decades later, our response to yesterday’s cultural leftovers is less likely to be ridicule than ironic respect. Beyond Tarantino’s high-profile, self-conscious mimicry, scores of films are released each year that resurrect the themes and techniques of the underground. The famously dated zoom shot was once an amusing anachronism, but it now appears in many contemporary productions with barely a hint of irony. Contemporary horror franchises like Saw and Hostel, which feature extended torture scenes, are nearly indistinguishable from the video nasties popular two decades ago, although more professionally produced.

The English Conservative MP Charles Walker described 2007’s Hostel II not inaccurately when he said that ‘from beginning to end, it depicts obscene, misogynistic acts of brutality against women – an hour and a half of brutality’ – a description which could just as easily apply to a Doris Wishman film. Grindhouse cinemas may have closed down and videos been replaced by DVDs and internet file-sharing, but movies whose first priority is to shock are shown in chain theatres across the globe, not in small off-Broadway adults-only theatres.

But standards have changed. Modern audiences may accept – it would be inaccurate to write ‘are comfortable with’ – special-effects depictions of sadistic violence at the cinema, but they would not accept the very real slaughter of a very real turtle, as occurs in Cannibal Holocaust. Similarly, the masochistic brutality seen in the video nasties is absent from modern homages to exploitation. Even the semi-pornographic undressing scenes which were awkwardly squeezed into the typical underground 1970s horror film have no contemporary equivalent. The moral content of mainstream exploitation in the twenty-first century and postwar underground exploitation may seem superficially similar, but there are major differences; there are new ethical and moral lines which modern filmmakers do not cross.

For these reasons, it is important to avoid the typical conservative reaction to seemingly immoral – or disconcertingly amoral – culture. It is certainly not clear that the mainstreaming of trash is a sign of cultural decay. Highbrow cultural production exists comfortably beside trash, and more often than not they share the same audiences. Furthermore, there exists no convincing argument that immorality and criminality at the movies translate into immorality and criminality in the real world. For the most part, violent crime is in decline across the western world.

Filmgoers are not that easily influenced. Individuals who watch movies invariably apply their own moral standards to what they see; the movies do not impose morality upon viewers.

Jeffrey Sconce’s final essay may be melancholic, but it is not uniformly negative about the film industry. And the dominant emotion after having read Sleaze Artists isn’t one of regret for the decline of moral standards. The underground can certainly be ugly, but it is vibrant. For every Oscar winner, there are one hundred middlebrow romantic comedies and ten Nude for Satans. If we ignore our cultural trash, we ignore a large part of our culture.

Goddamn you all to hell: The revealing politics of dystopian movies

‘There is, of course, every reason to view the next century with fear,’ wrote a New York Times film reviewer in 1976 after having watched the Charlton Heston vehicle Soylent Green.

Smug pessimism of this type is hardly unusual in political commentary. Indeed, in only the last few years, Hollywood has released V for Vendetta and Children of Men, each of which claims that the Iraq War is the beginning of a cycle of oppression that will lead to dictatorship. Over the last century, the dystopian film has reflected society’s fears of monopoly capitalism, totalitarian socialism, environmental catastrophe, technology out of control, and now, in V for Vendetta and Children of Men, theocracy. The obsessions of the left are reflected in the dystopian movie.

But dystopias are never that simple. Certainly, the dystopian movie presents filmmakers with an opportunity for futuristic pessimism. The dystopia – a fictional society that got lost on the way to utopia – differs from traditional science fiction in its emphasis on political and social systems rather than science or technology, and therefore allows filmmakers to speculate wildly on the political future. But the genre has a tendency to trip up filmmakers, and the way it does so reveals much more about Hollywood leftism than it does about the cultural fears of the broader population.

The Orwellian dystopia

George Orwell may not have invented the dystopia – John Stuart Mill coined the word in 1868, and Orwell’s vision was drawn from both Yevgeny Zamyatin’s We and Aldous Huxley’s Brave New World – but with the cultural status of Nineteen Eighty-Four, he owns it. Orwell defined the now archetypal dystopian society in response to Stalinist communism – an omnipotent, omnipresent state with single-minded control of its citizens. And the descendants of Nineteen Eighty-Four are many. The films THX 1138, Fahrenheit 451, Alphaville, Sleeper, Brazil, The Island, Equilibrium, Logan’s Run, Renaissance, The Running Man and others are derived from Orwell’s vision of a totalitarian police state.

The traditional dystopia is concerned with the spectre of the overbearing state – the typical plot trajectory involves the protagonist rejecting the dictatorial controls of the government and finding out the horrible truth. In the 2005 film The Island, Scarlett Johansson and Ewan McGregor escape their post-apocalyptic dictatorship – which is run like a totalitarian fat camp – only to realise that their world was entirely artificial.

The evolution of the dystopian genre can reveal much about the popular obsessions of filmmakers and the audience, but each time those fears fall back upon a fear of the omnipotent state. For instance, even the sub-genre of 1970s dystopian films featuring environmental collapse eventually reveals itself to be more concerned with state oppression than with the environment. If this is a reflection of our cultural fears, then the contemporary environmentalists who would like the government to involve itself more and more in our individual choices have a much tougher task ahead of them than current opinion polls suggest.

Dreaming of the apocalypse: environmental dystopias

Paul Ehrlich’s 1968 neo-Malthusian tract The Population Bomb has gone down in history as a colossally inaccurate prediction of apocalyptic overpopulation. Ehrlich’s calculations of hundreds of millions of people starving to death in the 1970s and 1980s as population outstripped resources failed to account for agricultural innovation and slowing birth rates in developed nations.

But The Population Bomb wasn’t just a simple prediction of global food shortages. To pound his message home, Ehrlich devised an array of future scenarios which could only occur as a consequence of his bleak mathematics. Ehrlich was quick to hedge his bets – ‘none of [the scenarios] will come true as stated, but they describe the kinds of disasters that will occur as mankind slips into the famine decades’ – but that didn’t stop the Stanford University professor from wild grade-school speculations only tenuously connected to his arguments. For instance, by 1979, Ehrlich foresaw that:

Only the outbreak of a particularly virulent strain of bubonic plague killing 65 per cent of the starving Egyptian population had averted a direct Soviet-American clash in the Mediterranean.

By 1980:

… general thermonuclear war ensues. Particularly devastating are the high altitude ‘flash’ devices designed to set fire to all flammable materials over huge areas.

After describing his most appealing scenario, which predicts the starvation and death of merely half a billion people, Ehrlich challenges the reader to imagine a more optimistic future, which he is pretty sure can’t be done.

Wild speculations about the future have been a staple of environmentalist doom-saying ever since, and this sort of casual jumble of non-fiction and undisciplined fantasy doesn’t speak well for environmental pop science.

Ehrlich’s book set the tone in the early 1970s for a whole new type of dystopia. Gone are the obsessions with a monolithic state apparatus and the subjugation of individuality depicted in Zamyatin’s We and Orwell’s Nineteen Eighty-Four – new visions of dystopia arose out of environmental tragedy. And the blame for humanity’s fall no longer lies with power-seeking bureaucrats and dictators, but with humanity itself. In the view of the environmental doomsayers, our own failure to keep pollution and population under control inadvertently leads us towards a dystopian future. And so when Charlton Heston curses mankind at the end of The Planet of the Apes, he speaks for Paul Ehrlich.

The Population Bomb was both serious enough to capture the imagination of the embryonic left-wing environmental movement and fanciful enough to directly inspire a boom in dystopian culture – within a year, Captain Kirk had been abducted by a race of space aliens to solve their overpopulation crisis. The book’s morally repulsive suggestions about coercing Indian males to undertake vasectomies and adding sterilants to the food supply seem ready-made for potboiler fiction. The 1971 film The Last Child depicted a society that had implemented a one-child policy and where the elderly were refused medical treatment, and the next year’s Z.P.G. showed a United Nations-esque ban on procreation for a thirty-year period. And in 1973 Charlton Heston (an actor who appears to have been purpose-built for dystopia and angry revelations) uncovered the terrible truth behind Soylent Green, a synthetic food substitute made necessary after the United States had suffered complete economic and environmental collapse.

The 1976 classic Logan’s Run sets an Aldous Huxley-style pleasure dictatorship in a Paul Ehrlich world. The free love and relaxation of the inhabitants of a domed city (a barely disguised shopping mall in Dallas) is interrupted only by the requirement that they be killed when they reach the age of thirty. When two escape, they find themselves in the ruins of a Washington DC that has, it is implied, been decimated by environmental catastrophe caused by overpopulation. Logan’s Run packages all of the major dystopian fears together – a fear of technology (the dictator is in this case what appears to be a self-aware computer), a fear of population controls in the midst of a resource crisis, a fear of the loss of individuality (the Logan character of the film’s title actually has a more typically dystopian name, ‘Logan 5’) and a fear of environmental apocalypse.

But it isn’t accurate to describe the dystopian visions of Logan’s Run, Soylent Green, Z.P.G. and The Last Child as the direct ideological spawn of Paul Ehrlich. The films sympathise with those characters who rebel against the population restrictions – the woman who defies the state by having a baby, the security man who escapes the domed city, and the cop who continues to investigate a murder in defiance of his superiors – and the resolutions inevitably show the masses awakening to the horrible truth. By the time the credits appear, Ehrlich’s suggestions that the government forcibly sterilise the population have been judged repugnant – as have the suggestions of our modern anti-natalists that we limit population growth under the banner of climate change. The moral simplicity of a Hollywood film turns out to be more ethical than the views of the Sierra Club and other environmentalists who were impressed by the perverse recommendations of The Population Bomb.

Furthermore, the environmental dystopias may initially appear to represent an entirely new cultural fear – that of ecological collapse – but they eventually reveal that they share the obsessions of ‘traditional’ dystopias: a monolithic organisation exerting super-normal controls over an unwilling or ignorant populace. Overpopulation and food shortages may be terrifying, but that terror is trumped by the fear of an omnipotent state.

Orwellian dystopias after the end of the socialist dream

While the dystopian genre has thrived over the last century, depictions of utopias have all but disappeared. The only utopias that are presented are ones that have failed. Part of this is because utopias are inherently dull. For instance, Gulliver’s Travels only loses its pace when Jonathan Swift finally tries to describe his ideal society. The race of intelligent horses called the Houyhnhnms may be perfect, but from a literary perspective they are bland and uninteresting compared to the Lilliputians. George Orwell claimed that this narrative failure of Swift’s presented a major problem for socialist thinkers – the society where everybody is happy is a boring society. And it’s hard to string a narrative around a society in which there is nothing going wrong.

But from a historical perspective, utopias rather than dystopias have been the dominant literary form. Plato and Thomas More used the utopian society to illustrate their political and economic views, which of course were little more than crude socialism. The late nineteenth century was a busy time for utopian fantasy – classics of this period included Edward Bellamy’s novel Looking Backward and William Morris’ News From Nowhere – but few authors have been able to conceive of utopias that are anything but socialist. (The science fiction writer Robert Heinlein is a notable exception.)

So almost immediately after the world had begun to experience an actual, living communist dictatorship, socialism jumped from a utopian fantasy to a dystopian nightmare. Dystopias replaced utopias just when we realised how bad lived socialism could be – the utopian genre was a casualty of the demise of the socialist dream. Indicatively, We was published in 1921 – less than half a decade after the Bolshevik coup d’etat – and was the first novel to be banned by the new Soviet censorship bureau.

As a consequence, from the ‘Khrushchev Thaw’ onwards, political radicals have been unable to come up with a fully realised alternative to the status quo. Dystopias are much easier to conceive than utopias – after all, who doesn’t oppose dictatorship and forced sterilisation? Devising a plausible non-market economy is much more challenging.

But when Zamyatin and Orwell addressed their audiences in the first half of the twentieth century, it was within the realm of possibility that the Western world could go communist. The same demise of the socialist dream that led to the dominance of the dystopia also made the Orwellian vision less poignant – there is simply no chance that the English constitutional monarchy will yield to IngSoc anytime soon.

And so to ensure that their visions remain relevant, filmmakers over the last few decades have almost always tried to shoe-horn a more modern message into their dystopias. In a particularly grating example of this, THX 1138 awkwardly shoved an anti-consumerist note into its otherwise traditional Orwellian state. A state propaganda machine first exhorts Robert Duvall’s character to work hard in a typically Stalinist manner: ‘Work hard, increase production, prevent accidents and be happy’. But it then goes on to deliver a message that the Soviet Politburo would never have wanted delivered: ‘Let us be thankful we have commerce. Buy more. Buy more now. Buy. And be happy.’ This clumsy message against consumer capitalism undermines the otherwise compelling vision of THX 1138.

Similarly awkward attempts at relevance are found in many other dystopian visions. The otherwise clear story of over-population in Logan’s Run is destabilised when the only character who is wise to the cause of humanity’s troubles tries to blame our desire for bigger and bigger houses. More recent films have also tried, uncomfortably, to ‘contemporise’ their stories – in 2005’s V for Vendetta and 2006’s Children of Men, the War in Iraq is variously described as the catalyst for the end of female fertility, a religious dictatorship in England, the suppression of classical art, total social breakdown, and concentration camps for immigrants. Their political message consists of little more than a list of bad things that could happen – a far cry from the consistent and thematically integrated dystopias of Orwell and Zamyatin. And dystopias are most emotionally powerful when they are seen as possible – nobody but the most smug leftist thinks that George Bush’s occasional affirmation of his religious faith heralds an imminent theocracy.

The 2002 Christian Bale feature Equilibrium completes the migration of the Orwellian vision from the poignant to the absurd. In this totalitarian state, human emotions are suppressed to reduce conflict and ‘Clerics’ police the city to seek out ‘Sense Offenders’. Equilibrium is a successful film from a dramatic perspective, but the improbability of its vision is merely a reflection of the dominant cultural status of Nineteen Eighty-Four – Equilibrium has now achieved cult status on the basis of its fictional martial art ‘gun-kata’ and the ferocity of its fighting sequences rather than any political message it carries.

The inefficient dystopia

By contrast, Terry Gilliam’s joyfully absurdist 1985 film Brazil is a much closer reflection of the lived experience of totalitarian socialism. In Equilibrium and THX 1138, the totalitarian state is an efficient state – public servants are passionate, dedicated and, above all, effective, and the trains run on time. In Brazil, Orwell’s state has fallen into disrepair. The omnipotent eye of the dictator is revealed to be a vast and sluggish bureaucracy. State employees watch old movies when the boss isn’t watching them – the workers are more like Charlie Chaplin than Alexey Stakhanov. Individual bureaucrats act as bullies rather than servants of the state. And in Brazil, tyranny is delivered in triplicate. Terry Gilliam may have set out to make an absurdist comedy out of the traditional dystopia, but in doing so, he made a society which accords more closely with the Soviet Union depicted in memoirs of life there, especially in the post-Stalin era. Endemic corruption and bureaucratic mismanagement are the experience of socialism, not the clean, streamlined and seamless unitary state of Orwell. Pyongyang’s incomplete and structurally unsound Ryugyong Hotel is more representative of real-world socialist architecture than Oceania’s glistening white Ministry of Love. But in traditional anti-communist dystopias, the government is never so unglamorous as to run out of money. Orwell thought totalitarian communist governments would be terrible, but he also thought they would work.

Perhaps then the most poignant dystopian film made in the last half century is Stanley Kubrick’s 1971 film A Clockwork Orange. At first glance, A Clockwork Orange is not immediately recognisable as a dystopia. The biggest indicator – a totalitarian state – is absent in Kubrick’s vision. Indeed, the plot pivots around a politician desperate to solve the crime problem before the next election. And A Clockwork Orange strides across so many themes that its political views are not immediately obvious.

But A Clockwork Orange is a startling film about a decaying socialist Britain – not the socialism of the Eastern Bloc, but mid-century democratic socialism. The depraved protagonist Alex lives in ‘Municipal Flat Block 18A, Linear North’, part of a vast housing project which is so poorly maintained that it appears to be decomposing. The democratically elected government is revealed to be on a slow decline towards totalitarianism. A writer who eventually kidnaps Alex is described as a ‘subversive’, and perhaps more indicatively, the Minister of the Interior lets slip that he needs to clear the prisons of ordinary criminals to make room for political prisoners. And it is a society that is about to break down. After all, it is quickly indicated that Alex and his droogs are not the only gang terrorising England – law and order appears to be the government’s biggest problem.

When A Clockwork Orange resonates, it does so because social breakdown and socialist decay are very real features of west European states today. The northern banlieues around Paris are just the sort of low-income ghettos that Alex inhabits. In these areas, the state is present but ineffective – delivering welfare but not order – and the inhabitants are both oppressed and independent. Indeed, when David Cameron describes England’s ‘broken society’, he raises the spectre of ultra-violent and truant adolescents.

The vision of A Clockwork Orange is, like that of all dystopias, an exaggeration, but it is far more real than the states of Logan’s Run or THX 1138. And A Clockwork Orange manages to be far more cynical than a democratic socialist like Orwell could ever be. (Both Kubrick’s politics and the politics of Anthony Burgess, who wrote the original novel, could hardly be described as standard arts industry leftiness. Indeed, Burgess went on to write his own dystopian homage to Nineteen Eighty-Four, which he titled 1985, featuring a Britain dominated by trade unions in which Islam had become the dominant political force.)

Images of dystopia are necessarily reflections of their time. When Orwell wrote his book, he addressed it to fellow-travelling socialists – his story was directed at his comrades who supported the Soviet ‘experiment’. Subsequent dystopian visions – at least those that have been more than paint-by-numbers duplications of Nineteen Eighty-Four – have variously railed against environmental destruction, corporate monopolies, genetic engineering, censorship, technological dependence, religious extremism and neo-conservative warmongering. But they always oppose the state – even in films that blame corporations for the ills of the world, it is the state that provides the power to oppress.

But when a dystopian vision fails, it fails because it misunderstands the nature of the contemporary state. Brazil and A Clockwork Orange are the more ominous dystopias because they are – perhaps surprisingly, given that one is an absurdist comedy and the other a violent critique of behavioural psychology – realistic.

Fame Game Filling Our Need For Celebrities

Australian soccer is salivating over the more than 80,000 people who turned up to Sydney’s Telstra Stadium on Wednesday night and watched David Beckham do at least one of the things he is famous for – take a free kick.

But unfortunately for soccer, it wasn’t sport that brought such high numbers through the gates. Most people who attended were only interested in checking out the man who goes home to Posh Spice. One host of a corporate box reported that he had to explain to his guests that the person running around the field wearing yellow was the umpire.

The question of why we have such a fascination with celebrities is a well-rehearsed one. Fame, after all, has no inherent properties. Being famous doesn’t immediately make someone more virtuous or remarkable.

Similarly, it does not, as Bono seems to believe, impart any great insight into development economics or the most appropriate structure for lending to African nations. If your favourite political cause has a celebrity attached, it’s probably wrong. A busy media schedule leaves little room for even the best-intentioned celebrity to study the most humane way of keeping insects off the backsides of sheep.

But those who attended the Sydney exhibition match weren’t just there because they were fascinated by David Beckham. After all, any thirst to discover as much as possible about the soccer star would surely be quelled by his series of autobiographies: David Beckham: My World, David Beckham: My Side and David Beckham: Both Feet on the Ground. It is a testament to the cynical ingenuity of English publishing houses that one person could successfully market three autobiographies, two of which were released a year apart.

Instead, the spectators were driven by a very human, but also a very peculiar, desire to see the celebrity in the flesh. For many of the spectators at the Sydney match, part of the attraction in attending the game was simply to share Beckham’s space in the world.

Certainly, on a practical level, there are some things that you can only discover by seeing somebody in real life, rather than on television. Those who have met John Howard are able to speak authoritatively about his height – the just-departed prime minister is hardly the munchkin depicted in hostile editorial cartoons.

But our desire to see and meet celebrities is more than a desire to assess their physical attributes up close. We have an almost primordial need to confirm that celebrities are, actually, real. Genuine human communication – even if it is one-sided and yelled from stadium seating – is our attempt at breaking down the barrier between celebrity and reality.

Even better when the celebrity is alerted to those attempts at communication – nothing amuses a heckler more than attention from their target.

Watching how someone carries themselves, without the distorting effect of television, somehow gives far more insight into that celebrity’s personality. Everybody thinks they are pretty good at judging character.

Celebrities, many of whom are intelligent, are acutely aware of this curiously asymmetrical relationship. And eager to convert intangible fame into tangible cash, they exploit it. Successful celebrities “up-sell” their time to wealthier fans. For sports stars, a sponsorship deal is not just a colourful logo on a shirt, it is a commitment to meet the sponsoring firm’s clients when needed.

The same is true in many fields. Many firms sponsor ballet productions so their guests can mingle with performers. Ballet companies recognise that audiences like to break down the barrier between stage and stalls.

Still, at least dancers and soccer players have a day job. Paris Hilton is the archetypal celebrity thought to be famous for having done nothing. She might not be talented, but she sure is entertaining. Her life is a train wreck; a complex human drama conveniently serialised in newspaper headlines.

And Hilton’s business model is the same as Beckham’s – when the socialite was shipped down to Australia for the Melbourne Cup a few years ago, part of her job was to entertain cup sponsors. Celebrities who are famous just for being famous are also the most cunning manipulators of this disconnection between fame and reality.

The market for celebrities seems to work fairly well – there aren’t many opportunities for profit that the famous do not exploit. Our psychological need to humanise celebrities is a demand that is efficiently supplied.

No Need For Local Films On Public Purse

Some phrases deserve scare quotes more than others. And it’s hard to find a better candidate for the sarcastic use of punctuation marks than the phrase “cultural imperialism”.

After all, the popularity of Hollywood films in Australia hardly resembles the violent military occupation of a foreign nation. If “cultural imperialism” weren’t invoked so often, its absurdity would be self-evident.

Nevertheless, many people believe that, somehow, cultural products made by Australians are superior to those made by foreigners. Australians should be watching Australian films, listening to Australian music and reading Australian books.

Cultural nationalists — who come from both the left and right of politics — assume that only after burying ourselves in cultural products produced within our geopolitical borders will we be able to develop a genuine national identity.

This is silly on a number of levels. For instance, what about the poor old states — do we suffer from a lack of films set in Victoria and featuring Victorian voices? Similarly, suburbs could also be considered distinct cultural units. If so, we have an oversupply of television programs set in St Kilda and Brunswick, and an undersupply of those set in Frankston and Dandenong.

At the same time, cultural nationalists argue that if Australia’s culture is not protected by government through regulation, subsidies and broadcast quotas, then that culture is at risk. The market cannot provide what Australians need, and the government has to step in.

But the case for cultural protectionism is weak. Often calls for subsidy are just naked special pleading. These are easy to dismiss — probably the worst thing for both taxpayers and artists would be a special category of welfare for creative industries.

Decades of government subsidies have already fostered dependency in the cultural sector. And relying on government rather than consumers for finance provides little incentive for cultural producers to tailor their work to the demands of the public.

As a result, the steady stream of below-par and ideologically heavy-handed productions funded by the Government has given Australian films a poor reputation. Recent films Candy, Little Fish and 2:37 have depicted urban and middle-class life as awash with drug use, depression and death.

For audiences, the “made in Australia” brand now often has negative connotations. And when critics deliberately go easy on local films, they compound the problem.

Taxpayer support is seen as a right by artists who believe they are serving a higher purpose — rather than satisfying the demands of their audience.

But a culture dependent on government handouts is a weak culture. Throughout history, the most vibrant intellectual and artistic cultures have been those that were decentralised, entrepreneurial and commercial.

The market economy has been the driving force behind most of what we consider to be “great” art. Markets in which consumer choice dominates provide cultural producers with far greater freedom to supply niche products to consumers with diverse tastes. And markets discipline artists to produce accessible work.

French history provides an illustration of both the negative consequences of cultural subsidies and the virtues of marketplace-driven art.

French cinema dominated the first few decades of the 20th century. Indeed, it was so popular that American filmmakers argued that the US industry required protection.

But as the French government set up lavish film bureaucracies after World War II, its industry atrophied and its films grew less popular. US films now make up 60 per cent of the market in France — in the 1930s, that figure was just 15 per cent.

The reaction against cultural imperialism has the unintended consequence of making cultural industries uncompetitive.

Writing in the latest Monthly, Robert Manne hoped that, if a Kevin Rudd government were elected, the gulf between the government and the nation’s creative artists would be bridged.

It is hard to imagine how turning more artists into tax-eaters would be good for Australian culture.