Professor Olver concludes in his piece at The Conversation that we should test the efficacy of food labels before deciding labels don’t work. Good sentiments, but they’re completely at odds with his curt dismissal of the results of the real-world test of menu labelling in New York City.
Professor Olver writes that my column in the Drum cites (and apparently misinterprets, although no grounds for that accusation are offered) one study to conclude menu labelling won’t work.
This is wrong. I cited five separate studies.
Indeed, the (single) BMJ paper from 2011 which he uses as a rebuttal was mentioned in my column too. But he appears not to have fully read even the abstract of that paper, which said that “mean calories did not change from before to after regulation”. Only three of the eleven chains showed significant reductions in calorie consumption. Yes, those chains accounted for 42% of total customers in the paper’s survey. Any fair reading would admit that the BMJ paper is ambiguous for both our cases. I acknowledged that ambiguity, but Professor Olver seems to believe the paper is a slam-dunk for menu labelling.
Perhaps it might be, if he had not completely ignored the Health Affairs and American Economic Review papers, the literature review conducted by the Heart Foundation, as well as the editorial in the American Journal of Clinical Nutrition which provided the opening sentence for my column.
It is a shame that Professor Olver did not want to seriously engage with the evidence we have regarding this policy.
The rest of his article tackles another topic entirely: the simplification of nutrition labelling on packaged food. How this rebuts the real-world evidence on the efficacy of restaurant menu labelling we now have from New York isn’t clear. Professor Olver writes that “it is clearly erroneous to argue that just because one type of label may have not had a massive impact in one instance, all labels are bound to fail”, but I can’t find where I said anything of the sort. That’s his argumentative trick, not mine.
It is important that we accurately test public health interventions. But once we have done so, we must be open-minded enough to actually look at the results of those tests.