Automating the big state will need more than computers

Robodebt – the automated Centrelink debt issuance program that was found invalid by a federal court last month – is not just an embarrassment for the government. It is the first truly twenty-first century administrative policy debacle.

Australian governments and regulators increasingly want to automate public administrative processes and regulatory compliance, taking advantage of new generations of technologies like artificial intelligence and blockchain to provide better services and controls with lower bureaucratic costs. There are good reasons for this. But our would-be reformers will need to study how robodebt went wrong if they want to get automation right.

The robodebt program (officially described as a new online compliance intervention system) was established in 2016 to automate the monitoring and enforcement of welfare fraud. Robodebt compared an individual’s historical Centrelink payments with their averaged historical income (according to tax returns held by the Australian Taxation Office). If the comparison suggested the recipient had been paid more than they were entitled to under Centrelink rules, the system automatically issued a debt notice.

That was how it was supposed to work. In practice robodebt was poorly designed, sending out notices when no debt actually existed. Around 20 per cent of debts issued were eventually waived or reduced. The fact that those who bore the brunt of these errors had limited financial resources to contest their debts compounded robodebt’s cruelty. In November, the federal court declared that debts calculated using the income-averaging approach had not been validly made, and the government has now abandoned the approach.
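
The core failure is easy to demonstrate. In the stylised Python sketch below (the threshold, the figures, and the claiming pattern are all hypothetical simplifications, not the actual Centrelink rules), income averaging manufactures a debt where none exists:

```python
# A casual worker earns $27,300 for the year, all of it in six fortnights
# of intensive work, and only claims benefits in the 20 workless fortnights.
# Threshold and figures are hypothetical simplifications.

FORTNIGHTS = 26
INCOME_FREE_AREA = 1_000    # hypothetical fortnightly earnings cut-off

income = [27_300 / 6] * 6 + [0] * 20   # actual fortnight-by-fortnight income

# What actually happened: zero income in every fortnight a claim was made.
overpaid_in_fact = any(income[f] > INCOME_FREE_AREA for f in range(6, 26))

# What robodebt did: smear the ATO's annual total evenly across the year.
averaged = sum(income) / FORTNIGHTS                 # $1,050 every fortnight
overpaid_by_average = averaged > INCOME_FREE_AREA   # flags all 26 fortnights

print(overpaid_in_fact)     # False -- no overpayment ever occurred
print(overpaid_by_average)  # True  -- a debt notice goes out anyway
```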

Automation in government has a lot of promise, and a lot of advocates. Urban planners are increasingly using AI to predict and affect transport flows. The Australian Senate is inquiring into the use of technology for regulatory compliance (‘regtech’) particularly in the finance sector. Some regulatory frameworks are so byzantine that regulated firms have to use frontier technologies just to meet bare compliance rules: Australia’s adoption of the Basel II capital accords led to major changes in IT systems. And the open banking standards being developed by CSIRO’s Data61 promise deeper technological integration between private and public sectors.

Regulatory compliance costs can be incredibly high. The Institute of Public Affairs has estimated that red tape costs the economy around 11 per cent of GDP in foregone output. The cost of public administration to the taxpayer is considerably more. Anything that lowers these costs is desirable.

But robodebt shows us how attempts to reduce the cost of administration and regulatory compliance can be harmful when done incompetently. The reason is built into the modern philosophy of government.

Economists distinguish between administrative regimes governed by discretion and those governed by rules. The prototypical example here is monetary policy. Rules-based monetary policies, where central banks are required to meet targets fixed in advance, are less flexible (as the RBA, which has consistently failed to meet its inflation target, is keenly aware) but at the same time provide a lot more certainty to the economy. And while discretionary regimes are flexible, they also vest a lot of power in unelected bureaucrats and regulators, which comes at the cost of democratic legitimacy.

Automation in government is possible when we have clear rules that can be automated. If we are going to build administrative and compliance processes into code, we need to be very specific about what those processes actually are. But since the sharp growth of the regulatory state in the 1980s, governments have relied less on rules and more on discretion. ASIC’s shrinks-in-the-boardroom approach to corporate governance is almost a parody of the discretionary style.

The program of automating public administration is therefore a massive task of converting – or at least adapting – decades of built-up discretionary systems into rules-based ones. This was where robodebt fell over. Before robodebt, individual human bureaucrats had to manually process welfare compliance, which gave them some discretion to second-guess whether debt notices should be sent. Automating the process removed that discretion.

The move from discretion to rules is, to be clear, a task very much worth doing. Discretionary administration feeds economic uncertainty, and ultimately lowers economic growth. We have a historically unique opportunity to reduce the regulatory burden and reassert democratic control over the non-democratic regulatory empires that have been building up.

Of course, public administration-by-algorithm is only as effective (or fair, or just, or efficient) as those who write the algorithm build it to be. There’s a lot of discussion at the moment in technology circles about AI bias. But biased or counterproductive administrative systems are not a new problem. Even the best-intentioned regulations can be harmful if poorly designed, or if bureaucrats decide to use discretion in their interest rather than the public interest.

Robodebt failed because of an incompetent attempt to change a discretionary system to a rules-based system, which was then compounded by political disregard for the effect of policy on welfare recipients. But robodebt is also a warning for the rest of government. The benefits of technology for public administration won’t be quickly or easily realised.

Because when we talk about public sector automation, we’re not just talking about a technical upgrade. We’re talking about an overhaul of the regulatory state itself.

Blockchain technology as economic infrastructure: Revisiting the electronic markets hypothesis

With Sinclair Davidson and Jason Potts. Published in Frontiers in Blockchain (2019)

Abstract: In the late 1980s and early 1990s the electronic markets hypothesis offered a prediction about the effect of information technology on industrial organisation, and many business writers forecast significant changes to the shape and nature of the firm. However, these changes did not come to pass. This paper provides an economic analysis of why, using the transaction cost economics framework of Ronald Coase and Oliver Williamson. Non-hierarchical corporate organisation struggled against contracting problems in the presence of possible opportunistic behaviour. Technologies of trust offer an institutional mechanism that acts on the margin of trust, suppressing opportunism. The paper concludes that blockchain technology provides an economic infrastructure for the coordination of economic activity and the possible realisation of the electronic markets hypothesis.

Available at Frontiers in Blockchain

Christian Porter’s defamation reform would be a catastrophic mistake

With Aaron M Lane

Attorney-General Christian Porter wants social media platforms like Twitter and Facebook to be legally liable for defamatory comments made by their users.

Right now, the common law can distinguish between the legal liability of active publishers of information (like newspapers and broadcasters) and the passive platform operators that allow users to publish information themselves. Courts decide where this distinction is drawn according to the unique facts of each case.

But in a speech to the National Press Club on Wednesday, the Attorney-General declared he wants to eliminate the distinction altogether: “Online platforms should be held to essentially the same standards as other publishers.”

The Attorney-General’s proposal is fundamentally confused. Removing the distinction between digital platforms and newspapers would have a devastating effect on both those platforms and our ability to communicate with each other.

The proposal is bad on its merits. But even setting that aside, the conservative government needs to understand how destructive it would be to the conservative movement online.

Let’s start with the legal principles. It makes sense that newspapers and broadcasters are liable for what they publish. They actively commission and produce the content that appears on their services. They read it, edit it, arrange and curate it. They pay for it. Newspapers and broadcasters have not only an editorial voice, but complete editorial control. Indeed, it is this close supervision of what they publish that gives them strength in the marketplace of ideas.

Social media platforms do nothing of the sort. Not only do they not commission the content that appears on our newsfeeds (let alone read, factcheck, or edit that content), they don’t typically confirm that their users are even real people – not, say, bots or foreign impersonators. They merely provide a platform for us to communicate with each other. Social media has facilitated a massive, global conversation. But it has no editorial voice.

In the United States a parallel debate is going on among Republicans about whether Section 230 of the Communications Decency Act – which explicitly prevents courts from treating ‘interactive computer services’ as publishers or speakers for the purpose of legal liability – should be abolished.

Section 230 has variously been described by scholars and commentators as “the 26 words that created the internet” or “the internet’s first amendment”. The internet law professor Jeff Kosseff writes that eliminating this provision would “turn the internet into a closed, one-way street”. Attorney-General Porter’s proposal would have the same effect.

If social media platforms have to bear legal responsibility for what their users say, they will assume editorial responsibility for it. That means editing, deleting, and blocking all content that could be even the least bit legally questionable.

Newspapers and broadcasters sometimes take calculated risks with what they print, if they believe that the information they reveal is in the public interest. But why would a technology company – a company that lacks an editorial voice or journalistic vision – be anything but hypercautious? Why wouldn’t it delete anything and everything with even the slightest risk?

And here is where the practical politics comes in. Even if the Attorney-General’s proposal were a good idea in principle, it would be particularly devastating for the conservative movement that supports his government. Indeed, it is hard to imagine a legislative proposal that would more effectively, and immediately, cut down the Australian conservative movement online.

After all, what side of politics benefits most from the political diversity and openness of the modern internet? What side of politics has relied most on the internet’s ability to bypass traditional media gateways? It is difficult to imagine the conservative political surge in recent years without social media – without Facebook, Twitter, YouTube, and all those podcast platforms.

If conservatives are concerned about social media networks “censoring” conservative content on their services now, well, making them liable for everything conservatives say would supercharge that.

And why would this policy stop at defamation laws? Why wouldn’t it also apply to liabilities around, say, Section 18C of the Racial Discrimination Act? Or our sedition laws? We are looking at a future where technology companies in California (companies that many conservatives believe are stacked with culturally left employees) could be required to second-guess how the most left-wing judges in Australia might enforce this country’s draconian anti-speech restrictions.

The Coalition government should also reflect on how some of its most recent legislative programs have backfired on conservatives. The Foreign Influence Transparency Scheme, passed in 2018 in order to tackle Chinese interference in Australian politics, is now being used to target the organiser of the Australian Conservative Political Action Conference, Andrew Cooper, and even Tony Abbott.

The Attorney-General is right that defamation law needs reform. Australia’s defamation framework is heavy-handed and disproportionately favours private reputation over the public need to discuss significant issues. But removing the courts’ ability to determine liability for defamation – and instead deputising the world’s technology companies to enforce what they imagine defamation law might require – would be a catastrophic mistake.

The Crypto-Circular Economy

With Darcy WE Allen and Jason Potts. Originally a Medium post

If we are going to realise the environmental vision of the circular economy, we need to first think of it as an entrepreneurial economy.

In PIG 05049 the artist Christien Meindertsma shows how the parts of a slaughtered pig get reused downstream. For instance, gelatine derived from the skin ends up in wine, acids from bone fat end up in paint, and pig hair ends up in fertiliser.

The farmer sells what they can to retailers and sells the rest to other businesses, who then process and resell what they can’t use to other users and businesses, who then process and resell the other parts … anyway, you get the point.

In a world of perfect information and zero transaction costs this use and reuse would be trivial. The near-infinite uses of pig parts would be immediately apparent to everyone in the economy and every part of the pig would be reallocated efficiently.

But of course we don’t live in a world of perfect information. All these reallocations have to be discovered by entrepreneurs and innovators.

PIG 05049 is a story of how resources move through the economy in surprising ways, as entrepreneurs reduce waste in the pursuit of profit.

But a circular economy makes stronger demands on us. The circular economy aspires not simply to minimise waste, but for goods to be “reused, repaired and recycled” after their first users no longer need them.

The circular economy imagines a world in which material goods are recovered, endlessly, and thus the environmental impact of the materials that we rely on for our prosperity is radically reduced.

It’s a powerful vision. But it is a hard vision to realise because transaction costs are not zero. Obviously, as goods travel through their life cycle they deteriorate. Goods get worn out, they rust, they fall apart.

But just as critical is the fact that information about the goods deteriorates as well. Product manuals get lost. Producers go out of business. Critical parts get separated. What the goods are made from is forgotten.

This information loss is a huge problem for the circular economy — it is extremely expensive to reuse goods when we have lost information about what they are made of and how they work. This information entropy makes it hard for entrepreneurs and innovators to close the loop.

[Figure: a circular economy with information entropy]

In some previous work we’ve described a hypothetical “perfect ledger” where information is infinitely accessible, immediately retrievable, completely immutable, perfectly correspondent to reality, and permanently available. The perfect ledger is a thought experiment, like an economy with perfect information or zero transaction costs, that allows us to see how our imperfect world differs from an imaginary ideal.

And in a world of perfect ledgers, the circular economy’s information loop is completely closed. There is no information entropy — we never forget, so we can always reuse.

Blockchain technology of course is not a perfect ledger. But on many of the relevant margins, it offers a drastically improved way of managing information about goods as they travel through their lifecycle.

Information can be stored on a distributed ledger in a way that is resistant not only to later amendment, but that persists when a good is passed from hand to hand, or travels across a political border, or when it is discontinued and forgotten by its designer, or when its original manufacturer goes out of business.
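
To make the mechanism concrete, here is a minimal sketch in Python of a hash-linked provenance log (every field name and event type below is hypothetical). It captures only the tamper-evidence property; an actual deployment would replicate the log across a shared ledger rather than sit on a single machine:

```python
import hashlib
import json
import time

def append_event(log, event):
    """Append an event, chaining it to the hash of the previous record."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"event": event, "prev": prev_hash, "ts": time.time()}
    # Hash the record before the hash field exists, so any later edit to
    # this record (or any earlier one) breaks every downstream link.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)

def verify(log):
    """Recompute every hash; False if any record was altered after the fact."""
    prev = "0" * 64
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if rec["prev"] != prev or rec["hash"] != digest:
            return False
        prev = rec["hash"]
    return True

log = []
append_event(log, {"type": "manufactured", "materials": ["aluminium", "ABS"]})
append_event(log, {"type": "sold", "jurisdiction": "AU"})
append_event(log, {"type": "repaired", "part_replaced": "battery"})

# The materials list survives resale, export, and the maker's demise:
print(verify(log))  # True until any record is tampered with
```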

The information about the goods we have sitting on our desks, scattered around our homes and workplaces, built into our buildings, and powering our vehicles is being unpredictably but relentlessly lost. This is the blockchain opportunity for the circular economy. Blockchains can secure information about goods more permanently and more accessibly, so that those goods can be reused more efficiently.

And in conjunction with similar technological developments that reduce search costs — that is, that allow innovators to identify underutilised goods in the economy that could be bought and repurposed — the owners of goods will have increased incentives to store and protect their property, if only to maximise the sale price.

The circular economy is often thought of as a problem for governments to bring about. But if the circular economy is to be realised, we need to rethink waste and reuse as an environmental problem caused by an information problem.

Technological advances in the way we store and trust information offer a vision of large-scale, yet still bottom-up environmental improvements, where market incentives, price signals and contracting work to close the industrial loop.

Selling Your Data without Selling Your Soul: Privacy, Property, and the Platform Economy

With Sinclair Davidson

Executive summary: Humans have always sought to defend a zone of privacy around themselves—to protect their personal information, their intimate actions and relationships, and their thoughts and ideas from the scrutiny of others. However, it is now common to hear that, thanks to digital technologies, we have little expectation of privacy over our personal information.

Meanwhile, the economic value of personal information is rapidly growing as data becomes a key input to economic activity. A major driver of this change is the rise of a new form of business organization that has come to dominate the economy: the platform. Platforms that can accumulate and store data and information are likely to make that data and information more valuable.

Given the growing economic importance of data, digital privacy has come to the fore as a major public policy issue. Yet, there is considerable confusion in public debates over the meaning of privacy and why it has become a public policy concern. A poor foundational understanding of privacy is likely to result in poor policy outcomes, including excessive regulatory costs, misallocated resources, and a failure to achieve intended goals.

This paper explores how to build a right to privacy that gives individuals more control over their personal data, and with it a choice about how much of their privacy to protect. It makes the case that privacy is an economic right that has largely not emerged in modern economies.

Regulatory attempts to improve individual control over personal information have had unintended consequences. The European Union’s General Data Protection Regulation (GDPR) is the leading example: a quasi-global attempt to institute privacy protections over personal data through regulation. As an attempt to introduce a form of ownership over personal data, it is unwieldy, complex, and unlikely to achieve its goals. The GDPR supplants the ongoing social negotiation around the appropriate ownership of personal data and presents a hurdle to future innovation.

In contrast to top-down approaches like the GDPR, the common law provides a framework for the discovery and evolution of rules around privacy. Under a common law approach, problems such as privacy are solved on a case-by-case basis, drawing on and building up a stock of precedent that has more fidelity to real-world dilemmas than do planned regulatory frameworks.

New technologies such as distributed ledger technology—blockchain—and advances in zero-knowledge proofs likewise provide an opportunity for entrepreneurs to improve privacy without top-down regulation and law.

Privacy is key to individual liberty. Individuals require control over their own private information in order to live autonomous and flourishing lives. While free individuals expose information about themselves in the course of social and economic activity, public policy should strive to ensure they do so only with their own implied or explicit consent.

The ideal public policy setting is one in which individuals have property rights over personal information and can control and monetize their own data. The common law, thanks to its case-by-case, evolutionary nature, is more likely to provide a sustainable and adaptive framework by which we can approach data privacy questions.

Published by the Competitive Enterprise Institute

Capitalism after Satoshi

With Sinclair Davidson and Jason Potts. Published in the Journal of Entrepreneurship and Public Policy (2019).

Purpose: The purpose of this paper is to explore the long-run economic structure and economic policy consequences of widespread blockchain adoption.

Design/methodology/approach: The approach uses institutional, organisational and evolutionary economic theory to predict consequences of blockchain innovation for economic structure (dehierarchicalisation) and then to further predict the effect of that structural change on the demand for economic policy.

Findings: The paper makes two key predictions. First, that blockchain adoption will cause both market disintermediation and organisational dehierarchicalisation. And second, that these structural changes will unwind some of the rationale for economic policy developed through the twentieth century that sought to control the effects of market power and organisational hierarchy.

Research limitations/implications: The core implication of the theoretical prediction made in this paper is that widespread blockchain technology adoption could reduce the need for countervailing economic policy, and therefore limit the role of government.

Originality/value: The paper takes a standard prediction made about blockchain adoption, namely disintermediation (or growth of markets), and extends it to point out that the same effect will occur to organisations. It then notes that much of the rationale for economic policy, and especially industry and regulatory policy through the twentieth century was justified in order to control economic power created by hierarchical organisations. The surprising implication, then, is that blockchain adoption weakens the rationale for such economic policy. This reveals the long-run relationship between digital technological innovation and the regulatory state.

Available at Emerald Insight. Working paper version at SSRN.

Blockchain and the Evolution of Institutional Technologies: Implications for Innovation Policy

Research Policy, Volume 49, Issue 1, February 2020. With Darcy WE Allen, Brendan Markey-Towler, Mikayla Novak, and Jason Potts

Abstract: For the past century economists have proposed a suite of theories relating to industrial dynamics, technological change and innovation. There has been an implication in these models that the institutional environment is stable. However, a new class of institutional technologies — most notably blockchain technology — lowers the cost of institutional entrepreneurship along these margins, propelling a process of institutional evolution. This presents a new type of innovation process, applicable to the formation and development of institutions for economic governance and coordination. This paper develops a replicator dynamic model of institutional innovation and proposes some implications of this innovation for innovation policy. Given the influence of public policies on transaction costs and associated institutional choices, it is indicated that policy settings conducive to the adoption and use of blockchain technology would elicit entrepreneurial experiments in institutional forms harnessing new coordinative possibilities in economic exchange. Conceptualising blockchain-related public policy as an innovation policy in its own right has significant implications for the operation and understanding of open innovation systems in a globalised context.

Available at Research Policy. Accepted version available at SSRN.

Submission on the final report of the Australian Competition and Consumer Commission’s Digital Platforms Inquiry

With Darcy Allen, Dirk Auer, Justin (Gus) Hurwitz, Aaron Lane, Geoffrey A. Manne, Julian Morris and Jason Potts

The emergence of “Big Tech” has caused some observers to claim that the world is entering a new gilded age. In the realm of competition policy, these fears have led to a flurry of reports in which it is asserted that the underenforcement of competition laws has enabled Big Tech firms to crush their rivals and cement their dominance of online markets. They then go on to call for the creation of novel presumptions that would move enforcement of competition policy further away from the effects-based analysis that has largely defined it since the mid-1970s.

Australia has been at the forefront of this competition policy rethink. In July of 2019, the Australian Competition and Consumer Commission (ACCC) concluded an almost two-year-long investigation into the effect of digital platforms on competition in media and advertising markets.

The ACCC Digital Platforms Inquiry Final Report spans a wide range of issues, from competition between platforms to their effect on traditional news outlets and consumers’ privacy. It ultimately puts forward a series of recommendations that would tilt the scale of enforcement in favor of the whims of regulators without regard to the adverse effects of such regulatory action, which may be worse than the diseases they are intended to cure.

Available in PDF here.

Understanding the Blockchain Economy: An Introduction to Institutional Cryptoeconomics

With Sinclair Davidson and Jason Potts. Edward Elgar Publishing 2019

Blockchains are the distributed ledger technology that powers Bitcoin and other cryptocurrencies. But blockchains can be used for more than the transfer of tokens – they are a significant new economic infrastructure. This book offers the first scholarly analysis of the economic nature of blockchains and the shape of the blockchain economy. By applying the institutional economics of Ronald Coase and Oliver Williamson, this book shows how blockchains are poised to reshape the nature of firms, governments, markets, and civil society.

Available now from Edward Elgar Publishing

Regulate? Innovate!

Suddenly, we live in a world of policy dilemmas around social media, digital platforms, personal data, and digital privacy. Voices on both sides of politics are loudly proclaiming we ought to regulate Facebook and Google. From the left, these calls focus on antitrust and competition law—the big platforms are too large, too dominant in their respective markets, and governments need to step in. From the right, conservatives are angry that social media services are deplatforming some popular voices and call for some sort of neutrality standard to be applied to these new ‘utilities’.

Less politically charged but nonetheless highly salient are the concerns about the collection and use of personal data. If ‘data is the new oil’—a commodity around which the global economy pivots—then Facebook and Google look disturbingly like the OPEC oil production cartel. These firms use that data to train artificial intelligence (AI) and serve advertisements to consumers with unparalleled precision. No more is it the case that 50 per cent of advertising is wasted.

These policy dilemmas have come about because the digital environment has changed, and it has changed sharply. Facebook only opened to the public in 2006 and by 2009 already had 242 million users. By the second half of 2019 it had 2.38 billion users.

Facebook is not just central to our lives—one of the primary ways so many of us communicate with family, friends and distant acquaintances—but central to our politics. The first volume of the Mueller investigation into Russian interference in the 2016 American presidential election focused on the use of sock-puppet social media accounts by malicious Russian sponsors. There’s no reason to believe these efforts influenced the election outcome but it is nonetheless remarkable that, through Facebook, Russian agents were able to fraudulently organise political protests (for both left and right causes)—sometimes with hundreds of attendees—by pretending to be Americans.

There always has been, and always will be, debate about tax rates, free trade versus protectionism, monetary policy and banking, Nanny State paternalism, or whether railways should be privatised or nationalised. The arguments have been rehearsed since the 19th century, or even earlier. But we are poorly prepared for the new topics of digital rights and data surveillance, and for the new dimensions on which we might judge our freedoms and economic rights.

Private firms are hoovering up vast quantities of data about us in exchange for providing services. With that data they can, if they like, map our lives—our relationships, activities, preferences—with a degree of exactness and sophistication we, as individuals, may not be able to do ourselves. How should we think about Facebook knowing more about our relationships than we do? Do we need to start regulating the new digital economy?

The surveillance economy

One prominent extended case for greater government control is made by Shoshana Zuboff, in her recent book The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019). For Zuboff, a professor at Harvard Business School, these new digital technologies present a new economic system, surveillance capitalism, that “claims human experience as free raw material for translation into behavioural data”.

Zuboff argues these new firms look a lot like the industrial behemoths of the 19th and 20th centuries. Google is like General Motors in its heyday, or the robber barons of the Gilded Age. Using Marxist-tinged language, she describes how firms claim the ‘behavioural surplus’ of this data to feed AI learning and predict our future desires—think Amazon or Netflix recommendation engines.

More sinisterly in Zuboff’s telling, these firms are not simply predicting our future preferences, but shaping them too: “It is no longer enough to automate information flows about us; the goal now is to automate us.” Netflix can put its own content at the top of its recommendation algorithm; Pokémon Go players tend to shop at restaurants and stores near the most valuable creatures.

Where many people spent years worrying about government surveillance in the wake of Edward Snowden’s leaks about the National Security Agency, she argues the NSA learned these techniques from Google—surveillance capitalism begets surveillance state. At least the NSA is just focused on spying. Silicon Valley wants to manipulate: “Push and pull, suggest, nudge, cajole, shame, seduce,” she writes. “Google wants to be your co-pilot for life itself.”

Harrowing stuff. But these concerns would be more compelling if Zuboff had seriously engaged with the underlying economics of the business models she purports to analyse. Her argument—structured around an unclearly specified model of ‘surveillance assets’, ‘surveillance revenues’, and ‘surveillance capital’—is a modification of the internet-era adage, “If you’re not paying for the product, you are the product”. Many services we use online are free. The platforms use data about our activities on those platforms to make predictions—for example, about goods and services we might like to consume—and sell those predictions to advertisers. As she describes it:

… we are the objects from which raw materials are extracted and expropriated for Google’s prediction factories. Predictions about our behaviour are Google’s products, and they are sold to its actual customers but not to us. We are the means to others’ ends.

 … the essence of the exploitation here is the rendering of our lives as behavioural data for the sake of others’ improved control of us.

This argument misses a crucial step: what is this control? For the most part, the product derived from our data that is sold to other firms is advertising space: banner ads on news websites, ads dropped into social media feeds, ads threaded above our email inboxes. Seeing an advertisement is not the same as being controlled by a company. The history of advertising dates back at least to Ancient Rome. We are well familiar with the experience of companies trying to sell us products. We do not have to buy if we do not like the look of the products displayed on our feeds. It’s a crudely simple point, but if we do not buy, all that money—all that deep-learning technology, all those neural networks, all that ‘surveillance’—has been wasted.

Two-sided markets

So how should we think about the economics of the big technology companies? Google and Facebook are platforms; what Nobel-winning economist Jean Tirole described as ‘two-sided’ markets. Until recently the dominant market structure was a single-sided market: think a supermarket. A supermarket has a one-directional value chain, moving goods from producers to consumers. Goods are offered to customers on a take-it-or-leave-it basis. In a two-sided market, customers are on both sides of the market. The service Google and Facebook provide is matching. They want advertisers to build relationships with users and vice-versa. Since the first scholarly work done on two-sided markets, economists have observed platforms that take three or more groups of users and match them together.

Two-sided markets are not new, of course. Newspapers have traditionally done this: match advertisers with readers. Banks match borrowers with lenders. Tirole’s first work on the subject looked specifically at credit card networks. But two-sided markets dominate the online world, and as the economy becomes more digital they are increasingly important. When we try to define what is unique about the ‘sharing economy’, we’re really just talking about two-sided markets: AirBnB matches holidaymakers with empty homes, Uber matches drivers with riders, AirTasker matches labour with odd jobs. Sometimes single and two-sided markets co-exist: Amazon’s two-sided marketplace sits alongside its more traditional online store.

The economic dynamics of two-sided markets are very different to those we are used to in the industrial economy. They are strongly characterised by network effects: the more users they have on both sides, the more valuable they are. So firms tend to price access in strange ways. Just as advertisers subsidised the cost of 20th century newspapers, Google and Facebook give us free access not because we are paying in personal data but because they are in the relationship business. Payments go in funny directions on platforms, and the more sides there are the more opaque the business model can seem.
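
The oddity of that pricing can be shown with a toy model. In the Python sketch below, every demand curve and number is hypothetical, chosen purely for illustration: advertisers’ willingness to pay rises with the size of the user base, and a grid search over both prices finds the profit-maximising pair:

```python
# A stylised two-sided platform: users pay p_user, advertisers pay p_adv,
# and advertisers' demand depends on how many users are on the other side.
# All functional forms and numbers are hypothetical.

def profit(p_user, p_adv, k=0.002):
    users = max(0.0, 1_000 * (2.0 - p_user))           # user demand
    advertisers = max(0.0, 500 * (k * users - p_adv))  # cross-side effect
    return p_user * users + p_adv * advertisers

# Search both prices, allowing the user side to be subsidised (p_user < 0).
grid = [(pu / 10, pa / 10) for pu in range(-10, 21) for pa in range(0, 41)]
best = max(grid, key=lambda prices: profit(*prices))

print(best, round(profit(*best), 2))  # -> (0.0, 2.0) 2000.0
# The profit-maximising user price is zero: because advertisers pay more
# when the user base grows, "free" access is itself the revenue strategy.
```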

An ironic implication of Zuboff’s arguments is that her neo-Marxian focus implicitly discounts what most analysts identify as the two key issues around these platforms: whether these networks are harmful for privacy and whether they are monopolistic.

First, the monopoly arguments. In Australia the ACCC has been running a digital platforms inquiry whose draft report—released in December 2018—called for using competition law against the large platforms on the basis they have started to monopolise the advertising market. There are many problems with the ACCC’s analysis. For example, it badly mangles its narrative account of how newspaper classifieds migrated online, implying Google and Facebook captured the ‘rivers of gold’. In fact, classified advertising went elsewhere (often to websites owned by the newspapers, such as Domain).

Yet the most critical failure of the ACCC is its bizarrely static perspective of an incredibly dynamic industry. True, platform markets are subject to extreme network effects—the more users, the more valuable—but this does not mean they tend towards sustainable monopolies. Far from it. There are no ‘natural’ limits to platform competition on the internet. There is unlimited space in a digital world. The only significant resource constraint is human attention, and the platform structure gives new entrants a set of strategic tools which can help jump-start competition. Using one side of the market to subsidise another side of the market helps ‘boot-strap’ network effects.

Consumer harm is the standard criterion for whether a firm is unacceptably monopolistic. Usually this means asking whether prices are higher than they would be if the market were more contested. Given the money prices for these services are often zero, that claim is hard to sustain. Nobody pays to use Google.com. At first pass the digital platform business seems to have been an extraordinary boost to consumer surplus.

But, again, platform economics can be strange. It is possible we are paying not with money but with personal data, and that the role of a competition authority is to protect our privacy as much as our wallet. This is the view of the ACCC (at least in its December 2018 draft report), and it has become an article of faith in the United States’ ‘hipster antitrust’ movement that competition regulators need to focus on more than just higher prices.

There is obviously a great deal to privacy concerns. In a recent book, The Classical Liberal Case for Privacy in a World of Surveillance and Technological Change (Palgrave Macmillan, 2018), I argued we are currently in an extended social negotiation about the value of privacy and its protection. But the privacy debate is characterised by a lot of misconceptions and confusions. Privacy policies and disclosures that were once acceptable no longer are. Expectations are changing. Mark Zuckerberg would no longer get away with the reckless anti-privacy statements he made as CEO when Facebook launched. The question is whether to wait for privacy expectations to shift—supplemented by the common law—or whether governments need to step in with bold new privacy regulation.

The experience with privacy regulation so far has not been great. The European Union’s General Data Protection Regulation is the most significant attempt to regulate privacy thus far. The GDPR, which became enforceable in 2018, requires explicit and informed consent for data collection and use, requires that users be told how long their data will be retained, and provides a “right to erasure” that allows users to require firms to delete any personal data they have collected at any time. The GDPR was written so broadly as to apply to any company that does business with any European citizen, in practice making the GDPR not just a European regulation but a global one.

Early evidence suggests a host of consequences unforeseen by the GDPR’s designers. Alec Stapp, at the International Center for Law and Economics, argues GDPR compliance costs have been “astronomical”. Microsoft put as many as 1,600 engineers on GDPR compliance, and Google says it spent “hundreds of years of human time” ensuring it follows the new rules globally. These firms have the resources to do so. One consequence of high compliance costs has been to push out new competitors: small and medium internet companies that cannot dedicate thousands of engineers to regulatory compliance. As Stapp points out, it is not at all clear this trade-off for privacy protection has been worth it: regulatory requirements for things such as data portability and the right of data access have created new avenues for accidental and malicious access to private data.

A peculiarity of the history of early-stage technologies is they tend to trade off privacy against other benefits. Communications over the telegraph were deeply insecure before the widespread use of cryptography; early telephone lines (‘party lines’) allowed neighbours to listen in. Declaring privacy dead in the digital age is not just premature, it is potentially counterproductive. We need sustained innovation and entrepreneurial energy directed at building privacy standards into technologies we now use every day.

The deplatforming question

One final and politically sensitive way these platforms might be exercising power is by using their role as mediators of public debate to favour or disfavour certain political views. This is the fear behind the deplatforming of conservatives on social media, which has seen a number of conservative and hard-right activists and personalities banned from Facebook, Instagram and Twitter. Prominent examples include the conservative conspiracist broadcaster Alex Jones, his co-panellist Paul Joseph Watson, and provocateur Milo Yiannopoulos. Social media services also have been accused of subjecting conservatives to ‘shadow bans’—adjusting their algorithms to hide specific content or users from site-wide searches.

These practices have led many conservative groups who usually oppose increases in regulation to call for government intervention. The Trump administration even launched an online tool in May 2019 for Americans to report if they suspected “political bias” had violated their freedom of speech on social media platforms.

One widely canvassed possibility is for regulators to require social media platforms to be politically neutral. This resembles the long-discredited ‘fairness doctrine’ imposed by American regulators on television and radio broadcasting until the late 1980s. The fairness doctrine prevented the rise of viewpoint-led journalism (such as Fox News) and entrenched left-leaning political views as ‘objective’ journalism. Even if this were not an obvious violation of the speech rights of private organisations, it takes some bizarre thinking to believe government bureaucrats and regulators would prioritise protecting conservatives once given the power to determine what social media networks are allowed to do.

Another proposal is to make the platforms legally liable for content posted by their users. The more the platforms exercise discretion about what is published on their networks, the argument goes, the more they look like they have quasi-editorial control, and the more courts should treat them as if they do. While this would no doubt lead to a massive surge in litigation against the platforms for content produced by users, how such an approach would protect conservative voices is unclear: fear of litigation would certainly encourage platforms to take a much heavier hand, particularly given the possibilities of litigation outside the United States, where hate speech and vilification laws are common.

The genesis of this proposal seems to come from a confusion about the distinction between social media platforms and newspapers. Newspapers solicit and edit their content. Social media platforms do not. Social media platforms come from a particular political and ideological environment—the socially liberal, quasi-libertarian and individualistic worldview of Silicon Valley and the Bay Area—and these technologies now hold the cultural high-ground. The conservative movement has focused on trying to change Washington DC when it should have been just as focused on developing new ways for people to exercise their freedom, as Silicon Valley has.

But regulation cannot be the answer. Regulation would dramatically empower bureaucrats, opening up new avenues for government intervention at the heart of the new economy (any proposed regulation of Facebook’s algorithm, for instance, would lay the foundation for regulating Amazon’s search algorithm, and then any firm that tries to customise and curate its product and service offerings), and threatening, not protecting, freedom of speech. To give government the power to regulate what ought to be published is a threat to all who publish, not just to a few companies in northern California.

Platform to protocol economy

I opened this article with a discussion of how recent a development the platform economy is: a decade old, at best. A host of new technologies and innovations are coming that challenge the platforms’ dominance and might radically change the competitive dynamic of the sector. New social media networks are opening all the time. Many of those who have been deplatformed have migrated to services such as Telegram or specially designed free speech networks such as Gab. Blockchain technology, for instance, offers an alternative to the platform model: a decentralised (no single authority, public or private, can control its use) and open (anyone can join) protocol.

Likewise, intense innovation focusing on decentralised advertising networks threatens Google’s ad sector dominance, and offers advertisers more assurance their digital dollar is used well. Other new technologies focus on regaining control over user privacy. Cutting-edge privacy technologies such as zero-knowledge proofs open massive opportunities for hiding personal information while still participating in economic exchange and social interactions. Blockchain applications are being developed to give users genuine control over data and facilitate the sort of private property rights over information the European Union’s GDPR awkwardly tries (and fails) to create.
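
The flavour of these privacy technologies can be conveyed in a few lines. Below is a toy Schnorr-style zero-knowledge proof of knowledge in Python (with deliberately tiny, insecure parameters chosen purely for readability), in which a prover convinces a verifier that she knows a secret without ever revealing it:

```python
import secrets

# Toy Schnorr-style zero-knowledge proof that the prover knows x where
# y = g^x (mod p). Parameters are deliberately tiny and insecure;
# real systems use groups of 256 bits or more.
p, q, g = 23, 11, 2   # g generates a subgroup of prime order q mod p

x = 7                 # the prover's secret
y = pow(g, x, p)      # the public value

# Commit: the prover picks a random nonce r and sends t = g^r.
r = secrets.randbelow(q)
t = pow(g, r, p)

# Challenge: the verifier replies with a random c.
c = secrets.randbelow(q)

# Respond: the prover sends s = r + c*x (mod q).
s = (r + c * x) % q

# Verify: g^s == t * y^c (mod p) holds exactly when the response was
# built from the real x, yet the transcript (t, c, s) leaks nothing
# about x itself.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted; the secret was never transmitted")
```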

The platforms know they face an uncertain and more competitive technological future. That is why Facebook is developing its own cryptocurrency—a pivot into financial services, just as Chinese social media service WeChat developed WeChat Pay. Google is investing serious resources into blockchain research, despite the technology’s long-run potential to displace its competitive advantages. The internet 10 years on will look very different—not because governments decided to regulate, but because digital entrepreneurs will have kept pushing, bringing us new products and services, revolutionising the global economy.