Why a US crypto crackdown threatens all digital commerce

Australian Financial Review, 10 August 2022

The US government’s action against the blockchain privacy protocol Tornado Cash is an epoch-defining moment, not only for cryptocurrency but for the digital economy.

On Tuesday, the US Treasury Department placed sanctions on Tornado Cash, accusing it of facilitating the laundering of cryptocurrency worth $US7 billion ($10.06 billion) since 2019. Some $455 million of that is connected to a North Korean state-sponsored hacking group.

Even before I explain what Tornado Cash does, let’s make it clear: this is an extraordinary move by the US government. Sanctions of this kind are usually put on people – dictators, drug lords, terrorists and the like – or specific things owned by those people. (The US Treasury also sanctioned a number of individual cryptocurrency accounts, in just the same way as they do with bank accounts.)

But Tornado Cash isn’t a person. It is a piece of open-source software. The US government is sanctioning a tool, an algorithm, and penalising anyone who uses it, regardless of what they are using it for.

Tornado Cash is a privacy application built on top of the ethereum blockchain. It is useful because ethereum transactions are public and transparent; any observer can trace funds through the network. Blockchain explorer websites such as Etherscan make this possible for amateur sleuths, and big “chain analysis” firms that work with law enforcement can link users and transactions with ease.

Tornado Cash severs these links. Users can send their cryptocurrency tokens to Tornado Cash, where they are mixed with the tokens of other Tornado Cash users and hidden behind a state-of-the-art encryption technique called “zero knowledge proofs”. The user can then withdraw their funds to a clean ethereum account that cannot be traced to their original account.
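
For the technically curious, the bookkeeping behind a mixer like this can be sketched in a few lines of code. The sketch below is a deliberate simplification, not Tornado Cash’s actual contracts: deposits publish only a hash “commitment”, and withdrawals publish a “nullifier” that prevents double-spending. In this naive version the withdrawal reveals the secrets and so still links back to the deposit; Tornado Cash’s innovation is to replace that reveal with a zero knowledge proof, so the pool can verify that a valid deposit exists without learning which one.

```python
import hashlib
import secrets

def h(*parts: bytes) -> str:
    return hashlib.sha256(b"|".join(parts)).hexdigest()

class ToyMixer:
    """Simplified commitment/nullifier pool (illustration only)."""
    def __init__(self):
        self.commitments = set()        # published at deposit time
        self.spent_nullifiers = set()   # published at withdrawal time

    def deposit(self):
        # The user generates two secrets and publishes only their hash.
        secret, nullifier = secrets.token_bytes(32), secrets.token_bytes(32)
        self.commitments.add(h(secret, nullifier))
        return secret, nullifier  # the private "note" kept by the depositor

    def withdraw(self, secret, nullifier):
        # Revealing the preimage proves a prior deposit; the nullifier
        # prevents withdrawing twice. (Real Tornado Cash proves all of this
        # inside a zk-SNARK, so the specific deposit is never revealed.)
        if h(secret, nullifier) not in self.commitments:
            raise ValueError("no matching deposit")
        if h(nullifier) in self.spent_nullifiers:
            raise ValueError("already withdrawn")
        self.spent_nullifiers.add(h(nullifier))
        return "funds released to a fresh address"

mixer = ToyMixer()
note = mixer.deposit()
print(mixer.withdraw(*note))
```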

Obviously, as the US government argues, there are bad reasons that people might want to use such a service. But there are also very good reasons why cryptocurrency users might want to protect their financial privacy – commercial reasons, political reasons, personal security, or even medical reasons. One mundane reason that investment firms used Tornado Cash was to prevent observers from copying their trades. A more serious reason is personal security. Wealthy cryptocurrency users need to be able to obscure their token holdings from hackers and extortionists.

Tornado Cash is a tool that can make these otherwise transparent blockchains more secure and more usable. No permission has to be sought from anyone to use Tornado Cash. The Treasury Department has accused Tornado Cash of “laundering” more than $US7 billion, but that appears to be the total value of funds that have passed through the service at all, not the funds connected to unlawful activity. There is no reason to believe that the Tornado Cash developers or community solicited the business of money launderers or North Korean hackers.

Now American citizens are banned from interacting with this open-source software at all. It is a clear statement from the world’s biggest economy that online privacy tools – not just specific users of those tools, but the tools themselves – are the targets of the state.

We’ve been here before. Cryptography was once a state monopoly, the exclusive domain of spies, diplomats and code breakers. Governments were alarmed when academics and computer scientists started building cryptography for public use. Martin Hellman, one of those who invented public key cryptography in the 1970s (along with Whitfield Diffie and Ralph Merkle), was warned by friends in the intelligence community his life was in danger as a result of his invention. In the so-called “crypto wars” of the 1990s, the US government tried to enforce export controls on cryptographic algorithms.

One of the arguments made during those political contests was that code was speech; as software is just text and lines of code, it should receive the same constitutional protection as any other speech.

GitHub is a global repository for open-source software owned by Microsoft. Almost immediately after the Treasury sanctions were introduced this week, GitHub closed the accounts of Tornado Cash developers. Not only did this remove the project’s source code from the internet; it also signalled that GitHub and Microsoft were implicitly abandoning the long-fought principle that code should be protected as a form of free expression.

An underappreciated fact about the crypto wars is that if the US government had been able to successfully restrict or suppress the use of high-quality encryption, then the subsequent two decades of global digital commerce could not have occurred. Internet services simply would not have been secure enough. People such as Hellman, Diffie and Merkle are now celebrated for making online shopping possible.

We cannot have secure commerce without the ability to hide information with cryptography. By treating privacy tools as if they are prohibited weapons, the US Treasury is threatening the next generation of commercial and financial digital innovation.

The COVIDSafe app was just one contact tracing option. These alternatives guarantee more privacy

With Kelsie Nabben

Since its release on Sunday, experts and members of the public alike have raised privacy concerns with the federal government’s COVIDSafe mobile app.

The contact tracing app aims to stop COVID-19’s spread by “tracing” interactions between users via Bluetooth, and alerting those who may have been in proximity with a confirmed case.

According to a recent poll commissioned by The Guardian, 57% of respondents said they were “concerned about the security of personal information collected” through COVIDSafe.

In its coronavirus response, the government has a golden opportunity to build public trust. There are other ways to build a digital contact tracing system, some of which would arguably raise fewer doubts about data security than the app.

All eyes on encryption

Incorporating advanced cryptography into COVIDSafe could have given Australian citizens a mathematical guarantee of their privacy, rather than a legal one.

A team at Canada’s McGill University is working on a solution that uses “mix networks” to send cryptographically “hashed” contact tracing location data through multiple, decentralised servers. This process hides the location and time stamps of users, sharing only necessary data.

This would let the government alert those who have been near a diagnosed person, without revealing other identifiers that could be used to trace back to them.
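
As a rough illustration of the mixing idea (a toy sketch under our own assumptions, not the McGill team’s actual design), a message can be wrapped in one encryption layer per mix server, so that no single server ever sees both where the data came from and what it contains. The example below uses the Python cryptography library’s Fernet cipher purely for demonstration.

```python
# Toy "mix network" hop: each server peels exactly one encryption layer.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

# Three independent mix servers, each holding its own key.
server_keys = [Fernet.generate_key() for _ in range(3)]

def wrap(payload: bytes, keys) -> bytes:
    # Encrypt for the last hop first, so the first hop's layer is outermost.
    for key in reversed(keys):
        payload = Fernet(key).encrypt(payload)
    return payload

def route(message: bytes, keys) -> bytes:
    # Each server in turn removes one layer; only the final server
    # sees the payload, and only the first server sees the sender.
    for key in keys:
        message = Fernet(key).decrypt(message)
    return message

onion = wrap(b"hashed contact event: a1b2c3", server_keys)
print(route(onion, server_keys))  # b'hashed contact event: a1b2c3'
```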

It’s currently unclear what encryption standards COVIDSafe is using, as the app’s source code has not been publicly released, and the government has been widely criticised for this. Once the code is available, researchers will be able to review and assess how safe users’ data is.

COVIDSafe is based on Singapore’s TraceTogether mobile app. Cybersecurity experts Chris Culnane, Eleanor McMurtry, Robert Merkel and Vanessa Teague have raised concerns over the app’s encryption standards.

If COVIDSafe has similar encryption standards – which we can’t know without the source code – it would be wrong to say the app’s data are encrypted. According to the experts, COVIDSafe shares a phone’s exact model number in plaintext with other users, whose phones store this detail alongside the original user’s corresponding unique ID.

Tough tech techniques for privacy

US-based advocacy group The Open Technology Institute has argued in favour of a “differential privacy” method for encrypting contact tracing data. This involves injecting statistical “noise” into datasets, giving individuals plausible deniability if their data are leaked for purposes other than contact tracing.
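
The mechanics are simple enough to show directly. Below is a minimal sketch of the Laplace mechanism, the canonical differential privacy technique; the epsilon value and the counting query are illustrative assumptions, not details from the Open Technology Institute’s proposal.

```python
import numpy as np

def dp_count(true_count: int, epsilon: float = 0.5) -> float:
    """Differentially private count via the Laplace mechanism.
    A counting query has sensitivity 1, so the noise scale is 1/epsilon;
    smaller epsilon means more noise and stronger privacy."""
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# e.g. publishing how many contact events were logged at a venue
# (a hypothetical query, for illustration)
print(round(dp_count(true_count=42, epsilon=0.5)))
```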

Zero-knowledge proof is another option. In this computation technique, one party (the prover) proves to another party (the verifier) they know the value of a specific piece of information, without conveying any other information. Thus, it would “prove” necessary information such as who a user has been in proximity with, without revealing details such as their name, phone number, postcode, age, or other apps running on their phone.
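
One of the simplest concrete examples of this idea is the classic Schnorr identification protocol, sketched below with toy parameters (real deployments use numbers hundreds of bits long, and a contact tracing app would need a more elaborate construction). The prover convinces the verifier that it knows a secret x without disclosing anything about x itself.

```python
import secrets

# Toy Schnorr identification protocol (honest-verifier zero knowledge).
# Tiny parameters for readability only.
p, q, g = 23, 11, 2        # g generates a subgroup of prime order q mod p

x = secrets.randbelow(q - 1) + 1   # prover's secret
y = pow(g, x, p)                   # prover's public key

# 1. Commitment: prover picks a random r and sends t
r = secrets.randbelow(q - 1) + 1
t = pow(g, r, p)

# 2. Challenge: verifier sends a random c
c = secrets.randbelow(q)

# 3. Response: prover sends s, which reveals nothing about x on its own
s = (r + c * x) % q

# Verifier checks g^s == t * y^c (mod p): convinced the prover knows x,
# while learning nothing about x itself.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted")
```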

Not on the cloud, but still an effective device

Some approaches to contact tracing involve specialised hardware. Simmel is a wearable pen-like contact tracing device. It’s being designed by a Singapore-based team, supported by the European Commission’s Next Generation Internet program. All data are stored in the device itself, so the user has full control of their trace history until they share it.

This provides citizens a tracing beacon they can give to health officials if diagnosed, but is otherwise not linked to them through phone data or personal identifiers.

Missed opportunity

The response to COVIDSafe has been varied. While the number of downloads has been promising since its release, iPhone users have faced a range of functionality issues. Federal police are also investigating a series of text message scams allegedly aiming to dupe users.

The federal government has not chosen a decentralised, open-source, privacy-first approach. A better response to contact tracing would have been to establish clearer user information requirements and interoperability specifications (standards allowing different technologies and data to interact).

Also, inviting the private sector to help develop solutions (backed by peer review) could have encouraged innovation and provided economic opportunities.

How do we define privacy?

Personal information collected via COVIDSafe is governed under the Privacy Act 1988 and the Biosecurity Determination 2020.

These legal regimes reveal a gap between the public’s and the government’s conceptions of “privacy”.

You may think privacy means the government won’t share your private information. But judging by its general approach, the government thinks privacy means it will only share your information if it has authorised itself to do so.

Fundamentally, once you’ve told the government something, it has broad latitude to share that information using legislative exemptions and permissions built up over decades. This is why, when it comes to data security, mathematical guarantees trump legal “guarantees”.

For example, data collected by COVIDSafe may be accessible to various government departments through the recent anti-encryption legislation, the Assistance and Access Act. And you could be prosecuted for not properly self-isolating, based on your COVIDSafe data.

A right to feel secure

Moving forward, we may see more iterations of contact tracing technology in Australia and around the world.

The World Health Organisation is advocating for interoperability between contact tracing apps as part of the global virus response. And reports from Apple and Google indicate contact tracing will soon be built into your phone’s operating system.

As our government considers what to do next, it must balance privacy considerations with public health. We shouldn’t be forced to choose one over the other.

Selling Your Data without Selling Your Soul: Privacy, Property, and the Platform Economy

With Sinclair Davidson

Executive summary: Humans have always sought to defend a zone of privacy around themselves—to protect their personal information, their intimate actions and relationships, and their thoughts and ideas from the scrutiny of others. However, it is now common to hear that, thanks to digital technologies, we have little expectation of privacy over our personal information.

Meanwhile, the economic value of personal information is rapidly growing as data becomes a key input to economic activity. A major driver of this change is the rise of a new form of business organization that has come to dominate the economy: the platform. Platforms that can accumulate and store data and information are likely to make that data and information more valuable.

Given the growing economic importance of data, digital privacy has come to the fore as a major public policy issue. Yet, there is considerable confusion in public debates over the meaning of privacy and why it has become a public policy concern. A poor foundational understanding of privacy is likely to result in poor policy outcomes, including excessive regulatory costs, misallocated resources, and a failure to achieve intended goals.

This paper explores how to build a right to privacy that gives individuals more control over their personal data, and with it a choice about how much of their privacy to protect. It makes the case that privacy is an economic right that has largely not emerged in modern economies.

Regulatory attempts to improve individual control over personal information, such as the European Union’s General Data Protection Regulation (GDPR), have had unintended consequences. The GDPR is a quasi-global attempt to institute privacy protections over personal data through regulation. As an attempt to introduce a form of ownership over personal data, it is unwieldy and complex and unlikely to achieve its goals. The GDPR supplants the ongoing social negotiation around the appropriate ownership of personal data and presents a hurdle to future innovation.

In contrast to top-down approaches like the GDPR, the common law provides a framework for the discovery and evolution of rules around privacy. Under a common law approach, problems such as privacy are solved on a case-by-case basis, drawing on and building up a stock of precedent that has more fidelity to real-world dilemmas than do planned regulatory frameworks.

New technologies such as distributed ledger technology—blockchain—and advances in zero-knowledge proofs likewise provide an opportunity for entrepreneurs to improve privacy without top-down regulation and law.

Privacy is key to individual liberty. Individuals require control over their own private information in order to live autonomous and flourishing lives. While free individuals expose information about themselves in the course of social and economic activity, public policy should strive to ensure they do so only with their own implied or explicit consent.

The ideal public policy setting is one in which individuals have property rights over personal information and can control and monetize their own data. The common law, thanks to its case-by-case, evolutionary nature, is more likely to provide a sustainable and adaptive framework by which we can approach data privacy questions.

Published by the Competitive Enterprise Institute

Submission on the final report of the Australian Competition and Consumer Commission’s Digital Platforms Inquiry

With Darcy Allen, Dirk Auer, Justin (Gus) Hurwitz, Aaron Lane, Geoffrey A. Manne, Julian Morris and Jason Potts

The emergence of “Big Tech” has caused some observers to claim that the world is entering a new gilded age. In the realm of competition policy, these fears have led to a flurry of reports in which it is asserted that the underenforcement of competition laws has enabled Big Tech firms to crush their rivals and cement their dominance of online markets. They then go on to call for the creation of novel presumptions that would move enforcement of competition policy further away from the effects-based analysis that has largely defined it since the mid-1970s.

Australia has been at the forefront of this competition policy rethink. In July of 2019, the Australian Competition and Consumer Commission (ACCC) concluded an almost two-year-long investigation into the effect of digital platforms on competition in media and advertising markets.

The ACCC Digital Platforms Inquiry Final Report spans a wide range of issues, from competition between platforms to their effect on traditional news outlets and consumers’ privacy. It ultimately puts forward a series of recommendations that would tilt the scale of enforcement in favor of the whims of regulators, without regard to the adverse effects of such regulatory action, which may prove worse than the diseases the recommendations are intended to cure.

Available in PDF here.

Regulate? Innovate!

Suddenly, we live in a world of policy dilemmas around social media, digital platforms, personal data, and digital privacy. Voices on both sides of politics are loudly proclaiming we ought to regulate Facebook and Google. From the left, these calls focus on antitrust and competition law—the big platforms are too large, too dominant in their respective markets, and governments need to step in. From the right, conservatives are angry that social media services are deplatforming some popular voices and call for some sort of neutrality standard to be applied to these new ‘utilities’.

Less politically charged but nonetheless highly salient are the concerns about the collection and use of personal data. If ‘data is the new oil’—a commodity around which the global economy pivots—then Facebook and Google look disturbingly like the OPEC oil production cartel. These firms use that data to train artificial intelligence (AI) and serve advertisements to consumers with unparalleled precision. No longer is it the case, as the old advertising adage had it, that 50 per cent of advertising is wasted and nobody knows which half.

These policy dilemmas have come about because the digital environment has changed, and it has changed sharply. Facebook only opened to the public in 2006 and by 2009 already had 242 million users. By the second half of 2019 it had 2.38 billion users.

Facebook is not just central to our lives—one of the primary ways so many of us communicate with family, friends and distant acquaintances—but central to our politics. The first volume of the Mueller investigation into Russian interference in the 2016 American presidential election focused on the use of sock-puppet social media accounts by malicious Russian sponsors. There’s no reason to believe these efforts influenced the election outcome but it is nonetheless remarkable that, through Facebook, Russian agents were able to fraudulently organise political protests (for both left and right causes)—sometimes with hundreds of attendees—by pretending to be Americans.

There always has been, and always will be, debate about tax rates, free trade versus protectionism, monetary policy and banking, Nanny State paternalism, or whether railways should be privatised or nationalised. The arguments have been rehearsed since the 19th century, or even earlier. But we are poorly prepared not just for these new topics of digital rights and data surveillance, but for new dimensions on which we might judge our freedoms or economic rights.

Private firms are hoovering up vast quantities of data about us in exchange for providing services. With that data they can, if they like, map our lives—our relationships, activities, preferences—with a degree of exactness and sophistication we, as individuals, may not be able to do ourselves. How should we think about Facebook knowing more about our relationships than we do? Do we need to start regulating the new digital economy?

The surveillance economy

One prominent extended case for greater government control is made by Shoshana Zuboff, in her recent book The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019). For Zuboff, a professor at Harvard Business School, these new digital technologies present a new economic system, surveillance capitalism, that “claims human experience as free raw material for translation into behavioural data”.

Zuboff argues these new firms look a lot like the industrial behemoths of the 19th and 20th centuries. Google is like General Motors in its heyday, or the robber barons of the Gilded Age. Using Marxist-tinged language, she describes how firms claim the ‘behavioural surplus’ of this data to feed AI learning and predict our future desires—think Amazon or Netflix recommendation engines.

More sinisterly in Zuboff’s telling, these firms are not simply predicting our future preferences, but shaping them too: “It is no longer enough to automate information flows about us; the goal now is to automate us.” Netflix can put its own content at the top of its recommendation algorithm; Pokémon Go players tend to shop at restaurants and stores near the most valuable creatures.

Where many people spent years worrying about government surveillance in the wake of Edward Snowden’s leaks about the National Security Agency, Zuboff argues the NSA learned these techniques from Google—surveillance capitalism begets the surveillance state. At least the NSA is just focused on spying. Silicon Valley wants to manipulate: “Push and pull, suggest, nudge, cajole, shame, seduce,” she writes. “Google wants to be your co-pilot for life itself.”

Harrowing stuff. But these concerns would be more compelling if Zuboff had seriously engaged with the underlying economics of the business models she purports to analyse. Her argument—structured around an unclearly specified model of ‘surveillance assets’, ‘surveillance revenues’, and ‘surveillance capital’—is a modification of the internet-era adage, “If you’re not paying for the product, you are the product”. Many services we use online are free. The platforms use data about our activities on those platforms to make predictions—for example, about goods and services we might like to consume—and sell those predictions to advertisers. As she describes it:

… we are the objects from which raw materials are extracted and expropriated for Google’s prediction factories. Predictions about our behaviour are Google’s products, and they are sold to its actual customers but not to us. We are the means to others’ ends.

 … the essence of the exploitation here is the rendering of our lives as behavioural data for the sake of others’ improved control of us.

This argument misses a crucial step: what is this control? For the most part, the product derived from our data that is sold to other firms is advertising space: banner ads on news websites, ads dropped into social media feeds, ads threaded above our email inboxes. Seeing an advertisement is not the same as being controlled by a company. The history of advertising dates back at least to Ancient Rome. We are well familiar with the experience of companies trying to sell us products. We do not have to buy if we do not like the look of the products displayed on our feeds. It’s a crudely simple point, but if we do not buy, all that money—all that deep-learning technology, all those neural networks, all that ‘surveillance’—has been wasted.

Two-sided markets

So how should we think about the economics of the big technology companies? Google and Facebook are platforms; what Nobel-winning economist Jean Tirole described as ‘two-sided’ markets. Until recently the dominant market structure was a single-sided market: think a supermarket. A supermarket has a one-directional value chain, moving goods from producers to consumers. Goods are offered to customers on a take-it-or-leave-it basis. In a two-sided market, customers are on both sides of the market. The service Google and Facebook provide is matching. They want advertisers to build relationships with users and vice-versa. Since the first scholarly work done on two-sided markets, economists have observed platforms that take three or more groups of users and match them together.

Two-sided markets are not new, of course. Newspapers have traditionally done this: match advertisers with readers. Banks match borrowers with lenders. Tirole’s first work on the subject looked specifically at credit card networks. But two-sided markets dominate the online world, and as the economy becomes more digital they are increasingly important. When we try to define what is unique about the ‘sharing economy’, we’re really just talking about two-sided markets: AirBnB matches holidaymakers with empty homes, Uber matches drivers with riders, AirTasker matches labour with odd jobs. Sometimes single and two-sided markets co-exist: Amazon’s two-sided marketplace sits alongside its more traditional online store.

The economic dynamics of two-sided markets are very different to those we are used to in the industrial economy. They are strongly characterised by network effects: the more users they have on both sides, the more valuable they are. So firms tend to price access in strange ways. Just as advertisers subsidised the cost of 20th century newspapers, Google and Facebook give us free access not because we are paying in personal data but because they are in the relationship business. Payments go in funny directions on platforms, and the more sides there are the more opaque the business model can seem.

An ironic implication of Zuboff’s arguments is that her neo-Marxian focus implicitly discounts what most analysts identify as the two key issues around these platforms: whether these networks are harmful for privacy and whether they are monopolistic.

First, the monopoly arguments. In Australia the ACCC has been running a digital platforms inquiry whose draft report—released in December 2018—called for using competition law against the large platforms on the basis they have started to monopolise the advertising market. There are many problems with the ACCC’s analysis. For example, it badly mangles its narrative account of how newspaper classifieds migrated online, implying Google and Facebook captured the ‘rivers of gold’. In fact, classified advertising went elsewhere (often to websites owned by the newspapers, such as Domain).

Yet the most critical failure of the ACCC is its bizarrely static perspective on an incredibly dynamic industry. True, platform markets are subject to extreme network effects—the more users, the more valuable—but this does not mean they tend towards sustainable monopolies. Far from it. There are no ‘natural’ limits to platform competition on the internet. There is unlimited space in a digital world. The only significant resource constraint is human attention, and the platform structure gives new entrants a set of strategic tools which can help jump-start competition. Using one side of the market to subsidise another side of the market helps ‘boot-strap’ network effects.

Consumer harm is the standard criterion for whether a firm is unacceptably monopolistic. Usually this means asking whether prices are higher than they would be if the market were more contested. Given the money prices for these services are often zero, that argument is hard to sustain. Nobody pays to use Google.com. At first pass the digital platform business seems to have been an extraordinary boost to consumer surplus.

But, again, platform economics can be strange. It is possible we are paying not with money but with personal data, and that the role of a competition authority is to protect our privacy as much as our wallets. This is the view of the ACCC (at least in its December 2018 draft report), and it has become an article of faith in the ‘hipster antitrust’ movement in the United States that competition regulators need to focus on more than just higher prices.

There is obviously a great deal to privacy concerns. In a recent book, The Classical Liberal Case for Privacy in a World of Surveillance and Technological Change (Palgrave Macmillan, 2018), I argued we are currently in an extended social negotiation about the value of privacy and its protection. But the privacy debate is characterised by a lot of misconceptions and confusions. Privacy policies and disclosure practices that were once acceptable no longer are. Expectations are changing. Mark Zuckerberg would no longer get away with the reckless anti-privacy statements he made as CEO when Facebook launched. The question is whether to wait for privacy expectations to shift—supplemented by the common law—or whether governments need to step in with bold new privacy regulation.

The experience with privacy regulation so far has not been great. The European Union’s General Data Protection Regulation is the single most significant attempt to regulate privacy thus far. The GDPR, which became enforceable in 2018, requires explicit and informed consent for data collection and use, requires that users be told how long their data will be retained, and provides for a “right of erasure” that allows users to require firms to delete any personal data they have collected at any time. The GDPR was written so broadly as to apply to any company that does business with any European citizen, in practice making the GDPR not just a European regulation but a global one.

Early evidence suggests a host of consequences unforeseen by the GDPR’s designers. Alec Stapp, at the International Center for Law and Economics, argues GDPR compliance costs have been “astronomical”. Microsoft put as many as 1,600 engineers on GDPR compliance, and Google said it spent “hundreds of years of human time” ensuring it follows the new rules globally. These firms have the resources to do so. One consequence of high compliance costs has been to push out new competitors: small and medium internet companies that cannot dedicate thousands of engineers to regulatory compliance. As Stapp points out, it’s not at all clear this trade-off for privacy protection has been worth it: regulatory requirements for things such as data portability and right of data access have created new avenues for accidental and malicious access to private data.

A peculiarity of the history of early-stage technologies is they tend to trade off privacy against other benefits. Communications over the telegraph were deeply insecure before the widespread use of cryptography; early telephone lines (‘party lines’) allowed neighbours to listen in. Declaring privacy dead in the digital age is not just premature, it is potentially counterproductive. We need sustained innovation and entrepreneurial energy directed at building privacy standards into technologies we now use every day.

The deplatforming question

One final and politically sensitive way these platforms might be exercising power is by using their role as mediators of public debate to favour or disfavour certain political views. This is the fear behind the deplatforming of conservatives on social media, which has seen a number of conservative and hard-right activists and personalities banned from Facebook, Instagram and Twitter. Prominent examples include the conservative conspiracist broadcaster Alex Jones, his co-panellist Paul Joseph Watson, and provocateur Milo Yiannopoulos. Social media services also have been accused of subjecting conservatives to ‘shadow bans’—adjusting their algorithms to hide specific content or users from site-wide searches.

These practices have led many conservative groups who usually oppose increases in regulation to call for government intervention. The Trump administration even launched an online tool in May 2019 for Americans to report if they suspected “political bias” had violated their freedom of speech on social media platforms.

One widely canvassed possibility is for regulators to require social media platforms to be politically neutral. This resembles the long-discredited ‘fairness doctrine’ imposed by American regulators on television and radio broadcasting until the late 1980s. The fairness doctrine prevented the rise of viewpoint-led journalism (such as Fox News) and entrenched left-leaning political views as ‘objective’ journalism. Even if this was not an obvious violation of the speech rights of private organisations, it takes some bizarre thinking to believe government bureaucrats and regulators would prioritise protecting conservatives once given the power to determine what social media networks are allowed to do.

Another proposal is to make the platforms legally liable for content posted by their users. The more the platforms exercise discretion about what is published on their networks, the more they look like they have quasi-editorial control, and courts should treat them as if they do. While this would no doubt lead to a massive surge in litigation against the platforms for content produced by users, how such an approach would protect conservative voices is unclear: fear of litigation would certainly encourage platforms to take a much heavier hand, particularly given the possibilities of litigation outside the United States where hate speech and vilification laws are common.

The genesis of this proposal seems to come from a confusion about the distinction between social media platforms and newspapers. Newspapers solicit and edit their content. Social media platforms do not. Social media platforms come from a particular political and ideological environment—the socially liberal, quasi-libertarian and individualistic worldview of Silicon Valley and the Bay Area—and these technologies now hold the cultural high-ground. The conservative movement has focused on trying to change Washington DC when it should have been just as focused on developing new ways for people to exercise their freedom, as Silicon Valley has done.

But regulation cannot be the answer. Regulation would dramatically empower bureaucrats, opening up new avenues for government intervention at the heart of the new economy (any proposed regulation of Facebook’s algorithm, for instance, would lay the foundation for regulating Amazon’s search algorithm, and then any firm that tries to customise and curate their product and service offerings), and threatening, not protecting, freedom of speech. To give government the power to regulate what ought to be published is a threat to all who publish, not just to a few companies in northern California.

Platform to protocol economy

I opened this article with a discussion of how recent a development the platform economy is: a decade old, at best. A host of new technologies and innovations are coming that challenge the platforms’ dominance and might radically change the competitive dynamic of the sector. New social media networks are opening all the time. Many of those who have been deplatformed have migrated to services such as Telegram or specially designed free speech networks such as Gab. Blockchain, for instance, is a platform technology built as a decentralised (no single authority, public or private, can control its use) and open (anyone can join) protocol.

Likewise, intense innovation focusing on decentralised advertising networks threatens Google’s ad sector dominance, and offers advertisers more assurance their digital dollar is used well. Other new technologies focus on regaining control over user privacy. Cutting-edge privacy technologies such as zero-knowledge proofs open massive opportunities for hiding personal information while still participating in economic exchange and social interactions. Blockchain applications are being developed to give users genuine control over data and facilitate the sort of private property rights over information the European Union’s GDPR awkwardly tries (and fails) to create.

The platforms know they face an uncertain and more competitive technological future. That is why Facebook is developing its own cryptocurrency—a pivot into financial services, much as the Chinese social media service WeChat developed WeChat Pay. Google is investing serious resources into blockchain research, despite the technology’s long-run potential to displace its competitive advantages. The internet 10 years on will look very different—not because governments decided to regulate, but because digital entrepreneurs will have kept pushing, bringing us new products and services, revolutionising the global economy.

Some Economic Consequences of the GDPR

With Darcy Allen, Alastair Berg, Brendan Markey-Towler, and Jason Potts. Published in Economics Bulletin, Volume 39, Issue 2, pages 785-797. Originally a Medium post.

Abstract: The EU General Data Protection Regulation (GDPR) is a wide-ranging personal data protection regime of greater magnitude than any similar regulation previously in the EU, or elsewhere. In this paper, we outline how the GDPR impacts the value of data held by data collectors before proposing some potential unintended consequences. Given the distortions of the GDPR on data value, we propose that new complex financial products—essentially new data insurance markets—will emerge, potentially leading to further systemic risks. Finally, we examine how market-driven solutions to the data property rights problems the GDPR seeks to solve—particularly using blockchain technology as economic infrastructure for data rights—might be less distortionary.

Available here.

Submission to the Australian Competition and Consumer Commission’s Digital Platforms Inquiry

With Gus Hurwitz.

Executive summary: The analysis in the Australian Competition and Consumer Commission’s Preliminary Report for the Digital Platforms Inquiry is inadequate in several ways, most notably:

  • It mischaracterises the relationship between changes in the economics of media advertising and the rise of digital platforms such as Facebook and Google.
  • Its analysis of the dynamics of media diversity is misguided.
  • Its competition analysis assumes its results and makes unsupportable claims about the division of advertising markets.
  • It is recklessly unconcerned with the freedom of speech consequences of its recommendations.
  • It fails to recognise, and proposes to supplant, the ongoing social negotiation over data privacy.
  • It provides a poor analytic base on which to make policy recommendations, as it applies a static, rather than dynamic, approach to its analysis.

There is a real danger that if the policy recommendations outlined in the preliminary report were to be adopted, Australian consumers would be severely harmed.

Available here.

The Classical Liberal Case for Privacy in a World of Surveillance and Technological Change

Palgrave Macmillan, 2018

How should a free society protect privacy? Dramatic changes in national security law and surveillance, as well as technological changes from social media to smart cities mean that our ideas about privacy and its protection are being challenged like never before. In this interdisciplinary book, Chris Berg explores what classical liberal approaches to privacy can bring to current debates about surveillance, encryption and new financial technologies. Ultimately, he argues that the principles of classical liberalism – the rule of law, individual rights, property and entrepreneurial evolution – can help extend as well as critique contemporary philosophical theories of privacy.

Available from Palgrave Macmillan.

Some economic consequences of the GDPR

With Darcy Allen, Alastair Berg and Jason Potts.

At the end of May 2018, the most far-reaching data protection and privacy regime ever seen will come into effect. Although the General Data Protection Regulation (GDPR) is a European law, it will have a global impact. There are likely to be some unintended consequences of the GDPR.

As we outline in a recent working paper, the implementation of the GDPR opens the potential for new data markets in tradable (possibly securitised) financial instruments. People’s data is better protected through self-governance solutions, including the application of blockchain technology.

The GDPR is in effect a global regulation. It applies to any company which has a European customer, no matter where that company is based. Even offering the use of a European currency on your website, or having information in a European language may be considered offering goods and services to an EU data subject for the purposes of the GDPR.

The remit of the regulation is as broad as its territorial scope. The rights of data subjects include that of data access, rectification, the right to withdraw consent, erasure and portability. Organisations using personal data in the course of business must abide by strict technical and organisational requirements. These restrictions include gaining explicit consent and justifying the collection of each individual piece of personal data. Organisations must also employ a Data Protection Officer (DPO) to monitor compliance with the 261-page document.

Organisations collect data from customers for a range of reasons, both commercial and regulatory — organisations need to know who they are dealing with. Banks will not lend money to someone they don’t know; they need to have a level of assurance over their customer’s willingness and ability to repay. Similarly, many organisations are forced to collect increasingly large amounts of personal data about their customers. Anti-money laundering and counter-terrorism financing legislation (AML/CTF) requires many institutions to monitor their customers’ activity on an ongoing basis. In addition, many organisations derive significant value from personal data. Consumers and organisations exchange data for services, much of which is voluntary and to their mutual benefit.

One of the most discussed aspects of the GDPR is the right to erasure — often referred to as the right to be forgotten. This allows data subjects to call on the government to compel companies that hold their personal data to delete it.

We propose that the right to erasure creates uncertainty over the value of data held by organisations. This creates an option on that data.

The right to erasure creates uncertainty over the value of the data to the data collector. At any point in time, the data subject may withdraw consent. During a transaction, or perhaps in return for some free service, a data subject may consent to have their personal data sold to a third party such as an advertiser or market researcher. Up until an (unknown) point in time — when the data subject may or may not withdraw consent to their data being used — that personal data holds positive value. This is in effect a put option on that data — the option to sell that data to a third party.

The value of such an option is derived from the value of the underlying asset — the data — which in turn depends on the continued consent by the data subject.
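
A toy calculation, using illustrative numbers rather than anything from our paper, shows how this withdrawal risk eats into the value of a data asset. The per-period revenue v, the per-period probability w that consent is withdrawn, and the discount rate d are all assumptions chosen for the sake of the example.

```python
# Toy valuation of a data record under erasure risk: the record pays
# revenue v each period until the data subject withdraws consent,
# which happens in any given period with probability w.

def data_value(v: float, w: float, d: float, horizon: int = 200) -> float:
    """Expected present value of a data record.
    v: revenue per period, w: per-period withdrawal probability,
    d: per-period discount rate."""
    return sum(v * (1 - w) ** t / (1 + d) ** t for t in range(horizon))

print(f"no erasure right (w=0):    {data_value(1.0, 0.00, 0.05):.2f}")
print(f"10% withdrawal risk (w=0.1): {data_value(1.0, 0.10, 0.05):.2f}")
# The same revenue stream is worth roughly a third as much once a 10%
# per-period chance of consent withdrawal is priced in.
```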

Rational economic actors will respond in predictable ways to manage such risk. Data-Backed Securities (DBS) might allow organisations to convert unpredictable future revenue streams into one single payment. Collateralised Data Obligations (CDO) might allow data collectors to package personal data into tranches of varying risk of consent withdrawal. A secondary data derivative market is thus created — one we have very little idea how it will operate, or what its secondary effects may be.

Such responses to regulatory intervention are not new. The Global Financial Crisis (GFC) was at least in part caused by complex and poorly understood financial instruments like Mortgage-Backed Securities (MBS) and Collateralised Debt Obligations (CDO). These were developed in response to poorly designed capital requirements.

Similarly, global AML/CTF requirements faced by financial institutions have caused many firms to simply stop offering their products to certain individuals and even whole regions of the world. The unbanked and underbanked are all the poorer as a result.

What these two examples have in common is good intentions. Adequate capital requirements and the prevention of money laundering are good things, but good intentions are not enough. Secondary consequences should always be considered and discussed.

Self-governance alternatives, including the application of blockchain technology, should be considered. These alternatives use technology to allow individuals greater control over the personal data they share with the world.

Innovators developing self-sovereign identity solutions are attempting to provide a market-based way for individuals to gain greater control over — and derive value from — their personal data. These solutions allow users to share just enough data for a transaction to go ahead. A bartender doesn’t need to know your name or address when you want a drink; they just need to know you are of legal age.
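
As a rough sketch of how such selective disclosure can work (a simplified illustration, not any particular self-sovereign identity product): an issuer signs salted hashes of each attribute, and the holder later reveals a single attribute and its salt while the rest stay hidden. In the toy code below, an HMAC stands in for the public-key signature a real credential scheme would use.

```python
import hashlib, hmac, json, secrets

# Toy selective-disclosure credential. The issuer commits to each
# attribute with a salted hash, then "signs" the set of commitments.
# HMAC stands in for a real digital signature, purely for illustration.

ISSUER_KEY = secrets.token_bytes(32)  # held by the credential issuer

def issue(attributes: dict):
    salts = {k: secrets.token_hex(16) for k in attributes}
    digests = {k: hashlib.sha256((salts[k] + str(v)).encode()).hexdigest()
               for k, v in attributes.items()}
    signature = hmac.new(ISSUER_KEY,
                         json.dumps(digests, sort_keys=True).encode(),
                         hashlib.sha256).hexdigest()
    return salts, digests, signature

def verify_disclosure(digests, signature, attr, value, salt):
    # In practice the verifier would check a public-key signature here.
    expected = hmac.new(ISSUER_KEY,
                        json.dumps(digests, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False
    return digests[attr] == hashlib.sha256((salt + str(value)).encode()).hexdigest()

salts, digests, sig = issue({"name": "Alice", "address": "1 Main St", "over_18": True})
# The bartender sees only the over_18 claim, its salt, and the signed digests;
# name and address remain hidden behind their hashes.
print(verify_disclosure(digests, sig, "over_18", True, salts["over_18"]))  # True
```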

Past instances of regulatory intervention should make us cautious about assuming that even well-meaning regulation will achieve its stated objectives without negative effects. Self-sovereign identity and the use of blockchain technology are promising solutions to the challenges of data privacy.