The COVIDSafe app was just one contact tracing option. These alternatives guarantee more privacy

With Kelsie Nabben

Since its release on Sunday, experts and members of the public alike have raised privacy concerns with the federal government’s COVIDSafe mobile app.

The contact tracing app aims to stop COVID-19’s spread by “tracing” interactions between users via Bluetooth, and alerting those who may have been in proximity with a confirmed case.

According to a recent poll commissioned by The Guardian, 57% of respondents said they were “concerned about the security of personal information collected” through COVIDSafe.

In its coronavirus response, the government has a golden opportunity to build public trust. There are other ways to build a digital contact tracing system, some of which would arguably raise fewer doubts about data security than the app.

All eyes on encryption

Incorporating advanced cryptography into COVIDSafe could have given Australian citizens a mathematical guarantee of their privacy, rather than a legal one.

A team at Canada’s McGill University is working on a solution that uses “mix networks” to send cryptographically “hashed” contact tracing location data through multiple, decentralised servers. This process hides the location and time stamps of users, sharing only necessary data.

This would let the government alert those who have been near a diagnosed person, without revealing other identifiers that could be used to trace back to them.
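As a loose sketch of the hashing idea only (the mix-network routing is not shown, and the function names and identifier scheme here are invented for illustration, not taken from the McGill design or COVIDSafe), a device could exchange salted hashes of rotating IDs rather than the IDs themselves:

```python
import hashlib
import secrets

def hashed_beacon(user_id: str, epoch: int, salt: bytes) -> str:
    """Derive an opaque token from a user ID and a time epoch.

    Nearby phones store only this token; without the salt held by
    the health authority, the token cannot be traced back to a user.
    """
    payload = f"{user_id}:{epoch}".encode() + salt
    return hashlib.sha256(payload).hexdigest()

# The health authority keeps the salt; devices exchange only tokens.
salt = secrets.token_bytes(16)
token = hashed_beacon("user-42", epoch=1234, salt=salt)

# Later, a diagnosed user's (id, epoch) pairs can be re-hashed and
# compared against stored tokens to find exposure matches.
assert hashed_beacon("user-42", 1234, salt) == token
assert hashed_beacon("user-43", 1234, salt) != token
```

Only a party holding the salt can link a token back to an identifier; the mix-network approach goes further by also hiding location and timing metadata while the data is in transit.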

It’s currently unclear what encryption standards COVIDSafe is using, as the app’s source code has not been publicly released, and the government has been widely criticised for this. Once the code is available, researchers will be able to review and assess how safe users’ data is.

COVIDSafe is based on Singapore’s TraceTogether mobile app. Cybersecurity experts Chris Culnane, Eleanor McMurtry, Robert Merkel and Vanessa Teague have raised concerns over the app’s encryption standards.

If COVIDSafe has similar encryption standards – which we can’t know without the source code – it would be wrong to say the app’s data are encrypted. According to the experts, COVIDSafe shares a phone’s exact model number in plaintext with other users, whose phones store this detail alongside the original user’s corresponding unique ID.

Tough tech techniques for privacy

US-based advocacy group The Open Technology Institute has argued in favour of a “differential privacy” method for encrypting contact tracing data. This involves injecting statistical “noise” into datasets, giving individuals plausible deniability if their data are leaked for purposes other than contact tracing.

Zero-knowledge proof is another option. In this computation technique, one party (the prover) proves to another party (the verifier) they know the value of a specific piece of information, without conveying any other information. Thus, it would “prove” necessary information such as who a user has been in proximity with, without revealing details such as their name, phone number, postcode, age, or other apps running on their phone.
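As a toy illustration of the prover/verifier interaction, here is a textbook Schnorr identification protocol, in which the prover demonstrates knowledge of a secret exponent without revealing it (the group parameters below are deliberately tiny and insecure, chosen only for readability):

```python
import secrets

# Public parameters: prime modulus p, subgroup order q = (p - 1) / 2,
# and a generator g of the order-q subgroup. Toy-sized on purpose.
p = 1019
q = 509
g = 4

# Prover's secret x and public key y = g^x mod p.
x = secrets.randbelow(q)
y = pow(g, x, p)

# 1. Commitment: prover picks a random r and sends t = g^r mod p.
r = secrets.randbelow(q)
t = pow(g, r, p)

# 2. Challenge: verifier sends a random c.
c = secrets.randbelow(q)

# 3. Response: prover sends s = r + c*x mod q; s leaks nothing about
#    x because r masks it.
s = (r + c * x) % q

# 4. Verification: g^s == t * y^c (mod p) convinces the verifier the
#    prover knows x, yet x itself is never transmitted.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

A contact tracing system could, in principle, use related techniques to prove "this user was near a diagnosed case" without disclosing who or where, though production systems use far more elaborate constructions.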

Not on the cloud, but still an effective device

Some approaches to contact tracing involve specialised hardware. Simmel is a wearable pen-like contact tracing device. It’s being designed by a Singapore-based team, supported by the European Commission’s Next Generation Internet program. All data are stored in the device itself, so the user has full control of their trace history until they share it.

This provides citizens a tracing beacon they can give to health officials if diagnosed, but is otherwise not linked to them through phone data or personal identifiers.

Missed opportunity

The response to COVIDSafe has been varied. While the number of downloads has been promising since its release, iPhone users have faced a range of functionality issues. Federal police are also investigating a series of text message scams allegedly aiming to dupe users.

The federal government has not chosen a decentralised, open-source, privacy-first approach. A better response to contact tracing would have been to establish clearer user information requirements and interoperability specifications (standards allowing different technologies and data to interact).

Also, inviting the private sector to help develop solutions (backed by peer review) could have encouraged innovation and provided economic opportunities.

How do we define privacy?

Personal information collected via COVIDSafe is governed under the Privacy Act 1988 and the Biosecurity Determination 2020.

These legal regimes reveal a gap between the public’s and the government’s conceptions of “privacy”.

You may think privacy means the government won’t share your private information. But judging by its general approach, the government thinks privacy means it will only share your information if it has authorised itself to do so.

Fundamentally, once you’ve told the government something, it has broad latitude to share that information using legislative exemptions and permissions built up over decades. This is why, when it comes to data security, mathematical guarantees trump legal “guarantees”.

For example, data collected by COVIDSafe may be accessible to various government departments through the recent anti-encryption legislation, the Assistance and Access Act. And you could be prosecuted for not properly self-isolating, based on your COVIDSafe data.

A right to feel secure

Moving forward, we may see more iterations of contact tracing technology in Australia and around the world.

The World Health Organisation is advocating for interoperability between contact tracing apps as part of the global virus response. And reports from Apple and Google indicate contact tracing will soon be built into your phone’s operating system.

As our government considers what to do next, it must balance privacy considerations with public health. We shouldn’t be forced to choose one over another.

Selling Your Data without Selling Your Soul: Privacy, Property, and the Platform Economy

With Sinclair Davidson

Executive summary: Humans have always sought to defend a zone of privacy around themselves—to protect their personal information, their intimate actions and relationships, and their thoughts and ideas from the scrutiny of others. However, it is now common to hear that, thanks to digital technologies, we have little expectation of privacy over our personal information.

Meanwhile, the economic value of personal information is rapidly growing as data becomes a key input to economic activity. A major driver of this change is the rise of a new form of business organization that has come to dominate the economy: the platform. Platforms that can accumulate and store data and information are likely to make that data and information more valuable.

Given the growing economic importance of data, digital privacy has come to the fore as a major public policy issue. Yet, there is considerable confusion in public debates over the meaning of privacy and why it has become a public policy concern. A poor foundational understanding of privacy is likely to result in poor policy outcomes, including excessive regulatory costs, misallocated resources, and a failure to achieve intended goals.

This paper explores how to build a right to privacy that gives individuals more control over their personal data, and with it a choice about how much of their privacy to protect. It makes the case that privacy is an economic right that has largely not emerged in modern economies.

Regulatory attempts to improve individual control over personal information, such as the European Union’s General Data Protection Regulation (GDPR), have unintended consequences and are unlikely to achieve their goals. The GDPR is a quasi-global attempt to institute privacy protections over personal data through regulation. As an attempt to introduce a form of ownership over personal data, it is unwieldy and complex and unlikely to achieve its goals. The GDPR supplants the ongoing social negotiation around the appropriate ownership of personal data and presents a hurdle to future innovation.

In contrast to top-down approaches like the GDPR, the common law provides a framework for the discovery and evolution of rules around privacy. Under a common law approach, problems such as privacy are solved on a case-by-case basis, drawing on and building up a stock of precedent that has more fidelity to real-world dilemmas than do planned regulatory frameworks.

New technologies such as distributed ledger technology—blockchain—and advances in zero-knowledge proofs likewise provide an opportunity for entrepreneurs to improve privacy without top-down regulation and law.

Privacy is key to individual liberty. Individuals require control over their own private information in order to live autonomous and flourishing lives. While free individuals expose information about themselves in the course of social and economic activity, public policy should strive to ensure they do so only with their own implied or explicit consent.

The ideal public policy setting is one in which individuals have property rights over personal information and can control and monetize their own data. The common law, thanks to its case-by-case, evolutionary nature, is more likely to provide a sustainable and adaptive framework by which we can approach data privacy questions.

Published by the Competitive Enterprise Institute

Submission on the final report of the Australian Competition and Consumer Commission’s Digital Platforms Inquiry

With Darcy Allen, Dirk Auer, Justin (Gus) Hurwitz, Aaron Lane, Geoffrey A. Manne, Julian Morris and Jason Potts

The emergence of “Big Tech” has caused some observers to claim that the world is entering a new gilded age. In the realm of competition policy, these fears have led to a flurry of reports in which it is asserted that the underenforcement of competition laws has enabled Big Tech firms to crush their rivals and cement their dominance of online markets. They then go on to call for the creation of novel presumptions that would move enforcement of competition policy further away from the effects-based analysis that has largely defined it since the mid-1970s.

Australia has been at the forefront of this competition policy rethink. In July of 2019, the Australian Competition and Consumer Commission (ACCC) concluded an almost two-year-long investigation into the effect of digital platforms on competition in media and advertising markets.

The ACCC Digital Platforms Inquiry Final Report spans a wide range of issues, from competition between platforms to their effect on traditional news outlets and consumers’ privacy. It ultimately puts forward a series of recommendations that would tilt the scale of enforcement in favor of the whims of regulators, without regard to the adverse effects of such regulatory action, which may be worse than the disease it is intended to cure.

Available in PDF here.

Regulate? Innovate!

Suddenly, we live in a world of policy dilemmas around social media, digital platforms, personal data, and digital privacy. Voices on both sides of politics are loudly proclaiming we ought to regulate Facebook and Google. From the left, these calls focus on antitrust and competition law—the big platforms are too large, too dominant in their respective markets, and governments need to step in. From the right, conservatives are angry that social media services are deplatforming some popular voices and call for some sort of neutrality standard to be applied to these new ‘utilities’.

Less politically charged but nonetheless highly salient are the concerns about the collection and use of personal data. If ‘data is the new oil’—a commodity around which the global economy pivots—then Facebook and Google look disturbingly like the OPEC oil production cartel. These firms use that data to train artificial intelligence (AI) and serve advertisements to consumers with unparalleled precision. No longer is it the case that, as the old advertising adage had it, half of all advertising spending is wasted.

These policy dilemmas have come about because the digital environment has changed, and it has changed sharply. Facebook only opened to the public in 2006 and by 2009 already had 242 million users. By the second half of 2019 it had 2.38 billion users.

Facebook is not just central to our lives—one of the primary ways so many of us communicate with family, friends and distant acquaintances—but central to our politics. The first volume of the Mueller investigation into Russian interference in the 2016 American presidential election focused on the use of sock-puppet social media accounts by malicious Russian sponsors. There’s no reason to believe these efforts influenced the election outcome but it is nonetheless remarkable that, through Facebook, Russian agents were able to fraudulently organise political protests (for both left and right causes)—sometimes with hundreds of attendees—by pretending to be Americans.

There always has been, and always will be, debate about tax rates, free trade versus protectionism, monetary policy and banking, Nanny State paternalism, or whether railways should be privatised or nationalised. The arguments have been rehearsed since the 19th century, or even earlier. But we are poorly prepared not just for these new topics of digital rights and data surveillance, but for the new dimensions on which we might judge our freedoms or economic rights.

Private firms are hoovering up vast quantities of data about us in exchange for providing services. With that data they can, if they like, map our lives—our relationships, activities, preferences—with a degree of exactness and sophistication we, as individuals, may not be able to do ourselves. How should we think about Facebook knowing more about our relationships than we do? Do we need to start regulating the new digital economy?

The surveillance economy

One prominent extended case for greater government control is made by Shoshana Zuboff, in her recent book The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019). For Zuboff, a professor at Harvard Business School, these new digital technologies present a new economic system, surveillance capitalism, that “claims human experience as free raw material for translation into behavioural data”.

Zuboff argues these new firms look a lot like the industrial behemoths of the 19th and 20th centuries. Google is like General Motors in its heyday, or the robber barons of the Gilded Age. Using Marxist-tinged language, she describes how firms claim the ‘behavioural surplus’ of this data to feed AI learning and predict our future desires—think Amazon or Netflix recommendation engines.

More sinisterly in Zuboff’s telling, these firms are not simply predicting our future preferences, but shaping them too: “It is no longer enough to automate information flows about us; the goal now is to automate us.” Netflix can put its own content at the top of its recommendation algorithm; Pokémon Go players tend to shop at restaurants and stores near the most valuable creatures.

Where many people spent years worrying about government surveillance in the wake of Edward Snowden’s leaks about the National Security Agency, she argues the NSA learned these techniques from Google—surveillance capitalism begets the surveillance state. At least the NSA is just focused on spying. Silicon Valley wants to manipulate: “Push and pull, suggest, nudge, cajole, shame, seduce,” she writes. “Google wants to be your co-pilot for life itself.”

Harrowing stuff. But these concerns would be more compelling if Zuboff had seriously engaged with the underlying economics of the business models she purports to analyse. Her argument—structured around an unclearly specified model of ‘surveillance assets’, ‘surveillance revenues’, and ‘surveillance capital’—is a modification of the internet-era adage, “If you’re not paying for the product, you are the product”. Many services we use online are free. The platforms use data about our activities on those platforms to make predictions—for example, about goods and services we might like to consume—and sell those predictions to advertisers. As she describes it:

… we are the objects from which raw materials are extracted and expropriated for Google’s prediction factories. Predictions about our behaviour are Google’s products, and they are sold to its actual customers but not to us. We are the means to others’ ends.

 … the essence of the exploitation here is the rendering of our lives as behavioural data for the sake of others’ improved control of us.

This argument misses a crucial step: what is this control? For the most part, the product derived from our data that is sold to other firms is advertising space: banner ads on news websites, ads dropped into social media feeds, ads threaded above our email inboxes. Seeing an advertisement is not the same as being controlled by a company. The history of advertising dates back at least to Ancient Rome. We are well familiar with the experience of companies trying to sell us products. We do not have to buy if we do not like the look of the products displayed on our feeds. It’s a crudely simple point, but if we do not buy, all that money—all that deep-learning technology, all those neural networks, all that ‘surveillance’—has been wasted.

Two-sided markets

So how should we think about the economics of the big technology companies? Google and Facebook are platforms: what Nobel-winning economist Jean Tirole described as ‘two-sided’ markets. Until recently the dominant market structure was the single-sided market: think of a supermarket. A supermarket has a one-directional value chain, moving goods from producers to consumers. Goods are offered to customers on a take-it-or-leave-it basis. In a two-sided market, customers are on both sides of the market. The service Google and Facebook provide is matching: they want advertisers to build relationships with users, and vice versa. Since the first scholarly work on two-sided markets, economists have observed platforms that take three or more groups of users and match them together.

Two-sided markets are not new, of course. Newspapers have traditionally done this: matching advertisers with readers. Banks match borrowers with lenders. Tirole’s first work on two-sided markets looked specifically at credit card networks. But such markets dominate the online world, and as the economy becomes more digital they are increasingly important. When we try to define what is unique about the ‘sharing economy’, we’re really just talking about two-sided markets: AirBnB matches holidaymakers with empty homes, Uber matches drivers with riders, AirTasker matches labour with odd jobs. Sometimes single- and two-sided markets co-exist: Amazon’s two-sided marketplace sits alongside its more traditional online store.

The economics of two-sided markets are very different from those of the industrial economy we are used to. They are strongly characterised by network effects: the more users they have on both sides, the more valuable they are. So firms tend to price access in strange ways. Just as advertisers subsidised the cost of 20th century newspapers, Google and Facebook give us free access not because we are paying in personal data but because they are in the relationship business. Payments go in funny directions on platforms, and the more sides there are, the more opaque the business model can seem.
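The cross-subsidy logic can be seen in a deliberately stylised toy model (invented for illustration, not drawn from Tirole’s work): if advertisers pay in proportion to the audience they reach, charging users anything at all can shrink the audience by more than the user fees are worth.

```python
def platform_revenue(user_price: float) -> float:
    """Toy two-sided model: users join if the price is below their
    private value; advertisers pay per user reached."""
    # 1,000 potential users with values spread uniformly over [0, 10]:
    users = 1000 * max(0.0, (10 - user_price) / 10)
    user_revenue = user_price * users
    # Each of 50 advertisers pays 1.0 per user reached.
    ad_revenue = 50 * 1.0 * users
    return user_revenue + ad_revenue

# Free access maximises the audience, and with it advertiser revenue:
revenue_free = platform_revenue(0.0)   # 0 + 50,000 from advertisers
revenue_paid = platform_revenue(5.0)   # 2,500 from users + 25,000 from ads
```

In this toy parameterisation, free user access yields 50,000 in total revenue against 27,500 when users are charged 5, which is why platforms so often give one side the service for nothing.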

An ironic implication of Zuboff’s arguments is that her neo-Marxian focus implicitly discounts what most analysts identify as the two key issues around these platforms: whether these networks are harmful for privacy and whether they are monopolistic.

First, the monopoly arguments. In Australia the ACCC has been running a digital platforms inquiry whose draft report—released in December 2018—called for using competition law against the large platforms on the basis they have started to monopolise the advertising market. There are many problems with the ACCC’s analysis. For example, it badly mangles its narrative account of how newspaper classifieds migrated online, implying Google and Facebook captured the ‘rivers of gold’. In fact, classified advertising went elsewhere (often to websites owned by the newspapers, such as Domain).

Yet the most critical failure of the ACCC is its bizarrely static perspective on an incredibly dynamic industry. True, platform markets are subject to extreme network effects—the more users, the more valuable—but this does not mean they tend towards sustainable monopolies. Far from it. There are no ‘natural’ limits to platform competition on the internet. There is unlimited space in a digital world. The only significant resource constraint is human attention, and the platform structure gives new entrants a set of strategic tools which can help jump-start competition. Using one side of the market to subsidise the other helps ‘boot-strap’ network effects.

Consumer harm is the standard criterion for whether a firm is unacceptably monopolistic. Usually this means asking whether prices are higher than they would be if the market were more contested. Given the money prices for these services are often zero, that claim is hard to sustain. Nobody pays to use Google.com. At first pass, the digital platform business seems to have been an extraordinary boost to consumer surplus.

But, again, platform economics can be strange. It is possible we are paying not with money but with personal data, and that the role of a competition authority is to protect our privacy as much as our wallets. This is the view of the ACCC (at least in its December 2018 draft report), and it has become an article of faith in the ‘hipster antitrust’ movement in the United States that competition regulators need to focus on more than just higher prices.

There is obviously a great deal to privacy concerns. In a recent book, The Classical Liberal Case for Privacy in a World of Surveillance and Technological Change (Palgrave Macmillan, 2018), I argued we are currently in an extended social negotiation about the value of privacy and its protection. But the privacy debate is characterised by a lot of misconceptions and confusions. Privacy policies and disclosures have not always been acceptable. Expectations are changing. Mark Zuckerberg would no longer get away with the reckless anti-privacy statements he made as CEO when Facebook launched. The question is whether to wait for privacy expectations to shift—supplemented by the common law—or whether governments need to step in with bold new privacy regulation.

The experience with privacy regulation so far has not been great. The European Union’s General Data Protection Regulation represents the single most significant attempt to regulate privacy thus far. The GDPR, which became enforceable in 2018, requires explicit and informed consent for data collection and use, requires that users be told how long their data will be retained, and provides for a “right of erasure” that allows users to require firms to delete any personal data they have collected, at any time. The GDPR was written so broadly as to apply to any company that does business with any European citizen, in practice making it not just a European regulation but a global one.

Early evidence suggests a host of consequences unforeseen by the GDPR’s designers. Alec Stapp, at the International Center for Law and Economics, argues GDPR compliance costs have been “astronomical”. Microsoft put as many as 1,600 engineers on GDPR compliance, and Google says it spent “hundreds of years of human time” ensuring it follows the new rules globally. These firms have the resources to do so. One consequence of high compliance costs has been to push out new competitors: small and medium internet companies that cannot dedicate thousands of engineers to regulatory compliance. As Stapp points out, it’s not at all clear this trade-off for privacy protection has been worth it: regulatory requirements for things such as data portability and right of data access have created new avenues for accidental and malicious access to private data.

A peculiarity of the history of early-stage technologies is that they tend to trade off privacy against other benefits. Communications over the telegraph were deeply insecure before the widespread use of cryptography; early telephone lines (‘party lines’) allowed neighbours to listen in. Declaring privacy dead in the digital age is not just premature, it is potentially counterproductive. We need sustained innovation and entrepreneurial energy directed at building privacy standards into technologies we now use every day.

The deplatforming question

One final and politically sensitive way these platforms might be exercising power is by using their role as mediators of public debate to favour or disfavour certain political views. This is the fear behind the deplatforming of conservatives on social media, which has seen a number of conservative and hard-right activists and personalities banned from Facebook, Instagram and Twitter. Prominent examples include the conservative conspiracist broadcaster Alex Jones, his co-panellist Paul Joseph Watson, and provocateur Milo Yiannopoulos. Social media services also have been accused of subjecting conservatives to ‘shadow bans’—adjusting their algorithms to hide specific content or users from site-wide searches.

These practices have led many conservative groups who usually oppose increases in regulation to call for government intervention. The Trump administration even launched an online tool in May 2019 for Americans to report if they suspected “political bias” had violated their freedom of speech on social media platforms.

One widely canvassed possibility is for regulators to require social media platforms to be politically neutral. This resembles the long-discredited ‘fairness doctrine’ imposed by American regulators on television and radio broadcasting until the late 1980s. The fairness doctrine prevented the rise of viewpoint-led journalism (such as Fox News) and entrenched left-leaning political views as ‘objective’ journalism. Even if such a requirement were not an obvious violation of the speech rights of private organisations, it takes some bizarre thinking to believe government bureaucrats and regulators would prioritise protecting conservatives once given the power to determine what social media networks are allowed to do.

Another proposal is to make the platforms legally liable for content posted by their users. The more the platforms exercise discretion about what is published on their networks, the more they look like they have quasi-editorial control, and courts should treat them as if they do. While this would no doubt lead to a massive surge in litigation against the platforms for content produced by users, how such an approach would protect conservative voices is unclear: fear of litigation would certainly encourage platforms to take a much heavier hand, particularly given the possibilities of litigation outside the United States where hate speech and vilification laws are common.

The genesis of this proposal seems to come from a confusion about the distinction between social media platforms and newspapers. Newspapers solicit and edit their content. Social media platforms do not. Social media platforms come from a particular political and ideological environment—the socially liberal, quasi-libertarian and individualistic worldview of Silicon Valley and the Bay Area—and these technologies now hold the cultural high ground. The conservative movement has focused on trying to change Washington DC, when it should have been just as focused on doing what Silicon Valley has done: developing new ways for people to exercise their freedom.

But regulation cannot be the answer. Regulation would dramatically empower bureaucrats, opening up new avenues for government intervention at the heart of the new economy (any proposed regulation of Facebook’s algorithm, for instance, would lay the foundation for regulating Amazon’s search algorithm, and then any firm that tries to customise and curate its product and service offerings), and threatening, not protecting, freedom of speech. To give government the power to regulate what ought to be published is a threat to all who publish, not just to a few companies in northern California.

Platform to protocol economy

I opened this article with a discussion of how recent a development the platform economy is: a decade old, at best. A host of new technologies and innovations are coming that challenge the platforms’ dominance and might radically change the competitive dynamics of the sector. New social media networks are opening all the time. Many of those who have been deplatformed have migrated to services such as Telegram or specially designed free speech networks such as Gab. Blockchain technology, for instance, is a platform technology in its own right: a decentralised (no single authority, public or private, can control its use) and open (anyone can join) protocol.

Likewise, intense innovation focusing on decentralised advertising networks threatens Google’s ad sector dominance, and offers advertisers more assurance their digital dollar is used well. Other new technologies focus on regaining control over user privacy. Cutting-edge privacy technologies such as zero-knowledge proofs open massive opportunities for hiding personal information while still participating in economic exchange and social interactions. Blockchain applications are being developed to give users genuine control over data and facilitate the sort of private property rights over information the European Union’s GDPR awkwardly tries (and fails) to create.

The platforms know they face an uncertain and more competitive technological future. That is why Facebook is developing its own cryptocurrency—a pivot into financial services, much as the Chinese social media service WeChat developed WeChat Pay. Google is investing serious resources into blockchain research, despite the technology’s long-run potential to displace its competitive advantages. The internet 10 years on will look very different—not because governments decided to regulate, but because digital entrepreneurs will have kept pushing, bringing us new products and services, revolutionising the global economy.

Some Economic Consequences of the GDPR

With Darcy Allen, Alastair Berg, Brendan Markey-Towler, and Jason Potts. Published in Economics Bulletin, Volume 39, Issue 2, pages 785-797. Originally a Medium post.

Abstract: The EU General Data Protection Regulation (GDPR) is a wide ranging personal data protection regime of greater magnitude than any similar regulation previously in the EU, or elsewhere. In this paper, we outline how the GDPR impacts the value of data held by data collectors before proposing some potential unintended consequences. Given the distortions of the GDPR on data value, we propose that new complex financial products—essentially new data insurance markets—will emerge, potentially leading to further systematic risks. Finally we examine how market-driven solutions to the data property rights problems the GDPR seeks to solve—particularly using blockchain technology as economic infrastructure for data rights—might be less distortionary.

Available here.

Submission to the Australian Competition and Consumer Commission’s Digital Platforms Inquiry

With Gus Hurwitz.

Executive summary: The analysis in the Australian Competition and Consumer Commission’s Preliminary Report for the Digital Platforms Inquiry is inadequate in several ways, most notably:

  • It mischaracterises the relationship between changes in the economics of media advertising and the rise of digital platforms such as Facebook and Google.
  • Its analysis of the dynamics of media diversity is misguided.
  • Its competition analysis assumes its results and makes unsupportable claims about the division of advertising markets.
  • It is recklessly unconcerned with the freedom of speech consequences of its recommendations.
  • It fails to recognise, and proposes to supplant, the ongoing social negotiation over data privacy.
  • It provides a poor analytic base on which to make policy recommendations, as it applies a static, rather than dynamic, approach to its analysis.

There is a real danger that if the policy recommendations outlined in the preliminary report were to be adopted, Australian consumers would be severely harmed.

Available here.

The Classical Liberal Case for Privacy in a World of Surveillance and Technological Change

Palgrave Macmillan, 2018

How should a free society protect privacy? Dramatic changes in national security law and surveillance, as well as technological changes from social media to smart cities mean that our ideas about privacy and its protection are being challenged like never before. In this interdisciplinary book, Chris Berg explores what classical liberal approaches to privacy can bring to current debates about surveillance, encryption and new financial technologies. Ultimately, he argues that the principles of classical liberalism – the rule of law, individual rights, property and entrepreneurial evolution – can help extend as well as critique contemporary philosophical theories of privacy.

Available from Palgrave Macmillan.

Some economic consequences of the GDPR

With Darcy Allen, Alastair Berg and Jason Potts.

At the end of May 2018, the most far-reaching data protection and privacy regime ever seen will come into effect. Although the General Data Protection Regulation (GDPR) is a European law, it will have a global impact. There are likely to be some unintended consequences of the GDPR.

As we outline in a recent working paper, the implementation of the GDPR opens the potential for new data markets in tradable (possibly securitised) financial instruments. People’s data is better protected through self-governance solutions, including the application of blockchain technology.

The GDPR is in effect a global regulation. It applies to any company which has a European customer, no matter where that company is based. Even offering the use of a European currency on your website, or having information in a European language may be considered offering goods and services to an EU data subject for the purposes of the GDPR.

The remit of the regulation is as broad as its territorial scope. The rights of data subjects include that of data access, rectification, the right to withdraw consent, erasure and portability. Organisations using personal data in the course of business must abide by strict technical and organisational requirements. These restrictions include gaining explicit consent and justifying the collection of each individual piece of personal data. Organisations must also employ a Data Protection Officer (DPO) to monitor compliance with the 261-page document.

Organisations collect data from customers for a range of reasons, both commercial and regulatory — organisations need to know who they are dealing with. Banks will not lend money to someone they don’t know; they need to have a level of assurance over their customer’s willingness and ability to repay. Similarly, many organisations are forced to collect increasingly large amounts of personal data about their customers. Anti-money laundering and counter-terrorism financing legislation (AML/CTF) requires many institutions to monitor their customers’ activity on an ongoing basis. In addition, many organisations derive significant value from personal data. Consumers and organisations exchange data for services, much of which is voluntary and to their mutual benefit.

One of the most discussed aspects of the GDPR is the right to erasure — often referred to as the right to be forgotten. This allows data subjects to call on the state to compel companies that hold their personal data to delete it.

We propose that the right to erasure creates uncertainty over the value of data held by organisations. This creates an option on that data.

The right to erasure creates uncertainty over the value of the data to the data collector. At any point in time, the data subject may withdraw consent. During a transaction, or perhaps in return for some free service, a data subject may consent to have their personal data sold to a third party such as an advertiser or market researcher. Up until an (unknown) point in time — when the data subject may or may not withdraw consent to their data being used — that personal data holds positive value. This is in effect a put option on that data — the option to sell that data to a third party.

The value of such an option is derived from the value of the underlying asset — the data — which in turn depends on the continued consent by the data subject.

Rational economic actors will respond in predictable ways to manage such risk. Data-Backed Securities (DBS) might allow organisations to convert unpredictable future revenue streams into one single payment. Collateralised Data Obligations (CDO) might allow data collectors to package personal data into tranches of varying risk of consent withdrawal. A secondary data derivative market is thus created — one whose operation, and any secondary effects, we can barely anticipate.
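The option framing above can be made concrete with a toy calculation. Assuming, purely for illustration, that consent withdrawal arrives as a Poisson process with a constant hazard rate, the expected value of a data record is its sale price discounted by the probability that consent survives until the sale. All figures and function names below are hypothetical:

```python
import math

def expected_data_value(sale_price: float, hazard_rate: float,
                        sale_time: float) -> float:
    """Expected value of a data record that can be sold for `sale_price`
    at `sale_time` (years from now), assuming the subject withdraws
    consent as a Poisson process with rate `hazard_rate` withdrawals
    per year. The record is worthless if consent is withdrawn before
    the sale, so the price is discounted by the survival probability
    exp(-hazard_rate * sale_time)."""
    return sale_price * math.exp(-hazard_rate * sale_time)

# A record worth $10 if sold in one year, with a 20% annual
# withdrawal hazard:
print(round(expected_data_value(10.0, 0.2, 1.0), 2))  # 8.19
```

A tranching scheme of the CDO kind described above would slice a pool of such records by estimated hazard rate, with the senior tranche holding the data least likely to have consent withdrawn.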

Such responses to regulatory intervention are not new. The Global Financial Crisis (GFC) was at least in part caused by complex and poorly understood financial instruments like Mortgage-Backed Securities (MBS) and Collateralised Debt Obligations (CDO). These were developed in response to poorly designed capital requirements.

Similarly, global AML/CTF requirements faced by financial institutions have caused many firms to simply stop offering their products to certain individuals and even whole regions of the world. The unbanked and underbanked are all the poorer as a result.

What these two examples have in common is that they both have good intentions. Adequate capital requirements and the prevention of money laundering are good things, but good intentions are not enough. Secondary consequences should always be considered and discussed.

Self-governance alternatives, including the application of blockchain technology, should be considered. These alternatives use technology to allow individuals greater control over the personal data they share with the world.

Innovators developing self-sovereign identity solutions are attempting to provide a market-based way for individuals to gain greater control over — and derive value from — their personal data. These solutions allow users to share just enough data for a transaction to go ahead. A bartender doesn’t need to know your name or address when you want a drink; they just need to know you are of legal age.
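A minimal sketch of this "just enough data" idea is a hash commitment: an issuer commits to each identity attribute separately, and the holder reveals only the attribute a verifier needs. Production self-sovereign identity systems use signed verifiable credentials and zero-knowledge proofs rather than bare hashes; everything below, including the attribute names, is a simplified illustration:

```python
import hashlib
import os

def commit(value: str) -> tuple:
    """Commit to a single attribute: the digest can be published (or
    signed by an issuer) without revealing the value itself."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + value.encode()).hexdigest()
    return salt, digest

def reveal_and_check(salt: bytes, value: str, digest: str) -> bool:
    """The holder discloses one attribute plus its salt; the verifier
    recomputes the digest and compares it to the committed one."""
    return hashlib.sha256(salt + value.encode()).hexdigest() == digest

# The issuer commits to each attribute separately:
identity = {"name": "Alice Example", "address": "1 Example St",
            "over_18": "true"}
commitments = {attr: commit(val) for attr, val in identity.items()}

# To buy a drink, the holder reveals only the age attribute;
# name and address stay hidden behind their digests.
salt, digest = commitments["over_18"]
print(reveal_and_check(salt, "true", digest))   # True
print(reveal_and_check(salt, "false", digest))  # False
```

The salt prevents a verifier from brute-forcing low-entropy attributes (such as a yes/no age flag) from the published digests alone.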

Past instances of regulatory intervention should make us wary of assuming that even well-meaning regulation will achieve its stated objectives without negative effects. Self-sovereign identity and the use of blockchain technology offer a promising solution to the challenges of data privacy.

Government must leave encryption alone, or it will endanger blockchain

With Sinclair Davidson and Jason Potts

If we could give Malcolm Turnbull one piece of economic advice right now – one piece of advice about how to protect the economy against a challenging and uncertain future – it would be this: don’t mess with encryption.

Earlier this month the government announced that it was going to “impose an obligation” on device manufacturers and service providers to provide law enforcement authorities access to encrypted information on the presentation of a warrant.

At the moment it’s unclear what exactly this means. Attorney-General George Brandis and Malcolm Turnbull have repeatedly denied they want a legislated “backdoor” into encrypted devices, but the loose way they’ve used that language suggests some sort of backdoor requirement is still a real possibility.

Hopefully we’ll discover more when the legislation is introduced in the August sitting weeks. Turnbull did say at the press conference “I’m not suggesting this is not without some difficulty”. The government may not have made any final decisions yet.

But before any legislation is introduced, the government needs to understand what the stakes are as it moves against encryption.

Anything the government does to undermine the reliability of encryption could have deleterious consequences for what we believe will be the engine of economic growth in decades to come: the blockchain protocol.

The blockchain is the distributed and decentralised ledger that powers the Bitcoin cryptocurrency. Blockchain constitutes a suite of five technologies: cryptography, a database that can be added to but not altered, peer-to-peer networking, an application of game theory, and an algorithm for ensuring a consensus about what information is held on the ledger.

Taken separately, these are long established technologies and techniques – even mundane ones. But taken together, they constitute an entirely new tool for creating political, economic, and social relationships.

The possibilities far exceed digital currencies. Already banks and other financial institutions are trying to integrate blockchains into their business structures: blockchains drastically reduce the costs of tracking, recording, and verifying transactions. Almost anything a business or government organisation now does with a database can be done more efficiently, more reliably, and more cheaply with a blockchain – property registers, intellectual property, security and logistics, healthcare records, you name it.

But these much publicised blockchain applications are just a small taste of the technology’s possibility. “Smart” self-executing contracts and massively distributed organisational structures enabled by the blockchain will allow the creation of new forms of business structures and new ways to work together in every sector and every industry.

In fact, we think that the blockchain is so significant that it should be treated as its own category of human organisation. There are firms, there are markets, there are governments, and now there are blockchains.

But the blockchain revolution is not inevitable.

If there is one key technology in the blockchain, it is cryptography. There are lots of Silicon Valley entrepreneurs playing around with lots of different adaptations of the blockchain protocol, but this one is a constant: the blockchain’s nested levels of encryption are built to ensure that once something is placed on the blockchain it is permanent, immutable, and only accessible to those who own it.

Blockchains only work because their users have absolute confidence that the system is secure.
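That security property can be glimpsed in miniature in a hash-chained ledger, where each block commits to the hash of its predecessor. The sketch below covers only two of the five components named above, cryptography and the append-only database; peer-to-peer networking, game theory and consensus are assumed away:

```python
import hashlib
import json

class Ledger:
    """Toy append-only ledger: altering any earlier entry changes its
    hash and so breaks every later block's 'prev' link."""

    def __init__(self):
        genesis = {"index": 0, "data": "genesis", "prev": "0" * 64}
        genesis["hash"] = self._hash(genesis)
        self.chain = [genesis]

    @staticmethod
    def _hash(block):
        # Hash only the block's contents, not its stored hash field.
        payload = json.dumps(
            {k: block[k] for k in ("index", "data", "prev")},
            sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    def append(self, data):
        prev = self.chain[-1]
        block = {"index": prev["index"] + 1, "data": data,
                 "prev": prev["hash"]}
        block["hash"] = self._hash(block)
        self.chain.append(block)

    def verify(self):
        # Recompute every hash and check each block's link backwards.
        for i, block in enumerate(self.chain):
            if block["hash"] != self._hash(block):
                return False
            if i > 0 and block["prev"] != self.chain[i - 1]["hash"]:
                return False
        return True

ledger = Ledger()
ledger.append({"event": "property transfer", "parcel": "lot 12"})
print(ledger.verify())  # True
ledger.chain[1]["data"] = {"event": "forged transfer", "parcel": "lot 12"}
print(ledger.verify())  # False
```

Tampering is detectable by anyone holding a copy of the chain; it is the other three components that make it practically impossible on a live, distributed network.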

Any legal restrictions, constraints or hurdles placed on encryption will be a barrier to the introduction of this remarkable new economic technology. In fact, any suggestion of future regulatory challenges to encryption will pull the handbrake on blockchain in Australia. In the wake of the banking, mining and carbon taxes, Australia already has a serious regime uncertainty problem.

Melbourne in particular is starting to see the growth of a small but promising financial technology industry of which blockchain is a central part. The Australian Financial Review reported earlier this week about the opening of a new fintech hub Stone & Chalk in the establishment heart of Collins St. What’s happening in Melbourne is exactly the sort of innovation-led economic growth that the Coalition government was talking about in the 2016 election.

But the government won’t be able to cash in on those innovation dividends if they threaten encryption: the simple and essential technology at the heart of the blockchain.

Medicare details available on dark web is just tip of data breach iceberg

Modern governments use a lot of data. A lot. Our social services are organised by massive databases. Health, welfare, education and the pension all require reams of information about identity, social needs, eligibility, and entitlement.

Our infrastructure is managed by massive databases holding information about traffic flows, public transport usage, communications networks, and population flows.

Our security is maintained by complex information systems managing defence assets, intelligence data, and capabilities and deployment information.

We should be thinking about these enormous data holdings when we read the news that thieves have been selling Medicare numbers linked to identities on the “dark web” – a mostly untraceable anonymous corner of the internet.

That last detail is what has made this such a scandal for the government, as Human Services Minister Alan Tudge and the Australian Federal Police have scrambled to identify the system’s weaknesses.

But the fact that the Medicare numbers are being sold is the only thing that makes this an unusual data security breach. Australian government databases are constantly being accessed by people who are not authorised to do so.

Here’s just a taste. Last year the Queensland Crime and Corruption Commission revealed it had laid 81 criminal charges and 11 disciplinary recommendations in the space of 12 months for unauthorised access to confidential information by police. One of those was a police officer who had been trawling through crime databases looking for information about people he had met on a dating service. He was convicted of 50 charges of unauthorised access.

A Queensland police officer was disciplined in May this year for using the police database to share the address of a woman with her husband who was subject to a restraining order.

The Victorian government’s police database was wrongly accessed 214 times between 2008 and 2013, by “hundreds” of officers.

Earlier this year 12 staff were fired from the Australian Taxation Office for accessing tax data on celebrities and people they knew.

We could go on. These of course are the instances we know about because they have been detected and reported on. There are undoubtedly others.

Governments manage a lot of data because we ask them to do a lot, and to do it well.

They run thousands of complex systems. Many of these systems have been jerry-rigged and adapted from earlier systems, a series of politicised, over-budget and under-delivering IT projects stacked on top of each other over decades.

But these repeated episodes of unauthorised access show that these complex systems are in dire need of reform.

It is clear that the “permission” structures on these government databases are deeply broken.

In the debate over mandatory data retention one of the big questions was whether law enforcement and regulatory agencies should have to obtain a warrant before accessing stored data. In the end the government decided no warrant was necessary – because warrants could only slow down investigations.

This is exactly the sort of loose permission structure that leads to abuse. Just two weeks after data retention officially came into effect this April, the Australian Federal Police admitted one of its members had illegally accessed the metadata of a journalist.

This breach was entirely predictable. Data retention opponents repeatedly predicted it.

Last week’s Medicare breach has been made possible because thousands and thousands of people – bureaucrats, health professionals, and so on – can access the Medicare database. Most police officers, bureaucrats, and health professionals are trustworthy. But it only takes a few bad actors to wreck a system built on trust.

Rather than leaving data access up to the discretion of thousands of people, we need stricter codified rules on data access. Government databases need to be restructured to prevent government employees from going on fishing expeditions through our data, not simply to penalise them afterwards.

The point isn’t to provide a legal or technological fix to the problem of unauthorised access. Rather, we should completely reimagine who owns the information that the government keeps on all of us. We ought to own and control our information, not the state.

New cryptographic technologies, increasingly applied to blockchain and cryptocurrency applications, allow for even greater personal control over information. If applied, they would allow government agents to learn only what they need to know.

And it would move us from a system of surveillance and big data, to one of personal disclosure and privacy.

In the past, economic reform was targeted at big sectors like banking, telecommunications, and trade.

As Australian governments evolve inevitably into complex information brokers, the next wave of reform will have to focus on data management.