Suddenly, we live in a world of policy
dilemmas around social media, digital platforms, personal data, and digital privacy.
Voices on both sides of politics are loudly proclaiming we ought to regulate Facebook
and Google. From the left, these calls focus on antitrust and competition law—the
big platforms are too large, too dominant in their respective markets, and
governments need to step in. From the right, conservatives are angry that social
media services are deplatforming some popular voices and call for some sort of neutrality
standard to be applied to these new ‘utilities’.
Less politically charged but nonetheless
highly salient are the concerns about the collection and use of personal data. If
‘data is the new oil’—a commodity around which the global economy pivots—then Facebook
and Google look disturbingly like the OPEC oil production cartel. These firms use
that data to train artificial intelligence (AI) and serve advertisements to consumers
with unparalleled precision. No longer is it the case that half of all
advertising spending is wasted.
These policy dilemmas have come about
because the digital environment has changed, and it has changed sharply. Facebook
only opened to the public in 2006 and by 2009 already had 242 million users. By
2019 it had 2.38 billion users.
Facebook is not just central to our
lives—one of the primary ways so many of us communicate with family, friends
and distant acquaintances—but central to our politics. The first volume of the
Mueller investigation into Russian interference in the 2016 American presidential
election focused on the use of sock-puppet social media accounts by malicious
Russian sponsors. There’s no reason to believe these efforts influenced the
election outcome but it is nonetheless remarkable that, through Facebook, Russian
agents were able to fraudulently organise political protests (for both left and
right causes)—sometimes with hundreds of attendees—by pretending to be
Americans.
There always has been and always
will be debate about tax rates, free trade versus protectionism, monetary
policy and banking, Nanny State paternalism, or whether railways should be privatised
or nationalised. The arguments have been rehearsed since the 19th century, or
even earlier. But we are poorly prepared not just for these topics of digital rights
and data surveillance, but for new dimensions on which we might judge our freedoms
or economic rights.
Private firms are hoovering up
vast quantities of data about us in exchange for providing services. With that
data they can, if they like, map our lives—our relationships, activities, preferences—with
a degree of exactness and sophistication we, as individuals, may not be able to
do ourselves. How should we think about Facebook knowing more about our relationships
than we do? Do we need to start regulating the new digital economy?
The surveillance economy
One prominent extended case for greater
government control is made by Shoshana Zuboff, in her recent book The Age of Surveillance
Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs,
2019). For Zuboff, a professor at Harvard Business School, these new digital technologies
present a new economic system, surveillance capitalism, that “claims human experience
as free raw material for translation into behavioural data”.
Zuboff argues these new firms look
a lot like the industrial behemoths of the 19th and 20th century. Google is like
General Motors in its heyday, or the robber barons of the Gilded Age. Using Marxist-tinged
language, she describes how firms claim the ‘behavioural surplus’ of this data
to feed AI learning and predict our future desires—think Amazon or Netflix recommendation
engines.
More sinisterly in Zuboff’s telling,
these firms are not simply predicting our future preferences, but shaping them too:
“It is no longer enough to automate information flows about us; the goal now is
to automate us.” Netflix can put its own content at the top of its recommendation
algorithm; Pokémon Go players tend to shop at restaurants and stores near the
most valuable creatures.
Where many people spent years
worrying about government surveillance in the wake of Edward Snowden’s leaks about
the National Security Agency, she argues the NSA learned these techniques from Google—surveillance
capitalism begets surveillance state. At least the NSA is just focused on spying.
Silicon Valley wants to manipulate: “Push and pull, suggest, nudge, cajole, shame,
seduce,” she writes. “Google wants to be your co-pilot for life itself.”
Harrowing stuff. But these concerns
would be more compelling if Zuboff had seriously engaged with the underlying
economics of the business models she purports to analyse. Her argument—structured
around an unclearly specified model of ‘surveillance assets’, ‘surveillance revenues’,
and ‘surveillance capital’—is a modification of the internet-era adage, “If you’re
not paying for the product, you are the product”. Many services we use online
are free. The platforms use data about our activities on those platforms to make
predictions—for example, about goods and services we might like to consume—and sell
those predictions to advertisers. As she describes it:
… we are the objects from which raw materials are extracted and expropriated for Google’s prediction factories. Predictions about our behaviour are Google’s products, and they are sold to its actual customers but not to us. We are the means to others’ ends.
… the essence of the exploitation here is the rendering of our lives as behavioural data for the sake of others’ improved control of us.
This argument misses a crucial step:
what is this control? For the most part, the product derived from our data that
is sold to other firms is advertising space: banner ads on news websites, ads dropped
into social media feeds, ads threaded above our email inboxes. Seeing an advertisement
is not the same as being controlled by a company. The history of advertising dates
back at least to Ancient Rome. We are well familiar with the experience of
companies trying to sell us products. We do not have to buy if we do not like
the look of the products displayed on our feeds. It’s a crudely simple point,
but if we do not buy, all that money—all that deep-learning technology, all
those neural networks, all that ‘surveillance’—has been wasted.
Two-sided markets
So how should we think about the
economics of the big technology companies? Google and Facebook are platforms;
what Nobel-winning economist Jean Tirole described as ‘two-sided’ markets.
Until recently the dominant market structure was the single-sided market: think of a
supermarket. A supermarket has a one-directional value chain, moving goods from
producers to consumers. Goods are offered to customers on a take-it-or-leave-it
basis. In a two-sided market, customers are on both sides of the market. The service
Google and Facebook provide is matching. They want advertisers to build relationships
with users and vice-versa. Since the first scholarly work done on two-sided markets,
economists have observed platforms that take three or more groups of users and
match them together.
Two-sided markets are not new, of
course. Newspapers have traditionally done this: match advertisers with readers.
Banks match borrowers with lenders. Tirole’s early
work looked specifically at credit card networks. But two-sided markets dominate
the online world, and as the economy becomes more digital they are increasingly
important. When we try to define what is unique about the ‘sharing economy’,
we’re really just talking about two-sided markets: AirBnB matches holidaymakers
with empty homes, Uber matches drivers with riders, AirTasker matches labour with
odd jobs. Sometimes single and two-sided markets co-exist: Amazon’s two-sided
marketplace sits alongside its more traditional online store.
The economic dynamics of two-sided
markets are very different from those we are used to in the industrial
economy. They are strongly characterised by network effects: the more users
they have on both sides, the more valuable they are. So firms tend to price
access in strange ways. Just as advertisers subsidised the cost of 20th century
newspapers, Google and Facebook give us free access not because we are paying in
personal data but because they are in the relationship business. Payments go in
funny directions on platforms, and the more sides there are the more opaque the
business model can seem.
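The cross-subsidy logic can be sketched in a toy model. All the functional forms and numbers below are illustrative assumptions, not figures drawn from any real platform: the point is only that when advertisers value audience size strongly, cutting the user-side price to zero can raise the platform's total revenue.

```python
# Toy model of two-sided platform pricing. All functional forms and numbers
# are illustrative assumptions, not data about any actual platform.

def users_joining(user_price, advertisers):
    # Users dislike fees and only mildly value the advertiser side.
    return max(0.0, 100 - 10 * user_price + 0.1 * advertisers)

def advertisers_joining(ad_price, users):
    # Advertisers strongly value a large audience (cross-side network effect).
    return max(0.0, 20 - 2 * ad_price + 1.0 * users)

def equilibrium(user_price, ad_price, rounds=500):
    # Iterate the two participation equations to their joint fixed point.
    users = ads = 0.0
    for _ in range(rounds):
        users = users_joining(user_price, ads)
        ads = advertisers_joining(ad_price, users)
    return users, ads

def platform_revenue(user_price, ad_price):
    users, ads = equilibrium(user_price, ad_price)
    return user_price * users + ad_price * ads

# Charging users shrinks the audience, which shrinks the advertiser side;
# with these numbers, free access for users earns the platform more overall.
charged = platform_revenue(4.0, 10.0)
free = platform_revenue(0.0, 10.0)
```

The fixed-point iteration is what captures the feedback between the two sides: each side's willingness to participate depends on the size of the other, which is why prices "go in funny directions" on platforms.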
An ironic implication of Zuboff’s
arguments is that her neo-Marxian focus implicitly discounts what most analysts
identify as the two key issues around these platforms: whether these networks are
harmful for privacy and whether they are monopolistic.
First, the monopoly arguments. In
Australia the ACCC has been running a digital platforms inquiry whose draft report—released
in December 2018—called for using competition law against the large platforms on
the basis they have started to monopolise the advertising market. There are many
problems with the ACCC’s analysis. For example, it badly mangles its narrative account
of how newspaper classifieds migrated online, implying Google and Facebook
captured the ‘rivers of gold’. In fact, classified advertising went elsewhere (often
to websites owned by the newspapers, such as Domain).
Yet the most critical failure of
the ACCC is its bizarrely static perspective of an incredibly dynamic industry.
True, platform markets are subject to extreme network effects—the more users, the
more valuable—but this does not mean they tend towards sustainable monopolies.
Far from it. There are no ‘natural’ limits to platform competition on the
internet. There is unlimited space in a digital world. The only significant
resource constraint is human attention, and the platform structure gives new
entrants a set of strategic tools which can help jump-start competition. Using one
side of the market to subsidise another side of the market helps ‘boot-strap’
network effects.
Consumer harm is the standard
criterion for judging whether a firm is unacceptably monopolistic. Usually this means asking
whether prices are higher than they would be if the market were more contested. Given
the money prices for these services are often zero, that’s hard to sustain. Nobody
pays to use Google.com. At first pass the digital platform business seems to have
been an extraordinary boost to consumer surplus.
But, again, platform economics
can be strange. It is possible we are paying not with money but with personal
data, and that the role of a competition authority should be to protect our privacy as much
as our wallets. This is the view of the ACCC (at least in its December 2018 draft
report), and it has become an article of faith in the ‘hipster antitrust’ movement
in the United States, which holds that competition regulators need to focus on more than
just higher prices.
There is obviously a great deal to
the privacy concerns. In a recent book, The Classical Liberal Case for Privacy in a
World of Surveillance and Technological Change (Palgrave Macmillan, 2018), I argued
we currently are in an extended social negotiation about the value of privacy
and its protection. But the privacy debate is characterised by a lot of
misconceptions and confusions. What counts as an acceptable privacy policy or
disclosure has shifted over time. Expectations are changing. Mark Zuckerberg would no longer get
away with the reckless anti-privacy statements he made as CEO when Facebook launched.
The question is whether to wait for privacy expectations to shift—supplemented
by the common law—or whether governments need to step in with bold new privacy
regulation.
The experience with privacy regulation
so far has not been great. The European Union’s General Data Protection Regulation
presents the single most significant attempt to regulate privacy thus far. The GDPR,
which became enforceable in 2018, requires explicit and informed consent for data
collection and use, obliges firms to tell users how long their data will be retained,
and provides for a “right to erasure” that allows users to require firms to delete
any personal data they have collected at any time. The GDPR was written so broadly
as to apply to any company that does business with any European citizen, in
practice making the GDPR not just a European regulation but a global one.
Early evidence suggests a host of consequences
unforeseen by the GDPR’s designers. Alex Stapp, at the International Center for
Law and Economics, argues GDPR compliance costs have been “astronomical”. Microsoft
put as many as 1,600 engineers on GDPR compliance, and Google says it spent “hundreds
of years of human time” ensuring it follows the new rules globally. These
firms have the resources to do so. One consequence of high compliance costs has
been to push out new competitors: small and medium internet companies that cannot
dedicate thousands of engineers to regulatory compliance. As Stapp points out,
it’s not at all clear this trade-off for privacy protection has been worth it: regulatory
requirements for things such as data portability and right of data access have created
new avenues for accidental and malicious access to private data.
A peculiarity of the history of
early-stage technologies is they tend to trade off privacy against other benefits.
Communications over the telegraph were deeply insecure before the widespread use
of cryptography; early telephone lines (‘party lines’) allowed neighbours to listen
in. Declaring privacy dead in the digital age is not just premature, it is potentially
counterproductive. We need sustained innovation and entrepreneurial energy
directed at building privacy standards into technologies we now use every day.
The deplatforming question
One final and politically sensitive
way these platforms might be exercising power is by using their role as
mediators of public debate to favour or disfavour certain political views. This
is the fear behind the deplatforming of conservatives on social media, which has
seen a number of conservative and hard-right activists and personalities banned
from Facebook, Instagram and Twitter. Prominent examples include the conservative
conspiracist broadcaster Alex Jones, his co-panellist Paul Joseph Watson, and provocateur
Milo Yiannopoulos. Social media services also have been accused of subjecting conservatives
to ‘shadow bans’—adjusting their algorithms to hide specific content or users
from site-wide searches.
These practices have led many
conservative groups who usually oppose increases in regulation to call for
government intervention. The Trump administration even launched an online tool in
May 2019 for Americans to report if they suspected “political bias” had violated
their freedom of speech on social media platforms.
One widely canvassed possibility is
for regulators to require social media platforms to be politically neutral.
This resembles the long-discredited ‘fairness doctrine’ imposed by American regulators
on television and radio broadcasting until the late 1980s. The fairness doctrine
prevented the rise of viewpoint-led journalism (such as Fox News) and
entrenched left-leaning political views as ‘objective’ journalism. Even if this
was not an obvious violation of the speech rights of private organisations, it
takes some bizarre thinking to believe government bureaucrats and regulators would
prioritise protecting conservatives once given the power to determine what social
media networks are allowed to do.
Another proposal is to make the
platforms legally liable for content posted by their users. The more the platforms
exercise discretion about what is published on their networks, the more they
look like they have quasi-editorial control, and courts should treat them as if
they do. While this would no doubt lead to a massive surge in litigation against
the platforms for content produced by users, how such an approach would protect
conservative voices is unclear: fear of litigation would certainly encourage platforms
to take a much heavier hand, particularly given the possibilities of litigation
outside the United States where hate speech and vilification laws are common.
The genesis of this proposal
seems to come from a confusion about the distinction between social media
platforms and newspapers. Newspapers solicit and edit their content. Social media
platforms do not. Social media platforms come from a particular political and ideological
environment—the socially liberal, quasi-libertarian and individualistic worldview
of Silicon Valley and the Bay Area—and these technologies now hold the cultural
high-ground. The conservative movement has focused on trying to change
Washington DC when it should have been just as focused on developing new ways for
people to exercise their freedom, as Silicon Valley has done.
But regulation cannot be the answer.
Regulation would dramatically empower bureaucrats, opening up new avenues for government
intervention at the heart of the new economy (any proposed regulation of Facebook’s
algorithm, for instance, would lay the foundation for regulating Amazon’s
search algorithm, and then any firm that tries to customise and curate their
product and service offerings), and threatening, not protecting, freedom of speech.
To give government the power to regulate what ought to be published is a threat
to all who publish, not to just a few companies in northern California.
Platform to protocol economy
I opened this article with a
discussion of how recent a development the platform economy is: a decade old,
at best. A host of new technologies and innovations are coming that challenge
the platforms’ dominance and might radically change the competitive dynamic of
the sector. New social media networks are opening all the time. Many of those who
have been deplatformed have migrated to services such as Telegram or specially
designed free speech networks such as Gab. Blockchain technology goes further,
replacing the platform with a protocol that is decentralised (no single authority, public or private,
can control its use) and open (anyone can join).
Likewise, intense innovation focusing
on decentralised advertising networks threatens Google’s ad sector dominance, and
offers advertisers more assurance their digital dollar is used well. Other new
technologies focus on regaining control over user privacy. Cutting-edge privacy
technologies such as zero-knowledge proofs open massive opportunities for hiding
personal information while still participating in economic exchange and social interactions.
Blockchain applications are being developed to give users genuine control over data
and facilitate the sort of private property rights over information the
European Union’s GDPR awkwardly tries (and fails) to create.
The platforms know they face an uncertain
and more competitive technological future. That is why Facebook is developing its
own cryptocurrency—a pivot into financial services, just as the Chinese social media service WeChat
developed WeChat Pay. Google is investing serious resources into blockchain research,
despite the technology’s long-run potential to displace its competitive advantages.
The internet 10 years on will look very different—not because governments
decided to regulate, but because digital entrepreneurs will have kept pushing, bringing
us new products and services, revolutionising the global economy.