A Facelift for Facebook - OPINION

10 October 2021

The free market has never been a free-for-all, yet tech companies have long operated with few constraints on their business models. Perhaps the latest Facebook scandal will finally provide the impetus governments need to take effective action – beginning with the implementation of digital operating permits.

If the testimony of whistleblower Frances Haugen, a former Facebook data scientist, before a US Senate subcommittee told us anything, it was that tech companies cannot be relied upon to regulate themselves. And why should they be? It is a basic principle of modern economics that governments mandate the terms of operation for business. The real question is how governments can best exercise this authority when it comes to the tech sector.

Facebook, Google, Amazon, and other tech companies have been allowed to develop unprecedented surveillance-based business models that include relentless capture of personal data, including geographic locations, and the manipulation of users with hyper-targeted content. And yet, as Haugen testified, “Almost no one outside of Facebook knows what happens inside Facebook.” The KGB would have been envious.

But tech companies also serve a useful purpose. They have built much of the public infrastructure of the digital age, including search engines, global portals for news and social networking, GPS-based navigation apps, online commercial and job marketplaces, and movie, music, and livestreaming platforms.

To enable digital platforms to continue performing a beneficent role, while minimizing their harm, governments should require them to secure “digital operating licenses.” There is plenty of precedent for this: from grocery stores to nuclear power plants to pharmaceutical manufacturing facilities, traditional businesses must be granted various licenses and permits before they can begin operations, not least to ensure the safety of workers, customers, the environment, and the local community.

Likewise, to be granted a digital operating license, tech companies would have to meet certain conditions. The first would be to obtain users’ explicit permission before collecting any personal data, through an “opt-in” consent system that must be periodically renewed, rather than the “opt-out” defaults that prevail today.
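To make the idea concrete, here is a minimal sketch of what opt-in consent with periodic renewal could look like in practice; the class name, data categories, and 12-month renewal window are illustrative assumptions, not anything the proposal specifies.

```python
from datetime import datetime, timedelta

RENEWAL_WINDOW = timedelta(days=365)  # assumed renewal period for this sketch

class ConsentLedger:
    """Tracks explicit, time-limited opt-in grants; everything is off by default."""

    def __init__(self):
        self._grants = {}  # data category -> time the user opted in

    def grant(self, category: str) -> None:
        # Record an explicit opt-in for one category of personal data.
        self._grants[category] = datetime.now()

    def may_collect(self, category: str) -> bool:
        # Collection is allowed only if consent exists and has not lapsed.
        granted_at = self._grants.get(category)
        return granted_at is not None and datetime.now() - granted_at < RENEWAL_WINDOW


ledger = ConsentLedger()
print(ledger.may_collect("location"))  # False: nothing is collected by default
ledger.grant("location")
print(ledger.may_collect("location"))  # True, until the grant lapses and must be renewed
```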

Since tech companies first developed their business models, users’ private data have become their real cash cow. Companies sell psychographic profiles of users to advertisers and political operatives, who then target them with manipulative content. There have also been leaks. In 2014, for example, the political consultancy Cambridge Analytica harvested information from the Facebook profiles of more than 87 million users – information that it used to try to sway voters. Five years later, Facebook leaked 530 million users’ private data.

Tech executives argue that their data grabs benefit users by giving them personalized ads that cater to their individual desires. But how many times do you need to see ads for hiking boots, especially after you have purchased them? The risks of the surveillance-capitalism business model far outweigh the benefits.

The digital operating permit could also require companies to ensure compatibility with “middleware,” third-party software that helps users to manage their online experience. Software that blocks online advertisements is one example. Another is a smartphone app that allows users to turn data collection and location tracking on and off as needed, at the touch of a button. Want to call a taxi? Turn on location tracking so the driver knows where to find you, and then turn it off – no more tracking, and no transaction data retained. If this functionality – a limited version of which is now built into Apple’s iOS – becomes widespread, it could upend Facebook’s “data-grab-for-profit” model.
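As a hypothetical sketch of such middleware (the interface and names are assumptions, not an existing product), a consent layer could sit between an app and a platform’s data-collection calls and forward data only while the relevant switch is on:

```python
class ConsentMiddleware:
    """A thin layer between an app and a platform's data-collection hooks.

    Data in a given category is forwarded only while its switch is on;
    when the switch is off, nothing is sent and nothing is retained.
    """

    def __init__(self):
        self._switches = {"location": False, "ad_tracking": False}  # off by default

    def enable(self, category: str) -> None:
        self._switches[category] = True

    def disable(self, category: str) -> None:
        self._switches[category] = False

    def share(self, category: str, payload: dict):
        # Forward the payload only if the user has switched this category on.
        return payload if self._switches.get(category, False) else None


mw = ConsentMiddleware()
mw.enable("location")
print(mw.share("location", {"lat": 40.7, "lon": -74.0}))  # forwarded so the driver can find you
mw.disable("location")
print(mw.share("location", {"lat": 40.7, "lon": -74.0}))  # None: tracking is off, nothing retained
```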

Other middleware could target “dark patterns” in platform design: engagement tricks like infinite scroll, autoplay, pop-ups, and automated recommendations, which keep users clicking and viewing. Platforms like Facebook deploy such “behavioral nudges” to ensure that users continue to see ads – the single biggest source of Facebook’s $86 billion in annual revenue.

The digital permit system could also help address Big Tech’s monopoly problem. For example, Facebook, with its 2.8 billion users, owns WhatsApp (two billion users) and Instagram (1.1 billion users). But while the growing call for antitrust enforcement has merit, each of these platforms would remain a behemoth even if they were split apart.

A digital permit could help reduce the major social-media platforms’ market share by imposing strict limits on audience size: each piece of user-generated content could be served to, say, no more than 1,000 people. That is far more people than most users actually know or regularly interact with, so it hardly constitutes a deprivation. Social-media critic Tristan Harris suggests that Facebook turn off its Share/Reshare button once a piece of content has traveled two “hops” from the originator. Facebook knows this approach works: it deployed a version of it during the 2020 US presidential election.
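A minimal sketch of the hop-limit idea (not Facebook’s actual implementation; the names are illustrative) would attach a hop counter to each post and disable the reshare action once the content sits two hops from its originator:

```python
HOP_LIMIT = 2  # the "two hops" from the proposal described above

class Post:
    """A piece of user-generated content carrying a count of reshare hops."""

    def __init__(self, author: str, hops: int = 0):
        self.author = author
        self.hops = hops

    def can_reshare(self) -> bool:
        # The Share/Reshare button stays active only below the hop limit.
        return self.hops < HOP_LIMIT

    def reshare(self, by: str) -> "Post":
        if not self.can_reshare():
            raise PermissionError("Reshare disabled: content is two hops from its originator")
        return Post(author=by, hops=self.hops + 1)


original = Post("originator")        # hop 0: the original poster
first = original.reshare("friend")   # hop 1: a friend passes it on
second = first.reshare("stranger")   # hop 2: a friend of a friend passes it on
print(second.can_reshare())          # False: further spread requires deliberate effort
```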

Of course, there would be exceptions, including legitimate news, information, music, and videos from leaders, artists, and thinkers. Tech platforms already have teams of human moderators – Facebook employs 15,000 – who could be tasked with identifying such “public-interest content.” This approach would reduce the spread of fake news and disinformation by introducing necessary friction in the information flow, and would be a far better use of moderators’ time than continuing to play whack-a-lie.

Such a system recognizes that platforms like Facebook, Twitter, and YouTube are not merely “public squares,” but also publishers and broadcasters. As such, they have much more in common with The New York Times, BBC, and The Sun than many analysts have been willing to admit.

In fact, they operate on a much larger scale than any of those outlets. Facebook is the single largest media publisher in history, and YouTube is the largest visual-media broadcaster. One study found that a mere 100 pieces of COVID-19 misinformation were shared 1.7 million times and viewed 117 million times on Facebook.

Social-media platforms have not hesitated to wield their power as publishers. After the ransacking of the US Capitol on January 6, they decided to stop “publishing” then-US President Donald Trump. Earlier this year, Facebook blocked all of Australia from accessing news on its feeds during a dispute over sharing of advertising revenue. Google did the same to Spain in 2014.

Introducing digital operating permits would allow social-media platforms to remain free-speech agoras for smaller assemblies of networked friends, families, and associates, while drastically reducing the virality of fake news and disinformation. That was how Facebook worked in its early years, when it was still a cool invention.

 

Steven Hill, a former policy director at the Center for Humane Technology, is the author of seven books, including Raw Deal: How the “Uber Economy” and Runaway Capitalism Are Screwing American Workers and The Startup Illusion: How the Internet Economy Threatens Our Welfare (in German).

Read the original article on project-syndicate.org.

