Google’s fake news problem could be worse than on Facebook

  07 March 2017
Ever since "fake news" first gained traction during Donald Trump's presidential campaign, it has been seen by many as primarily a Facebook problem, because so many hoax and fake-news creators use the giant social network as a distribution system.
But Google also plays a role in the promotion and dissemination of false information, and in some ways its role is more disturbing than Facebook's.

While Google was once just a search box followed by a page of blue links, the search giant has been trying hard to become a one-stop provider of answers to common questions. Many of those answers take the form of what it calls "featured snippets": excerpts from webpages that deal with the topic, highlighted in a box at the top of the search page.

In most cases, these answers, which Google selects from the billions of pages it scans and indexes every day based on how many people have shared them, are uncontroversial, like the answer to "When is Mother's Day?" or "What time is the Super Bowl?" Other one-box results point to cheap flights to your chosen destination or supply the answer to a math question.

In some cases, however, these answers are wrong—and not just humorously wrong but dangerously so because they point to inaccurate stories or offensive opinions on important topics.

As a recent story from The Outline points out, Google's one-box answer to a question about the dangers of monosodium glutamate quotes from a site that is notorious for fake health claims. In answer to the question of how many presidents have been members of the Ku Klux Klan, Google replies four, even though there is no evidence to support that answer.

At one point, if you asked Google whether President Barack Obama was planning a coup, the one-box reply quoted an article that said yes, he was in league with communists from China. This was based on an article from a site called Secrets of the Fed that is popular with conspiracy theorists. (The same question now mostly brings up articles about the search problem.)

This isn't a new problem for Google. A year or so ago, if you typed into the search engine "Who controls Hollywood?" the one-box provided you with a one-word answer—"Jews"—followed by an article from an anti-Semitic website about how Jews control Hollywood.

In a related issue, Google had to remove one of its "auto-complete" suggestions last year after a number of users found that typing the words "Are Jews" into the search box produced the recommendation "Are Jews evil?" Clicking on it brought up a list of 10 reasons people hate Jews, drawn from a notorious anti-Semitic website.

Other "featured answer" responses have claimed that the disappearance of the dinosaurs is a myth designed to indoctrinate children about evolution, a claim taken from a right-wing Christian fundamentalist website.

As Search Engine Land founder Danny Sullivan notes, Google has been dealing with this kind of problem for some time. It has come about because the search giant no longer wants to be just a source of links; it wants to be a source of answers. But those answers can be flawed because Google is effectively crowdsourcing the results.

"It depends on what people post on the web," Sullivan told The Outline. "And it can use all its machine learning and algorithms to make the best guess, but sometimes a guess is wrong. And when a guess is wrong, it can be spectacularly terrible."

This has become even more important recently because Google doesn't just want to provide answers in a box on its website. It has staked its future on delivering them through devices like Google Home, a smart speaker and competitor to Amazon's Echo that sits on a table and responds to voice queries like "What is the weather like today?"

In some ways, Google's mistakes are arguably more dangerous than the fake news that circulates on Facebook. While it's true that hoaxes about Hillary Clinton and Pizzagate can go viral on the social network and be seen by millions of people, these articles are voluntarily shared.

Google, by contrast, is saying that the one-box and auto-complete answers it provides are the best responses it has been able to find in its vast database. It is giving those fake articles credence by providing its implicit endorsement of them, and that means a lot in an era when Googling something is seen as the definitive way of finding the truth.

After initially dismissing the issue of fake news and its role in distributing it, Facebook recently started working with third-party fact-checking groups and is now flagging stories that have been called into question or whose accuracy has been disputed.

Google has taken steps from time to time to remove offensive suggestions like the "Are Jews evil?" auto-complete recommendation after widespread criticism. But in response to The Outline's story about some of its one-box errors, all the company would say is that it is "always working to improve our algorithms, and we welcome feedback on incorrect information."

These problems aren't going to go away. If anything, they are going to become even more important to solve as devices like Google Home and the Echo, and even more futuristic smart assistants, become the default way in which many people get information. If someone is providing you with what they claim is the only answer you need, they should probably make sure it's not a racist hoax.

/Fortune/
