5 things to know about fake news on Facebook, Google

How big is the problem and what can or should be done about it?

With presidential election signs coming down from front lawns and voters watching protests on the news, many are wondering if fake news stories on Facebook and Google contributed to Donald Trump's winning the presidency.

And that raises the question of what Google and Facebook plan to do about it.

Here are some examples of headlines from fake stories posted to the web:

"Pope Francis endorses Trump"

"WikiLeaks Confirms Hillary Sold Weapons to ISIS"

"Donald Trump wins electoral college and popular vote"

"Clinton Foundation bought $137 million worth of illegal arms and ammunition"

These headlines are from fake news stories that littered Facebook and Google in the past few months. The content was false, but that didn't stop the stories from pulling in hundreds of thousands, even millions, of likes, shares and comments.

With these stories, truth was beside the point. Or maybe spreading false news was the point.

Fake news spread around the world on social networks, as well as on Google News and in Google searches.

"Americans get most of their news off both of these sites," said Patrick Moorhead, an analyst with Moor Insights & Strategy. "I believe readers, Facebook, and Google all play a role in bringing this to a successful conclusion."

Here is what you need to know about how fake news is affecting what you know about the world and what is being done about it.

1. How big a problem is this?

There are no clear numbers on how many fake news stories hit Facebook and Google during the election season. Nor are there any figures on the number of fake news stories that appeared on those sites all year.

To be clear, there also is some debate over what makes a news story fake.

Take, for example, the totally concocted story from the summer claiming the pope had endorsed then-Republican nominee Donald Trump for president. The pope did not endorse Trump; that story was a fabrication from start to finish. But what about a story that merely contains an omission or an error?

"I think until we start to define the issue more clearly, it's open for a variety of interpretations," said Brian Blau, an analyst with market research firm Gartner. "Users want to trust not only the providers but also the content they consume, and given the lack of fine-grain controls, we have to assume that some amount of fake news is being read and trusted, which isn't a good situation for anyone."

Most agree that to be considered fake, a story needs to be largely false or purposefully false.

To that end, Mark Zuckerberg, CEO and co-founder of Facebook, said in mid-November that most of the stories that show up on users' news feeds are real news pieces, downplaying the effect that fake news has had on Facebook's more than 1.18 billion daily active users.

"Of all the content on Facebook, more than 99% of what people see is authentic," Zuckerberg wrote on his personal Facebook page. "Only a very small amount is fake news and hoaxes. The hoaxes that do exist are not limited to one partisan view, or even to politics. Overall, this makes it extremely unlikely hoaxes changed the outcome of this election in one direction or the other."

If Zuckerberg is right and 99% of stories on Facebook are accurate, does that mean 1% are false?

If that's the case, then a user who sees 500 news headlines in her News Feed every week encounters, on average, five fake ones.

And if those fake stories carry purposefully salacious headlines, will those five stick in her mind more than a typical 1% of stories would?

Add the fact that Facebook's algorithms show readers more of the kinds of stories they normally respond to, read and share, and some users might be seeing a much higher percentage of fake stories.
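The arithmetic above can be sketched in a few lines. This is a back-of-the-envelope illustration only: the 1% figure comes from Zuckerberg's post, but the idea that an engagement-ranked feed weights stories by how much engagement they draw, and the 5x "engagement boost" for fake stories, are assumptions made up for the example, not Facebook's actual algorithm or data.

```python
# Back-of-the-envelope sketch of how engagement-weighted ranking could
# amplify a small share of fake stories. All numbers are illustrative
# assumptions, not Facebook's actual figures or ranking logic.

def seen_fake_share(base_fake_rate, engagement_boost):
    """Share of an engagement-ranked feed that is fake, assuming fake
    stories draw `engagement_boost` times the engagement of real ones."""
    fake_weight = base_fake_rate * engagement_boost
    real_weight = (1 - base_fake_rate) * 1.0
    return fake_weight / (fake_weight + real_weight)

# Zuckerberg's figure: roughly 1% of content is fake.
print(round(500 * 0.01))  # ~5 fake headlines out of 500 seen weekly

# If fake stories drew, say, 5x the engagement of real ones:
print(round(seen_fake_share(0.01, 5) * 100, 1))  # ~4.8% of the ranked feed
```

The point of the toy model is that even a tiny base rate of fakes can occupy a several-times-larger slice of what a user actually sees once ranking favors high-engagement content.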

2. What's the effect of fake news stories?

According to a Pew Research Center survey of 1,520 adults conducted between March 7 and April 4, 68% of U.S. adults are Facebook users. In addition, a majority of Americans say they get news via social media, and half of the public used these sites to learn about the 2016 presidential election.

The Washington Post went right to the source, interviewing a man who is a fake news writer.

According to the Post report, fake news writer Paul Horner is taking credit for Trump's winning the election, saying, "I think Trump is in the White House because of me. His followers don't fact-check anything — they'll post everything, believe anything. His campaign manager posted my story about a protester getting paid $3,500 as fact. Like, I made that up."

Meanwhile, Zuckerberg said over the weekend in a blog post that Facebook is working with fact-checking organizations to verify the authenticity of news on its site.

It's unclear at this point how much fake news affected the outcome of the election. Some claim the stories helped Trump pull off an upset win. Others disagree.

What is clear is that people read these stories, commented, liked or were angered by them. They also shared this content with their friends and family. These stories, filled with lies and propaganda, made the rounds, and some chose to believe them.

3. How does Google fit into all of this?

Fake news is not just a Facebook problem. Google is mired up to its knees in it as well.

A story saying that Trump beat Democratic nominee Hillary Clinton in the popular vote, as well as in the Electoral College, made it to the top of Google News in the days following the election. It was a fake story. Clinton won the popular vote, but Trump secured the 270 electoral votes needed to win the presidency.

When people want to know something, they generally turn to Google. Want to know which candidate won the state of Utah or Virginia? Google it. Want to know if the pope endorsed a candidate? Google it.

If fake news stories are appearing in searches and Google News spots, misinformation is being propagated.

4. Why can't fake stories simply be eradicated?

There are a number of issues with this question.

Do users want companies like Facebook and Google deciding what is true? For example, is a story fake if one fact is wrong, or only if its entire premise is wrong? And how would an algorithm tell the difference?

Also, do users want Facebook, for instance, to decide what they can and cannot share with their friends?

"The problems here are complex, both technically and philosophically," Zuckerberg wrote in the weekend post. "We need to be careful not to discourage sharing of opinions or to mistakenly restrict accurate content. We do not want to be arbiters of truth ourselves, but instead rely on our community and trusted third parties."

As to who makes the final decisions about what is fake and what is real news, it will be a human/machine team. Algorithms will be used to pick out likely offenders and mark them for further inspection. At that point, humans likely would make the final decisions.

To do this, algorithms will need to be updated, become more sophisticated and be rigorously tested before they are unleashed on the Facebook and Google worlds.
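The human/machine division of labor described above can be sketched as a simple triage loop: an automated scorer flags likely offenders, and only those go to human reviewers. The `fake_score` classifier and the `0.8` threshold below are hypothetical placeholders, not anything Facebook or Google has disclosed.

```python
# Minimal sketch of the flag-then-review pipeline described above.
# A scoring model (assumed to exist) rates each story; only stories
# above a threshold are routed to human reviewers for a final call.

from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    fake_score: float  # 0.0-1.0, output of some hypothetical classifier

def triage(stories, threshold=0.8):
    """Split stories into those flagged for human review and those passed through."""
    flagged = [s for s in stories if s.fake_score >= threshold]
    passed = [s for s in stories if s.fake_score < threshold]
    return flagged, passed

stories = [
    Story("Pope Francis endorses Trump", 0.95),
    Story("Local election results certified", 0.10),
]
flagged, passed = triage(stories)
print([s.headline for s in flagged])  # only high-scoring stories reach humans
```

The design keeps humans as the final arbiters while letting the algorithm shrink the review queue, which is why the classifier needs to be tested rigorously before deployment: its threshold directly controls how much reaches human eyes.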

5. What's being done to cull fake news stories from our News Feeds and searches?

While Google did not respond to a request for information, Reuters reported that the company, without directly addressing fake news, is working to change its policies so websites that run fake content will not be able to use its AdSense advertising network.

Google's move is aimed at cutting out much of the financial benefits of creating fake news.

As for Facebook, Zuckerberg said the company is working on a list of changes that he hopes will curb the amount of fake news on the social network.

In his weekend Facebook post, he noted that the company is working on making it easier for users to report fake stories; tagging stories that have been flagged as false with warnings; changing the company's ad policies to make it harder to make money off fake stories; and developing technology to better detect fake news.

Jeff Kagan, an independent industry analyst, said artificial intelligence and machine learning eventually will be part of the solution to fake stories, by analyzing news articles much faster and more efficiently than humans can. But he's not sure when the technology will be ready to do that.

"I am sure machine learning is part of the solution but [these technologies] are still in their infancy and are only so helpful today," he said. "The solution is complicated."
