This is part of “Can You Believe It?” a series of stories and resources focused on giving you the tools to combat disinformation heading into Election 2020.
Truth doesn’t often go viral. Instead, it’s exaggerated, false or malicious content that tends to wind up in our digital news feeds.
The 2016 presidential election was marked by outside attempts to deceive or inflame voters through sophisticated disinformation tactics. False information raced across social media as some people unwittingly helped spread it. With the 2020 election here, experts are warning we should be on guard for similar deceptions.
Collectively, Americans spend more than 80 billion minutes per week consuming news on TV, radio and their smartphones, according to research published by Don Fallis, a professor of philosophy and computer science at Northeastern University in Massachusetts. He’s been studying false information for years and has published extensively on the topic.
It’s no surprise that politicians often stray from the truth. But Fallis said disinformation is different: it’s content explicitly crafted to mislead.
"There's a couple of important ways in which this differs from, say, telling a lie. The information needs to be misleading, but it doesn't have to be false,” he said. “So, there's all sorts of ways in which people can mislead other people, even though they're very careful to make sure they're only saying the truth, saying things that are true."
Disinformation is often aimed at feeding what psychologists call confirmation bias: people’s tendency to seek out information that reinforces their existing beliefs, or to interpret information in ways that align with those beliefs.
Purveyors of disinformation seek to use this bias against us, Fallis said.
“There can be bad actors that take advantage of confirmation bias in order to get us to believe what they want us to believe,” Fallis said. “But there can also [be] structural issues that may lead us to get more polarized on the things that we already believe.”
And with so much information out there, some of it completely fabricated, the real risk in the long term is that no one will know what to believe, Fallis said.
Once faulty information is on the internet, it’s difficult to stop its circulation — and lies tend to spread faster than the truth.
“Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information,” wrote researchers from MIT who did an analysis of 126,000 news stories shared on Twitter.
We’ve already seen instances of disinformation about Minnesota candidates.
A video circulated on social media that falsely claimed Minnesota U.S. Rep. Ilhan Omar was in the streets protesting a visit from President Trump.
And there are what are called “deepfake” videos that have been doctored to make it look as if someone said or did something they did not. Examples include a recent video edited to make House Speaker Nancy Pelosi look drunk (she was not) and a video that showed Trump saying AIDS had been eradicated (it has not).
Deepfakes are particularly tricky because you’re supposed to be able to believe what you see with your own eyes. And they may have the long-term effect of making you question everything you see.
In the coming year, a team of journalists from MPR News will be helping you sort through the disinformation you’re encountering during the election. We’ll be launching tools to allow you to tell us what you’re seeing.
To get started, here are some best practices for spotting disinformation.
Consider the source, very carefully
There are more publishers and more ways to access information than ever before, so you need to know the people and mission behind your sources.
A good place to start is with the “About” section or by looking for a mission statement.
Let’s take The Intercept as an example of how to be transparent and clear with an audience. Its “About” page lists what it covers and from what viewpoint it operates. The page also names the site’s founding investor, eBay founder Pierre Omidyar. The site publishes its editorial policies and procedures so readers can understand how its journalists work. There’s also a complete staff list — with pictures, emails and social media accounts — which shows they are indeed real journalists who are accountable for their work.
The Intercept’s approach works because the site is clear about what it does, who pays for it and who it employs.
Who wrote it?
An article’s author is as important as its source, especially in an era when news outlets increasingly shy away from staff writers and rely more on guest contributors.
The ways articles get passed around social media can blur their authors’ intent, too. When reading, check to see if the article comes from a reporter who’s putting forth new information, or if it comes from a columnist or opinion writer.
Aggregators — such as Drudge Report or Heavy — can distort information. While they may cite and link to credible reporting, they didn’t get the information firsthand, which can strip away context and introduce errors in how it’s delivered.
Is this news current?
This may seem like a no-brainer, but check the publication date on everything you read. It’s crucial context. And if there isn’t a publication date, treat the piece with suspicion. Common journalistic ethics call for transparency about when an article is posted and when it’s updated.
In addition, questionable websites will often rewrite older news items and strip them of proper context, never saying when the initial news event occurred.
If an image or video looks fake, it probably is
You should trust your gut. If the picture of the shark swimming in the flooded highway after a hurricane looks fake, you’re probably correct.
Often, fake images use a real source image that has been manipulated. One way to detect that is by doing a reverse Google Images search. Go to the Google Images homepage and click the camera icon to upload an image or paste an image URL. While not foolproof, the results can give you an idea of where the image originated and the veracity of how it’s being used.
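That manual workflow can also be scripted. The sketch below just builds a reverse-image-search link to open in a browser; note that `searchbyimage` is an unofficial, undocumented Google endpoint that has worked historically, so treat the exact URL pattern as an assumption that could change.

```python
from urllib.parse import urlencode

def reverse_image_search_url(image_url: str) -> str:
    """Build a Google reverse-image-search link for a publicly hosted image.

    Caveat: "searchbyimage" is an unofficial endpoint; Google may change
    or block it at any time. The official route is the camera icon on
    the Google Images homepage.
    """
    return "https://www.google.com/searchbyimage?" + urlencode({"image_url": image_url})

# A hypothetical image URL, for illustration only.
link = reverse_image_search_url("https://example.com/shark-highway.jpg")
print(link)
```

Opening the printed link in a browser shows where else the image appears online, which often reveals the original, unmanipulated source.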
Use the internet to your advantage
While social media has made it much easier for disinformation to spread, the internet has also opened the door for tools to spot it.
Two useful tools come from Indiana University: Botometer analyzes Twitter accounts and gives each a score for how likely it is to be a bot. Hoaxy, which is integrated with Botometer, shows how accounts with low credibility spread stories around the web.
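Scores like Botometer’s are easiest to act on with a simple threshold rule. The sketch below is illustrative only: the field names (`screen_name`, an `overall` score on a 0-to-5 display scale) and the sample accounts are assumptions modeled loosely on Botometer’s published output, not real API responses; check the current Botometer documentation for the actual format.

```python
def flag_likely_bots(accounts, threshold=4.0):
    """Return screen names whose overall bot score meets the threshold.

    `accounts` is a list of dicts shaped like:
        {"screen_name": str, "scores": {"overall": float}}
    This shape is an assumption for illustration, not Botometer's real schema.
    """
    return [
        account["screen_name"]
        for account in accounts
        if account["scores"]["overall"] >= threshold
    ]

# Hypothetical sample data, not real accounts or real scores.
sample = [
    {"screen_name": "news_fan_42", "scores": {"overall": 1.2}},
    {"screen_name": "amplify_bot9", "scores": {"overall": 4.6}},
]
print(flag_likely_bots(sample))  # -> ['amplify_bot9']
```

The threshold is a judgment call: a high cutoff flags fewer accounts but with more confidence, which matches how researchers suggest treating bot scores as evidence rather than proof.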
Tools like these abound online, but none is perfect. That’s where people come in.
If you run into information that seems sketchy, chances are someone else has had the same experience. Try searching around the web or visiting a fact-checking site such as Snopes or Politifact to see whether anyone else has debunked the information in question.
Can you find the information elsewhere?
“If not, then it’s very likely that the journalistic jury is still out on whether this information is valid,” writes Christina Nagler for Harvard Summer School.
If a bombshell of an exclusive story is coming from a site you’ve never heard of, there’s probably a good reason it’s an exclusive — it’s fake. While all credible outlets are different, information is more likely to be true when multiple credible reporters are saying the same things.
Come across a story or some campaign information you’re not sure about? Let us know what you've been seeing in your community and online via this survey.