YouTube will try to limit the effect of conspiracy theories by adding text from Wikipedia below videos.
It's no secret that social media sites like Facebook, Twitter, and YouTube have a problem with fake news and conspiracy theories. Rely on them for your news, and you may end up with a distorted view of current events.
Recently, videos accusing Parkland school shooting survivor David Hogg of being an actor were featured prominently on YouTube and Facebook. YouTube featured a false video about Hogg at the top of its trending chart, and Facebook linked to misleading videos about him in its trending module.
Large technology companies have tried countless ways to fight back against fake news and conspiracy theories.
Facebook tried adding a big, red "disputed" tag to stories but ditched that feature after it realised that the tag actually made people click on false stories even more.
Twitter emailed nearly 678,000 users to inform them that they may have interacted with accounts linked to a Russian propaganda factory.
Now it's YouTube's turn to try to fix the problem. The company has come up with a new way to stop fake news and conspiracy theories from proliferating on its platform: adding text from Wikipedia below the videos.
Let's say you're on YouTube late at night and stumble upon a video about chemtrails. Take "'Chemtrails' — How They Affect You and What You Can Do." The video claims that planes flying overhead are causing everything from autism to water pollution.
At the moment, there's nothing on the video to warn you that it's an unscientific conspiracy theory. You might even believe it and write a song about chemtrails, as the video suggests.
But YouTube's new plan will likely put text from Wikipedia underneath the video, informing you that chemtrails are a conspiracy theory.
YouTube's idea of adding in text from Wikipedia underneath conspiracy theory videos seems like a good compromise. The site isn't a partisan news source, and so everyone will take its word as truth, right?
Well, no. Wikipedia gets things wrong too. In December, Wikipedia seemed to cause Apple's virtual assistant Siri to tell people that actor John Travolta had died. The site even maintains its own list of hoaxes that started on Wikipedia.
And while it tries to remain neutral, Wikipedia isn't trusted by everyone. Breitbart News published a story in 2016 on what it called "Wikipedia's seven worst moments." It criticised the site for "corrupt mismanagement" and listed seven examples that it saw as failings from Wikipedia.
Whether or not those examples represent genuine failings on Wikipedia's part, the story demonstrates that some readers regard the site with deep suspicion. That's not to mention Wikipedia's ongoing issues with gender bias.
Will someone watching 9/11 conspiracy theories have a moment of enlightenment and realise their foolishness because of a few sentences from Wikipedia? It seems unlikely.
Many of the methods launched by tech companies to combat fake news and conspiracies so far follow a standard pattern: We can't remove the content, so instead we'll stick a badge next to it, or some words, or we'll email you to let you know you saw it.
But this window dressing approach masks the real problem: Fake news thrives on social media sites. In fact, it often performs better than actual news.
Researchers from MIT recently found that lies were 70% more likely to be retweeted than the truth on Twitter. And when social media analytics company Newswhip released its ranking of the most popular journalists on Facebook in February, two journalists from fake news sites appeared among the 40 most popular authors.
The fundamental problem lies in how sites like Facebook and YouTube decide what to show you. Their goal is to keep you engaged with relevant content, and they rely on signals like your interactions and searches to decide what to recommend next.
The trouble with fake news is that people interact with it. By its very nature, fake news is exaggerated and dramatic, encouraging people to comment on, like, and share it. A rush of comments is a signal to Facebook and YouTube that a video is hot right now, so they had better suggest you watch it.
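The feedback loop described above can be illustrated in a few lines. This is a toy model, not any platform's actual ranking code; the function name and weights are invented for illustration:

```python
# Toy illustration of engagement-driven ranking (hypothetical weights,
# not any platform's real algorithm): every interaction nudges an item's
# score upward, regardless of whether the item is true.

def engagement_score(likes: int, comments: int, shares: int) -> float:
    """Score content purely on interaction counts."""
    return likes * 1.0 + comments * 2.0 + shares * 3.0

# A sensational hoax that provokes reactions and arguments...
hoax = engagement_score(likes=500, comments=400, shares=300)

# ...versus a sober news report that gets quietly read and liked.
news = engagement_score(likes=800, comments=50, shares=100)

print(hoax > news)  # True: the hoax outranks the real story
```

Because the score rewards reactions rather than accuracy, the dramatic hoax wins the recommendation slot, which is exactly the dynamic the MIT and Newswhip findings point to.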
The only foolproof way to stop people seeing fake news and conspiracy theories online is to remove them altogether. But that won't happen for two reasons: Free speech and tech companies' fear of public outrage.
Tech companies are in an awkward position. They want to crack down on fake news, but they don't want to overstep into censorship. For years, Twitter styled itself as "the free speech wing of the free speech party." Removing videos about conspiracy theories clearly strays from that approach. And the largest social media companies are all headquartered in the US, where free speech is a powerful legal and cultural norm, even though the First Amendment constrains the government rather than private companies.
And social media companies are hyper-sensitive to causing outrage. In 2016, people who used to work on Facebook's trending news sidebar claimed that they routinely suppressed conservative news. It caused a firestorm of criticism. Even The Guardian, a left-leaning British newspaper, called it "censorship."
Facebook CEO Mark Zuckerberg denied the report, but still invited prominent conservatives to Facebook HQ. The company will be reluctant to cause similar outrage in the future.
Tech companies have reached a stalemate with fake news and conspiracy theories. Paralysed by their own free speech positioning and fear of criticism, they can only dance around the issue with "disputed" tags and text from Wikipedia. As more and more fake and misleading content rises to prominence on social media sites, it's clear that they're a long way from fixing the problem.