Since the 2016 U.S. Presidential election, fake news and social media have been at the forefront of discussions about news curation. Social media websites have taken different approaches to tackling fake news; some have used artificial intelligence to parse news stories and limit users’ exposure to false articles. Here, Hava Parker and Grace Hickey debate whether using AI to curate news stories on social media websites is a prudent policy.
In the modern era, news has become increasingly digitized. Last year, Pew Research Center found that 67%—over two-thirds—of American adults reported consuming news via social media platforms, up from 62% in 2016. Among some populations, the figure is even higher: 74% of non-white respondents got their news through social media, up from 64% the year before. This mass migration to digital platforms means that, as time goes on, the general information economy will be housed almost entirely in the digital realm. Because of this, safeguarding news and information in their emerging digital forms is of paramount importance.
According to the aforementioned Pew study, an astoundingly high percentage—78%—of surveyed Americans responded that they had seen inaccurate news circulated via social media; 51% said that they frequently saw false information spread. The effects of “fake” news, especially when unchecked, can be disastrous.
"During the 2016 presidential election, BuzzFeed analysts found that fake news stories actually outperformed legitimate sources by a wide margin when it came to reader engagement , something that very likely had huge impact on the final outcome of the election."
Further, the effects of fake news are self-exacerbating: the more inaccuracies circulate, the less able readers are to judge the legitimacy of news. Even readers who are not taken in wholesale by false information may, aware of the phenomenon, overcompensate and assume that even valid information is unreliable. Clickbait and fake news thus carry serious politico-ideological ramifications and, as has already been seen in practice, the power to confound truth in society over time.
The most concrete way of protecting news’ validity is to filter stories by credibility. However, due to the sheer volume of text to sift through, this is nigh impossible to accomplish with human fact-checking alone. Many content providers (such as Google and Facebook) have therefore turned to artificial intelligence and algorithmic approaches to news curation. This shift has met with both applause and heavy resistance; although offloading the human element is necessary for practicality’s sake, it is also a difficult process to undertake smoothly, especially given the technology’s nascency. In mid-2016, Facebook entirely abandoned its “Trending” section editors and moved to a wholly algorithmic approach—a move that resulted in some controversial and questionable article promotion.
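To make the idea of algorithmic filtering concrete, here is a deliberately simplified sketch of the most basic form such screening can take: scoring a headline against a hand-picked list of clickbait indicator phrases and flagging high scorers for human review. The phrase list, function names, and threshold are all invented for this illustration; real platforms rely on trained models over far richer signals than headline text.

```python
# Toy illustration of algorithmic news screening (not any platform's
# actual system): count clickbait indicator phrases in a headline and
# flag the article for human review when the count meets a threshold.

CLICKBAIT_PHRASES = [          # invented indicator list for this sketch
    "you won't believe",
    "shocking",
    "doctors hate",
    "what happens next",
    "the truth about",
]

def clickbait_score(headline: str) -> int:
    """Count how many indicator phrases appear in the headline."""
    text = headline.lower()
    return sum(phrase in text for phrase in CLICKBAIT_PHRASES)

def flag_for_review(headline: str, threshold: int = 1) -> bool:
    """Flag a headline when its score meets the threshold."""
    return clickbait_score(headline) >= threshold
```

In this toy form, `flag_for_review("You Won't Believe This Shocking Cure")` is flagged while a plain headline like "City Council Approves New Budget" is not; the hard part, which this sketch ignores entirely, is doing the same at scale without mislabeling legitimate journalism.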
However, despite such missteps, other incidents have reminded us that fully human-staffed moderation is unsustainable. Recall the 2017 murder of Robert Godwin, in which Steve Stephens shot and killed Godwin and posted the footage to Facebook. The video notoriously remained on the site for over two hours before it was finally removed. This is a prime example of the need for automated media analysis: no user reports were filed in response to Stephens’ initial statement of intent, and Facebook received its first report of the murder video an hour and 45 minutes after it was posted (the company then took only 23 minutes to remove Stephens’ account). Other incidents include a live-streamed sexual assault that went unreported; only the day afterward did the victim’s family realize what had happened and take legal action. Artificial intelligence-driven news filtering may have its downfalls, but some accidental flagging of material is ultimately far preferable to letting graphic media like these circulate any longer than necessary.
In summary, automated news curation is the only feasible step forward. Its current shortcomings are largely a function of the technology’s youth, not of the very concept of automated media inspection. Automation could be used in concert with human readers to check for possible blunders, but as discussed, it simply isn’t fiscally or practically feasible to hire enough employees to review all information, whether that means screening material for appropriateness or identifying posts likely to interest a user. Indeed, it makes the most sense to couple AI with human eyes until the technology matures, but the truth remains that augmenting human employees with algorithmically robust artificial intelligence is safer than the alternative.
Public outcry has been mounting for Facebook and other news-sharing websites to respond to the explosion of “fake news” on their platforms. Many people advocate for them to implement artificial intelligence to flag sources or stories that are not credible.
Confirmation bias is a well-documented psychological phenomenon. It makes people almost certain to believe, or at least be influenced by, false information they read, even when they know that it is false. This means that fake news needs to be stopped, not just flagged. Humans crave information that feeds their existing ideas. “[P]eople experience genuine pleasure—a rush of dopamine—when processing information that supports their beliefs,” says psychologist Jack Gorman. People’s tendency to read what reinforces their views could lead them simply to ignore fake-news flags, even against their better judgement, rendering those flags ineffective.
In order for artificial intelligence to be effective, people have to care what the truth is and be willing to believe it when they see it. As a collective, our society does not currently function that way. Unfortunately, technology cannot do its job until we combat the collective psychological issues that fostered fake-news distribution in the first place. Without addressing those issues, any developments in AI can be easily circumvented or even used to spread more false information. People’s underlying tendency to believe only what they want to believe will always win out. At its base, fake news is a product of human psychology, not technology, which is why we must change our psychology before surface-level technology reforms can ever make a difference.
This is exemplified by the backfiring of Facebook’s recent attempt to introduce the first fake-news flagging system. The small number of articles that were actually flagged were all quickly reproduced in new, slightly altered forms that could easily be shared without the flag.
"In one disastrous case, a flagged article drew more readers, rather than less."