In the United States, the term “fake news” has lost its bite through overuse since the 2016 presidential election. Though misinformation is not a new phenomenon, its impact and spread have been particularly severe in the last decade, as trolls, malicious actors, and state-sponsored media groups exploit the new affordances of social media and the Internet. Everyone — authority figures, journalists, everyday citizens — is a potential target. In its 2020 Doomsday Clock Statement, the Bulletin of the Atomic Scientists warned that the greatest dangers to humanity “are compounded by a threat multiplier, cyber-enabled information warfare, that undercuts society’s ability to respond.”
Artificial intelligence (AI) can — and does — play a nontrivial role in the spread of misinformation. When Pew Research canvassed 979 scientific experts on artificial intelligence in 2018, one common theme was that AI could allow increased “mayhem” or “further erosion of traditional sociopolitical structures and the possibility of great loss of lives,” partially due to “the use of weaponized information, lies and propaganda to dangerously destabilize human groups.”
Despite its global consequences, online misinformation continues to live in a cloud of relative obscurity, and AI’s role in exacerbating the problem is particularly misunderstood. With this article, I hope to help readers assess the state and severity of online misinformation, and to understand the part AI plays in it.