In Favor of More Science Communication by AI Researchers

There's too much misunderstanding, hype, and misinformation surrounding AI, and those developing it should do more to change that

Welcome to the first editorial from Last Week in AI!

As explained in our recent announcement, we want to expand to broader and more in-depth explorations of recent AI developments. In particular, we’ll start by writing weekly editorials meant to address and comment on AI beyond last week’s news, similar to things we’ve written before. The first few editorials will be free, but later ones will be exclusive to paying subscribers, so if you like this one consider subscribing to have access to all the future ones:

Many working in AI today can no doubt still remember the media craze over how creepy Facebook bots supposedly talked to each other in a secret language. As I wrote when it first happened, the whole thing began with a single innocuous sentence in the researchers’ paper, which was highlighted in a single tweet, then covered in a single article, and eventually the entire ridiculous narrative snowballed entirely without the encouragement of the researchers or the company.

Many would also agree that there is An Epidemic of AI Misinformation. And for good reason: entire documentaries such as Do You Trust This Computer spread the view of AI as something scary and hard to understand, popular videos such as Humans Need Not Apply spread the idea that AI will inevitably take over everyone’s jobs, and there are many other examples of questionable media narratives besides that Facebook chatbot story. What there seems to be less agreement on is what people such as myself, AI researchers, can do to help stop it.

Of course, we can and should make the limits of our research clear, avoid inflated statements, and so on. But that only means we would not be adding fuel to the fire of misinformation; as the case of the Facebook chatbot makes clear, that is sometimes not enough. How do we fight the fire itself?

Simple: this is a problem of signal versus noise, and besides not adding to the noise, we can also amplify the signal. In other words, we can take part in science communication -- not just writing papers meant exclusively for fellow academics, but actively working to communicate with the general public as well.

This can be done in many ways: assisting journalists who cover AI research, writing blog posts that make research more accessible or address common questions about AI, taking part in interviews, and more. Many researchers already do this, via lab blogs, explainer posts such as “AlphaGo, in context”, talks, YouTube channels, or even just tweets condemning silly clickbait articles. And of course, our very own Skynet Today and Last Week in AI are meant to do just that!

Of course, non-academics can help too; one need not be a researcher to be informed about AI and to combat misinformation on social media and elsewhere. Anyone reading this likely cares about AI and reads a good deal about it, so you are no doubt informed enough to inform others!

Still, academics are the ones best informed about the state of AI, and yet only a small minority of AI researchers engage in any sort of science communication with the public. My aim with this piece has been to argue that more should consider it worth doing. Especially in AI, where the technology’s presence in society is growing so rapidly and misconceptions about it abound, science communication should be seen as part of the responsibilities of academics and encouraged accordingly.