Last Week in AI #92

Problems with deep learning, AI-trained counselors, database of AI incidents, and more!

Mini Briefs

The way we train AI is fundamentally flawed

A new study from Google investigates the "underspecification" problem facing deep learning. In machine learning, there is usually a training dataset from which a model learns, and a test dataset on which the model is evaluated, standing in for the data it will see once deployed. Good performance on the training dataset often fails to carry over to the test dataset, and this gap is usually attributed to "data shift" - the difference between training and test data.

Underspecification is a different problem that seems to span many deep learning application areas, from vision to language. Here, the issue is that models with near-identical performance on the training dataset can have wildly different performance on the test dataset. In the short term, this means commercial deep learning applications should do a lot more testing, perhaps even offering multiple versions of the same model to end users and letting them test and pick the one that works best for their needs. In the long term, additional research is needed to better understand underspecification and improve the training process.
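For intuition, here is a toy sketch of the phenomenon - not from the Google paper, and using scikit-learn with made-up synthetic data of our own choosing. Several models with the same architecture and training data, differing only in random seed, fit the training set equally well but can diverge on a distribution-shifted test set:

```python
# Toy illustration of underspecification (hypothetical example, not from the study):
# identically trained models agree on the training set yet diverge under shift.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Training data: two features are highly correlated, so the data alone
# does not specify which one a model should rely on.
n = 2000
core = rng.normal(size=n)                   # the "true" signal
spurious = core + 0.1 * rng.normal(size=n)  # correlated stand-in
X_train = np.column_stack([core, spurious])
y_train = (core > 0).astype(int)

# Shifted test set: the correlation is broken, so models that leaned on
# the spurious feature tend to degrade.
core_t = rng.normal(size=n)
spurious_t = rng.normal(size=n)             # no longer correlated
X_test = np.column_stack([core_t, spurious_t])
y_test = (core_t > 0).astype(int)

# Same data, same hyperparameters; only the random seed differs.
for seed in range(5):
    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                          random_state=seed)
    model.fit(X_train, y_train)
    print(f"seed={seed}  train acc={model.score(X_train, y_train):.3f}  "
          f"shifted test acc={model.score(X_test, y_test):.3f}")
```

In a run like this, the training accuracies are essentially indistinguishable, while the shifted test accuracies can spread out - which is why the study argues that standard training-set performance alone underspecifies which model you actually get.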

Facebook says AI has fueled a hate speech crackdown

In a recent report, Facebook published statistics about hate speech takedowns on its platform. Specifically, the company estimates that about 0.1% of what Facebook users see violates hate speech rules, and that most (95%) of hate speech takedowns were done proactively - without user reports. This is a large jump from just 23.6% in 2017, powered in part by AI moderation tools that can detect hate speech automatically.

For now, however, content moderation still appears to rely on human moderators: a group of 200 moderators signed "an open request for better coronavirus protections," since they are only allowed to review sensitive content in the workplace, not at home. The letter was also critical of automated moderation:

The AI wasn't up to the job. Important speech got swept into the maw of the Facebook filter — and risky content, like self-harm, stayed up.

Podcast

Check out our weekly podcast covering these stories! Website | RSS | iTunes | Spotify | YouTube

News

Advances & Business

Concerns & Hype

Explainers

  • Making Sense of the AI Landscape - "As AI tools become more commonplace, many businesses find themselves playing catch up when it comes to incorporating these new systems into their existing infrastructure."


That’s all for this week! If you liked this and are not yet subscribed, feel free to subscribe below!


Copyright © 2020 Skynet Today, All rights reserved.