AI for AI chips, AI's failure at moderating hate speech, and more!

Last Week in AI #120


Mini Briefs

AI still sucks at moderating hate speech

Incendiary speech is one of the most persistent problems plaguing social media platforms. Because information spreads so easily through social networks and controversial posts tend to invite strong reactions, hate speech sits at the center of debates over moderating social media content. Platforms like Facebook have tapped language AI to rise to the challenge, but that technology still falls short: a recent study found that four of the best AI-based hate speech detectors each failed in different ways to distinguish toxic sentences from innocuous ones. While the results cast doubt on the efficacy of current hate speech detection systems, the researchers also built a taxonomy of different types of hate speech and evaluated where the AI systems fell short at a more granular level. That alone represents useful progress and should help researchers better diagnose and improve hate speech detection systems.
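To make that kind of category-level evaluation concrete, here is a minimal sketch (a hypothetical illustration, not the researchers' actual code or data) that scores any binary hate-speech classifier against a small hand-built taxonomy of test cases. The category names, example sentences, and toy keyword baseline are all assumptions chosen only to show how per-category accuracy exposes failure modes such as negation and counter-speech.

```python
# Minimal sketch (not the study's actual code): scoring any hate-speech
# classifier against a small, hand-labeled taxonomy of test cases.
# The categories, sentences, and baseline below are hypothetical stand-ins.

from typing import Callable, Dict, List, Tuple

# Each taxonomy category maps to (sentence, is_hateful) pairs.
TAXONOMY: Dict[str, List[Tuple[str, bool]]] = {
    "explicit_abuse": [("<group> people are awful", True)],
    "negated_abuse": [("I would never say <group> are awful", False)],
    "reclaimed_term": [("As members of <group>, we call ourselves <term>", False)],
    "counter_speech": [("Saying '<group> are awful' is unacceptable", False)],
}

def evaluate(classify: Callable[[str], bool]) -> Dict[str, float]:
    """Return per-category accuracy for a binary hate-speech classifier."""
    results = {}
    for category, cases in TAXONOMY.items():
        correct = sum(classify(text) == label for text, label in cases)
        results[category] = correct / len(cases)
    return results

if __name__ == "__main__":
    # Toy keyword baseline, only to show the evaluation loop running;
    # it passes the explicit case but fails on negation and counter-speech.
    naive = lambda text: "awful" in text.lower()
    for category, acc in evaluate(naive).items():
        print(f"{category}: {acc:.0%}")
```

Breaking the results down this way is what lets researchers say which kinds of sentences a detector mishandles, rather than reporting a single aggregate score.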

Google is using AI to design its next generation of AI chips more quickly than humans can

As machine learning models grow larger and their compute costs rise, the industry has turned its attention to efficiency: designing models that require less compute to train and building hardware that can perform the computations behind deep learning more quickly. In a meta-twist, Google is now using machine learning to design its own machine learning chips, and its engineers claim that the algorithm's designs are "comparable or superior" to those created by humans and are completed far more quickly. While Google has been working on using machine learning to design chips for years, this effort, described in a paper in Nature, is the first to be applied to a commercial product: Google's TPU chips.
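The Nature paper describes a reinforcement-learning agent that learns to place circuit blocks on the chip canvas and is rewarded for placements with low estimated wirelength and congestion. The toy sketch below is not that method; it is a hypothetical illustration of the underlying feedback loop only: propose a placement, score it with a cheap proxy (here half-perimeter wirelength over an invented four-block netlist), and keep improvements.

```python
# Toy illustration only: the published work uses a learned RL policy; this
# sketch shows just the place-and-score feedback loop, using random search
# and a half-perimeter wirelength (HPWL) proxy as the cost.

import random
from typing import Dict, List, Tuple

Point = Tuple[float, float]

# Hypothetical netlist: each net is a list of block names that must connect.
NETS: List[List[str]] = [["cpu", "cache"], ["cache", "mem"], ["cpu", "io"]]
BLOCKS = ["cpu", "cache", "mem", "io"]

def hpwl(placement: Dict[str, Point]) -> float:
    """Half-perimeter wirelength: a standard cheap proxy for routed wirelength."""
    total = 0.0
    for net in NETS:
        xs = [placement[b][0] for b in net]
        ys = [placement[b][1] for b in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def random_placement() -> Dict[str, Point]:
    """Place every block at a random point on a unit canvas."""
    return {b: (random.random(), random.random()) for b in BLOCKS}

best = random_placement()
for _ in range(10_000):  # propose, score, keep improvements
    candidate = random_placement()
    if hpwl(candidate) < hpwl(best):
        best = candidate

print(f"best HPWL proxy: {hpwl(best):.3f}")
```

The appeal of the learned approach is that, unlike this blind search, the agent generalizes across designs, so each new floorplanning problem takes hours rather than the weeks a human team needs.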

Podcast

Check out our weekly podcast covering these stories! 
Website | RSS | iTunes | Spotify | YouTube

News

Advances & Business

Concerns & Hype

  • The current state of affairs and a roadmap for effective carbon-accounting tooling in AI - "EcoQoS is a new Quality of Service (QoS) level introduced to Windows that developers can now opt-in to run their work efficiently, leading to better energy efficiency/increased battery life, reduced fan noise and power/thermal throttling."

  • Is there any way out of Clearview’s facial recognition database? - "In March 2020, two months after The New York Times exposed that Clearview AI had scraped billions of images from the internet to create a facial recognition database, Thomas Smith received a dossier encompassing most of his digital life."

  • When AI Becomes Childsplay - "Despite their popularity with kids, tablets and other connected devices are built on top of systems that weren’t designed for them to easily understand or navigate. But adapting algorithms to interact with a child isn’t without its complications—as no one child is exactly like another."

  • It’s 2021. Do You Know What Your AI Is Doing? - "Responsible AI has been one of my big topics for a few years now, the subject of many articles, blogs and talks I’ve given to audiences around the world."

  • India’s ‘Ugliest’ Language? Google Had an Answer (and Drew a Backlash). - "A Google fact box singled out Kannada, a language spoken in the country’s south. The faux pas highlights the algorithm’s fallibility. It was an odd, unanswerable question. Still, it was on the mind of at least one Google user in India."

  • The Double Exploitation of Deepfake Porn - "Over the past three years, celebrities have been appearing across social media in improbable scenarios. You may have recently caught a grinning Tom Cruise doing magic tricks with a coin or Nicolas Cage appearing as Lois Lane in Man of Steel."

  • If a killer robot were used, would we know? - "A recent UN report on Libya implies—but does not explicitly state—that a Turkish Kargu-2 drone was used to attack humans autonomously using the drone’s artificial intelligence capabilities. I wrote about the event in the Bulletin, and the story went viral."

Analysis & Policy

Expert Opinions & Discussion within the field

  • DeepMind scientists: Reinforcement learning is enough for general AI - "This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence."

  • Graphs at ICLR 2021 - "The field of geometric deep learning is booming, there’s no way around it. And graph neural networks are the rockstars, sitting in the driver seat. Graphs were clearly at the center of a lot of attention at ICLR 2021."


That's all for this week! If you liked this and are not yet subscribed, feel free to subscribe below!

Copyright © 2021 Skynet Today, All rights reserved.