Last Week in AI #92
Problems with deep learning, AI-trained counselors, database of AI incidents, and more!
Nov 23, 2020
A new study from Google investigates the "underspecification" problem facing deep learning. In machine learning, a model learns from a training dataset and is then evaluated on a held-out test dataset meant to approximate the data it will encounter in deployment. Good performance on the training dataset often does not translate into good performance on the test dataset, and this gap is commonly attributed to "data shift," the difference between the training and test data distributions.
Underspecification is a different problem that appears to span many deep learning application areas, from vision to language. Here, the issue is that models with similar performance on the training dataset can have wildly different performance on the test dataset. In the short term, this means commercial deep learning applications should do much more testing, perhaps even offering multiple versions of the same model to end users and letting them test and pick whichever works best for their needs. In the long term, additional research is needed to better understand underspecification and improve the training process.
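To make the idea concrete, here is a minimal toy sketch (hypothetical data, not from the Google study): two "models" that agree perfectly on the training data can diverge completely at test time when they happen to rely on different features. In training, a spurious feature is perfectly correlated with the causal one, so the data alone cannot distinguish the two models.

```python
# Each example is (causal_feature, spurious_feature, label).
# In training, the spurious feature is perfectly correlated with the label.
train = [(0, 0, 0), (1, 1, 1), (0, 0, 0), (1, 1, 1)]

# At test time (deployment), the correlation breaks.
test = [(0, 1, 0), (1, 0, 1)]

model_a = lambda x: x[0]  # predicts from the causal feature
model_b = lambda x: x[1]  # predicts from the spurious feature

def accuracy(model, data):
    """Fraction of examples where the model's prediction matches the label."""
    return sum(model(ex) == ex[2] for ex in data) / len(data)

print(accuracy(model_a, train), accuracy(model_b, train))  # 1.0 1.0
print(accuracy(model_a, test), accuracy(model_b, test))    # 1.0 0.0
```

Both models are equally consistent with the training data, so the training process is free to pick either one; which it picks can come down to arbitrary choices like random seeds, which is why the study recommends stress-testing models beyond the standard test set.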
In a recent report, Facebook published statistics about hate speech takedowns on its platform. Specifically, the company estimates that about 0.1% of the content Facebook users see violates its hate speech rules, and that most (95%) of hate speech takedowns were done proactively, without user reports. This is a large jump from just 23.6% in 2017, driven in part by AI moderation tools that can detect hate speech automatically.
For now, however, it seems content moderation still relies on human moderators: a group of 200 moderators signed "an open request for better coronavirus protections," as they are only allowed to review sensitive content in the workplace, not at home. The letter was also critical of automated moderation:
The AI wasn't up to the job. Important speech got swept into the maw of the Facebook filter — and risky content, like self-harm, stayed up
Advances & Business
How The Trevor Project uses AI to help LGBTQ+ youth and train its counselors - "The Trevor Project, a nonprofit organization focused on ending suicide among LGBTQ+ youth, is using artificial intelligence to better meet its mission."
Google's Tree Canopy Lab taps AI to help cities plan tree-planting projects - "Google today announced the launch of Tree Canopy Lab for Los Angeles, a tool that combines AI and aerial imagery to help cities see their current tree canopy coverage and plan future tree-planting projects."
Uber in talks to sell ATG self-driving unit to Aurora - "Eighteen months ago, Uber's self-driving car unit, Uber Advanced Technologies Group, was valued at $7.25 billion following a $1 billion investment from Toyota, DENSO and Softbank's Vision Fund."
Canvas emerges from stealth with AI for drywall installation - "Canvas, a company created in January 2017 that uses machine learning to install drywall at construction sites, emerged from stealth today. Canvas uses a modified JLG lift, robotic arm, and sensors to automate drywall installation."
Rapid Robotics raises $5.5M for pre-programmed manufacturing robots - "Bay Area-based Rapid Robotics today announced it has raised $5.5 million in seed funding in a round led by Greycroft and Bee Partners."
AMP Robotics Partners with Waste Connections to Deploy AI-Guided Recycling Robots - "AMP Robotics Corp. (AMP), a pioneer in artificial intelligence (AI) and robotics used to recover recyclables reclaimed as raw materials for the global supply chain, has signed a long-term agreement with Waste Connections, Inc."
Spacemaker, AI software for urban development, is acquired by Autodesk for $240M - "Autodesk, the U.S. publicly listed software and services company that targets engineering and design industries, has acquired Norway's Spacemaker, a startup that has developed AI-supported software for urban development. The price of the acquisition is $240 million in a mostly all-cash deal."
As the pandemic continues, cleaning robots are showing up for duty at the office - "Autonomous robot cleaners could help share the load"
Researchers investigate why popular AI algorithms classify objects by texture, not by shape - "In a paper accepted to the 2020 NeurIPS conference, Google and Stanford researchers explore the bias exhibited by certain kinds of computer vision algorithms - convolutional neural networks (CNNs) - trained on the open source ImageNet dataset."
OpenAI proposes using reciprocity to encourage AI agents to work together - "Many real-world problems require complex coordination between multiple agents - e.g., people or algorithms."
AI tool may predict movies' future ratings - "Movie ratings can determine a movie's appeal to consumers and the size of its potential audience. Thus, they have an impact on a film's bottom line."
An AI helps you summarize the latest in AI - "Semantic Scholar, a scientific literature search engine, is using recent breakthroughs in natural-language processing to give researchers the tl;dr on papers."
How role-playing a dragon can teach an AI to manipulate and persuade - "Combining natural-language processing and reinforcement learning in a text-based adventure game shows machines how to use language as a tool."
Google's new AI tool turns your terrible drawings into hideous monsters - "Google has released a prototype AI app that turns your sketches into fantastical monsters - with varying degrees of success. The Big G developed the "Chimera Painter" to produce an endless stream of creatures for a fantasy card game in which the different monsters battle."
Concerns & Hype
When AI Systems Fail: Introducing the AI Incident Database - "Today we introduce a systematized collection of incidents where intelligent systems have caused safety, fairness, or other real-world problems: The AI Incident Database (AIID)."
You can't eliminate bias from machine learning, but you can pick your bias - "Bias is a major topic of concern in mainstream society, which has embraced the concept that certain characteristics - race, gender, age, or zip code, for example - should not matter when making decisions about things such as credit or insurance."
When AI Sees a Man, It Thinks 'Official.' A Woman? 'Smile' - "A new paper renews concerns about bias in image recognition services offered by Google, Microsoft, and Amazon."
The AI Telegram bot that abused women is still out of control - "Messaging app Telegram is under pressure to crack down on an AI bot that generated tens of thousands of non-consensual images of women on its platform."
Health systems are using AI to predict severe Covid-19 cases. But limited data could produce unreliable results - "As the United States braces for a bleak winter, hospital systems across the country are ramping up their efforts to develop AI systems to predict how likely their Covid-19 patients are to fall severely ill or even die."
AI research finds a "compute divide" concentrates power and accelerates inequality in the era of deep learning - "AI researchers from Virginia Tech and Western University have concluded that an unequal distribution of compute power in academia is furthering inequality in AI during the deep learning era."
It's Managers, Not Workers, Who Are Losing Jobs To AI And Robots, Study Shows - "Managers, not lower-level employees, are seeing their ranks diminished with the onset of artificial intelligence and robots, a new study out of the University of Pennsylvania Wharton School finds."
Pilot In A Real Aircraft Just Fought An AI-Driven Virtual Enemy Jet For The First Time - "Donning an augmented reality headset in the cockpit, a veteran F-22 pilot just had a dogfight with a projection of a Chinese J-20 fighter."
Making Sense of the AI Landscape - "As AI tools become more commonplace, many businesses find themselves playing catch up when it comes to incorporating these new systems into their existing infrastructure."
That’s all for this week! If you are not subscribed and liked this, feel free to subscribe below!
Copyright © 2020 Skynet Today, All rights reserved.