

Last Week in AI #108
Forced biased labeling in crowdsourced data, testing self-driving cars on past crashes, and more!

Mini Briefs
Underpaid Workers Are Being Forced to Train Biased AI on Mechanical Turk
Modern AI systems often require large datasets labeled by humans, and companies looking to leverage AI use online platforms like Amazon's Mechanical Turk to distribute data labeling tasks to remote workers. Such tasks can include ranking top results in "featured snippets" of search engines, classifying which content is objectionable and should be censored on YouTube, and listing emotional attributes used to describe artworks. Because many of these labeling tasks rely on subjective human judgment, individual biases can creep into the dataset and ultimately into the AI systems trained on it.
However, a more challenging source of bias may be "majority thinking." Platforms often reject workers' answers if they do not fall in line with the expected outputs and answers given by the majority of other workers. While this serves as a quality-control mechanism, it also propagates majority bias, penalizing workers who give creative answers, or correct answers that happen to be unpopular. From an interviewed worker:
I sometimes find myself thinking like, I think this is a wrong answer ... but I know that if I say what I really think I will get booted from the job, and I will get bad scores
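The quality-control scheme described above amounts to simple majority voting over workers' answers. Here is a minimal Python sketch of that mechanism (the function and worker data are hypothetical illustrations, not any platform's actual code), showing how a dissenting worker is rejected even when their answer happens to be the correct one:

```python
# A minimal sketch of majority-vote quality control: answers that
# disagree with the consensus are rejected, even when the minority
# answer is the correct one. All names and data are hypothetical.
from collections import Counter

def majority_vote_review(answers):
    """Accept only answers matching the most common response.

    `answers` maps worker id -> label for a single labeling task.
    Returns (consensus_label, accepted_workers, rejected_workers).
    """
    consensus, _ = Counter(answers.values()).most_common(1)[0]
    accepted = [w for w, a in answers.items() if a == consensus]
    rejected = [w for w, a in answers.items() if a != consensus]
    return consensus, accepted, rejected

# Toy task: suppose the true label is "satire", but most workers read
# the post literally, so the one correct worker gets rejected.
answers = {"w1": "hate speech", "w2": "hate speech", "w3": "satire"}
consensus, accepted, rejected = majority_vote_review(answers)
print(consensus)  # "hate speech" -- the majority answer wins
print(rejected)   # ["w3"] -- the dissenting (here, correct) worker is penalized
```

Because a worker's future access to jobs depends on their acceptance rate, the rational strategy under such a scheme is to predict the majority answer rather than give one's honest judgment, which is exactly the dynamic the interviewed worker describes.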
Replaying real life: how the Waymo Driver avoids fatal human crashes
In a recent study, Waymo tested its self-driving algorithm on all fatal car accidents that occurred in Chandler, Arizona, a region where its taxi service operates, to evaluate how the outcomes of those accidents would have differed if a Waymo car had been involved. To perform the test, Waymo placed its algorithm in both the initiator role of each accident (e.g., a car that ran a red light) and the responder role (e.g., a car that had the right of way but was hit by the initiator). The algorithm avoided the accident 100% of the time when it was in the initiator role. In the responder role, it avoided the accident 82% of the time by taking smooth, evasive action; in the remaining simulations, it reduced the severity of the crash 10% of the time, and the accident outcome was unchanged 8% of the time. Notably, all the cases where the outcome was unchanged were ones where the initiator car struck the rear of the responder car.
These results are encouraging, although they apply only to the operational domain of the current Waymo car, which consists mostly of wide roads, good weather, and little dense city driving. Still, performing such tests and sharing the results with the public show Waymo's confidence in the safety of its self-driving cars, even if the domain is limited.
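To make the evaluation protocol concrete, here is a schematic Python sketch of counterfactual crash replay as described above. Everything in it is an invented placeholder, not Waymo's actual simulation code: each recorded crash is re-simulated twice, once with the automated driver in each role, and the outcome is tallied as avoided, mitigated, or unchanged.

```python
# Schematic sketch of counterfactual crash replay; the outcome logic
# and crash data below are hypothetical stand-ins, not Waymo's code.
from collections import Counter

def replay(crash, role):
    """Stand-in for re-simulating `crash` with the automated driver in
    `role`; returns "avoided", "mitigated", or "unchanged". A real
    study would reconstruct and replay the scenario in a simulator."""
    if role == "initiator":
        return "avoided"  # the study reported 100% avoidance here
    # Toy responder outcomes keyed on crash type (invented placeholder).
    return {"rear_end": "unchanged", "left_turn": "mitigated"}.get(
        crash["type"], "avoided")

# Toy stand-in for the set of reconstructed fatal crashes.
crashes = [{"type": "red_light"}, {"type": "left_turn"}, {"type": "rear_end"}]
tallies = {role: Counter(replay(c, role) for c in crashes)
           for role in ("initiator", "responder")}
print(tallies)  # per-role counts of avoided / mitigated / unchanged
```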
Podcast
Check out our weekly podcast covering these stories!
Website | RSS | iTunes | Spotify | YouTube
News
Advances & Business
Quadruped robot automatically adapts in unstructured outdoor environments - "The four-legged robot Dyret can adjust the length of its legs to adapt the body to the surface. Along the way, it learns what works best. This way it is better equipped the next time it encounters an unknown environment."
Writing helper Copy.ai raises $2.9M in a round led by Craft Ventures - "Copy.ai, a startup building AI-powered copywriting tools for business customers, announced a $2.9 million round this morning. The investment was led by Craft Ventures. Other investors took part in the deal, including smaller checks from Li Jin's newly formed Atelier Ventures, and Sequoia."
Researchers Blur Faces That Launched a Thousand Algorithms - "Managers of the ImageNet data set paved the way for advances in deep learning. Now they’ve taken a big step to protect people’s privacy."
Concerns & Hype
A New York Lawmaker Wants to Ban Police Use of Armed Robots - an opinion piece by Patrick Lin, director of the Ethics and Emerging Sciences Group at California Polytechnic State University, San Luis Obispo.
The Departure of 2 Google AI Researchers Spurs More Fallout - "Two academics changed plans to attend an invite-only conference this week; a third says he'll no longer accept funding from the company."
Expert Opinions & Discussion within the field
OpenAI's Sam Altman: Artificial Intelligence will generate enough wealth to pay each adult $13,500 a year - "Artificial intelligence will create so much wealth that every adult in the United States could be paid $13,500 per year from its windfall as soon as 10 years from now. So says Sam Altman, co-founder and president of San Francisco-headquartered, artificial intelligence-focused nonprofit OpenAI."
To Make Good Policy on AI, Talk to Social Workers - "We call for the attention of policymakers to partner with social workers to ensure safe, inclusive, and ethical technologies which cause no harm to marginalized people and promote diverse representation to amplify voices of vulnerable communities. "
Explainers
The Secret Auction That Set Off the Race for AI Supremacy - "How the shape of deep learning—and the fate of the tech industry—went up for sale in Harrah's Room 731, on the shores of Lake Tahoe."
That's all for this week! If you liked this and are not yet subscribed, feel free to subscribe below!
Recent Articles:
DeepMind’s AlphaFold 2—An Impressive Advance With Hyperbolic Coverage
IBM, Microsoft, and Amazon Halt Sales of Facial Recognition to Police, Call for Regulations
Copyright © 2021 Skynet Today, All rights reserved.