Forced biased labeling in crowdsourced data, testing self-driving cars on past crashes, and more!

Last Week in AI #108

Mini Briefs

Underpaid Workers Are Being Forced to Train Biased AI on Mechanical Turk

Modern AI systems often require large datasets labeled by humans, and companies looking to leverage AI use online platforms, like Amazon's Mechanical Turk, to distribute data labeling tasks to remote workers. Such tasks can include ranking top results in "featured snippets" of search engines, classifying which content is objectionable and should be censored on YouTube, and listing emotional attributes used to describe artworks. Because many of these labeling tasks rely on subjective human judgment, individual biases will creep into the dataset and ultimately into the AI systems trained on this data.

However, a more challenging source of bias may be "majority thinking." Platforms often reject workers' answers if they don't fall in line with the expected outputs and answers given by the majority of other workers. While this serves as a mechanism for quality control, it also propagates majority bias, penalizing workers who give creative or correct answers that happen to be unpopular (see the sketch after the quote below). From an interviewed worker:

I sometimes find myself thinking like, I think this is a wrong answer ... but I know that if I say what I really think I will get booted from the job, and I will get bad scores
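To make the mechanism concrete, here is a minimal illustrative sketch (not any platform's actual code) of majority-vote quality control. The worker IDs, answers, and rejection cutoff are all hypothetical; the point is that a worker who diverges from the majority ends up with a low agreement score whether or not the majority is right.

```python
# Illustrative sketch only (not any platform's actual code): a simple
# majority-vote quality check like the one described above. A worker who
# disagrees with the majority gets a lower agreement score, whether or
# not the majority answer is actually correct.
from collections import Counter

def agreement_scores(tasks):
    """tasks: list of dicts mapping worker id -> answer for one task.
    Returns each worker's fraction of answers that match the majority."""
    hits, totals = Counter(), Counter()
    for answers in tasks:
        majority = Counter(answers.values()).most_common(1)[0][0]
        for worker, answer in answers.items():
            totals[worker] += 1
            hits[worker] += int(answer == majority)
    return {w: hits[w] / totals[w] for w in totals}

# Hypothetical example: worker "c" gives a defensible minority answer on
# the second task and is penalized for it.
tasks = [
    {"a": "offensive", "b": "offensive", "c": "offensive"},
    {"a": "fine", "b": "fine", "c": "offensive"},
]
scores = agreement_scores(tasks)
rejected = [w for w, s in scores.items() if s < 0.75]  # hypothetical cutoff
print(scores, rejected)  # {'a': 1.0, 'b': 1.0, 'c': 0.5} ['c']
```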

Replaying real life: how the Waymo Driver avoids fatal human crashes

In a recent study, Waymo tested its self-driving algorithm on all fatal car accidents in Chandler, Arizona, a region where the Waymo taxi service operates, to evaluate how the outcome of those accidents would differ if a Waymo car had been involved. To perform the test, Waymo ran its algorithm in both the initiator role of the accident (e.g. a car that ran a red light) and the responder role (e.g. a car that had the right of way but was hit by the initiator that ran the red light). The Waymo algorithm avoided the accident 100% of the time when it was in the initiator role. In the responder role, the algorithm avoided the accident 82% of the time by taking smooth, evasive actions. For the remaining simulations, 10% of the time the algorithm took action to reduce the severity of the crash, and 8% of the time the accident outcome was unchanged. Notably, all of the cases where the outcome was unchanged were ones where the initiator car struck the rear of the responder car.
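As a rough illustration of how such results get reported, the sketch below tallies counterfactual replay outcomes for the responder role into the three categories above. This is not Waymo's methodology or data; the counts are made up purely to reproduce the reported percentages.

```python
# Illustrative sketch only, with made-up counts (not Waymo's data or code):
# tallying counterfactual replay outcomes for the responder role into the
# categories reported above.
from collections import Counter

simulated_outcomes = ["avoided"] * 41 + ["mitigated"] * 5 + ["unchanged"] * 4
counts = Counter(simulated_outcomes)
total = len(simulated_outcomes)
for outcome in ("avoided", "mitigated", "unchanged"):
    print(f"{outcome}: {counts[outcome] / total:.0%}")
# avoided: 82%, mitigated: 10%, unchanged: 8%
```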

These results are encouraging, although they only apply to the operational domain of the current Waymo car, which contains mostly wide roads, good weather, and not a lot of city driving. Still, performing such tests and sharing the results with the public show Waymo's confidence in the safety of its self-driving cars, even if the domain is limited.

Podcast

Check out our weekly podcast covering these stories! 
Website | RSS | iTunes | Spotify | YouTube

News

Advances & Business

Concerns & Hype

Expert Opinions & Discussion within the field

Explainers


That's all for this week! If you liked this and are not yet subscribed, feel free to subscribe below!

Recent Articles:

Copyright © 2021 Skynet Today, All rights reserved.