AI for job interviews, development of autonomous weapons, and more!

Last Week in AI #104

Mini Briefs

Objective or Biased

A study conducted by Bayerischer Rundfunk (a German broadcasting company) found that software that automatically assesses job candidates from recorded videos is easily swayed by factors unrelated to the interviewee. To establish a baseline, professional actors were hired to record job interview videos with as little emotion as possible. The experimenters then changed the videos in several ways that should not affect the assessment outcome, including adding framed pictures or bookshelves to the background, having the interviewees wear glasses or headscarves, and changing the brightness of the video. Nevertheless, the software gave significantly different results for these modified videos. For example, adding a bookshelf to the background significantly increased a candidate's supposed "conscientiousness" score. In addition, the software does not seem to pay attention to the audio at all: the scores were unchanged when the audio track was swapped with that of a different actor, and even when the audio was removed completely. All this suggests that AI-based HR tools, in addition to raising pointed ethical and privacy concerns, are also not very useful.
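The methodology here amounts to a simple black-box perturbation test: score a baseline video, score visually modified copies that should be equivalent, and flag any trait scores that shift. The sketch below shows the general shape of such a test; everything in it is hypothetical. `score_video` stands in for the vendor's proprietary scoring API (faked here with deterministic dummy values so the snippet runs end to end), and the file names and tolerance are made up for illustration.

```python
import hashlib

def score_video(path: str) -> dict[str, float]:
    """Stand-in for a proprietary video-assessment API (an assumption;
    the real system is a black box). Returns deterministic fake trait
    scores derived from the path so the sketch is runnable."""
    h = hashlib.sha256(path.encode()).digest()
    traits = ["openness", "conscientiousness", "extraversion"]
    return {t: h[i] / 255 for i, t in enumerate(traits)}

def perturbation_test(baseline: str, variants: dict[str, str]) -> None:
    """Score a baseline video and modified copies that should be
    equivalent, then report any trait scores that shift."""
    base_scores = score_video(baseline)
    for change, path in variants.items():
        scores = score_video(path)
        for trait, base in base_scores.items():
            delta = scores[trait] - base
            if abs(delta) > 0.05:  # arbitrary tolerance for "significant"
                print(f"{change}: {trait} shifted by {delta:+.2f}")

perturbation_test(
    baseline="actor1_neutral.mp4",  # hypothetical file names
    variants={
        "bookshelf background": "actor1_bookshelf.mp4",
        "glasses": "actor1_glasses.mp4",
        "headscarf": "actor1_headscarf.mp4",
        "brighter video": "actor1_bright.mp4",
        "audio removed": "actor1_noaudio.mp4",
    },
)
```

In the study, a robust assessment tool would show no significant shifts on any of these variants; the finding was that it shifted on nearly all of them.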

Can Computer Algorithms Learn to Fight Wars Ethically?

This report gives an overview of what people in the U.S. military think about developing and deploying lethal autonomous weapons. The military, from commanders to cadets, seems broadly aware of the limitations and opaqueness of modern AI software, which is very difficult to test and certify. Still, it feels pressure to push ahead with lethal autonomous weapons out of a fear of missing out. This is a cause for real worry, as AI evangelists within the Pentagon use FOMO as an excuse to deploy immature technology, disregarding serious ethical concerns in the process. One defense official remarks, “I fear our lack of keeping up [...] I don’t fear us losing our ethical standards, our moral standards.” It is difficult to see how ethical standards are not being lost when the same official says “it doesn’t make sense to study anything in the era of AI” and misleadingly describes AI as “a living, breathing system.”

A poll from December 2020 found that the majority of citizens in 26 of the 28 surveyed countries, including the U.S., Russia, and China, opposed the development of AI weapons. But public sentiment doesn't necessarily track with the military's decision-making.

The military is going to put AI into its weapons despite debates about morality, [the Pentagon’s Joint Artificial Intelligence Center] told me: “We are going to do it. We’re going to do it deliberately. We’re going to follow policy.”

For more on this topic, see our overview on The Rise of “Killer Robots” and the Race to Restrain Them.

News

Advances & Business

Concerns & Hype

Analysis & Policy


That's all for this week! If you liked this and are not yet subscribed, feel free to subscribe below!

Recent Articles:

Copyright © 2021 Skynet Today, All rights reserved.