Autonomous lethal drones used in Libya, King County bans government use of facial recognition, and more!

Last Week in AI #119

Mini Briefs

Have autonomous robots started killing in war?

A recent U.N. report states that an autonomous drone may have been used to attack soldiers in the conflict in Libya last year. While drones have long been used as weapons, humans have always controlled which targets they engage and whether they fire. If verified, this would be the first known instance of an algorithm running on a drone autonomously identifying targets and launching attacks without human control. The proliferation of such autonomous drones poses real dangers, and many countries, including the U.S., have refused to agree to a global ban on lethal autonomous weapons.

See our article covering this subject in depth here

King County is first in the country to ban facial recognition software

The county recently approved a ban on government use of facial recognition software, addressing widespread concerns about the technology's impact on privacy, bias, and potential for harm, especially when used by law enforcement. There has been a growing backlash against facial recognition software over the last year and a half, and bans like this will likely continue. No government agency in King County, including police departments, had used facial recognition software before the ban. The ban does not affect the use of facial recognition by companies or private citizens.


Check out our weekly podcast covering these stories! 
Website | RSS | iTunes | Spotify | YouTube


Advances & Business

Concerns & Hype

Analysis & Policy

  • How The World Is Updating Legislation in the Face Of Persistent AI Advances - "With the ability to create devices and systems capable of autonomous decisions arises the need for legislation to monitor artificial intelligence. Amazon's now-scrapped AI recruiting tool is a prime example, where it was discovered that the AI tool had a bias towards men. Recently, 13 states across..."

  • Ex-Google boss slams transparency rules in Europe's AI bill - "Eric Schmidt, who leads a U.S. government initiative to integrate AI into national security, warned today that the EU's AI transparency requirements would be "very harmful to Europe.""

  • Energy and Policy Considerations in Deep Learning for NLP - "Overview: As we inch towards ever-larger AI models, we have entered an era where achieving state-of-the-art results has become a function of access to huge compute and data infrastructure in addition to fundamental research capabilities."

  • Biden's AI czar focuses on societal risks, preventing harm - "A first task for Parker, who took on the role in the waning days of the Trump administration, is adapting to priorities set by the Biden administration. Those include confronting the societal risks of AI and putting the technology to work on causes such as health equity and reducing climate change."

Expert Opinions & Discussion within the field

  • GPT-3: a disappointing paper - "This post is a compilation of two posts I recently made on tumblr. For context: I have been an enthusiastic user of GPT-2, and have written a lot about it and transformer models more generally."


That's all for this week! If you liked this and are not yet subscribed, feel free to subscribe below!

Recent Articles:

Copyright © 2021 Skynet Today, All rights reserved.