Discover more from Last Week in AI
Autonomous lethal drones used in Libya, King County bans government use of facial recognition, and more!
Last Week in AI #119
A recent U.N. report states that an autonomous drone may have been used to attack soldiers in the conflict in Libya last year. While drones have been used as weapons for a long time, which targets they engage and whether or not they fire have always been controlled by humans. If verified, this appears to be the first time that an algorithm running on a drone has autonomously identified targets and launched attacks without human control. There is real danger in the proliferation of such autonomous drones, and many countries, including the U.S., have refused to agree to a global ban on lethal autonomous weapons.
See our article covering this subject in depth here
King County, Washington, recently approved a ban on government use of facial recognition software, addressing many people's concerns about the technology's impact on privacy, bias, and potential for harm, especially when used by law enforcement. There has been a growing backlash against facial recognition software over the last year and a half, and bans like this will likely continue. No government agency in King County, including police departments, had used facial recognition software before the ban. The ban does not affect use of facial recognition by companies or private citizens.
Advances & Business
Chinese AI lab challenges Google, OpenAI with a model of 1.75 trillion parameters - PingWest - "In the race to build the underlying technologies that can power the next wave of AI revolution, a Chinese lab just toppled OpenAI"
Naver unveils supersized AI platform HyperCLOVA optimized to Korean tongue - "South Korea's Naver Corp. has unveiled its supersized artificial intelligence (AI) platform "HyperCLOVA", a new Korean-based language model system enabling human brain-like linguistic capacity."
NYU, Facebook & CIFAR Present "True Few-Shot Learning" for Language Models Whose Few-Shot Ability They Say Is Overestimated - "A research team from New York University, Facebook AI, and a CIFAR Fellow in Learning in Machines & Brains raise doubts regarding large-scale pretrained language models' few-shot learning abilities. The researchers re-evaluate such abilities with held-out examples unavailable, which they propose constitutes 'true few-shot learning.'"
The power of synthetic images to train AI models - "But what do you do when good data just isn't available? Increasingly, enterprises are discovering the gap can be filled with synthetic data - a move that promises to revolutionize the industry, enabling more companies to use AI to improve processes and solve business problems with machine intelligence."
Google Ventures-backed Merlin Labs is building AI that can fly planes - "The company has been working with the Air Force to enable unmanned cargo plane flights. Merlin Labs, which develops autonomous systems that fly airplanes, has emerged from stealth with $25 million in funding from Google Ventures and others."
Self-Driving Truck Completes 950-Mile Trip 10 Hours Faster Than Human Driver - "The road to fully autonomous trucks is a long and winding one, but it's not an impossible one, and it seems to be in closer reach than fully self-driving cars."
Team develops machine learning platform that mines nature for new drugs - "Researchers in the Computational Biology Department have developed a new process that could reinvigorate the search for natural product drugs to treat cancers, viral infections and other ailments."
Concerns & Hype
Tesla doubles down on camera-based Autopilot amid growing scrutiny - "Tesla Inc (TSLA.O) said on Tuesday it will drop a radar sensor in favour of a camera-focused Autopilot system for its Model 3 and Model Y vehicles in North America starting this month."
Clearview AI - The Facial Recognition Company Embraced By U.S. Law Enforcement - Just Got Hit With A Barrage Of Privacy Complaints In Europe - "Clearview AI, the American purveyor of facial recognition tech reportedly used by thousands of government and law enforcement agencies throughout the world, is facing an onslaught of legal complaints across Europe Thursday for allegedly breaching the bloc's strict data protection laws."
Machine learning is booming in medicine. It's also facing a credibility crisis - "The mad dash accelerated as quickly as the pandemic. Researchers sprinted to see whether artificial intelligence could unravel Covid-19's many secrets - and for good reason. There was a shortage of tests and treatments for a skyrocketing number of patients."
Google says it's committed to ethical AI research. Its ethical AI team isn't so sure. - "Six months after Timnit Gebru left, Google's ethical artificial intelligence team is still in a state of upheaval."
Don't End Up on This Artificial Intelligence Hall of Shame - "When a person dies in a car crash in the US, data on the incident is typically reported to the National Highway Traffic Safety Administration. Federal law requires that civilian airplane pilots notify the National Transportation Safety Board of in-flight fires and some other incidents."
Analysis & Policy
How The World Is Updating Legislation in the Face Of Persistent AI Advances - "With the ability to create devices and systems capable of autonomous decisions arises the need for legislation to monitor artificial intelligence. Amazon's now-scrapped AI recruiting tool is a prime example, where it was discovered that the AI tool had a bias towards men. Recently, 13 states across…"
Ex-Google boss slams transparency rules in Europe's AI bill - "Eric Schmidt, who leads a U.S. government initiative to integrate AI into national security, warned today that the EU's AI transparency requirements would be "very harmful to Europe.""
Energy and Policy Considerations in Deep Learning for NLP - "Overview: As we inch towards ever-larger AI models, we have entered an era where achieving state-of-the-art results has become a function of access to huge compute and data infrastructure in addition to fundamental research capabilities."
Biden's AI czar focuses on societal risks, preventing harm - "A first task for Parker, who took on the role in the waning days of the Trump administration, is adapting to priorities set by the Biden administration. Those include confronting the societal risks of AI and putting the technology to work on causes such as health equity and reducing climate change."
Expert Opinions & Discussion within the field
GPT-3: a disappointing paper - "This post is a compilation of two posts I recently made on tumblr. For context: I have been an enthusiastic user of GPT-2, and have written a lot about it and transformer models more generally."
On Thinking Machines, Machine Learning, And How AI Took Over Statistics - "A history of the early days of AI"
That's all for this week! If you aren't subscribed and liked this, feel free to subscribe below!
Copyright © 2021 Skynet Today, All rights reserved.