

An overview of the big AI-related stories of the first half of 2021
With 2021 quickly approaching its second half, we’d like to reflect on what’s happened in AI during a year that began in the midst of the pandemic. This overview is based on our curated news feed Last Week in AI — if you were referred by a friend, go to lastweekin.ai to subscribe and/or follow us on Twitter @LastWeekinAI. It reflects the roughly 500 articles we’ve included in the newsletter in 2021 so far:
Digging a bit deeper, we find that COVID-19 is no longer at the forefront of the news, with facial recognition, bias, and deepfakes being covered most:
Among institutions, Google still receives by far the most coverage:
But enough overview – let’s go through the most significant articles we’ve curated from the past year, month by month.
As with our newsletter, these articles cover Advances & Business, Concerns & Hype, Analysis & Policy, and in some cases Expert Opinions & Discussion within the field. They are presented in chronological order and represent a curated selection that we believe is particularly noteworthy. Click on the name of each month for the full newsletter release that opened that month.
January
A big topic carried over from last year is the explosion of facial recognition software and the growing public pushback it is receiving, and January started off with new lawsuits and regulations on its use. Separately, OpenAI publicized impressive new work: DALL-E, a model able to generate pictures from language prompts, along with the companion model CLIP, which connects images with text descriptions.
Why we must democratize AI to invest in human prosperity, with Frank Pasquale
Use of Clearview AI facial recognition tech spiked as law enforcement seeks to identify Capitol mob
February
Another thread carried over from last year is the turmoil at Google’s Ethical AI team. Following the firing of researcher Timnit Gebru, Google fired Margaret Mitchell, leading to more controversy in the press. Again on the topic of facial recognition, a crowdsourced map was produced by Amnesty International to expose where these cameras might be watching. An OECD task force led by former OpenAI policy director Jack Clark was formed to calculate compute needs for national governments in an effort to craft better-informed AI policy.
These crowdsourced maps will show exactly where surveillance cameras are watching
Why the OECD wants to calculate the AI compute needs of national governments
Deepfake porn is ruining women’s lives. Now the law may finally ban it
Band of AI startups launch ‘rebel alliance’ for interoperability
Google fires researcher Margaret Mitchell amid chaos in AI division
March
The AI Index report released in March paints an optimistic outlook on the future of AI development — we are seeing significant increases in private AI R&D, especially in healthcare. Simultaneously, concerns about AI continue to manifest. Karen Hao of the MIT Technology Review interviewed an important player in Facebook’s AI Ethics group and found that Facebook was over-focusing on AI bias at the expense of grappling with the more destructive features of its AI systems. In another development stemming from Google’s Ethical AI fallout, a researcher publicly rejected a grant from the behemoth.
Clearview AI sued in California by immigrant rights groups, activists
Underpaid Workers Are Being Forced to Train Biased AI on Mechanical Turk
April
Another report released in April, the Analysis of the U.S. AI Workforce, shows that AI occupations grew four times as fast as all U.S. occupations. This month also saw more reports of the EU’s growing regulation of commercial AI applications in high-risk areas — an important legislative framework that may be borrowed by other governments in the future.
Boston Dynamics unveils Stretch: a new robot designed to move boxes in warehouses
EU is cracking down on AI, but leaves a loophole for mass surveillance
May
Late May saw Google announce new large language models that could significantly change how search and other Google products work in the future. This builds on the explosion in language model sizes over the last two years. Perhaps this drive to commercialize large language models was behind the firing of its Ethical AI team leads months before, who at the time were focused on characterizing the potential harms of using such models.
June
A worrying report from the U.N. surfaced in June describing the possibility that a drone autonomously targeted and attacked soldiers during last year’s Libya conflict. The report has yet to be independently verified, but the proliferation of lethal autonomous weapons looms large, as there are no global agreements limiting their use. Also as a sign of things to come, June saw King County ban government use of facial recognition technology, the first county-level regulation of its kind in the U.S.
King County is first in the country to ban facial recognition software
Google is using AI to design its next generation of AI chips more quickly than humans can
The False Comfort of Human Oversight as an Antidote to A.I. Harm
DeepMind scientist calls for ethical AI as Google faces ongoing backlash
LinkedIn’s job-matching AI was biased. The company’s solution? More AI.
Conclusion
If you’ve enjoyed this piece, subscribe to our ‘Last Week in AI’ newsletter!
Also, check out our weekly podcast covering these stories! Website | RSS | iTunes | Spotify | YouTube