Last Week in AI News #80
Welcome to Last Week in AI, our newsletter about each week's most important news related to Artificial Intelligence.
Forward this email to share this week's news with friends.
The term “ethical AI” is finally starting to mean something
As warnings about the dangers of AI have come to the fore, industry and academia have answered with research, conferences, and institutes all aimed at fair, ethical AI. The most recent wave, unlike the first, which was dominated by philosophers, has focused on technical approaches to the problem. But technical fixes alone cannot wring all the water from the sponge. To ensure that profit-seeking institutions and other groups producing AI technologies are careful in their quest to advance applications of the technology, a strong hand will be needed both to oversee the research itself and to decide whether or not that research should be used. Finally, a new wave in ethical AI has taken us to “practical mechanisms for rectifying power imbalances and achieving individual and societal justice.” The shift toward socio-technical issues has laid the ground for the UK’s Court of Appeal to find police use of facial recognition unlawful and call for a new legal framework, for high school students to protest after their marks were downgraded by an algorithmic system, and for countries like New Zealand to publish charters and form task forces to provide guidance on the use of algorithmic systems. Practical tools for accountability, and the full participation of research institutes, activists, and campaigners to make sure governments help in addressing these risks, will be vital. Fortunately, the movement toward more ethical use of AI seems to have been mobilized by social pressure. That pressure will need to be sustained, and perspectives beyond European and North American actors will need to be surfaced, in order to make AI and data truly work for everyone.
Hulu deepfaked its new ad. It won’t be the last.
Rather than going through the complicated logistics of flying out actors during a pandemic, Hulu decided to simply fake its latest ad. Using deepfake algorithms trained on footage of athletes such as Damian Lillard and Skylar Diggins-Smith, the Hulu clip superimposed the faces of the stars onto body doubles. This is not a new trend: State Farm ran a commercial in April that featured deepfakes, while AI video startup Synthesia, “which offers customers the ability to produce videos without actors and film crews, has seen a 10x growth in demand since the beginning of the pandemic.” Synthesia COO Steffen Tjerrild predicts that “AI will eventually do to video production what apps like Instagram filters did to photography.” Indeed, the technology appears to be advancing quickly and the monetary incentives will be difficult for companies to ignore. Just as much of our entertainment has been pervaded by CGI, soon it may be populated by deepfakes.
Advances & Business
Facebook is training robot assistants to hear as well as see - In June 2019, Facebook’s AI lab, FAIR, released AI Habitat, a new simulation platform for training AI agents. It allowed agents to explore various realistic virtual environments, like a furnished apartment or cubicle-filled office.
The robot revolution has arrived - If you’re like most people, you’ve probably never met a robot. But you will. I met one on a windy, bright day last January, on the short-grass prairie near Colorado’s border with Kansas, in the company of a rail-thin 31-year-old from San Francisco named Noah Ready-Campbell.
Please remain calm while the robot swabs your nose - If you’ve been tested for COVID-19 then you’ve probably experienced the unpleasantness of a nasal swab. Someone takes a long-handled cotton swab and sticks it up your nose, way up your nose, until it reaches the back of the mucus-cave that is your nasal cavity.
How AI and Art Hold Each Other Accountable - The arts have a major role to play in the fairness of our technological future.
The utopian promise and dystopian potential of real-time detection of police, fire, and medical emergencies - In 2014, John Garofolo went to Baltimore to visit Lt. Samuel Hood of the Baltimore Police Department. Garofolo was previously head of Aladdin, a program in the Office of the Director of National Intelligence to automate analysis of a massive number of video clips.
How Close Are Computers to Automating Mathematical Reasoning? - AI tools are shaping next-generation theorem provers, and with them the relationship between math and machine.
AI Algorithm Reaches Equivalent Accuracy of Average Radiologist - At least one algorithm is ready to pinpoint which women have breast cancer without additional oversight or interpretation needed from the radiologist. That’s according to one research team that compared the performance of several AI tools.
Impact of Go AI on the professional Go world - Professional Go players are often referred to as one of two types, tournament players or teaching players. Tournament players are the ones who spend most of their time and energy training and competing in tournaments, and most of their income is tournament prize money.
Concerns & Hype
GPT-3, Bloviator: OpenAI’s language generator has no idea what it’s talking about - Our tests show that the popular AI still has a poor grasp of reality. Since OpenAI first described its new AI language-generating system called GPT-3 in May, hundreds of media outlets (including MIT Technology Review) have written about the system and its capabilities.
GPT-3 Is an Amazing Research Tool. But OpenAI Isn’t Sharing the Code. - For years, A.I. research lab OpenAI has been chasing the dream of an algorithm that can write like a human.
The Impact of AI on Journalism - Back in 2014, the Los Angeles Times published a report about an earthquake three minutes after it happened. This feat was possible because a staffer had developed a bot (a software robot) called Quakebot to write automated articles based on data generated by the US Geological Survey.
Participation-washing could be the next dangerous fad in machine learning - Many people already participate in the field’s work without recognition or pay. The AI community is finally waking up to the fact that machine learning can cause disproportionate harm to already oppressed and disadvantaged groups. We have activists and organizers to thank for that.
Memers are making deepfakes, and things are getting weird - The rapidly increasing accessibility of the technology raises new concerns about its abuse. Grace Windheim had heard of deepfakes before. But she had never considered how to make one.
Analysis & Policy
Trump Slashes Research Funding but Raises the US Federal AI Budget by 34 Percent - Trump announced the 2021 fiscal year budget proposal on Monday, further slashing the budget for federally funded research projects; the research budgets of major scientific institutions have seen double-digit cuts.
Expert Opinions & Discussion within the field
First Woman Director At MIT CS AI Lab: “Want More Women In STEM? Inspire Them Early.” - We decide on our careers long before we ever set foot in our workplace. We take cues from our family and from dramatized media depictions of professionals who often look and act nothing like their real-life counterparts.
That’s all for this week! If you are not subscribed and liked this, feel free to subscribe below!
IBM, Microsoft, and Amazon Halt Sales of Facial Recognition to Police, Call for Regulations
Bots, Lies, and DeepFakes — Online Misinformation and AI's Role in it
Retraining as a Response to Automation — Promising, but Only if Done Right
Humans Who Are Not Concentrating Are Not General Intelligences
AI Strategies of U.S., China, and Canada in Global Governance, Fairness, and Safety
Artificial Intelligence — The Revolution Hasn’t Happened Yet
Boston Dynamics' robots — impressive, but far from the Terminator