

Skynet This Week #8: Safe AI deployment, standards talk, government policy, and more!
By Viraat Aryabumi and Aidan Rocke

Our bi-weekly quick take on a bunch of the most important recent media stories about AI for the period 27th August 2018 - 10th September 2018.
Advances & Business
Safety-first AI for autonomous data centre cooling and industrial control
Amanda Gasparik, Chris Gamble, Jim Gao, DeepMind
Rules don’t get better over time, but AI does.
In 2016, DeepMind developed a machine-learning-based recommendation system to reduce Google’s data center cooling bills by up to 40%. Based on feedback from operators, DeepMind has now developed a system that autonomously controls data center cooling with strict safety constraints in place. This new system delivers savings of 30% on average, with further improvement expected over time.
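The post stays high-level, but the core idea of layering hard safety constraints on top of a learned controller can be illustrated with a minimal sketch. Everything below (the bounds, the names, the `model.predict` interface) is hypothetical, not DeepMind’s actual system:

```python
import numpy as np

# Hypothetical verified-safe operating bounds per actuator
# (e.g., setpoint temperatures) - NOT DeepMind's real values.
SAFE_LOW = np.array([15.0, 0.2])
SAFE_HIGH = np.array([25.0, 0.9])

def safe_action(model, state):
    """Take the learned controller's recommendation, then clamp it to
    independently verified safe bounds, so that a bad prediction can
    never push the plant outside its safe operating envelope."""
    proposed = model.predict(state)  # learned controller's suggestion
    return np.clip(proposed, SAFE_LOW, SAFE_HIGH)
```

In a real deployment, clamping like this would be only one of several safety layers alongside things like human override.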
OpenAI’s Dota 2 defeat is still a win for artificial intelligence
James Vincent, The Verge
OpenAI Five, an AI system designed by OpenAI to play 5v5 Dota 2, recently lost a tightly fought best-of-three series against a team of 99th-percentile Dota players, with the humans winning two of the three matches. In spite of the loss, humans are now learning from OpenAI Five’s gameplay. Machine learning experts also believe that the bots, which can play through 100 lifetimes of human Dota experience per day, have a lot more potential in terms of both research and engineering improvements.
ANZ bank unpicking neural networks in effort to avoid dangers of deep learning
Chris Duckett, ZDNet
Deep learning is being applied in all sorts of areas, but with the rising popularity of such tools comes the need to be aware of their limitations. Without care, an AI algorithm may be unintentionally biased or misused. So it is good to read that ANZ bank is not rushing its shiny new neural-network-based risk assessment model into production, and is instead working to understand it better first. ANZ’s Head of Retail Risk, Jason Humphrey, nicely summed up the things to keep in mind when using neural networks:
“In a deep-learning environment, it becomes very difficult to work out the factors that were the most predictive for this instance, or for this customer,” he said. “Before we roll out any deep-learning models, we need to solve for that – even though it’s not legislated here. I think it is good practice to be able to know why decisions are being made.” … “The biggest danger in terms of deep learning is because it is bringing in new attributes and new correlations we’ve never seen, is that the things that traditionally we have never seen that could be creating bias, that we wouldn’t know to look for to say, ‘that’s something we shouldn’t do’,” Humphrey explained.
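Surfacing which inputs drive a model’s decisions is exactly what model-agnostic explainability tools try to do. As a rough illustration of one common starting point, permutation importance, here is a sketch using scikit-learn (the synthetic data stands in for a credit-risk dataset; nothing here reflects ANZ’s actual model):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Synthetic placeholder data standing in for a credit-risk dataset.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# large drops flag the features the model leans on most heavily.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Techniques like this reveal which features matter, though not why, which is part of why Humphrey wants the problem solved before deployment.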
These Entrepreneurs Are Taking on Bias in Artificial Intelligence
Liz Webber, Entrepreneur.com
As AI software is increasingly used for decision-making in everyday life, a growing number of AI scientists and entrepreneurs are stepping up to the challenge of biased decision-making caused by biased datasets. Proposed solutions range from challenging standard practices in machine learning to diversifying the talent pool of AI researchers and engineers. Notably, bias in machine learning algorithms is often a reflection of the socioeconomic homogeneity of the developers who built them.
Concerns & Hype
I Used AI To Clone My Voice And Trick My Mom Into Thinking It Was Me
Charlie Warzel, Buzzfeed News
As we have covered before, the ability of AI algorithms to produce audio and video that convincingly mimics people is a topic of significant concern. Buzzfeed’s Charlie Warzel has documented how close such algorithms are to widespread usability - he was already able to use a commercial product to have a ‘conversation’ with his mom using audio entirely generated by the algorithm. We may all soon have to become a little more skeptical of the voices we hear over the phone…
“Not only did my Lyrebird voice fool my mom, she had a hard time believing it was created by AI. “I didn’t know something like that was possible,” she told me later on, reminding me that audio and video manipulation to her sounds more like science fiction than reality. “I never doubted for a second that it was you.””
Factsheets for AI Services
Aleksandra Mojsilovic, IBM Research
“Concerns about safety, transparency, and bias in AI are widespread, and it is easy to see how they erode trust in these systems. Part of the problem is a lack of standard practices to document how an AI service was created, tested, trained, deployed, and evaluated; how it should operate; and how it should (and should not) be used. To address this need, my colleagues and I recently proposed the concept of factsheets for AI services.”
IBM Research has proposed using factsheets for AI services to increase transparency and engender trust in them. These factsheets, which would be voluntarily released by AI service providers, aim to effectively measure and communicate performance with respect to fairness, safety, reliability, explainability, and accountability.
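The proposal is described in prose, but a factsheet is essentially structured metadata shipped alongside a service. A minimal sketch of what one might look like in code (the fields are our illustration of the flavor, not IBM’s actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class AIServiceFactsheet:
    """A voluntary disclosure published alongside an AI service.
    Field names are illustrative, not IBM's proposed schema."""
    service_name: str
    intended_use: str
    training_data: str                                  # data provenance
    test_results: dict = field(default_factory=dict)    # e.g. accuracy by subgroup
    fairness_checks: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)

factsheet = AIServiceFactsheet(
    service_name="loan-screening-v2",
    intended_use="Pre-screening consumer loan applications; not final decisions.",
    training_data="Anonymized US applications, 2015-2017.",
    fairness_checks=["approval-rate gap across age bands below 5%"],
    known_limitations=["Untested on non-US applicants"],
)
```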
U.S. Senator Bans Funding for Beerbots That Don’t Exist
Evan Ackerman, IEEE Spectrum
In 2015, a team from MIT developed algorithms for coordination between multiple robots under uncertainty. The students set up a beer delivery task to demonstrate the effectiveness of their algorithms. US Senator Jeff Flake has taken this demo out of context, using it as an example of wasteful spending by the Department of Defense.
Analysis & Policy
Defense Department pledges billions toward artificial intelligence research
Drew Harwell, Washington Post
DARPA has announced plans to invest $2 billion in programs aiming to advance artificial intelligence. These programs will come in addition to the more than 20 existing programs dedicated to AI, and will focus on logistical, ethical, safety, and privacy problems as well as explainable AI. This move puts the US at odds with Silicon Valley companies and academics who have been vocal about not developing AI for military use.
“This is a massive deal. It’s the first indication that the United States is addressing advanced AI technology with the scale and funding and seriousness that the issue demands,” said Gregory C. Allen, an adjunct fellow specializing in AI and robotics for the Center for a New American Security, a Washington think tank. “We’ve seen China willing to devote billions to this issue, and this is the first time the U.S. has done the same.”
Inside the United Nations’ effort to regulate autonomous killer robots
Sono Motoyama, The Verge
The Verge’s Sono Motoyama sat down with the chair of the United Nations’ Convention on Conventional Weapons, Amandeep Gill, to talk about lethal autonomous weapons. Gill discusses why policy should be tech-neutral, the questions he hopes the convention will address, and how AI weapons development differs from previously regulated weapons like nuclear bombs.
“I think making policy out of fear is not a good idea. So as serious policy practitioners, we have to look at what has become of the situation in terms of technology development and what is likely to happen.”
China’s AI push raises fears over widespread job cuts
Yuan Yang, The Financial Times
“Automation has replaced the jobs of up to 40 per cent of workers in some Chinese industrial companies over the past three years, highlighting the effects of Beijing’s push to upgrade its technological base and become a world superpower in artificial intelligence.”
China’s government is making a strong push to upgrade manufacturing tech, but the increased automation has correspondingly led to increased job losses for the low-skilled laborers whose work is being automated. An interesting development, and a good reminder that AI should be understood as the next wave of labor automation.
Expert Opinions & Discussion within the field
Introducing the Inclusive Images Competition
Tulsee Doshi, Google AI Blog
Google has announced the Inclusive Images Competition on Kaggle. The competition aims to further the development of ML methods that are robust and inclusive even when learning from imperfect data sources. The challenge involves using Open Images, an image classification dataset sampled largely from North America and Europe; models trained on this dataset will then be evaluated on crowdsourced images from different geographic regions across the globe. The deadline for the challenge is November 5th.
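The competition’s underlying measurement, whether accuracy holds up in regions the training data under-represents, can be sketched as a simple per-region evaluation (hypothetical interface, not Kaggle’s actual scoring code):

```python
from collections import defaultdict

def accuracy_by_region(model, examples):
    """Score a classifier separately per geographic region, exposing
    gaps that a single global accuracy number would hide.
    `examples` is an iterable of (image, label, region) triples."""
    correct, total = defaultdict(int), defaultdict(int)
    for image, label, region in examples:
        total[region] += 1
        correct[region] += int(model.predict(image) == label)
    return {region: correct[region] / total[region] for region in total}
```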
Should Evil AI Research Be Published? Five Experts Weigh In
Dan Robitzski, Futurism
While “Should evil AI research be published?” reads like a rhetorical question, it makes for an interesting thought experiment. Five AI experts weighed in, with answers ranging from publishing it openly to weighing responsible use and impact before publication.
Explainers
Why Love Generative Art?
Jason Bailey, Artnome.com
With AI-generated art gaining popularity, this well-written post on the history of generative art is a compelling read. It covers the origins of the field, looks at the artists who have shaped generative art, and argues that generative art is still art, requiring significant human contribution.
Face detection - An overview and comparison of different solutions
David Pacassi Torrico, liip.ch
This article looks at the methods, pricing and results of commercially available face detection systems from Amazon, Google, IBM and Microsoft.
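For a sense of what calling one of these services looks like, here is a minimal sketch using Amazon Rekognition via boto3 (AWS credentials and the image file are assumed; the other three providers’ SDKs follow a similar request/response pattern):

```python
import boto3

# Assumes AWS credentials are already configured in the environment.
client = boto3.client("rekognition", region_name="us-east-1")

with open("photo.jpg", "rb") as f:
    response = client.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["ALL"],  # also return age range, emotions, etc.
    )

for face in response["FaceDetails"]:
    box = face["BoundingBox"]  # coordinates relative to image size
    print(f"confidence={face['Confidence']:.1f}%, box={box}")
```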
Aurora’s Approach to Development of Self Driving Cars
The Aurora Team, Medium
Aurora, a self-driving car startup, shares its thinking and approach to self-driving technology. The post covers self-driving as an applied science problem, testing, integrating machine learning, and the team’s process for building these systems.