Skynet Today Last Week in AI News #42
Growing use of AI in controversial applications, such as facial recognition for surveillance and predicting the likelihood of crime, poses a concern for civil liberties in authoritarian and democratic societies alike. In addition, the capabilities of these AI-enabled systems are sometimes exaggerated by their users to create the impression of a powerful algorithm and to limit human liability. As more governments and law enforcement agencies test and adopt AI tools, their long-term effects and consequences are a topic that must be carefully considered.
Advances & Business
Modern Black Friday Work Force: Postal Clerk, Influencer, Robot - Meet America’s retail work force in 2019. Nearly five million people are employed in traditional retail jobs. Many still work in stores, selling stuff, but the reality is that today’s retail industry is powered by a variety of staff employees, gig workers and artificial intelligence.
Building a better battery with machine learning - Researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory have turned to the power of machine learning and artificial intelligence to dramatically accelerate the process of battery discovery.
Using artificial intelligence to determine whether immunotherapy is working - Scientists from the Case Western Reserve University digital imaging lab, already pioneering the use of Artificial Intelligence (AI) to predict whether chemotherapy will be successful, can now determine which lung-cancer patients will benefit from expensive immunotherapy.
Alibaba Cloud publishes machine learning algorithm on GitHub - Chinese cloud vendor releases “core codes” of its Alink platform on GitHub, uploading a range of algorithm libraries that support processes for machine learning tasks, such as online product recommendation.
We tried teaching an AI to write Christmas movie plots. Hilarity ensued. Eventually. - Using a neural network to create ridiculous plot lines takes a lot of work—and reveals the challenges of generating human language.
Concerns & Hype
Internet Companies Prepare to Fight the Deepfake Future - Researchers are creating tools to find A.I.-generated fake videos before they become impossible to detect. Some experts fear it is a losing battle.
No, AI is not for social good - Faced with the public furor over problems with artificial intelligence, tech companies and researchers would now have us believe that the big fix for those problems is to develop AI for social good.
Former Go champion beaten by DeepMind retires after declaring AI invincible - The South Korean Go champion Lee Se-dol has retired from professional play, telling Yonhap news agency that his decision was motivated by the ascendancy of AI.
AI Is Not Similar To Human Intelligence. Thinking So Could Be Dangerous - Thinking that an artificial intelligence works in the same way as a human brain can be misleading and even dangerous, says a recent paper in Minds and Machines by David Watson of the Oxford Internet Institute and the Alan Turing Institute.
How a Machine Learns and Fails - A Grammar of Error for Artificial Intelligence - This essay reviews the limitations that affect AI as a mathematical and cultural technique, stressing the role of error in the definition of intelligence in general.