Last Week in AI #149: AI enables brain interface for robot control, Deep Learning suffers from overinterpretation
AI algorithm interprets brain EEG signals to guide robot arms, deep learning's overinterpretation problem and how ensembles can help, and more!
In a paper titled "Customizing skills for assistive robotic manipulators: an inverse reinforcement learning approach with error-related potentials," researchers at Ecole Polytechnique Fédérale de Lausanne’s Learning Algorithms and Systems Laboratory described a machine-learning algorithm that lets a person teach a robot using only brain signals. An EEG headset detects the human's error-related potentials (brain signals elicited when the robot does something wrong), so over time the robot can learn to avoid mistakes. The project is described in the following YouTube video:
People with a spinal cord injury often experience permanent neurological deficits and severe motor disabilities that prevent them from performing even the simplest tasks, such as grasping an object … Assistance from robots could help these people recover some of their lost dexterity, since the robot can execute tasks in their place.
Our take: Brain-based teleoperation has a lot of potential to help people with disabilities, so it’s exciting to see this latest step in that direction. While this is far from fully controlling the robot, being able to send signals indicating that it is doing something incorrectly is still an important step.
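The core idea of learning from error signals can be illustrated with a toy sketch. This is not the paper's inverse reinforcement learning method; it is a minimal, hypothetical bandit-style learner where a binary signal (standing in for a decoded error-related potential) penalizes wrong actions until the robot settles on the right one:

```python
import random

def learn_from_error_signals(correct_action, n_actions=4, episodes=200, seed=0):
    """Toy illustration (not the paper's algorithm): treat a decoded
    error-related potential (ErrP) as a binary penalty and learn which
    action avoids triggering it."""
    rng = random.Random(seed)
    q = [0.0] * n_actions  # value estimate per action
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < 0.1:
            a = rng.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda i: q[i])
        # simulated ErrP: fires (1) when the human perceives the action as wrong
        errp = 0 if a == correct_action else 1
        # penalize actions that elicited an ErrP, reinforce those that did not
        q[a] += 0.2 * ((-1.0 if errp else 1.0) - q[a])
    return max(range(n_actions), key=lambda i: q[i])
```

The appeal of this setup is that the human never issues explicit commands; the only supervision is the involuntary "that was wrong" brain response, which the learner converts into a reward signal.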
In a recent paper, MIT researchers found that deep learning models are prone to overinterpretation errors. Overinterpretation refers to the problem of a learned model picking up meaningless yet statistically valid predictive signals from a dataset. For example, if all pizzas in a given image classification dataset were placed on wooden tables, the model could have learned to identify brown edge pixels to predict pizza, rather than focusing on the pizza itself. The researchers proposed an algorithm to automatically detect this problem, and they found that models can achieve over 90% accuracy on the popular CIFAR-10 image classification dataset using just 5% of the pixels from each image. Clearly, these models are not learning meaningful signals from the dataset and are instead relying on spurious correlations.
To mitigate this issue, the researchers propose training a diverse ensemble of models; their ensemble achieved only 10% accuracy (chance level for CIFAR-10's ten classes) when restricted to 5% of image pixels, indicating it does not rely on those spurious signals.
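The detection idea can be sketched in miniature. This is a hypothetical toy, not the paper's algorithm (which searches for sufficient pixel subsets on real trained networks): we build a synthetic dataset where the label is fully encoded in one corner pixel, a stand-in for the "wooden table" shortcut, and show that masking out everything except that tiny subset leaves accuracy intact:

```python
import random

def make_dataset(n=200, size=10, seed=0):
    """Synthetic images with a spurious signal: the label is encoded
    entirely in the top-left pixel (analogous to 'wooden table' pixels)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        label = rng.randrange(2)
        img = [[rng.random() for _ in range(size)] for _ in range(size)]
        img[0][0] = float(label)  # spurious but perfectly predictive pixel
        data.append((img, label))
    return data

def classify(img):
    # a stand-in 'model' that has latched onto the spurious pixel
    return int(img[0][0] > 0.5)

def accuracy_with_pixel_budget(data, keep):
    """Zero out all pixels except those in `keep`, then measure accuracy.
    High accuracy on a tiny subset reveals overinterpretation."""
    correct = 0
    for img, label in data:
        masked = [[img[r][c] if (r, c) in keep else 0.0
                   for c in range(len(img[0]))] for r in range(len(img))]
        correct += classify(masked) == label
    return correct / len(data)
```

Here, keeping only the single spurious pixel (1% of the image) preserves perfect accuracy, while keeping a different pixel drops the model to roughly chance, which is the signature the detection algorithm looks for.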
Our take: There is still so much we do not yet know about deep learning, and this paper demonstrates potential pitfalls even on the most popular and well-studied image classification benchmarks. There is likely more low-hanging fruit for research that aims to better understand deep learning. Companies that deploy deep learning in the real world should also take note and perform careful empirical testing to avoid unintended errors.
Check out our newest editorial! If you are a paid subscriber (thanks!) you can read all of it, and if you are a regular subscriber (also thanks!) you can read the beginning of it as a free preview. Enjoy!
A desert robot depicts AI’s vast opportunities - "The next chapter in AI development will be defined by two trends: increased accessibility and increased technical maturity."
A Robot for the Worst Job in the Warehouse
Navy Launches SCOUT To Expand AI Capabilities - "The U.S. Navy’s SCOUT Experimentation aims to develop automated and emerging technologies into targeted problem areas among warfighters."
Bossa Nova de-emphasizes robots in ‘retail AI’ rebrand - "The company is taking a “broader approach” to image-based AI analytics for retailers and de-emphasizing the role of robotics."
TuSimple Becomes First to Successfully Operate Driver Out, Fully Autonomous Semi-truck on Open Public Roads - "Specially upfitted class 8 semi-truck navigated 80 miles without a human in the vehicle, traveling on surface streets and highways, interacting naturally with other motorists"
Foodoo.ai Applies Machine Learning to Reduce Food Waste in the “Grab & Go” Sector - "The company works with certified commercial kitchens to prepare fresh, healthy food options, delivers them to workplaces, and then uses its proprietary software and hardware to ensure little to no food is wasted."
Chinese Company Names AI Debt Collector Employee of the Year - "Vanke executive claims the bot has a 91.44% success rate in collecting overdue payments."
AI-Powered Stock Fund Bails Out of Mega-Cap FANG+ Stocks - "An artificial intelligence-guided fund that has been lagging the market has jettisoned its mega-cap tech names in a bid to right the ship."
Can Algorithms be Racist? - "A brief exploration of juristic and social dilemmas within language models"
Deep Learning Can’t Be Trusted, Brain Modelling Pioneer Says - "Stephen Grossberg explains why his ART model is better"
Using AI hiring tools? Why you need to be more careful than ever - "New York City just enacted a law placing significant restrictions on the use of artificial intelligence hiring tools by employers."
What Happens When an AI Knows How You Feel? - "Technology used to only deliver our messages. Now it wants to write them for us by understanding our emotions."
The Batch: Hopes for 2022 from Alexei Efros, Abeba Birhane, Yoav Shoham, Chip Huyen, Matt Zeiler, Wolfram Burgard, Yale Song - "In this issue, AI leaders from academia and industry share their highest hopes for 2022."
Copyright © 2021 Skynet Today, All rights reserved.