Last Week in AI #170: Tricking autonomous vehicles with phantom obstacles, AI to track and prevent deforestation, Google limits deepfakes in Colab, and more!
lastweekin.ai
Top News

UCI Researchers: Autonomous Vehicles Can Be Tricked Into Dangerous Driving Behavior

Researchers at the University of California, Irvine have designed a testing tool, called PlanFuzz, that can automatically detect security vulnerabilities in widely used automated driving systems. These vulnerabilities lie in the planning module, the component that decides when to change lanes, slow down, and stop, among other functions. The researchers used PlanFuzz to evaluate the autonomous driving systems Apollo and Autoware. They found that ordinary objects, such as cardboard boxes and bicycles placed on the side of the road, caused vehicles to stop permanently on otherwise empty roads and intersections. They also found that, after perceiving nonexistent threats from such ordinary objects, the vehicles failed to change lanes as planned. While the researchers acknowledge that autonomous vehicle companies like Uber and Tesla need their systems to drive cautiously, they stress that overly cautious behavior can itself be a road-safety concern.
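PlanFuzz itself isn't shown here, but the core idea, fuzzing a planning module with benign object placements and flagging cases where it stops even though the lane is clear, can be illustrated with a toy sketch. Everything below (the planner rule, the thresholds, the function names) is hypothetical and for illustration only, not the researchers' actual tool:

```python
import random

LANE_HALF_WIDTH = 1.8   # metres: obstacles within this lateral offset truly block the lane
CAUTION_MARGIN = 1.5    # extra margin an over-cautious toy planner adds

def planner_decision(obstacles):
    """Toy behavioral planner: returns 'STOP' if any obstacle lies
    within the lane plus a caution margin, else 'GO'."""
    for lateral_offset, _kind in obstacles:
        if abs(lateral_offset) < LANE_HALF_WIDTH + CAUTION_MARGIN:
            return "STOP"
    return "GO"

def is_actually_blocking(obstacles):
    """Ground truth: only obstacles inside the lane itself block it."""
    return any(abs(lat) < LANE_HALF_WIDTH for lat, _ in obstacles)

def fuzz_planner(trials=1000, seed=0):
    """Randomly place one ordinary roadside object per scene and collect
    placements where the planner stops although the lane is clear."""
    rng = random.Random(seed)
    false_stops = []
    for _ in range(trials):
        # object somewhere from the lane centre out to the verge (0..5 m)
        obj = (rng.uniform(0.0, 5.0), rng.choice(["cardboard_box", "bicycle"]))
        scene = [obj]
        if planner_decision(scene) == "STOP" and not is_actually_blocking(scene):
            false_stops.append(obj)
    return false_stops

if __name__ == "__main__":
    bugs = fuzz_planner()
    print(f"{len(bugs)} benign roadside placements caused a permanent stop")
```

In this sketch, any object found in the 1.8 m to 3.3 m band triggers a stop despite sitting outside the lane, which mirrors the reported failure mode: harmless roadside objects causing a vehicle to halt indefinitely.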