Researchers Show How Attackers Can Render Autonomous Vehicles Blind to Obstacles

Researchers from the Ubiquitous System Security Lab of Zhejiang University and the University of Michigan’s Security and Privacy Research Group have developed a way to blind autonomous vehicles using simple audio signals. In a newly released paper, the researchers noted that computer-vision-based object detection systems are increasingly relied upon for autonomous driving, and warned that this trend widens the attack surface.

The team identified a system-level flaw that could allow an attacker to use acoustic signals to manipulate the inertial sensors of a camera’s image stabiliser.

“To increase the quality of images, image stabilisers with inertial sensors are added to alleviate image blurring caused by camera jitter. However, such a trend opens a new attack surface. This paper identifies a system-level vulnerability resulting from the combination of the emerging image stabilizer hardware susceptible to acoustic manipulation and the object detection algorithms subject to adversarial examples,” researchers explained.

The team proved their point by pulling off a “Poltergeist” attack that exploited this vulnerability in the image stabilisation functions of camera sensors found inside autonomous vehicles, blurring the image and making the vehicle ignore obstacles in its way.

“The blur caused by unnecessary motion compensation can change the outline, the size, and even the colour of an existing object or an image region without any objects,” the team found, “which may lead to hiding, altering an existing object, or creating a non-existing object.”
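The mechanism can be approximated in simulation without any acoustic hardware: applying a directional blur to a camera frame, standing in for the spurious motion compensation Poltergeist induces, and re-running a pretrained detector shows how detections can disappear or change. The sketch below is purely illustrative and is not the researchers’ code; the ultralytics YOLO weights, file names, and blur parameters are assumptions.

```python
# Illustrative sketch only -- not the researchers' code. It approximates the effect
# Poltergeist exploits: spurious motion compensation from a misled image stabiliser
# shows up as directional blur, which can change what an object detector sees.
# Assumes the third-party `ultralytics` and `opencv-python` packages and a sample
# frame "frame.jpg"; all names here are placeholders.
import cv2
import numpy as np
from ultralytics import YOLO  # pretrained detector, standing in for YOLO V3/V4/V5

def directional_blur(image, length=25, angle_deg=0.0):
    """Convolve with a line kernel to mimic blur from false stabiliser motion."""
    kernel = np.zeros((length, length), dtype=np.float32)
    kernel[length // 2, :] = 1.0                      # horizontal line of ones
    rot = cv2.getRotationMatrix2D((length / 2, length / 2), angle_deg, 1.0)
    kernel = cv2.warpAffine(kernel, rot, (length, length))
    kernel /= kernel.sum()                            # normalise to preserve brightness
    return cv2.filter2D(image, -1, kernel)

model = YOLO("yolov8n.pt")                            # any pretrained weights work here
clean = cv2.imread("frame.jpg")
blurred = directional_blur(clean, length=35, angle_deg=20.0)

# Compare detections on the clean frame vs. the artificially "destabilised" one.
for name, img in [("clean", clean), ("blurred", blurred)]:
    result = model(img)[0]
    labels = [model.names[int(c)] for c in result.boxes.cls]
    print(name, "->", labels)
```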

Researchers say it is the first example of a new class of attack they called “AMpLe,” a backronym for “injecting physics into adversarial machine learning.”

Poltergeist showed success rates ranging between 87 and 100% for hiding, creating, and altering objects when tested against various object detection networks, including YOLO V3/V4/V5 and Fast R-CNN.
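The three outcomes reported above map naturally onto a comparison of the detector’s output on a clean frame against its output on the blurred frame. The hypothetical helper below, which builds on the previous sketch, is one plausible way to bucket those outcomes; it is not the paper’s own evaluation code, and an altered object would show up here as one label hidden and another created.

```python
# Hypothetical scoring of Poltergeist-style outcomes by diffing the class labels
# detected on the clean frame against those detected on the blurred frame.
def classify_outcomes(clean_labels: set[str], blurred_labels: set[str]) -> dict:
    return {
        "hidden":  clean_labels - blurred_labels,   # detected before, gone after blur
        "created": blurred_labels - clean_labels,   # absent before, appears after blur
        "kept":    clean_labels & blurred_labels,   # still detected either way
    }

print(classify_outcomes({"car", "person"}, {"car", "truck"}))
# -> {'hidden': {'person'}, 'created': {'truck'}, 'kept': {'car'}}
```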

For the study, the researchers attached a Samsung S20 smartphone to a moving vehicle. They were able to create and alter objects, though doing so proved very challenging, while hiding them was easy, they said.

The team did not, however, demonstrate the attack against a real-world autonomous car.

“While it’s clear that there exist pathways to cause computer vision systems to fail with acoustic injection,” the researchers concluded, “it’s not clear what products today are at risk. Rather than focus on today’s nascent autonomous vehicle technology, we model the limits in simulation to understand how to better prevent future yet unimagined autonomous vehicles from being susceptible to acoustic attacks on image stabilisation systems.”

Aside from audio, the researchers believe that future AMpLe attacks could use other signals, such as infrared, ultraviolet, and radio waves, to tamper with cars. They also suggest that machine learning could be used to mimic human speech for similar signal-injection attacks.

“AMpLe attacks could cause incorrect, automated decisions with life-critical consequences for closed loop feedback systems (e.g., medical devices, autonomous vehicles, factory floors, IoT [Internet of Things]).”

About the author

CIM Team

CyberIntelMag is the trusted authority in cybersecurity, comprised of leading industry experts for over 20 years, dedicated to serving cybersecurity professionals. Our goal is to provide a one-stop shop for knowledge and insight needed to navigate throughout today’s emerging cybersecurity landscape through in-depth coverage of breaking news, tutorials, product reviews, videos and industry influencers.
