Researchers from the Advanced Threat Research (ATR) lab of the American company McAfee managed, through a targeted attack, to make two Tesla models (a Model X and a Model S) accelerate to 85 mph (136 km/h) in a zone where the speed limit was 35 mph (approximately 56 km/h). The team published the details of this experiment on February 19, 2020, after 18 months of work.
"We attacked the algorithm that detects and classifies the road signs along the road," summarizes Thomas Roccia, a researcher in the ATR lab, contacted by The Digital Factory. Autonomous cars carry sensors that collect data about their environment. The target system is the Mobileye EyeQ3, which equips more than 40 million vehicles worldwide, including Tesla models fitted with the "Hardware Pack 1" (marketed before 2017).
The system uses cameras at the front of the vehicle to read traffic signs and adjust the car's driving accordingly. The researchers wanted to know whether it was possible to "mislead" the classification performed by the algorithm when it detects road signs. "This is called 'model hacking', that is to say, we 'poison' the data sent to the model," says Thomas Roccia.
To achieve this, the researchers altered the physical appearance of the speed-limit sign, lengthening the middle bar of the "3" by about five centimeters with a black sticker. "To the human eye, this modification is almost imperceptible, but the algorithm saw an '8' and therefore exceeded the maximum speed," says Thomas Roccia. The researchers carried out the attack as a "black box", meaning they knew neither the algorithm nor the data that had been used to train the model.
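A query-only attack of this kind can be sketched in a few lines. Everything below is illustrative, not McAfee's actual procedure: the toy `classify` function stands in for the unknown model, and the attacker's only interface to it is its output, just as in a black-box setting.

```python
# Illustrative black-box attack sketch (not McAfee's actual method).
# The attacker never sees weights or training data; it only queries
# classify() and keeps the pixel "stickers" that change the prediction.

def classify(image):
    # Toy stand-in for the unknown model: it tells a "3" (open middle
    # bar) from an "8" (closed middle bar) by counting lit pixels in
    # the middle row of a 5x5 binary grid.
    return "8" if sum(image[2]) >= 4 else "3"

def black_box_attack(image, target, max_queries=100):
    """Greedily flip one pixel at a time, keeping flips that move the
    prediction toward the target class, using only query access."""
    img = [row[:] for row in image]
    queries = 0
    for r in range(len(img)):
        for c in range(len(img[r])):
            if queries >= max_queries or classify(img) == target:
                return img, queries
            img[r][c] ^= 1                  # tentative one-pixel "sticker"
            queries += 1
            if classify(img) != target and classify(img) == classify(image):
                img[r][c] ^= 1              # revert an unhelpful change
    return img, queries

three = [[1, 1, 1, 1, 1],
         [0, 0, 0, 0, 1],
         [0, 1, 1, 1, 0],   # short middle bar, like a "3"
         [0, 0, 0, 0, 1],
         [1, 1, 1, 1, 1]]
adv, n = black_box_attack(three, target="8")
print(classify(three), "->", classify(adv), "after", n, "queries")
```

Extending one bar by a single "pixel" is enough here, which mirrors the spirit of the sticker attack: a tiny physical change that crosses the model's decision boundary.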
The idea of modifying an image's appearance to fool an algorithm is not new, but McAfee wanted to apply it to a real-world situation involving everyday objects. "There has already been a lot of research in this area, particularly with images of animals," Thomas Roccia notes. "Scientists have studied the 'features', that is to say, the elements the algorithm extracts from the image in order to classify it. They knew exactly which elements the model took into account and modified them to poison it." For example, researchers have altered a few pixels of a penguin photo, invisibly to the naked eye, and the algorithm classified the animal as a desktop computer or a frying pan.
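Pixel-level attacks of the kind Roccia describes are typically built with gradient methods, the best known being the fast gradient sign method (FGSM). Here is a minimal sketch on a toy linear classifier; it assumes white-box access to the weights (unlike McAfee's black-box setting) and is purely illustrative:

```python
# Minimal FGSM-style sketch on a toy linear "classifier".
# White-box assumption: the attacker knows the weights w, so the
# gradient of the logit (w.x + b) with respect to the input x is w.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)          # fixed weights of the toy model
b = 0.0

def predict(x):
    # class 1 if the logit is positive, else class 0
    return int(w @ x + b > 0)

def fgsm(x, epsilon=0.2):
    """Nudge every input component by epsilon in the direction that
    raises the logit: the sign of the gradient, i.e. sign(w)."""
    return x + epsilon * np.sign(w)

x = -0.01 * np.sign(w)           # an input the model labels class 0
x_adv = fgsm(x)                  # a small, evenly spread perturbation
print(predict(x), "->", predict(x_adv))   # the tiny nudge flips the label
```

Because every component moves only by epsilon, the perturbation stays small per pixel while its effect on the logit accumulates across the whole input, which is why such changes can be invisible to the naked eye yet decisive for the model.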
But how can these attacks be countered? "Several methods exist," says Thomas Roccia. One of them is to add "noise" to prevent the model from picking up new features that would disrupt its classification.
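One well-known noise-based defense in the research literature is randomized smoothing: the defended model takes a majority vote over many noisy copies of the input, so a small adversarial nudge no longer controls the decision. This is one possible reading of the "noise" idea, not necessarily the method Roccia had in mind; the brittle one-dimensional classifier below is purely illustrative.

```python
# Randomized-smoothing sketch: average the base model's prediction
# over Gaussian-noised copies of the input. The toy base model has a
# narrow spurious region (an "adversarial pocket") around x = 3.
import numpy as np

rng = np.random.default_rng(1)

def base_predict(x):
    # Brittle toy classifier: outputs class 1 only inside a narrow
    # band around x = 3, class 0 everywhere else.
    return 1 if abs(x - 3.0) < 0.1 else 0

def smoothed_predict(x, sigma=1.0, n=1000):
    """Majority vote of the base model over n noisy copies of x."""
    votes = sum(base_predict(x + rng.normal(scale=sigma)) for _ in range(n))
    return int(votes > n / 2)

x_adv = 3.0   # lands exactly inside the brittle region
print(base_predict(x_adv), "->", smoothed_predict(x_adv))
```

Most noisy copies fall outside the narrow pocket, so the vote overrules the fragile feature the attacker exploited; the cost is a blurrier decision boundary.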
Helping companies reduce vulnerabilities
The results of the experiment were passed on to the Californian company Tesla and to the camera's manufacturer, the Israeli firm Mobileye. "Our goal, when we discover flaws, is to work with the companies to help them fix them," says Thomas Roccia. Note that these technologies evolve rapidly and that Mobileye is already working on much more advanced systems: McAfee observed that a recent vehicle equipped with a Mobileye system was not fooled by the same technique. As for Tesla, its newer models use a different technology and no longer rely on automatic sign reading.
The experiment also raises legal questions. The regulatory framework for autonomous cars is still very vague, and such targeted attacks could considerably complicate the identification of a culprit in the event of an accident involving an autonomous vehicle. To try to settle these questions, France set up a digital-technology ethics committee in early December 2019, which is to examine, among other things, the "shared responsibilities between manufacturer, insurer and user" of autonomous cars.