Hackers Force Tesla Autopilot to Break Traffic Rules

A group of researchers in Singapore has developed a new method of interfering with autonomous cars that use computer vision to recognize road signs. The technique, called GhostStripe, poses a danger to vehicles that rely on camera-based sign recognition, such as those from Tesla and Baidu Apollo.

The concept behind GhostStripe involves using LEDs to project light patterns onto road signs that are imperceptible to the human eye but confuse the car's camera. The attack works by rapidly flashing the LEDs in different colors while the camera is capturing a frame, distorting the image the camera records.

The distortion arises from the rolling shutter of the CMOS camera, which reads out the image line by line rather than all at once. Because the LEDs change color faster than the rows are read out, each scan line records a different shade, producing an image that does not accurately represent reality. For instance, the red of a "Stop" sign may appear as a different color on each scan line.
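
To illustrate the mechanism, here is a minimal Python sketch of a rolling-shutter capture under a flickering LED. All parameters (frame size, row readout time, flicker frequency) are assumed values for illustration, not figures from the research.

```python
import numpy as np

# All values below are illustrative assumptions, not from the GhostStripe paper.
FRAME_ROWS = 480          # scan lines per frame
ROW_READOUT_S = 30e-6     # time to read out one row (rolling shutter)
LED_FLICKER_HZ = 1000     # LED colour toggles every 1 ms, invisible to the eye

# Two alternating LED colours projected onto the sign (RGB in [0, 1]).
COLORS = np.array([[1.0, 0.0, 0.0],   # red phase
                   [0.0, 1.0, 1.0]])  # cyan phase

def capture_frame(start_time: float) -> np.ndarray:
    """Simulate a rolling-shutter capture: each row samples whichever LED
    colour is active at the instant that row is read out."""
    row_times = start_time + np.arange(FRAME_ROWS) * ROW_READOUT_S
    phase = (np.floor(row_times * LED_FLICKER_HZ) % 2).astype(int)
    return COLORS[phase]   # shape (FRAME_ROWS, 3): one colour per scan line

frame = capture_frame(start_time=0.0)
# Rows alternate between the two colours in bands, so the captured sign is
# striped even though the flicker is far too fast for a human to notice.
print(frame[30:36])
```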

When the car's deep-neural-network classifier processes this distorted image, it fails to recognize the sign, and the car does not respond appropriately. Similar attacks have been demonstrated before, but this team achieved consistent, reproducible results, making the attack practical in real-world scenarios.

The researchers have developed two versions of the attack:

· GhostStripe1 does not require access to the car. It uses a vehicle-tracking system to monitor the target's location in real time and dynamically adjusts the LED flickering so that the camera never captures a recognizable sign.

· GhostStripe2 requires physical access to the car: a transducer is attached to the camera's power supply to detect the exact moments frames are captured, allowing the attack to be timed precisely. This enables targeted attacks on a specific vehicle and control over the recognition outcome; a timing sketch follows this list.
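
The snippet below is a rough sketch of the GhostStripe2 timing idea: once the start of a frame readout has been detected via the power-line tap, LED color switches can be scheduled against specific scan lines. The per-row readout time, the scan lines covered by the sign, and all names here are hypothetical assumptions, not the researchers' implementation.

```python
# Hypothetical parameters for illustration.
ROW_READOUT_S = 30e-6            # assumed per-row readout time
SIGN_ROWS = range(180, 300)      # assumed scan lines covering the sign

def led_schedule(frame_start: float, stripe_rows: int = 20):
    """Given the detected start of a frame readout (from the tap on the
    camera's power supply), return (time, colour_index) switch events so
    that every `stripe_rows` scan lines inside the sign region see an
    alternating LED colour."""
    events = []
    for i, row in enumerate(range(SIGN_ROWS.start, SIGN_ROWS.stop, stripe_rows)):
        t = frame_start + row * ROW_READOUT_S    # when this row is exposed
        events.append((t, i % 2))                # alternate two colours
    return events

# Example: a frame readout detected at t = 1.0 s on the power line.
for t, colour in led_schedule(frame_start=1.0):
    print(f"switch LEDs to colour {colour} at t = {t:.6f} s")
```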

The team conducted tests on real roads using a Leopard Imaging AR023ZWDR camera, the model used in Baidu Apollo hardware, targeting "Stop", "Yield", and speed limit signs. GhostStripe1 achieved a success rate of 94%, while GhostStripe2 reached 97%.

One factor that affects the attack's effectiveness is strong ambient lighting, which reduces its success rate; the researchers note that attackers would therefore need to choose the time and location of an attack carefully.

There are countermeasures available to mitigate the vulnerability: replacing rolling-shutter CMOS cameras with global-shutter sensors that capture the entire image at once, randomizing the order in which scan lines are read, or adding redundant cameras, which would at least force a more elaborate attack. Training neural networks on examples of such distorted images could also make the classifier more resistant.
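
To see why randomized line scanning helps, the sketch below extends the earlier rolling-shutter simulation: when rows are read out in a random order, the attacker's coherent color bands scatter into per-row noise. All parameters remain illustrative assumptions.

```python
import numpy as np

# Same assumed parameters as the earlier sketch.
FRAME_ROWS = 480
ROW_READOUT_S = 30e-6
LED_FLICKER_HZ = 1000
COLORS = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 1.0]])

def capture(order: np.ndarray) -> np.ndarray:
    """Read the rows out in `order`; each row still samples whichever LED
    colour is active at its own readout instant."""
    readout_times = np.empty(FRAME_ROWS)
    readout_times[order] = np.arange(FRAME_ROWS) * ROW_READOUT_S
    phase = (np.floor(readout_times * LED_FLICKER_HZ) % 2).astype(int)
    return COLORS[phase]

rng = np.random.default_rng(0)
striped = capture(np.arange(FRAME_ROWS))          # sequential scan: coherent bands
scrambled = capture(rng.permutation(FRAME_ROWS))  # random scan: per-row noise

# Fraction of adjacent scan lines with the same colour: high for the
# sequential scan (long stripes), near 0.5 for the randomized scan, so the
# attacker's crafted stripe pattern never forms.
print((striped[:-1] == striped[1:]).all(axis=1).mean(),
      (scrambled[:-1] == scrambled[1:]).all(axis=1).mean())
```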
