A new study has shown that lasers can be used to hack self-driving cars, tricking moving vehicles into missing pedestrians or other obstacles.
Research published last week found that several self-driving vehicle technologies can be “messed with” using lasers, a finding likely to be another thorn in the side of Elon Musk, Tesla, and Autopilot.
The finding was revealed in a recent paper (pdf below) titled “You Can’t See Me: Physical Removal Attacks on LiDAR-based Autonomous Vehicles Driving Frameworks,” published in late October. Cosmos Magazine also covered the story.
Lasers can be used to trick moving automobiles into missing pedestrians or other obstacles, according to U.S. and Japanese researchers. These vehicles detect nearby objects using LiDAR, emitting laser pulses and then using the reflected light to determine each object’s position.
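The distance measurement behind LiDAR is simple time-of-flight arithmetic. A minimal sketch (illustrative only, not code from the paper or any real sensor):

```python
# How a LiDAR unit converts a pulse's round-trip time into distance.
C = 299_792_458.0  # speed of light in metres per second

def lidar_distance(round_trip_time_s: float) -> float:
    """Distance to the reflecting surface for one laser pulse."""
    # The pulse travels out to the object and back, so halve the path.
    return C * round_trip_time_s / 2

# A pulse returning after ~66.7 nanoseconds reflects off an object
# roughly 10 metres away.
print(round(lidar_distance(66.7e-9), 2))
```

Because the sensor trusts these timings, an attacker who injects precisely timed light of their own can manufacture returns that look like real reflections.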
According to Cosmos, the research showed that a precisely timed laser beam back into a LiDAR system can produce “a blind spot large enough to hide an object like a pedestrian.”
The study’s abstract says: “While existing attacks on LiDAR-based autonomous driving architectures focus on lowering the confidence score of AV object detection models to induce obstacle misdetection, our research discovers how to leverage laser-based spoofing techniques to selectively remove the LiDAR point cloud data of genuine obstacles at the sensor level before being used as input to the AV perception. The ablation of this critical LiDAR information causes autonomous driving obstacle detectors to fail to identify and locate obstacles and, consequently, induces AVs to make dangerous automatic driving decisions.”
Professor Sara Rampazzi, a cyber security researcher at the University of Florida, said: “We mimic the LIDAR reflections with our laser to make the sensor discount other reflections that are coming in from genuine obstacles.”
“The LIDAR is still receiving genuine data from the obstacle, but the data are automatically discarded because our fake reflections are the only one perceived by the sensor,” she continued.
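The discarding Rampazzi describes can be pictured with a toy model. This is a hypothetical sketch (names and timings are illustrative, not from the paper) of a sensor that records only the earliest echo per pulse, so a spoofed return timed to arrive first displaces the genuine one:

```python
def first_return(echo_times_ns):
    """Keep only the earliest echo for a pulse; later echoes are discarded."""
    return min(echo_times_ns)

genuine_obstacle_ns = 66.7  # echo from a real pedestrian ~10 m away
spoofed_pulse_ns = 20.0     # attacker's laser, timed to arrive earlier

# Without the attack, the pedestrian's echo is the one recorded.
print(first_return([genuine_obstacle_ns]))                    # 66.7

# With the attack, the earlier fake return wins, so the genuine
# reflection never reaches the point cloud.
print(first_return([genuine_obstacle_ns, spoofed_pulse_ns]))  # 20.0
```

Repeating this across many pulses is what produces the blind spot the researchers describe: the real obstacle’s points are removed before the perception software ever sees them.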
According to the report, any laser utilised in this way would need to move with the vehicle as well as be properly timed.
Yulong Cao, a computer scientist at the University of Michigan and co-author of the study, said: “Revealing this liability allows us to build a more reliable system. In our paper, we demonstrate that previous defence strategies aren’t enough, and we propose modifications that should address this weakness.”
According to the report, the findings will be presented at the USENIX Security Symposium in 2023.
Read the study given below: