SENSORS

The fascinating world of AV sensors continues to mature and diversify. Experts share their views on how the race is shaping up

By Ben Dickson

In the fast-paced arena of autonomous vehicles, the only constant is change. And few things are changing as quickly as the ability of vehicles to perceive and understand their surroundings.

Progress has been remarkable. Lidar, once doubted by skeptics, has solidified its spot in the AV tech stack. Camera-based sensing continues to improve thanks to advances in machine learning. And radar sensors are making a comeback with new techniques.

But some issues remain unresolved. Will a single modality rule the AV sensor stack? Will we need redundant sensors to ensure full perception? What role will radar play in the future of AVs? Should sensor fusion happen at the level of low-level raw data, or should the data be brought together only after each stream has been processed by its own algorithms? What role does artificial intelligence play in sensor data processing?

The autonomous vehicle industry is trying to answer these questions and more as it works out the combination of sensors that provides the right balance of safety, accuracy, cost and durability.

Production-ready lidar

In its latest report on Next-generation Sensors for Automated Road Vehicles, the Society of Automotive Engineers (SAE) examines advances in AV sensors and the challenges that remain. The study revisits some of the unsettled topics related to AV sensors that SAE reported on in 2018. One of the areas that has seen remarkable progress is lidar, which the report says is “moving from prototype technology to mass production”.

“There was an assumption or foregone perspective in 2018 that lidar was not going to really make it into automotive,” says Sven Beiker, managing director at Silicon Valley Mobility and editor of the SAE report.
“Now we have the first production systems with lidar, which includes driver assistance systems and also automated Level 4 prototype vehicles or small fleets that use lidar. Lidar is not a question anymore.”

Lidar detractors maintain that humans have only two eyes, so two cameras should be enough to navigate roads without additional sensors. However, current camera-based systems still cannot infer the distance and geometry information that is crucial to ensuring the safety of computer-based systems that must make critical decisions in potentially fatal situations.

ADAS & Autonomous Vehicle International, April 2024