Analysis and Development of a Perception System for Autonomous Vehicles on Urban Roads
Speaker: Riccardo Pieroni (Politecnico di Milano)
June 21, 2024 | 11:30 a.m.
Emilio Gatti Conference Room (Building 20)
Politecnico di Milano - Department of Electronics, Information and Bioengineering
Contacts: Prof. Simone Formentin | simone.formentin@polimi.it
Abstract
Autonomous vehicles (AVs) require an accurate and detailed understanding of their surroundings to plan safe and correct maneuvers. To achieve this, AVs are equipped with various exteroceptive sensors, such as cameras, radars, and LiDARs. These sensors provide complementary signals: cameras capture detailed semantic information, LiDARs provide accurate spatial data, and radars produce instantaneous velocity estimates.
Data from these sensors are processed, and often fused, to accurately perceive the vehicle's surroundings. In an urban environment, perceiving a vehicle's surroundings involves solving several tasks. Among these, it is crucial to distinguish navigable areas (roads, lanes, etc.) from non-navigable ones (sidewalks, traffic islands, etc.) and to detect all obstacles and other road users.
To address the first task, we will present a multimodal approach for creating online semantic maps in bird's-eye view (BEV), enabling autonomous driving where preconstructed maps are unavailable. For 3D object detection and tracking, we will discuss a multimodal Multi-Object Tracking (MOT) algorithm that combines camera, LiDAR, and radar data.
Experiments conducted in various urban environments demonstrated the strengths and effectiveness of these approaches.
This seminar is part of the 2024 Systems and Control Ph.D. Seminar Series. Take a look at the event program for further information.