|Sensor Fusion for Autonomous Driving|
|  Dr. Ting Yuan is currently a Senior Research Scientist at Mercedes-Benz Research & Development North America, Inc., Sunnyvale, where his work focuses on the detection, classification, and tracking of moving and static objects using information from camera, radar, and lidar systems, and on data fusion for multi-sensor systems. He received his Ph.D. degree from the Electrical and Computer Engineering Department at the University of Connecticut, Storrs, CT, USA in 2013. He has more than ten years of research experience in object tracking, sensor fusion, and localization.|
|  Autonomous driving poses unique challenges for real-world sensor fusion systems due to
the complex driving environment in which the autonomous vehicle operates and interacts with surrounding objects. Precise knowledge of the relevant traffic participants is a key component of comprehensive environmental perception and
scene understanding. We will discuss current trends in autonomous driving in Silicon Valley and introduce different environment-representation frameworks for heterogeneous automotive sensors, e.g., radars, stereo/mono cameras, and lidars. The relevant state estimation algorithms, sensor fusion frameworks, and evaluation procedures against reference ground truth are presented in detail. We will also show a glimpse of a data set obtained from a sensor configuration intended for future Mercedes-Benz autonomous vehicles. In particular, we elaborate on the interplay among proper sensor setup, clean software architecture, and advanced algorithms in meeting the settings and requirements of autonomous vehicles.|