

Siniša Šegvić

University of Zagreb, Croatia

Title: Pixel-level image understanding for smart and autonomous vehicles


Siniša Šegvić completed his PhD in 2004 at the University of Zagreb. He spent one year as a Postdoctoral Researcher at IRISA Rennes in 2006 and another year as a Postdoctoral Researcher at TU Graz in 2007. Currently, he is an Associate Professor at the University of Zagreb. He is a program committee member of the VISAPP 2018 conference and an Associate Editor of the Journal of Computing and Information Technology. His research expertise is in the field of computer vision and deep learning, with special interest in applications for safe traffic. He has published more than 50 national and international scientific papers.


Semantic segmentation performs pixel-level image understanding by associating each image pixel with a meaningful class such as 'road', 'terrain', 'sidewalk' or 'person'. The resulting semantic map reveals the kind of surface terrain in front of the vehicle, and may be used to recover the traversability map required for motion planning. This capability makes semantic segmentation one of the most important computer vision tasks in the automotive context. Today, state-of-the-art semantic segmentation results are obtained with deep, end-to-end trained convolutional models. However, direct application of popular and well-understood image classification architectures would lead to poor semantic segmentation performance. The main obstacles are the large variation of object scale and the strict memory limitations of contemporary GPUs. Recent works overcome these obstacles through careful architectural adaptations. As a result, high semantic segmentation accuracy can today be achieved on large images in real time, while more training data would likely further improve the results. Specifications of upcoming embedded hardware platforms promise low-power, real-time onboard operation and provide directions for exciting real-world applications.
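The pixel-level labeling described above can be sketched in a few lines, assuming the segmentation network outputs a (classes × height × width) score tensor; the class list and helper names below are illustrative placeholders, not from the talk:

```python
import numpy as np

# Illustrative class list matching the abstract's examples.
CLASSES = ["road", "terrain", "sidewalk", "person"]

def semantic_map(logits: np.ndarray) -> np.ndarray:
    """Collapse per-class scores of shape (C, H, W) into a label map (H, W)
    by taking the highest-scoring class at every pixel."""
    return np.argmax(logits, axis=0)

def traversable_mask(labels: np.ndarray, drivable=("road",)) -> np.ndarray:
    """Derive a boolean traversability map from the semantic label map:
    True wherever the pixel belongs to a drivable class."""
    drivable_ids = [CLASSES.index(c) for c in drivable]
    return np.isin(labels, drivable_ids)

# Toy stand-in for a convolutional model's output on a 2x2 image.
rng = np.random.default_rng(0)
logits = rng.standard_normal((len(CLASSES), 2, 2))
labels = semantic_map(logits)          # (2, 2) array of class indices
mask = traversable_mask(labels)        # (2, 2) boolean traversability map
```

In a real pipeline the `logits` tensor would come from a trained segmentation network rather than random numbers; the argmax and masking steps are the same.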