3rd Workshop on Unsupervised Learning for Automated Driving

Half-day workshop at the IEEE Intelligent Vehicles Symposium 2022, Sunday, June 5
supported by the IEEE ITS Technical Committee Self Driving Automobiles

 

Unlabelled data is easy to collect, which has sparked growing interest in the IV community in unsupervised learning, its semi-, weakly-, and self-supervised variants, transfer learning, and the inference of probabilistic latent representations. Share your novel methodological developments, challenges, and solutions for exploiting unlabelled data and reducing the annotation bottleneck.

 

Keynote speakers

Schedule

  • 13:30 – 13:40: [10 min] WS organizer – Welcome and introduction
  • 13:40 – 14:15: [35 min] 1st Keynote – Prof. Dr. Vasileios Belagiannis:
    Reducing data labelling by simulating human driving behaviours and semi-supervised learning
  • 14:15 – 14:50: [35 min] 2nd Keynote – Dr. Claudius Glaeser:
    Bridging domain gaps in lidar perception
  • 14:50 – 15:10: [20 min] Coffee Break
  • 15:10 – 15:45: [35 min] 3rd Keynote – Dr. Daniel Kondermann:
    Dataset quality assurance via large scale human annotation
  • 15:45 – 16:20: [35 min] 4th Keynote – Dr. Holger Caesar:
    A farewell to human labeled datasets
  • 16:20 – 17:00: [40 min] Roundtable discussion:
    All keynote speakers and audience

Workshop description

Machine learning is nowadays omnipresent in automated driving. It is applied to sensor data for perception, context cues for intent recognition, trajectories for path prediction and planning, scenario clustering for test specification, density estimation for anomalous event detection, and realistic data generation for simulation. Large annotated datasets and benchmarks are key to advancing the state of the art and have so far mainly involved supervised learning.

However, the supervised learning paradigm has its limitations: (a) new sensors or environmental conditions require new labelling to counter domain shift; (b) creating manual annotations is labour-intensive and time-consuming; (c) some properties may be infeasible to collect (e.g. rare events) or annotate (e.g. mental states); and (d) sometimes the relevant structure in the data is unknown and must be discovered (e.g. through clustering or dimensionality reduction).

Unlabelled data, on the other hand, is easy to collect, which has sparked growing interest in the IV community in unsupervised learning, its semi-, weakly-, and self-supervised variants, transfer learning, and the inference of probabilistic latent representations.

This workshop therefore explores the varied use of machine learning techniques on unlabelled, partially labelled, or automatically labelled data across all IV research domains, and encourages IV researchers to share novel developments, challenges, and solutions.

Previous editions

  • ULAD 2019 at IEEE Intelligent Vehicles Symposium 2019
  • ULAD 2020 at IEEE Intelligent Vehicles Symposium 2020
Vasileios Belagiannis

Reducing data labelling by simulating human driving behaviours and semi-supervised learning

Abstract. Abnormal driving behaviour is usually detected early by human drivers, who react in time to prevent collisions. Similarly, autonomous vehicles should perform anomaly detection as part of automated driving. However, producing abnormal driving situations in real-world settings can be harmful to the participants. In this talk, I will present our approach to abnormal driving behaviour detection, which learns solely from simulated data. Moreover, I will discuss how semi-supervised learning can contribute to the perception module of automated driving by presenting our latest approach to semantic image segmentation.

Prof. Dr. Vasileios Belagiannis is Professor at the Faculty of Computer Science of the Otto von Guericke University Magdeburg. He holds an engineering degree from the Democritus University of Thrace, Engineering School of Xanthi (Greece, 2009), and an M.Sc. in Computational Science and Engineering from TU München (Germany, 2011). He completed his doctoral studies at TU München (2015) and then continued as a post-doctoral research assistant at the University of Oxford (Visual Geometry Group). Before joining Magdeburg University, he spent time in industry at OSRAM and later worked at Ulm University.

Claudius Glaeser

Bridging domain gaps in lidar perception

Abstract. To make autonomous driving a reality, autonomous vehicles will have to work robustly in the diverse and constantly changing open world in which we live. However, it is not technically feasible to collect and annotate training datasets which accurately represent this open-world domain. Therefore, we must find ways to deal with domain gaps caused by changing weather conditions, updates in sensor hardware, different geographical locations, and so on. In this presentation, we demonstrate how to bridge domain gaps between high-resolution and low-resolution lidar sensors, and between good and bad weather conditions, using three different domain adaptation approaches. We show that both unsupervised and self-supervised domain adaptation approaches, operating on the feature level or the data level, are effective at bridging domain gaps.

Dr. Claudius Glaeser leads a research group on perception for automated driving at Bosch Research in Renningen, Germany. He received his PhD degree in computer science from Bielefeld University, Germany, in 2012. From 2006 he was a Research Scientist with the Honda Research Institute Europe GmbH, Offenbach/Main, Germany, working in the fields of speech processing and language understanding for humanoid robots. In 2011, he joined Robert Bosch GmbH, where he developed perception algorithms for various driver assistance and highly automated driving functions. His research interests include multimodal perception and data fusion for autonomous systems.

Daniel Kondermann

Dataset quality assurance via large scale human annotation

Abstract. I will review the research topics I addressed during my time at Heidelberg University, focusing on ground-truth generation. I will then discuss challenges in crowdsourcing and related large-scale data annotation approaches, including a number of academically published use cases which aren’t immediately obvious. I will also discuss a list of abstract dataset quality metrics I’ve backronymed “RAD”, which can be used to design new academic as well as industrial datasets with visual content. Based on examples, I will go through a number of concrete quality metrics and explain processes, tools, and research opportunities for obtaining the necessary raw data for dataset quality evaluation.

Dr. Daniel Kondermann received his PhD (2009) and Habilitation (2016) at the Heidelberg Collaboratory for Image Processing. His research revolves around performance analysis of computer vision and machine learning methods, with a focus on dataset architecture, ranging from sensor array design to data annotation and performance metric definition. In 2013, he founded Pallas Ludens, a visual data annotation company. He and his team joined Apple in 2016, where he headed a small team researching dataset quality metrics for large-scale annotation projects. Daniel left Apple in 2019 to found a new company, Quality Match, a visual dataset quality assurance company built on the hypothesis that the amount of data needed for machine learning is less important than its quality.

Holger Caesar

A farewell to human labeled datasets

Abstract. Human labeled datasets have enabled enormous breakthroughs in terms of autonomous vehicle performance. In this presentation I will share insights from annotating six widely used public datasets. We will look at the highlights and shortcomings of these datasets and how the performance of leading methods improved over time. I argue that these datasets do not scale to level 4 (high automation) or even level 5 (full automation). Beyond that, I will discuss the latest trend of autolabeled datasets. Based on our newly released nuPlan dataset I will show that autolabeling, scenario mining and closed loop evaluation are key to scaling autonomous vehicles to new operational design domains.

Dr. Holger Caesar is an Assistant Professor in the Intelligent Vehicles group at TU Delft in the Netherlands. The Intelligent Vehicles group is led by Prof. Dr. Dariu Gavrila and is part of the Cognitive Robotics department. Holger’s research interests are in the area of autonomous vehicle perception and prediction, with a particular focus on the scalability of learning and annotation approaches. Previously, Holger was a Principal Research Scientist at the autonomous vehicle company Motional (formerly nuTonomy), where he started three teams with 20+ members focused on data annotation, autolabeling, and data mining. Holger also developed the influential autonomous driving datasets nuScenes and nuPlan and contributed to the commonly used PointPillars baseline for 3D object detection from lidar data. He received a PhD in Computer Vision from the University of Edinburgh in Scotland under Prof. Dr. Vittorio Ferrari and studied in Germany and Switzerland (KIT Karlsruhe, EPF Lausanne, ETH Zurich). In his spare time he likes to hike with his small family, as well as sing, run, or cross the Alps by mountain bike.

Contact information

For questions, please send us an email at info[at]ulad-workshop.com.

Topics of interest

  • unsupervised, semi-supervised, and weakly-supervised learning
  • self-supervised learning
  • domain adaptation
  • GANs & VAEs
  • missing data
  • representation learning
  • density estimation
  • anomaly detection
  • clustering and data analysis
  • generative graphical models
  • latent variable models
  • non-parametric models
  • artificial data generation and simulation