1st workshop on Unsupervised Learning for Automated Driving

Half-day workshop at the IEEE Intelligent Vehicles Symposium 2019, June 9, 2019, MINES ParisTech, Paris, supported by the IEEE ITS Technical Committee on Self-Driving Automobiles

Unlabelled data is easily collected, and there is growing traction in the IV community to explore unsupervised learning, its semi-, weakly-, and self-supervised variants, transfer learning, and the inference of probabilistic latent representations. This workshop invites researchers to share novel methodological developments, challenges, and solutions for exploiting unlabelled data and reducing the annotation bottleneck.

Update: slides of the presentations are now available in the Schedule overview!

Keynote speakers

We are very pleased to announce four excellent keynote speakers with academic and industrial backgrounds; see the Keynotes section below for more information.

Schedule

Workshop description

Machine learning is nowadays omnipresent in automated driving: it is applied to sensor data for perception, context cues for intent recognition, trajectories for path prediction and planning, scenario clustering for test specification, density estimation for anomalous event detection, and realistic data generation for simulation. Large annotated datasets and benchmarks are key to advancing the state of the art, and have so far mainly involved supervised learning.

However, the supervised learning paradigm has its limitations, since (a) new sensors or environmental conditions require new labelling to counter domain shift, (b) creating manual annotations is labour-intensive and time-consuming, (c) some properties may be infeasible to collect (e.g. rare events) or annotate (e.g. mental states), and (d) sometimes the relevant structure in the data is unknown and must be discovered (e.g. through clustering or dimensionality reduction).

On the other hand, unlabelled data is easily collected, which has created growing traction in the IV community to explore unsupervised learning, its semi-, weakly-, and self-supervised variants, transfer learning, and the inference of probabilistic latent representations.

This workshop therefore explores the varied use of machine learning techniques on unlabelled, partially labelled, or automatically labelled data across all IV research domains, and encourages IV researchers to share novel developments, challenges, and solutions.

Keynotes

Adrien Gaidon

Talk: Beyond Supervised Driving

PDF slides

Crowd-sourced steering does not sound as appealing as automated driving. We need to go beyond supervised learning for automated driving, including for computer vision problems that are seeing great progress with strong supervision today. First, we will motivate why this is required for long-term, large-scale autonomous robots. Second, we will discuss recent state-of-the-art results obtained by the ML team at Toyota Research Institute (TRI) on unsupervised domain adaptation from simulation (SPIGAN, ICLR’19) and self-supervised depth and pose prediction from monocular imagery (SuperDepth, ICRA’19). Finally, we will highlight our next steps in learning from raw videos and imitation learning.
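
For readers unfamiliar with the self-supervised depth prediction mentioned above, a rough sketch of the underlying idea may help; this is our own illustrative simplification, not the SuperDepth implementation. The core training signal is a photometric reconstruction loss: the network's predicted depth and relative camera pose are used to warp an adjacent video frame into the current view, and the warp is compared with the actual frame, so no depth labels are needed. A minimal PyTorch sketch, with all tensor names, shapes, and conventions assumed for illustration:

    # Illustrative sketch only (not the SuperDepth implementation): predicted
    # depth and relative pose warp the adjacent frame into the current view,
    # and the warp is compared with the actual image.
    import torch
    import torch.nn.functional as F

    def photometric_loss(target, source, depth, pose, K):
        """L1 photometric loss between `target` and `source` warped into the
        target view, using predicted depth and pose (all shapes assumed).

        target, source: (B, 3, H, W) adjacent video frames
        depth:          (B, 1, H, W) predicted depth of the target frame
        pose:           (B, 4, 4) relative transform from target to source camera
        K:              (B, 3, 3) camera intrinsics
        """
        B, _, H, W = target.shape
        dev, dt = target.device, target.dtype

        # Homogeneous pixel grid of the target frame, shape (B, 3, H*W)
        v, u = torch.meshgrid(torch.arange(H, device=dev, dtype=dt),
                              torch.arange(W, device=dev, dtype=dt),
                              indexing="ij")
        pix = torch.stack([u, v, torch.ones_like(u)], 0).view(1, 3, -1).expand(B, -1, -1)

        # Back-project to 3D, move into the source camera, and re-project
        cam = depth.view(B, 1, -1) * (torch.inverse(K) @ pix)
        cam_h = torch.cat([cam, torch.ones(B, 1, H * W, device=dev, dtype=dt)], 1)
        proj = K @ (pose @ cam_h)[:, :3]
        uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)

        # Normalise to [-1, 1] and bilinearly sample the source frame
        grid_x = 2.0 * uv[:, 0] / (W - 1) - 1.0
        grid_y = 2.0 * uv[:, 1] / (H - 1) - 1.0
        grid = torch.stack([grid_x, grid_y], dim=-1).view(B, H, W, 2)
        warped = F.grid_sample(source, grid, padding_mode="border", align_corners=True)

        return torch.abs(target - warped).mean()

In practice such losses are typically combined with smoothness terms, masking of moving objects, and multi-scale predictions, all of which are omitted here.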

Adrien Gaidon is the Manager of the Machine Learning team and a Senior Research Scientist at the Toyota Research Institute (TRI) in Los Altos, CA, USA, working on open problems in world-scale learning for autonomous driving. He received his PhD from Microsoft Research – Inria Paris in 2012 and has over a decade of experience in academic and industrial Computer Vision, with over 30 publications, top entries in international Computer Vision competitions, multiple best reviewer awards, and international press coverage for his work on Deep Learning with simulation; he has also served as a guest editor for the International Journal of Computer Vision. You can find him on LinkedIn and Twitter (@adnothing).

Antonio M. López

Talk: Minimizing Annotation Effort

PDF slides

Human annotation of images for training visual models has been a major bottleneck ever since computer vision and machine learning began to walk together, and it has become even more evident now that computer vision rests on the shoulders of data-hungry deep learning techniques. Therefore, any approach that reduces this time-consuming, tedious, and error-prone manual work is of high interest for computer vision applications such as ADAS and autonomous driving. In this talk we review our recent work on minimizing human data annotation, including co-training, self-supervision, active learning, and end-to-end driving.
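
To make the annotation-reduction theme concrete, here is a generic uncertainty-sampling active-learning loop; it is a sketch of the general technique, not of the speaker's specific methods, and the dataset, model, and labelling budget are all assumptions made for the example:

    # Generic uncertainty-sampling sketch (not the speaker's method): train on a
    # small labelled pool, then request labels only for the unlabelled samples
    # the current model is least confident about.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)  # stand-in data

    labelled = rng.choice(len(X), size=50, replace=False)        # tiny seed set
    unlabelled = np.setdiff1d(np.arange(len(X)), labelled)
    budget, batch = 300, 25                                      # assumed annotation budget

    while len(labelled) < budget:
        model = LogisticRegression(max_iter=1000).fit(X[labelled], y[labelled])
        probs = np.sort(model.predict_proba(X[unlabelled]), axis=1)
        margin = probs[:, -1] - probs[:, -2]                     # small margin = ambiguous
        query = unlabelled[np.argsort(margin)[:batch]]           # samples to "annotate"
        labelled = np.concatenate([labelled, query])
        unlabelled = np.setdiff1d(unlabelled, query)

    model = LogisticRegression(max_iter=1000).fit(X[labelled], y[labelled])
    print(f"accuracy with only {len(labelled)} labels:", model.score(X, y))

The same selection principle carries over to deep models for detection or segmentation, where each queried sample is far more expensive to annotate.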

Antonio M. López is Associate Professor (Tenured) at the Computer Science Department of the Universitat Autònoma de Barcelona (UAB). He is also a founding member of the Computer Vision Center (CVC) at the UAB, where he created the group on ADAS and Autonomous Driving in 2002 and has been its Principal Investigator ever since. Antonio is a founding member and co-organizer of consolidated international workshops such as «Computer Vision in Vehicle Technology (CVVT)» and «Transferring and Adapting Source Knowledge in Computer Vision (TASK-CV)». He has also collaborated in numerous research projects with international companies, especially from the automotive sector. His work in the last 10 years has focused on the use of Computer Graphics, Computer Vision, Machine (Deep) Learning, and Domain Adaptation to train and test onboard AI drivers. He is the Principal Investigator at CVC/UAB of well-known projects such as the SYNTHIA dataset and the CARLA simulator. In addition, Antonio is currently focusing on topics such as Active Learning, Lifelong Learning, and Augmented Reality in the context of Autonomous Driving. He has also worked on procedural video generation for action recognition in videos, in close collaboration with Naver Labs to deploy the PHAV dataset. His list of papers is available on Google Scholar. Since January 2019, Antonio has been supported by an ICREA Academia grant.

Tuan-Hung VU

Talk: Adversarial Entropy Minimization for Domain Adaptation

PDF slides

Complex computer vision systems like autonomous vehicles demand high prediction accuracy in a large variety of urban environments. For example, under adverse weather conditions, the system must be able to recognize roads, lanes, sidewalks, and pedestrians even though their appearance differs considerably from that seen in the training set. Current fully supervised approaches do not yet guarantee good generalization to arbitrary test cases: a model trained on one domain, the “source”, usually undergoes a drastic drop in performance when applied to another domain, the “target”. Such degradation is due to the distribution gap between the two domains. One promising tool to mitigate this issue is unsupervised domain adaptation (UDA), which assumes that unannotated data from the target domain are available at training time, along with the annotated data from the source domain. We will discuss different ways to approach UDA, with application to semantic segmentation and object detection in urban scenes. We then introduce a new approach, called AdvEnt, that combines adversarial training with minimization of the decision entropy (seen as a proxy for uncertainty). Lastly, we propose a unified depth-aware UDA framework that leverages, in several complementary ways, the knowledge of dense depth in the source domain.
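
To give a flavour of the entropy-minimization idea described above, the sketch below shows the simplest variant of this family of methods, in which the per-pixel prediction entropy on unlabelled target images is used directly as an extra loss term; the adversarial component of AdvEnt, which trains a discriminator on the entropy maps, is omitted. This is our own illustrative sketch, not the authors' released code, and the network, batches, and weighting are assumptions:

    # Sketch of the simplest entropy-based UDA variant (not the authors' code):
    # per-pixel prediction entropy on unlabelled target images is used as an
    # uncertainty proxy and minimised alongside the supervised source loss.
    import math
    import torch
    import torch.nn.functional as F

    def entropy_loss(logits, eps=1e-8):
        """Mean normalised per-pixel entropy of a segmentation output.
        logits: (B, C, H, W) raw scores from the segmentation network."""
        p = F.softmax(logits, dim=1)
        ent = -(p * torch.log(p + eps)).sum(dim=1)            # (B, H, W)
        return ent.mean() / math.log(logits.shape[1])         # normalise by log(C)

    # Assumed training step: `net` is any segmentation network, `src_x`/`src_y`
    # a labelled source batch, `tgt_x` an unlabelled target batch, `lam` a weight.
    def uda_step(net, optimiser, src_x, src_y, tgt_x, lam=1e-3):
        optimiser.zero_grad()
        sup = F.cross_entropy(net(src_x), src_y)              # supervised, source domain
        ent = entropy_loss(net(tgt_x))                        # unsupervised, target domain
        loss = sup + lam * ent
        loss.backward()
        optimiser.step()
        return loss.item()

Minimizing this entropy pushes the model towards confident (low-entropy) predictions on the target domain, which is the uncertainty-as-proxy intuition mentioned in the abstract.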

Tuan-Hung VU is a Research Scientist at Valeo.ai, France. He received his PhD from École Normale Supérieure under the supervision of Ivan Laptev, holds a “Master 2” degree in Mathematics, Machine Learning and Computer Vision (MVA) from École Normale Supérieure de Cachan, and obtained his engineering degree from Télécom ParisTech in 2014. His recent work includes “Depth-aware Domain Adaptation in Semantic Segmentation” (DADA, technical report on arXiv) and “ADVENT: Adversarial Entropy Minimization for Domain Adaptation in Semantic Segmentation” (available on arXiv), which has been accepted to CVPR’19 as an oral presentation.

Rob Weston

Talk: Driving Autonomy – Machine Learning for Automated Driving

PDF slides

Intelligent transportation systems, and autonomous driving in particular, have captured the public imagination. In converting expectation into realisation, Machine Learning will play a key role, but this is not as simple as deploying out-of-the-box strategies and models developed in related fields. I will argue that a system-centric approach not only allows us to meet the necessary requirements for real-world deployment but also affords the machine learning community new opportunities for developing the next generation of intelligent algorithms.

Rob Weston is a PhD student at the Oxford Robotics Institute (ORI), speaking on behalf of Ingmar Posner, Professor of Engineering Science (Applied AI) and founder of Oxbotica Ltd., an autonomous vehicle software company. His research focuses on applying machine learning techniques to better harness the potential of radar data for robots operating in challenging real-world environments. To be readily deployed in the real world, methods should be able to process and, if necessary, assimilate new data online and in real time, while providing accurate measures of confidence to enable robust action selection. These foundations underpin his work (both previous and to come), as well as a wider theme throughout the Applied AI group at ORI.

Topics of interest

  • unsupervised, semi-supervised, and weakly-supervised learning
  • self-supervised learning
  • domain adaptation
  • representation learning
  • density estimation
  • anomaly detection
  • clustering and data analysis
  • generative graphical models
  • latent variable models
  • non-parametric models
  • artificial data generation and simulation

Information for authors

  • Accepted workshop papers will be published in the conference proceedings. Papers go through the same peer-review process as regular conference submissions.
  • At least one author needs to be registered for the workshop and the conference.
  • Information on papers and submission is available at http://iv2019.org/information-for-authors/

Contact information

For questions, please send us an email at info[at]ulad-workshop.com.

Main conference

IEEE Intelligent Vehicles Symposium 2019; see http://iv2019.org/ for details.

Important dates

  • Workshop paper submission deadline: February 7, 2019 (extended)
  • Notification of workshop paper acceptance: March 29, 2019
  • Final workshop paper submission: April 22, 2019
  • Workshop day: Sunday, June 9, 2019

Workshop organizers