1st workshop on Unsupervised Learning for Automated Driving
Half-day workshop at the IEEE Intelligent Vehicles Symposium 2019, June 9, 2019, MINES ParisTech, Paris,
supported by the IEEE ITS Technical Committee on Self-Driving Automobiles
Machine learning is nowadays omnipresent in automated driving, and applied on sensor data for perception, context cues for intent recognition, trajectories for path prediction and planning, scenario clustering for test specification, density estimation for anomalous event detection, and realistic data generation for simulation. Large annotated datasets and benchmarks are key to advancing the state-of-the-art, and have so far mainly involved supervised learning.
However, the supervised learning paradigm has its limitations, since (a) new sensors or environment conditions require new labelling to counter domain shift, (b) creating manual annotations is labour intensive and time consuming, (c) some properties may be infeasible to collect (e.g. rare events) or annotate (e.g. mental states), (d) sometimes the relevant structure in the data is unknown and must be discovered (e.g. clustering, dimensionality reduction).
On the other hand, unlabelled data is easily collected, which is driving growing interest in the IV community in unsupervised learning, its semi-, weakly-, and self-supervised variants, transfer learning, and the inference of probabilistic latent representations.
This workshop therefore explores the varied use of machine learning techniques on unlabelled, partially or automatically labelled data throughout all IV research domains, encouraging IV researchers to share novel developments, challenges and solutions.
- 13:00 – 13:05: WS organizer – Welcome and introduction
- 13:05 – 13:45: 1st Keynote: Ingmar Posner – Talk to be announced …
- 13:45 – 14:25: 2nd Keynote: Tuan-Hung VU – Talk to be announced …
- 14:25 – 14:55: Two paper presentations, 10 min each + 5 min questions
- Backpropagation for Parametric STL,
Karen Leung, Nikos Arechiga and Marco Pavone
- Learning Error Patterns from Diagnosis Trouble Codes,
Stefan Kriebel, Evgeny Kusmenko, Bernhard Rumpe and Igor Shumeiko
- 14:55 – 15:15: Coffee Break – 20 min
- 15:15 – 15:55: 3rd Keynote: Antonio López – Minimizing Annotation Effort
- 15:55 – 16:35: 4th Keynote: Adrien Gaidon – Beyond Supervised Driving
- 16:35 – 17:35: Roundtable discussion: All keynote speakers and audience
- Workshop day: June 9, 2019
- Final Workshop paper submission: April 22, 2019
- Notification of workshop paper acceptance: March 29, 2019
- Workshop paper submission deadline: February 7th, 2019 (extended)
Talk: Beyond Supervised Driving
Crowd-sourced steering does not sound as appealing as automated driving. We need to go beyond supervised learning for automated driving, including for computer vision problems that are seeing great progress with strong supervision today. First, we will motivate why this is required for long-term, large-scale autonomous robots. Second, we will discuss recent state-of-the-art results obtained by the ML team at Toyota Research Institute (TRI) on unsupervised domain adaptation from simulation (SPIGAN, ICLR’19) and self-supervised depth and pose prediction from monocular imagery (SuperDepth, ICRA’19). Finally, we will highlight our next steps in learning from raw videos and imitation learning.
Adrien Gaidon is the Manager of the Machine Learning team and a Senior Research Scientist at the Toyota Research Institute (TRI) in Los Altos, CA, USA, working on open problems in world-scale learning for autonomous driving. He received his PhD from Microsoft Research – Inria Paris in 2012 and has over a decade of experience in academic and industrial Computer Vision, with over 30 publications, top entries in international Computer Vision competitions, multiple best-reviewer awards, and international press coverage for his work on Deep Learning with simulation; he has also served as a guest editor for the International Journal of Computer Vision. You can find him on LinkedIn and Twitter (@adnothing).
Talk: Minimizing Annotation Effort
Human annotation of images for training visual models has been a major bottleneck ever since computer vision and machine learning began advancing hand in hand, and it has become even more evident now that computer vision rests on the shoulders of data-hungry deep learning techniques. Therefore, any approach that reduces this time-consuming, tedious, and error-prone manual work is of high interest for computer vision applications such as ADAS and autonomous driving. In this talk we review our recent work on minimizing human data annotation, including co-training, self-supervision, active learning, and end-to-end driving.
Antonio M. López is Associate Professor (Tenure) at the Computer Science Department of the Universitat Autònoma de Barcelona (UAB). He is also a founding member of the Computer Vision Center (CVC) at the UAB, where he created, and has led as Principal Investigator since 2002, the group on ADAS and Autonomous Driving. Antonio is a founding member and co-organizer of established international workshops such as «Computer Vision in Vehicle Technology (CVVT)» and «Transferring and Adapting Source Knowledge in Computer Vision (TASK-CV)». He has also collaborated on numerous research projects with international companies, especially from the automotive sector. His work over the last 10 years has focused on the use of Computer Graphics, Computer Vision, Machine (Deep) Learning, and Domain Adaptation to train and test onboard AI drivers. He is the Principal Investigator at CVC/UAB of well-known projects such as the SYNTHIA dataset and the CARLA simulator. In addition, Antonio is currently focusing on topics such as Active Learning, Lifelong Learning, and Augmented Reality in the context of Autonomous Driving. He has also worked on procedural video generation for action recognition in videos, in close collaboration with Naver Labs on the PHAV dataset. His list of papers is available on Google Scholar. Since January 2019, Antonio is supported by an ICREA Academia grant.
Talk to be announced …
Ingmar Posner is Professor of Engineering Science (Applied Artificial Intelligence) at the Oxford Robotics Institute and Founder of Oxbotica Ltd., an autonomous vehicle software company. His research focuses on the application of machine learning techniques to emerging mobile robotics tasks such as semantic mapping, active exploration, and life-long learning. Mobile robotics presents an exciting and unconventional domain for machine learning applications, since the data typically gathered by a mobile robot differ significantly from those of other, more typical application areas: they often originate from a combination of different modalities sensing spatially and temporally contiguous workspaces. Expert labelling of a limited amount of new data can be obtained through human-machine interaction, and new data can be acquired on demand. The requirements imposed on machine learning methods in mobile robotics are similarly particular: methods should be able to process and, if necessary, assimilate new data online and in real time. The algorithms should exploit the sequential nature of the data and provide accurate measures of confidence to enable robust action selection. Of particular concern in current work is the extraction of ‘higher-order’ semantic information from sensor data for autonomous navigation and mapping tasks outdoors.
Talk to be announced …
Tuan-Hung VU is a Research Scientist at Valeo.ai, France. He received his PhD from École Normale Supérieure under the supervision of Ivan Laptev, and holds a “Master 2” degree in Mathematics, Machine Learning and Computer Vision (MVA) from École Normale Supérieure de Cachan. In 2014, he obtained his engineering degree from Télécom ParisTech. His recent work includes “Depth-aware Domain Adaptation in Semantic Segmentation” (DADA, technical report on arXiv) and “ADVENT: Adversarial Entropy Minimization for Domain Adaptation in Semantic Segmentation” (arXiv link), which has been accepted to CVPR’19 as an oral presentation.
Topics of interest
- unsupervised, semi-supervised, and weakly-supervised learning
- self-supervised learning
- domain adaptation
- representation learning
- density estimation
- anomaly detection
- clustering and data analysis
- generative graphical models
- latent variable models
- non-parametric models
- artificial data generation and simulation
Information for authors
- Accepted workshop papers will be published in the conference proceedings. Papers go through the same peer-review process as regular conference submissions.
- At least one author must be registered for both the workshop and the conference.
- Information on paper formatting and submission is available at http://iv2019.org/information-for-authors/
For questions, please e-mail us at info[at]ulad-workshop.com