2nd Workshop on Unsupervised Learning for Automated Driving
Half-day workshop at the IEEE Intelligent Vehicles Symposium 2020, Friday, October 23 (Fully Virtual)
supported by the IEEE ITS Technical Committee Self Driving Automobiles
Unlabelled data is easy to collect, which has created growing interest in the IV community in unsupervised learning, its semi-, weakly-, and self-supervised variants, transfer learning, and the inference of probabilistic latent representations. This workshop invites researchers to share novel methodological developments, challenges, and solutions for exploiting unlabelled data and reducing the annotation bottleneck.
UPDATE: New 2022 edition at IV’2022 in Aachen; see https://intelligent-vehicles.org/ulad-2022/
Keynote speakers
While the schedule is still tentative, we are very pleased to announce the following keynote speakers with diverse academic and industrial backgrounds; see the keynote details below for more information.
Anima Anandkumar
Bren Professor at Caltech, Director of ML Research at NVIDIA (biography and talk info)
Zsolt Kira
Assistant Professor at Georgia Tech (biography and talk info)
Tudor Achim
CTO at Helm.ai, previously at Stanford (biography and talk info)
Sertac Karaman
Associate Professor at MIT (biography and talk info)
2021-02-04: We have now uploaded the videos to YouTube (they were originally accessible only to conference participants), together with the slides made available by some of the speakers. You can find them below along with the talk information.
Schedule
The workshop will be streamed using Zoom: https://zoom.us/j/99637369351?pwd=ZHpHUzZ2STNvSlFIbFdVVjNrbVNiUT09
Note: all times in Pacific Time (PT)
- 08:00-08:15 am: Introduction WS organizers
- 08:15-09:00 am: Keynote: Sertac Karaman – Unsupervised End-to-End Learning of Driving using Novel Photorealistic Simulations
- 09:00-09:45 am: Keynote: Zsolt Kira – Leveraging single and cross-modal unlabeled data for learning with limited labels
- 09:45-10:00 am: Break
- 10:00-10:45 am: Keynote: Tudor Achim – Deep Teaching: A Scalable AI Approach to Autonomous Driving
- 10:45-11:30 am: Keynote: Anima Anandkumar – Learning robust representations through self-supervision
- 11:30 am-12:00 pm: Closing remarks WS organizers
Workshop description
Machine learning is nowadays omnipresent in automated driving: it is applied to sensor data for perception, context cues for intent recognition, trajectories for path prediction and planning, scenario clustering for test specification, density estimation for anomalous event detection, and realistic data generation for simulation. Large annotated datasets and benchmarks are key to advancing the state of the art, and have so far mainly involved supervised learning.
However, the supervised learning paradigm has its limitations, since (a) new sensors or environment conditions require new labelling to counter domain shift, (b) creating manual annotations is labour-intensive and time-consuming, (c) some data may be infeasible to collect (e.g. rare events) or annotate (e.g. mental states), and (d) sometimes the relevant structure in the data is unknown and must first be discovered (e.g. through clustering or dimensionality reduction).
On the other hand, unlabelled data is easy to collect, which has increased interest within the IV community in unsupervised learning, its semi-, weakly-, and self-supervised variants, transfer learning, and the inference of probabilistic latent representations.
This workshop therefore explores the varied use of machine learning techniques on unlabelled, partially labelled, or automatically labelled data across all IV research domains, encouraging IV researchers to share novel developments, challenges, and solutions.
Keynotes
Anima Anandkumar
Learning robust representations through self-supervision
Abstract. To be announced
Anima Anandkumar is a Bren Professor at Caltech and Director of ML Research at NVIDIA. She was previously a Principal Scientist at Amazon Web Services. She has received several honors, such as the Alfred P. Sloan Fellowship, an NSF CAREER Award, Young Investigator Awards from the DoD, and Faculty Fellowships from Microsoft, Google, Facebook, and Adobe. She is part of the World Economic Forum’s Expert Network. She is passionate about designing principled AI algorithms and applying them to interdisciplinary applications. Her research focus is on unsupervised AI, optimization, and tensor methods.
Zsolt Kira
Leveraging single and cross-modal unlabeled data for learning with limited labels
Abstract. Autonomous driving has made significant progress in the past decade, largely owing to large annotated datasets and supervised learning. However, the abundance of unlabeled data that can be continuously collected or synthetically generated in the context of self-driving presents significant opportunities in the face of challenges such as rare objects/events and cross-domain generalization. I will discuss three methods that we have developed to better leverage unlabeled data in various semi-supervised contexts, namely: 1) leveraging the manifold structure to perform feature-based augmentation for image classification, 2) addressing imbalance in pseudo-labeling-based methods for object detection, and 3) using cross-modal data and HD maps to “inflate” mature 2D detection pipelines to 3D in self-driving data. These methods achieve state-of-the-art results in semi-supervised classification and object detection, often incurring little degradation with only 1-10% of the labeled data. However, more complex tasks such as 2D or 3D detection remain more difficult, and the gap is larger than for vanilla image classification. We argue that for data in autonomous driving, cross-modal and geometric information is still under-utilized in the context of pseudo-labeling and consistency-based losses, presenting an opportunity to leverage abundant unlabeled data even further.
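For readers less familiar with pseudo-labeling, the sketch below illustrates the basic idea of a confidence-thresholded pseudo-labeling step for semi-supervised classification. It is a generic, minimal illustration in PyTorch and not the speaker's method; the model, batch variables, and threshold value are assumptions made for the example. The naive thresholding shown here is also where the class-imbalance issue mentioned in the abstract tends to arise.

    import torch
    import torch.nn.functional as F

    def semi_supervised_step(model, x_labeled, y_labeled, x_unlabeled, threshold=0.95):
        # Standard supervised loss on the labeled batch.
        sup_loss = F.cross_entropy(model(x_labeled), y_labeled)

        # Generate pseudo-labels on the unlabeled batch, keeping only confident predictions.
        with torch.no_grad():
            probs = F.softmax(model(x_unlabeled), dim=1)
            confidence, pseudo_labels = probs.max(dim=1)
            mask = confidence >= threshold  # naive thresholding can amplify class imbalance

        # Unsupervised loss: treat the confident pseudo-labels as if they were ground truth.
        if mask.any():
            unsup_loss = F.cross_entropy(model(x_unlabeled[mask]), pseudo_labels[mask])
        else:
            unsup_loss = torch.zeros((), device=x_labeled.device)

        return sup_loss + unsup_loss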
Dr. Zsolt Kira is an Assistant Professor at the Georgia Institute of Technology, branch chief of the Machine Learning and Analytics group at the Georgia Tech Research Institute (GTRI), and Associate Director of Georgia Tech’s Machine Learning Center. His work lies at the intersection of machine learning and artificial intelligence for sensor processing, perception, and robotics, emphasizing the fusion of multiple sources of information for scene understanding. Current projects and interests relate to moving beyond the current limitations of machine learning to tackle unsupervised/semi-supervised methods, continual/lifelong learning, and adaptation, as well as distributed perception across heterogeneous robot/sensor teams. Dr. Kira has grown a portfolio of projects funded by NSF, ONR, DARPA, and the IC community. He has also won several best paper/student paper awards, taught several graduate and undergraduate machine/deep learning courses at Georgia Tech, and been invited to speak at related workshops both in academia and within the DoD.
Slides: ulad2020_session3_zsoltKira
Tudor Achim
Deep Teaching: A Scalable AI Approach to Autonomous Driving
Abstract. Today’s L4 autonomous vehicle deployments are limited by the tremendous cost and time of developing the safety-critical AI software required to address the enormous tail of corner-case scenarios. We will cover an emerging AI technology called Deep Teaching, which reduces the cost and time of developing autonomous driving systems by allowing large-scale training of neural networks without the bottleneck of human annotation and by tackling training in the small-data regime. We’ll conclude with demonstrations of the results of Deep Teaching, including state-of-the-art performance on today’s autonomous driving benchmarks.
Tudor Achim is a computer scientist with a decade of experience in artificial intelligence and software engineering. At 19 years old, Tudor graduated from Carnegie Mellon University with a BA in Computer Science and a minor in Mathematics. He helped start the machine learning team at Quora at age 20, where he led machine learning-based ranking improvements. Prior to founding Helm.ai, Tudor spent two years researching theoretical machine learning in the Computer Science PhD program at Stanford, during which he published papers at NeurIPS, ICML, and AISTATS. Helm.ai is an automotive software company that is building and productizing cutting-edge artificial intelligence technology that unlocks the full market potential of fully autonomous driving. Its primary focus is on developing flexible, deep learning-based, automotive-grade software solutions for the entire self-driving stack, including computer vision and sensor fusion-based perception, intent modeling, path planning, and control, all with the levels of reliability required for large-scale full autonomy.
Sertac Karaman
Unsupervised End-to-End Learning of Driving using Novel Photorealistic Simulations
Abstract. Autonomous vehicles hold the potential to revolutionize transportation. However, continuing research and development in academia and industry has not yet produced systems with substantial economic impact. Arguably, an important bottleneck is data labeling for a massive number of corner cases. In this talk, we discuss novel, data-driven simulation platforms that enable scalable, unsupervised, automated learning of end-to-end driving policies. We show that such photorealistic simulations allow the transfer of learned policies to a full-scale autonomous vehicle.
Sertac Karaman is the Charles Stark Draper Associate Professor of Aeronautics and Astronautics at the Massachusetts Institute of Technology (since Fall 2012). He obtained BS degrees in mechanical engineering and in computer engineering from Istanbul Technical University, Turkey, in 2007, an SM degree in mechanical engineering from MIT in 2009, and a PhD degree in electrical engineering and computer science, also from MIT, in 2012. His research interests lie in the broad areas of robotics and control theory. In particular, he studies the application of probability theory, stochastic processes, stochastic geometry, formal methods, and optimization to the design and analysis of high-performance cyber-physical systems.
Previous edition
- ULAD 2019 at IEEE Intelligent Vehicles Symposium 2019
- Slides of ULAD 2019 presentations can be found on the website!
Topics of interest
- unsupervised, semi-supervised, and weakly-supervised learning
- self-supervised learning
- domain adaptation
- GANs & VAEs
- missing data
- representation learning
- density estimation
- anomaly detection
- clustering and data analysis
- generative graphical models
- latent variable models
- non-parametric models
- artificial data generation and simulation
Main conference
- IEEE Intelligent Vehicles Symposium 2020
- October 19 - November 13, 2020, Las Vegas, NV, United States (new dates due to COVID-19; originally June 23-26, 2020)
Information for authors
- Accepted workshop papers will be published in the conference proceedings. Papers go through the same peer-review process as regular conference submissions.
- At least one author needs to be registered for the workshop and the conference.
- Information on paper submission is available at https://2020.ieee-iv.org/information-for-authors/
Contact information
For questions, please send us an email at info[at]ulad-workshop.com
Workshop organizers
Fabian Flohr
Daimler AG, Ulm, Germany
Julian Kooij
Intelligent Vehicles group, TU Delft, The Netherlands