Making self-driving cars is one of the great AI challenges of the 21st century, and it involves many different parts. The goal is not merely to build fully autonomous cars so that humans never need to drive again. In fact, many forms of automation, applied to every aspect of driving and to the coordination of vehicles on the road, are worth considering.
In my lab we have done work on a few focussed topics in this area:
- Multi-Vehicle Communication - In a coordinated, multi-vehicle scenario such as a convoy or fleet of autonomous cars, it is important for the vehicles to communicate efficiently and reliably. In this topic we have looked at ways to do this using Deep Neural Networks.
- Driver Behaviour Learning - In this line of research we look at how humans drive and try to learn models of that behaviour which are predictive with a good level of accuracy. If autonomous vehicles drive in ways similar to, although hopefully safer than, humans, then they can more easily be integrated into existing roads and traffic.
Our Papers on Autonomous Driving
Generative Causal Representation Learning for Out-of-Distribution Motion Forecasting
In
Proceedings of the 40th International Conference on Machine Learning (ICML).
PMLR,
Honolulu, Hawaii, USA.
Jul,
2023.
Conventional supervised learning methods typically assume i.i.d. samples and are found to be sensitive to out-of-distribution (OOD) data. We propose Generative Causal Representation Learning (GCRL), which leverages causality to facilitate knowledge transfer under distribution shifts. While we evaluate the effectiveness of our proposed method in human trajectory prediction models, GCRL can be applied to other domains as well. First, we propose a novel causal model that explains the generative factors in motion forecasting datasets using features that are common across all environments and features that are specific to each environment. Selection variables are used to determine which parts of the model can be directly transferred to a new environment without fine-tuning. Second, we propose an end-to-end variational learning paradigm to learn the causal mechanisms that generate observations from features. GCRL is supported by strong theoretical results that imply identifiability of the causal model under certain assumptions. Experimental results on synthetic and real-world motion forecasting datasets show the robustness and effectiveness of our proposed method for knowledge transfer under zero-shot and low-shot settings, substantially outperforming prior motion forecasting models on out-of-distribution prediction.
Aggressive Driver Behavior Detection using Parallel Convolutional Neural Networks on Simulated and Real Driving Data
Zehra Camlica,
Jim Quesenberry,
Daniel Carballo,
and Mark Crowley
In
9th International Conference on Internet of Things: Systems, Management and Security (IOTSMS).
IEEE,
Milan, Italy.
Nov,
2022.
The novel method proposed in this paper is comprised of two Convolutional Neural Networks (CNN) working in parallel to simultaneously classify driver behaviors and maneuvers from time series data. We claim that the Parallel Convolutional Neural Network (PCNN) not only speeds up training time but also increases performance, since having information about the maneuver helps to improve behavior classification performance and vice versa. In this study, both simulation and real-world driving datasets are utilized for driver behavior analysis. As simulation data, mobile phone sensor data are simulated as a time series using a combination of a traffic simulator (SUMO) and a car simulation system (Webots). The same type of data is collected with a specially designed vehicle driven on a defined route around a predefined region. The collected data are then separately utilized as training and testing data for classification of both maneuvers (e.g., turns and lane changes) and driver behaviors (e.g., aggressive, non-aggressive) applying a novel method using deep learning on time series data. In addition, other methods commonly used for time series analysis, Hidden Markov Models (HMMs) and Recurrent Neural Networks (RNN), are applied to the same datasets to compare with PCNN. According to the results, the CNN classifiers perform efficiently for a single task, and PCNN outperforms both single-task CNN and RNN with an average accuracy of 86%.
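The core PCNN idea is two CNN branches consuming the same sensor window, one per task. The sketch below illustrates that parallel two-head structure in plain numpy; the layer sizes, channel counts, and class counts are hypothetical choices for illustration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid-mode 1-D convolution of a (channels, time) signal with
    (n_filters, channels, width) kernels; returns (n_filters, time')."""
    n_f, n_c, w = kernels.shape
    t_out = x.shape[1] - w + 1
    out = np.empty((n_f, t_out))
    for f in range(n_f):
        for t in range(t_out):
            out[f, t] = np.sum(kernels[f] * x[:, t:t + w])
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def pcnn_forward(x, params):
    """Two CNN branches run in parallel on the same sensor window:
    one classifies the maneuver, the other the driver behavior."""
    probs = {}
    for task in ("maneuver", "behavior"):
        kernels, w_fc = params[task]
        h = np.maximum(conv1d(x, kernels), 0.0)   # ReLU feature maps
        pooled = h.mean(axis=1)                   # global average pooling
        probs[task] = softmax(w_fc @ pooled)      # per-task class probabilities
    return probs

# Toy example: 3 sensor channels (e.g. accelerometer axes), 50 time steps,
# 4 hypothetical maneuver classes and 2 behavior classes.
x = rng.standard_normal((3, 50))
params = {
    "maneuver": (rng.standard_normal((8, 3, 5)), rng.standard_normal((4, 8))),
    "behavior": (rng.standard_normal((8, 3, 5)), rng.standard_normal((2, 8))),
}
probs = pcnn_forward(x, params)
```

In the paper the two branches are trained jointly, so each task's gradient signal can inform the other; here only the untrained forward pass is shown.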
Multi-Level Collaborative Control System With Dual Neural Network Planning For Autonomous Vehicle Control In A Noisy Environment
Zhiyuan Du,
Joseph Lull,
Rajesh Malhan,
Sriram Ganapathi Subramanian,
Sushrut Bhalla,
Jaspreet Sambee,
Mark Crowley,
Sebastian Fischmeister,
Donghyun Shin,
William Melek,
Baris Fidan,
Ami Woo,
and Bismaya Sahoo.
US Patent Office: #US 11,131,992 B2.
Sep,
2021.
An RLP system for a host vehicle includes a memory and three levels. The memory stores an RLP algorithm, which is a multi-agent collaborative deep Q-network (DQN) with prioritized experience replay (PER). The first level includes a data processing module that provides sensor data, object location data, and state information of the host vehicle and other vehicles. The second level includes a coordinate location module that, based on the sensor data, the object location data, the state information, and a refined policy provided by the third level, generates an updated policy and a set of future coordinate locations implemented via the first level. The third level includes evaluation and target neural networks and a processor that executes instructions of the RLP algorithm for collaborative action planning between the host and other vehicles based on outputs of the evaluation and target networks, and generates the refined policy based on reward values associated with events.
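The PER component named in the patent abstract is a standard reinforcement learning technique: transitions are replayed with probability proportional to their TD error rather than uniformly. A minimal proportional-PER buffer, independent of the patented system and with hypothetical parameter choices, looks like this:

```python
import random

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (PER):
    transitions with larger TD error are sampled more often.
    A ring buffer overwrites the oldest entries at capacity."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly priority skews sampling
        self.data, self.priorities = [], []
        self.pos = 0

    def add(self, transition, td_error=1.0):
        p = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(p)
        else:
            self.data[self.pos] = transition
            self.priorities[self.pos] = p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, k):
        """Draw k transitions with probability proportional to priority."""
        idx = random.choices(range(len(self.data)),
                             weights=self.priorities, k=k)
        return idx, [self.data[i] for i in idx]

    def update(self, idx, td_errors):
        """Refresh priorities after the learner recomputes TD errors."""
        for i, e in zip(idx, td_errors):
            self.priorities[i] = (abs(e) + 1e-6) ** self.alpha
```

A full DQN-with-PER learner would also apply importance-sampling weights to correct the sampling bias; that correction is omitted here for brevity.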
Deep Multi Agent Reinforcement Learning for Autonomous Driving
In
Canadian Conference on Artificial Intelligence.
May,
2020.
Learning Multi-Agent Communication with Reinforcement Learning
In
Conference on Reinforcement Learning and Decision Making (RLDM-19).
Montreal, Canada.
2019.
Training Cooperative Agents for Multi-Agent Reinforcement Learning
In
Proc. of the 18th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2019).
Montreal, Canada.
2019.
Integration of Roadside Camera Images and Weather Data for Monitoring Winter Road Surface Conditions
Juan Carrillo,
and Mark Crowley
In
Canadian Association of Road Safety Professionals (CARSP) Conference.
Calgary, Canada.
2019.
Background/Context: During the winter season, real-time monitoring of road surface conditions is critical for the safety of drivers and road maintenance operations. Previous research has evaluated the potential of image classification methods for detecting road snow coverage by processing images from roadside cameras installed in RWIS (Road Weather Information System) stations. However, it is a challenging task due to limitations such as image resolution, camera angle, and illumination. Two common approaches to improve the accuracy of image classification methods are adding more input features to the model and increasing the number of samples in the training dataset. Additional input features can be weather variables, and more sample images can be added by including other roadside cameras. Although RWIS stations are equipped with both cameras and weather measurement instruments, they are only a subset of the total number of roadside cameras installed across transportation networks, most of which do not have weather measurement instruments. Thus, improvements in the use of image data could benefit from additional data sources.

Aims/Objectives: The first objective of this study is to complete an exploratory data analysis over three data sources in Ontario: RWIS stations, all the other MTO (Ministry of Transportation of Ontario) roadside cameras, and Environment Canada weather stations. The second objective is to determine the feasibility of integrating these three datasets into a more extensive and richer dataset, with weather variables as additional features and other MTO roadside cameras as additional sources of images.

Methods/Targets: First, we quantify the advantage of adding other MTO roadside cameras using spatial statistics, the number of monitored roads, and the coverage of ecoregions with different climate regimes. We then analyze experimental variograms from the literature and determine the feasibility of using Environment Canada stations and RWIS stations to interpolate weather variables for all the other MTO roadside cameras without weather instruments.

Results/Activities: By adding all other MTO cameras as image data sources, the total number of cameras in the dataset increases from 139 to 578 across Ontario. The average distance to the nearest camera decreases from 38.4 km to 9.4 km, and the number of monitored roads increases approximately four times. Additionally, six times more cameras are available in the four most populated ecoregions in Ontario. The experimental variograms show that it is feasible to interpolate weather variables with reasonable accuracy. Moreover, observations in the three datasets are collected with similar frequency, which facilitates our data integration approach.

Discussion/Deliverables: Integrating these three datasets is feasible and can benefit the design and development of automated image classification methods for monitoring road snow coverage. We do not consider data from pavement-embedded sensors; an additional line of research may explore the integration of this data. Our approach can provide actionable insights which can be used to more selectively perform manual patrolling to better identify road surface conditions.

Conclusions: Our initial results are promising and demonstrate that additional, image-only datasets can be added to road monitoring data by using existing multimodal sensors as ground truth, which will lead to greater performance on future image classification tasks.
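The abstract's key step is spatial interpolation: estimating weather variables at camera sites that lack instruments from nearby stations that have them. The variogram analysis in the paper supports kriging; as a simpler illustrative stand-in, inverse-distance weighting captures the same idea of nearby stations contributing more. Coordinates and readings below are made up for the example.

```python
import math

def idw_interpolate(stations, target, power=2.0):
    """Estimate a weather variable at `target` (x, y in projected km)
    from surrounding station readings via inverse-distance weighting,
    a simple stand-in for the variogram-based interpolation the
    abstract refers to."""
    num = den = 0.0
    for (x, y), value in stations:
        d = math.hypot(x - target[0], y - target[1])
        if d < 1e-9:
            return value  # camera co-located with a station
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Hypothetical station readings ((x km, y km) -> air temperature, Celsius)
stations = [((0.0, 0.0), -5.0), ((10.0, 0.0), -3.0), ((0.0, 10.0), -7.0)]
# Temperature estimate at a camera without weather instruments
t = idw_interpolate(stations, (2.0, 2.0))
```

Kriging would replace the fixed inverse-distance weights with weights derived from the fitted variogram, which is what makes the variogram analysis in the abstract a feasibility test for this step.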
Decision Assist for Self-Driving Cars
In
Canadian Conference on Artificial Intelligence.
Springer,
Toronto, Ontario, Canada.
2018.
Research into self-driving cars has grown enormously in the last decade, primarily due to advances in the fields of machine intelligence and image processing. An under-appreciated aspect of self-driving cars is actively avoiding high traffic zones, low visibility zones, and routes with rough weather conditions by learning different conditions and making decisions based on trained experiences. This paper addresses this challenge by introducing a novel hierarchical structure for dynamic path planning and experiential learning for vehicles. A multistage system is proposed for detecting and compensating for weather, lighting, and traffic conditions, as well as a novel adaptive path planning algorithm named Checked State A3C. This algorithm improves upon the existing A3C Reinforcement Learning (RL) algorithm by adding state memory, which provides the ability to learn an adaptive model of the best decisions to take from experience. © Springer International Publishing AG, part of Springer Nature 2018.
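The distinguishing feature of Checked State A3C is the state memory added on top of A3C. A hypothetical sketch of that idea, entirely illustrative and not the paper's implementation, is a lookup of past (state, action) outcomes that the planner consults before committing to an action:

```python
class StateMemory:
    """Illustrative sketch of a 'state memory' in the spirit of
    Checked State A3C: remember the average outcome of previously
    visited (state, action) pairs so the planner can veto actions
    that led to bad conditions before."""

    def __init__(self):
        self.outcomes = {}  # (state, action) -> (visit count, mean reward)

    def record(self, state, action, reward):
        key = (state, action)
        n, avg = self.outcomes.get(key, (0, 0.0))
        # Incremental running mean of observed reward
        self.outcomes[key] = (n + 1, avg + (reward - avg) / (n + 1))

    def check(self, state, action, threshold=0.0):
        """Allow actions that are unseen or historically above threshold."""
        n, avg = self.outcomes.get((state, action), (0, threshold))
        return avg >= threshold
```

In the paper this memory augments the A3C policy; here only the memory itself is shown, with states and actions as hashable placeholders.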