awesome-autonomous-vehicles
A curated list of resources and tutorials for building self-driving cars and autonomous vehicles
Topics: autonomous-cars, autonomous-vehicles, car-driving, computer-vision, deep-learning
Foundations / Artificial Intelligence
Awesome Machine Learning | 66,046 | 10 days ago | A curated list of awesome Machine Learning frameworks, libraries and software. Maintained by Joseph Misiti
Deep Learning Papers Reading Roadmap | 38,327 | almost 2 years ago | A Deep Learning papers reading roadmap organized from outline to detail, from old to state-of-the-art, and from generic to specific areas, for anyone starting out in Deep Learning. Maintained by Flood Sung
Open Source Deep Learning Curriculum | A Deep Learning curriculum meant to be a starting point for everyone interested in seriously studying the field
Foundations / Robotics
Awesome Robotics | 4,387 | about 2 months ago | A list of various books, courses and other resources for robotics, maintained by kiloreux |
Foundations / Computer Vision
Awesome Computer Vision | 21,049 | 6 months ago | A curated list of awesome computer vision resources, maintained by Jia-Bin Huang |
Awesome Deep Vision | 10,830 | over 1 year ago | A curated list of deep learning resources for computer vision, maintained by Jiwon Kim, Heesoo Myeong, Myungsub Choi, Jung Kwon Lee, Taeksoo Kim |
Courses
[Coursera] Machine Learning | Presented by Andrew Ng; as of 28 Jan 2020 it had 125,344 ratings and 30,705 reviews
[Coursera + deeplearning.ai] Deep Learning Specialization | Presented by Andrew Ng; 5 courses that teach the foundations of deep learning, with programming exercises in Python
[Udacity] Self-Driving Car Nanodegree Program | Teaches the skills and techniques used by self-driving car teams; the program syllabus is published by Udacity
[University of Toronto] CSC2541 Visual Perception for Autonomous Driving | A graduate course in visual perception for autonomous driving. The class briefly covers topics in localization, ego-motion estimation, free-space estimation, and visual recognition (classification, detection, segmentation)
[INRIA] Mobile Robots and Autonomous Vehicles | Introduces the key concepts required to program mobile robots and autonomous vehicles. The course presents both formal and algorithmic tools, and for its last week's topics (behavior modeling and learning) it also provides realistic examples and programming exercises in Python
[University of Glasgow] ENG5017 Autonomous Vehicle Guidance Systems | Introduces the concepts behind autonomous vehicle guidance and coordination and enables students to design and implement guidance strategies for vehicles incorporating planning, optimising and reacting elements
[David Silver - Udacity] How to Land An Autonomous Vehicle Job: Coursework | David Silver, from Udacity, reviews his coursework for landing a job in self-driving cars coming from a software engineering background
[Stanford] CS221 Artificial Intelligence: Principles and Techniques | Contains a simple self-driving project and simulator
[MIT] 6.S094: Deep Learning for Self-Driving Cars
[MIT] Deep Learning
[MIT] Human-Centered Artificial Intelligence
[UCSD] MAE/ECE148 Introduction to Autonomous Vehicles | A hands-on, project-based course using DonkeyCar, covering lane tracking and advanced topics such as object detection and navigation
[MIT] 2.166 Duckietown | A graduate-level class about the science of autonomy. A hands-on, project-focused course on self-driving vehicles and high-level autonomy
[Coursera] Self-Driving Cars | A 4-course specialization on self-driving cars by the University of Toronto, covering Introduction, State Estimation & Localization, Visual Perception, and Motion Planning
Papers
Stereo and Colour Vision Techniques for Autonomous Vehicle Guidance
Research Labs
SAIL-TOYOTA Center for AI Research at Stanford | A center for AI research at Stanford sponsored by Toyota
Berkeley DeepDrive | Investigates state-of-the-art technologies in computer vision and machine learning for automotive applications
Princeton Autonomous Vehicle Engineering | An undergraduate, student-led research group at Princeton University dedicated to advancing and promoting the field of robotics through competitive challenges, self-guided research and community outreach
University of Maryland Autonomous Vehicle Laboratory | Conducts research and development in the area of biologically inspired design and robotics
University of Waterloo WAVE Laboratory | Research areas include multirotor UAVs, autonomous driving and multi-camera parallel tracking and mapping
Oxford Robotics Institute – Autonomous Systems | Researches all aspects of land-based mobile autonomy
Autonomous Lab - Freie Universität Berlin | Computer Vision, Cognitive Navigation, Spatial Car Environment Capture | ||
Honda Research Institute - USA | Engaged in the development and integration of multiple sensory modules and the coordination of these components while fulfilling tasks such as stable motion planning, decision making, obstacle avoidance, and control
Toyota-CSAIL Research Center at MIT | Aimed at furthering the development of autonomous vehicle technologies, with the goal of reducing traffic casualties and potentially even developing a vehicle incapable of getting into an accident | ||
Princeton Vision & Robotics | Autonomous Driving and StreetView | ||
CMU The Robotic Institute Vision and Autonomous Systems Center (VASC) | working in the areas of computer vision, autonomous navigation, virtual reality, intelligent manipulation, space robotics, and related fields | ||
Five AI | Computer vision, hardware, and other publications from a UK-based autonomous vehicle company | ||
Vehicle Industry Research Center - Széchenyi University | One of its most-researched topics is self-driving (a.k.a. autonomous) vehicles. The research center is preparing for this new technology by studying and researching its fundamentals and exploring the possibilities it offers
Karlsruhe Institute of Technology (KIT) | At KIT, about 800 scientists of nearly 40 institutes conduct research into forward-looking, safe, sustainable, and comfortable solutions for future mobility. Scarcity of resources, lacking space, and overstrained infrastructure call for an integrated assessment of transport means and traffic flows | ||
Datasets
Udacity | 6,234 | almost 3 years ago | Udacity driving datasets released for its self-driving car challenges. Contains ROSBAG training data (~80 GB); a reading sketch appears after this list
Comma.ai | 7.25 hours of largely highway driving, consisting of 10 video clips of variable length recorded at 20 Hz with a camera mounted on the windshield of a 2016 Acura ILX. In parallel with the videos, measurements such as the car's speed, acceleration, steering angle, GPS coordinates and gyroscope angles were also recorded and transformed onto a uniform 100 Hz time base (an HDF5 inspection sketch appears after this list)
Oxford RobotCar | over 100 repetitions of a consistent route through Oxford, UK, captured over a period of over a year. The dataset captures many different combinations of weather, traffic and pedestrians, along with longer term changes such as construction and roadworks | ||
Oxford Radar RobotCar | radar extension to The Oxford RobotCar Dataset providing data from a Navtech CTS350-X Millimetre-Wave FMCW radar and Dual Velodyne HDL-32E LIDARs with optimised ground truth radar odometry for 280 km of driving | ||
Oxford Road Boundaries | Contains 62,605 labelled samples, of which 47,639 are curated. Each sample contains both raw and classified masks for the left and right lenses. The data includes images from a diverse set of scenarios such as straight roads, parked cars, and junctions
KITTI Vision Benchmark Suite | 6 hours of traffic scenarios recorded at 10-100 Hz using a variety of sensor modalities, including high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS/IMU inertial navigation system (a point-cloud loading sketch appears after this list)
University of Michigan North Campus Long-Term Vision and LIDAR Dataset | consists of omnidirectional imagery, 3D lidar, planar lidar, GPS, and proprioceptive sensors for odometry collected using a Segway robot | ||
University of Michigan Ford Campus Vision and Lidar Data Set | dataset collected by an autonomous ground vehicle testbed, based upon a modified Ford F-250 pickup truck. The vehicle is outfitted with a professional (Applanix POS LV) and consumer (Xsens MTI-G) Inertial Measuring Unit (IMU), a Velodyne 3D-lidar scanner, two push-broom forward looking Riegl lidars, and a Point Grey Ladybug3 omnidirectional camera system | ||
DIPLECS Autonomous Driving Datasets (2015) | Recorded by placing an HD camera in a car driving around the Surrey countryside. The dataset contains about 30 minutes of driving. The video is 1920x1080 in colour, encoded with the H.264 codec. Steering is estimated by tracking markers on the steering wheel, and the car's speed is estimated by OCR of the car's speedometer (the accuracy of this method is not guaranteed)
Velodyne SLAM Dataset from Karlsruhe Institute of Technology | two challenging datasets recorded with the Velodyne HDL64E-S2 scanner in the city of Karlsruhe, Germany | ||
SYNTHetic collection of Imagery and Annotations (SYNTHIA) | consists of a collection of photo-realistic frames rendered from a virtual city and comes with precise pixel-level semantic annotations for 13 classes: misc, sky, building, road, sidewalk, fence, vegetation, pole, car, sign, pedestrian, cyclist, lanemarking | ||
Cityscapes Dataset | Focuses on semantic understanding of urban street scenes. A large-scale dataset containing a diverse set of stereo video sequences recorded in street scenes from 50 different cities, with high-quality pixel-level annotations of 5,000 frames in addition to a larger set of 20,000 weakly annotated frames; it is thus an order of magnitude larger than similar previous attempts
CSSAD Dataset | Several real-world stereo datasets exist for the development and testing of algorithms in the fields of perception and navigation of autonomous vehicles. However, none of them was recorded in a developing country, so they lack the particular characteristics found in those streets and roads, such as abundant potholes, speed bumps and peculiar flows of pedestrians. This stereo dataset was recorded from a moving vehicle and contains high-resolution stereo images complemented with orientation and acceleration data from an IMU, GPS data, and data from the car computer
Daimler Urban Segmentation Dataset | Consists of video sequences recorded in urban traffic: 5,000 rectified stereo image pairs with a resolution of 1024x440. 500 frames (every 10th frame of the sequence) come with pixel-level semantic class annotations for 5 classes: ground, building, vehicle, pedestrian, sky. Dense disparity maps are provided as a reference; these are not manually annotated but computed using semi-global matching (SGM), as in the sketch after this list
Self Racing Cars - XSens/Fairchild Dataset | The files include measurements from the Fairchild FIS1100 6 Degree of Freedom (DoF) IMU, the Fairchild FMT-1030 AHRS, the Xsens MTi-3 AHRS, and the Xsens MTi-G-710 GNSS/INS. The files from the event can all be read in the MT Manager software, available as part of the MT Software Suite
MIT AGE Lab | a small sample of the 1,000+ hours of multi-sensor driving datasets collected at AgeLab | ||
Yet Another Computer Vision Index To Datasets (YACVID) | a list of frequently used computer vision datasets | ||
KUL Belgium Traffic Sign Dataset | A large dataset with 10,000+ traffic sign annotations of thousands of physically distinct traffic signs: 4 video sequences recorded with 8 high-resolution cameras mounted on a van, totalling more than 3 hours, with traffic sign annotations, camera calibrations and poses, plus about 16,000 background images. The material was captured in urban environments in Belgium's Flanders region by GeoAutomation
LISA: Laboratory for Intelligent & Safe Automobiles, UC San Diego Datasets | traffic sign, vehicles detection, traffic lights, trajectory patterns | ||
Multisensory Omni-directional Long-term Place Recognition (MOLP) dataset for autonomous driving | It was recorded using omni-directional stereo cameras during one year in Colorado, USA | ||
Lane Instance Segmentation in Urban Environments | Semi-automated method for labelling lane instances. 24,000 image set available | ||
Foggy Zurich Dataset | Curriculum Model Adaptation with Synthetic and Real Data for Semantic Dense Foggy Scene Understanding. 3.8k High Quality Foggy images in and around Zurich | ||
SullyChen AutoPilot Dataset | 1,268 | 5 months ago | Dataset collected by SullyChen in and around California |
Waymo Training and Validation Data | One terabyte of data with 3D and 2D labels | ||
Intel's dataset for AD conditions in India | A dataset for Autonomous Driving conditions in India (road scene understanding in unstructured environments) which consists of 10k images, finely annotated with 34 classes collected from 182 drive sequences on Indian roads (by Intel & IIIT Hyderabad) | ||
nuScenes Dataset | A large dataset with 1,400,000 images and 390,000 lidar sweeps from Boston and Singapore. Provides manually generated 3D bounding boxes for 23 object classes (a devkit sketch appears after this list)
German Traffic Sign Dataset | A large dataset of German traffic sign recognition data (GTSRB) with more than 40 classes in 50k images, and detection data (GTSDB) with 900 image annotations
Swedish Traffic Sign Dataset | A dataset of traffic signs recorded on 350 km of Swedish roads, consisting of 20k+ images of which 20% are annotated
Argoverse 3D Tracking Dataset | A large dataset with ~1M images and ~1M labeled 3D cuboids from Miami and Pittsburgh. Provides HD maps and imagery from 7 ring cameras, 2 stereo cameras, and LiDAR
Argoverse Motion Forecasting Dataset | A large dataset with trajectories of tracked objects across 324,557 scenes, mined from 1006 hours of driving | ||
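The Udacity release above ships as ROS bag files. Below is a minimal sketch of inspecting one with the ROS 1 rosbag Python API; the bag filename and camera topic are assumptions, so run `rosbag info` on your download to see the actual topics.

```python
# Minimal sketch: inspect a Udacity driving bag with the ROS 1 rosbag API.
# The filename and topic name below are placeholders, not the real ones.
import rosbag

bag = rosbag.Bag("udacity-driving.bag")
# List the topics actually recorded before reading a specific one.
print(list(bag.get_type_and_topic_info().topics.keys()))
for topic, msg, t in bag.read_messages(topics=["/center_camera/image_color"]):
    print(topic, t.to_sec())  # one line per camera frame
bag.close()
```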
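The comma.ai recordings are distributed as HDF5 camera/log file pairs. Here is a minimal h5py sketch for inspecting one pair; the filenames and dataset keys ("X", "steering_angle") are assumptions based on the public research release, so print the keys first to confirm the layout.

```python
# Minimal sketch: inspect one comma.ai camera/log HDF5 pair with h5py.
# Filenames and dataset keys are assumed from the public research release.
import h5py

with h5py.File("camera.h5", "r") as cam, h5py.File("log.h5", "r") as log:
    print(list(cam.keys()), list(log.keys()))  # confirm the actual layout
    frames = cam["X"]                 # assumed: (N, 3, 160, 320) uint8 frames at 20 Hz
    steering = log["steering_angle"]  # assumed: steering on the 100 Hz time base
    print(frames.shape, steering.shape)
```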
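KITTI's raw Velodyne scans are flat little-endian float32 binaries with four values per point (x, y, z, reflectance), so a scan can be loaded with plain NumPy; the file path below is a placeholder.

```python
# Minimal sketch: load one KITTI Velodyne scan (x, y, z, reflectance per point).
import numpy as np

scan = np.fromfile("0000000000.bin", dtype=np.float32).reshape(-1, 4)
xyz, reflectance = scan[:, :3], scan[:, 3]
print(xyz.shape)  # (num_points, 3)
```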
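The Daimler reference disparities were computed with semi-global matching; a comparable (though not identical) map can be produced with OpenCV's StereoSGBM. A sketch under assumed filenames, with parameters that would need tuning for the 1024x440 imagery:

```python
# Minimal sketch: semi-global block matching on a rectified stereo pair.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
# numDisparities must be a multiple of 16; blockSize is the matching window.
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
# compute() returns fixed-point disparities scaled by 16.
disparity = sgbm.compute(left, right).astype("float32") / 16.0
```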
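nuScenes ships with an official devkit (pip install nuscenes-devkit). A minimal sketch of walking the 3D annotations of one keyframe, assuming the small v1.0-mini split and a placeholder dataroot:

```python
# Minimal sketch: list the 3D box annotations of one nuScenes keyframe.
from nuscenes.nuscenes import NuScenes

nusc = NuScenes(version="v1.0-mini", dataroot="/data/nuscenes", verbose=True)
sample = nusc.sample[0]  # first annotated keyframe
for token in sample["anns"]:
    ann = nusc.get("sample_annotation", token)
    print(ann["category_name"], ann["size"])  # class and box size (w, l, h)
```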
Open Source Software
Autoware | 9,145 | 6 days ago | Integrated open-source software for urban autonomous driving |
Comma.ai Openpilot | 49,920 | 5 days ago | An open-source driving agent
Stanford Driving Software | Software Infrastructure for Stanford's Autonomous Vehicles | ||
GTA Robotics SDC Environment | 62 | almost 8 years ago | A development environment ready for the Udacity Self-Driving Car (SDC) Challenges
The OSCC Project | A by-wire control kit for autonomous vehicle development | ||
OpenAI Gym | A toolkit for developing and comparing reinforcement learning algorithms. It supports teaching agents everything from walking to playing games, and tasks such as MountainCar and CarRacing make it a good environment for developing and validating RL algorithms for self-driving cars (a minimal rollout sketch appears after this list)
argoverse-api | 861 | 11 months ago | Development kit for working with the 3D Tracking and Forecasting datasets, and for evaluating 3D tracking, 3D detection, and motion forecasting algorithms
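As a concrete starting point for the Gym entry above, here is a minimal random-policy rollout on the CarRacing task using the classic pre-Gymnasium API (gym <= 0.25, where step() returns a 4-tuple); a learned driving policy would replace the sampled action.

```python
# Minimal sketch: random-policy rollout on CarRacing with the classic Gym API.
import gym

env = gym.make("CarRacing-v0")  # requires Box2D (pip install gym[box2d])
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()          # random steering/gas/brake
    obs, reward, done, info = env.step(action)  # 4-tuple in the old API
    total_reward += reward
env.close()
print("episode return:", total_reward)
```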
Toys
TensorKart | 1,578 | almost 3 years ago | self-driving MarioKart with TensorFlow |
NeuroJS | 4,399 | about 1 year ago | A JavaScript deep learning and reinforcement learning library that includes a sample self-driving car implementation
DonkeyCar | 3,164 | 2 months ago | A minimalist and modular self-driving library for Python, developed for hobbyists and students with a focus on fast experimentation and easy community contributions (a vehicle-loop sketch follows this list)
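DonkeyCar is built around a vehicle loop that runs "parts" at a fixed rate and wires named channels between them. A minimal sketch follows, assuming the part interface from recent DonkeyCar releases; both parts here are hypothetical stand-ins written for illustration, not classes shipped with the library.

```python
# Minimal sketch of DonkeyCar's part-based vehicle loop. The two parts
# below are hypothetical illustrations, not library classes.
import donkeycar as dk

class ConstantThrottle:
    def run(self):
        return 0.2  # emit a fixed throttle command each loop

class TelemetryPrinter:
    def run(self, throttle):
        print("throttle:", throttle)

V = dk.vehicle.Vehicle()
V.add(ConstantThrottle(), outputs=["throttle"])
V.add(TelemetryPrinter(), inputs=["throttle"])
V.start(rate_hz=20, max_loop_count=100)  # run the loop 100 times at 20 Hz
```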
Companies
40+ Corporations Working On Autonomous Vehicles | (As of August 28, 2019) | ||
Media / Podcasts
Artificial Intelligence: AI Podcast | Lex Fridman's podcast on AI and autonomous systems. Example episodes:
Sebastian Thrun: Flying Cars, Autonomous Vehicles, and Education | |||
Elon Musk: Neuralink, AI, Autopilot, and the Pale Blue Dot | |||
George Hotz: Comma.ai, OpenPilot, and Autonomous Vehicles | |||
Jeremy Howard: fast.ai Deep Learning Courses and Research | |||
Autonocast | A podcast about the future of transportation
Lex Fridman (YouTube channel) | 100+ AI and autonomous-driving-related videos, including:
Deep Learning State of the Art (2020) | [11 Jan 2020]
MIT Deep Learning Basics: Introduction and Overview | [12 Jan 2019] | ||
Media / Videos
The Three Pillars of Autonomous Driving
What goes into sensing for autonomous driving?
Amnon Shashua CVPR 2016 keynote: Autonomous Driving, Computer Vision and Machine Learning
Chris Urmson: How a driverless car sees the road
Deep Reinforcement Learning for Driving Policy
NVIDIA at CES 2016 - Self Driving Cars and Deep Learning GPUs
NVIDIA Drive PX2 self-driving car platform visualized
Media / Blogs
Deep Learning and Autonomous Driving | |||
[Medium] Self-Driving Cars | |||
Media / Twitter
comma.ai | |||
[Udacity] David Silver | |||
[Udacity] Dhruv Parthasarathy | |||
[Udacity] Eric Gonzalez | |||
[Udacity] Oliver Cameron | |||
[Udacity] MacCallister Higgins | |||
[Udacity] Sebastian Thrun | |||
[Google] Chris Urmson | |||
Laws
California Regulatory Notice | |||
Michigan Just Passed the Most Permissive Self-Driving Car Laws in the Country | |||
Car accidents involving an SDC in California
Nvidia starts testing its self-driving cars on public roads |