Engineering
Hands On Training

Visual Perception for Self-Driving Cars



Course Features

Duration: 31 hours

Delivery Method: Online

Available on: Limited Access

Accessibility: Mobile, Desktop, Laptop

Language: English

Subtitles: English

Level: Advanced

Teaching Type: Self Paced

Video Content: 31 hours

Course Description

Welcome to Visual Perception for Self-Driving Cars, the third course in the University of Toronto's Self-Driving Cars Specialization.

This course introduces the main perception tasks in autonomous driving, static and dynamic object detection, and surveys common computer vision methods for robotic perception. By the end of this course, you will be able to work with the pinhole camera model, perform intrinsic and extrinsic camera calibration, detect, describe, and match image features, and design your own convolutional neural networks. You'll apply these methods to visual odometry, object detection and tracking, and semantic segmentation for drivable surface estimation. These techniques represent the main building blocks of the perception system for self-driving cars.

For the final project in this course, you will develop algorithms that identify bounding boxes for objects in the scene and define the boundaries of the drivable surface. You'll work with synthetic and real image data and evaluate your performance on a realistic dataset.

This is an advanced course, intended for learners with a background in computer vision and deep learning. To succeed in this course, you should have programming experience in Python 3 and familiarity with linear algebra (matrices, vectors, matrix multiplication, rank, eigenvalues, eigenvectors, and inverses).
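To give a flavor of the kind of evaluation the final project involves, a standard metric for scoring a predicted bounding box against a ground-truth box is intersection over union (IoU). The sketch below is illustrative only (the function name and the (x1, y1, x2, y2) box format are assumptions, not taken from the course):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two partially overlapping 10x10 boxes share a 5x5 corner:
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```

A detector's prediction is typically counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.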

Course Overview

Hands-On Training, Instructor-Moderated Discussions

Case Studies, Capstone Projects


What You Will Learn

Work with the pinhole camera model, and perform intrinsic and extrinsic camera calibration

Detect, describe, and match image features, and design your own convolutional neural networks

Apply these methods to visual odometry, object detection and tracking

Apply semantic segmentation for drivable surface estimation
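For instance, the pinhole camera model listed above maps a 3D point in camera coordinates to pixel coordinates through the intrinsic matrix K. A minimal NumPy sketch follows; the focal lengths and principal point are made-up example values, not parameters from the course:

```python
import numpy as np

# Intrinsic matrix K: fx, fy are focal lengths in pixels,
# (cx, cy) is the principal point (example values only).
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def project(point_cam):
    """Project a 3D point in camera coordinates to pixel coordinates."""
    p = K @ point_cam      # homogeneous image coordinates
    return p[:2] / p[2]    # perspective divide by depth

# A point 2 m ahead of the camera, 0.5 m right, 0.1 m up:
print(project(np.array([0.5, -0.1, 2.0])))  # [520. 200.]
```

Intrinsic calibration estimates K from images of a known pattern; extrinsic calibration recovers the camera's rotation and translation relative to the vehicle, which together let you relate pixels to the 3D scene.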

Course Instructors


Steven Waslander

Aerospace Studies

Prof. Steven Waslander is a leading authority on autonomous aerial and ground vehicles, including multirotor drones and autonomous driving, Simultaneous Localization and Mapping (SLAM) and multi-vehi...

Course Reviews

Average Rating: 5.0, based on 4 reviews
