TensorFlow Privacy: Learning with differential privacy for training data


Course Features

Duration: 41 minutes

Delivery Method: Online

Available on: Limited Access

Accessibility: Desktop, Laptop

Language: English

Subtitles: English

Level: Intermediate

Teaching Type: Self Paced

Video Content: 41 minutes

Course Description

When evaluating ML models, it can be difficult to distinguish what a model has learned from its training data from what it has merely memorized. This distinction can be critical in certain ML tasks, for example when models are trained on sensitive data. Recent developments enable differentially private training of ML models, including deep neural networks (DNNs), using a modified stochastic gradient descent that offers strong privacy guarantees for the training data.

These techniques are practical and easy to use, but they come with their own hyperparameters and can be tuned to make learning less sensitive to outlier data, which may reduce utility. Ulfar Erlingsson explains the basics of ML privacy and introduces differential privacy and its value as a gold standard. He then dives into the principles behind ML privacy, including unintended memorization and how it differs from generalization.
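The clip-then-noise recipe behind differentially private SGD can be sketched in plain Python. This is a minimal illustration, not the TensorFlow Privacy API: the function name, parameters, and list-based gradients below are all assumptions chosen for clarity. Each per-example gradient is clipped to a fixed L2 norm (bounding any one example's influence), Gaussian noise scaled by a noise multiplier is added to the sum, and the noisy average is applied as an ordinary SGD update. The `clip_norm` and `noise_multiplier` knobs are the extra hyperparameters the description mentions.

```python
import math
import random

def dp_sgd_step(weights, per_example_grads, lr=0.1,
                clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One differentially private SGD step (illustrative sketch).

    Names and signature are hypothetical, not the TensorFlow Privacy API.
    """
    rng = rng or random.Random(0)
    n = len(per_example_grads)
    dim = len(weights)

    # Clip each per-example gradient to L2 norm <= clip_norm, then sum.
    summed = [0.0] * dim
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for i in range(dim):
            summed[i] += g[i] * scale

    # Add Gaussian noise calibrated to the clipping bound, then average.
    sigma = noise_multiplier * clip_norm
    noisy_avg = [(summed[i] + rng.gauss(0.0, sigma)) / n for i in range(dim)]

    # Ordinary gradient-descent update with the privatized gradient.
    return [w - lr * g for w, g in zip(weights, noisy_avg)]
```

Raising `clip_norm` or lowering `noise_multiplier` makes training track the data more closely (better utility, weaker privacy); the reverse makes the update less sensitive to any single outlier example, which is exactly the utility trade-off noted above.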

Course Overview

International Faculty

Post Course Interactions

Instructor-Moderated Discussions

Skills You Will Gain

What You Will Learn

A basic understanding of stochastic gradient descent

Experience using TensorFlow to train ML models
