This is the second installment in the Dataflow course series, where we go deeper into developing pipelines with the Beam SDK. We will start by reviewing the core Apache Beam concepts. Next, we will discuss streaming data processing using windows, watermarks, and triggers. We will then cover the options for sources and sinks in your pipelines, followed by schemas, which you can use to express structured data, and the State and Timer APIs, which let you transform that data statefully. After that, we will look at best practices for maximizing the performance of your pipelines. In the final part of the course, we will cover Beam SQL and DataFrames, which allow you to express your business logic in Beam, and you will learn how to develop pipelines iteratively using Beam notebooks.