Watch the course summary
Who is this course designed for?
Data Scientists and Machine Learning Engineers who want to learn the basics of Deep Reinforcement Learning and get familiar with a production-grade Deep RL framework within a short time frame
Managers who want to know how Reinforcement Learning is applied in the industry and have an overview of the standard Deep RL tools
Students in a university-level Machine Learning curriculum who want a hands-on, practical introduction to Deep Reinforcement Learning.
What are the prerequisites?
Familiarity with the basics of Deep Learning
Intermediate knowledge of Python programming
What will you learn?
In Chapter 1, you will be introduced to Reinforcement Learning and understand its key concepts by analyzing a real-world problem and a case study from the industry.
By the end of this chapter, you will be able to analyze any problem and decide if Reinforcement Learning could be a promising approach to solve it.
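To give a small taste of the key concepts covered in Chapter 1: the core idea of Reinforcement Learning is that an agent learns to maximize the discounted sum of rewards it collects. The function and numbers below are purely illustrative (they are not code from the course):

```python
def discounted_return(rewards, gamma=0.99):
    """Sum of gamma**t * r_t over an episode: the quantity an RL agent
    learns to maximize. `rewards` is the list of per-step rewards."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# A made-up 3-step episode with discount factor 0.5:
print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))  # 1 + 0.5 + 0.25 = 1.75
```

The discount factor `gamma` controls how much the agent values future rewards relative to immediate ones, a trade-off the course discusses in depth.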
In Chapter 2, you will learn the steps and tools involved in solving learning tasks using Reinforcement Learning.
In Chapter 3, we will focus on Reinforcement Learning environments.
Deep RL agents are usually trained in simulations, and OpenAI Gym's simulation API is the industry standard in RL.
You will be introduced to OpenAI Gym and its API. By the end of this chapter, you will be able to control agents in Gym environments.
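As a preview of what "controlling agents in Gym environments" looks like, here is a minimal sketch of the classic Gym interaction loop. To keep it self-contained, it uses a toy stand-in environment rather than a real Gym installation: the `ToyEnv` class is invented for illustration, but its `reset`/`step` signatures mirror the classic Gym API taught in the course.

```python
import random

class ToyEnv:
    """Illustrative stand-in for a Gym environment (not part of Gym itself).
    Real code would call gym.make("BipedalWalker-v3") instead; this stub
    mirrors the classic API:
        reset() -> observation
        step(action) -> (observation, reward, done, info)
    """

    def __init__(self, horizon=5):
        self.horizon = horizon  # episode length
        self.t = 0

    def reset(self):
        self.t = 0
        return 0.0  # initial observation

    def step(self, action):
        self.t += 1
        obs = float(self.t)
        reward = 1.0                    # made-up constant reward per step
        done = self.t >= self.horizon   # terminal state ends the episode
        return obs, reward, done, {}

# The standard agent-environment interaction loop:
env = ToyEnv()
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = random.choice([0, 1])             # a random policy, for now
    obs, reward, done, info = env.step(action)
    total_reward += reward

print(total_reward)  # 5.0: one reward per step over a 5-step episode
```

With a real environment, only the `ToyEnv()` line changes; the loop itself is exactly the pattern the chapter teaches.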
In Chapter 4, we will focus on Deep Reinforcement Learning frameworks. Frameworks provide high-quality implementations of Deep RL algorithms so that you don't have to develop them from scratch. After this chapter, you will be able to:
- Explain how Deep RL algorithms work at a high level
- Use Ray RLlib's implementation of Deep RL algorithms to solve OpenAI Gym environments
- Choose algorithms (e.g., DQN, PPO, TD3, SAC) appropriately based on the nature of the problem
- Visualize training and evaluation metrics using TensorBoard
- Save trained agents and use them later (and even capture videos of live agents)
By the end of this chapter, you will be able to solve most benchmark RL tasks in OpenAI Gym.
*Some lessons can be previewed without enrolling
- What is Reinforcement Learning? (5:46)
- Quiz: RL Objective
- Visualizing Reinforcement Learning Tasks (5:47)
- Reinforcement Learning vs. Supervised/Self-Supervised Learning (8:49)
- Quiz: Problem Attributes That Justify RL Application
- Reinforcement Learning: Commercial and Intellectual Value (7:47)
- Let's Connect
- Reinforcement Learning Simulation Packages in Python (5:58)
- Installing OpenAI Gym (gym[all]) on Linux, Windows and Mac (10:32)
- OpenAI Gym: How to Start an Environment and Visualize it (6:28)
- Coding Exercise: Set up the BipedalWalker-v3 environment
- OpenAI Gym: How to Observe the Environment (6:56)
- Quiz: Observation Elements in BipedalWalker-v3
- Coding Exercise: Interpret the Observation Space
- OpenAI Gym: How to Take Actions (7:30)
- Quiz: Action Elements in BipedalWalker-v3
- Coding Exercise: Taking Actions in BipedalWalker-v3
- OpenAI Gym: Rewards and Goals (7:00)
- Quiz: Real World Goals
- Coding Exercise: Reward for Falling Down in BipedalWalker-v3
- OpenAI Gym: Terminal States and Episodes (8:46)
- Quiz: Terminal State in Video Games
- Coding Exercise: Calculate Expected Cumulative Rewards per Episode
- How Reinforcement Learning Algorithms Work: A High Level Overview (9:32)
- Quiz: Deep Neural Nets in RL
- Which Reinforcement Learning Framework is the Best? (7:05)
- How to Install Ray RLlib (2:16)
- Ray RLlib: How to Use Deep RL Algorithms to Solve RL Problems (10:44)
- Coding Exercise: Teach a Robot How to Walk
- Ray RLlib: How to Visualize Results Using TensorBoard (8:12)
- Coding Exercise: Visualize Results from the BipedalWalker-v3 PPO Experiment
- Ray RLlib: How to Save a Trained Agent for Later Use (3:15)
- Coding Exercise: Save the Trained Robot
- Ray RLlib: How to Use and Record a Saved Agent (7:29)
- Coding Exercise: Create a Video of the Walking Robot
- How to Choose an Appropriate Deep RL Algorithm for Your Problem (6:15)
- Quiz: Algorithm Selection in the Inventory Management Problem
- How Was Your Experience?
- Where to Go from Here? (4:38)
Bite-sized video lessons with no fluff (on average 6 minutes long and rarely over 10).
The whole course can be completed in 4 hours (including exercises).
All videos come with high-quality English subtitles.
Video lessons and demonstrations are followed by coding challenges whenever possible so that you can put your knowledge to the test.
The exercises are part of an overarching project in which we teach a robot how to walk. We will record a video of this agent at the end of the course, making it easy to share your work with others.
Certificate of Completion
You will get a downloadable course completion certificate when you finish the course.
Reviews from our learners
"This course broke down complex RL concepts into small pieces that I could easily understand"
Martin Musiol - Managing Data Scientist, IBM
"Brilliant introduction to RL concepts and how they map to RLlib."
Jules S. Damji - Lead Developer Advocate, Anyscale
"If you wanna learn DRL in a short period of time, this is the course for you"
Mobin Nik khesal - Machine Learning Engineer, DataArt
Meet the instructor
- Dibya Chakravorty is a Senior Python Engineer working with one of the biggest automotive companies in Germany.
- He has been programming in Python for 15 years. He co-leads the Python community in Munich, Germany.
- He is also a DataCamp instructor and has taught 20,000+ students in the past.
"Dibya is one of the most fluent Python instructors in the community"
Anton Caceres - Python Software Foundation Fellow
"Dibya's technical expertise is on par with his pedagogy"
Hadrien Lacroix - Data Science Curriculum Manager, DataCamp
30-day money-back guarantee
Your purchase is protected by a 30-day money-back guarantee. If the course doesn't meet your needs for any reason, let me know within 30 days of the purchase, and you will get a full refund. No questions asked.
If you have any other questions prior to enrollment, please drop me a message.