The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Inverse Reinforcement Learning Without RL with Gokul Swamy - #643
Update: 2023-08-21

Description

Today we’re joined by Gokul Swamy, a Ph.D. student at the Robotics Institute at Carnegie Mellon University. In the final conversation of our ICML 2023 series, we sat down with Gokul to discuss his accepted papers at the event, leading off with “Inverse Reinforcement Learning without Reinforcement Learning.” In this paper, Gokul explores the challenges and benefits of inverse reinforcement learning and the advantages it holds for various applications. Next, we explore the paper “Complementing a Policy with a Different Observation Space,” which applies causal inference techniques to accurately estimate sampling balance and make decisions based on limited observed features. Finally, we touch on “Learning Shared Safety Constraints from Multi-task Demonstrations,” which centers on learning safety constraints from demonstrations using an inverse reinforcement learning approach.


The complete show notes for this episode can be found at twimlai.com/go/643.

Sam Charrington