Dev and Doc: AI For Healthcare Podcast
#14 Aligning AI models for healthcare | Understanding Reinforcement Learning from Human Feedback (RLHF)

Update: 2024-02-14

Description

How do we align AI models for healthcare? 👨‍⚕️ And, importantly, how does an LLM handle the moral codes and ethics that we practise every day, or ethical scenarios like the trolley problem? This is a fascinating topic and one we spend a lot of time thinking about.

In this episode of Dev and Doc, Zeljko Kraljevic and I cover the latest topics around reinforcement learning: the benefits, and where it can go wrong. We also discuss different RL methods, including Reinforcement Learning from Human Feedback (RLHF), the algorithm used to train ChatGPT.
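
For readers new to the method: the "HF" in RLHF refers to a preference (reward) model trained on human comparisons of model outputs, which then provides the reward signal for RL fine-tuning. Below is a minimal, illustrative sketch of the pairwise loss commonly used for that step; it is not code from the episode, and the function name, tensors, and values are made-up assumptions.

```python
# Sketch of the pairwise (Bradley-Terry) loss commonly used to train an RLHF
# reward model: the model is pushed to score the human-preferred response
# above the rejected one.
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # Loss is small when reward_chosen > reward_rejected, large otherwise.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Illustrative reward-model scores for two (chosen, rejected) response pairs.
chosen = torch.tensor([1.2, 0.7])
rejected = torch.tensor([0.3, 0.9])
print(preference_loss(chosen, rejected))  # second pair is mis-ranked, so it dominates the loss
```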

Dev and Doc is a podcast where developers and doctors join forces to deep-dive into AI in healthcare. Together, we can build models that matter.

👨🏻‍⚕️Doc - Dr. Joshua Au Yeung - https://www.linkedin.com/in/dr-joshua...
🤖Dev - Zeljko Kraljevic - https://twitter.com/zeljkokr

The podcast 🎙️
🔊Spotify: https://open.spotify.com/show/3QO5Lr3...
📙Substack: https://aiforhealthcare.substack.com/

Hey! If you are enjoying our conversations, reach out, share your thoughts and journey with us. Don't forget to subscribe whilst you're here :)

🎞️ Editor -
Dragan Kraljević https://www.instagram.com/dragan_kral...

🎨Brand design and art direction -
Ana Grigorovici
https://www.behance.net/anagrigorovic...

00:00 Highlights
01:27 Start
04:38 Aligning the ethics of AI models
07:04 Doctors' ethical choices daily
08:00 RLHF and AI training methods
16:29 Reinforcement learning
19:35 Preference models - rewarding models correctly can make or break success
27:05 Exploiting the reward function, model degradation (and how to fix it - see the sketch after this list)
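
A standard guard against the reward exploitation discussed at 27:05 (used, for example, in the OpenAI RLHF paper under References below) is to subtract a KL penalty against the original, frozen language model during RL fine-tuning, so the policy cannot drift into degenerate outputs just to please the reward model. The sketch below is illustrative only; the function and numbers are assumptions, not code from the episode.

```python
# Sketch of the KL-shaped reward used in PPO-style RLHF fine-tuning:
# subtracting beta * (log p_policy - log p_reference) penalises the policy
# for moving far from the original language model, the usual defence
# against exploiting a flawed reward model.
import torch

def shaped_reward(rm_score: torch.Tensor,
                  logprob_policy: torch.Tensor,
                  logprob_reference: torch.Tensor,
                  beta: float = 0.1) -> torch.Tensor:
    kl_estimate = logprob_policy - logprob_reference  # per-sample KL estimate
    return rm_score - beta * kl_estimate

# Illustrative numbers for one sampled response.
score = torch.tensor([0.80])        # reward-model score
lp_policy = torch.tensor([-2.10])   # log-prob under the fine-tuned policy
lp_ref = torch.tensor([-2.60])      # log-prob under the frozen reference model
print(shaped_reward(score, lp_policy, lp_ref))  # tensor([0.7500]): penalised for drifting
```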

References
AI intro paper - https://pn.bmj.com/content/23/6/476
OpenAI RLHF paper - https://arxiv.org/abs/1909.08593
War and peace of LLMs! - https://arxiv.org/abs/2311.17227
