Dev and Doc: AI For Healthcare Podcast
#17 How to build a clinically safe Large Language Model - Hippocratic AI, Llama3, BioLlama

Update: 2024-05-09

Description

How do we reach the holy grail of a clinically safe LLM for healthcare? Dev and Doc are back to discuss the news around Meta's Llama model and the potential of healthcare LLMs fine-tuned on top of it, such as BioLlama. We discuss the key steps in building a clinically safe LLM for healthcare and how Hippocratic AI pursued this with its latest model, Polaris.


👨🏻‍⚕️Doc - Dr. Joshua Au Yeung - https://www.linkedin.com/in/dr-joshua-auyeung/
🤖Dev - Zeljko Kraljevic - https://twitter.com/zeljkokr

The podcast 🎙️
🔊Spotify: https://podcasters.spotify.com/pod/show/devanddoc
📙Substack: https://aiforhealthcare.substack.com/

Hey! If you are enjoying our conversations, reach out, share your thoughts and journey with us. Don't forget to subscribe whilst you're here :)

🎞️ Editor- Dragan Kraljević https://www.instagram.com/dragan_kraljevic/
🎨Brand design and art direction - Ana Grigorovici https://www.behance.net/anagrigorovici027d

References

Hippocratic AI LLM - https://arxiv.org/pdf/2403.13313
BioLLM tweet - https://twitter.com/aadityaura/status/1783662626901528803
Foresight Lancet paper - https://www.thelancet.com/journals/landig/article/PIIS2589-7500(24)00025-6/fulltext
Language processing units - https://wow.groq.com/lpu-inference-engine/

Timestamps
00:00 Start
01:10 Intro - Llama 3, a ChatGPT-level model in our hands
06:53 Language processing units to run LLMs
09:42 BioLLM for medical question answering
11:13 Quality and size of datasets, using YouTube transcripts
12:41 Question-and-answer pairs do not reflect the real world - the holy grail of healthcare LLMs
18:43 Dev has beef with Hippocratic AI
20:25 Step 1: Training a clinical foundation model from scratch
22:43 Step 2: Instruction tuning with multi-turn simulated conversations
24:15 Step 3: Training the model to guide tangential conversations
27:42 Focusing on the hospital back office and specialist nurse phone calls
33:02 Evaluating Polaris - clinical safety, bedside manner, medical safety advice
