How He Built The Best 7B Params LLM with Maxime Labonne #43
Description
Our guest today is Maxime Labonne, GenAI expert, book author and developer of NeuralBeagle14-7B, one of the best-performing 7B-parameter models on the Open LLM Leaderboard.
In our conversation, we dive deep into the world of GenAI. We start by explaining how to get into the field and which resources you need to get started. Maxime then walks through the four steps used to build LLMs: pre-training, supervised fine-tuning, learning from human feedback and model merging. Along the way, we also discuss RAG vs fine-tuning, QLoRA & LoRA, DPO vs RLHF and how to deploy LLMs in production.
If you enjoyed the episode, please leave a 5-star review and subscribe to the AI Stories YouTube channel.
Link to Train in Data courses (use the code AISTORIES to get a 10% discount): https://www.trainindata.com/courses?affcode=1218302_5n7kraba
Check out Maxime's LLM course: https://github.com/mlabonne/llm-course
Follow Maxime on LinkedIn: https://www.linkedin.com/in/maxime-labonne/
Follow Neil on LinkedIn: https://www.linkedin.com/in/leiserneil/
---
(00:00) - Intro
(02:37) - From Cybersecurity to AI
(06:05) - GenAI at Airbus
(13:29) - What does Maxime use ChatGPT for?
(15:31) - Getting into GenAI and learning resources
(22:23) - Steps to build your own LLM
(26:44) - Pre-training
(29:16) - Supervised fine-tuning, QLoRA & LoRA
(34:45) - RAG vs fine-tuning
(37:53) - DPO vs RLHF
(41:01) - Merging models
(45:05) - Deploying LLMs
(46:52) - Stories and career advice