Data Preparation Best Practices for Fine Tuning

Update: 2024-09-11

Description

In this episode of The Prompt Desk podcast, hosts Bradley Arsenault and Justin Macorin dive deep into the world of fine-tuning large language models. They discuss:

  • The evolution of data preparation techniques from traditional NLP to modern LLMs

  • Strategies for creating high-quality datasets for fine-tuning

  • The surprising effectiveness of small, well-curated datasets

  • Best practices for aligning training data with production environments

  • The importance of data quality and its impact on model performance

  • Practical tips for engineers working on LLM fine-tuning projects

Whether you're a seasoned AI practitioner or just getting started with large language models, this episode offers valuable insights into the critical process of data preparation and fine-tuning. Join Brad and Justin as they share their expertise and help you navigate the challenges of building effective AI systems.
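To make the ideas above concrete, here is a minimal, illustrative sketch of the kind of small, well-curated, production-aligned dataset discussed in the episode. The prompt/completion JSONL schema, the file name train.jsonl, and the example tickets are assumptions for illustration only; the episode does not prescribe a specific format, and your fine-tuning framework may expect a different schema.

```python
# Illustrative sketch only: a tiny, hand-curated fine-tuning dataset written
# as JSONL in a prompt/completion style. Adjust the schema to whatever your
# fine-tuning tooling actually expects.
import json

# A small set of carefully reviewed examples, phrased the way requests will
# actually arrive in production (same tone, format, and constraints).
examples = [
    {
        "prompt": "Summarize the following support ticket in one sentence:\n"
                  "Customer reports the mobile app crashes when uploading photos.",
        "completion": "The mobile app crashes during photo uploads.",
    },
    {
        "prompt": "Summarize the following support ticket in one sentence:\n"
                  "User cannot reset their password; the reset email never arrives.",
        "completion": "Password-reset emails are not being delivered to the user.",
    },
]

# Write one JSON object per line (JSONL), a common input format for
# supervised fine-tuning pipelines.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```

Even a few dozen examples like these, curated for quality and matched to the production distribution, can outperform much larger but noisier datasets, which is the core point the hosts make about data preparation.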
---
Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security.
Check out PromptDesk.ai for an open-source prompt management tool.
Check out Brad’s AI Consultancy at bradleyarsenault.me
Add Justin Macorin and Bradley Arsenault on LinkedIn.


Hosted by Ausha. See ausha.co/privacy-policy for more information.

