The Machine: A computer science education podcast

Building a Local Large Language Model (LLM)

Update: 2024-02-08

Description

Another #ComputingWeek talk turned into a podcast! Two Red Hat software engineers, both recent graduates of SETU, returned to discuss what is involved in running your own LLM on a local machine: how models and datasets are built, and how they are reduced (quantised) so that they can run on a laptop rather than an array of servers. Mark Campbell and Dimitri Saradkis provided excellent insight into the technical issues surrounding this topic before exploring some of the ethical and moral questions with host Rob O'Connor at the end.
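
As a concrete illustration of the kind of setup discussed in the episode, here is a minimal sketch of loading a quantised model and running it entirely on a laptop. It uses the llama-cpp-python library as one common option (the episode's specific tools are linked below, not here), and the model path and prompt are placeholders, not anything referenced by the guests.

# Minimal sketch: run a quantised LLM locally with llama-cpp-python.
# Assumes a GGUF-format quantised model has already been downloaded;
# the file path below is a placeholder.
from llama_cpp import Llama

# Load a 4-bit quantised model -- small enough to fit in laptop RAM.
llm = Llama(
    model_path="./models/example-7b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,    # context window size
    n_threads=8,   # CPU threads used for inference
)

# Run a single completion with no network calls -- everything stays local.
output = llm(
    "Q: Why quantise a large language model? A:",
    max_tokens=128,
    stop=["Q:"],
)
print(output["choices"][0]["text"])

The point of the quantised (e.g. 4-bit) weights is exactly the trade-off discussed in the episode: a modest loss in precision in exchange for a model that fits in ordinary laptop memory instead of needing an array of servers.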


You can connect with all the people on this podcast on LinkedIn at:



Here are links to some of the tools referenced in the podcast:




Host: Rob O'Connor