PBS Tidbit 10 — Run LLMs Locally with Ollama with Steve Mattan

Update: 2024-12-19

Description

In a very unusual Tidbit episode of Programming By Stealth, Allison interviews NosillaCastaway and Programming By Stealth student Steve Mattan about how he's running Large Language Models locally on his Mac. He pulls this off using a series of open-source tools, starting with Ollama to run the models from the command line, and then Enchanted to give him a nice GUI. He explains how he's integrated the local LLM into VSCode for his coding, and how he uses Keyboard Maestro to launch all of this at once.
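
If you'd like to poke at the command-line side of a setup like Steve's, the sketch below shows one way to talk to a locally running Ollama server from Python over its built-in HTTP API on port 11434. This is not from the episode: the model name and prompt are just placeholders, and it assumes you've already installed Ollama, have it running, and have pulled a model (for example with "ollama pull llama3.2").

    # Minimal sketch (not Steve's actual code) of querying a local Ollama server.
    # Assumes Ollama is installed and running, and a model has already been pulled.
    # The model name below is an example -- substitute whichever model you pulled.

    import json
    import urllib.request

    OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

    def ask(prompt, model="llama3.2"):
        """Send a prompt to the local model and return its complete response text."""
        payload = json.dumps({
            "model": model,
            "prompt": prompt,
            "stream": False,  # request one complete JSON reply instead of a stream
        }).encode("utf-8")
        request = urllib.request.Request(
            OLLAMA_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read())["response"]

    if __name__ == "__main__":
        print(ask("In one sentence, what is Programming By Stealth?"))

GUI clients like Enchanted and most editor integrations generally talk to this same local endpoint, which is a big part of why the whole setup keeps working offline once the model is on your Mac.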



You can find the outline and links to how Steve's doing this at pbs.bartificer.net.



Read an unedited, auto-generated transcript with chapter marks: PBS_2024_12_07



Join our Slack at podfeet.com/slack and look for the #pbs channel, and check out our pbs-student GitHub Organization. It's by invitation only, but all you have to do is ask Allison!




