Quarkus Insights #197: Understanding Jlama in Quarkus

Update: 2025-03-12
Description

Jake Luciani and Mario Fusco join us to explain the benefits of performing LLM inference directly in the same JVM as your Java application. They discuss Jlama's technical aspects, its pros and cons, ongoing work, and planned improvements, and walk through a practical example of how Quarkus, LangChain4j, and Jlama simplify building a pure-Java LLM-infused application.

Quarkus Team