Scott & Mark Learn To… Induced Hallucinations

Updated: 2025-05-28

Description

In this episode of Scott and Mark Learn To, Scott Hanselman and Mark Russinovich dive into the chaotic world of large language models, hallucinations, and grounded AI. Through hilarious personal stories, they explore the differences between jailbreaks, induced hallucinations, and factual grounding in AI systems. With live prompts and screen shares, they test the limits of AI's reasoning and reflect on the evolving challenges of trust, creativity, and accuracy in today's tools.


Takeaways:    

  • AI is getting better, but we still need to be careful and double-check our work
  • AI sometimes gives wrong answers confidently 
  • Jailbreaks break the rules on purpose, while hallucinations are just AI making stuff up 


Who are they?     

View Scott Hanselman on LinkedIn  

View Mark Russinovich on LinkedIn   

 

Watch Scott and Mark Learn on YouTube 


Listen to other episodes at scottandmarklearn.to  


Discover and follow other Microsoft podcasts at microsoft.com/podcasts   


Hosted on Acast. See acast.com/privacy for more information.
