DiscoverScott & Mark Learn To...Scott & Mark Learn To... Use AI and Know AI Limitations
Scott & Mark Learn To... Use AI and Know AI Limitations

Scott & Mark Learn To... Use AI and Know AI Limitations

Updated: 2024-10-30

Description

In this episode of Scott & Mark Learn To, Scott Hanselman and Mark Russinovich explore the evolving role of AI in tech, from leveraging tools like GitHub Copilot to boost coding productivity to the potential pitfalls of over-reliance on AI. They discuss how AI is reshaping both education and professional development, and reflect on the challenges of large language models (LLMs), including hallucinations, indirect prompt injection attacks, and jailbreaks. Mark highlights how models shaped by Reinforcement Learning from Human Feedback (RLHF) can still produce unpredictable results, underscoring the need for transparency, safety, and ethical use in AI-driven systems.


Takeaways:

  • Whether reliance on AI affects one's foundational coding skills and overall efficiency
  • How to balance continuous learning with maintaining expertise in technology fields
  • Why AI models sometimes produce hallucinations, and the importance of understanding how to use these tools effectively


Who are they?     

View Scott Hanselman on LinkedIn  

View Mark Russinovich on LinkedIn   



Listen to other episodes at scottandmarklearn.to  


Watch Scott and Mark Learn on YouTube 


Discover and follow other Microsoft podcasts at microsoft.com/podcasts   


Download the Transcript



Hosted on Acast. See acast.com/privacy for more information.
