Unaligned with Robert Scoble
#15: Digging into explainable AI

Updated: 2024-04-12

Description

Finally, an AI model that can tell you why it gives you the answers it does!

Angelo Dalli is building a new kind of AI that fixes the problems of current large language models. Existing models generate errors, a.k.a. “hallucinations,” but can’t tell you why.

His AI, built with neurosymbolic techniques, aims to eliminate these errors and, even better, to explain why it makes the decisions it does.

Here he talks with me about the state of the art in current AI and where we’re going.
Sponsored by AI Top Tools: www.aitoptools.com

