Solving AI's black box problem: engineering know-ability into machine learning

Update: 2020-02-21

Description

Ever been shadow-banned? Ever wondered if an algorithm is changing your perception of reality? We talk about AI's black box problem and much more in this episode of The AI Show.


What happens when you don’t know why a smart system made a specific decision?  


Today’s guest chairs the Ethics Certification Program for AI systems for the IEEE Standards Association. She’s also vice-chair of the Transparency of Autonomous Systems working group. She’s on the AI faculty at Singularity University … she’s an author … and she’s been a judge for the X-Prize.


Her name is Nell Watson.  


We’ve probably all heard the stories about this ... in one, an image recognition system distinguished between dogs and wolves because all the wolf photos it was trained on also had SNOW in the background. Clearly, that’s a system that will fail in other circumstances …   


But unless you know why an AI system is doing what it’s doing, it’s pretty hard to fix. So today we’re talking about transparency in AI.  


How important is it to know why a smart system made a decision … and, can we engineer know-ability into all our AI systems?
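The wolf-versus-dog story is a classic example of a model latching onto a spurious feature. As a rough, hypothetical sketch (not anything from the episode), here is one simple way such a dependency can be surfaced: train a model on a toy dataset where a "background" feature happens to correlate with the label, then use permutation importance from scikit-learn to see which feature the model actually relies on. The dataset, feature names, and numbers below are invented for illustration.

```python
# Toy illustration of the "snow vs. wolf" failure mode: a classifier that
# leans on a spurious feature looks accurate on held-out data drawn the same
# way, but is fragile when the correlation breaks.
# Requires numpy and scikit-learn; all dataset details are made up.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# "animal_shape" is the genuine signal; "snowy_background" is a spurious
# feature that happens to co-occur with the positive class in this sample.
label = rng.integers(0, 2, size=n)                      # 1 = wolf, 0 = dog
animal_shape = label + rng.normal(0, 1.0, size=n)       # weak real signal
snowy_background = label + rng.normal(0, 0.1, size=n)   # strong spurious signal
X = np.column_stack([animal_shape, snowy_background])

X_train, X_test, y_train, y_test = train_test_split(X, label, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Permutation importance: shuffle one feature at a time and measure how much
# accuracy drops. A large drop for "snowy_background" shows the model is
# keying on the backdrop, not the animal -- the hidden failure described above.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in zip(["animal_shape", "snowy_background"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

This kind of probe doesn't explain individual predictions the way saliency or LIME-style methods do, but it is a cheap first check that a model's apparent accuracy isn't riding on the wrong feature.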



John Koetsier