
ONNX Runtime | AI Show

Update: 2020-07-07

Description

The ONNX Runtime inference engine can execute ML models across different hardware environments, taking advantage of each platform's neural-network acceleration capabilities. Microsoft and Xilinx worked together to integrate ONNX Runtime with the Vitis AI software libraries so that ONNX models can run on Xilinx U250 FPGAs. We are happy to introduce the preview release of this capability today.
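For context, here is a minimal sketch of what this looks like from the ONNX Runtime Python API. It assumes a build of onnxruntime that includes the Vitis AI execution provider; "model.onnx" is a placeholder path for any exported ONNX model, and operators the FPGA provider cannot handle fall back to the CPU provider.

# Minimal sketch: run an ONNX model with the Vitis AI execution provider,
# falling back to CPU for unsupported operators. "model.onnx" is a placeholder.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["VitisAIExecutionProvider", "CPUExecutionProvider"],
)

# Build a dummy input matching the model's declared input shape,
# substituting 1 for any dynamic (symbolic) dimensions.
input_meta = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in input_meta.shape]
dummy_input = np.random.rand(*shape).astype(np.float32)

# Run inference; outputs is a list of numpy arrays, one per model output.
outputs = session.run(None, {input_meta.name: dummy_input})
print(outputs[0].shape)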

Jump To: 

[06:15] Demo by PeakSpeed of satellite imagery orthorectification

Learn More:

The AI Show's Favorite links:


Featuring Seth Juarez and Anna Soracco