In this exclusive webinar, Emma Ning of Microsoft and Peter Pyun of NVIDIA break down how to accelerate model inference from the cloud to the edge.
Here’s the webinar agenda:
- An overview of ONNX and ONNX Runtime
- How to achieve faster, smaller model inference with ONNX Runtime
- What to know about ONNX model deployment on cloud and edge in Azure ML
- A quick demonstration
Tune in here.