This resource is no longer available


In this exclusive webinar, Emma Ning of Microsoft and Peter Pyun of NVIDIA break down how to accelerate model inference from the cloud to the edge.

Here’s the webinar agenda:

  1. An overview of ONNX and ONNX Runtime
  2. How to achieve faster, smaller model inference with ONNX Runtime
  3. What to know about ONNX model deployment on cloud and edge with Azure ML
  4. A quick demonstration


Vendor: Microsoft and NVIDIA
Posted: Jul 14, 2021
Published: Dec 7, 2020
Format: HTML
Type: Resource Center
