ONNX Runtime: A quick overview and demo



In this exclusive webinar, Emma Ning of Microsoft and Peter Pyun of NVIDIA break down how to accelerate model inference from the cloud to the edge.

Here’s the webinar agenda:

  1. An overview of ONNX and ONNX Runtime
  2. How to achieve faster and smaller model inference with ONNX Runtime (a short code sketch follows this agenda)
  3. What to know about ONNX model deployment to cloud and edge with Azure ML
  4. A quick demonstration
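
To give a sense of what the second agenda item involves, here is a minimal sketch of loading a model with ONNX Runtime's Python API and running inference. The model file name ("model.onnx"), the CPU execution provider choice, and the input shape are illustrative placeholders, not details taken from the webinar.

    # Minimal ONNX Runtime inference sketch. "model.onnx" and the
    # (1, 3, 224, 224) input shape are assumed placeholders.
    import numpy as np
    import onnxruntime as ort

    # Create an inference session; ONNX Runtime applies graph
    # optimizations and selects kernels when the model is loaded.
    session = ort.InferenceSession("model.onnx",
                                   providers=["CPUExecutionProvider"])

    # Look up the model's declared input name rather than hard-coding it.
    input_name = session.get_inputs()[0].name

    # Dummy batch of data matching the assumed input shape.
    dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)

    # Run the model; passing None requests all declared outputs.
    outputs = session.run(None, {input_name: dummy_input})
    print(outputs[0].shape)

On the "smaller" side, ONNX Runtime also ships a quantization toolkit (onnxruntime.quantization) that can convert model weights to INT8, which is one route to the size reductions the agenda mentions; the webinar covers the techniques in more depth.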

Tune in here.

Vendor: Microsoft and NVIDIA
Posted: Apr 1, 2021
Published: Dec 7, 2020
Format: HTML
Type: Resource Center
