This resource is no longer available


As deep learning becomes increasingly central to AI, accelerating neural network models on GPUs offers substantial benefits in both speed and cost.

Tune into this webinar to hear two experts from NVIDIA and Microsoft discuss how to accelerate model inferencing from cloud to the edge, covering:

  • An overview of ONNX and ONNX Runtime
  • How to achieve faster and smaller inference with ONNX Runtime
  • How to deploy ONNX models at scale with Azure ML services
Vendor: Microsoft
Premiered: Dec 8, 2020
Format: Multimedia
Type: Webcast
