As workflows shift away from the CPU in GPU-centric systems, the data path from storage to GPUs increasingly becomes the bottleneck.
NVIDIA and its partners are relieving that bottleneck with a new technology called GPUDirect Storage, which includes a new set of interfaces. With GPUDirect Storage enabled, the direct memory access (DMA) engine in a NIC or local storage device can move data directly to and from GPU memory, rather than going through a bounce buffer in CPU system memory.
This can improve bandwidth, reduce latency, cut CPU-side memory-management overhead, and reduce interference with other work running on the CPU.
In this presentation, discover the benefits of GPUDirect Storage through recent results from demos and proof points in AI, data analytics, and visualization.