Task-specific AI workloads such as machine learning and natural language processing, with models sometimes reaching billions of parameters, demand robust infrastructure that can meet application latency and bandwidth requirements.
HPE and NVIDIA can help you meet this challenge.
The NVIDIA-Certified, GPU-accelerated HPE ProLiant DL380 server can be configured to accelerate AI training and inference while still providing the resources needed for traditional IT workloads.
Access this reference configuration to learn more about flexible, democratized AI infrastructure with virtualization capabilities ideal for resource and performance optimization.