
NetApp Collaborates with NVIDIA to Simplify AI for Enterprises

Stan Skelton

The concept of artificial intelligence (AI) has been around for centuries. But it wasn't until recently that AI stepped out of the realm of science fiction and became a very real, critical part of modern life. From helping doctors make faster, more accurate diagnoses, to preventing identity fraud in real time, to making sure that the world's demand for oil is met now and in the future, AI is a crucial component of our daily lives.



Although AI enhances consumers' lives and helps organizations in every industry innovate and grow their businesses, it is a huge disrupter for IT. To support the business, IT departments are scrambling to deploy high-performance computing (HPC) solutions that can meet the extreme demands of AI workloads. As the race to win with AI intensifies, the need for an easy-to-deploy, easy-to-manage solution becomes increasingly urgent.

Changing the Game with a Turnkey AI Supercomputing Infrastructure

In the race to AI, you can sharpen your competitive advantage by turbocharging your NVIDIA DGX SuperPOD with award-winning, high-performance NetApp® EF600 all-flash NVMe storage.



The NVIDIA DGX SuperPOD makes supercomputing infrastructure easily accessible for organizations and delivers the extreme computational power needed to solve the world’s most complex AI problems. This turnkey solution takes the complexity and guesswork out of infrastructure design and delivers a complete, validated solution (including best-in-class compute, networking, storage, and software) to help you deploy at scale today.



The NetApp EF600 delivers 2 million sustained IOPS, response times under 100 microseconds, 44GBps of throughput, and 99.9999% availability to keep data flowing continuously to an AI application. EF600 systems also scale massively, so you can seamlessly accommodate data streaming in from the Internet of Things as well as data generated by machine learning and deep learning training. This level of performance is well suited to performance-sensitive workloads such as Oracle databases and to real-time analytics running on top of high-performance parallel file systems such as BeeGFS and Spectrum Scale.
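
To put figures like 44GBps in perspective, it can help to run a quick single-client read test against the file system that feeds your training jobs. The Python sketch below is illustrative only, not NetApp tooling: the BeeGFS mount path is a hypothetical placeholder, and a single client reading one file will not approach the array's aggregate throughput.

```python
# Minimal sketch (not NetApp code): measure sustained sequential-read
# throughput from a parallel file system mount, such as a BeeGFS mount
# backed by EF600 arrays. The mount path below is a hypothetical placeholder.
import time

BLOCK_SIZE = 8 * 1024 * 1024  # read in 8 MiB chunks
DATA_FILE = "/mnt/beegfs/training_data/sample.bin"  # hypothetical path


def measure_read_throughput(path: str, block_size: int = BLOCK_SIZE) -> float:
    """Return single-stream sequential read throughput in GB/s."""
    total_bytes = 0
    start = time.perf_counter()
    # buffering=0 disables Python-level buffering; the OS page cache still applies.
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total_bytes += len(chunk)
    elapsed = time.perf_counter() - start
    return total_bytes / elapsed / 1e9


if __name__ == "__main__":
    print(f"Sequential read throughput: {measure_read_throughput(DATA_FILE):.2f} GB/s")
```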



With industry-leading density, NetApp EF600 storage helps reduce your power, cooling, and support costs to significantly lower your TCO. As the only end-to-end NVMe system to support 100Gb NVMe over InfiniBand, 100Gb NVMe over RoCE, and 32Gb NVMe over FC, the EF600 also helps future-proof your DGX SuperPOD.
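
For readers wondering what attaching a host over one of these fabrics involves, the sketch below wraps the standard nvme-cli discover and connect commands for an NVMe over RoCE target. It is a rough illustration under assumed values: the target address, service ID, and subsystem NQN are placeholders, so consult the EF600 and DGX SuperPOD documentation for the supported host-side procedure.

```python
# Minimal sketch (not the official procedure): discover and connect a host
# to an NVMe over RoCE target using the standard nvme-cli tool.
# The target address, service ID, and subsystem NQN are hypothetical placeholders.
import subprocess

TRADDR = "192.168.100.10"                    # hypothetical EF600 host-port IP
TRSVCID = "4420"                             # default NVMe-oF service ID
SUBSYS_NQN = "nqn.example:ef600-subsystem"   # hypothetical subsystem NQN


def run(cmd: list[str]) -> str:
    """Run a command and return its stdout, raising if it fails."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout


# List subsystems exported over RDMA (RoCE) at the target address.
print(run(["nvme", "discover", "-t", "rdma", "-a", TRADDR, "-s", TRSVCID]))

# Connect the host to the subsystem; its namespaces then appear as
# /dev/nvmeXnY block devices on the host.
print(run(["nvme", "connect", "-t", "rdma", "-n", SUBSYS_NQN,
           "-a", TRADDR, "-s", TRSVCID]))
```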



“NVIDIA DGX SuperPOD with NetApp storage delivers a systemized approach for enterprises to build leadership-class AI infrastructure, so they can accelerate time-to-insight from their data,” said Charlie Boyle, vice president and general manager of DGX Systems at NVIDIA.

Bottom Line

NetApp and NVIDIA are changing the game for AI with the DGX SuperPOD supported by EF600 storage. The extreme speed and massive infrastructure scale of the DGX SuperPOD let you train models that were previously untrainable.



Learn more about how the DGX SuperPOD can help you make the impossible possible.

Stan Skelton

Stan Skelton is Chief Architect and Senior Director of Business Development for the NetApp E-Series product line. With nearly four decades of experience in the industry, Stan has held a wide range of roles in engineering, product management, advanced development, architecture, and business development at NCR, AT&T, Symbios, LSI, Engenio, and NetApp. A true visionary, Stan is continually looking beyond the horizon to the market's future. When he is not looking into the next major technology innovation, he can most likely be found traveling with his wife, riding a bicycle, or both. An avid cyclist, Stan is passionate about everything from building and riding bicycles to immersing himself in the culture and studying the industry as a great example of innovation.


