NetApp end-to-end NVMe – Delivering even more for our customers

Ricky Martin

A few years ago, NetApp was the first major vendor to release end-to-end NVMe for flagship arrays, and we’re about to extend that leadership even further. If you're curious about what that’s going to look like and why it’s important, head on over to the What Is NVMe? - Benefits and Use Cases | NetApp page.

If you are an existing NetApp® ONTAP® SAN customer, this is great news, because you could be getting significantly more performance by upgrading to the latest version of ONTAP. If you’re not using ONTAP SAN yet, and you’re looking to buy a new storage array, make sure that you’re getting the best performance for your money and are future-proofing your investment. That means putting end-to-end NVMe right at the top of your checklist.

The really important part of this is the end-to-end part: NVMe all the way from the host to the media, not just inside the array. That’s because most of the benefits lie in the NVMe connection between a server and the storage. For local storage, that’s easy. If you have NVMe media, the communication is always NVMe, and that gives you a whole stack of benefits compared to any SCSI-based system: more CPU cycles for applications, faster response times, and better throughput. The performance boost is so good that in many cases a single local NVMe drive can be significantly faster than the fastest traditional SAN.

Figure 1) Traditional SAN vs local NVMe
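
If you want to see that gap for yourself, the quickest check is to time small random reads against a local NVMe namespace and a SAN-attached LUN on the same host. The Python sketch below is a rough probe, not a proper benchmark; the device paths are placeholders for whatever devices exist on your system, and it needs root so that it can open the raw block devices with O_DIRECT.

```python
# Rough latency probe: median time for 4 KiB random reads from a block device.
# Device paths below are placeholders. Run as root; O_DIRECT bypasses the page
# cache so the timings reflect the device path rather than DRAM.
import mmap
import os
import random
import time

BLOCK = 4096      # read size; also a common logical block size
SAMPLES = 1000    # number of random reads per device


def median_read_latency_us(path: str) -> float:
    fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)
        # Anonymous mmap is page aligned, which O_DIRECT requires.
        buf = mmap.mmap(-1, BLOCK)
        samples = []
        for _ in range(SAMPLES):
            offset = random.randrange(size // BLOCK) * BLOCK
            start = time.perf_counter()
            os.preadv(fd, [buf], offset)
            samples.append((time.perf_counter() - start) * 1e6)
        samples.sort()
        return samples[len(samples) // 2]
    finally:
        os.close(fd)


if __name__ == "__main__":
    # Placeholder device names: a local NVMe namespace and a SAN-attached LUN.
    for dev in ("/dev/nvme0n1", "/dev/sdb"):
        print(f"{dev}: ~{median_read_latency_us(dev):.0f} us median 4 KiB read")
```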

Even though local NVMe had a lot of advantages over old-school end-to-end Fibre Channel SAN, storage arrays still provided a lot of other benefits, like compression, deduplication, replication, and more. So the question for companies like NetApp was, "What's the best way to get the best of both worlds?"

There are two main ways to look at this question: focus on hardware improvements or focus on software innovation. 

Pure's legacy hardware-defined approach

One company that made a big noise about NVMe was Pure; they decided that a proprietary hardware approach to the problem was best. When they started down the hardware optimization path, most NVMe devices weren't designed for high availability. That’s fine when an NVMe drive sits inside a single server, but when drives like that are part of a high-availability configuration, they introduce single points of failure. Pure’s solution was to build proprietary hardware devices and push the high-availability firmware functionality up into their storage operating system. They rushed this design to market as fast as they could.

Figure 2) Traditional SAN vs local NVMe vs Traditional SAN with NVMe Media

This was the foundation of their //X range, which in mid-2018 began to make their previous //M arrays obsolete. To get customers to make the switch and undertake the risk and expense of upgrading and refreshing their hardware, Pure made a huge fanfare about how revolutionary their NVMe technology was.

Unfortunately, despite the hype, they didn’t deliver much usable benefit, mostly because the connection from the server to the storage was still using legacy Fibre Channel or iSCSI connections. Based on their very scant performance data, there was a modest increase in performance, but this could easily be due to faster CPUs. Regardless of the small performance improvement, there was a pretty big reality gap between the hype and the delivery. 

NetApp's cloud-led software-defined approach

At NetApp, we avoided the pitfalls of developing proprietary NVMe media, because we knew that many drive manufacturers were working to deliver NVMe-enabled SSDs that didn’t suffer from single points of failure. But to truly take advantage of that benefit, the NVMe standard needed to be improved to support high-availability use cases. In 2018, as part of the NVM Express Promoter Group (an organization that Pure still does not contribute to), NetApp created and helped ratify a critical piece of software functionality under the NVMe 1.3 specification, called Asymmetric Namespace Access (ANA). This capability allows an NVMe device to be accessed through more than one physical path, making it easier to take advantage, in an industry-standard way, of the kind of dual-ported media that arrays need. It also makes it possible to connect a host to NVMe storage safely over a network, without fear of a network device failure cutting off access.
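
If you’re running a recent Linux host with the kernel’s native NVMe multipathing, you can watch ANA doing its job by reading the per-path state the kernel reports. The sysfs locations in this Python sketch are assumptions that vary by kernel version, so treat it as illustrative; `nvme list-subsys` from nvme-cli shows the same information.

```python
# Report the ANA (Asymmetric Namespace Access) state of each NVMe path that the
# Linux kernel's native NVMe multipathing exposes. The sysfs locations are
# assumptions and differ between kernel versions; adjust the globs as needed.
import glob
import os


def ana_states():
    patterns = (
        "/sys/class/nvme/nvme*/nvme*/ana_state",   # per-controller namespace paths
        "/sys/class/block/nvme*/ana_state",        # per-path block devices
    )
    for pattern in patterns:
        for attr in glob.glob(pattern):
            with open(attr) as f:
                state = f.read().strip()   # e.g. "optimized", "inaccessible"
            path_dev = os.path.basename(os.path.dirname(attr))
            yield path_dev, state


if __name__ == "__main__":
    found = False
    for path_dev, state in sorted(set(ana_states())):
        found = True
        print(f"{path_dev}: {state}")
    if not found:
        print("No ana_state attributes found (single path, or ANA not in use)")
```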

Even though we knew that industry-standard dual-ported NVMe drives weren’t far away, NetApp didn’t sit around waiting. In 2017, long before those drives became available and well before Pure announced the availability of their //X array, NetApp released the EF570, which used proven, high-performance dual-ported SAS media at the back end and NVMe over 100Gbit InfiniBand at the front. The EF570 delivered performance that rivaled locally attached media, while offering the scale and reliability needed by the world's fastest supercomputers.

Figure 3) Traditional SAN vs local NVMe vs NVMe-oF with SAS Media
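
For anyone curious what the host side of an NVMe over Fabrics front end like this looks like, the attach is handled by nvme-cli on Linux. The Python sketch below simply wraps the discover and connect steps; the transport, target address, and subsystem NQN are hypothetical placeholders, not values for any particular NetApp array.

```python
# Sketch of an NVMe over Fabrics host attach on Linux using nvme-cli.
# The transport, address, port, and NQN below are hypothetical placeholders;
# substitute the values for your own target. Requires the nvme-rdma (or
# nvme-tcp / nvme-fc) kernel module and root privileges.
import subprocess

TRANSPORT = "rdma"                       # could also be "fc" or "tcp"
TRADDR = "192.168.10.20"                 # placeholder target address
TRSVCID = "4420"                         # default NVMe/RDMA and NVMe/TCP port
NQN = "nqn.2017-06.example:subsystem1"   # placeholder subsystem NQN


def run(args):
    # Echo the command, run it, and return its stdout (raises on failure).
    print("+", " ".join(args))
    return subprocess.run(args, check=True, capture_output=True, text=True).stdout


if __name__ == "__main__":
    # Ask the target which subsystems it exposes, then connect to one of them.
    print(run(["nvme", "discover", "-t", TRANSPORT, "-a", TRADDR, "-s", TRSVCID]))
    print(run(["nvme", "connect", "-t", TRANSPORT, "-a", TRADDR,
               "-s", TRSVCID, "-n", NQN]))
    # The new namespaces then show up as regular /dev/nvmeXnY block devices.
    print(run(["nvme", "list"]))
```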

The superiority of software

Now to be fair, NetApp E-Series systems like the EF570 and the incredible EF600 are tailored toward raw performance, which is reflected in the benchmark results at NetApp EF570 All-Flash Array Review | StorageReview.com and NetApp AFA EF600 Review | StorageReview.com. So perhaps it’s natural that these kinds of high-performance software features were delivered in the EF supercomputing storage first. But while NetApp was delivering engineering results from software, Pure delivered promises and marketing based around custom hardware.

It took Pure until February 2019 to announce NVMe connectivity from the host to storage. In the meantime, NetApp delivered even more NVMe functionality, which you can see on our why NetApp is best for flash page and which I’ll cover in more detail in the next blog.

Ricky Martin

Ricky Martin leads NetApp’s global market strategy for its portfolio of hybrid cloud solutions, providing technology insights and market intelligence on trends that impact NetApp and its customers. With nearly 40 years of IT industry experience, Ricky joined NetApp as a systems engineer in 2006 and has served in various leadership roles in the NetApp APAC region, including developing and advocating NetApp’s solutions for artificial intelligence, machine learning, and large-scale data lakes.
