Choosing Storage for Your HPC Solution, Part 2: Reliability

Julie Fagan

Welcome to the second part of this five-part series on choosing storage for your high-performance computing (HPC) solution. In the first blog post, I talked about speed. Now, I will cover a characteristic that all your HPC operations depend on: reliability.

If your storage system is unavailable, data cannot be accessed, and work comes to a complete halt. Experts estimate that the average cost of an infrastructure failure is $100,000 per hour. The cost of downtime for automobile manufacturers is much higher, ranging anywhere from $22,000 to $50,000 per minute; even the low end of that range adds up to a staggering $1.3 million per hour, as the quick conversion after the list below shows. In an HPC environment, downtime can be even more costly:

  • A delay in getting a new product to market can open the door for a competitor to take the lead. A glitch in detecting anomalies on the production line can result in delivery of subpar products, leading to a loss of customers.
  • What if a company’s IT system for high-frequency stock trading goes down? Millions of dollars can be lost in just seconds if the precise window for buying or selling is missed. This downtime can have a wide-reaching, long-lasting effect if the missed opportunity was part of a mutual fund or retirement portfolio.
  • If a media streaming company is unable to deliver on-demand content to its customers, it stands to lose millions of subscriptions, not to mention the damage to its reputation. Imagine what would happen if real-time votes for a reality competition show could not be captured and counted because of an outage. Crowning the wrong person could result in expensive lawsuits, loss of viewers, and possibly cancellation of the show.
  • For oil and gas companies and healthcare researchers, downtime can come at an even greater cost by putting the environment and human lives at risk. If important seismic data is not captured or processed because of a system outage, drilling could trigger an earthquake. If healthcare researchers don't have access to the data they need, the search for solutions to life-threatening conditions is delayed.
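
To put those per-minute figures in perspective, here is a quick back-of-the-envelope conversion. This is a minimal Python sketch for illustration only: the per-minute rates are the industry estimates quoted above, and the function is just simple arithmetic, not data from any particular vendor or study.

# Back-of-the-envelope conversion of per-minute downtime cost estimates.
# The per-minute rates are the industry estimates quoted above; the
# arithmetic is purely illustrative.

def downtime_cost(per_minute_usd: int, minutes: int) -> int:
    """Total cost of an outage lasting `minutes` at a given per-minute rate."""
    return per_minute_usd * minutes

for rate in (22_000, 50_000):
    print(f"${rate:,}/minute -> ${downtime_cost(rate, 60):,}/hour")

# Output:
# $22,000/minute -> $1,320,000/hour   (the ~$1.3 million figure above)
# $50,000/minute -> $3,000,000/hour
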
In an environment where no time is a good time for downtime, the NetApp® HPC solution can help keep your operations up and running.

Built on a modular NetApp E-Series storage architecture, the solution offers:
  • Nonstop reliability with a fault-tolerant design that is proven to deliver 99.9999%+ availability (the calculation after this list shows how little downtime that level allows)
  • Best-in-class redundancy for outstanding resilience
  • Built-in data assurance features to keep your data accurate with no drops, corruption, or missed bits
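
What does 99.9999% ("six nines") availability actually permit? As a rough illustration, and not a NetApp-published calculation, a few lines of Python translate availability percentages into the maximum unplanned downtime they allow per year:

# Maximum unplanned downtime per year implied by an availability level.
# Illustrative arithmetic only.

SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

def max_downtime(availability_pct: float) -> float:
    """Seconds of downtime per year permitted at the given availability."""
    return SECONDS_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.9, 99.99, 99.999, 99.9999):
    print(f"{pct}% availability -> {max_downtime(pct):,.1f} seconds/year")

# Output:
# 99.9% availability -> 31,536.0 seconds/year   (~8.8 hours)
# 99.99% availability -> 3,153.6 seconds/year   (~53 minutes)
# 99.999% availability -> 315.4 seconds/year    (~5.3 minutes)
# 99.9999% availability -> 31.5 seconds/year

In other words, six-nines availability leaves room for only about half a minute of unplanned downtime across an entire year.
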
With nearly 1 million systems shipped and an extensive partner ecosystem that helps validate configurability, interoperability, and reliability, the NetApp HPC solution is proven to deliver the 24/7 availability your operations require.

Learn more about the NetApp HPC solution, and find out how customers around the world are maximizing availability in their HPC environments with NetApp E-Series storage.

And stay tuned for the third part of this five-part series, in which I will talk about the importance of simplicity when you choose storage for your HPC deployment.

Julie Fagan

Julie Fagan has a long career in high-tech solutions marketing. She loves working at NetApp, where she gets to focus on bringing the best video surveillance and high-performance computing storage solutions to the world, alongside her awesome co-workers.
