From edge to core to cloud, part 2: What’s the practical application of a fully integrated infrastructure?

Meg Matich

As we said in part 1 of the “Edge to Core to Cloud” series, fully integrated infrastructure is a work in progress that’s emerging through adoption of foundational technologies. But by examining particular use cases, we can see clearly just how necessary, and how advantageous, an integrated infrastructure can be.

Let’s dwell on AI for a moment.

Artificial intelligence solutions and deep learning algorithms rely on massive datasets. For example, autonomous cars can collect hundreds of terabytes per day. These datasets originate at the edge, where they’re partially processed, crunched, and shrunk. The reduced datasets are then transferred to the core or cloud, where they’re analyzed and often used for training and inference before being archived.
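
To make that flow concrete, here’s a minimal sketch of the pattern: raw data is reduced at the edge and packaged before transfer to the core or cloud. The function names (collect_raw_frames, reduce_at_edge, package_for_core) and the simulated data are hypothetical, chosen only to illustrate the idea, not any specific product’s API.

```python
# Hypothetical edge-to-core flow: reduce raw telemetry at the edge,
# then ship only the smaller, compressed result to the core or cloud.
import gzip
import json
import random


def collect_raw_frames(n: int) -> list[dict]:
    """Simulate raw sensor frames captured at the edge."""
    return [{"frame": i, "lidar": [random.random() for _ in range(100)]} for i in range(n)]


def reduce_at_edge(frames: list[dict], keep_every: int = 10) -> list[dict]:
    """Crude reduction: keep only every Nth frame before transfer."""
    return frames[::keep_every]


def package_for_core(frames: list[dict]) -> bytes:
    """Compress the reduced dataset for transfer to the core or cloud."""
    return gzip.compress(json.dumps(frames).encode("utf-8"))


if __name__ == "__main__":
    raw = collect_raw_frames(1_000)
    reduced = reduce_at_edge(raw)
    payload = package_for_core(reduced)
    print(f"raw frames: {len(raw)}, transferred frames: {len(reduced)}, payload bytes: {len(payload)}")
```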

If an organization has disparate, disconnected, incompatible pools of storage, AI processes are blocked. That’s why AI applications that deliver business value benefit from a comprehensive, seamless architecture with data management that extends across edge, core, and cloud environments. Such an architecture minimizes tedious data housekeeping and the custom code needed for multiple APIs and integrations, so data scientists can move forward faster.

Preparing for the upcoming “data explosion”

An integrated storage infrastructure has another advantage: it helps you cope with the data onslaught you’re already experiencing.

We mentioned that autonomous cars can produce hundreds of terabytes of data per day, and they’re hardly the only workload that puts pressure on storage infrastructure. Many organizations already find that their storage demands are spiraling out of control. The problem is exacerbated by disconnected pools of storage, which often hold terabytes of duplicate information.

Not every storage technology is suited for rapidly increasing data.

Data reduction will be key at every point in the data stream. Edge computing environments already benefit from data reduction technologies such as edge gateways. Storage efficiency features like compression and deduplication, especially when they span multiple platforms and locations, go a long way toward containing data growth.
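
As a rough illustration of how deduplication and compression shrink what actually lands on disk, here’s a small Python sketch using content hashing and gzip. The fixed-size blocks and in-memory store are assumptions made for the example; real storage systems implement these techniques at a much lower level and at far larger scale.

```python
# Toy content-addressed store: deduplicate identical blocks by hash,
# then compress each unique block that is actually kept.
import gzip
import hashlib


def dedupe_and_compress(blocks: list[bytes]) -> tuple[dict[str, bytes], list[str]]:
    """Store each unique block once (keyed by its SHA-256 digest), compressed.

    Returns the deduplicated store plus the ordered list of digests needed
    to reconstruct the original stream.
    """
    store: dict[str, bytes] = {}
    manifest: list[str] = []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:                    # deduplication: identical blocks stored once
            store[digest] = gzip.compress(block)   # compression: shrink what we do keep
        manifest.append(digest)
    return store, manifest


if __name__ == "__main__":
    data = [b"sensor-reading-A" * 64, b"sensor-reading-B" * 64, b"sensor-reading-A" * 64]
    store, manifest = dedupe_and_compress(data)
    logical = sum(len(b) for b in data)
    physical = sum(len(b) for b in store.values())
    print(f"logical bytes: {logical}, physical bytes after dedupe + compress: {physical}")
```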

It’s important to understand how your existing technologies can or can’t address your data growth. Some solutions—like throwing additional cloud storage at the problem—aren’t sustainable long-term approaches, because costs get out of control. It’s also important to evaluate emerging technologies that can help.

Storing and managing huge volumes of data

To cope with the amount of data that edge computing will generate, start from the premise that everything you do is in service of agility. Storing and managing your data must serve the needs of customers, who expect low latency, high throughput, and distinctive services. Anything that blocks agility has to be rooted out. Some of the principles behind agility include:
  • Automation. You can’t sustain manual approaches to provisioning, data migration, data protection, and other functions when infrastructure stores hundreds to thousands of terabytes. The more self-diagnosing, self-healing, and self-restarting a storage system can do, the more likely it is to be the right fit for a massively scalable collection of datasets.
  • Availability. Availability has to be a given: 99.9% uptime might not be enough when your customers want their financial records now (see the quick downtime calculation after this list).
  • Management. As storage infrastructure expands, integrated visibility and control make a huge difference. Sometimes, especially at the edge, it’s difficult (or impossible) to make changes to a storage system locally. Remote control over dozens or hundreds of sites becomes a necessity.
  • Security. Any storage technology must be secure. Breaches happen every day and threats are constant. Most mature storage technologies used in today’s environments are built with encryption, access control, and secure multitenancy capabilities.
  • Consistency. It’s impossible to be agile without consistency. Consistent features help keep a service functioning predictably in any location. Consistent programmability lets developers write an application once and have the flexibility to deploy it anywhere, with the same performance. Consistent security keeps data safe regardless of location. Consistent availability means that an edge failure doesn’t affect core applications. And consistent automation improves data movement and data protection across the entire infrastructure.
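
To put the availability point in concrete terms, here’s a quick back-of-the-envelope calculation of how much downtime per year each level of uptime actually allows. Plain arithmetic, nothing product-specific:

```python
# Annual downtime permitted by each "number of nines" of uptime.
MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (0.999, 0.9999, 0.99999):
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} uptime -> ~{downtime_min:.0f} minutes of downtime per year")
```

Three nines still allows roughly 500 minutes (more than 8 hours) of downtime a year, which may be far too much for customers who expect their data on demand.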

Conclusion

Organizations are looking for a proven approach to storage that meets many requirements without compromising on essential capabilities. Using a storage technology that runs wherever you need it, whether on vendor platforms, hyperconverged infrastructure, virtual machines, cloud resources, or containers, gives you the flexibility to deploy the right storage in the right place at the right cost. That flexibility translates into accelerated development, better efficiency, and improved cost controls.

How to realize your vision of a fully integrated infrastructure

In part 3 of this series, we’re going to explore what NetApp® has done to make the integrated storage infrastructure vision a reality. We’re putting our efforts into rolling out a leading-edge approach for integrated storage infrastructure that spans edge to core to cloud, powering new ways to thrill customers without compromising the bottom line. You can learn more by visiting our all-things-integration hub.
