
Data storage: Everything you need to know about emerging technologies

The era of data-centric computing is here -- and, fortunately, data storage is more cost effective than ever. But we'll see more change in the next decade than we've ever seen before in computer data storage. Here's what's coming that you need to know.
Written by Robin Harris, Contributor

IT executives face a constant barrage of "new and improved" product claims. But data storage has changed more in the last 10 years than in the prior 25, and the rate of change is accelerating: We'll see more change in the next decade than we've ever seen before in computer data storage. Here's what's coming that you need to know.

Understanding what is coming -- some in the next few months -- will position savvy technology leaders to be proactive, value-added change agents. The innovations are real and fundamental, affecting how data centers are architected and managed, as well as enabling incredible new applications.

Executive overview

Twenty years ago there were storage arrays -- some small, some large -- and tape for archiving. Now, the storage landscape is much more varied, ranging from PCIe SSDs with the performance of 2010's million-dollar storage arrays, to scale-out storage capable of holding a hundred petabytes -- a hundred million gigabytes -- on low-cost commodity servers, automated enough that two people can manage the entire system.

Storage options are expanding, as are application requirements and I/O profiles. What worked a decade ago -- what we made work -- is less and less adequate now. Here's an overview of the key applications and technologies that are changing how we specify and deploy storage.

The key driver

While technology enables new solutions, the key driver -- why we need new solutions -- is the growth of data.

Video -- consumer and surveillance -- is the main component of overall capacity growth. But in the enterprise, the collection and analysis of web-generated data -- customer behavior, ad effectiveness, A/B design testing, heat maps, semantic analysis, and more -- all generate data that itself must be evaluated for economic value.

Trend-heavy industries -- such as food, fashion, entertainment, and social media -- that need to keep their virtual finger on the pulse of change, must gather and analyze masses of streaming time-series data to understand and predict where their markets are going.

Greater granularity and specificity are also increasing data volume and velocity. Major food retailers track their supply chains down to the individual package of organic kale -- and also track who bought that package -- in case of a recall. As storage costs continue to decline at 25% to 40% annually, more and more applications will become economic, further increasing demand for storage.
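
For a sense of how quickly that compounding decline adds up, here is a minimal sketch. The $25-per-terabyte starting point is a hypothetical placeholder; only the 25% and 40% annual rates come from the figures above.

```python
# Rough illustration: how a 25%-40% annual cost decline compounds over a decade.
# The $25/TB starting point is a placeholder, not a quoted market price.
start_cost_per_tb = 25.0  # USD, hypothetical

for annual_decline in (0.25, 0.40):
    cost = start_cost_per_tb
    for year in range(10):
        cost *= (1 - annual_decline)
    print(f"{annual_decline:.0%} decline/yr: "
          f"${start_cost_per_tb:.0f}/TB -> ${cost:.2f}/TB after 10 years")
```

At a 40% annual decline, a terabyte that costs $25 today would cost roughly 15 cents in a decade, which is why applications that are uneconomic now keep becoming viable.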

Streaming data, video, artificial intelligence and machine learning (ML), IoT, and more will drive private data stores into the exabyte range over the next decade. The fundamental issue with AI is that to increase the "intelligence" of AI, it needs exponentially greater amounts of training data -- and the storage that goes along with it.

Companies that leverage the opportunities in big data and analytics will prosper. The others will fall by the wayside.

Consider this executive guide an early warning system for disruptive trends and technologies that can help usher your company into long-term digital success. The focus is on what the technologies enable, so you can scroll through to see what capability is most interesting to you.


Storage management

Twenty years ago storage silos were the bane of storage and database administrators. Applications were welded to the server OS and storage arrays they ran on, upgrades meant expensive new hardware and risky migrations, and the need to handle usage spikes meant the infrastructure was chronically over-configured.

OS virtualization, containers, cloud integration, and the scale-out architectures (more on those later) that support them may make us long for the days when we could walk into a datacenter and touch our storage. Now, with cloud gateways integrated into enterprise storage arrays, and developers spinning up hundreds of terabytes for software testing, it is harder than ever to know who is using what storage, or why. It is harder still to know whether it is cost-effective, especially given the Byzantine bandwidth pricing designed to hold your data hostage.

What is needed, and will eventually appear, are cross-vendor storage tracking and analysis applications that leverage machine learning to understand and advise admins on optimizing the total storage infrastructure for performance and cost. These applications will know what various storage options cost (including bandwidth charges), how they perform, and how available and reliable they are, and will weigh that against what applications need and their economic value to the enterprise. That's a tall order, so what about right now?

Now, we're roughly where we were 20 years ago, managing different storage stacks. Until AI can help, we have to rely on an ad hoc mix of spreadsheets, heuristics, and human intelligence to make the best of our rich storage options.
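
In that spirit, here is a minimal sketch of the kind of spreadsheet-style heuristic admins lean on today: rank the available tiers by a blended cost-and-latency score. The tier names, prices, and weights are placeholders for illustration, not vendor figures.

```python
# A toy "spreadsheet in code": rank storage tiers by a blended score of
# cost and latency. Every number here is a placeholder, not a vendor quote.
tiers = {
    "nvme_ssd":     {"usd_per_tb_month": 100.0, "latency_ms": 0.1},
    "sas_hdd":      {"usd_per_tb_month": 25.0,  "latency_ms": 8.0},
    "cloud_object": {"usd_per_tb_month": 21.0,  "latency_ms": 60.0},  # excludes egress fees
}

max_cost = max(t["usd_per_tb_month"] for t in tiers.values())
max_latency = max(t["latency_ms"] for t in tiers.values())

def score(tier, cost_weight=0.5, perf_weight=0.5):
    """Lower is better: normalize cost and latency, then blend."""
    return (cost_weight * tier["usd_per_tb_month"] / max_cost
            + perf_weight * tier["latency_ms"] / max_latency)

for name, tier in sorted(tiers.items(), key=lambda kv: score(kv[1])):
    print(f"{name:13s} score={score(tier):.2f}")
```

Choosing the weights is the hard part, which is exactly where the machine learning described above should eventually help.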

Large memory servers

Intel, among others, will be introducing non-volatile random access memories (NVRAM) this year. These memories retain their data -- without batteries -- through power cycles.

Because NVRAM sits on the server's memory bus, it is orders of magnitude faster than disks or SSDs. But unlike SSDs, NVRAM can be accessed as either memory bytes, or 4K storage blocks. This gives system architects flexibility in configuring systems for maximum performance and compatibility.
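
A minimal sketch of what that dual personality looks like from software, assuming a persistent-memory region exposed as a file on a DAX-capable filesystem; the /mnt/pmem/region path is a hypothetical example. Byte-level access goes through mmap, while block-style access moves aligned 4K chunks.

```python
import mmap
import os

# Hypothetical persistent-memory region exposed as a file on a DAX-capable
# filesystem; the path is an assumption for illustration.
PMEM_PATH = "/mnt/pmem/region"

fd = os.open(PMEM_PATH, os.O_RDWR)

# Byte-addressable access: map the region and touch individual bytes,
# much as you would ordinary (but persistent) memory.
buf = mmap.mmap(fd, 4096)
buf[0:5] = b"hello"

# Block-style access: move aligned 4K blocks through the same region.
block = os.pread(fd, 4096, 0)   # read one 4K block at offset 0
os.pwrite(fd, block, 0)         # write it back

buf.close()
os.close(fd)
```

A production application would also need explicit cache flushes (or a library such as PMDK) to guarantee durability, which this sketch omits.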

A common use case will see NVRAM used in large memory servers. Today, the latest Xeon SP (Skylake) servers can support up to 1.5TB of memory per processor, but the 12 128GB DIMMs required to achieve that are costly. Instead, Intel's Optane NVRAM DIMMs are priced as low as $625 per 128GB -- and use much less power, too.

A dual-socket Xeon SP server can support 3TB of memory. With affordable Optane DIMMs, large databases can be run in memory, dramatically improving performance.
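
The arithmetic behind that configuration, using the 12-DIMMs-per-socket and $625-per-128GB figures above:

```python
# Back-of-envelope cost of filling a dual-socket Xeon SP server with
# Optane DIMMs, using the ~$625 per 128GB figure cited above.
price_per_dimm_usd = 625      # 128GB Optane DIMM, per the figure above
dimms_per_socket = 12         # 12 x 128GB = 1.5TB per processor
sockets = 2

total_dimms = dimms_per_socket * sockets
capacity_tb = total_dimms * 128 / 1024
total_cost = total_dimms * price_per_dimm_usd

print(f"{total_dimms} DIMMs = {capacity_tb:.1f}TB for about ${total_cost:,}")
# -> 24 DIMMs = 3.0TB for about $15,000
```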

Intel is not the only competitor in the NVRAM space. Nantero is due to start shipping NVRAM DIMMs next year, employing technology that is even faster than Optane. The bottom line is that NVDIMMs are here today, and offer real advantages over DRAM DIMMs -- and more are coming soon.


Scale-out storage

All the cloud vendors use highly scalable storage to store exabytes of data. That technology is making its way to the enterprise in both hardware (Nutanix) and software (Quobyte) forms, among others.

The biggest difference among scale-out architectures -- which are typically shared-nothing clusters running on commodity hardware -- is how they protect data. Active I/O systems usually rely on triple replication, while less active systems rely on advanced erasure codes -- more on those in the next section -- to provide even higher levels of data protection.

The important point is that private data centers can create infrastructures that are cost competitive with cloud vendors, and offer lower latency and more control. The key is to understand what your base workload requirements are, and relegate cloud usage to transient or spiking workloads.


Highly resilient storage

Erasure codes have been used for decades to increase data density in disk drives and -- in the form of RAID -- storage arrays. But advanced erasure codes enable users to dial in the level of data protection and security they desire, with very low overhead.

RAID 5, for example, only protects against one drive failure. If a drive fails, and there is an unrecoverable read error (URE) in one of the remaining drives, the entire recovery can fail.

With advanced erasure codes (AEC), a 10 (or more) drive stripe can be configured to survive four drive failures, so even if three drives fail, a URE will not stop the recovery. For ultra-high data protection, AEC can be configured to run across multiple geographies, so that even the loss of one or more data centers will not lose data.

Compare this to RAID 5, which protects against only one failure. RAID 6, which protects against two failures, requires more parity, spread across two drives' worth of capacity.

With AEC the capacity overhead is typically about 40%, but, properly configured, it will protect against as many failures -- disk, server, even data center outages -- as you choose. Forty percent may seem like a high price, but if you've ever lost data on a RAID array, it is a bargain.
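
A quick way to see where those numbers come from: in a k+m erasure-code layout, k data fragments plus m coding fragments survive any m simultaneous failures, at a capacity overhead of m/k. The RAID group widths below are illustrative; the 10+4 layout matches the example above.

```python
# Capacity overhead and fault tolerance of a k+m erasure-code layout:
# k data fragments + m coding fragments survive any m simultaneous failures.
def erasure_profile(k, m):
    return {"survives_failures": m, "capacity_overhead": m / k}

schemes = {
    "RAID 5 (7+1)": erasure_profile(7, 1),
    "RAID 6 (8+2)": erasure_profile(8, 2),
    "AEC (10+4)":   erasure_profile(10, 4),
}

for name, profile in schemes.items():
    print(f"{name:13s} survives {profile['survives_failures']} failures, "
          f"overhead {profile['capacity_overhead']:.0%}")
# The 10+4 layout survives 4 failures at 40% overhead, matching the figures above.
```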

The downside to AEC is that the math required to create the needed redundancy can be processor intensive: It's not for transaction processing. However, improvements to AEC in the coming years will decrease its compute requirements, leading to improved performance across many applications.


Data security

Data security, related to availability but focused on keeping data out of the wrong hands, will undergo sweeping changes in the next few years. With the advent of Europe's General Data Protection Regulation (GDPR) last year, the stakes for mishandling European citizens' data rose dramatically. Encryption at rest and in flight is required. Data breaches must be reported. Fines can be huge.

This will lead to the general adoption of defense-in-depth strategies, a necessary response to the reality of mobile computing and the IoT: there are too many entry points to rely on a single line of defense.

Machine learning will ultimately play a key role, but the problem is the massive amounts of data required to train the system. That requires organizations to share threat data using protocols that enable automation of threat communication and amelioration.


Neural processors

If your organization uses, or plans to use, machine learning in a significant way, you will need to become familiar with neural processors. Neural processors are massively parallel arithmetic logic units optimized for the math that machine learning models require.

Neural processors are increasingly common. There's one in the Apple Watch, and all the cloud vendors have created their own designs. Google's Tensor Processing Unit (TPU), built to accelerate TensorFlow workloads, is capable of some 90 trillion operations per second. Expect much faster versions in the near future.

So what do neural processors demand from storage? Bandwidth.

In real-time applications such as robotics, autonomous vehicles, and online security, the neural processor needs to be fed the appropriate data as quickly as possible, so bandwidth is important. Because convolutional neural networks typically have multiple layers, most of the computational results are passed within the neural processor, not to external storage. Thus the processors do not need L3 caches. The focus is on feeding the data with as little latency as possible so the required math can be completed ASAP.
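
A rough way to size that feed rate is to divide the accelerator's throughput by the model's arithmetic intensity (operations performed per byte of input). The intensity value below is a hypothetical assumption; only the 90 trillion operations per second comes from the figure above.

```python
# Back-of-envelope storage/network bandwidth needed to keep an accelerator busy.
accelerator_ops_per_sec = 90e12   # ~90 trillion ops/sec, as cited above
ops_per_input_byte = 10_000       # hypothetical arithmetic intensity of the model

required_gb_per_sec = accelerator_ops_per_sec / ops_per_input_byte / 1e9
print(f"~{required_gb_per_sec:.0f} GB/s of input data to avoid starving the chip")
# With these assumptions, roughly 9 GB/s; far beyond a single disk or SATA SSD.
```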


Rack scale design

Rack scale design (RSD) is a concept that Intel has been promoting for years, and the pieces have come together in the last year, with more advances coming this year. Essentially, RSD is an answer to the differing rates of technology advances in CPUs, storage, networks and GPUs.

The RSD concept is simple. Take individual racks of CPU, memory, storage, and GPUs, connect them all with a high-bandwidth, low-latency interconnect, and, with software, compose virtual servers with whatever combination of compute, memory, and storage a particular application requires. Think of RSD as a highly configurable private cloud.
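
A minimal sketch of that composition idea; the pool sizes, resource classes, and compose() function below are hypothetical illustrations, not Intel's or any vendor's actual interface.

```python
# Hypothetical illustration of rack-scale composition: carve "virtual servers"
# out of disaggregated pools of CPU, memory, and storage. Not a real vendor API.
from dataclasses import dataclass

@dataclass
class RackPool:
    cpus: int
    memory_gb: int
    storage_tb: int

@dataclass
class VirtualServer:
    name: str
    cpus: int
    memory_gb: int
    storage_tb: int

def compose(pool: RackPool, name: str, cpus: int, memory_gb: int, storage_tb: int) -> VirtualServer:
    """Allocate resources from the rack-level pool, if available."""
    if cpus > pool.cpus or memory_gb > pool.memory_gb or storage_tb > pool.storage_tb:
        raise RuntimeError("rack pool cannot satisfy this composition")
    pool.cpus -= cpus
    pool.memory_gb -= memory_gb
    pool.storage_tb -= storage_tb
    return VirtualServer(name, cpus, memory_gb, storage_tb)

rack = RackPool(cpus=128, memory_gb=8192, storage_tb=500)
db_node = compose(rack, "in-memory-db", cpus=32, memory_gb=3072, storage_tb=100)
print(db_node)
print(rack)   # remaining capacity available for the next composition
```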

HPE's Synergy system is one implementation of the concept, based, of course, on HPE hardware. Liqid Inc. offers a software version that supports commodity hardware and multiple fabrics. Expect others to enter the market as well.

With the advent of PCIe v4, the related upgrade of NVMe (which runs over PCIe), and increases in the number of PCIe lanes CPUs support, the PCIe interconnect finally has enough bandwidth to handle demanding applications. With the ability to upgrade components as their technology improves -- without the expense of buying everything else new -- CIOs will be able to exert much more granular control over critical infrastructure.


Storage-based processing

With the rapid growth of data volumes at the edge and in data centers, it is increasingly difficult to move data to processors. Instead, processing is moving to the storage.

There are two different ideas covered under the rubric of intelligent storage. At the edge, data pre-processing and reduction, perhaps using machine learning, reduces bandwidth requirements back to data centers. In big data applications, sharing a pool of storage and/or memory lets as many processors as needed operate on the same data to achieve the required performance.

These concepts are currently labeled intelligent storage by HPE, Dell/EMC, and NGD Systems. This goes beyond the optimizations built into storage array controllers that manage issues with disk latency or access patterns. Call it storage intelligence v2.

Consider a petabyte rack of fast, dense, non-volatile memory, attached to dozens of powerful CPUs in the next rack. With proper synchronization and fine-grained locking, thousands of VMs could operate on a massive data pool without moving hundreds of terabytes across a network.
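
As a toy illustration of what fine-grained locking over a shared pool means in practice: partition the pool into regions and lock only the region being touched, so writers to different regions never serialize on a single global lock. The region and pool sizes below are arbitrary placeholders.

```python
# Toy sketch of fine-grained locking over a shared data pool: one lock per
# fixed-size region, so writers to different regions do not block each other.
import threading

REGION_BYTES = 64 * 1024
POOL_BYTES = 16 * 1024 * 1024          # stand-in for a much larger shared pool

pool = bytearray(POOL_BYTES)
region_locks = [threading.Lock() for _ in range(POOL_BYTES // REGION_BYTES)]

def write(offset: int, data: bytes) -> None:
    """Write within a single region, holding only that region's lock."""
    region = offset // REGION_BYTES
    with region_locks[region]:
        pool[offset:offset + len(data)] = data

# Two writers touching different regions proceed without contention.
writers = [
    threading.Thread(target=write, args=(0, b"alpha")),
    threading.Thread(target=write, args=(REGION_BYTES, b"beta")),
]
for t in writers:
    t.start()
for t in writers:
    t.join()
```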

With the advent of fast and cheap neural processors, and a sufficient corpus for machine learning, intelligent storage could be trained to be largely self-managing. Beyond that, an intelligent data pool could, for example, use ML to detect race conditions based on access patterns and locking activity.


High-capacity disk drives

Disk drives aren't dead, and are, in fact, enjoying a technology renaissance. The latest drives have capacities up to 16TB, and over the next five years that number will almost double. Disks will remain the lowest cost random access storage for years to come. The technologies driving HDDs forward include:

Helium
Helium reduces aerodynamic drag and turbulence, enabling vendors to cram more platters in the drive, while reducing power and heat. Popular in cloud data centers.

HAMR
Heat-Assisted Magnetic Recording drives are due next year from Seagate, and WD will likely follow. A laser (or, in WD's related microwave-assisted approach, microwaves) heats a tiny section of a disk platter to 400 degrees C just before writing. When cool, the medium is much more resistant to bit flips. In technical terms, the heat enables the use of high-coercivity magnetic material, which allows greater data density.

Shingled magnetic recording
Read/write heads lay down a much wider write track than the read head needs. By reducing the distance between tracks, the write tracks overlap like shingles, allowing much higher data densities. SMR (shingled magnetic recording) drives are optimal for archives.


Conclusion

The era of data-centric computing is here. With over 4.5 billion computers in use -- most of them mobile -- and the growth of IoT still in the future, the technology and governance of data will be a top priority for both economic and legal reasons.

Data is increasingly a competitive weapon. Properly stored, even old data can offer value thanks to new analytical tools. Fortunately, data storage is more cost effective than ever, a trend that will continue for the foreseeable future.
