Shadow IT, Latency Reduction and More: The Next Decade in Storage

You’ve been reading plenty of “What’s coming in 2015” articles. Pfft, what fun are they? Robin Harris, a.k.a. StorageMojo, peers into his crystal ball to predict what storage will be like in 2025. And, he says, the next 10 years will be the most exciting and explosive in the history of data storage.

Today, storage has assumed the central place in IT infrastructure – something computer pioneer Alan Turing foresaw as early as 1947:

“Speed is necessary if the machine is to work fast enough for the machine to be commercially valuable, but a large storage capacity is necessary if it is to be capable of anything more than rather trivial operations. The storage capacity is therefore the more fundamental requirement.” [Italics added]

I’ve been observing the storage market for over 35 years. In that time, many technologies, products, companies and markets emerged, matured, and – sometimes – disappeared.

Yet some trends have persisted for decades. Ever-decreasing cost per byte. Growing capacity demand. Storage growing as a percentage of data center spend. Storage as the critical management problem.

Storage is the most difficult infrastructure problem because data needs to persist. The ever-present enemy of storage is entropy, the universal process of organized systems – such as your data – becoming less organized over time.

Despite the challenges, though, storage is set for an explosion of new and better choices thanks to a combination of technical, business, and application factors. The next 10 years will be the most exciting and explosive in the history of data storage.

The storage palette

For decades we had RAM, disk, and tape limiting our storage palette. But now: Get ready for Technicolor storage in 3D.

The shadow IT industry – the secretive cloud-scale IaaS suppliers – is a key piece. But so are resistance RAM (RRAM), low-latency architectures, new applications, and the commoditization of most storage. Here’s an overview of some of the most critical changes affecting the next decade in storage.

The shadow IT industry

The shadow IT industry is a couple of dozen Web scale service providers and the dozens of companies that serve them. It will continue to grow in power and influence. This numerically small but economically powerful group is creating products, such as Shingled Magnetic Recording (SMR) disks or terabyte optical discs for cloud vendors, that often will find broader applications in the enterprise.

The larger challenge is that enterprises will no longer have access to the latest and most cost-effective storage technology. Cloud providers will offer services with which few enterprises can compete.

However, this creates opportunities for those inside the cloud companies to leverage specialized technologies for enterprise and consumer consumption by creating new companies. Achieving this goal requires a tricky balancing act, since scaling down is almost as difficult as scaling up.

This is already happening. For instance, the founders of Nutanix, a hyperconverged systems company, included Google alumni who worked on its Web scale GFS storage system.

Non-volatile resistance RAM

NAND flash has revolutionized storage in the last decade. But flash is hardly the ideal storage medium.

Flash writes are very slow – slower than disk. Endurance is limited; 1,000 writes is a common specification. And as Moore’s Law shrinks feature sizes, endurance and speed get worse, not better. Power efficiency is poor as well, since writes require about 20 volts.
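The endurance problem is easy to see with back-of-envelope arithmetic. The sketch below uses illustrative numbers only (the drive size, write rate, and write-amplification factor are assumptions, not figures from any specific product), combining the article's 1,000-cycle endurance specification with a write-amplification factor (WAF) to account for the controller's extra internal writes:

```python
# Back-of-envelope flash lifetime estimate (illustrative numbers only).
# Assumes: a 1 TB drive, 1,000 program/erase cycles per cell, a sustained
# host write rate, and a write-amplification factor (WAF) covering the
# controller's extra internal writes (garbage collection, wear leveling).

def flash_lifetime_years(capacity_tb, pe_cycles, host_writes_tb_per_day, waf=3.0):
    """Years until the drive's rated endurance is exhausted."""
    total_endurance_tb = capacity_tb * pe_cycles   # total TB the cells can absorb
    daily_wear_tb = host_writes_tb_per_day * waf   # actual NAND writes per day
    return total_endurance_tb / daily_wear_tb / 365

# A 1 TB drive rated for 1,000 cycles, written at 0.5 TB/day with WAF 3:
print(round(flash_lifetime_years(1.0, 1000, 0.5), 1))  # → 1.8 (years)
```

Under those assumptions the drive wears out in under two years – which is why controller tricks like wear leveling and over-provisioning matter so much.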

Engineering creativity has worked around most of these issues, but new Resistive RAM (RRAM) technologies are coming that eliminate flash’s problems.

There are several forms of RRAM, but they all store data by changing the resistance of a memory site, instead of placing electrons in a quantum trap, as flash does. RRAM promises better scaling, fast byte-addressable writes, much greater power efficiency, and thousands of times flash’s endurance.

RRAM’s properties should enable significant architectural leverage, even if it is more costly per bit than flash is today. For example, a fast and high endurance RRAM cache would simplify metadata management while reducing write latency.

Latency reduction

Everyone loves the massive Input/Output Operations Per Second (IOPS) of flash storage. But those IOPS have uncovered another bottleneck: storage stack latency.

For decades, our working assumption has been that hard drives are latency’s limiting factor. We ignored the storage software stack’s contribution to latency.

Now the storage stack’s latency has come to the fore. We have to re-architect the rest of the stack to take advantage of low-latency media.

Look at TPC-C benchmarks. You’ll see that even flash-based storage systems may have long tail latencies into the dozens of seconds. That simply shouldn’t be. Application and operating system software stacks shouldn’t have to deal with latencies like this when the underlying storage is capable of much higher responsiveness.

Reduced latency enables servers to do much more work with the same CPU cycles and cache capacity. More important: Long latencies create problems that are difficult to reliably engineer around.
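The tail-latency point is worth making concrete. The sketch below uses hypothetical numbers (not TPC-C data): a workload where nearly every request completes in 1 ms, but a handful stall for 10 seconds behind a congested storage stack. The mean looks healthy; the high percentiles tell the real story:

```python
# Sketch: why averages hide tail latency (hypothetical numbers, not TPC-C data).
# 1,000 requests at 1 ms, plus 10 stragglers stuck for 10 s in the storage stack.
import statistics

samples_ms = [1.0] * 1000 + [10_000.0] * 10

def percentile(data, p):
    """Nearest-rank percentile of a list of latency samples."""
    s = sorted(data)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

print(f"mean  = {statistics.mean(samples_ms):.1f} ms")   # → 100.0 ms
print(f"p99   = {percentile(samples_ms, 99):.1f} ms")    # → 1.0 ms (looks fine)
print(f"p99.9 = {percentile(samples_ms, 99.9):.1f} ms")  # → 10000.0 ms (the tail)
```

Ten slow requests out of a thousand are invisible at p99 and catastrophic at p99.9 – exactly the kind of long tail that applications cannot reliably engineer around.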


The end of vertical integration

Storage systems and network routers are the last two bastions of vertical integration. In the 1980s, the server world converted from vertically integrated companies (such as DEC and Data General) to horizontal integration, with CPUs from Intel, operating systems from Microsoft, databases from Oracle, and applications from everywhere.

As a result of this transition, the server industry saw gross margins drop from 60%-70% to 30%. Servers became a box that a business ordered online and received a few days later.

That transition is long overdue for data storage. In fact, it is already happening. Look at what Google and Amazon charge for storage! Enterprises that hope to compete with cloud economics will demand similar TCO from their own storage.

It won’t be an easy transition, especially for legacy applications designed for specific underlying storage. But the sooner enterprises start using scale-out storage – as the big cloud providers already do – the better they will adapt.


New applications

Streaming data analysis is one example of new applications with unique storage requirements. Image analysis, video analysis, and other demanding storage-centric applications will also enable new forms of data value – and bring infrastructure-busting performance and capacity demands with them.

Consider the relatively simple case of police body cam video storage. Capturing the footage is the easy part; to make it useful, a legal chain of custody has to be established, the data has to be searchable, and, of course, the storage has to be extremely cost-effective.

Think of the output of 50,000 high-definition (HD) police body cams. That’s a tremendous amount of data capacity – and it needs to be readily searchable at high speed. High-performance, exabyte-capacity, low-cost, tamper-proof storage – that is a 21st century challenge!
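A back-of-envelope estimate shows how quickly the numbers escalate toward that exabyte scale. The bitrate and shift length below are assumptions for illustration, not figures from any deployment:

```python
# Rough sizing of the 50,000-body-cam example. Assumed numbers:
# HD video at ~5 Mbit/s, one 8-hour shift per camera per day.
CAMS = 50_000
MBIT_PER_S = 5      # assumed HD bitrate
HOURS_PER_DAY = 8   # assumed shift length

bytes_per_cam_day = MBIT_PER_S / 8 * 1e6 * HOURS_PER_DAY * 3600  # ≈ 18 GB/cam/day
total_tb_per_day = CAMS * bytes_per_cam_day / 1e12
print(f"{total_tb_per_day:.0f} TB/day, ~{total_tb_per_day * 365 / 1000:.1f} PB/year")
# → 900 TB/day, ~328.5 PB/year
```

At hundreds of petabytes per year, multi-year legal retention requirements push a single nationwide program toward the exabyte range.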

Electronic health records are another opportunity and concern. Once the current range of EHR incompatibilities is ironed out (I’m thinking 5-10 years), researchers and healthcare professionals will be able to examine outcomes for millions of people in virtual drug trials.

We are just barely scratching the surface of what Big Data means for infrastructure architecture. There will be many surprises.

That’s not all

None of these trends is isolated. They all have spillover effects that, over the decade, will reshape the industry.

Few of today’s storage companies will emerge unscathed by the large drop in gross margins. Enterprises will have to rethink how they architect, justify, and provision infrastructure. The shadow IT industry will struggle with commercializing specialized cloud products.

But the good news is that by 2025 data storage will be much more robust, scalable, performant, and cost-effective than it is today. That’s a future I can live with.



Robin Harris

Robin Harris is an independent analyst and consultant focused on emerging IT technologies, products and markets. His blogs, StorageMojo and Storage Bits at ZDnet, are ranked 10th and 11th out of 400 IT analyst blogs and have been linked to from the Wall Street Journal, All Things Digital, e-Week, ComputerWorld and hundreds of other sites.


  1. meh130 2 years ago

    I think there are some things missed here, regarding vertical and horizontal integration.

    Intel actually is a vertical integration play, in the sense it now manufactures most of the computer. When you look at the computers of the 1970s, CPUs, FPUs, memory controllers, HDD controllers, BIOS/PROM chips, video boards, system boards, NICs, etc. came from many different companies. Northbridge, southbridge, PCI, video, and Ethernet is now integrated on-chip. The last thing an Intel Xeon processor is, is a commodity. It is a monopoly vertically integrated system on a chip, albeit at a very aggressive price. The only thing off-chip, or more accurately, off system, are DRAM, hard disk drives and operating systems.

    Second, regarding commoditization, you mention Google and Amazon, which are total vertical integration plays. They even spin their own versions of Linux. Google and Amazon’s cloud services are commodities like wheat, but their internal tools and processes are more John Deere than pork bellies.

    It is the affordability of Intel x86 server CPUs, the commodity availability of HDDs and DRAM chips, and zero-cost open source software that has allowed hyperscalers to build highly efficient systems at low cost, while providing value add in upper-level software and efficiently managing the data center physical plant.

    In the hands of a hyperscaler, which can provide a managed service, SLAs and indemnification, true commodity storage is reasonable. But otherwise rolling one’s own storage system does make one a systems vendor.

    Consumable “commodity” storage will be somewhere in between: storage server hardware from Dell or SuperMicro, on an HCL from Microsoft or Nexenta, that you install and run, and an 800 number when things go wrong. More EVO:RAIL than Fry’s.

    If roll your own commodity storage does take off, put all of your 401k money into data protection companies. Because they will be the big winner.

  2. Nick 2 years ago

    But doesn’t it seem as though the Enterprise storage market moves at a glacial pace? If you’d been absent from the business for the past 15 years and looked at the storage market, you’d probably be forgiven for thinking it looks mostly like the mainframe-style storage paradigm of the 90s. Big Iron-type storage solutions, with some innovation happening in pockets… converged storage (Nasuni/etc.), data durability (LRC, Bitspead). Over the next decade? Maybe… maybe an infinitely commoditized cloud is going to fundamentally alter how we do Enterprise storage. At least, I hope it does. Maybe erasure encoding, combined with drastically reduced costs from companies like Nasuni, will help change the landscape. But for the near term, I still need to buy Enterprise storage. And I still see Microsoft as being the most visible storage disrupter via Storage Spaces (Azure trickle-down effect).. more here…
