All-flash means quality of service – or does it?

If you want the best quality of service (QoS) management from your storage array, enterprise flash is a necessity – but far from the only one. Put simply, going all-flash won’t by itself give you QoS – no matter what some people might claim.

QoS matters for all sorts of reasons, perhaps the biggest being that you need your users to get a consistently good experience. It is especially important in virtualised environments, because virtualisation introduces a large amount of randomness into your storage traffic.

The reason why QoS needs flash is simple

Mechanical storage is slow, and it has too much variability. The data you want might be about to spin smoothly under the read head all in one go, or it might be scattered around the disk on a hundred different tracks.

And once you share or consolidate storage, then the data pipe into the disk might be wide open, or it might already be congested by several other applications. With flash, all your data is the same ‘read/write distance’ away so file fragmentation is irrelevant, and you can pump much, much more in or out at the same time.
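
To put rough numbers on that, here is a back-of-the-envelope comparison of random-read service times – the figures are typical ballpark values rather than measurements of any particular device:

```python
# Ballpark random-read service times; figures are typical values,
# not specifications for any particular drive.

HDD_SEEK_MS = 8.0        # average seek time for a 7,200 rpm disk
HDD_ROTATION_MS = 4.17   # half a revolution at 7,200 rpm
SSD_READ_MS = 0.1        # roughly 100 microseconds for an enterprise SSD

hdd_ms = HDD_SEEK_MS + HDD_ROTATION_MS
print(f"HDD random read: ~{hdd_ms:.1f} ms, or about {1000 / hdd_ms:.0f} IOPS per spindle")
print(f"SSD random read: ~{SSD_READ_MS:.1f} ms, i.e. tens of thousands of IOPS per device")
print(f"That is a gap of roughly {hdd_ms / SSD_READ_MS:.0f}x per operation")
```

Just as important as the averages is the spread: the disk figure swings from near zero (data already under the head) to well past 12 ms, while the flash figure barely moves – and that consistency is what QoS is built on.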

Problem solved, you might think – flash gives me consistent QoS

Well, not quite – not unless you only run a single workload. Any all-flash array (AFA) is a system, not merely a block of memory, and once you need QoS the whole system matters. The other elements – controllers, network ports, the software stack – have their own limitations, and when flash stops the storage medium being the bottleneck, the bottleneck simply moves elsewhere. So the whole system must be designed with guaranteed QoS in mind – and not all AFAs are.

Then there is granularity. For enterprise-wide QoS you need the ability to treat each application differently. Otherwise, the application that generates the most I/O requests will get too great a share, at the expense of equally important but less data-hungry applications. So your business intelligence system might swamp out a more time-critical production system, or a heavy database query or a VDI boot-storm could starve the web and email servers that share the same storage.

Watch out, though, for vendors who claim to solve this problem simply by prioritising applications or throttling their access to storage. A much better route is to enforce a minimum level of data throughput for each important application, in the shape of a minimum allocation of IOPS. That way a greedy application can eat its fill when the others are quiet, but no one goes hungry when everyone needs to eat.
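
To make the difference concrete, here is a minimal sketch of minimum-IOPS allocation. The workload names, numbers and single-pass logic are purely illustrative – they do not describe any particular vendor's QoS implementation:

```python
# Minimum-IOPS allocation, as opposed to simple prioritisation or throttling.
# Workload names and figures are illustrative only.

ARRAY_CAPACITY_IOPS = 100_000

workloads = {
    # name:           (guaranteed minimum IOPS, current demand IOPS)
    "production_db":  (40_000, 25_000),
    "web_and_email":  (15_000, 10_000),
    "bi_reporting":   (10_000, 90_000),   # the 'greedy' application
}

def allocate(capacity, workloads):
    """Give every workload the smaller of its floor and its demand, then share
    the spare capacity among those still asking for more, pro rata to demand.
    (Single pass only - a real scheduler would iterate or use a token bucket.)"""
    alloc = {name: min(floor, demand) for name, (floor, demand) in workloads.items()}
    spare = capacity - sum(alloc.values())
    unmet = {name: demand - alloc[name]
             for name, (_, demand) in workloads.items() if demand > alloc[name]}
    total_unmet = sum(unmet.values())
    for name, extra in unmet.items():
        alloc[name] += min(extra, spare * extra / total_unmet)
    return alloc

for name, iops in allocate(ARRAY_CAPACITY_IOPS, workloads).items():
    print(f"{name:15} {iops:>9,.0f} IOPS")
```

Run as written, bi_reporting gets to eat its fill of the spare capacity because the other two are quiet; raise production_db's demand and it is still guaranteed its 40,000 IOPS floor – which is exactly the promise a throttling-only scheme cannot make.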

There are two other key questions for the buyer to ask

One is the effect of adding de-duplication and compression: these are widely used to reduce the effective cost per terabyte of an AFA, but they can add latency and a degree of variability. Does your AFA supplier let you enable and disable them per workload, so you can turn them off for your QoS-sensitive applications if need be?
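
As a sketch of what that per-workload control might look like – the volume names, the reduction ratio and the policy structure here are hypothetical, since the knobs an array actually exposes vary by vendor:

```python
# Hypothetical per-volume data-reduction policy. Volume names, the reduction
# ratio and the policy shape are illustrative; check what your array exposes.

RAW_COST_PER_TB = 400.0      # illustrative raw flash cost
REDUCTION_RATIO = 4.0        # illustrative dedupe + compression ratio

volumes = {
    # volume:         latency-sensitive?
    "oltp_database":  True,    # turn data reduction off here
    "vdi_desktops":   False,
    "file_archive":   False,
}

def policy(latency_sensitive):
    """Disable dedupe/compression where consistent latency matters more than
    effective capacity; leave it on everywhere else."""
    return {"deduplication": not latency_sensitive,
            "compression":   not latency_sensitive}

for vol, sensitive in volumes.items():
    p = policy(sensitive)
    cost = RAW_COST_PER_TB if sensitive else RAW_COST_PER_TB / REDUCTION_RATIO
    print(f"{vol:15} {p}  effective cost ~${cost:,.0f}/TB")
```

The point is simply that the trade-off – lower effective cost per terabyte versus added latency and variability – should be yours to make volume by volume, not something baked in array-wide.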

And how much can you automate? Ideally an AFA should be self-optimising, or at the very least semi-automated. Who wants to be manually adjusting QoS parameters all day? If you can’t define policy requirements and let the system get on with it – with appropriate levels of reporting and monitoring, of course – is it worth having?
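
As a rough illustration of the policy-driven approach – the application names, thresholds and sample statistics below are all invented, and a real deployment would pull its numbers from the array's own monitoring interface:

```python
# Policy-driven QoS monitoring: declare what each application needs, then let
# a loop compare the policy with what the array is delivering. All names,
# thresholds and sample figures are illustrative.

policies = {
    # app:            minimum IOPS, maximum acceptable latency (ms)
    "production_db":  {"min_iops": 40_000, "max_latency_ms": 1.0},
    "web_and_email":  {"min_iops": 15_000, "max_latency_ms": 2.0},
    "bi_reporting":   {"min_iops": 10_000, "max_latency_ms": 5.0},
}

def sample_stats():
    """Stand-in for a call to the array's monitoring interface."""
    return {
        "production_db": {"iops": 38_500, "latency_ms": 1.4},
        "web_and_email": {"iops": 16_200, "latency_ms": 0.9},
        "bi_reporting":  {"iops": 55_000, "latency_ms": 3.1},
    }

def check(policies, stats):
    """Report any application falling short of its policy, rather than asking
    an administrator to watch dashboards and retune parameters by hand."""
    for app, p in policies.items():
        s = stats[app]
        if s["iops"] < p["min_iops"] or s["latency_ms"] > p["max_latency_ms"]:
            print(f"ALERT {app}: {s['iops']:,} IOPS at {s['latency_ms']} ms "
                  f"(policy: at least {p['min_iops']:,} IOPS, at most {p['max_latency_ms']} ms)")

check(policies, sample_stats())
```

Whether the array then rebalances itself or just raises an alert for a human, the administrator is defining outcomes, not turning dials.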
