IOPS versus Throughput – Measuring Performance of Your Storage

On a recent consulting job, I was asked to explain the difference between throughput, which is measured in MB/s, and IOPS, because I had recommended a storage array that would provide far more IOPS and throughput than the one currently in use.

For this client I had presented the expected IOPS for each type of disk (7200 RPM, 10K, and 15K) and shown how to calculate the total IOPS per expansion unit, but I had failed to explain the available throughput beyond noting the increase in uplink speed to 6 Gb/s.

Throughput is a measurement of the average number of megabytes transferred over a period of time for a specific file size.  Historically this was measured with a single computer making a single request to a disk, but today, with large storage arrays serving many clients, we need to measure many small reads and writes rather than a single computer making one large request.

To estimate throughput, you can use the following formula:

MB/s = IOPS * KB per IO / 1024
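As a quick sanity check, here is the same formula as a small Python function. The function name and the sample values are mine, not from any particular tool:

    def throughput_mb_per_s(iops, io_size_kb):
        # Convert an IOPS figure to MB/s for a given I/O (block) size in KB.
        return iops * io_size_kb / 1024

    # Example: 1450 IOPS at a 64 KB block size
    print(throughput_mb_per_s(1450, 64))  # 90.625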

In this formula, the IO size is your block size.  For an array of ten 10K SAS drives, each providing approximately 145 IOPS, we will use 1450 IOPS.  In the real world this figure will differ based on the RAID configuration: every RAID level other than RAID 0 imposes a penalty on writes (reads are unaffected).  RAID 1 and RAID 10 carry a write penalty of 2, RAID 5 a penalty of 4, and RAID 6, which is popular among SANs, a penalty of 6.
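To see how the write penalty eats into raw IOPS, here is a rough Python sketch. The helper name and the 50/50 read/write mix are illustrative assumptions, not measurements from any particular array:

    def effective_iops(raw_iops, write_penalty, read_fraction):
        # Reads cost one backend I/O each; each front-end write costs
        # write_penalty backend I/Os (2 for RAID 1/10, 4 for RAID 5, 6 for RAID 6).
        write_fraction = 1 - read_fraction
        return raw_iops / (read_fraction + write_fraction * write_penalty)

    # Ten 10K SAS drives at ~145 IOPS each, RAID 10, assumed 50/50 read/write mix
    print(round(effective_iops(1450, 2, 0.5)))  # ~967 usable IOPS

With that caveat noted, plugging the raw 1450 IOPS into the formula at a few common block sizes gives: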

MB/s = 1450 * 64 / 1024 = 90.625 MB/s (64 KB blocks)
MB/s = 1450 * 128 / 1024 = 181.25 MB/s (128 KB blocks)
MB/s = 1450 * 256 / 1024 = 362.5 MB/s (256 KB blocks)

Hard drive manufacturers will advertise 100 MB/s of throughput for a drive, so why wouldn't you see 1000 MB/s of throughput from ten of them in a RAID 0 configuration?  Because in the real world you are not running the edge cases they are.  Your I/O workload will be more than a single sequential sector read.  Disk manufacturers don't share their engineering specifications or the criteria they used to benchmark their drives, but one thing you can bank on is that they do whatever it takes to get the best possible numbers for marketing.  Your results will vary drastically.

3 Comments

  • Great post, Tim. Like most DBAs, I’ve been in a position before to micromanage IOPS and throughput on a system that grew too fast. IOPS are the most important metric for some operations, and throughput for others. In general, SANs help both by increasing the number of read/write heads and interfaces. I found on a system with high writes, moving from RAID 5 to RAID 10 helped tremendously. And on a system with high reads (i.e. SQL Server with 8GB RAM and 10GB queries), random read time is crucial – the data is never read sequentially when it’s crunch time, and IOPS come in real handy. I’ve also tested the difference between 10k spindles, 15k spindles and SSD (FusionIO)… 15k spindles are about 50% faster than 10k in random and sequential speed, because even though both have a 6Gb (about 600MB/sec) interface, that is only burst rate; the spindle itself peaks due to platter limitations. Put a top-notch SSD on the 6Gb interface, and it has hardly any latency. Low-demand reads, writes, etc. are < 1ms instead of 5ms. I've seen high-contention activity on spindles push latency up to 200ms – and that's where the IOPS metric shines.
