Performance Consistency

In our Intel SSD DC S3700 review I introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result needed some additional testing to show that. The reason we don't have consistent IO latency with SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying it can result in higher peak performance at the expense of much worse worst-case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below I took a freshly secure erased SSD and filled it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next I kicked off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. I ran the test for just over half an hour, nowhere near as long as our steady state tests run, but enough to give me a good look at drive behavior once all spare area is used up.
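
For those curious about the structure of the workload, here's a minimal Python sketch of the fill-then-torture sequence. It's an illustration only: it runs at QD1 against an ordinary file rather than QD32 against the raw device, the file name and region size are placeholders, and it won't reproduce the numbers below.

```python
# Simplified sketch of the preconditioning + torture workload described above.
# POSIX-only (uses os.pwrite); real testing targets the raw drive at QD32.
import os, random

PATH = "testfile.bin"          # stand-in for the drive under test
SIZE = 1 * 1024**3             # 1 GiB test region (the real test spans all LBAs)
BLOCK = 4096                   # 4KB writes
NUM_RANDOM_WRITES = 100_000    # the real run is time-bound (~30+ minutes)

fd = os.open(PATH, os.O_RDWR | os.O_CREAT)

# Step 1: sequential fill so every LBA in the test region holds data.
chunk = 1024 * 1024
for off in range(0, SIZE, chunk):
    os.pwrite(fd, os.urandom(chunk), off)

# Step 2: 4KB random writes with incompressible (random) payloads.
for _ in range(NUM_RANDOM_WRITES):
    off = random.randrange(0, SIZE // BLOCK) * BLOCK
    os.pwrite(fd, os.urandom(BLOCK), off)

os.close(fd)
```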

I recorded instantaneous IOPS every second for the duration of the test. I then plotted IOPS vs. time and generated the scatter plots below. The graphs within each set share the same scale. The first two sets use a log scale for easy comparison, while the last set of graphs uses a linear scale that tops out at 40K IOPS for better visualization of differences between drives.
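
Turning the per-second log into the scatter plots is simple; a minimal sketch (assuming a hypothetical iops_log.csv with one "seconds,iops" row per sample) would look something like this:

```python
# Plot per-second IOPS samples as a scatter plot, matching the layout
# described above (log scale, 2000 second window).
import csv
import matplotlib.pyplot as plt

t, iops = [], []
with open("iops_log.csv") as f:
    for row in csv.reader(f):
        t.append(float(row[0]))
        iops.append(float(row[1]))

plt.scatter(t, iops, s=4)
plt.yscale("log")              # the first two sets of graphs use a log scale
plt.xlabel("Time (s)")
plt.ylabel("IOPS (4KB random write, QD32)")
plt.xlim(0, 2000)              # full 2000 second test window
plt.show()
```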

The high level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews, however, I varied the percentage of the drive that I filled/tested depending on the amount of spare area I was trying to simulate. Each button is labeled with the user capacity the drive would have been advertised at had the vendor shipped it with that amount of spare area. If you want to replicate this on your own, all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case, but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives we've tested here, but not all controllers may behave the same way.
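
If you want to work out the partition size for a given amount of total spare area, a rough calculation looks like the sketch below. This assumes the simple definition of spare area as a fraction of raw NAND; vendors account for over provisioning slightly differently (roughly 7% already comes from the GiB/GB difference), so treat the output as an approximation rather than the exact figures behind the buttons.

```python
# Rough helper for sizing the partition in the manual over-provisioning
# approach described above. spare_fraction = (raw NAND - partition) / raw NAND.
def partition_size_bytes(raw_nand_gib: float, spare_fraction: float) -> int:
    raw_bytes = raw_nand_gib * 1024**3
    return int(raw_bytes * (1.0 - spare_fraction))

# Example: a drive with 256GiB of NAND, targeting 25% total spare area.
size = partition_size_bytes(256, 0.25)
print(f"Create a partition of about {size / 1000**3:.0f} GB "
      f"({size / 1024**3:.0f} GiB) and leave the rest unallocated.")
```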

The first set of graphs shows the performance data over the entire 2000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing in that case is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).
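
To put rough numbers on why performance falls off: the host-visible write rate scales roughly inversely with write amplification once every new write forces garbage collection. The figures in this sketch are made-up placeholders for illustration, not measurements of any drive in this review.

```python
# Illustrative only: once free blocks run out, each host write forces extra
# NAND programs (garbage collection), so host-visible IOPS falls roughly as
# raw NAND program rate divided by write amplification.
def host_iops(raw_program_iops: float, write_amplification: float) -> float:
    return raw_program_iops / write_amplification

for wa in (1.0, 3.0, 10.0):
    print(f"WA = {wa:>4}: ~{host_iops(40_000, wa):,.0f} host IOPS")
```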

The second set of graphs zooms in on the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.

[Interactive chart: 4KB random write IOPS over the full 2000 second test — Corsair Neutron 240GB, Crucial m4 256GB, Crucial M500 960GB, Plextor M5 Pro Xtreme 256GB, Samsung SSD 840 Pro 256GB; toggles: Default, 25% Spare Area]

Like most consumer drives, the M500 exhibits the same pattern of awesome performance for a short while before substantial degradation. The improvement over the m4 is just insane though. Whereas the M500 sees its floor at roughly 2600 IOPS, the m4 will drop down to as low as 28 IOPS. That's slower than mechanical hard drive performance and around the speed of random IO in a mainstream ARM based tablet. To say that Crucial has significantly improved IO consistency from the m4 to the M500 would be an understatement.

Plextor's M5 Pro is an interesting comparison because it uses the same Marvell 9187 controller as the M500. While both drives attempt to be as consistent as possible, the differences in firmware/GC routines show up clearly in these charts. Plextor's performance is both higher and more consistent than the M500's.

The 840 Pro comparison is interesting because Samsung manages better average performance, but has considerably worse consistency compared to the M500. The 840 Pro does an amazing job with 25% additional spare area however, something that can't be said for the M500. Although performance definitely improves with 25% spare area, the gains aren't as dramatic as what happens with Samsung. Although I didn't have time to run through additional spare area points, I do wonder if we might see better improvements with even more spare area when you take into account that ~7% of the 25% spare area is reserved for RAIN.
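
As a back-of-the-envelope illustration of that last point (reading it as roughly seven percentage points of the configured spare area going to RAIN parity):

```python
# Approximate arithmetic only; these are not exact firmware figures.
configured_op = 0.25   # spare area set aside in this test
rain_share    = 0.07   # ~7% consumed by RAIN parity (per the text above)
effective_op  = configured_op - rain_share
print(f"Spare area actually available for GC: ~{effective_op:.0%}")   # ~18%
```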

[Interactive chart: 4KB random write IOPS at the onset of steady state (log scale) — Corsair Neutron 240GB, Crucial m4 256GB, Crucial M500 960GB, Plextor M5 Pro Xtreme 256GB, Samsung SSD 840 Pro 256GB; toggles: Default, 25% Spare Area]

I am relatively pleased by the M500's IO consistency without any additional over provisioning. I suspect that anyone investing in a 960GB SSD would want to use as much of it as possible. At least in the out of box scenario, the M500 does better than the 840 Pro from a consistency standpoint. None of these drives holds a candle to Corsair's Neutron however. The Neutron's LAMD controller shows its enterprise roots and delivers remarkably high and consistent performance out of the box.

[Interactive chart: 4KB random write IOPS at the onset of steady state (linear scale, up to 40K IOPS) — Corsair Neutron 240GB, Crucial m4 256GB, Crucial M500 960GB, Plextor M5 Pro Xtreme 256GB, Samsung SSD 840 Pro 256GB; toggles: Default, 25% Spare Area]

Comments (111)

  • Solid State Brain - Saturday, April 13, 2013 - link

    In theory, the spare area can be only configured on a clean drive, which means one would have to secure erase it (and therefore lose all data) and then create a partition smaller than the drive's maximum user capacity. The remaining unused (raw, unpartitioned) capacity should then be used by the drive as spare area for wear leveling operations, in addition to the factory OP area (usually derived from the GiB->GB capacity difference). In practice it *should* be sufficient to notify the drive that the empty space is actually empty with a TRIM command before resizing the partition.

    In your case the Samsung Magician software allows you to double the drive's factory spare area (no other adjustment possible, at least in version 4). It doesn't perform a secure erase, so perhaps it isn't really necessary after all.

    I don't know however if the Samsung 840 controller actually actively detects when a certain portion of the drive is "raw/unpartitioned". Theory dictates that it shouldn't be able to discern that without the OS somehow telling it so.

    If a partition-wide TRIM operation is enough, then one can increase overprovisioning manually on a live/used system by:

    1) Performing a full-system TRIM with the Windows 8 integrated "drive defrag/optimization" tool (or with the "fstrim" command line tool in Linux, although this works only on ext4 partitions), or with dedicated third party utilities (some commercial defragmentation software performs a system-wide trim on SSDs instead of a regular defrag).
    2) Resizing the last partition manually with Computer Management > Disk Management > Shrink Partition.

    Anyway, in practice all this hassle is going to benefit you only if you routinely perform dozens of gigabytes of sustained writes per day in a possibly trim-less environment. I doubt very much that most users would be able to feel any difference with their workloads.
  • AlB80 - Saturday, April 13, 2013 - link

    "Total NAND on-board" and "DRAM" values are specified in "GB" and "MB", but it should be "GiB" and "MiB".
  • JellyRoll - Saturday, April 13, 2013 - link

    Shut up JohnW lol
  • JellyRoll - Saturday, April 13, 2013 - link

    There is a huge misstatement in the article..."I introduced a new method of characterizing performance: looking at the latency of individual operations over time."
    First: it isn't individual operations, several thousand are taking place per one second interval.
    Second: Anand did not introduce this type of testing, it was a blatant copy of another tech website's testing.
  • twtech - Sunday, April 14, 2013 - link

    I think it's kind of interesting that in the comments people are looking at the performance figures and saying, "Oh, it doesn't perform as well as a Samsung 840 Pro, so I'm disappointed."

    I have a couple of computers booting off an M4 (slower than the M500), and one that has a Samsung 830 as the boot drive. The Samsung is quite a bit faster in benchmarks, but do I notice? Nope, not really. The jump to having any SSD at all is significant. The jump from one SSD to another - provided neither has something like firmware issues causing stuttering, as some old models did - is negligible.

    I think the more important factor here is that we have a nearly 1TB SSD for $600 - less than what 512GB drives were selling for 1 year ago. That's big enough that many users may not even need a separate mechanical storage drive.
  • JellyRoll - Sunday, April 14, 2013 - link

    Part of the issue is the unrealistic test parameters. Testing with such ridiculously severe workloads is not representative of real-world use.
  • Wolfpup - Monday, April 15, 2013 - link

    Unfortunately I couldn't wait for the launch of the M500...had to "make do" with a 512GB M4. Oh well, it's still a great drive!
  • random2 - Monday, April 15, 2013 - link

    I cannot imagine anyone who doesn't have some sort of tech background trying to read these articles. Granted, I am no certified IT professional, but I have been very interested in hardware and software for over a decade, and have been a reader of Anandtech for almost as long. Which brings me to this: can we not have some of the terms, abbreviated or otherwise, hyperlinked at least to an article providing further explanation?

    Case in point; ONFI 3.0
  • af3 - Tuesday, April 16, 2013 - link

    I was thinking of ordering a $350 256G Lacie Thunderbolt Rugged external SSD for the purposes of booting another OS without needing to use space on my internal/main (SSD) drive.

    Can anyone tell me whether there might be a superior (in terms of performance and cost) alternative that might utilize something like one of these new Micron drives?

    Does anyone know whether or not the Lacie is fast and whether or not I might have something better by getting another external Thunderbolt device and installing one of these Micron drives?
