Mixed IO Performance

For details on our mixed IO tests, please see the overview of our 2021 Consumer SSD Benchmark Suite.

Mixed Random IO Throughput Power Efficiency
Mixed Sequential IO Throughput Power Efficiency

The mixed random IO test does a great job of separating the low-end drives from the more mainstream models with TLC and DRAM. None of the DRAMless or QLC-based drives come close to the mainstream TLC NVMe drives, and only the Corsair MP400 QLC drive comes close to the overall performance of the Samsung 870 EVO SATA SSD. Among the low-end drives, the Samsung SSD 980 is clearly slower than the WD Blue SN550, and less power efficient. Samsung's NVMe drives are all among the most power-hungry during this test, and the SSD 980 doesn't deliver anywhere near as much performance for that power as its high-end siblings.

On the mixed sequential IO test the SSD 980 is more competitive with most of the other low-end NVMe SSDs, and performs much closer to the mainstream NVMe drives. The NVMe Host Memory Buffer feature also seems to help a bit on this test, while it had little impact on the mixed random IO test. With its more competitive performance on this test, the SSD 980's efficiency scores are up to par.

Mixed Random IO
Mixed Sequential IO

The performance trend for the Samsung SSD 980 across the mixed random IO test is fairly flat: it can't start out with high random read performance since this test is using 80% of the drive's space—far more than the HMB can help with.

On the mixed sequential IO test, the 980 shows increasing performance as the workload gets more write-oriented, though it and the WD Blue SN550 more or less plateau once reads are less than a third of the workload. The increasing trend illustrates how caching bursts of writes is easier on the drives than handling four separate threads each reading at QD1—but the low-end drives still have clear limits to the write volume they can handle.


Power Management Features

Real-world client storage workloads leave SSDs idle most of the time, so the active power measurements presented earlier in this review only account for a small part of what determines a drive's suitability for battery-powered use. Especially under light use, the power efficiency of an SSD is determined mostly by how well it can save power when idle.
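A quick duty-cycle calculation illustrates why idle draw dominates under light use. The figures below are hypothetical round numbers chosen for illustration, not measurements from any drive in this review:

```python
def avg_power_mw(active_mw: float, idle_mw: float, active_fraction: float) -> float:
    """Time-weighted average power for a simple active/idle duty cycle."""
    return active_mw * active_fraction + idle_mw * (1 - active_fraction)

# A drive that is active 2% of the time, drawing 2.2 W while active:
shallow_idle = avg_power_mw(2200, 600, 0.02)  # idling at 600 mW -> ~632 mW average
deep_idle    = avg_power_mw(2200, 50, 0.02)   # idling at 50 mW  ->  ~93 mW average
print(shallow_idle, deep_idle)
```

Cutting idle draw from 600 mW to 50 mW reduces the average power by almost a factor of seven, even though the active power is unchanged.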

For many NVMe SSDs, the closely related matter of thermal management can also be important. M.2 SSDs can concentrate a lot of power in a very small space. They may also be used in locations with high ambient temperatures and poor cooling, such as tucked under a GPU on a desktop motherboard, or in a poorly-ventilated notebook.

Samsung SSD 980
NVMe Power and Thermal Management Features
Controller: Samsung Pablo
Firmware: 1B4QFXO7

NVMe Version  Feature                                        Status
1.0           Number of operational (active) power states    3
1.0           Number of non-operational (idle) power states  2
1.1           Autonomous Power State Transition (APST)       Supported
1.2           Warning Temperature                            82 °C
1.2           Critical Temperature                           85 °C
1.3           Host Controlled Thermal Management             Supported
1.3           Non-Operational Power State Permissive Mode    Not Supported

The Samsung SSD 980 supports most of the usual NVMe power management features and claims very fast power state transition latencies, especially for its intermediate idle state that's supposed to get it down to about 50 mW.

Samsung SSD 980
NVMe Power States
Controller: Samsung Pablo
Firmware: 1B4QFXO7

Power State  Max Power  Active/Idle  Entry Latency  Exit Latency
PS 0         5.24 W     Active       -              -
PS 1         4.49 W     Active       -              -
PS 2         2.19 W     Active       -              0.5 ms
PS 3         50 mW      Idle         0.21 ms        1.2 ms
PS 4         5 mW       Idle         1 ms           9 ms

Note that the above tables reflect only the information provided by the drive to the OS. The power and latency numbers are often very conservative estimates, but they are what the OS uses to determine which idle states to use and how long to wait before dropping to a deeper idle state.
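As a rough sketch of how a host might act on those numbers (this mimics the general shape of an APST-style policy, not the actual logic of any particular OS driver), the drive's reported latencies can be used to pick the deepest idle state that fits a latency budget:

```python
# Power-state table as reported by the Samsung SSD 980 (see above):
# power state -> (max power in mW, operational?, entry ms, exit ms).
# Missing latencies are treated as zero here for simplicity.
POWER_STATES = {
    0: (5240, True, 0.0, 0.0),
    1: (4490, True, 0.0, 0.0),
    2: (2190, True, 0.0, 0.5),
    3: (50, False, 0.21, 1.2),
    4: (5, False, 1.0, 9.0),
}

def pick_idle_state(latency_budget_ms: float):
    """Deepest non-operational state whose entry + exit latency fits the budget."""
    best = None
    for ps, (_, operational, entry_ms, exit_ms) in sorted(POWER_STATES.items()):
        if not operational and entry_ms + exit_ms <= latency_budget_ms:
            best = ps  # states are listed shallow to deep, so keep the last fit
    return best

print(pick_idle_state(5.0))   # PS 3's 1.41 ms round trip fits; PS 4's 10 ms does not
print(pick_idle_state(15.0))  # PS 4's 10 ms round trip fits
```

A host tuned for responsiveness would set a tight budget and stop at PS 3, while a battery-focused policy with a looser budget would reach the 5 mW PS 4 state.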

Idle Power Measurement

SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state.

Idle power management for NVMe SSDs is far more complicated than for SATA SSDs. NVMe SSDs can support several different idle power states, and through the Autonomous Power State Transition (APST) feature the operating system can set a drive's policy for when to drop down to a lower power state. There is typically a tradeoff in that lower-power states take longer to enter and wake up from, so the right choice of power states may differ between desktops and notebooks, and between NVMe drivers. Additionally, there are multiple degrees of PCIe link power savings possible through Active State Power Management (ASPM).

We report three idle power measurements. Active idle is representative of a typical desktop, where none of the advanced PCIe link or NVMe power saving features are enabled and the drive is immediately ready to process new commands. Our Desktop Idle number represents what can usually be expected from a desktop system that is configured to enable SATA link power management, PCIe ASPM and NVMe APST, but where the lowest PCIe L1.2 link power states are not available. The Laptop Idle number represents the maximum power savings possible with all the NVMe and PCIe power management features in use—usually the default for a battery-powered system but rarely achievable on a desktop even after changing BIOS and OS settings. Since we don't have a way to enable SATA DevSleep on any of our testbeds, SATA drives are omitted from the Laptop Idle charts.

Idle Power Consumption - No PM
Idle Power Consumption - Desktop
Idle Power Consumption - Laptop

The active idle power draw of the Samsung SSD 980 is definitely lower than on Samsung's high-end NVMe drives, but given that this is a DRAMless 4-channel controller it seems like they could have done a bit better. On the other hand, the WD Blue SN550 doesn't even drop below 1 W without being put into a sleep state. Both the desktop and laptop idle power draw scores look good for the SSD 980. The measured wake-up latencies are a bit faster than the claimed 9 ms, and in line with other Samsung NVMe drives.

Idle Wake-Up Latency

Comments

  • WaltC - Tuesday, March 9, 2021 - link

    Trying to fathom what "retail-ready" means...;) All of my Samsung NVMe drives--yes, even those with "pro" and "evo" in the names--were purchased at retail. Why is this drive any more "retail-ready" than a 980 Pro, for instance, also sold at retail? Perhaps you meant to say, Samsung's "value drive," or "low-cost NVMe segment," etc, as it is no more "retail-ready" than any of Samsung's other drives--which are all sold at retail.
  • WaltC - Tuesday, March 9, 2021 - link

    I did see the "entry-level" qualifier, however. But I'm not aware that Samsung has ever manufactured an SSD that was not "retail-ready."
  • linuxgeex - Tuesday, March 9, 2021 - link

    Along with "entry-level"... Anandtech really should have benchmarked this drive on a dual-core 6th gen laptop or 8th-gen 1L desktop minipc, because that's the kind of devices this is going to land in. And using the host memory for the cache and the extra driver bloat is going to hurt those weak machines. I am certain that a SATA M.2 with DRAM would end up outperforming this for most of the actual retail customers. Where this is a fabulous deal is in HEDT machines that want a lot of cheap SSD storage to act as front cache for a RAID, because they have RAM and cores to spare, so the loss of a relatively piddling amount of RAM to the host cache and driver bloat when they have 16-32x the RAM, isn't such a big deal.
  • Billy Tallis - Tuesday, March 9, 2021 - link

    I think you're vastly overestimating what's involved in making HMB work. It's a straightforward feature for the host OS to support and does not "bloat" the NVMe driver. Allocating a 64MB buffer out of the host RAM is a drop in the bucket, or: two frames of 4k image data. The SSD will only touch the HMB a few times per IO, plus maybe a bit more often when doing heavy background operations. The total PCIe bandwidth used by HMB is vastly smaller than the bandwidth used for transferring user data to and from the SSD. That means HMB is using an even more negligible fraction of the CPU's DRAM bandwidth. The CPU execution time used by HMB is exactly zero. The resource requirements of HMB are so low that UFS has copied the feature to accelerate smartphone storage.
  • Byte - Monday, August 30, 2021 - link

    The HMB helps track the LBA mapping, which is why most SATA drives suffer so much when there is no DRAM; it takes much longer to search for the block. With HMB, PCIe SSDs suffer much less from being DRAMless than a SATA SSD does.
  • SarahKerrigan - Tuesday, March 9, 2021 - link

    The article refers to it as retail-ready entry-level. As opposed to OEM entry-level, which Samsung has produced in the past.
  • alfalfacat - Tuesday, March 9, 2021 - link

    Indeed, some may remember the PM981 OEM drive that came out like 6mo before the 970 EVO, which you could get if you were willing to forgo the retail package.

  • antonkochubey - Tuesday, March 9, 2021 - link

    Also forgo firmware updates and 5 year warranty.
  • serendip - Thursday, March 11, 2021 - link

    Samsung now has a bunch of OEM drives like the PM991, PM991a and PM9A1.

  • linuxgeex - Tuesday, March 9, 2021 - link

    exactly - retail-channel-ready.
