The Inland Performance Plus 2TB SSD Review: Phison's E18 NVMe Controller Tested

by Billy Tallis on May 13, 2021 8:00 AM EST
Phison kicked off the transition to PCIe 4.0 for storage in 2019, and dominated the high-end SSD market for over a year with the only available PCIe 4.0 solution. There are now several competing PCIe 4.0 NVMe SSDs and controllers that handily outperform the Phison E16, but Phison has stayed competitive by bringing out a second-generation PCIe 4.0 controller, the E18. Today we're looking at one of many drives built around that controller: the Inland Performance Plus 2TB.
The Inland brand is owned by the parent company of American retailer Micro Center. Most or all Inland-branded SSDs are simply Phison reference designs with little or no customization beyond cosmetics. Inland SSDs are frequently great values, especially with Micro Center's in-store-only deals, but even their online prices tend to be very competitive. Part of the discount comes from their tendency toward shorter warranty periods: the Inland Performance Plus has only a three-year warranty despite being a high-end flagship model. Fortunately, its total write endurance rating matches competing drives that carry five-year warranties, and the SSD hardware itself is identical to other brands selling the same SSD reference design under different heatsinks and labels.
As part of the second wave of PCIe 4.0 SSD controllers, the Phison E18 aims to use substantially all of the performance available from the PCIe 4.0 x4 interface: sequential transfers up to around 7.4 GB/s and random IO up to about 1M IOPS. Hitting that level of performance while staying within M.2 power delivery and thermal dissipation limits required migrating to 12nm FinFET fabrication from the cheaper 28nm widely used by PCIe 3.0 SSD controllers and by the Phison E16. But even so, the Phison E18 can draw more power than the E16 controller because the increase in performance is so large. Competition for the Phison E18 includes in-house controllers used in the latest flagship consumer SSDs from Samsung and Western Digital, newcomer Innogrit's IG5236 Rainier controller, and Silicon Motion's upcoming SM2264 controller.
**Phison High-End NVMe SSD Controller Comparison**

| | Phison E12 | Phison E16 | Phison E18 |
|---|---|---|---|
| Manufacturing Process | 28 nm | 28 nm | 12 nm |
| CPU Cores | 2x Cortex R5 | 2x Cortex R5 | 3x Cortex R5 |
| Error Correction | 3rd Gen LDPC | 4th Gen LDPC | 4th Gen LDPC |
| Host Interface | PCIe 3.0 x4 | PCIe 4.0 x4 | PCIe 4.0 x4 |
| NVMe Version | NVMe 1.3 | NVMe 1.3 | NVMe 1.4 |
| Max Capacity | 16 TB | 16 TB | 16 TB |
| Sequential Read | 3.4 GB/s | 5.0 GB/s | 7.4 GB/s |
| Sequential Write | 3.2 GB/s | 4.4 GB/s | 7.0 GB/s |
| 4KB Random Read IOPS | 700k | 750k | 1M |
| 4KB Random Write IOPS | 600k | 750k | 1M |
| Controller Power | 2.1 W | 2.6 W | 3.0 W |
| Sampling | Q2 2018 | Q1 2019 | Q1 2020 |
| Retail SSD Availability | Q4 2018 | Q3 2019 | Q4 2020 |
The Inland Performance Plus does not quite hit the theoretical limits of the Phison E18 controller. The 1TB model is clearly handicapped on some performance metrics compared to the 2TB model, but even the latter is only rated for 7GB/s sequential reads and 650-700k IOPS instead of 7.4GB/s and 1M IOPS. This mostly comes down to the Inland Performance Plus and other current E18 drives using 96L TLC NAND with 1200MT/s IO between the NAND and the controller, while the E18 can support up to 1600MT/s IO. A new round of E18-based products will start hitting the market soon using Micron 176L TLC that operates with the higher IO speed and should bring some other performance and efficiency improvements. We expect some of these new drives to be announced at Computex next month.
**Inland Performance Plus**

| Capacity | 1 TB | 2 TB |
|---|---|---|
| Form Factor | M.2 2280 PCIe 4.0 x4 | M.2 2280 PCIe 4.0 x4 |
| NAND Flash | Micron 96L 3D TLC | Micron 96L 3D TLC |
| Sequential Read (GB/s) | 7.0 | 7.0 |
| Sequential Write (GB/s) | 5.5 | 6.85 |
| Random Read IOPS (4kB) | 350k | 650k |
| Random Write IOPS (4kB) | 700k | 700k |
| Write Endurance | 700 TB | 1400 TB |
| Retail Price (In Store Only) | $189.99 | |
Like most drives using the Phison E18 controller, the Inland Performance Plus comes with a fairly substantial heatsink installed. The controller package is small enough to share space with a DRAM package and four NAND packages on the front of the PCB, which means there's a lot of heat concentrated in a small area. (The Inland Performance Plus also has DRAM and NAND on the back of the PCB.) PCIe 4.0 has barely started showing up in laptops and using the full performance of a drive like the Inland Performance Plus requires more power than most laptops are able to sink away from their M.2 slots, so it's reasonable to regard this drive as pretty much desktop-only.
The most important competitors for the Inland Performance Plus are other Phison E18 drives and the current flagship PCIe 4.0 drives from Samsung and Western Digital. We have fresh results in this review for the Samsung 980 PRO, retested with the latest 3B2QGXA7 firmware. We've also included results from some older top-of-the-line drives: the Intel Optane SSD 905P and the Samsung 970 PRO (Samsung's last consumer NVMe drive to use MLC NAND), plus the Silicon Power US70, representing the first wave of PCIe 4.0 drives that used the Phison E16 controller.
| Drive | Interface | Controller | NAND |
|---|---|---|---|
| WD Black SN850 | PCIe 4.0 x4 | WD Custom G2 | 96L TLC |
| Samsung 980 PRO | PCIe 4.0 x4 | Samsung Elpis | 128L TLC |
| Silicon Power US70 | PCIe 4.0 x4 | Phison E16 | 96L TLC |
| Intel Optane SSD 905P | PCIe 3.0 x4 | Intel Custom | 3D XPoint G1 |
| Samsung 970 PRO | PCIe 3.0 x4 | Samsung Phoenix | 64L MLC |
The rest of the drives included in this review are more mainstream models: mostly PCIe 3.0 drives, some with four-channel controllers instead of the eight channels usual at the high end, and even a few with QLC NAND. This includes the Inland Premium, which is based on the Phison E12S controller and TLC NAND.
| Drive | Interface | Controller | NAND |
|---|---|---|---|
| Inland Premium | PCIe 3.0 x4 | Phison E12S | 96L TLC |
| SK hynix Gold P31 | PCIe 3.0 x4 | SK hynix Custom (4ch) | 128L TLC |
| Samsung 970 EVO Plus | PCIe 3.0 x4 | Samsung Phoenix | 92L TLC |
| WD Black SN750 | PCIe 3.0 x4 | WD Custom G1 | 64L TLC |
| HP EX950 | PCIe 3.0 x4 | SM2262EN | 64L TLC |
| Kingston KC2500 | PCIe 3.0 x4 | SM2262EN | 96L TLC |
| Intel SSD 670p | PCIe 3.0 x4 | SM2265 (4ch) | 144L QLC |
| ADATA Gammix S50 Lite | PCIe 4.0 x4 | SM2267 (4ch) | 96L TLC |
| Corsair MP600 CORE | PCIe 4.0 x4 | Phison E16 | 96L QLC |
Comments
mode_13h - Sunday, May 16, 2021

> programs were doing their own thing, till OS's began to clamp down.
DOS was really PCs' biggest Achilles heel. It wasn't until Windows 2000 that MS finally offered a mainstream OS that really provided all the protections available since the 386 (some, even dating back to the 286).
Even then, it took them 'till Vista to figure out that ordinary users having admin privileges was a bad idea.
In the Mac world, Apple was doing even worse. I was shocked to learn that MacOS had *no* memory protection until OS X! Of course, OS X is BSD-derived and a fully-decent OS.
FunBunny2 - Monday, May 17, 2021

"I was shocked to learn that MacOS had *no* memory protection until OS X!"
IIRC, until Apple went the *nix way, it was just co-operative multi-tasking, which is worth a box of Kleenex.
Oxford Guy - Tuesday, May 18, 2021

Apple had protected memory long before Microsoft did, and before Motorola had produced a non-buggy, well-functioning MMU that could run it at good speed.
One of the reasons the Lisa platform was slow was that Apple had to kludge protected memory support.
The Mac was originally envisioned as a $500 home computer, which was just above toy pricing in those days. It wasn’t designed to be a minicomputer on one’s desk like the Lisa system, which also had a bunch of other data-safety features like ECC and redundant storage of file system data/critical files — for hard disks and floppies.
The first Mac had a paltry amount of RAM, no hard disk support, no multitasking, no ECC, no protected memory, worse resolution, a poor-quality file system, etc. But, it did have a GUI that was many many years ahead of what MS showed itself to be capable of producing.
mode_13h - Tuesday, May 18, 2021

> Apple had protected memory long before Microsoft did
I mean in a mainstream product, enabled by default. Through MacOS 8, Apple didn't even enable virtual memory by default!
> The first Mac
I'm not talking about the first Mac. I'm talking about the late 90's, when Macs were PowerPC-based and MS had Win 9x & NT 4. Linux was already at 2.x (with SMP-support), BeOS was shipping, and OS/2 was sadly well on its way out.
mode_13h - Sunday, May 16, 2021

> C has been described as the universal assembler.
It was created as a cross-platform alternative to writing operating systems in assembly language!
> a C program can be blazingly fast, if the code treats the machine as a Control Program would.
No, that's just DOS. C came out of the UNIX world, where C programs are necessarily as well-behaved as anything else. The distinction you're thinking of is really DOS vs. real operating systems!
> I'm among those who spent more time than I wanted, editing with Norton Disk Doctor.
That's cuz you be on those shady BBS' dog!
mode_13h - Sunday, May 16, 2021

> I think there's been a view inculcated against C++
C++ is a messy topic, because it's been around for so long. It's a little hard to work out what someone means by it. STL, C++11, and generally modern C++ style have done a lot to alleviate the grievances many had with it. Before the template facility worked well, inheritance was the main abstraction mechanism. That forced more heap allocations, and the use of virtual functions often defeated compilers' ability to perform function inlining.
It's still the case that C++ tends to hide lots of heap allocations. Where a C programmer would tend to use stack memory for string buffers (simply because it's easiest), the easiest thing in C++ is basically to put them on the heap. Now, an interesting twist is that heap overrun bugs are both easier to find and less susceptible to exploits than stack overruns. So, what used to be seen as a common inefficiency of C++ code is now regarded as providing reliability and security benefits.
Another thing I've noticed about C code is that it tends to do a lot of work in-place, whereas C++ does more copying. This makes C++ easier to debug, and compilers can optimize away some of those copies, but in-place work still counts in C's favor. The reason is simple: if a C programmer wants to copy anything beyond a built-in datatype, they have to explicitly write code to do it. In C++, the compiler generally emits that code for you.
The last point I'll mention is restricted pointers. C has them (since C99), while C++ left them out. Allegedly, nearly all of the purported performance benefits of Fortran disappear, when compared against C written with restricted pointers. That said, every C++ compiler I've used has a non-standard extension for enabling them.
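To make the restricted-pointer point concrete, here's a minimal sketch using the `__restrict` extension (the spelling varies by compiler; standard C has had `restrict` since C99, but C++ never adopted the keyword, so this relies on the common GCC/Clang/MSVC extension):

```cpp
#include <cassert>
#include <cstddef>

// With __restrict the compiler may assume dst and src never alias,
// so it can vectorize this loop without emitting runtime overlap checks.
void scale_add(float* __restrict dst, const float* __restrict src,
               float k, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        dst[i] += k * src[i];
}
```

Without the qualifiers, the compiler must consider that `dst` might point into `src`, which is exactly the aliasing guarantee Fortran gets for free.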
> if C++, do things in an excessive object-oriented way
Before templates came into more common use, and especially before C++11, you would typically see people over-relying on inheritance. Since then, it's a lot more common to see functional-style code. When the two styles are mixed judiciously, the combination can be very powerful.
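A minimal sketch of that template-based factoring (the function names here are made up for illustration): instead of forcing unrelated types into a hierarchy with a virtual method, a function template accepts any callable, and the call can be inlined:

```cpp
#include <cassert>
#include <vector>

// No base class, no virtual dispatch: the predicate's type is resolved
// at compile time, so the compiler is free to inline it.
template <typename Container, typename Pred>
int count_if_matching(const Container& c, Pred pred) {
    int n = 0;
    for (const auto& x : c)
        if (pred(x)) ++n;
    return n;
}
```

Usage would look like `count_if_matching(v, [](int x) { return x % 2 == 0; })`, mixing the template mechanism with functional-style lambdas.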
GeoffreyA - Monday, May 17, 2021

Yes! I was brought up like that, using inheritance, though templates worked as well. Generally, if a class had some undefined procedure, it seemed natural to define it as a pure virtual function (or even a blank body), and let the inherited class define what it did. Passing a function object, using templates, was possible but felt strange. And, as you said, virtual functions came at a cost, because they had to be resolved at run-time.
Concerning allocation on the heap, oh yes, another concern back then because of its overhead. Arrays on the stack are so fast (and combine those buggers with memcpy or memmove, and one's code just burns). I first started off using string classes, but as I went on, switched to char/wchar_t buffers as much as possible---and that meant you ended up writing a lot of string functions to do x, y, z. And learning about buffer overruns, had to go back and rewrite everything, so buffer sizes were respected. (Unicode brought more hassle too.)
"whereas C++ does more copying"
I think it's a tendency in C++ code that too much is returned by value/copy, simply because it's easy. One can even be guilty of returning a whole container by value, when the facility is there to pass by reference or pointer. But I think the compiler can optimise a lot of that away. Still, not good practice.
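To illustrate the stack-buffer-versus-string tradeoff discussed above (helper names are hypothetical), a size-respecting C-style buffer next to the heap-backed C++ style:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdio>
#include <string>

// C style: a caller-provided stack buffer, with snprintf respecting
// its size -- it truncates rather than overruns. Returns the would-be
// length, so the caller can detect truncation.
std::size_t greet_c(char* buf, std::size_t bufsize, const char* name) {
    int n = std::snprintf(buf, bufsize, "Hello, %s", name);
    return n < 0 ? 0 : static_cast<std::size_t>(n);
}

// C++ style: std::string manages a heap allocation for us -- no
// overrun possible, at the cost of possible allocations.
std::string greet_cpp(const std::string& name) {
    return "Hello, " + name;
}
```

The C version is fast and allocation-free, but the size bookkeeping is exactly the kind of thing that had to be retrofitted once buffer overruns became a concern.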
mode_13h - Tuesday, May 18, 2021

> though templates worked as well
It actually took a while for compilers (particularly MSVC) to be fully-conformant in their template implementations. That's one reason they took longer to catch on -- many programmers had gotten burned in early attempts to use templates.
> Passing a function object, using templates, was possible but felt strange.
Templates give you another way to factor out common code, so that you don't have to force otherwise unrelated data types into an inheritance relationship.
> I think it's a tendency in C++ code, too much is returned by value/copy, simply because of ease.
Oh yes. It's clean, side effect-free and avoids questions about what happens to any existing container elements.
> One can even be guilty of returning a whole container by value, when the facility is there
> to pass by reference or pointer. But I think the compiler can optimise a lot of that away.
It's called (N)RVO and C++11 took it to a new level, with the introduction of move-constructors.
> Still, not good practice.
In a post-C++11 world, it's now preferred. The only time I avoid it is when I need a function to append some additional values to a container. Then, it's most efficient to pass in a reference to the container.
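A quick sketch of both idioms mentioned above (function names invented for illustration): returning by value where (N)RVO and move semantics make it cheap, and passing a reference only for the append case:

```cpp
#include <cassert>
#include <vector>

// Post-C++11, returning by value is preferred: with one named return
// object the compiler applies NRVO where it can, and otherwise the
// move-constructor makes the return a cheap pointer handoff.
std::vector<int> make_squares(int n) {
    std::vector<int> v;
    v.reserve(n);
    for (int i = 0; i < n; ++i)
        v.push_back(i * i);
    return v;  // no deep copy here
}

// The exception: appending to an existing container is still best
// done through a reference parameter, to avoid rebuilding it.
void append_squares(std::vector<int>& out, int n) {
    for (int i = 0; i < n; ++i)
        out.push_back(i * i);
}
```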
GeoffreyA - Wednesday, May 19, 2021

"many programmers had gotten burned in early attempts to use templates"
It could be tricky getting them to work with classes and compile. If I remember rightly, the notation became quite unwieldy.
"C++11 took it to a new level, with the introduction of move-constructors"
Interesting. I suppose those are the counterparts of copy constructors for an object that's about to sink into oblivion. Likely, just a copying over of the pointers (or of all the variables if the compiler handles it)?
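That intuition is right. A toy sketch (hypothetical class, pared down for illustration) of what a move-constructor typically does: copy the pointers and hollow out the source, with no per-element work:

```cpp
#include <cassert>
#include <cstddef>
#include <utility>

struct Buffer {
    std::size_t size = 0;
    int* data = nullptr;

    explicit Buffer(std::size_t n) : size(n), data(new int[n]()) {}
    ~Buffer() { delete[] data; }

    Buffer(const Buffer&) = delete;  // keep the sketch small: moves only

    // The move-constructor just steals the pointer...
    Buffer(Buffer&& other) noexcept
        : size(other.size), data(other.data) {
        other.size = 0;
        other.data = nullptr;  // ...and leaves the source safely empty
    }
};
```

The moved-from object must be left in a state its destructor can handle, which is why the source pointer is nulled out rather than merely copied.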
mode_13h - Thursday, May 20, 2021

> > "many programmers had gotten burned in early attempts to use templates"
> It could be tricky getting them to work with classes and compile.
I meant that early compiler implementations of C++ templates were riddled with bugs. After people started getting bitten by some of these bugs, I think templates got a bad reputation, for a while.
Apart from that, it *is* a complex language feature that probably could've been done a bit better. Most people are simply template consumers and maybe write a few simple ones.
If you really get into it, templates can do some crazy stuff. Looking up SFINAE will quickly take you down the rabbit hole.
> If I remember rightly, the notation became quite unwieldy.
I always used a few typedefs to deal with that. Now, C++11 expanded the "using" keyword to serve as a sort of templatable typedef. The repurposed "auto" keyword is another huge help, although some people definitely use it too liberally.
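A small sketch of both in action (names are made up): a plain typedef can't be parameterized, but a "using" alias can, and "auto" hides the remaining spelled-out types:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// An alias template: one "using" declaration tames the nested
// template notation everywhere it's used.
template <typename V>
using StringMap = std::map<std::string, V>;

int sum_values(const StringMap<std::vector<int>>& m) {
    int total = 0;
    for (const auto& kv : m)   // auto deduces the map's pair type
        for (int v : kv.second)
            total += v;
    return total;
}
```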