As with any cache, data that is frequently or recently used can be accessed more quickly from the cache device than from the larger, slower device. Intel's hope is that ordinary desktop usage is mostly confined to a relatively small data set: the OS, a few commonly-used applications, and some documents.
When accessing data that fits in the cache, you'll get SSD-like performance. If you launch a program that isn't in the cache, it'll still be hard drive slow (assuming the cache backing device is a hard drive, of course). Sequential accesses don't have a lot of reason to use the cache and are probably excluded by Intel's algorithms to save cache space for random I/O.
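To make the hit/miss behaviour concrete, here is a toy block-cache sketch in Python. The LRU eviction policy, the tiny capacity, and the block-level granularity are assumptions for illustration only; Intel has not published its actual caching algorithm.

    from collections import OrderedDict

    # Toy model of a small fast cache in front of a large, slow backing device.
    # Eviction policy (LRU) and capacity are illustrative assumptions, not Intel's algorithm.
    class BlockCache:
        def __init__(self, capacity_blocks, backing_store):
            self.capacity = capacity_blocks
            self.backing = backing_store            # block number -> data (the slow device)
            self.cache = OrderedDict()              # block number -> data, kept in LRU order

        def read(self, block):
            if block in self.cache:                 # cache hit: SSD-like latency
                self.cache.move_to_end(block)
                return self.cache[block]
            data = self.backing[block]              # cache miss: hard-drive latency
            self.cache[block] = data                # promote the block into the cache
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)      # evict the least recently used block
            return data

    cache = BlockCache(4, {i: f"block {i}" for i in range(100)})
    cache.read(7)   # miss: fetched from the slow backing device
    cache.read(7)   # hit: served from the fast cache

The first access to anything is always at backing-device speed; only repeat accesses to a small working set see the benefit, which is exactly the usage pattern Intel is betting on.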
You can make a large magnetic hard drive faster by adding an external cache. For people who can't afford a large enough SSD, this might be a good choice. SSDs are getting cheap though, so this feels like a product that needed to ship a few years earlier to have a real chance.
it feels like a product searching for a reason to exist. if i need fast performance i buy an SSD that delivers 2 GB/s and not a cache device that delivers 1200 MB/s.
Keep in mind that consumer NVMe SSDs that boast throughput of 2GB/s or more generally do not reach their peak at low queue depth. Optane is supposed to be able to drive 1200MB/s read throughput at low queue depth (not sure why they listed QD4), so there is potential for some performance improvement here. Most consumer workloads never get out of low queue depth territory, so this could have some small real world benefit. Write throughput, however, is critically low.
More importantly, these Optane drives are geared more towards lowering latency than transferring large files. Where HDDs access data on the order of tens of ms and SSDs access data on the order of 1 ms (give or take), Optane should be able to access data on the order of 1-10s of µs. Where Optane will be useful is high numbers of small file accesses (DLLs, library files, etc.).
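A quick back-of-the-envelope check shows why those access times dominate small random I/O. The latencies below are rough order-of-magnitude assumptions for illustration (the NAND figure uses ~100 µs, a typical measured 4K read latency, rather than the 1 ms worst case above), not measurements of any specific drive:

    # Effective throughput of 4 KiB random reads at QD1 is roughly block_size / latency.
    # Latency values are rough order-of-magnitude assumptions, not measurements.
    BLOCK = 4096  # bytes per random read
    for device, latency in [("HDD", 10e-3), ("NAND SSD", 100e-6), ("Optane", 10e-6)]:
        print(f"{device:9s} ~{latency * 1e6:6.0f} us/read -> ~{BLOCK / latency / 1e6:6.1f} MB/s at QD1")
    # HDD ~0.4 MB/s, NAND SSD ~41 MB/s, Optane ~410 MB/s

That is why a latency-oriented device can feel faster in everyday use even when its sequential numbers look unremarkable.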
That all said, I'd just as soon skip all the extra complications, compatibility issues, and inconsistencies and get that 2 GB/s SSD that you mentioned until Intel figures out how to make these more compatible and easier to use without requiring a "golden setup". I don't want to buy a new W10, Kaby Lake, 200 series based system just to use one of these. My current W7/W10/Ubuntu, Skylake, 100 series system should work just fine for a good while yet.
I recall it being released with the 965 chipset and offering little to no benefit to the end user. In fact, I think HP and a few other OEMs didn't bother supporting it. Turbo Memory's disappointing performance is one of the reasons why I think Optane is better used as a higher endurance replacement for NAND flash SSDs than as a cache for conventional hard drives, which are progressively less common now.
SLC is MUCH better than hypetane. Double the endurance, 1/100 the latency. It would be a big step back to replace an SLC cache with xpoint.
What the industry should really do is go back to SLC in 3D form. Because it doesn't look like xpoint has a density advantage either, as it is already 3D and it takes 28 chips for the measly 448GB. Samsung 960 pro has 2 TB in 4 chips. Sure that's MLC, which is twice as dense as SLC. Meaning that with 3D SLC you could have a terabyte in 4 chips.
Now, if you get short of 0.5 TB of xpoint with 28 chips, and you get 1 TB of much faster, durable and overall better SLC with 4 chips, that means it would take like 60 chips to get a TB with xpoint. Making potential 3D SLC "ONLY" 15 TIMES better in terms of density, while still offering superior performance and endurance.
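For what it's worth, here is the arithmetic behind that comparison spelled out, taking the figures quoted in the comment at face value (whether individual XPoint dies and stacked NAND packages are really comparable units is a separate argument):

    # Chip-count comparison using the figures from the comment above (assumptions, not verified specs).
    xpoint_gb_per_die = 448 / 28            # 16 GB per XPoint die, from the 28-chip / 448GB figure
    mlc_gb_per_pkg = 2048 / 4               # 512 GB per NAND package in a 2TB 960 Pro
    slc_gb_per_pkg = mlc_gb_per_pkg / 2     # 256 GB if hypothetical 3D SLC stores half as much as MLC

    xpoint_dies_for_1tb = 1024 / xpoint_gb_per_die   # 64 dies
    slc_pkgs_for_1tb = 1024 / slc_gb_per_pkg         # 4 packages
    print(xpoint_dies_for_1tb, slc_pkgs_for_1tb, xpoint_dies_for_1tb / slc_pkgs_for_1tb)
    # 64.0 4.0 16.0 -> roughly the "like 60 chips" and "15 TIMES" in the comment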
Which begs the question, why the hell is intel pushing this dreck??? My guess, knowing the bloated obese lazy spoiled brat they are, they put a shameful amount of money into R&D-ing it, and now they are hyping the crap out of it in order to get some returns. They most likely realized its inferiority way back, which prompted them to go for the hype campaign, failing to realize, despite (or because of) their brand name, that it would do more harm than good the moment it fails to materialize. Which it did - I mean look at how desperate they are at trying to find a market for this thing.
Time to add Hypetane to "handheld SOC" in the "intel's grandiose failures" category. The downsides of being a bloated monopolist - you are too full of yourself and too slow to react to a changing market to offer adequate solutions.
No, it's pretty similar to Intel Smart Response technology introduced with Z68 and giving HDD users a decent gain, although obviously not full SSD performance.
They should really consider renaming it to Hypetane.
The perf figures listed here are for QD4. At that queue depth, a 960 pro beats it easily in sequential reads and writes and random write IOPS; hypetane is only faster in random reads, and even then, not anywhere near the "1000x better than SSD" figures. And it remains to be seen if that advantage extends to the drive's entire LBA range or if it is just for a limited data set fitting in some cache.
At any rate, at those capacities, it is kinda laughable. Just get some extra ram, it might cost a little more, but then again it will be much faster, much more durable, and with a decent ssd it will take just a few seconds to flush the working data set to persistent storage on shutdowns.
"Where Optane will be useful is high numbers of small file accesses (DLLs, library files, etc.)."
I'd say databases. Of course, provided your database is small enough to fit on such a drive. DLLs are not an issue: those are shared between all processes which link against them and stay loaded in memory as long as they are used, and they are usually hundreds of KB to a few MB, so they absolutely do not qualify as something that would benefit from frequent random reads.
This drive does not look impressive on paper. However the underlying concept of pairing a faster solid state tech with a larger, slower SSD is solid. Relatively slow (by today's standards) SATA/mSATA SSD caches did wonders for a system with a large mechanical drive. So the concept is sound. A 1TB 960 Pro is fairly expensive, compared to a 1TB SATA SSD.
Most of my workload is reads - I think that's fairly common for consumer use cases. A fast M.2 cache drive, primarily for reads should boost performance quite a bit - if price, speed, and capacity are there. Now that goes back to this drive being insufficient. Capacities need to be in the 64-128GB area, with an upgraded x4 controller offering better speeds all-around. Meanwhile pricing needs to stay around $100 for the 128GB product. Then they might have something on their hands. We'll see how the second gen product looks.
I'd still like to see some testing, in case the read latency and low QD performance benefit a system more than anticipated. There's a lot of competition and I don't think this stands out in a crowded M.2 arena.
It was only impressive on hype, it is neither impressive on paper, nor is it impressive in practice.
I wouldn't normally care, there is enough room for mediocre technology under the sun; what annoyed me were the laughable claims of "1000 times better than flash" BULLCRAP, and how the simpletons bought it. And they still try to justify the hype now that the product has turned out to be mediocre: now there are some mythical hidden merits only the chosen few would understand and appreciate.
You know what this sounds like? Like "the emperor's new clothes". You know, they are so great, that only smart people see them. To us, the silly ones, the emperor is fully nude.
1000 times faster than nand? BS. 75% of the time it is SLOWER than last year's SSDs. Barely faster in random reads, and that's about it.
1000 times the endurance of flash? BS, at best twice as good as MLC, still way behind SLC. The xpoint media itself doesn't appear to be faster than SLC either.
10 times denser than flash? BS. Waaaay behind MLC.
Just give me 3d SLC SSD with improved controller. I don't care about hype and by extension, about the naked hypetane.
Samsung 960 Pro M.2 can only do about 130MiB/s QD4 random. This can do 1.2GiB/s. I think you underestimate those read latencies. Let's wait for some benchmarks.
And that's where its advantage runs out. Everywhere else it is vastly inferior.
BTW, dunno about the smaller models, but the 1TB 960 pro does about 60k IOPS at random 4k reads, which makes for about 230 rather than 130 MB/s. And about 160k at random 4k writes, which is about 625 MB/s, more than twice as fast, and that's still mediocre MLC.
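The conversion behind those numbers is just IOPS multiplied by the transfer size, assuming 4 KiB transfers as in the comment:

    # Random throughput = IOPS * transfer size (4 KiB assumed, IOPS figures from the comment above).
    print(60_000 * 4096 / 1e6)    # ~246 MB/s (about 234 MiB/s) for 4K random reads
    print(160_000 * 4096 / 1e6)   # ~655 MB/s (exactly 625 MiB/s) for 4K random writes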
But yeah, let's really wait for the benchmarks, because what I expect to see is that hypetane's random reads advantage will result in NADA benefit in 9 out of 10 practical usage scenarios. And that it will be "superior" in one very, very narrow niche barely anyone cares about.
@ddriver: "I'd say databases. Of course, provided your database is small enough to fit on such a drive. DLLs are not an issue, those are shared between all processes which link against them and are loaded in memory as long as they are used, and they are usually hundreds of k to a few mb, so they absolutely do not qualify as something that would benefit from frequent random reads."
I absolutely agree with databases here, but that isn't something you often see in a consumer system, so it really isn't the point of this consumer oriented cache. Also remember that this cache is persistent while RAM is not, so all those DLLs need to be loaded from storage on every boot sequence. Not a great example, as some people never shut down their system, and if a large enough data set moves through the drive (depending on the internal cache algorithms) these files will be flushed anyway. Unfortunately I'm having trouble (as apparently is Intel) coming up with an obvious common use benefit for these drives. Perhaps a tablet/laptop/hybrid that is frequently powered on and off and uses hibernate or hybrid sleep would benefit from the low latency these present.
Intel is simply trying to find a market niche to cram that poor Hypetane miscarriage into.
Oh wow, a use case, for all the people who bought a brand new system to couple with a sole mechanical HDD and are desperately looking to populate their single M.2 slot with something as useless as possible. All 3 of them.
For consumers a SATA M.2 drive is more than enough, and it also offers better efficiency. There would be no tangible benefit to using Hypetane "cache"; money would be much better spent on a 128 GB SSD for a boot/OS drive. That would be better than "cache", especially considering how IDIOTIC windoze caching policies are. It keeps on caching the most useless junk. I have 64 gigs of ram on my main box, and it kept on caching movies, ENTIRE movies, many many gigabytes, rather than the small, frequently used files. Which is why I disabled caching for all SSD drives, and limited disk cache overall, cuz I hate wasting CPU cycles waiting on windoze to deallocate cached nonsense every time I run a memory demanding application. So no more of that genius "99% done, 10 minutes hanging on the last 1%" when copying big files to USB drives and such.
I have no idea why you're going on about it caching movies in RAM, anything in RAM can be overwritten instantly, so the RAM "usage" figures are misleading to novices. Write caching to RAM does give you a little bump, even with SSDs. If you really want max performance and you've got a reliable rig with a UPS/laptop battery, not only enable write caching, but disable write-cache buffer flushing. That gives you the best of both worlds... provided you have reliable protection against power loss.
Have you ever heard about this thing called memory management and heap allocators? You cannot just write in ram willy-nilly, that will result in an instant crash. Having the windoze cache fill your entire ram means that for every single new allocation it has to make room and deallocate some of the useless garbage it caches. That doesn't involve any penalty in "wiping" ram clean or something like that, but there are CPU cycles consumed by releasing the memory from the caching kernel and allocating it to another process.
Heap memory allocations are rather slow to begin with, and having to keep on making room each and every time you allocate heap memory makes it even slower.
As for the caching part, it was not about the write cache but about the infamous "SuperFetch", i.e. the "read cache".
that's the thing - who in their right mind getting a 7th gen cpu would want to get a 16 or 32gb cache utilizing a full m.2 slot instead of just getting a 128gb or 256gb m.2 ssd for the same price? bigger storage and more usable across any config from the last 10 years... this is a dead product with a major marketing spin to make it feel like a real worthy product.
That's apparently intel's view on how much "consumers" need. 32 gigs of storage in the (usually sole) M.2 slot. Might wanna throw in a few peanuts while they are at it, just so that they seem extra generous.
Yeah, makes no sense. Who would buy a new kabylake setup and not buy an SSD? The product would make some sense for older systems that still use an HDD only, but obviously there are no old kabylake systems. And since most mobos have exactly 1 m.2 slot, good luck selling this, intel. I recommend a crappy TLC 128 GB drive over this cache.
Well you know there ARE sub-$50 Kaby Lakes. Do they not support drive caching? The idea is that you could get both a larger, cheaper SSD and a faster, smaller one to act as a cache. Not a bad concept. The problem is that Intel's speed/capacity/price is NOT where it needs to be. I'd use a competing conventional 128GB M.2 SSD as a cache before this thing. Maybe next gen!
And those 200MB/sec write speeds? This is a cache, after all. Clearly write caching won't be an improvement so how is this radically better than a $100 2TB SSHD that has 8GB SLC cache onboard? Not to mention the platform requirements for Optane caching and the inherent software complexity. This is just stupid of Intel to introduce something that costs this much based on a 10 year old concept introduced in Vista.
It didn't work well then and it won't work well now. Yes, the caching capacities have grown (most Readyboost caches were like 16 or 24GB) but so have storage capacities and overall data sets. A single AAA game is 50GB. A single game. There goes your entire cache, assuming the dumb algorithms SRT uses can even figure out to cache the maps in the first place. My experience is, it doesn't.
Hypetane is so "good" intel is struggling to find a market for it. It doesn't need to offer benefits, it doesn't even have to make sense. It just needs to sell, because otherwise it will look even worse than the complete and utter failure to live up to the "1000x better" hype.
My money is on FIFO. First in, first out; actual usage patterns are too bothersome to take into account. It will cache stuff regardless of what it is, and when it runs out of cache, it will start purging the oldest data, or AT BEST the oldest written and least recently accessed data. But access frequency is definitely not accounted for.
In most cases, people who know their stuff will be better off having that volume dedicated for fast access, and putting the files they need to be fast there.
As far as I understand, you need not only low queue depth, you also need small random reads for Optane to shine. On a large read, like loading a big file, an SSD will win.
Windows will hold DLLs and libraries in RAM, so you'll only get a boost for the first run.
In the real world, it won't work. In a world of <$100 256GB SSDs, this is useless. There are zero benchmarks that show cache drives offer the same performance as a true SSD boot drive. It is useless on a notebook because no one wants to power up two drives constantly. If the HDD could be powered down most of the time it would be one thing, but I seriously doubt this cache will be smart enough to enable that.
This particular product is for the low-end consumer (enthusiast versions are later this year or early next year). So, if you already have a high-end computer this particular version probably doesn't have a real-world benefit for you.
The benefit for most people is that for $44 you can make a sluggish low-end computer far snappier. It is a pretty tiny price to pay for a potentially useful benefit. For example, PC World shows a test where GIMP took 14 seconds to load, they popped in an Optane module, and now it took 3 to 4 seconds. That is a noticeable and significant boost for a pretty small cost.
I'd like to see more tests with full reviews to see if that is a one-off review or the norm. But I'm actually interested in it so far. No, it won't revolutionize the computer. But it might make certain tasks feel far less frustrating.
Caching isn't the same as mind-reading what work you want to do next. You still have to have loaded GIMP at least once beforehand to benefit from it still being stored in your fast cache, and that first time will always take as long as it takes from your slow storage.
I guess people who are RAM-starved and need to constantly open and close the same smallish programs from slow spinning drives they can't upgrade may feel a win, but the limitations on legacy processor/chipset support for this tech mean that users who would actually be able to fit it in their brand new motherboard (and be aware enough of it to do so) will doubtless already have an SSD for OS/program storage that negates virtually all the benefit, so they don't need it.
I personally open programs like Word, Excel, internet browsers, etc repeatedly before turning off the computer. So having them in cache will be quite helpful. You are assuming that SSDs aren't sped up too. This particular version probably won't help the fastest SSDs, but it will help some of them (those of us who move their existing SSD to their new computer build, for example). Finally, you got it on the RAM-starved point. Optane is a much better value proposition as $44 for 16 GB of more memory (albeit slower than normal memory) than as $44 for 16 GB of hard drive space.
the only problem with your "benefit for most people is that for $44 you can make a sluggish low-end computer far snappier" is you didn't read the full article and realize the limitations, namely the hardware requirements - it must be a 7th gen system, and it needs an m.2 slot. so why would anyone in their right mind get this over a typical m.2 ssd that will work with any system, not just 7th gen and above? also why would one get this when for the price of 32gb of this i can get a 256GB sata ssd (bigger and faster writes), and meet most users' storage requirements too as a full fledged drive?
Dumb-ass Joe Public doesn't care for a $90 SSD, and will just see one product (laptop / desktop) with a 240/250GB hard disk, and another product with a 1/2TB hard disk.
Even when he gets it home, he feels he has made the better decision. This product helps to plug those gaps, giving him a SSD 'feel' to his system, and the 'better' amount of storage too. Every dollar counts to OEMs, and customers.
I have tried Intel's 20GB SLC caching drive in both Z68 systems, and in my own laptop, in front of a 2TB slow as hell mechanical drive. I was very impressed indeed. Actually that 20GB SLC drive is still functional, and kicking around somewhere. I can't imagine an equivalent-aged 80GB MLC drive would still be alive under the same usage...
I wouldn't imagine that a ton of users who have kaby lake but no ssd are savvy enough to install an m.2 module. The low end computer with an owner wanting optane is a unicorn.
Leave it to Intel to pull a 10 year old technology out of the box (readyboost) and rebadge it to a new, slightly faster (and in the event of write caching, slower) product, and charge a shitload for it.
"Intel notes that Optane Memory caching can be used in front of hybrid drives and SATA SSDs, but the performance benefit will be smaller and these configurations are not expected to be common or cost effective."
Yeah, that's because you neutered it with a slow ass PCIe interface.
Readyboost was a failure, RST is inherently complex and overall, sucks (Apple Fusion drive based on the same concept is substantially better, mostly because of the larger 128GB caching SSD) and who are they kidding, SRT wasn't much of an improvement over Readyboost.
Optane is fascinating tech, but where is it? What the hell good is it if they can't scale it up to usable sizes. The purpose of NV memory is to move away from mechanical storage, not supplement it. This doesn't fix all the other problems with having a hard drive, especially durability, power consumption and physical size. And it barely addresses the performance bottleneck. Even a 64GB cache on a 2TB drive is roughly a 1:30 caching ratio. Sure, it is better than an SSHD, but it's also more expensive, more complex, and less compatible.
I can't believe Intel spent time redeveloping such a stupid fucking concept here.
The point of optane wasn't to cache the HDD/SSD, it was to replace RAM for instant-off/on and the ability to remove the need for traditional sleep states. But it didn't work, so they are marketing it as a HDD/SSD cache, and it will fail just as hard as the last 2 times it was attempted.
If they deliver the technology on time with a matched mobile mother board, and can lower the idle power consumption on a Cannon Lake Platform, with 64 GB or 128 GB soldered onto the device, they might have a matched system that is remarkably fast due to low latency. The real benefit of a matched system would be superior performance on a lower TDP, so that you can get 10 hours out of a laptop that is 1 pound at a price point of $899 for an integrated system, like the Surface Pro 5. Otherwise, what is the point, if you have plenty of power, just go with a much larger and less expensive SSD.
The problem is that there is no benefit.... at least not in this config. The idea was that SSDs were stuck at SATA3 speeds and demand for more speed was coming hot and heavy, so Optane/3D XPoint was developed as a cache technology to bring a bit more performance to systems that were already screaming fast by replacing the need for RAM. While technically a little slower than the RAM it replaced, the Optane memory would allow for full power-off without losing stored memory. This would allow for literal instant-off, instant-on capabilities in devices as things would not need to be spooled back into RAM. While not quite as fast as DRAM, it would be more than fast enough for most applications (as they are bottlenecked by the CPU and GPU rather than memory bandwidth), while offering up to 32GB at prices far lower than RAM. And this was all supposed to happen 2-3 years ago.
Well, it got stuck in development hell. NVMe SSDs hit the market with the m.2 interface allowing for far higher performance SSDs. DDR4 was released, offering higher density and lower prices than DDR3 (though about the same performance). Plus I suspect (total speculation here) there were issues adapting OSs to the new RAM-less architecture, which made it a ram augment rather than a RAM replacement.
So what are we stuck with? Essentially what we had back with the Sandy Bridge architecture a few years back. SSDs were very expensive, so Intel made the ability to use an SSD as a cache for a HDD, which could offer extreme performance gains. It too was limited to 32GB (though I believe a firmware update allowed for a 64GB cache later), and required a RAID setup, and pretty much all of the same limitations we see here... and nobody used it. By the time it was released SSDs were tanking in price. The added speed was inconsistent, SSHDs did the same thing better, there were battery life issues, etc. etc. etc. All the same proposed benefits, just as the high performance tech was dropping in price, all the same pitfalls, and again no manufacturer in their right mind would use this. We might *might* see this in a few low-end ultrabooks as a way to offer higher capacity while still getting the 'ultrabook' tag for having a 'ssd', but that may be about it.
That said, in the server market this is going to be a huge plus! Being able to have a 2TB cache on a large RAID array will help a lot of workloads (especially in multi-user databases and virtual environments) while being cheaper than a RAM drive and faster than a SSD. There are still uses for this tech, and it will be a big deal... just not in the consumer space, and not how it was originally promised a few years ago when it was still being developed.
Having similar sequential and random throughput is unsurprising for 3D XPoint since it doesn't have large page sizes and massive erase block sizes to contend with.
Wire-bonded die stacking for 3D XPoint is no more complicated than for NAND; Intel's doing it on the enterprise SSDs where they don't have an excess of PCB space. Vertical scaling of the layer count within a single die might be easier than increasing the layer count for 3D NAND, but it's too soon to say for sure.
Not even close. QD4 random read for NAND is about 10% of sequential. Random write has high throughput by masking with those 1GiB+ caches. Samsung 960 Pro M.2 has an average random read latency of about 150us, which is about 25x worse. Expect the latencies to get even worse as SSDs get higher densities.
I expect 3D XPoint to get lower latencies as the controllers improve, assuming we're not already against PCIe's limit.
But he is correct that random read and write for this product are the same as the sequential numbers. Just multiply the I/O by the block size for the random, and bingo, you have the sequential speeds.
It's hard to see where the market for this is. It only makes sense if your primary workload is small enough to fit in the cache, but you need more storage than fits into a reasonably priced SSD. Most use cases will be significantly faster with a SSD.
They seem to be off by an order of magnitude in their talk of making vast memory images with this caching mechanism worthwhile. Surprisingly enough (not at all), caches work. Applications, compilers and coders long ago found ways of dealing with tiered latency and throughput. So, if you are going to make a difference, disks the size of very, very small memory footprints aren't it.
Come talk to me when there are capital T's in the size...
Probably cost reasons and logistics. Current prices make a module big enough to be a system drive prohibitively expensive in even a minimum enthusiast-class size (~$275 for 128GB), so there's no point behind a larger module. From the other direction, the pictured module is big enough that, assuming it's the 32GB size, the 2260 form factor might not fit the second Optane storage chip; and as a very niche device, running up costs with separate 16/32GB PCBs isn't a good idea.
Since it's for caching, the write endurance rating is crap at best (sure, no one is going to write 100GB/day, but there are conventional SSDs with better write endurance). Seems like intel did overpromise on this so-called XPoint technology. Intel is certainly under pressure and losing money to Samsung in the SSD market.
Well, the SSDs with those endurance ratings are easily 10x larger. So Optane is not (yet?) game changing in this regard, but significantly better.
Where this makes a lot of sense is as a write cache on in-home file servers. Think ZFS SLOG device. The Intel S3700 and S3500 have been king for a long time due to low latency. Size is fine too, because ZFS does a commit every 5 seconds so the drives don't need to be massive. The question is whether either of these drives has power loss protection. If so, then these are perfect.
They explicitly mention several times that RAIDed configurations are not supported.
Given that a major selling point of using ZFS volumes is enabling redundancy through RAID-Z, I doubt you're going to get anywhere useful with Intel Optane m.2 when it's for a single-disk ZFS volume.
The comment about RAID not being supported applies only to Intel's caching software. On a BSD or Linux system, you'd operate the system with the storage controller in AHCI mode and the Optane Memory device would show up as a normal NVMe block device.
Hi, we're not talking about the ZFS RAID/data disks; we're talking about the ZFS ZIL/SLOG drives which are just used for temporary journaling and buffering. They don't need to be in RAID or mirror sets.
IIRC, the ZIL/SLOG devices don't absolutely have to have power protection; if they fail, the ZFS FS on the main datastore won't be _corrupted_ (you'll still lose the newest writes, of course).
For the same reasons, the ZIL/SLOG doesn't need to be mirrored.
Intel's RST is a Windows-only piece of software. On a Linux system, you can use Optane Memory drives as ordinary NVMe SSDs, and take your pick of caching and volume management solutions: bcache, dm-cache, ZFS, etc.
Intel's storage software situation on Windows is a mess because Windows doesn't have anything as flexible as the Linux general-purpose block device abstraction, and because Intel is still trying to enforce product segmentation by requiring certain CPUs.
All the profit in mobile will be at the 10 nm node, and Optane will play a significant role if optimized and launched successfully. In the consumer and commercial markets, Linux is not a play. Hardware vendors need to compete with the iPhone and iPad to stay pertinent. I personally use a Mac at home. I would say that Windows 10 is the premier computer operating system in the world based on total revenue, profit per unit shipped, as well as market share. iOS is dominant in mobile in terms of profits, and Android dominant in mobile in terms of market share. Windows based storage systems remain the most proliferated systems in the world for the consumer market. Optane was released first in the server market, but the real model would be to enter the consumer space. So long as latency is low, bandwidth is high on new motherboards released with Cannon Lake, and future Windows 10 builds are optimized for Optane, the product might be a hit. If Intel fails to deliver in the lower power space, the product could be a flop. Microsoft has made it clear it will no longer service Windows 7, and is moving on. Windows 10 is their future, that is how they feel, and they have the firepower and the market presence to strong-arm those who will not go along. 2018 may be another excellent Microsoft year, and if Intel hits the 10 nm node on x86, this might propel the market towards the Surface Pro 5 and other professional mobile solutions.
Feels like a very limited product line with few applications to customers.
1) If you're not on a 7th gen i-series CPU + 200 series chipset motherboard, Intel Optane WON'T WORK FOR YOU.
2) If you're on a 7th gen i-series CPU + 200 series chipset motherboard AND you're not already using RAID AND you're not already using a different m.2 boot device, then Intel Optane MIGHT help your performance.
It feels like a very limited Intel RST (Rapid Storage Technology) replacement. Lots of motherboards from Intel for the past few years supported RAID + SSD caching (even for RAIDed volumes) through this tool. Basically, with Intel RST, you could have bought a new PC and new larger capacity and faster SATA/m.2 SSD, but you still have a smaller/slower SSD + your HDD from your last build. With Intel RST you could install the OS on your new drive, plug in your old HDD(s) and old SSD, and then configure through Intel RST to cache the HDDs giving them a boost in responsiveness and allowing you to get some use out of older/smaller SSDs in the form of a tangible caching solution.
Here, Optane is configured to be a caching solution (like Intel RST), but you're limited to brand spanking new hardware (and let's face it, given stagnating Intel performance upgrades year-after-year, even die-hard Intel fans are likely still on older platforms than the current newest), and it's basically only compatible with Intel Optane m.2 devices, for what is likely a minor speed boost in the form of caching for slower media such as SATA SSD or HDDs, but isn't compatible with caching RAIDed volumes.
Like, this product is barely applicable to anyone. And even though Intel RST is readily applicable to lots of people, not many people bothered with it or knew about it even to this day.
It WAS a limited production......of demo units they couldn't give away, so they are being rebranded as consumer retail units (LOL)
For the past 10 years, performance enthusiasts have been using SSDs for boot drives and games while keeping static (mostly unused) data on a second drive.
Caching a 2TB - 5400RPM static drive would result in a hit rate of less than 2% for data we hardly ever use anyway and caching a much faster SSD is pointless
NOBODY on THIS planet should "currently" be booting to 2TB - 5400RPM platter drives that need caching!
(Especially on Kaby Lake hardware with Windows 10)
AnandTech should interview the GENIUS who thought selling X-Point demo units they couldn't give away was such a great idea.....
Intel RST's caching for the hard drive is still useful though. Say you have a 2TB HDD exclusively for hundreds of Steam game installs. Instead of manually installing games onto a separate disk (often the same SSD the OS is installed on) you can just write everything to the 2TB HDD volume. Intel RST could cache up to 64GB of data on an SSD, which while small, typically meant that your 2 - 3 most common Steam games would be entirely cached on the SSD without having to go back and forth on what's on the SSD and removing it back to the HDD after you've had your fun with the game.
Actually SRT doesn't work per game, and neither per file, it operates on the block level. So you'd have the things you frequently use in those games cached, but nothing else. This makes the storage space a lot more useful than people think when comparing it with install sizes.
Actually, if the performance works well enough, I bet it's going to be fairly popular.
No, you may not get to see it. Why? Because you are an "enthusiast".
Most computers are sold with HDDs. 80% in fact. And the industry moves to the newest generation fairly quickly. 30-40% of new computers sold are based on the latest generation. 80% of that is 24% to 32%. Meaning 24-32% of the new computers sold will benefit from Optane Memory.
You'll see Optane Memory pre-configured systems coming from vendors real soon. The pricing is reasonable too.
Unless it gives a much bigger performance boost than the PR slides/spec sheets are implying, I don't think we will. Tiny SSD caches with an HDD never really went anywhere because the bargain market would rather have just an HDD and a slightly lower price point, and the enthusiast market was generally underwhelmed by the overall performance and either didn't bother or manually split media/etc off of a mid-sized SSD to an HDD. The problem for Optane+SSD is that SSDs are good enough in a way that HDDs weren't; and the price of a 32GB Optane is more than enough to get you from a 128 to 256GB SSD and most of the way from 256 to 512GB. For most consumers the bigger SSD will be a better option. The rarity of dual M.2 slots on laptops, combined with the larger 2280 form factor and idle power level, also means it's mostly going to be a desktop part in this version; meaning you've eliminated a big chunk of the market already.
Yes, and they use HDDs simply because they are cheap. An OEM bulk 500GB HDD costs them $30 vs a 250GB OEM SSD that would cost them $50. The idea that they are going to 'spring' for an expensive cache layer for an additional $40-70 is ridiculous. I mean, for that price they could just get rid of the HDD entirely and just move to a SSD in the first place! Even a 1TB OEM SSD is only going to cost ~$110.
But that is the whole point. Cheap laptops are cheap BECAUSE they use cheap parts. PC makers will continue to use HDDs on the low end, and SSDs in the mid to high end because nothing else makes any sense.
It's more interesting, I think, to consider what Intel is NOT shipping and to ask why.
As far as I can tell (given the limited tech details Intel has released) what we have is something that's both less and more than Apple's Fusion technology. Less than Fusion in that the cache sizes are smaller (32 or 64GB, Apple provides 128GB, although you can create your own fusion drive using any SSD you like --- I created one from a 64GB FW2 external drive fused to the internal HD of an old iMac); more than Fusion in that the cache is apparently common to all drives --- which is nice but also constrains how aggressively the caching can perform.
MORE interesting is that, again as far as I can tell, this caching role ONLY REQUIRES BLOCK WRITES. In other words it takes no advantage of that supposed improvement of 3D XPoint that it has byte read/writes. And the caching offered seems unlikely to provide any real value over similar caching done using an SSD (ie the Apple solution).
Compare to the product that Intel COULD have shipped... Imagine an SSD that's a 3D-Xpoint hybrid. The 3D-XPoint memory is used to maintain the various metadata of the flash (so all the usage stats, the remap tables etc) --- basically uses where the byte granularity would be of substantial value, and would be used ONLY by the SSD controller so would not be relevant to file system and OS. This seems like it could be an SSD with a nice performance boost over standard SSDs, not just because the metadata updating is faster, but because it allows for the use of different, more flexible, flash block management algorithms.
But we are not seeing such a product. Why? Everything Intel has done seems to suggest that SOMETHING about their story is fishy. Either that supposed byte granularity doesn't work (ie so many extra ECC bits are required that large sector granularities are the only feasible architecture) and/or the costs of the memory are so much higher than flash that they couldn't create a hybrid at a viable cost point.
Either way, I see this not as the next great step in Optane, but as an attempt to try to get something, ANYTHING, to work with the technology, regardless of the fact that what they're selling solves a non-problem.
The caching is only of the boot volume, not all drives. But that limitation might be removed in future driver versions. It is a bit of an unfortunate limitation that it's just a storage cache and doesn't also get cleanly used as swap space and for hibernation, but those data blocks can land in the cache if they're detected as hot enough by the cache management algorithms.
I do think once 3D XPoint is more openly available on the market as a component, we'll see some drives replace their DRAM with 3D XPoint and ditch the supercaps. It's probably a bit too soon to tell if the economics will work out right for that to be a viable product, but there will certainly be a lot of engineers interested in making that kind of prototype. In the short term, such a product is obviously more complicated to design (since 3D XPoint needs wear leveling) and it's no surprise that the first Optane devices to hit the market are simple NVMe SSDs.
Optane has to be integrated in some pattern at the 10 nm node with Cannon Lake and a system-on-a-chip design. The entire goal has to be higher efficiency, lower latency, and more reliability. Intel demonstrated this with graphics on the chip, and this was very well received in mobile platforms. I think the problem has been yields. The proof of principle is there. The motherboard change is there. The software improvements are present. The yields at the 10 nm node have been low. This has been their hurdle, and seems to have delayed Surface Pro 5, which was their goal: a completely integrated Wintel solution, a mobile Surface Pro powerhouse. We will have to wait 6 months to see if they hit their benchmarks. If they hit 10 nm at high yield by September, my guess, they will be primed for a great holiday season. But the key is an integrated platform. Think how integrated the iPhone 7 has become. This is the premier mobile platform in the world right now based on profitability per unit.
While Intel talks about this and muses how best to enforce segmentation to screw enterprise, TSMC is actually DOING it ("integrated Optane"). Look at item number 6 on this URL: https://www.semiwiki.com/forum/content/6675-top-10... and the date (2H17). OK, so the storage capacities are presumably NOT at the sorts of levels Intel has in mind. But there's something to be said for actually SHIPPING, and improving as you go. TSMC has established a reputation for quietly shutting up until they are sure they can deliver, while every quarter Intel comes across more as the belligerent drunk, shouting at everyone "You think you can take me? You think you can take me? Let me tell you how awesome I am, and all the things I got planned."
Would it be crazy to imagine that (in a side deal that's NOT in the options TSMC lists) Apple does something like get 512MB of MRAM or ReRAM embedded on 2018 or 2019's A-series SoC, and uses it to cache the FS metadata (catalog trees and suchlike, maybe also the FTL metadata) of the main flash storage? All the time while Intel is still talking about how great Optane is, one day, going to be. (Alternatively maybe you add this storage to Apple's custom flash controller, which I assume is on 28nm.)
It's not clear to me the extent to which this is yet possible. Everspin dies look way too large for this, but MRAM should (in principle) be denser than DRAM, so I don't know if that's because they're using WAY older lithography as a way to hold down burn rate as they perfect the technology.
I hope that second generation Optane will offer improved endurance, higher capacities, and lower prices. I'd consider using it as an OS/boot drive and then using a cheaper 3D TLC drive (or MLC depending on capacity needs versus prices) as a storage disk, but I don't think I'd want to use Optane as a cache for a HDD. Benchmarks that show clear performance improvements might change my mind, but I just don't think I need a HDD for bulk storage these days.
(I almost hate to bring this up as it appears to have been studiously avoided...)
SemiAccurate has had a lot to say about this technology, and it's worth reading if you're interested in Optane. Demerjian does generally let his mouth run ahead of his brain, but it is a good brain.
This is kinda pointless, isn't it? Samsung provides for RAM to be used as a cache with their RAPID feature. So what's special about this? Samsung's RAM caching is free, and will be faster as DRAM is still faster than Optane. The only advantage is fail safety on power failure, so this is only useful for desktops needing a bit of speed for short burst writes. But wait, you can buy a battery backup for your desktop that can shut down the system in case of power failure in a clean manner. It would cost only $50-100. Hmmm.. ok, I see no point to this. Get it cheap enough with high enough capacity to compete with SSDs, else Samsung's RAPID RAM caching will simply outperform this.
Until your battery backup fails, which it will. It's not just power failures - if the system crashes, it may never be able to write the data in RAM to disk. DRAM is perfect for a read cache, but bad as a write cache if you care about your data.
The OS is already using any "unused" DRAM as a small write cache and a large read cache. By increasing the fraction of write cache you take away from the read cache. Overall not that much of an improvement compared to adding storage.
I'd like to see this bench tested, because if it really can make a 2 TB HDD *feel* like a SSD for a much lower price, then I think that's really rather good.
I don't get it. The only market I can actually see for a product like this is for large companies where speed matters from a productivity standpoint. For instance, I'm in healthcare and I am constantly logging into different terminals throughout the day and launching and relaunching programs. Most users do the same things on these computers all day long. Something like this paired with a low cost HDD makes sense since the volume is so high.
The extra $$ spent on this could be enough to jump from an HDD to an SSD. Most general computer users don't need many TB of storage and can get away with a modest SSD. I suppose OEM makers can market SSD speed with large storage to lure people in, but it's not practical.
Enthusiasts will opt to spend a little extra on an NVMe storage solution that uses 4 PCIe lanes anyway, in addition to an array of HDDs if they are running a backup or media server.
"The caching is only of the boot volume, not all drives." ------------------------------------------------------------------------- How many Kaby-Lake Win 10 Laptops will be sold this year with 2TB X 5400RPM boot drives that can take advantage of this cache for less than 2% of your most used data?
I thought this was the year for boot SSDs on the majority of new laptops?
I can see how this cache might help 10-year old tech "IF" and only "IF" this tech could work on anything other than Kaby-Lake and Windows 10
Are any of your readers planning on platter based "Boot" drives in their new Kaby Baby?
I don't think there's much point in Intel updating the 750 with 3D NAND. The controller is at least as big a limitation, and can't compete in the consumer space.
Depending on the price, I feel the cost will far exceed the benefit for this product. On the other hand, it is actually good to see that there is a new solid state drive with focus on lower latency, instead of transfer rate.
And outside of very niche use cases, I don't see a market for it, for the same reason that small SSDs used as HDD cache did not fly off the shelves 6 years ago.
You already failed by buying a Z270 that stops at a simple quad core, vs getting a cheaper X370 and a full-fledged 8-core Ryzen 1700 that will let you do many things while gaming, or be a productivity monster, at just 65W.
This pointless stuff coming out of Intel really makes me wonder what is happening internally. Has a feel of stupid management forcing engineers to create bullshit products because management doesn't have a clue.
My guess is simple. This is the initial launch of a technology with significant upside in netbooks and less expensive notebooks when Atom at 10 nm is released along with an i5 Y series at 10 nm, with motherboards that have very high bandwidth. If Optane can be lower power at idle, at 64 GB this would be fine for a netbook or a low cost laptop, allowing fairly high computing power at a low power draw. The holy grail of mobile is 10 hours. If Optane can get Intel based laptops to 10 hours, at a lower power draw, with faster performance, then that will be marketable. $77 for 32 GB is a high price, but if that price would come down with yields, to 64 GB for $50, Optane could be in new netbooks by ASUS and HP, marketed specifically for a 10 hour experience to compete with older segments. A 128 GB Optane paired with a 10 nm i5 Y will simply be much faster than any MacBook from 2017. Coupled with an upgraded Windows 10 build, this mobile platform will be very snappy. This could replace SSDs in a Surface Pro 5 as an option, as users will pay for lower latency.
The NVMe interface may mean you can't really play with byte addressability yet. Looks like it requires block size of at least 512 bytes. (This is from a quick search so I might be reading old documents or might be just wrong.) And this particular device may only offer 4KB blocks, given that a lot of other stuff assumes 4KB or a multiple of it now. Given a controller that could handle them, could be interesting to be able to do an even larger number of smaller I/Os per unit time than SSDs today.
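As a rough illustration of that last point, the achievable I/O rate at a fixed throughput scales inversely with the transfer size. This is pure arithmetic using the 1.2 GB/s QD4 read figure quoted earlier in the thread; whether the controller could actually sustain 512-byte transfers at that rate is an open question:

    # I/O operations per second at a fixed throughput, for two hypothetical transfer sizes.
    throughput = 1.2e9            # bytes/s, the quoted QD4 read figure
    for size in (4096, 512):      # bytes per I/O
        print(f"{size:4d} B transfers -> ~{throughput / size / 1000:.0f}K IOPS")
    # 4096 B -> ~293K IOPS, 512 B -> ~2344K IOPS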
3D XPoint Optane was supposed to be the next best thing since sliced bread. So far it seems to be failing to impress. Did Intel fail to market this product right, or is it just way too late to market since there are better, cheaper options already out there?
1- No standalone Optane SSDs yet (to use as the main boot drive), and if there were any, they'd be too expensive.
2- Hybrid Optane+SATA SSD is not viable except for a small percentage of users, especially since it requires a complete new platform which also supports NVMe M.2, and I don't think the regular user will see a difference; I think NVMe M.2 will be a better option here.
3- Optane is still big. I mean, we can't expect 256GB and larger on standard M.2... they're promoting U.2 for such usage, so it's either built into the motherboard or using an M.2 to U.2 adapter.
4- For ultraportables, space is too limited to have two drives (Optane M.2 + SATA M.2 SSD), so only a single drive is possible... the best option is then NVMe M.2. Unless Intel shrinks Optane to make a single M.2 drive with both Optane and SATA NAND in a single-chip solution... the Optane controller must also be changed to allow connecting an external PCIe-SATA bridge or connecting NAND chips directly.
5- Is the cost of an Optane drive over an NVMe drive worth the difference? Maybe, but only for a fraction of users; the NVMe drives are faster in sustained usage, but Optane is faster in random... the smaller the random access, the better.
hyped like crazy.. and then a disappointment on many fronts.
i built two RYZEN systems for friends. no way i would build one for myself in the current state of things. BIOS issues and BSODs make building a RYZEN system no fun.
and i have wished so much for a cheaper, rock stable 8 core system i can render with.
i can't even remember when i had the last BSOD with intel.
these products have to mature a lot. maybe in 1-2 years we will see the full potential of RYZEN and OPTANE.
AMD CPUs "emulate" an Intel CPU and can never (contrary to opinions at this site) beat or destroy Intel in the market using technology directly licensed from Intel
ALL Wintel x86 software should be written directly for Intel CPUs for compatibility
I have never used an AMD CPU in the past 10 years because several programs running fine on Intel chips crashed repeatedly on AMD
If you are having driver issues, you're screwed until AMD releases a fix
Software issues are easier to fix: simply DO NOT use software that is incompatible with AMD chips
I have not had a BSOD on an Intel CPU running Windows XP in over 10 years now, since I eliminated DLL conflicts and registry errors by using only portable applications that keep their reg settings separate from the Windows Registry
The only BSODs I ever get now are on Windows 7 / 8.1 and 10 machines
There is a way for customers to boost sequential read and write performance, and it is by RAIDing some of them. For enterprises, they can build systems with up to 1200 GB/s read performance. It is the evolution of Optane, which will happen...
I'm late to the party, but please, in the follow-up, run some numbers for caching on a SATA SSD.
For example, the AW 13 R3 has 2 M.2 slots, but the 2nd one is limited in performance, so it's only suitable for a large storage SSD at SATA speeds like the MX300. If Optane can pull this drive out of SATA territory it might be worth looking at... for the right price, of course.
Hello, I need assistance with using Intel Optane in one case: a dual-drive configuration of SSD + HDD. If I use the SSD for the system and the HDD for storage, can I configure Intel Optane to cache the HDD and still get optimal performance from the HDD? Or should I just use the HDD for both the system and storage and add Intel Optane to it? Thanks for the explanation; it's really a good article.
Eden-K121D - Monday, March 27, 2017 - link
I'm confused. Can someone explain how this would work and what benefits would occur in real world usage ?Billy Tallis - Monday, March 27, 2017 - link
As with any cache, data that is frequently or recently used can be accessed more quickly from the cache device than from the larger, slower device. Intel's hope is that ordinary desktop usage is mostly confined to a relatively small data set: the OS, a few commonly-used applications, and some documents.When accessing data that fits in the cache, you'll get SSD-like performance. If you launch a program that isn't in the cache, it'll still be hard drive slow (assuming the cache backing device is a hard drive, of course). Sequential accesses don't have a lot of reason to use the cache and are probably excluded by Intel's algorithms to save cache space for random I/O.
saratoga4 - Monday, March 27, 2017 - link
You can make a large magnetic hard drive faster by adding an external cache. For people who can't afford a large enough SSD, this might be a good choice. SSDs are getting cheap though, so this feels like a product that needed to ship a few years earlier to have a real chance.Gothmoth - Monday, March 27, 2017 - link
it feels like a product searching for a reason to exist.if i need fast performance i buy a SDD that delivers 2 GB/s and not a cache device that delivers 1200 mb/s.
BurntMyBacon - Monday, March 27, 2017 - link
@GothmothKeep in mind that consumer NVMe SSDs that boast throughput of 2GB/s or more generally do not reach their peak at low queue depth. Optane is supposed to be able to drive 1200MB/s read throughput at low queue depth (not sure why they listed QD4), so there is potential for some performance improvement here. Most consumer workloads never get out of low queue depth territory, so this could have some small real world benefit. Write throughput, however, is critically low.
More importantly, these Optane drive are gear more towards lowering latency than transferring large files. Where HDDs access the data on the order of 10s of mS and SSDs access data on the order of 1mS (give or take), Optane should be able to access data on the order of 1s - 10s of uS. Where Optane will be useful is high numbers of small file accesses (DLLs, library files, etc.).
That all said, I'd just as soon leave all the extra complications, compatibility issues, and inconsistencies on the table and get that 2 GB/s sdd that you mentioned until Intel figures out how to make these more compatible and easier to use without requiring a "golden setup". I don't want to buy a new W10, Kaby Lake, 200 series based system just to use one of these. My current W7/W10/Ubuntu, Skylake, 100 series system should work just fine for a good while yet.
Sarah Terra - Monday, March 27, 2017 - link
Anyone remember intel turbo cache? This looks to be nearly the same thing, kind of a let down.BrokenCrayons - Monday, March 27, 2017 - link
I recall it being released with the 965 chipset and offering little to no benefit to the end user. In fact, I think HP and a few other OEMs didn't bother supporting it. Turbo Memory's disappointing performance is one of the reasons why I think Optane is better used as a higher endurance replacement for NAND flash SSDs than as a cache for now progressively less common conventional hard drives.Byte - Monday, March 27, 2017 - link
Maybe it will find a way into Intels SSDs and replace the SLC cache with the Optane with is much bigger and higher performance.beginner99 - Tuesday, March 28, 2017 - link
That would actually be pretty reasonable product compared to this.ddriver - Tuesday, March 28, 2017 - link
SLC is MUCH better than hypetane. Double the endurance, 1/100 the latency. It will we a big step back to replace SLC cache with xpoint.What the industry should really do is go back to SLC in 3D form. Because it doesn't look like xpoint has a density advantage either, as it is already 3D and it takes 28 chips for the measly 448GB. Samsung 960 pro has 2 TB in 4 chips. Sure that's MLC, which is twice as dense as SLC. Meaning that with 3D SLC you could have a terabyte in 4 chips.
Now, if you get short of 0.5 TB of xpoint with 28 chips, and you get 1 TB of much faster, durable and overall better SLC with 4 chips, that means it would take like 60 chips to get a TB with xpoint. Making potential 3D SLC "ONLY" 15 TIMES better in terms of density, while still offering superior performance and endurance.
Which begs the question, why the hell is intel pushing this dreck??? My guess, knowing the bloated obese lazy spoiled brat they are, they put a shameful amount of money into RDing it, and how they are hyping the crap out of it in order to get some returns. They most likely realized its inferiority way back, which prompted them to go for the hype campaign, failing to realize despite (or because of) their brand name, that would do more harm than good the moment it fails to materialize. Which it did - I mean look at how desperate they are at trying to find a market for this thing.
Time to add Hypetane to "handheld SOC" in the "intel's grandiose failures" category. The downsides of being a bloated monopolist - you are too full of yourself and too slow to react to a changing market to offer adequate solutions.
nICKd1976 - Wednesday, October 25, 2017 - link
Optane is the biggest waste of $$$$. Save for the real SSD... True garbage....
MrSpadge - Monday, March 27, 2017 - link
No, it's pretty similar to Intel Smart Response Technology introduced with Z68 and giving HDD users a decent gain, although obviously not full SSD performance.
StevoLincolnite - Monday, March 27, 2017 - link
If you use a system boosted by an SSD cache and remove that cache drive at a later date... the difference is pretty massive.
Got a 32GB Sandisk Readycache drive in another system, does a brilliant job for the cost. And it was cheap.
Gothmoth - Tuesday, March 28, 2017 - link
Why not buy hybrid HDDs? They are not that much more expensive.
I sure wouldn't waste a precious M.2 port on such a device.
ddriver - Monday, March 27, 2017 - link
They should really consider renaming it to Hypetane.
The perf figures listed here are for QD4. At that queue depth, a 960 Pro beats it easily in sequential reads and writes and random write IOPS, hypetane is only faster in random reads, and even then, not anywhere near the "1000x better than SSD" figures. And it remains to be seen if that advantage extends to the drive's entire LBA or if it is just for a limited data set fitting in some cache.
At any rate, at those capacities, it is kinda laughable. Just get some extra ram, it might cost a little more, but then again it will be much faster, much more durable, and with a decent ssd it will take just a few seconds to flush the working data set to persistent storage on shutdowns.
"Where Optane will be useful is high numbers of small file accesses (DLLs, library files, etc.)."
I'd say databases. Of course, provided your database is small enough to fit on such a drive. DLLs are not an issue, those are shared between all processes which link against them and are loaded in memory as long as they are used, and they are usually hundreds of KB to a few MB, so they absolutely do not qualify as something that would benefit from frequent random reads.
Alexvrb - Tuesday, March 28, 2017 - link
This drive does not look impressive on paper. However, the underlying concept of pairing a faster solid state tech with a larger, slower SSD is solid. Relatively slow (by today's standards) SATA/mSATA SSD caches did wonders for a system with a large mechanical drive. So the concept is sound. A 1TB 960 Pro is fairly expensive, compared to a 1TB SATA SSD.
Most of my workload is reads - I think that's fairly common for consumer use cases. A fast M.2 cache drive, primarily for reads, should boost performance quite a bit - if price, speed, and capacity are there. Now that goes back to this drive being insufficient. Capacities need to be in the 64-128GB area, with an upgraded x4 controller offering better speeds all-around. Meanwhile pricing needs to stay around $100 for the 128GB product. Then they might have something on their hands. We'll see how the second gen product looks.
I'd still like to see some testing, in case the read latency and low QD performance benefit a system more than anticipated. There's a lot of competition and I don't think this stands out in a crowded M.2 arena.
ddriver - Tuesday, March 28, 2017 - link
It was only impressive on hype; it is neither impressive on paper, nor is it impressive in practice.
I wouldn't normally care, there is enough room for mediocre technology under the sun. What annoyed me were the laughable claims of "1000 times better than flash" BULLCRAP, and how the simpletons bought it. And they still try to justify the hype, now that the product has turned out to be mediocre - now there are some mythical hidden merits only the chosen few would understand and appreciate.
You know what this sounds like? Like "the emperor's new clothes". You know, they are so great, that only smart people see them. To us, the silly ones, the emperor is fully nude.
http://images.hardwarecanucks.com/image/akg/Storag...
1000 times faster than nand? BS. 75% of the time it is SLOWER than last year's SSDs. Barely faster in random reads, and that's about it.
1000 times the endurance of flash? BS, at best twice as good as MLC, still way behind SLC. The xpoint media itself doesn't appear to be faster than SLC either.
10 times denser than flash? BS. Waaaay behind MLC.
Just give me 3d SLC SSD with improved controller. I don't care about hype and by extension, about the naked hypetane.
bcronce - Tuesday, March 28, 2017 - link
Samsung 960 Pro M.2 can only do about 130MiB/s QD4 random. This can do 1.2GiB/s. I think you underestimate those read latencies. Let's wait for some benchmarks.
ddriver - Tuesday, March 28, 2017 - link
And that's where its advantage runs out. Everywhere else it is vastly inferior.
BTW, dunno about the smaller models, but the 1TB 960 Pro does about 60k IOPS at random 4k reads, which makes for about 230 rather than 130 MB/s. And about 160k at random 4k writes, which is about 625 MB/s, more than twice as fast, and that's still mediocre MLC.
But yeah, let's really wait for the benchmarks, because what I expect to see is that hypetane's random reads advantage will result in NADA benefit in 9 out of 10 practical usage scenarios. And that it will be "superior" in one very, very narrow niche barely anyone cares about.
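The IOPS-to-throughput conversion being argued about here is just multiplication by the transfer size. A quick sanity check, taking the IOPS and MiB/s figures quoted in this exchange at face value (they are the commenters' numbers, not independently measured):

```python
# throughput = IOPS * transfer size; 4 KiB = 4096 bytes, 1 MiB = 1048576 bytes.
def mib_per_s(iops, block_bytes=4096):
    return iops * block_bytes / 1048576

print(mib_per_s(60_000))   # ~234 MiB/s -> matches the "about 230" random read figure above
print(mib_per_s(160_000))  # 625 MiB/s  -> matches the "about 625" random write figure above
print(mib_per_s(33_000))   # ~129 MiB/s -> roughly the 130 MiB/s QD4 figure quoted earlier
```

So the disagreement is really about which IOPS numbers apply at low queue depth, not about the arithmetic.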
BurntMyBacon - Tuesday, March 28, 2017 - link
@ddriver: "I'd say databases. Of course, provided your database is small enough to fit on such a drive. DLLs are not an issue, those are shared between all processes which link against them and are loaded in memory as long as they are used, and they are usually hundreds of k to a few mb, so they absolutely do not qualify as something that would benefit from frequent random reads."
I absolutely agree with databases here, but that isn't something you often see in a consumer system, so it really isn't the point of this consumer oriented cache. Also remember that this cache is persistent, so all those DLLs need to be loaded into memory every boot sequence. Not a great example as some people never shut down their system and if a large enough data set moves through the drive (depending on the internal cache algorithms) these files will be flushed anyways. Unfortunately I'm having trouble (as apparently is Intel) coming up with an obvious common use benefit for these drives. Perhaps a tablet/laptop/hybrid that is frequently powered on and off and uses hibernate or hybrid sleep would benefit from the low latency these present.
ddriver - Tuesday, March 28, 2017 - link
Intel is simply trying to find a market niche to cram that poor Hypetane miscarriage into.
Oh wow, a use case, for all the people who bought a brand new system to couple with a sole mechanical HDD and are desperately looking to populate their single M.2 slot with something as useless as possible. All 3 of them.
For consumers a SATA M.2 drive is more than enough, and it also offers better efficiency. There would be no tangible benefit to using Hypetane "cache"; money will be much better spent on a 128 GB SSD for a boot/OS drive. That would be better than "cache", especially considering how IDIOTIC windoze caching policies are. It keeps on caching the most useless junk. I have 64 gigs of RAM on my main box, and it kept on caching movies, ENTIRE movies, many many gigabytes, rather than the small, frequently used files. Which is why I disabled caching for all SSD drives and limited disk cache overall, cuz I hate wasting CPU cycles waiting on windoze to deallocate cached nonsense every time I run a memory demanding application. So no more of that genius "99% done, 10 minutes hanging on the last 1%" when copying big files to USB drives and such.
Alexvrb - Wednesday, March 29, 2017 - link
I have no idea why you're going on about it caching movies in RAM; anything in RAM can be overwritten instantly, so the RAM "usage" figures are misleading to novices. Write caching to RAM does give you a little bump, even with SSDs. If you really want max performance and you've got a reliable rig with a UPS/laptop battery, not only enable write caching, but disable write cache buffer flushing. That gives you the best of both worlds... provided you have reliable protection against power loss.
ddriver - Wednesday, March 29, 2017 - link
"anything in RAM can be overwritten instantly"Have you ever heard about this thing called memory management and heap allocators? You cannot just write in ram willinilly, that will result in an instant crash. Having windoze cache filling your entire ram means that for every single new allocation it has to make room, deallocate some of the useless garbage it caches, that doesn't involve any penalty in "wiping" ram clean or something like that, but there are CPU cycles consumed by releasing the memory from the caching kernel and allocating it to another process.
Heap memory allocations are rather slow to begin with, and having to keep on making room each and every time you allocate heap memory makes it even slower.
As for the caching part, it was not about write cache but about the infamous "super fetch", i.e. the "read cache".
XZerg - Monday, March 27, 2017 - link
That's the thing - who in their right mind, getting a 7th gen CPU, would want to get a 16 or 32GB cache utilizing a full M.2 slot instead of just getting a 128GB or 256GB M.2 SSD for the same price? Bigger storage and more usable across any config from the last 10 years... this is a dead product with a major marketing spin to make it feel like a real worthy product.
ddriver - Monday, March 27, 2017 - link
That's apparently Intel's view on how much "consumers" need: 32 gigs of storage in the (usually sole) M.2 slot. Might wanna throw in a few peanuts while they're at it, just so that they seem extra generous.
beginner99 - Tuesday, March 28, 2017 - link
Yeah, makes no sense. Who would buy a new Kaby Lake setup and not buy an SSD? The product would make some sense for older systems that still use an HDD only, but obviously there are no old Kaby Lake systems. And since most mobos have exactly 1 M.2 slot, good luck selling this, Intel. I recommend a crappy TLC 128 GB drive over this cache.
Alexvrb - Tuesday, March 28, 2017 - link
Well you know there ARE sub-$50 Kaby Lakes. Do they not support drive caching? The idea is that you could get both a larger, cheaper SSD and a faster, smaller one to act as a cache. Not a bad concept. The problem is that Intel's speed/capacity/price is NOT where it needs to be. I'd use a competing conventional 128GB M.2 SSD as a cache before this thing. Maybe next gen!
Samus - Tuesday, March 28, 2017 - link
And those 200MB/sec write speeds? This is a cache, after all. Clearly write caching won't be an improvement, so how is this radically better than a $100 2TB SSHD that has 8GB of SLC cache onboard? Not to mention the platform requirements for Optane caching and the inherent software complexity. This is just stupid of Intel to introduce something that costs this much based on a 10-year-old concept introduced in Vista.
It didn't work well then and it won't work well now. Yes, the caching capacities have grown (most ReadyBoost caches were like 16 or 24GB) but so have storage capacities and overall data sets. A single AAA game is 50GB. A single game. There goes your entire cache, assuming the dumb algorithms SRT uses can even figure out to cache the maps in the first place. My experience is, it doesn't.
ddriver - Tuesday, March 28, 2017 - link
Hypetane is so "good" Intel is struggling to find a market for it. It doesn't need to offer benefits, it doesn't even have to make sense. It just needs to sell, because otherwise it will look even worse than the complete and utter failure to live up to the "1000x better" hype.
My money is on FIFO. First in - first out, actual usage patterns are too bothersome to take into account. It will cache stuff regardless of what it is, and when it runs out of cache, it will start purging the oldest data, or AT BEST the oldest and oldest accessed data. But access frequency is definitely not accounted for.
In most cases, people who know their stuff will be better off having that volume dedicated for fast access, and putting the files they need to be fast there.
cheshirster - Tuesday, March 28, 2017 - link
As far as I understand, you not only need low queue depth, you also need small random reads for Optane to shine.
On a large read, like loading a big file, an SSD will win.
Windows will hold DLLs and libraries in RAM, so you'll only get a boost on the first run.
ImSpartacus - Monday, March 27, 2017 - link
I think it's just for SSD caching, but I agree that I'm also confused as to the value proposition to enthusiasts.
Shadowmaster625 - Monday, March 27, 2017 - link
In the real world, it won't work. In a world of <$100 256GB SSDs, this is useless. There are zero benchmarks that show cache drives offer the same performance as a true SSD boot drive. It is useless on a notebook because no one wants to power up two drives constantly. If the HDD could be powered down most of the time it would be one thing, but I seriously doubt this cache will be smart enough to enable that.
MrSpadge - Monday, March 27, 2017 - link
Why would it not do that? Cache the writes (SRT can do that) and wait to flush them to disk.
dullard - Monday, March 27, 2017 - link
This particular product is for the low-end consumer (enthusiast versions are due later this year or early next year). So, if you already have a high-end computer, this particular version probably doesn't have a real-world benefit for you.
The benefit for most people is that for $44 you can make a sluggish low-end computer far snappier. It is a pretty tiny price to pay for a potentially useful benefit. For example, PC World shows a test where GIMP took 14 seconds to load; they popped in an Optane module, and now it took 3 to 4 seconds. That is a noticeable and significant boost for a pretty small cost.
I'd like to see more tests with full reviews to see if that is a one-off review or the norm. But I'm actually interested in it so far. No, it won't revolutionize the computer. But it might make certain tasks feel far less frustrating.
asmian - Monday, March 27, 2017 - link
Caching isn't the same as mind-reading what work you want to do next. You still have to have loaded GIMP at least once beforehand to benefit from it still being stored in your fast cache, and that first time will always take as long as it takes from your slow storage.
I guess people who are RAM-starved and need to constantly open and close the same smallish programs from slow spinning drives they can't upgrade may feel a win, but the limitations on legacy processor/chipset support for this tech mean that users who would actually be able to fit it in their brand new motherboard (and be aware enough of it to do so) will doubtless already have an SSD for OS/program storage that negates virtually all the benefit, so they don't need it.
I vote Meh.
dullard - Monday, March 27, 2017 - link
I personally open programs like Word, Excel, internet browsers, etc. repeatedly before turning off the computer. So having them in cache will be quite helpful. You are assuming that SSDs aren't sped up too. This particular version probably won't help the fastest SSDs, but it will help some of them (those of us who move their existing SSD to their new computer build, for example). Finally, you got it on the RAM-starved point. Optane is a much better value proposition as $44 for 16 GB of more memory (albeit slower than normal memory) than as $44 for 16 GB of hard drive space.
XZerg - Monday, March 27, 2017 - link
The only problem with your "benefit for most people is that for $44 you can make a sluggish low-end computer far snappier" is you didn't read the full article and realize the limitations, namely hardware requirements - it must be a 7th gen system, and it needs an M.2 slot. So why would anyone in their right mind get this over a typical M.2 SSD that will work with any system, not just 7th gen and above? Also, why would one get this when for the price of 32GB of this I can get a 256GB SATA SSD (bigger and faster writes), and meet most users' storage requirements too as a full fledged drive?
dullard - Tuesday, March 28, 2017 - link
Because it makes SSDs faster too.
Lolimaster - Monday, March 27, 2017 - link
Between spending $44 for a band-aid and $60-90 for a proper drive, the cheap option is a really bad investment.
Notmyusualid - Tuesday, March 28, 2017 - link
Dumb-ass Joe Public doesn't care for a $90 SSD, and will just see one product (laptop / desktop) with a 240/250GB hard disk, and another product with a 1/2TB hard disk.
Even when he gets it home, he feels he has made the better decision. This product helps to plug those gaps, giving him an SSD 'feel' to his system, and the 'better' amount of storage too. Every dollar counts to OEMs, and customers.
I have tried Intel's caching 20GB SLC drive in both Z68 systems, and in my own laptop, in front of a 2TB slow-as-hell mechanical drive. I was very impressed indeed. Actually that 20GB SLC drive is still functional, and kicking around somewhere. I can't imagine an equivalent-aged 80GB MLC drive would still be alive under the same usage...
Lolimaster - Monday, March 27, 2017 - link
It's 16GB for that $44 vs 250GB for $80-90 of virtually the same fast type of storage (for typical consumer scenarios).
dullard - Tuesday, March 28, 2017 - link
There aren't just two options. The best option is $44 cache AND $80 SSD. This isn't an either/or situation.
fanofanand - Monday, March 27, 2017 - link
I wouldn't imagine that a ton of users who have Kaby Lake but no SSD are savvy enough to install an M.2 module. The low-end computer with an owner wanting Optane is a unicorn.
Samus - Tuesday, March 28, 2017 - link
Leave it to Intel to pull a 10-year-old technology out of the box (ReadyBoost) and rebadge it as a new, slightly faster (and in the event of write caching, slower) product, and charge a shitload for it.
"Intel notes that Optane Memory caching can be used in front of hybrid drives and SATA SSDs, but the performance benefit will be smaller and these configurations are not expected to be common or cost effective."
Yeah, that's because you neutered it with a slow ass PCIe interface.
Readyboost was a failure, RST is inherently complex and overall, sucks (Apple Fusion drive based on the same concept is substantially better, mostly because of the larger 128GB caching SSD) and who are they kidding, SRT wasn't much of an improvement over Readyboost.
Optane is fascinating tech, but where is it? What the hell good is it if they can't scale it up to usable sizes. The purpose of NV memory is to move away from mechanical storage, not supplement it. This doesn't fix all the other problems with having a hard drive, especially durability, power consumption and physical size. And it barely addresses the performance bottleneck. Even a 64GB cache on a 2TB drive is 1:18 caching ratio. Sure is better than an SSHD but it's also more expensive, more complex, and less compatible.
I can't believe Intel spent time redeveloping such a stupid fucking concept here.
CaedenV - Tuesday, March 28, 2017 - link
The point of Optane wasn't to cache the HDD/SSD, it was to replace RAM for instant-off/on and the ability to remove the need for traditional sleep states. But it didn't work, so they are marketing it as a HDD/SSD cache, and it will fail just as hard as the last 2 times it was attempted.
garygech - Tuesday, March 28, 2017 - link
If they deliver the technology on time with a matched mobile motherboard, and can lower the idle power consumption on a Cannon Lake platform, with 64 GB or 128 GB soldered onto the device, they might have a matched system that is remarkably fast due to low latency. The real benefit of a matched system would be superior performance at a lower TDP, so that you can get 10 hours out of a laptop that is 1 pound at a price point of $899 for an integrated system, like the Surface Pro 5. Otherwise, what is the point? If you have plenty of power, just go with a much larger and less expensive SSD.
CaedenV - Tuesday, March 28, 2017 - link
The problem is that there is no benefit.... at least not in this config.
The idea was that SSDs were stuck at SATA3 speeds and demand for more speed was coming hot and heavy, so Optane/3D XPoint was developed as a cache technology to bring a bit more performance to systems that were already screaming fast by replacing the need for RAM. While technically a little slower than the RAM it replaced, the Optane memory would allow for full power-off without losing stored memory. This would allow for literal instant-off, instant-on capabilities in devices as things would not need to be spooled back into RAM. While not quite as fast as DRAM, it would be more than fast enough for most applications (as they are bottle-necked by the CPU and GPU rather than memory bandwidth), while offering up to 32GB at prices far lower than RAM. And this was all supposed to happen 2-3 years ago.
Well, it got stuck in development hell. NVMe SSDs hit the market with the m.2 interface allowing for far higher performance SSDs. DDR4 was released, offering higher density and lower prices than DDR3 (though about the same performance). Plus I suspect (total speculation here) there were issues adapting OSs to the new RAM-less architecture, which made it a ram augment rather than a RAM replacement.
So what are we stuck with? Essentially what we had back with the Sandy Bridge architecture a few years back. SSDs were very expensive, so Intel made the ability to use an SSD as a cache for a HDD, which could offer extreme performance gains. It too was limited to 32GB (though I believe a firmware update allowed for 64GB cache later), and required a RAID setup, and pretty much all of the same limitations we see here... and nobody used it. By the time it was released SSDs were tanking in price. The added speed was inconsistent, SSHDs did the same thing better, there were battery life issues, etc. etc. etc. All the same proposed benefits, just as the high performance tech was dropping in price, all the same pitfalls, and again no manufacturer in their right mind would use this. We might *might* see this in a few low-end ultrabooks as a way to offer higher capacity while still getting the 'ultrabook' tag for having a 'ssd', but that may be about it.
That said, in the server market this is going to be a huge plus! Being able to have a 2TB cache on a large RAID array will help a lot of workloads (especially in multi-user databases and virtual environments) while being cheaper than a RAM drive and faster than a SSD. There are still uses for this tech, and it will be a big deal... just not in the consumer space, and not how it was originally promised a few years ago when it was still being developed.
repoman27 - Monday, March 27, 2017 - link
Erm, so the QD4 4K throughput is the same as the sequential then? That's odd.
The power consumption is ridiculous.
Anyone know what the outlook is for die stacking with 3D-XPoint?
Billy Tallis - Monday, March 27, 2017 - link
Having similar sequential and random throughput is unsurprising for 3D XPoint since it doesn't have large page sizes and massive erase block sizes to contend with.
Wire-bonded die stacking for 3D XPoint is no more complicated than for NAND; Intel's doing it on the enterprise SSDs where they don't have an excess of PCB space. Vertical scaling of the layer count within a single die might be easier than increasing the layer count for 3D NAND, but it's too soon to say for sure.
saratoga4 - Monday, March 27, 2017 - link
Should be similar to NAND.
bcronce - Tuesday, March 28, 2017 - link
Not even close. QD4 random read for NAND is about 10% of sequential. Random write has high throughput by masking with those 1GiB+ caches. Samsung 960 Pro M.2 has an average random read latency of about 150 µs, which is about 25x worse. Expect the latencies to get worse as SSDs reach higher densities.
I expect 3D XPoint to get lower latencies as the controllers improve, assuming we're not already up against PCIe's limit.
andychow - Sunday, April 2, 2017 - link
But he is correct that random read and write for this product are the same as its sequential numbers. Just multiply the IOPS by the block size for the random figures and, bingo, you have the sequential speeds.
unrulycow - Monday, March 27, 2017 - link
It's hard to see where the market for this is. It only makes sense if your primary workload is small enough to fit in the cache, but you need more storage than fits into a reasonably priced SSD. Most use cases will be significantly faster with an SSD.
Gothmoth - Monday, March 27, 2017 - link
I'm not impressed.... those who want the best performance will buy a fast M.2 SSD, not this stuff.
YazX_ - Monday, March 27, 2017 - link
Useless. The price for 32GB is the same as a 512GB SSD, so better to use an SSD instead of this waste of time and money.
cekim - Monday, March 27, 2017 - link
Better to get 32GB or more of DDR.
They seem to be off by an order of magnitude in their talk of making vast memory images with this caching mechanism worthwhile. Surprisingly enough (not at all), caches work. Applications, compilers and coders long ago found ways of dealing with tiered latency and throughput. So, if you are going to make a difference, disks the size of very, very small memory footprints aren't it.
Come talk to me when there are capital T's in the size...
MrSpadge - Monday, March 27, 2017 - link
"32GB is same as 512GB SSD"You get decent 512 GB SSDs for 77$, when the cheapest one (MX300) costs 140€ here? Amazing, you should open up a business.
nagi603 - Monday, March 27, 2017 - link
...damn, why 2280 only? Seriously, use the damn size advantage!
DanNeely - Monday, March 27, 2017 - link
Probably cost reasons and logistics. Current prices make a module big enough to be a system drive prohibitively expensive in a minimum enthusiast-class size (~$275 for 128GB), so there's no point behind a larger module. From the other direction, the pictured one is big enough that, assuming it's the 32GB size, the 2260 size might not fit the 2nd Optane storage chip; and as a very niche device, running up costs with separate 16/32GB PCBs isn't a good idea.
Chaitanya - Monday, March 27, 2017 - link
Since it's for caching, the write endurance rating is crap at best (sure, no one is going to write 100GB/day, but there are conventional SSDs with better write endurance). Seems like Intel did overpromise on this so-called XPoint technology. Intel is certainly under pressure and losing money to Samsung in the SSD market.
MrSpadge - Monday, March 27, 2017 - link
Well, the SSDs with those endurance ratings are easily 10 times larger. So Optane is not (yet?) game changing in this regard, but significantly better.
cekim - Monday, March 27, 2017 - link
NTS
Ahrana - Monday, March 27, 2017 - link
Where this makes a lot of sense is as a write cache on in-home file servers. Think ZFS SLOG device. The Intel S3700 and S3500 have been king for a long time due to low latency. Size is fine too because ZFS does a commit every 5 seconds, so the drives don't need to be massive. The question is whether either of these drives has power loss protection. If so, then these are perfect.
JoeyJoJo123 - Monday, March 27, 2017 - link
They explicitly mention several times that RAIDed configurations are not supported.
Given that a major selling point of using ZFS volumes is enabling redundancy through RAID-Z, I doubt you're going to get anywhere useful with Intel Optane M.2 when it's for a single-disk ZFS volume.
Billy Tallis - Monday, March 27, 2017 - link
The comment about RAID not being supported applies only to Intel's caching software. On a BSD or Linux system, you'd operate the system with the storage controller in AHCI mode and the Optane Memory device would show up as a normal NVMe block device.
bobbozzo - Monday, March 27, 2017 - link
Hi, we're not talking about the ZFS RAID/data disks; we're talking about the ZFS ZIL/SLOG drives which are just used for temporary journaling and buffering. They don't need to be in RAID or mirror sets.
bobbozzo - Monday, March 27, 2017 - link
Yeah, this is what I was thinking of.
IIRC, the ZIL/SLOG devices don't absolutely have to have power protection; if they fail, the ZFS FS on the main datastore won't be _corrupted_, iirc.
(you'll still lose the newest writes of course)
For the same reasons, the ZIL/SLOG doesn't need to be mirrored.
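For anyone curious how the SLOG idea above would actually be wired up: adding a separate intent log to an existing pool is a single zpool operation. A minimal sketch follows; the pool name and device path are placeholders, not anything from the article, and as noted above you'd still want to check the device's power-loss behavior before trusting sync writes to it.

```python
# Minimal sketch: attach a small low-latency device to a ZFS pool as a SLOG.
# "tank" and the device path below are hypothetical placeholders.
import subprocess

POOL = "tank"
SLOG_DEVICE = "/dev/disk/by-id/nvme-example-optane"  # placeholder path

# "zpool add <pool> log <device>" adds a separate intent log vdev to the pool.
subprocess.run(["zpool", "add", POOL, "log", SLOG_DEVICE], check=True)
# Verify the new log vdev shows up in the pool layout.
subprocess.run(["zpool", "status", POOL], check=True)
```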
MrSpadge - Monday, March 27, 2017 - link
Or as an SSD cache in Storage Spaces, I think supported in Server 2016 and up.
beginner99 - Tuesday, March 28, 2017 - link
Win 10 64-bit only...
Notmyusualid - Tuesday, March 28, 2017 - link
Indeed, makes it a ho-hum launch.
lefty2 - Monday, March 27, 2017 - link
Only supports Windows 10? There's no Linux support?
Billy Tallis - Monday, March 27, 2017 - link
Intel's RST is a Windows-only piece of software. On a Linux system, you can use Optane Memory drives as ordinary NVMe SSDs, and take your pick of caching and volume management solutions: bcache, dm-cache, ZFS, etc.
Intel's storage software situation on Windows is a mess because Windows doesn't have anything as flexible as the Linux general-purpose block device abstraction, and because Intel is still trying to enforce product segmentation by requiring certain CPUs.
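To make the Linux option above concrete, here is a minimal bcache pairing of a hard drive with a small NVMe cache device. The device paths are hypothetical placeholders and make-bcache comes from the bcache-tools package; dm-cache or ZFS would be configured differently but serve the same role.

```python
# Minimal sketch: put a small NVMe device in front of an HDD with bcache on Linux.
# Both device paths are hypothetical placeholders; double-check them before running,
# since make-bcache formats whatever it is pointed at.
import subprocess

BACKING = "/dev/sda"    # placeholder: the large, slow hard drive
CACHE = "/dev/nvme0n1"  # placeholder: the small, fast cache device

# Creating the backing and cache devices in one call attaches them to each other.
subprocess.run(["make-bcache", "-B", BACKING, "-C", CACHE], check=True)
# The cached volume then appears as /dev/bcache0 and is formatted/mounted like any disk.
subprocess.run(["lsblk"], check=True)
```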
garygech - Tuesday, March 28, 2017 - link
All the profit in mobile will be at the 10 nm node, and Optane will play a significant role if optimized and launched successfully. In the consumer and commercial markets, Linux is not a play. Hardware vendors need to compete with the iPhone and iPad to stay pertinent. I personally use a Mac at home. I would say that Windows 10 is the premier computer operating system in the world based on total revenue, profit per unit shipped, as well as market share. iOS is dominant in mobile in terms of profits, and Android dominant in mobile in terms of market share. Windows based storage systems remain the most proliferated systems in the world for the consumer market. Optane was released first in the server market, but the real model would be to enter the consumer space. So long as latency is low, bandwidth is high on new motherboards released with Cannon Lake, and Windows 10 future builds are optimized for Optane, the product might be a hit. If Intel fails to deliver in the lower power space, the product could be a flop. Microsoft has made it clear it will no longer service Windows 7, and is moving on. Windows 10 is their future, that is how they feel, and they have the firepower and the market presence to strong-arm those who will not go along. 2018 may be another excellent Microsoft year, and if Intel hits the 10 nm node on x86, this might propel the market towards the Surface Pro 5 and other professional mobile solutions.
JoeyJoJo123 - Monday, March 27, 2017 - link
Feels like a very limited product line with few applications to customers.
1) If you're not on a 7th gen i-series CPU + 200 series chipset motherboard, Intel Optane WON'T WORK FOR YOU.
2) If you're on a 7th gen i-series CPU + 200 series chipset motherboard AND you're not already using RAID AND you're not already using a different m.2 boot device, then Intel Optane MIGHT help your performance.
It feels like a very limited Intel RST (Rapid Storage Technology) replacement. Lots of motherboards from Intel for the past few years supported RAID + SSD caching (even for RAIDed volumes) through this tool. Basically, with Intel RST, you could have bought a new PC and new larger capacity and faster SATA/m.2 SSD, but you still have a smaller/slower SSD + your HDD from your last build. With Intel RST you could install the OS on your new drive, plug in your old HDD(s) and old SSD, and then configure through Intel RST to cache the HDDs giving them a boost in responsiveness and allowing you to get some use out of older/smaller SSDs in the form of a tangible caching solution.
Here, Optane is configured to be a caching solution (like Intel RST), but you're limited to brand spanking new hardware (and let's face it, given stagnating Intel performance upgrades year-after-year, even die-hard Intel fans are likely still on older platforms than the current newest), and it's basically only compatible with Intel Optane m.2 devices, for what is likely a minor speed boost in the form of caching for slower media such as SATA SSD or HDDs, but isn't compatible with caching RAIDed volumes.
Like, this product is barely applicable to anyone. And even though Intel RST is readily applicable to lots of people, not many people bothered with it or knew about it even to this day.
Bullwinkle J Moose - Monday, March 27, 2017 - link
It WAS a limited production......of demo units they couldn't give away, so they are being rebranded as consumer retail units (LOL)
For the past 10 years, performance enthusiasts have been using SSDs for boot drives and games while keeping static (mostly unused) data on a second drive
Caching a 2TB - 5400RPM static drive would result in a hit rate of less than 2% for data we hardly ever use anyway and caching a much faster SSD is pointless
NOBODY on THIS planet should "currently" be booting to 2TB - 5400RPM platter drives that need caching!
(Especially on Kaby Lake hardware with Windows 10)
AnandTech should interview the GENIUS who thought selling X-Point demo units they couldn't give away was such a great idea.....
After all, we haz many questions
JoeyJoJo123 - Monday, March 27, 2017 - link
Intel RST's caching for the hard drive is still useful though. Say you have a 2TB HDD exclusively for hundreds of Steam game installs. Instead of manually installing games onto a separate disk (often the same SSD the OS is installed on) you can just write everything to the 2TB HDD volume. Intel RST could cache up to 64GB of data on an SSD, which while small, typically meant that your 2 - 3 most common Steam games would be entirely cached on the SSD without having to go back and forth on what's on the SSD and removing it back to the HDD after you've had your fun with the game.
MrSpadge - Monday, March 27, 2017 - link
Actually SRT doesn't work per game, nor per file; it operates on the block level. So you'd have the things you frequently use in those games cached, but nothing else. This makes the storage space a lot more useful than people think when comparing it with install sizes.
Notmyusualid - Tuesday, March 28, 2017 - link
....but however it works, it works very well.
Gothmoth - Tuesday, March 28, 2017 - link
Personally I would not waste a precious M.2 port on a cache device like this.
Maybe when our mainboards have 4 or 6 M.2 ports. :)
I have one fast 1 TB M.2 SSD for the OS and 5 data disks (HDDs) in my system.
I don't care much how fast the data disks are. They are for storage only.
I don't see this Optane SSD creating much interest for most users.
Gothmoth - Tuesday, March 28, 2017 - link
OK, hybrid HDDs are not that fast... but they help. Personally I would not waste a precious M.2 port on a cache device like this.
Maybe when our mainboards have 4 or 6 M.2 ports. :)
I have one fast 1 TB M.2 SSD for the OS and 5 data disks (HDDs) in my system.
I don't care much how fast the data disks are. They are for storage only.
I don't see this Optane SSD creating much interest for most users.
Denithor - Tuesday, March 28, 2017 - link
...except that this is configured to ONLY cache the boot drive. So, no.
IntelUser2000 - Monday, March 27, 2017 - link
Actually, if the performance works well enough, I bet it's going to be fairly popular.
No, you may not get to see it. Why? Because you are an "enthusiast".
Most markets are sold with HDDs. 80% in fact. And the industry moves to the newest generation fairly quickly. 30-40% of new computers sold are based on the latest generation. 80% of that is 24% to 32%. Meaning 24-32% of the new computers sold will benefit from Optane Memory.
You'll see Optane Memory pre-configured systems coming from vendors real soon. The pricing is reasonable too.
DanNeely - Monday, March 27, 2017 - link
DanNeely - Monday, March 27, 2017 - link
Unless it gives a much bigger performance boost than the PR slides/spec sheets are implying, I don't think we will. Tiny SSD caches with an HDD never really went anywhere because the bargain market would rather have just an HDD and a slightly lower price point, and the enthusiast market was generally underwhelmed by the overall performance and either didn't bother or manually split media/etc off of a midsized SSD to an HDD. The problem for Optane+SSD is that SSDs are good enough in a way that HDDs weren't; and the price of a 32GB Optane is more than enough to get you from a 128 to 256GB SSD and most of the way from 256 to 512GB. For most consumers the bigger SSD will be a better option. The rarity of dual SSD M.2 slots on laptops, combined with the larger 2280 form factor and idle power level, also means it's mostly going to be a desktop part in this version; meaning you've eliminated a big chunk of the market already.
CaedenV - Tuesday, March 28, 2017 - link
Yes, and they use HDDs simply because they are cheap. An OEM bulk 500GB HDD costs them $30 vs a 250GB OEM SSD that would cost them $50. The idea that they are going to 'spring' for an expensive cache layer for an additional $40-70 is ridiculous. I mean, for that price they could just get rid of the HDD entirely and move to an SSD in the first place! Even a 1TB OEM SSD is only going to cost ~$110.
But that is the whole point. Cheap laptops are cheap BECAUSE they use cheap parts. PC makers will continue to use HDDs on the low end, and SSDs in the mid to high end, because nothing else makes any sense.
name99 - Monday, March 27, 2017 - link
It's more interesting, I think, to consider what Intel is NOT shipping and to ask why.
As far as I can tell (given the limited tech details Intel has released) what we have is something that's both less and more than Apple's Fusion technology.
Less than Fusion in that the cache sizes are smaller (32 or 64GB; Apple provides 128GB, although you can create your own fusion drive using any SSD you like --- I created one from a 64GB FW2 external drive fused to the internal HD of an old iMac);
more than Fusion in that the cache is apparently common to all drives --- which is nice but also constrains how aggressively the caching can perform.
MORE interesting is that, again as far as I can tell, this caching role ONLY REQUIRES BLOCK WRITES. In other words it takes no advantage of that supposed improvement of 3D XPoint that it has byte read/writes. And the caching offered seems unlikely to provide any real value over similar caching done using an SSD (ie the Apple solution).
Compare to the product that Intel COULD have shipped...
Imagine an SSD that's a 3D-Xpoint hybrid. The 3D-XPoint memory is used to maintain the various metadata of the flash (so all the usage stats, the remap tables etc) --- basically uses where the byte granularity would be of substantial value, and would be used ONLY by the SSD controller so would not be relevant to file system and OS.
This seems like it could be an SSD with a nice performance boost over standard SSDs, not just because the metadata updating is faster, but because it allows for the use of different, more flexible, flash block management algorithms.
But we are not seeing such a product. Why? Everything Intel has done seems to suggest that SOMETHING about their story is fishy. Either that supposed byte granularity doesn't work (ie so many extra ECC bits are required that large sector granularities are the only feasible architecture) and/or the costs of the memory are so much higher than flash that they couldn't create a hybrid at a viable cost point.
Either way, I see this not as the next great step in Optane, but as an attempt to try to get something, ANYTHING, to work with the technology, regardless of the fact that what they're selling solves a non-problem.
Billy Tallis - Monday, March 27, 2017 - link
The caching is only of the boot volume, not all drives. But that limitation might be removed in future driver versions. It is a bit of an unfortunate limitation that it's just a storage cache and doesn't also get cleanly used as swap space and for hibernation, but those data blocks can land in the cache if they're detected as hot enough by the cache management algorithms.
I do think once 3D XPoint is more openly available on the market as a component, we'll see some drives replace their DRAM with 3D XPoint and ditch the supercaps. It's probably a bit too soon to tell if the economics will work out right for that to be a viable product, but there will certainly be a lot of engineers interested in making that kind of prototype. In the short term, such a product is obviously more complicated to design (since 3D XPoint needs wear leveling) and it's no surprise that the first Optane devices to hit the market are simple NVMe SSDs.
garygech - Tuesday, March 28, 2017 - link
Optane has to be integrated in some pattern at the 10 nm node with Cannon Lake and a system-on-a-chip design. The entire goal has to be higher efficiency, lower latency, and more reliability. Intel demonstrated this with graphics on the chip, and this was very well received in mobile platforms. I think the problem has been yields. The proof of principle is there. The motherboard change is there. The software improvements are present. The yields at the 10 nm node have been low. This has been their hurdle, and seems to have delayed the Surface Pro 5, which was their goal: a completely integrated Wintel solution, a mobile Surface Pro powerhouse. We will have to wait 6 months to see if they hit their benchmarks. If they hit 10 nm at high yield by September, my guess is they will be primed for a great holiday season. But the key is an integrated platform. Think how integrated the iPhone 7 has become. This is the premier mobile platform in the world right now based on profitability per unit.
name99 - Wednesday, March 29, 2017 - link
While Intel talks about this and muses how best to enforce segmentation to screw enterprise, TSMC is actually DOING it ("integrated Optane").
Look at item number 6 on this URL:
https://www.semiwiki.com/forum/content/6675-top-10...
and the date (2H17).
OK, so the storage capacities are presumably NOT at the sorts of level Intel has in mind. But there's something to be said for actually SHIPPING, and improving as you go. TSMC has established a reputation for quietly shutting up until they are sure they can deliver, while every quarter Intel comes across more as the belligerent drunk, shouting at everyone "You think you can take me? You think you can take me? Let me tell you how awesome I am, and all the things I got planned."
Would it be crazy to imagine that (in a side deal that's NOT in the options TSMC lists) Apple does something like get 512MB of MRAM or reRAM embedded on 2018 or 2019's A- SoC, and used to cache the FS metadata (catalog trees and suchlike, maybe also the FTL metadata) of the main flash storage? All the time while Intel is still talking about how great Optane is, one day, going to be.
(Alternatively maybe you add this storage to Apple's custom flash controller, which I assume is on 28nm.)
It's not clear to me the extent to which this is yet possible. Everspin dies look way too large for this, but MRAM density should (in principle) be denser than DRAM, so I don't know if that's because they're using WAY older lithography as a way to hold down burn rate as they perfect the technology.
BrokenCrayons - Monday, March 27, 2017 - link
I hope that second generation Optane will offer improved endurance, higher capacities, and lower prices. I'd consider using it as an OS/boot drive and then using a cheaper 3D TLC drive (or MLC, depending on capacity needs versus prices) as a storage disk, but I don't think I'd want to use Optane as a cache for an HDD. Benchmarks that show clear performance improvements might change my mind, but I just don't think I need an HDD for bulk storage these days.
jjj - Monday, March 27, 2017 - link
This product is idiotic but the pricing is somewhat encouraging. If they launch proper SSDs, some folks will be able to afford them at $2 per GB.
Arbie - Monday, March 27, 2017 - link
(I almost hate to bring this up as it appears to have been studiously avoided...)
SemiAccurate has had a lot to say about this technology, and it's worth reading if you're interested in Optane. Demerjian does generally let his mouth run ahead of his brain, but it is a good brain.
sharath.naik - Monday, March 27, 2017 - link
This is kinda pointless, isn't it? Samsung provides for RAM to be used as a cache with their RAPID storage interface. So what's special about this? Samsung's RAM caching is free and will be faster, as DRAM is still faster than Optane. The only advantage is fail safety on power failure, so this is only useful for desktops needing a bit of speed for short burst writes. But wait, you can buy a battery backup for your desktop that can shut down the system in case of power failure in a clean manner. It would cost only $50-100.
Hmmm.. ok I see no point to this. Get it cheap enough with high enough capacity to compete with SSDs; else Samsung's RAPID RAM caching will simply outperform this.
voicequal - Monday, March 27, 2017 - link
Until your battery backup fails, which it will. It's not just power failures - if the system crashes, it may never be able to write the data in RAM to disk. DRAM is perfect for a read cache, but bad as a write cache if you care about your data.
MrSpadge - Monday, March 27, 2017 - link
The OS is already using any "unused" DRAM as a small write cache and a large read cache. By increasing the fraction of write cache you take away from the read cache. Overall not that much of an improvement compared to adding storage.
Meteor2 - Monday, March 27, 2017 - link
I'd like to see this bench tested, because if it really can make a 2 TB HDD *feel* like an SSD for a much lower price, then I think that's really rather good.
SmCaudata - Monday, March 27, 2017 - link
I don't get it.
The only market I can actually see for a product like this is for large companies where speed matters from a productivity standpoint. For instance, I'm in healthcare and I am constantly logging into different terminals throughout the day and launching and relaunching programs. Most users do the same things on these computers all day long. Something like this paired with a low cost HDD makes sense since the volume is so high.
The extra $$ spent on this could be enough to jump from an HDD to an SSD. Most general computer users don't need many TB of storage and can get away with a modest SSD. I suppose OEM makers can market SSD speed with large storage to lure people in, but it's not practical.
Enthusiasts will opt to spend a little extra on an NVMe storage solution that uses 4 PCIe lanes anyway, in addition to an array of HDDs if they are running a backup or media server.
I really don't see the market for this.
Bullwinkle J Moose - Monday, March 27, 2017 - link
"The caching is only of the boot volume, not all drives."-------------------------------------------------------------------------
How many Kaby-Lake Win 10 Laptops will be sold this year with 2TB X 5400RPM boot drives that can take advantage of this cache for less than 2% of your most used data?
I thought this was the year for Boot SSD's on the majority of new Laptops?
I can see how this cache might help 10-year old tech "IF" and only "IF" this tech could work on anything other than Kaby-Lake and Windows 10
Are any of your readers planning on platter based "Boot" drives in their new Kaby Baby?
Bullwinkle J Moose - Monday, March 27, 2017 - link
What I meant was.....Are any of your readers planning on platter based "Boot" drives in their new "DESKTOP" Kaby?
Anyone at all?
How bout you Ryan?
Sound like fun?
Bullwinkle J Moose - Tuesday, March 28, 2017 - link
Billy?Anyone?
FXi - Monday, March 27, 2017 - link
Where is the replacement for the Intel 750 series? Those were supposed to be updated for 3D by this time (though I know everything has been delayed).
Billy Tallis - Monday, March 27, 2017 - link
I don't think there's much point in Intel updating the 750 with 3D NAND. The controller is at least as big a limitation, and can't compete in the consumer space.
jabber - Monday, March 27, 2017 - link
If this was really 'needed', Seagate and Toshiba would have 16/32GB Hybrid HDDs out already.
watzupken - Monday, March 27, 2017 - link
Depending on the price, I feel the cost will far exceed the benefit for this product. On the other hand, it is actually good to see that there is a new solid state drive with a focus on lower latency instead of transfer rate.
aryonoco - Monday, March 27, 2017 - link
This is a product searching for a market.
And outside of very niche use cases, I don't see a market for it, for the same reason that small SSDs used as HDD cache did not fly off the shelves 6 years ago.
Jad77 - Monday, March 27, 2017 - link
So I bought a Z270 for this? Yippee!
Lolimaster - Monday, March 27, 2017 - link
You already failed by buying a Z270 that stops at a simple quad core vs getting a cheaper X370 and a full-fledged 8-core Ryzen 1700 that will let you do many things while gaming or be a productivity monster at just 65W.
Mine draws near 45W with a 0.9V undervolt.
Gothmoth - Tuesday, March 28, 2017 - link
LOL. Who wants blue screens and BIOS issues en masse?
Ryzen is a beta product.... plagued by issues.
I built two systems for friends. No way I would build one for myself in the current state of things.
Lolimaster - Monday, March 27, 2017 - link
The cache thing utterly fails when you can get things like an MX300 275GB for less than $100. Meanwhile 16-32GB is just not enough.
Any cheapo 120GB SSD will offer, to the end user's eyes, the same kind of response.
dullard - Tuesday, March 28, 2017 - link
Or get both and be faster.
beginner99 - Tuesday, March 28, 2017 - link
This pointless stuff coming out of Intel really makes me wonder what is happening internally. It has the feel of stupid management forcing engineers to create bullshit products because management doesn't have a clue.
garygech - Tuesday, March 28, 2017 - link
My guess is simple. This is the initial launch of a technology with significant upside in netbooks and less expensive notebooks when Atom at 10 nm is released along with an i5 Y-series at 10 nm, with motherboards that have very high bandwidth. If Optane can be lower power at idle, at 64 GB this would be fine for a netbook or a low cost laptop, allowing fairly high computing power at a low power draw. The holy grail of mobile is 10 hours. If Optane can get Intel based laptops to 10 hours, at a lower power draw, with faster performance, then that will be marketable. $77 for 32 GB is a high price, but if that price would come down with yields, to 64 GB for $50, Optane could be in new netbooks by ASUS and HP, marketed specifically for a 10 hour experience to compete with older segments. A 128 GB Optane paired with a 10 nm i5 Y-series will simply be much faster than any MacBook from 2017. Coupled with an upgraded Windows 10 build, this mobile platform will be very snappy. This could replace SSDs in a Surface Pro 5 as an option, as users will pay for lower latency.
vladx - Wednesday, March 29, 2017 - link
The Atom line was cancelled; only Core M remains in the <10W space.
twotwotwo - Tuesday, March 28, 2017 - link
The NVMe interface may mean you can't really play with byte addressability yet. It looks like it requires a block size of at least 512 bytes. (This is from a quick search so I might be reading old documents or might be just wrong.) And this particular device may only offer 4KB blocks, given that a lot of other stuff assumes 4KB or a multiple of it now. Given a controller that could handle them, it could be interesting to be able to do an even larger number of smaller I/Os per unit time than SSDs today.
milkod2001 - Tuesday, March 28, 2017 - link
3D XPoint Optane was supposed to be the next best thing since sliced bread.
So far it seems to be failing to impress. Did Intel fail to market this product right, or is it just way too late to market since there are better, cheaper options already out there?
Xajel - Tuesday, March 28, 2017 - link
Well, too bad.. there was high hype for this.
1- No standalone Optane SSDs yet (to use as the main boot drive), and if there is one, it's too expensive.
2- A hybrid Optane+SATA SSD setup is not viable except for a small percentage of users, especially as it requires a completely new platform which also supports NVMe M.2, and I don't think the regular user will see a difference; I think NVMe M.2 will be a better option here.
3- Optane is still big; I mean we can't expect 256GB and larger on standard M.2... they're promoting U.2 for such usage. So it's either built into the motherboard or using an M.2-to-U.2 adapter.
4- For ultraportables, space is very limited for two drives (Optane M.2 + SATA M.2 SSD), so only a single drive is possible... the best option is NVMe M.2 then, unless Intel shrinks Optane to make a single M.2 drive with both Optane and a single-chip M.2 SATA solution... the Optane controller must also be changed to allow it to connect to an external PCIe-SATA bridge or to connect NAND chips directly.
5- Is the cost of an Optane drive over an NVMe drive worth the difference? Maybe, but only for a fraction of users; the NVMe drives are faster in sustained usage, but Optane is faster in random... the smaller the random access, the better.
Gothmoth - Tuesday, March 28, 2017 - link
OPTANE reminds me of RYZEN.
Hyped like crazy.. and then a disappointment on many fronts.
I built two RYZEN systems for friends. No way I would build one for myself in the current state of things. BIOS issues and BSODs make building a RYZEN system no fun.
And I have wished so much for a cheaper, rock-stable 8-core system I can render with.
I can't even remember when I had my last BSOD with Intel.
These products have to mature a lot.
Maybe in 1-2 years we'll see the full potential of RYZEN and OPTANE.
Bullwinkle J Moose - Tuesday, March 28, 2017 - link
RYZEN BSOD? Driver issue or software issue?
AMD CPU's "emulate" an Intel CPU and can never (contrary to opinions at this site) beat or destroy Intel in the market using technology directly Licensed from Intel
ALL Wintel x86 software should be written directly for Intel CPUs for compatibility
I have never used an AMD CPU in the past 10 years because several programs running fine on Intel chips crashed repeatedly on AMD
If you are having driver issues, you're screwed until AMD releases a fix
Software issues are easier to fix
Simply DO NOT use software that is incompatible with AMD chips
I have not had a BSOD on a Intel CPU running Windows XP in over 10 years now since I eliminated DLL conflicts and registry errors by using only Portable Applications that keep their reg settings separate from the Windows Registry
The only BSODs I ever get now are on Windows 7 / 8.1 and 10 machines
upanddown - Tuesday, March 28, 2017 - link
It looks more like a big joke from Intel...
nobodyblog - Tuesday, March 28, 2017 - link
There is a way for customers to boost sequential read and write performance, and that is by RAIDing some of them. For enterprises, they can build systems with up to 1200 GB/s read performance. It is the evolution of Optane which will happen...
Thanks!
evenyourmom - Tuesday, March 28, 2017 - link
Wait, why would anyone buy it, if you can use any cheaper SSD and something like Diskache, which is currently free?
zodiacfml - Wednesday, March 29, 2017 - link
You will see people having this with high-end SSD drives.
Gothmoth - Wednesday, March 29, 2017 - link
Why is AnandTech so quiet? Too much money from Intel?
https://semiaccurate.com/2017/03/27/intel-crosses-...
evilpaul666 - Monday, April 3, 2017 - link
Can you RAID0 two of these on Z270 boards that have two M.2 slots?
And wasn't the 32GB version supposed to support PCIe x4?
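On the x4 question, a back-of-the-envelope PCIe 3.0 bandwidth check is useful context. This is ballpark math only: it uses the nominal 8 GT/s per lane and 128b/130b encoding and ignores protocol overhead, so real links deliver somewhat less.

```python
# Rough PCIe 3.0 link bandwidth: 8 GT/s per lane, 128b/130b line encoding,
# protocol/packet overhead ignored (real throughput is a bit lower).
GT_PER_S = 8e9
ENCODING = 128 / 130
per_lane_gb_s = GT_PER_S * ENCODING / 8 / 1e9  # ~0.985 GB/s per lane

for lanes in (2, 4):
    print(f"x{lanes}: ~{lanes * per_lane_gb_s:.2f} GB/s raw link bandwidth")
# x2 works out to roughly 1.97 GB/s, already above the ~1.2 GB/s read figure quoted
# in this thread, so the two-lane link is unlikely to be the main limiter here.
```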
pasta514 - Tuesday, April 4, 2017 - link
These M.2 are x2 lanes. Performance will double with x4. Intel is sandbagging. But why?
TheRealDrDuck - Monday, May 15, 2017 - link
I'm late to the party, but please, in the follow-up, run some numbers caching on a SATA SSD.
For example, the AW 13 R3 has 2 M.2 slots but the 2nd one is limited in performance, so it's only suitable for a large storage SSD at SATA speeds like the MX300. If Optane can pull this drive out of SATA territory it might be worth looking at... for the right price of course.
hamv17 - Saturday, November 4, 2017 - link
Hello, I need assistance with using Intel Optane in case one, the dual-drive configuration of SSD + HDD. If I use the SSD for the system and the HDD for storage, can I configure Intel Optane to cache the HDD and still get optimal performance from the HDD? Or should I just put the system and storage on the HDD and use Intel Optane as in case three? Thanks for the explanation, it's a really good article.