Funny yes, but teething issues aside, the random write performance is several orders of magnitude faster than all existing storage media. This is the number one metric I find that plays into system responsiveness, boot times, and overall performance, and the most ignored metric by all manufacturers to date. They all go for sequential numbers, which don't mean jack except when doing large file copies.
So let's summarize:

1000 times faster than NAND - in reality only about 10x faster at hypetane's few strongest points, 2-6x better in most others, maximum throughput lower than consumer NVMe SSDs; intel lied about speed by about 100 times LOL. Also from Tom's review, it became apparent that until the cache of comparable enterprise SSDs fills up, they are just as fast as hypetane, which only further solidifies my claim that xpoint is NO BETTER THAN SLC, because that's what those drives use for cache.
1000 times the endurance of flash - in reality like 2-3x better than MLC. Probably on par with SLC at the same production node. Intel lied by about 300-500 times.
10 times denser than flash - in reality it looks like density is actually way lower than flash. 400 gigs in what.. like 14 chips was it? Samsung has planar flash (no 3d) that has more capacity in a single chip.
So now they step forward to offer this "flash killer" as a puny 32 gb "accelerator" which makes barely any to no improvement whatsoever and cannot even make it through one day of testing.
That's quite exciting. I am actually surprised they brought the lowest capacity 960 evo rather than the 600p.
Consumer grade software already sees no improvement whatsoever from going sata to nvme. It won't be any different for hypetane. Latency at low queue depth access is good, but that's mostly the controller here; in this aspect NAND SSDs have tremendous headroom for improvement. Which is what we are most likely going to see in the next generation of enterprise products; obviously it makes zero sense for consumers, regardless of how "excited" them fanboys are to load their gaming machines with terabytes of hypetane.
Last but not least - being exclusive to intel's latest chips is another huge MEH. Hypetane's value is already low enough at the current price and limited capacity, the last thing that will help adoption is having to buy a low value intel platform for it, when ryzen is available and offers double the value of intel offerings.
Your bias is showing.

1000x -> Harp on it all you want, but that number was for the architecture, not the first generation end product. It represents where we can go, not where we are. I'll also note that Toms gave it their editor approved award - "As tested today with mainstream settings, Optane Memory performed as advertised. We observed increased performance with both a hard disk drive and an entry-level NVMe SSD. The value proposition for a hard drive paired with Optane Memory is undeniable. The combination is very powerful, and for many users, a better solution than a larger SSD."
"1000 times the endurance of flash -> You can concede that 3D XPoint density isn't as good as they originally envisioned, but it's still impressive, gen1, and has nowhere to go but up. It's not really worse than other competing drives per drive capacity - this cache supports like 3 DWPD basically. The MX300 750GB only supports like .3 DWPD. 10x better is still good.
10 times denser than flash -> DRAM, not Flash. And it's going to be much denser than DRAM.
Barely any to no improvement -> LOL, did you look at the graphs? Those lines at the bottom and on the left were 500GB and 250GB Sata and NVMe drives getting killed by Optane in a 32GB configuration. 3D XPoint was designed for low queue depth and random performance - i.e. things that actually matter, where it kills its competition. Even sequential throughput, which is far from its design intention, generally outperforms consumer drives.
So, Optane costs, in an enterprise SSD, 2-3x more than other enterprise drives, for record breaking low queue depth throughput that far surpasses its extra cost, while providing 10-80x less latency. In a consumer drive, Optane regularly approaches an order of magnitude faster than consumer drives in only a 32GB configuration.
If Optane is only as fast as SLC, I'd love to understand why the P4800X broke records as pretty much the fastest drive in the world, barring unrealistically high queue depths.
This 32GB cache might be a stopgap, and less compelling of a product in general because of its capacity, but that you could deny the potential that 3D XPoint holds is absolutely laughable. The random performance and low queue depth performance is undeniably better than NAND, and that's where consumer performance matters.
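To put the DWPD figures in the comment above in absolute terms, here is a minimal sketch; the warranty lengths are illustrative assumptions, not rated specs:

```python
# DWPD (drive writes per day) = full-capacity writes per day over the
# warranty period. Warranty lengths below are assumptions for illustration.

def rated_writes(capacity_gb, dwpd, warranty_years):
    gb_per_day = capacity_gb * dwpd
    lifetime_tb = gb_per_day * 365 * warranty_years / 1000
    return gb_per_day, lifetime_tb

for name, cap_gb, dwpd, years in [
    ("Optane Memory 32GB", 32, 3.0, 5),    # ~3 DWPD per the comment
    ("Crucial MX300 750GB", 750, 0.3, 3),  # ~0.3 DWPD per the comment
]:
    per_day, tbw = rated_writes(cap_gb, dwpd, years)
    print(f"{name}: {per_day:.0f} GB/day, ~{tbw:.0f} TB over warranty")

# Per gigabyte of capacity the Optane cache tolerates ~10x the writes,
# even though the much larger MX300 absorbs more absolute data per day.
```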
"I'd love to understand why the P4800X broke records"
Because nobody bothered to make an SLC drive for many, many years. The last time there were purely SLC drives on the market was years ago, with controllers completely outdated compared to contemporary standards.
SLC is so good that today they only use it for cache in MLC and TLC drives. Kinda like what intel is trying to push hypetane as. Which is why you can see SSDs hitting hypetane IOPS with inferior controllers, until they run out of SLC cache space and performance plummets due to direct MLC/TLC access.
I bet my right testicle that with a comparable controller, SLC can do as well and even better than hypetane. SLC PE latencies are in the low hundreds of NANOseconds, which is substantially lower than what we see from hypetane. Endurance at 40 nm is rated at 100k PE cycles, which is 3 times more than what hypetane has to offer. It will probably drop as process node shrinks but still.
"10x better is still good"
Yet the difference between 10x and 1000x is 100x. Like imagine your employer tells you he's gonna pay you 100k a year, and ends up paying you 1000 bucks instead. Surely not something anyone would object to LOL.
I am not having problems with "10x better". I am having problems with the fact it is 100x less than what they claimed. Did they fail to meet their expectations, or did they simply lie?
I am not denying hypetane's "potential". I merely note that it is nothing radically better than nand flash that has not been compromised for the sake of profit. xpoint is no better than SLC nand. With the right controller, good old, even ancient and almost forgotten SLC is just as good as intel and micron's overhyped love child. Which is kinda like reinventing the wheel a few thousand years later, just to sell it at a few times what it's actually worth.
My bias is showing? Nope, your "intel inside" underpants are ;)
SLC has severe limits on density and cost. It's not used because of that. Even at the same capacity as these initial Optane drives it would likely cost considerably more, and as Optane's density increases there is no ability to mitigate that cost with SLC; it would grow linearly with the amount of flash. The primary mitigations already exist: MLC and TLC. Of course those reduce the performance profile far below Optane and decrease its ability to handle wear. Technically SLC could go with a stacked die approach, as MLC/TLC are doing; however, nothing really stops Optane from doing the same, making that at best a neutral comparison.
SLC is half the density of MLC. Samsung has 2 TB of MLC worth in 4 flash chips. Gotta love 3D stacking. Now employ epic math skills and multiply 2 by 0.5, and you get a full TB of SLC goodness, perfectly doable via 3D stacked nand.
And even if you put 3D stacking aside - if I am not mistaken the sm961 uses planar MLC, 2 chips on each side for a full 1 TB. Cut that in half and you'd get 512 GB of planar SLC in 4 modules.
Now, I don't claim to be that good in math, but if you can have 512 GB of SLC nand in 4 chips, and it takes 14 for a 400 GB of xpoint, that would make planar SLC OVER 4 times denser than xpoint.
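For what it's worth, the per-package arithmetic checks out if you take the comment's own chip counts at face value (they are the commenter's estimates, not confirmed specs):

```python
# Per-package density implied by the figures above (commenter's estimates).
slc_gb_per_chip = 512 / 4      # hypothetical planar SLC: 512 GB in 4 packages
xpoint_gb_per_chip = 400 / 14  # 400 GB of xpoint in ~14 packages

print(f"planar SLC: {slc_gb_per_chip:.1f} GB/package")     # 128.0
print(f"xpoint:     {xpoint_gb_per_chip:.1f} GB/package")  # ~28.6
print(f"ratio:      {slc_gb_per_chip / xpoint_gb_per_chip:.1f}x")  # ~4.5x
```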
Thus if at planar dies SLC is over 4 times denser, stacked xpoint could not possibly be better than stacked SLC.
Severe limits my ass. The only factor at play here is that SSDs are already faster than needed in 99% of the applications. Thus the industry would rather churn MLC and TLC to maximize the profit per grain of sand being used. The moment hypetane begins to take market share, which is not likely, they can immediately launch SLC enterprise products.
Also, it should be noted that there is still ZERO information about what the xpoint medium actually is. For all we know, it may well be SLC, now wouldn't that be a blast. Intel has made a bunch of claims about it, none of which seemed plausible, and most of which have already turned out to be a lie.

You can 3D stack Optane as well. That's a wash. You seem very obsessed with being right, and not with understanding the technology.
Why are you so sure you understand the technology? Intel has told us nothing about how it works. What we have are:
- a bunch of promises from Intel that are DRAMATICALLY not met
- an exceptionally lousy (expensive, low capacity) product being sold.
You can interpret these in many ways, but the interpretation that "Intel over promised and dramatically underdelivered" is certainly every bit as legit as the interpretation "just wait, the next version (which ships when?) will be super-awesome".
If Optane is capable TODAY of density comparable to NAND, then why ship such a lousy capacity? And if it's not capable, then what makes you so sure that it can reach NAND density? Getting 3D-NAND to work was not a cheap exercise. Does Intel have the stomach (and the project management skills) to last till that point, especially given that the PoS that they're shipping today ain't gonna generate enough of a revenue stream to pay for the electric bill of the Optane team while they take however long they need to get out the next generation.
Intel hasn't confirmed what it is, but AFAICT all the signs point to xpoint being phase-change ram, or at least very similar to it. Which still leaves a lot of wiggle room, of course.
Implying Intel would use only the revenue of Optane to fund their next generation of Optane. You forget how much profit they make milking their processors? *Insert Woody Harrelson wiping tears away with money gif*
Be careful. What he's criticizing is the HYPE (ie Intel's business plan for this technology) rather than the technology itself, and in that respect he is basically correct. It's hard to see what more Intel could have done to give this technology a bad name.
- We start with the ridiculous expectations that were made for it. Most importantly the impression given that the RAM-REPLACEMENT version (which is what actually changes things, not a faster SSD) was just around the corner.
- Then we get this attempt to sell to the consumer market a product that makes ZERO sense for consumers along any dimension. The product may have a place in enterprise (where there's often value in exceptionally fast, albeit expensive, particular types of storage), but for consumers there's nothing of value here. Seriously, ignore the numbers, think EXPERIENCE. In what way is the Optane+hard drive experience better than the larger SSD+hard drive or even large SSD and no hard drive experience at the same price points. What, in the CONSUMER experience, takes advantage of the particular strengths of Optane?
- Then we get this idiotic power management nonsense, which reduces the value even further for a certain (now larger than desktop) segment of mobile computing.
- And the enforced tying of the whole thing to particular Intel chipsets just shrinks the potential market even further. For example --- you know who's always investigating potential storage solutions and how they could be faster? Apple. It is conceivable (obviously in the absence of data none of us knows, and Intel won't provide the data) that a fusion drive consisting of, say, 4GB of Optane fused to an iPhone or iPad's 64 or 128 or 256GB could have advantages in terms of either performance or power. (I'm thinking particularly for power in terms of allowing small writes to coalesce in the Optane.) But Intel seems utterly uninterested in investigating any sort of market outside the tiny tiny market it has defined.
Maybe Optane has the POTENTIAL to be great tech in three years. (Who knows, since, as I said, right now what it ACTUALLY is is a secret, along with its real full spectrum of characteristics.) But as a product launch, this is a disaster. Worse than all those previous Intel disasters whose names you've forgotten, like Viiv or Intel Play or the Intel Personal Audio Player 3000 or the Intel Dot.Station.
Meanwhile in the server space we are pretty happy with what we've seen so far. I get that it's not the holy grail you expected, but honestly I didn't read Intel's early info as an expectation that gen1 would be all things to all people and revolutionize the industry. What I saw, and what was delivered, was a path forward past the world of NAND and many of its limitations, with the potential to do more down the road.
Today, in low volume and limited form factors, it likely will sell all that Intel can produce. My guess is that it will continue to move into the broader space as it improves incrementally generation over generation, like most new memory products have done. Honestly the greatest accomplishment here is Intel and Micron finally introducing a new memory type, at production quantity, with a reasonable cost for its initial markets. We've spent years hearing about phase-change, racetrack, memristor, MRAM and on and on, and nobody has managed to introduce anything at volume since NAND. This is a major milestone, and hopefully it touches off a race between Optane and other technologies that have been in the permanent 3-5 year bucket for a decade plus.
Yeah, I bet you are offering hypetane boards by the dozens LOL. But shouldn't it be more like "in the _servers that don't serve anyone_ space", since in order to take advantage of them low queue depth transfers and latencies, such a "server" would have to serve what, like a client or two?
I don't claim to be a "server specialist" like you apparently do, but I'd say if a server doesn't have good saturation, then either your business sucks and you don't have any clients, or you have more servers than you need and should cut back until you get good saturation.
To what kind of servers is it that beneficial to shave off a few microseconds of data access? And again, only in low queue depth loads? I'd understand if hypetane stayed equally responsive regardless of the load, but as the load increases we see it dwindling down to the performance of available nand SSDs. Which means you won't be saving on say query time when the system is actually busy, and when the system is not it will be snappy enough as it is, without magical hypetane storage. After all, servers serve networks, and even local networks are slow enough to completely mask out them "tremendous" SSD latencies. And if we are talking an "internet" server, then the network latency is much, much worse than that.
You also evidently don't understand how the industry works. It is never about "the best thing that can be done", it is always about "the most profitable thing that can be done". As I've repeated many times, even NAND flash can be made tremendously faster, in terms of both latency and bandwidth; it is perfectly possible today and has been technologically possible for years. Much like it has been possible to make cars that go 200 MPH, yet we only see a tiny fraction of cars actually capable of that speed. There has been a small but steady market for mram, but that's a niche product; it will never be mainstream because of technological limitations. It is pretty much the same thing with hypetane: regardless of how much intel are trying to shove it to consumers in useless product forms, it only makes sense in an extremely narrow niche. And it doesn't owe its performance to its "new memory type" but to its improved controller, and even then, its performance doesn't come anywhere close to what good old SLC is capable of technologically as a storage medium, which one should not confuse with a complete product stack.
The x25-e was launched almost 10 years ago. And its controller was very much "with the times", which is the reason the drive does a rather meager 250/170 mb/s. Yet even back then its latency was around 80 microseconds, with the "latest and greatest" hypetane struggling to beat that by a single order of magnitude 10 years later. Yet technologically the SLC PE cycle can go as low as 200 nanoseconds, which is 50 times better than hypetane and 400 times better than what the last pure SLC SSD controller was capable of.
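The ratios in that paragraph, worked out explicitly (all figures are the commenter's claims, not measurements):

```python
# Latency ratios implied by the claims above.
x25e_ns = 80_000    # ~80 us: Intel X25-E, ca. 2008
optane_ns = 10_000  # ~10 us: roughly one order of magnitude better
slc_pe_ns = 200     # claimed low-end SLC program latency

print(f"hypetane vs raw SLC cell: {optane_ns / slc_pe_ns:.0f}x")  # 50x
print(f"X25-E vs raw SLC cell:    {x25e_ns / slc_pe_ns:.0f}x")    # 400x
```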
No wonder the industry abandoned SLC - it was and still is too good not only for consumers but also for the enterprise. Which begs the question: with the SLC trump card being available for over a decade, why would intel and micron waste money on researching a new medium? And did they really do that, or did they simply take good old SLC, smear a bunch of lies, hype and cheap PR on it, and step forward to say "here, we did something new"?
I mean come on, when was the last time intel made something new? Oh that's right, back when they made netburst, and it ended up a huge flop. And then, where did the rescue come from? Something radically new? Nope, they got back to the same old tried and true, and improved instead of trying to innovate. Which is also what this current situation looks like.
I can honestly think of no better reason to be so secretive about the "amazing new xpoint", unless it actually is neither amazing, nor new, nor xpoint. I mean if it is a "tech secret" I don't see how they shouldn't be able to protect their IP via patents; if it really is something new, it is not like they are short on the money it will take to patent it. So there is no good reason to keep it such a secret other than the intent to cultivate mystery over something that is not mysterious at all.
It is only natural to have negative sentiments about greedy, lousy corporations because of what they do. It is nothing personal though, I do it because I am a conscious human being. Not cattle. You can throw crapple and moogle into the mix. There is no single good reason to be fond of any corporation. The bigger they are the more damage they do to humanity and the planet as a whole.
...and you are so blinded by your hatred that you dismiss every single thing these companies do. You are not rational in the slightest but do like to boast about how great you are.
Nailed it eddman. Because it does not personally solve ddriver's problems, or because it comes from the wrong brand, its an epic disaster. The funny thing here is I agree this is not a revolution, at least not yet, but the incessant bashing and inability to acknowledge that it has its uses and those use cases are likely to only grow demonstrates the bias involved.
To the insinuation that Optane may somehow be relabeled SLC NAND, I went and did a little research/consultation. All NAND requires writing to blocks, Optane can support bit level writes (expected in DIMM configurations), which is a major advantage over NAND and not technically possible with NAND. It was also pointed out that if Optane was simply disguised SLC, despite the technical impossibility, it would mean that Intel had engaged in financial fraud by materially misrepresenting its technology, capabilities and long-term expectations to investors.
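A toy model of the write-granularity difference described above - not Intel's implementation, just an illustration of why block-erase semantics matter:

```python
# NAND can only program erased pages and only erase whole blocks, so an
# in-place update forces a read-modify-erase-program cycle of the block.
# A bit/byte-addressable medium can simply overwrite in place.

class NandBlock:
    PAGES = 64

    def __init__(self):
        self.pages = [None] * self.PAGES  # None = erased, programmable

    def program(self, page, data):
        if self.pages[page] is not None:
            raise RuntimeError("page not erased; must erase the whole block")
        self.pages[page] = data

    def erase(self):  # all-or-nothing
        self.pages = [None] * self.PAGES


class ByteAddressableMedium:
    def __init__(self, size):
        self.cells = bytearray(size)

    def write(self, offset, data):  # overwrite in place, any granularity
        self.cells[offset:offset + len(data)] = data
```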
OMG it's the fastest product on the market in its class but because I choose to interpret the early marketing as applying to the first gen product it totally sucks! I refuse to benefit from drastically better performance because Intel *dared* to speak to its potential performance and didn't deliver that in the first product!
In fact, I am so enraged I'm ripping out all my existing SSD's and replacing them with Quantum Bigfoot drives in protest.
It's probably because Intel dared to do something innovative again, and we can't possibly give credit where it's due, can we? If it was Samsung, I bet it would just be Samsung being Samsung. Slap the blue name on top, and it's cool to criticize whatever you can, even in the face of hard numbers. Make sure you also include an edgy name like "Hypetane" to really drive your point home.
To be fair if it were Samsung we'd get a lecture on the oppression of North Korea mixed in there somewhere along with a conspiracy theory about the south being a puppet state not permitted to succeed in the face of America.
Well, I don't want to degrade intel's efforts on this. But it's the intel/micron co-operation that engineered this, and I would even guess further that the science behind it is more micron tech than intel's.
That's fair, and Micron definitely deserves credit as well. I'm sure they'll get their own when QuantX comes out, hopefully sometime this year. I suspect that the R&D was split very evenly, though; Intel has always been good at doing things "well" in the fab; Micron had excelled at doing them "cheaply" which is one reason the venture was reasonably successful. Plus, I feel it would be hard to collaborate on R&D together for 10 years and successfully say "we did this together" to the public, if one side (Micron or Intel) did most of the work. I guess we'll never know, though.
Yeah, daring intel, the pioneer, taking mankind to better places.
Oh wait, that's right, it is actually a greedy monopoly that has mercilessly milked people while making nothing aside from barely incremental stuff for years and through its anti-competitive practices has actually held progress back tremendously.
As I already mentioned above, the last time "intel dared to innovate" it resulted in netburst. Which was so bad that in order to save the day intel had to... do what? Innovate once again? Nope, god forbid; what they did was go back and improve on the good design they had scrapped in their futile attempts to innovate.
And as I already mentioned above, all the secrecy behind xpoint might be exactly because it is NOTHING innovative, but something old and forgotten, just slightly improved.
Also, unlike you, I don't let personal preferences cloud my objectivity. If a product is good, even if made by the most wretched corporation out there, it is not a bad product just because of who makes it, it is still a good product, still made by a wretched corporation.
Even if intel wasn't a lousy bloated lazy greedy monopolist, hypetane would still suck, because it isn't anywhere near the "1000x" improvements they promised. It would suck even if intel was a charity that fed the starving in the 3rd world.
I would have had ZERO objections to hypetane, and also wouldn't call it hypetane to begin with, if intel, the spoiled greedy monopolist was still decent enough to not SHAMELESSLY LIE ABOUT IT.
Had they just said "10x better latency, 4x better low queue depth performance" and stuff like that, I'd be like "well, it's ok, it is faster than nand, you delivered what you promised."
But they didn't. They lied, and lied, and now that it is clear that they lied, they keep on lying and smearing with biased reviews in unrealistic workloads.
This is what companies do. Your technology is useless unless you can market it. And you don't market anything by saying it's mediocre. Look at BP's high octane fuel, which supposedly cleans your engine and gets better fuel efficiency. The ONLY thing that higher octane fuel does is resist auto-ignition under compression better, and thus certain high performance engines require it. As for cleaning your engine - you're telling me you've got a solvent which is better at cutting through crap than petrol AND can survive the massive temperatures and pressures inside the combustion chamber? It's the petrol which scrubs off the crap, so yes, it's technically true. They might throw an additive or two in there, but that will only help pre-combustion chamber, and only if you actually have a problem. And yes, in certain newer cars with certain sensors you will get SLIGHTLY higher MPG, and therefore they advertise the maximum you'll get under ideal conditions, because no one will buy into it if you're realistic about the gains. The gains will never offset the extra cost of the fuel, however.
PC marketing is exactly the same and why the J Micron controller was such a disaster so many years ago. They went for advertised high sequential throughput numbers being as high as possible and destroyed the random performance, Anand spotted it and OCZ threw a wobbler. But that experience led to drives being advertised on random performance as well as sequential.
So what's the lesson here? We should always take manufacturer's claims with a mouthful of salt and buy based on objective criteria and independent measurements. Manufacturers will always state what is achievable in basically a lab set up with conditions controlled to perfection. Why? Because for one you can't quote numbers based on real life performance because everyone's experience will differ and you can't account for the different variables they'll experience. And for two, if everyone else is quoting the maximum theoretical potential, you're immediately putting yourself at a disadvantage by not doing so yourself. It's not about your product, it's about how well you can sell it to a customer - see: Stupidly expensive Dyson Hairdryer. Provides no real performance benefit over a cheap hairdryer but cost a lot in R&D and is mostly advertising wank for rich people with small brains.
As for Intel being a greedy monopoly... welcome to capitalism. If you don't want that side effect of the system then bugger off to Cuba. Capitalism has brought society to the highest standard of living ever seen on this planet. No other form of economic operation has allowed so many to have so much. But the result is big companies like Intel, Google, Apple, etc, etc.
Advertising wank is just that. Figures to masturbate over. If they didn't do it then sites like Anandtech wouldn't need to exist as products would always be accurately described by the manufacturer and placed honestly within the market and so reviews wouldn't be required.
I doubt they lied completely - they will be going on the theoretical limits of their technology when all engineering limitations are removed. This will never happen in practice and will certainly never happen in a gen 1 product. Also, whilst I see this product as being pointless, it's obviously just a toe dipping exercise like the enterprise model. Small scale, very controlled use cases and therefore good real world use data to be returned for gen 2/3.
Personally, whilst I'm wowed by the figures, I don't see how they're going to improve things for me. So what's the point in a different technology when SLC can probably perform just as well? It's a different development path which will encounter different limitations and as a result will provide different advantages further down the road. Why do they continue to build coal fired power stations when we have CCGTs, wind, solar, nukes, etc? Because each technology has its strengths and weaknesses and encounters different engineering limitations in development. Plus a plurality of different, competing technologies is always better as it creates progress. You can't whinge about monopolies and then when someone starts doing something different and competing with the established norm start whinging about that.
I have never once had an SSD fail because it has over-used its flash memory... but controllers die all the time. It seems that this will remain true for this as well.
And that's exactly what we're suspecting here. We've likely managed to hit a bug in the controller's firmware. Which to be sure, isn't fantastic, but it can be fixed.
Prior to the P3700's launch, Intel sent us 4 samples specifically for stress testing. We managed to disable every last one of them. However Intel learned from our abuse, and now those same P3700s are rock-solid thanks to better firmware and drivers.
Looks like an almost completely useless interim memory device for almost all (non-server) workloads. Combine that with a size of 32GB on Kaby Lake, and it begs the question: what is the point? Why not release a ready product that has a market niche, and not a slimmed-down beta that is a solution looking for a problem?
The point is they burned through a mountain of cash to R&D this flop and now they are desperately trying to get some of it back. It is a product that doesn't fit in 99% of the market. Thus the solution is to try and shove it anywhere possible, regardless of how little sense it makes.
Perhaps, but better to just wait for pricing to come in line and have the entire disk made from optane or similar. Still can't believe the random writes, this is the biggest jump since the original intel X-25. Basically on any file larger than 4kb you are starting at 4x performance and going waaaaaay up.
True, since a SATA based SSD is much cheaper than a NVME drive. I'd like to see the comparison of Optane + 1TB SATA SSD vs 1TB NVME SSD. The 1TB SATA SSD + Optane would be cheaper solution than a 1TB NVME.
"The test that I would be interested in is if this technology could be an effective cache is speeding up mainstream SSDs." That's exactly what I was wondering i.e. if I paired it with my SATA 250 EVO. Or, they have a Crucial MX300 SATA SSD in the test which is an OK lower priced SSD. Given the optane drives are $44 and $77 respectively, if someone had something like the MX300 they might be tempted to pair it with an Optane cache. On the other hand you have to have the latest Intel CPU and chipset, and I just jumped ship and went with a Ryzen 5 - so its all academic to me.
Yes, the replacement will be delivered tomorrow. But don't expect the follow-up article to be real soon. I also want to update the software on the testbeds and run a reasonably large number of drives through, and do some deeper experimentation with the caching to probe its behavior.
For mainstream TLC SSDs, for sure there will be a speed-up measurable in benchmarks. Whether we as users would actually notice a difference is a completely different question. Going by KISS, instead of spending money on this cache drive, just buy a tier-higher SSD: if mainstream, choose a 960 evo instead; or instead of a 960 evo, choose a 960 pro.
"Only Core 13, 15 and i7 processors are supported; Celeron and Pentium parts are excluded."
There's a typo or I've never seen those Core 13 and Core 15 CPUs before.
From the data you showed, I see no real benefit in using Optane as a caching solution vs. using an SSD as the boot drive. At least not at that price point.
For the full review, could you also monitor DRAM usage? 16GB is not really an entry-level setup, so with that much DRAM Intel's software might be caching to DRAM as well like Samsung's RAPID mode, which would inflate the scores.
Might also be worthwhile to run at least a couple of the application tests with 4GB/8GB of DRAM to see how things work when caching is done fully by Optane.
Speaking of real-world tests, I am waiting for SQL Server tests on an Optane SSD - like on that DC P4800X. The "enterprise" review of the 4800 was all synthetic benchmarks with some disclaimer that they can't simulate all enterprise loads. Sure, you can't simulate everything, but I'm very disappointed that -nothing- enterprise level was even tested.
SSD cache / hybrid SSD drives work okay on certain workloads, mainly productivity stuff, but if you have a lot of games/media they tend to fill up really quickly, and I don't think any of the companies that write the algorithms, Intel included, have really figured out how to reliably decide, over long usage periods, what should be in the cache and what shouldn't.
I have a 24GB SSD cache (ExpressCache) in my notebook; I put the OS/programmes on one partition and all the media on a second partition, and set it to only cache the first partition. This setup works pretty well.
I also have a Hybrid SSHD in another laptop (only 8GB I think) that I mostly use as a background downloading PC, and after a few days of doing this any useful boot / OS / Chrome stuff that was in the cache has been evicted and it's back to booting at the same speed as a regular HDD.
Nice in theory, highly variable in practice. I never tried the Intel SRT out because larger SSD affordability improved a lot after it was released.
Per the article: "However, the Optane Memory can also be treated as a small and fast NVMe SSD, because all of the work to enable its caching role is performed in software or by the PCH on the motherboard. 32GB is even (barely) enough to be used as a Windows boot drive, though doing so would not be useful for most consumers."
Are you also going to test Intel SRT with a ~$77 SATA SSD and the same WD HDD? I bet it would perform about the same, and SRT works with non-boot drives.
These two Optane reviews interrupted my work on putting together a new 2017 consumer SSD test suite to replace our aging 2015 suite. When the new test suite is ready, you'll get comparisons against the broad range of SSDs that you're used to seeing and more polished presentation of the data.
Per the article: "However, the Optane Memory can also be treated as a small and fast NVMe SSD, because all of the work to enable its caching role is performed in software or by the PCH on the motherboard. 32GB is even (barely) enough to be used as a Windows boot drive, though doing so would not be useful for most consumers."
A desktop Linux distro would fit nicely on it with room for local file storage. I've lived pretty happily with a netbook that had a 32GB compact flash card on a 2.5 inch SATA adapter that had Linux Mint 17.3 on it. The OS and default applications used less than 8GB of space. I didn't give it a swap partition since 2GB was more than enough RAM under Linux (system was idle at less than 200MB and I never saw it demand more than 1.2GB when I was multi-tasking). As such, there was lots of space to store my music, books, and pics of my cat.
And imagine how well DOS will run. And you have ample space for application and data storage. 32 gigs - that's what dreams were made of in the early 90s. Your music, books and cat pics are just icing on the cake. Let me guess, 64 kbit mp3s right?
Looking at the size of it, I'm wondering why they didn't make a 48GB model that would fill up the 80mm stick fully. Or, unless the 3D XPoint dies fully fill the area in the packages, make them slightly smaller to support the 2260 form factor (after accounting for the odds and ends at the end of the stick, the current design looks like it's just too big to fit on the smaller size).
Once again, I have to ask.... who on earth is this product for? So you have a cheap $300 laptop, which is going to have a terrible display, minimal RAM, and a small HDD or eMMC drive... are they expecting these users to spring for one of these drives to choke their CPU?
Maybe a more mainstream $500-900 laptop where price is still ultra competitive. What sales metric does this add to which will promote sales over a cheaper device with seemingly the same specs? Either it will have an SSD onboard already and the performance difference will go un-noticed, or it will have a large HDD and the end-user is going to scratch their heads wondering why 2 seemingly identical computers have 4GB of RAM and a 1TB HDD, but one costs $100 more.
Ok, so maybe it is in the premium $1000-2000 market. Intel says it isn't aiming at these devices, but they are Intel. Maybe they think a $1000-2000 laptop is an 'affordable' mass-market device? Here you are talking about ultrabooks; super slim devices with SSDs... oh, and they only have 1 PCIe slot on board. Just add a 2nd one? Where are you going to put it? Going to add more weight? More thickness? A smaller battery? And even after you manage to cram the part into one of these laptops... what exactly is going to be the performance benefit? An extra half a second when coming out of sleep mode? Word opens in .5 sec instead of .8 sec? Yes, these drives are faster than SSDs... but we are way past the point where software load times matter at all.
So then what about workstation laptops. That is where these look like they will shine. A video editing laptop, or desktop replacement. And for those few brave souls using such a machine with a single HDD or SSD this seems like it would work well... except I don't know anyone like that. These are production machines, which means RAID1 in case of HDD failure. And this tech does not work with RAID (even though I don't see why not... seems like they could easily integrate this into the RAID controller). But maybe they could use the drive as a 3rd small stand-alone render drive... but that only works in linux, not windows. So, nope, this isn't going to work in this market either.
And that brings us to the desktop market. For the same price/RAID concerns this product really doesn't work for desktops either, but the Optane SSDs coming out later this year sound interesting... but here we still have a pretty major issue; SATA3 vs PCIe m.2 drives have an odd problem. On paper the m.2 drives benchmark amazingly well. And in production environments for rendering they also work really well. But for work applications and games people are reporting that there is little to no difference in performance. Intel is trying to make the claim that the issue is due to access time on the controllers, and that the extremely fast access time on Optane will finally get us past all that. But I don't think that is true. For work applications most of the wait time is either on the CPU or the network connection to the source material. The end-user storage is no longer the limiting factor in these scenarios. For games, much of the load time is in the GPU taking textures and game data and unpackaging them in the GPU's vRAM for use. The CPU and HDD/SSD are largely idle during this process. Even modern HDDs keep up pretty well with their SSD brethren on game load times. This leads me to believe that there is something else slowing down the whole process.
And that single bottleneck in the whole thing is Intel. It is their CPUs that have stopped getting faster. It is their RAM management that rather sucks and works at the same speed no matter what your RAM is clocked at. It is the whole x86 platform that is stagnant and inefficient which is the real issue here. It is time for Intel to stop focusing on its next die-shrink, and start working on a new modern efficient instruction set and architecture that can take advantage of all this new tech! Backwards compatibility is killing the computer market. Time to make a clean break on the hardware side for a new way of doing things. We can always add software compatibility in an emulation layer so we can still use our old OSs and tools. It's going to be a mess, but we are at a point where it needs to be done.
It seems to me that this product doesn't really make sense for your average consumer. Even assuming you don't need to upgrade your hardware to use Optane memory as cache, why not just spend the money on a faster and bigger SSD?
If that's the case, wouldn't it be limited to only a few specific cases where someone really needs the Optane speed?
An extra 4 GB of DDR4 seems to be $30-$40, so getting 16 GB of swap drive for the same price might be a good way to go. I agree that using it for caching seems a little pointless.
I've been considering interactive graphs. I'm not sure how easily our current CMS would let me include external scripts like D3.js, and I definitely want to make sure it provides a usable fallback to a pre-rendered image if the scripts can't load. If you have suggestions for something that might be easy to integrate into my python/matplotlib workflow, shoot me an email.
And once I get the new 2017 consumer SSD test suite finished, I'll go back to having labeled bar charts for the primary scores, because that's the only easy way to compare across a large number of drives.
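On the interactive-graphs question above: one low-friction route from a matplotlib workflow might be mpld3, which converts a figure to D3-backed HTML while a normal savefig() provides the pre-rendered fallback. A sketch, assuming the mpld3 package and made-up data:

```python
import matplotlib.pyplot as plt
import mpld3  # converts matplotlib figures to D3-backed HTML

fig, ax = plt.subplots()
# illustrative data only, not real benchmark results
ax.plot([1, 2, 4, 8], [120, 95, 80, 78], marker="o")
ax.set_xlabel("queue depth")
ax.set_ylabel("latency (us)")

fig.savefig("chart.png", dpi=150)    # static fallback for no-script readers
with open("chart.html", "w") as f:
    f.write(mpld3.fig_to_html(fig))  # interactive version
```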
I echo the conclusion that the cache is too little and too late. In a time where SSDs are becoming affordable compared to perhaps 5 years back, it makes little sense to fork out so much money for a puny 32GB cache along with other hardware requirements. It's fast, but it is not a full SSD.
The manufacturers work hard, but SSD firmware development and validation is hard. There are a lot of drives out there that are better off today because we broke them first.
I think people need to re-read this article. Going over it makes much of the disappointment seem a bit overdone. Intel spoke to the potential of the technology; they didn't promise it all in the first version. They also spoke to its long term potential, including being able to stack the die and potentially move to higher bit levels. I think it's fair to say this isn't a consumer level product yet, but to ship a brand new memory tech at production level that is significantly faster and higher endurance than alternatives is a significant accomplishment. We have been stuck for more than a decade with a '3-5 year' timetable on new memory technologies; perhaps this will get other players to actually ship something (I'm looking at you HP, and your promise of memristors two years ago).
Problem is, Intel did not make this clear. Intel has now had multiple chances to clearly separate the potential of the technology from the first generation implementation. They chose not to take them.
This is slimy and disgusting.
The technology as a whole long term does indeed seem very promising, however.
Couldn't you say that about any company that talks about an upcoming technology and its potential then restricts its launch to specific niches? Which is almost everyone when it comes to new technologies...
Everyone presumes that technology will improve over time. Talking up 1000x improvements, making people wait for a year or more, and then releasing a stupid expensive small drive for the Enterprise segment, and a not particularly useful tiny drive for whoever is running a Core i3 7000 series or better CPU with a mechanical hard drive, for some reason, is slightly disappointing.
We wanted better stuff now, after a year of waiting, not at some point in the future, which is where we've always been.
Hmm... And how does this compare to regular SSD caching using Smart Response? So far I can't see why anyone would want an Optane cache as opposed to that or, even better, a boot SSD paired with a storage hard drive.
Did you bring the WD Caviar to steady state by filling it twice with random data in random files? Performance of magnetic media varies greatly based on drive fragmentation.
I didn't pre-condition any of the drives for SYSmark, just for the synthetic tests (which the hard drive wasn't included in). For the SYSmark test runs, the drives were all secure erased then imaged with Windows.
"When testing sequential writes at varying queue depths, the Intel SSD DC P3700's performance was highly erratic. We did not have sufficient time to determine what was going wrong, so its results have been excluded from the graphs and analysis below."
Yes, the DC P3700 is definitely excluded from these graphs.. and the other ones ;)
Billy, why is the 960 Evo performing so badly under Sysmark 2014, when it wins almost all synthetic benchmarks against the MX300? Sure, it's got fewer dies.. but that applies to the low level measurements as well.
I don't know for sure yet. I'll be re-doing the SYSmark tests with a fresh install of Windows 10 Creators Update, and I'll experiment with NVMe drivers and settings. My suspicion is that the 960 EVO was being held back by Microsoft's horrific NVMe driver default behavior, while the synthetic tests in this review were run on Linux.
Is there any reason why one couldn't stick this in any old NVMe-compatible motherboard regardless of platform and use a software caching system like PrimoCache on it? It identifies to the system as a standard NVMe drive, no? Or does it somehow have the system identify itself on POST and refuse to communicate if it provides the "wrong" identifier?
As long as you have Intel RST RAID disabled for NVMe drives, it'll be accessible as a standard NVMe device and available for use with non-Intel caching software.
As an enthusiast who is gaming 90% of the time with my pc, I don't think this is for me right now. I actually just bought a 960 evo 500gb to complement my 1 tb 840 evo. Overkill for sure, but I'm happy with it, even if the difference is sometimes subtle.
This technology really excites me. If they can get a system running with no DRAM or NAND, and just use a large block of XPoint, that could make for a really interesting system. Put 128 gb of this stuff paired with a 2c/4t mobile chip in a laptop, and you could get a really lean system that is fast for everyday use cases (web browsing, video watching, etc).
For my use case, I'd love to have a reason to buy it (no more loading times ever would be very futuristic) but it'll take time to really take off.
Blahblahblah endurance, price, consumption, superspeed. Where are they? ROTFLOL. At least don't show these shameful speeds if you opened your mouth this loud, Intel. No one will ever look at anything less than the 3.5GB/s set by the Samsung 960 Pro if you trolled about superspeeds.
I think that Intel SRT caches reads, whereas the Optane Memory caches both reads and writes. My guess is that when Intel SRT places data in the cache, it doesn't immediately update the non-volatile lookup tables indicating where that data is stored. Instead, it probably waits until a bunch of data has been added, and then records the locations of all of the cached data. The reason for this would be that NAND can only be written in page units. If Intel were to update the non-volatile mapping table every time it added a page of data to the cache, that would double the amount of data written to the caching SSD.
If I'm correct, then with Intel SRT, a power loss can cause some of the data in the SSD cache to be lost. The data itself would still be there, but it won't appear in the lookup table, making it inaccessible. That doesn't matter because SRT only caches reads, so the data lost from the cache will still be on the hard drive.
In contrast, Optane Memory presumably updates the mapping table for cached data immediately, taking advantage of the fact that it uses a memory technology that allows small writes. So if you perform a bunch of 4K random writes, the data is written to the Optane storage only, resulting in much higher write performance than you would get with Intel SRT.
In short, I would guess that Optane Memory uses a different caching algorithm than Intel SRT; an algorithm that is only implemented in Intel's latest chipsets.
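A toy sketch of the two metadata policies hypothesized above - purely speculative, modeling the commenter's guess rather than any documented Intel design:

```python
# Guessed SRT-style policy: batch mapping-table flushes, since NAND
# metadata updates cost a full page write. Un-flushed cache entries are
# lost on power failure (harmless for a read cache -- the data is still
# on the hard drive). A write-in-place medium can persist every update.

class NandReadCache:
    FLUSH_EVERY = 64  # amortize page-sized metadata writes

    def __init__(self):
        self.volatile_map = {}    # in-RAM lookup table
        self.persistent_map = {}  # what survives power loss
        self.pending = 0

    def cache(self, lba, data):
        self.volatile_map[lba] = data
        self.pending += 1
        if self.pending >= self.FLUSH_EVERY:
            self.persistent_map = dict(self.volatile_map)  # one big flush
            self.pending = 0

    def power_loss(self):
        self.volatile_map = dict(self.persistent_map)  # recent entries gone


class WriteInPlaceCache:
    def __init__(self):
        self.persistent_map = {}

    def cache(self, lba, data):
        # small in-place update; safe to cache writes, not just reads
        self.persistent_map[lba] = data
```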
That's unfortunate, because if Optane Memory were supported using software drivers only (without any chipset support), it would be a very attractive upgrade to older computer systems. At $44 or $77, an Optane Memory device is a lot less expensive than upgrading to an SSD. Instead, Optane Memory is targeted at new systems, where the economics are less compelling.
I would really like to see the 16GB Optane filled with system paging file (on a device with 2 or 4 GB of RAM) and then do some general system experience tests. This seems like the perfect solution: The system is pretty good about offloading stuff that's not needed, and pulling needed files into working memory for full speed; and the memory can be offloaded to or loaded from the Optane cache quickly enough that it shouldn't cause many slowdowns when switching between tasks. This seems like the best strategy, in a world where we're still seeing 'pro' devices with 4 GB of RAM.
I wish Intel would release Optane sticks/drives in 1-4TB sizes asap and sell them for $100-300 more than SSDs of the same size, immediately. I'm kinda disappointed they do this type of tiered rollout, where it looks like it'll take ages until I can get an Optane drive at larger sizes for halfway reasonable prices. Please Intel, make it available asap, I want to buy it. Thanks =)
Well, the most important thing is that Optane is now a real product on the market, for consumers and enterprise customers. So some Intel senior managers don't need to get fired and can cross off items on their bonus score cards.
Marketing will convince the world that Optane is better, and most importantly that only Intel can have it inside: no ARM, no Power, no Zen based server shall ever have it.
For the DRAM-replacement variant, that exclusivity had a reason: Without proper firmware support, that won’t work and without special cache flushing instructions it would be too slow or still volatile. Of course, all of that could be shared with the competition, but who want to give up a practical monopoly, which no competition can contest in court before their money runs out.
For the PCIe variant Intel, chipset and OS dependencies are all artificial, but doesn’t that make things better for everyone? Now people can give up ECC support in cheap Pentiums and instead gain Optane support for a premium on CPUs and chipsets, which use the very same hardware underneath for production cost efficiency. Whoever can sell that, truly deserves their bonus!
Actually, I’d propose they be paid in snake oil.
For the consumer with a linear link between Optane and its downstream storage tier, it means the storage path has twice as many opportunities to fail. For the service technician it means he has four times as many test scenarios to perform. Just think on how that will double again, once Optane does in fact also come to the DIMM socket! Moore’s law is not finished after all! Yeah!
Perhaps Microsoft could be talked into creating a special Optane Edition which offers much better granularity for forensic data storage, and surely there would be plenty of work for security researchers, who just love to find bugs really, really deep down in critical Intel Firmware, which is designed for the lowest Total Cost of TakeOwnership in the industry!
Where others see crisis, Intel creates opportunities!
We’ve updated our terms. By continuing to use the site and/or by logging into your account, you agree to the Site’s updated Terms of Use and Privacy Policy.
110 Comments
Back to Article
YazX_ - Monday, April 24, 2017 - link
"Since our Optane Memory sample died after only about a day of testing"LOL
Chaitanya - Monday, April 24, 2017 - link
And it is supposed to have endurance rating 21x larger than a conventional NAND SSD.Sarah Terra - Monday, April 24, 2017 - link
Funny yes, but teething issues aside the random write Performance is several orders of magnitude faster than all existing storage mediums, this is the number one metric I find that plays into system responsiveness, boot times, and overall performance and the most ignored metric by all Meg's to date. They all go for sequential numbers, which don't mean jack except when doing large file copies.ddriver - Monday, April 24, 2017 - link
So let's summarize:1000 times faster than NAND - in reality only about 10x faster in hypetane's few strongest points, 2-6x better in most others, maximum thorough lower than consumer NVME SSDs, intel lied about speed about 200 times LOL. Also from Tom's review, it became apparent that until the cache of comparable enterprise SSDs fills up, they are just as fast as hypetane, which only further solidifes my claim that xpoint is NO BETTER THAN SLC, because that's what those drives use for cache.
1000 times the endurance of flash - in reality like 2-3x better than MLC. Probably on par with SLC at the same production node. Intel liked about 300-500 times.
10 times denser than flash - in reality it looks like density is actually way lower than. 400 gigs in what.. like 14 chips was it? Samsung has planar flash (no 3d) that has more capacity in a single chip.
So now they step forward to offer this "flash killer" as a puny 32 gb "accelerator" which makes barely any to none improvement whatsoever and cannot even make it through one day of testing.
That's quite exciting. I am actually surprised they brought the lowest capacity 960 evo rather than the p600.
Consumer grade software already sees no improvement whatsoever from going sata to nvme. It won't be any different for hypetane. Latency are low queue depth access is good, but that's mostly the controller here, in this aspect NAND SSDs have a tremendous headroom for improvement. Which is what we are most likely going to see in the next generation from enterprise products, obviously it makes zero sense for consumers, regardless of how "excited" them fanboys are to load their gaming machines with terabytes of hypetane.
Last but not least - being exclusive to intel's latest chips is another huge MEH. Hypetane's value is already low enough at the current price and limited capacity, the last thing that will help adoption is having to buy a low value intel platform for it, when ryzen is available and offers double the value of intel offerings.
Drumsticks - Monday, April 24, 2017 - link
Your bias is showing.1000x -> Harp on it all you want, but that number was for the architecture not the first generation end product. It represents where we can go, not where we are. I'll also note that Toms gave it their editor approved award - "As tested today with mainstream settings, Optane Memory performed as advertised. We observed increased performance with both a hard disk drive and an entry-level NVMe SSD. The value proposition for a hard drive paired with Optane Memory is undeniable. The combination is very powerful, and for many users, a better solution than a larger SSD."
"1000 times the endurance of flash -> You can concede that 3D XPoint density isn't as good as they originally envisioned, but it's still impressive, gen1, and has nowhere to go but up. It's not really worse than other competing drives per drive capacity - this cache supports like 3 DWPD basically. The MX300 750GB only supports like .3 DWPD. 10x better is still good.
10 times denser than flash -> DRAM, not Flash. And it's going to be much denser than DRAM.
Barely any to no improvement -> LOL, did you look at the graphs? Those lines at the bottom and on the left were 500GB and 250GB Sata and NVMe drives getting killed by Optane in a 32GB configuration. 3D XPoint was designed for low queue depth and random performance - i.e. things that actually matter, where it kills its competition. Even sequential throughput, which is far from its design intention, generally outperforms consumer drives.
So, Optane costs, in an enterprise SSD, 2-3x more than other enterprise drives, for record breaking low queue depth throughput that far surpasses its extra cost, while providing 10-80x less latency. In a consumer drive, Optane regularly approaches an order of magnitude faster than consumer drives in only a 32GB configuration.
If Optane is only as fast as SLC, I'd love to understand why the P4800X broke records as pretty much the fastest drive in the world, barring unrealistically high queue depths.
This 32GB cache might be a stopgap, and less compelling of a product in general because of its capacity, but that you could deny the potential that 3D XPoint holds is absolutely laughable. The random performance and low queue depth performance is undeniably better than NAND, and that's where consumer performance matters.
ddriver - Monday, April 24, 2017 - link
"I'd love to understand why the P4800X broke records"Because nobody bothered to make a SLC drive for many many years. The last time there were purely SLC drives on the market it was years ago, with controllers completely outdated compared to contemporary standards.
SLC is so good that today they only use it for cache in MLC and TLC drives. Kinda like what intel is trying to push hypetane as. Which is why you can see SSDs hitting hypetane IOPs with inferior controllers, until they run out of SLC cache space and performance plummets due to direct MLC/TLC access.
I bet my right testicle that with a comparable controller, SLC can do as well and even better than hypetane. SLC PE latencies are in the low hundreds of NANOseconds, which is substantially lower than what we see from hypetane. Endurance at 40 nm is rated at 100k PE cycles, which is 3 times more than what hypetane has to offer. It will probably drop as process node shrinks but still.
"10x better is still good"
Yet the difference between 10x and 1000x is 100x. Like imagine your employer tells you he's gonna pay you 100k a year, and ends up paying you a 1000 bucks instead. Surely not something anyone would object to LOL.
I am not having problems with "10x better". I am having problems with the fact it is 100x less than what they claimed. Did they fail to meet their expectations, or did they simply lie?
I am not denying hypetane's "potential". I merely make note that it is nothing radically better than nand flash that has not been compromised for the sake of profit. xpoint is no better than SLC nand. With the right controller, good old, even ancient and almost forgotten SLC is just as good as intel and micron's overhyped love child. Which is kinda like reinventing the wheel a few thousand years later, just to sell it at a few times what its actually worth.
My bias is showing? Nope, your "intel inside" underpants are ;)
Reflex - Monday, April 24, 2017 - link
SLC has severe limits on density and cost. It's not used because of that. Even at the same capacity as these initial Optane drives it would likely cost considerably more, and as Optane's density increases there is no ability to mitigate that cost with SLC; it would grow linearly with the amount of flash. The primary mitigations already exist: MLC and TLC. Of course those reduce the performance profile far below Optane and decrease its ability to handle wear. Technically SLC could go with a stacked die approach, as MLC/TLC are doing, however nothing really stops Optane from doing the same, making that at best a neutral comparison.

ddriver - Monday, April 24, 2017 - link
SLC is half the density of MLC. Samsung has 2 TB worth of MLC in 4 flash chips. Gotta love 3D stacking. Now employ epic math skills and multiply 4 by 0.5, and you get a full TB of SLC goodness, perfectly doable via 3D stacked nand.

And even if you put 3D stacking aside - if I am not mistaken the sm961 uses planar MLC - that's 2 chips on each side for a full 1 TB. Cut that in half, and you'd get 512 GB of planar SLC in 4 modules.
Now, I don't claim to be that good at math, but if you can have 512 GB of SLC nand in 4 chips, and it takes 14 chips for 400 GB of xpoint, that would make planar SLC OVER 4 times denser than xpoint.
Thus if SLC is over 4 times denser on planar dies, stacked xpoint could not possibly be better than stacked SLC.
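Spelling that arithmetic out, taking the chip counts cited above at face value:

    slc_gb_per_chip = (1024 / 4) / 2   # 1 TB planar MLC in 4 chips; SLC holds half
    xpoint_gb_per_chip = 400 / 14      # 400 GB of xpoint across 14 packages
    print(slc_gb_per_chip)                                 # 128.0 GB per chip
    print(round(xpoint_gb_per_chip, 1))                    # 28.6 GB per chip
    print(round(slc_gb_per_chip / xpoint_gb_per_chip, 1))  # 4.5 -> "over 4 times"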
Severe limits my ass. The only factor at play here is that SSDs are already faster than needed in 99% of the applications. Thus the industry would rather churn MLC and TLC to maximize the profit per grain of sand being used. The moment hypetane begins to take market share, which is not likely, they can immediately launch SLC enterprise products.
Also, it should be noted that there is still ZERO information about what the xpoint medium actually is. For all we know, it may well be SLC, now wouldn't that be a blast. Intel has made a bunch of claims about it, none of which seemed plausible, and most of which have already turned out to be a lie.
ddriver - Monday, April 24, 2017 - link
*multiply 2 by 0.5

Reflex - Monday, April 24, 2017 - link
You can 3D stack Optane as well. That's a wash. You seem very obsessed with being right, and not with understanding the technology.

name99 - Tuesday, April 25, 2017 - link
Why are you so sure you understand the technology? Intel has told us nothing about how it works. What we have are:
- a bunch of promises from Intel that are DRAMATICALLY not met
- an exceptionally lousy (expensive, low capacity) product being sold.
You can interpret these in many ways, but the interpretation that "Intel over promised and dramatically underdelivered" is certainly every bit as legit as the interpretation "just wait, the next version (which ships when?) will be super-awesome".
If Optane is capable TODAY of density comparable to NAND, then why ship such a lousy capacity? And if it's not capable, then what makes you so sure that it can reach NAND density? Getting 3D-NAND to work was not a cheap exercise. Does Intel have the stomach (and the project management skills) to last till that point, especially given that the PoS that they're shipping today ain't gonna generate enough of a revenue stream to pay for the electric bill of the Optane team while they take however long they need to get out the next generation.
emn13 - Tuesday, April 25, 2017 - link
Intel hasn't confirmed what it is, but AFAICT all the signs point to xpoint being phase-change ram, or at least very similar to it. Which still leaves a lot of wiggle room, of course.

ddriver - Tuesday, April 25, 2017 - link
IIRC they have explicitly denied xpoint being PCM. But then again, who would ever trust a corporate entity, and why?

Cellar - Tuesday, April 25, 2017 - link
Implying Intel would use only the revenue of Optane to fund the next generation of Optane. You forget how much profit they make milking their Processors? *Insert Woody Harrelson wiping tears away with money gif*

name99 - Tuesday, April 25, 2017 - link
Be careful. What he's criticizing is the HYPE (ie Intel's business plan for this technology) rather than the technology itself, and in that respect he is basically correct. It's hard to see what more Intel could have done to give this technology a bad name.
- We start with the ridiculous expectations that were made for it. Most importantly the impression given that the RAM-REPLACEMENT version (which is what actually changes things, not a faster SSD) was just around the corner.
- Then we get this attempt to sell to the consumer market a product that makes ZERO sense for consumers along any dimension. The product may have a place in enterprise (where there's often value in exceptionally fast, albeit expensive, particular types of storage), but for consumers there's nothing of value here. Seriously, ignore the numbers, think EXPERIENCE. In what way is the Optane+hard drive experience better than the larger SSD+hard drive, or even the large SSD and no hard drive experience, at the same price points? What, in the CONSUMER experience, takes advantage of the particular strengths of Optane?
- Then we get this idiotic power management nonsense, which reduces the value even further for a certain (now larger than desktop) segment of mobile computing.
- And the enforced tying of the whole thing to particular Intel chipsets just shrinks the potential market even further. For example --- you know who's always investigating potential storage solutions and how they could be faster? Apple. It is conceivable (obviously in the absence of data none of us knows, and Intel won't provide the data) that a fusion drive consisting of, say, 4GB of Optane fused to an iPhone or iPad's 64 or 128 or 256GB could have advantages in terms of either performance or power. (I'm thinking particularly for power in terms of allowing small writes to coalesce in the Optane.)
But Intel seems utterly uninterested in investigating any sort of market outside the tiny tiny market it has defined.
Maybe Optane has the POTENTIAL to be great tech in three years. (Who knows since, as I said, right know what it ACTUALLY is is a secret, along with all its real full spectrum of characteristics).
But as a product launch, this is a disaster. Worse than all those previous Intel disasters whose names you've forgotten like ViiV or Intel Play or the Intel Personal Audio Player 3000 or the Intel Dot.Station.
Reflex - Tuesday, April 25, 2017 - link
Meanwhile in the server space we are pretty happy with what we've seen so far. I get that it's not the holy grail you expected, but honestly I didn't read Intel's early info as an expectation that gen1 would be all things to all people and revolutionize the industry. What I saw, and what was delivered, was a path forward past the world of NAND and many of its limitations, with the potential to do more down the road.

Today, in low volume and limited form factors, it likely will sell all that Intel can produce. My guess is that it will continue to move into the broader space as it improves incrementally generation over generation, like most new memory products have done. Honestly the greatest accomplishment here is Intel and Micron finally introducing a new memory type, at production quantity, with a reasonable cost for its initial markets. We've spent years hearing about phase-change, racetrack, memristor, MRAM and on and on, and nobody has managed to introduce anything at volume since NAND. This is a major milestone, and hopefully it touches off a race between Optane and other technologies that have been in the permanent 3-5 year bucket for a decade plus.
ddriver - Tuesday, April 25, 2017 - link
Yeah, I bet you are offering hypetane boards by the dozens LOL. But shouldn't it be more like "in the _servers that don't serve anyone_ space", since in order to take advantage of them low queue depth transfers and latencies, such a "server" would have to serve what, like a client or two?

I don't claim to be a "server specialist" like you apparently do, but I'd say if a server doesn't have good saturation, then either your business sucks and you don't have any clients, or you have more servers than you need and should cut back until you get good saturation.
To what kind of servers is it that beneficial to shave off a few microseconds of data access? And again, only in low queue depth loads? I'd understand if hypetane stayed equally responsive regardless of the load, but as the load increases we see it dwindling down to the performance of available nand SSDs. Which means you won't be saving on say query time when the system is actually busy, and when the system is not it will be snappy enough as it is, without magical hypetane storage. After all, servers serve networks, and even local networks are slow enough to completely mask out them "tremendous" SSD latencies. And if we are talking an "internet" server, then the network latency is much, much worse than that.
You also evidently don't understand how the industry works. It is never about "the best thing that can be done", it is always about "the most profitable thing that can be done". As I've repeated many times, even NAND flash can be made tremendously faster, in terms of both latency and bandwidth; it is perfectly possible today and it has been technologically possible for years. Much like it has been possible to make cars that go 200 MPH, yet we only see a tiny fraction of cars actually capable of that speed. There has been a small but steady market for mram, but that's a niche product; it will never be mainstream because of technological limitations. It is pretty much the same thing with hypetane: regardless of how much intel are trying to shove it at consumers in useless product forms, it only makes sense in an extremely narrow niche. And it doesn't owe its performance to its "new memory type" but to its improved controller, and even then, its performance doesn't come anywhere close to what good old SLC is capable of technologically as a storage medium, which one should not confuse with a complete product stack.
The x25-e was launched almost 10 years ago. And its controller was very much "with the times", which is the reason the drive does a rather meager 250/170 MB/s. Yet even back then its latency was around 80 microseconds, with the "latest and greatest" hypetane struggling to beat that by even a single order of magnitude 10 years later. Yet technologically the SLC PE cycle can go as low as 200 nanoseconds, which is 50 times better than hypetane and 400 times better than what the last pure SLC SSD controller was capable of.
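Spelled out with those figures (the ~10 microsecond hypetane number is my rough reading of the reviews, not an official spec):

    x25e_latency_ns   = 80_000   # ~80 us, Intel X25-E circa 2008
    optane_latency_ns = 10_000   # ~10 us, roughly where hypetane lands
    slc_pe_cycle_ns   = 200      # claimed floor for an SLC PE cycle

    print(x25e_latency_ns / optane_latency_ns)  # 8.0  -> under one order of magnitude
    print(optane_latency_ns / slc_pe_cycle_ns)  # 50.0 -> "50 times better"
    print(x25e_latency_ns / slc_pe_cycle_ns)    # 400.0 -> "400 times better"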
No wonder the industry abandoned SLC - it was and still is too good, not only for consumers but also for the enterprise. Which begs the question: with the SLC trump card available for over a decade, why would intel and micron waste money on researching a new medium? And did they really do that, or did they simply take good old SLC, smear a bunch of lies, hype and cheap PR on it, and step forward to say "here, we did something new"?
I mean come on, when was the last time intel made something new? Oh that's right, back when they made netburst, and it ended up a huge flop. And then, where did the rescue come from? Something radically new? Nope, they got back to the same old tried and true, and improved instead of trying to innovate. Which is also what this current situation looks like.
I can honestly think of no better reason to be so secretive about the "amazing new xpoint", unless it actually is neither amazing, nor new, nor xpoint. I mean, if it is a "tech secret", I don't see why they shouldn't be able to protect their IP via patents; if it really is something new, it is not like they are short on the money it would take to patent it. So there is no good reason to keep it such a secret, other than the intent to cultivate mystery over something that is not mysterious at all.
eddman - Tuesday, April 25, 2017 - link
This is what happens when people let their personal feelings get in the way.

"Even if they cure cancer, they still suck and I hate them"
ddriver - Tuesday, April 25, 2017 - link
Except it doesn't cure cancer. And I'd say it is always better to prevent cancer than to have the destructive treatment leave you a diminished being.

eddman - Tuesday, April 25, 2017 - link
Just admit you have a personal hatred towards MS, intel and nvidia, no matter what they do, and be done with it. It's beyond obvious.

ddriver - Wednesday, April 26, 2017 - link
It is only natural to have negative sentiments about greedy, lousy corporations because of what they do. It is nothing personal though, I do it because I am a conscious human being. Not cattle. You can throw crapple and moogle into the mix. There is no single good reason to be fond of any corporation. The bigger they are, the more damage they do to humanity and the planet as a whole.

In other news, water is wet!
eddman - Wednesday, April 26, 2017 - link
You are not fooling anyone.

eddman - Wednesday, April 26, 2017 - link
...and you are so blinded by your hatred that you dismiss every single thing these companies do. You are not rational in the slightest, but you do like to boast about how great you are.

Reflex - Tuesday, April 25, 2017 - link
Nailed it eddman. Because it does not personally solve ddriver's problems, or because it comes from the wrong brand, it's an epic disaster. The funny thing here is I agree this is not a revolution, at least not yet, but the incessant bashing and inability to acknowledge that it has its uses, and that those use cases are likely to only grow, demonstrates the bias involved.

Reflex - Tuesday, April 25, 2017 - link
To the insinuation that Optane may somehow be relabeled SLC NAND: I went and did a little research/consultation. All NAND requires writing to blocks; Optane can support bit-level writes (expected in DIMM configurations), which is a major advantage over NAND and not technically possible with NAND. It was also pointed out that if Optane were simply disguised SLC, despite the technical impossibility, it would mean that Intel had engaged in financial fraud by materially misrepresenting its technology, capabilities and long-term expectations to investors.

Thanks to Joel Hruska for looking into it for me.
More info here: https://arstechnica.com/gadgets/2017/04/intel-opta...
More from Joel here: https://www.extremetech.com/author/jhruska
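A rough sketch of why that block-write distinction matters (illustrative page size, not any particular drive's geometry):

    NAND_PAGE = 16 * 1024  # illustrative NAND page size in bytes

    def bytes_physically_written(logical_bytes, byte_addressable):
        # Byte-addressable media (as claimed for XPoint) write in place;
        # NAND must program whole pages, inflating tiny writes.
        if byte_addressable:
            return logical_bytes
        pages = -(-logical_bytes // NAND_PAGE)  # ceiling division
        return pages * NAND_PAGE

    print(bytes_physically_written(1, byte_addressable=True))   # 1
    print(bytes_physically_written(1, byte_addressable=False))  # 16384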
Reflex - Monday, April 24, 2017 - link
OMG, it's the fastest product on the market in its class, but because I choose to interpret the early marketing as applying to the first gen product, it totally sucks! I refuse to benefit from drastically better performance because Intel *dared* to speak to its potential performance and didn't deliver that in the first product!

In fact, I am so enraged I'm ripping out all my existing SSDs and replacing them with Quantum Bigfoot drives in protest.
Drumsticks - Monday, April 24, 2017 - link
It's probably because Intel dared to do something innovative again, and we can't possibly give credit where it's due, can we? If it was Samsung, I bet it would just be Samsung being Samsung. Slap the blue name on top, and it's cool to criticize whatever you can, even in the face of hard numbers. Make sure you also include an edgy name like "Hypetane" to really drive your point home.

Reflex - Monday, April 24, 2017 - link
To be fair if it were Samsung we'd get a lecture on the oppression of North Korea mixed in there somewhere, along with a conspiracy theory about the south being a puppet state not permitted to succeed in the face of America.

jabbadap - Monday, April 24, 2017 - link
Well, I don't want to diminish intel's efforts on this. But it's an intel/micron co-operation that engineered this, and I would even guess a bit further that the science behind it is more micron tech than intel's.

Drumsticks - Monday, April 24, 2017 - link
That's fair, and Micron definitely deserves credit as well. I'm sure they'll get their own when QuantX comes out, hopefully sometime this year. I suspect that the R&D was split very evenly, though; Intel has always been good at doing things "well" in the fab; Micron has excelled at doing them "cheaply", which is one reason the venture was reasonably successful. Plus, I feel it would be hard to collaborate on R&D together for 10 years and successfully say "we did this together" to the public, if one side (Micron or Intel) did most of the work. I guess we'll never know, though.

ddriver - Tuesday, April 25, 2017 - link
Yeah, daring intel, the pioneer, taking mankind to better places.

Oh wait, that's right, it is actually a greedy monopoly that has mercilessly milked people while making nothing aside from barely incremental stuff for years, and through its anti-competitive practices has actually held progress back tremendously.
As I already mentioned above, the last time "intel dared to innovate" that resulted in netburst. Which was so bad that in order to save the day intel had to... do what? Innovate once again? Nope, god forbid, what they did was go back and improve on the good design they had and scrapped in their futile attempts to innovate.
And as I already mentioned above, all the secrecy behind xpoint might be exactly because it is NOTHING innovative, but something old and forgotten, just slightly improved.
Reflex - Tuesday, April 25, 2017 - link
Axe is looking pretty worn down from all that grinding...

ddriver - Wednesday, April 26, 2017 - link
Also, unlike you, I don't let personal preferences cloud my objectivity. If a product is good, even if made by the most wretched corporation out there, it is not a bad product just because of who makes it; it is still a good product, still made by a wretched corporation.

Even if intel wasn't a lousy, bloated, lazy, greedy monopolist, hypetane would still suck, because it isn't anywhere near the "1000x" improvements they promised. It would suck even if intel was a charity that fed the starving in the 3rd world.
I would have had ZERO objections to hypetane, and also wouldn't call it hypetane to begin with, if intel, the spoiled greedy monopolist was still decent enough to not SHAMELESSLY LIE ABOUT IT.
Had they just said "10x better latency, 4x better low queue depth performance" and stuff like that, I'd be like "well, it's ok, it is faster than nand, you delivered what you promised."
But they didn't. They lied, and lied, and now that it is clear that they lied, they keep on lying and smearing with biased reviews in unrealistic workloads.
What kind of an idiot would ever approve of that?
fallaha56 - Tuesday, April 25, 2017 - link
OMG, when our product wasn't as good as we said it was, we didn't own up to it.

And maybe you test against HDDs (like Intel), but the rest of us are already packing SSDs.
philehidiot - Saturday, April 29, 2017 - link
This is what companies do. Your technology is useless unless you can market it. And you don't market anything by saying it's mediocre. Look at BP's high octane fuel, which supposedly cleans your engine and gets better fuel efficiency. The ONLY thing that higher octane fuel does is resist auto-ignition under compression better, and thus certain high performance engines require it. As for cleaning your engine - you're telling me you've got a solvent which is better at cutting through crap than petrol AND can survive the massive temperatures and pressures inside the combustion chamber? It's the petrol which scrubs off the crap, so yes, it's technically true. They might throw an additive or two in there, but that will only help pre-combustion chamber, and only if you actually have a problem. And yes, in certain newer cars with certain sensors you will get SLIGHTLY higher MPG, and therefore they advertise the maximum you'll get under ideal conditions, because no one will buy into it if you're realistic about the gains. The gains will never offset the extra cost of the fuel, however.

PC marketing is exactly the same, and it's why the JMicron controller was such a disaster so many years ago. They went for advertised sequential throughput numbers being as high as possible and destroyed the random performance; Anand spotted it and OCZ threw a wobbler. But that experience led to drives being advertised on random performance as well as sequential.
So what's the lesson here? We should always take manufacturers' claims with a mouthful of salt and buy based on objective criteria and independent measurements. Manufacturers will always state what is achievable in basically a lab setup, with conditions controlled to perfection. Why? For one, you can't quote numbers based on real life performance, because everyone's experience will differ and you can't account for the different variables they'll encounter. And for two, if everyone else is quoting the maximum theoretical potential, you're immediately putting yourself at a disadvantage by not doing so yourself. It's not about your product, it's about how well you can sell it to a customer - see: the stupidly expensive Dyson hairdryer. It provides no real performance benefit over a cheap hairdryer, but it cost a lot in R&D and is mostly advertising wank for rich people with small brains.
As for Intel being a greedy monopoly... welcome to capitalism. If you don't want that side effect of the system then bugger off to Cuba. Capitalism has brought society to the highest standard of living ever seen on this planet. No other form of economic operation has allowed so many to have so much. But the result is big companies like Intel, Google, Apple, etc, etc.
Advertising wank is just that. Figures to masturbate over. If they didn't do it then sites like Anandtech wouldn't need to exist as products would always be accurately described by the manufacturer and placed honestly within the market and so reviews wouldn't be required.
I doubt they lied completely - they will be going on the theoretical limits of their technology when all engineering limitations are removed. This will never happen in practice and will certainly never happen in a gen 1 product. Also, whilst I see this product as being pointless, it's obviously just a toe dipping exercise like the enterprise model. Small scale, very controlled use cases and therefore good real world use data to be returned for gen 2/3.
Personally, whilst I'm wowed by the figures, I don't see how they're going to improve things for me. So what's the point in a different technology when SLC can probably perform just as well? It's a different development path which will encounter different limitations and as a result will provide different advantages further down the road. Why do they continue to build coal fired power stations when we have CCGTs, wind, solar, nukes, etc? Because each technology has its strengths and weaknesses and encounters different engineering limitations in development. Plus a plurality of different, competing technologies is always better as it creates progress. You can't whinge about monopolies and then when someone starts doing something different and competing with the established norm start whinging about that.
fallaha56 - Tuesday, April 25, 2017 - link
hi @sarah i find that a dead hard drive also plays into responsiveness and boot times(!)

this technology is clearly not anywhere near as good as Intel implied it was
CaedenV - Monday, April 24, 2017 - link
I have never once had an SSD fail because it has over-used its flash memory... but controllers die all the time. It seems that this will remain true for this as well.

Ryan Smith - Tuesday, April 25, 2017 - link
And that's exactly what we're suspecting here. We've likely managed to hit a bug in the controller's firmware. Which, to be sure, isn't fantastic, but it can be fixed.

Prior to the P3700's launch, Intel sent us 4 samples specifically for stress testing. We managed to disable every last one of them. However Intel learned from our abuse, and now those same P3700s are rock-solid thanks to better firmware and drivers.
jimjamjamie - Tuesday, April 25, 2017 - link
Interesting that an ad-supported website can stress-test better than a multi-billion dollar company...

testbug00 - Tuesday, April 25, 2017 - link
Based on what? Have they sent you another model?

A sample dying on day one, and only allowing testing via a remote server, doesn't build confidence.
Shadow7037932 - Tuesday, April 25, 2017 - link
It's a first gen release. Do you remember the issues the first gen SSDs had? Do you remember the JMicron stuttering issues?

JoeyJoJo123 - Monday, April 24, 2017 - link
The birth of a new meme.

halcyon - Monday, April 24, 2017 - link
Looks like an almost completely useless interim memory device for almost all (non-server) workloads. Combine that with a size of 32GB on Kaby Lake, and it begs the question: what is the point? Why not release a ready product that has a market niche, and not a slimmed-down beta that is a solution looking for a problem?

ddriver - Monday, April 24, 2017 - link
The point is they burned through a mountain of cash to R&D this flop, and now they are desperately trying to get some of it back. It is a product that doesn't fit in 99% of the market. Thus the solution is to try and shove it anywhere else possible, regardless of how little sense it makes.

menting - Monday, April 24, 2017 - link
you might have forgotten the 1st gen SSDs were about the same, but look at SSDs now.

fallaha56 - Tuesday, April 25, 2017 - link
exactly! so with everyone having learnt that lesson (and having amazing SSDs) Intel has to do better

this is a pointless product that offers no real advantages and many disadvantages
carewolf - Friday, June 2, 2017 - link
I wonder if they still got paid by Intel after revealing that :D

tech6 - Monday, April 24, 2017 - link
The test that I would be interested in is whether this technology could be an effective cache in speeding up mainstream SSDs.

Sarah Terra - Monday, April 24, 2017 - link
Perhaps, but better to just wait for pricing to come in line and have the entire disk made from optane or similar. Still can't believe the random writes; this is the biggest jump since the original intel X25. Basically on any file larger than 4KB you are starting at 4x performance and going waaaaaay up.

Twingo - Monday, April 24, 2017 - link
True, since a SATA based SSD is much cheaper than an NVMe drive. I'd like to see the comparison of Optane + 1TB SATA SSD vs a 1TB NVMe SSD. The 1TB SATA SSD + Optane would be a cheaper solution than a 1TB NVMe.

Ratman6161 - Monday, April 24, 2017 - link
"The test that I would be interested in is if this technology could be an effective cache is speeding up mainstream SSDs."That's exactly what I was wondering i.e. if I paired it with my SATA 250 EVO. Or, they have a Crucial MX300 SATA SSD in the test which is an OK lower priced SSD. Given the optane drives are $44 and $77 respectively, if someone had something like the MX300 they might be tempted to pair it with an Optane cache.
On the other hand you have to have the latest Intel CPU and chipset, and I just jumped ship and went with a Ryzen 5 - so it's all academic to me.
Lolimaster - Wednesday, April 26, 2017 - link
LTT already did; it's worthless.

For $77 you're close to a Crucial MX300 275GB.
Billy Tallis - Monday, April 24, 2017 - link
That's the test that was running when it died.

Twingo - Monday, April 24, 2017 - link
Billy, are you expecting to get a replacement so you can conduct all these tests?

Billy Tallis - Monday, April 24, 2017 - link
Yes, the replacement will be delivered tomorrow. But don't expect the follow-up article to be real soon. I also want to update the software on the testbeds and run a reasonably large number of drives through, and do some deeper experimentation with the caching to probe its behavior.

beginner99 - Tuesday, April 25, 2017 - link
For mainstream TLC SSDs there will for sure be a speed-up measurable in benchmarks. Whether we as users would actually notice a difference is a completely different question. Going by KISS, instead of spending money on this cache drive, just buy a tier higher SSD: if mainstream, choose a 960 evo instead; if 960 evo, choose a 960 pro instead.

fallaha56 - Tuesday, April 25, 2017 - link
absolutely not(!) for the reason you said
the 960 pro offers no meaningful real-world advantage to anyone / 99.9% of users
Glock24 - Monday, April 24, 2017 - link
"Only Core 13, 15 and i7 processors are supported; Celeron and Pentium parts are excluded."There's a typo or I've never seen those Core 13 and Core 15 CPUs before.
From the data you showed, I see no real benefit is using Optane as a caching solution vs. using an SSD as boot drive. At least not at that price point.
Kristian Vättö - Monday, April 24, 2017 - link
For the full review, could you also monitor DRAM usage? 16GB is not really an entry-level setup, so with that much DRAM Intel's software might be caching to DRAM as well, like Samsung's RAPID mode, which would inflate the scores.

Might also be worthwhile to run at least a couple of the application tests with 4GB/8GB of DRAM to see how things work when caching is done fully by Optane.
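Something like this would catch RAPID-style DRAM caching (a minimal sketch using psutil; the sampling interval and duration are arbitrary):

    import time
    import psutil

    # Take a baseline, then watch memory use while the benchmark runs.
    # A steadily growing delta would suggest a RAM cache filling up.
    baseline = psutil.virtual_memory().used
    for _ in range(60):  # sample once a second for a minute
        used = psutil.virtual_memory().used
        print(f"RAM delta vs. baseline: {(used - baseline) / 1024**2:+.0f} MB")
        time.sleep(1)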
Sarah Terra - Monday, April 24, 2017 - link
Also, optane's incredibly low latency should be tested for real world benefits.

romrunning - Monday, April 24, 2017 - link
Speaking of real-world tests, I am waiting for SQL Server tests on an Optane SSD - like on that DC P4800X. The "enterprise" review of the 4800 was all synthetic benchmarks with some disclaimer that they can't simulate all enterprise loads. Sure, you can't simulate everything, but I'm very disappointed that -nothing- enterprise level was even tested.

ddriver - Monday, April 24, 2017 - link
I am sure it is just an unfortunate coincidence, and it is not like intel is trying to hide the actuality of real world performance :)

darkfalz - Monday, April 24, 2017 - link
SSD cache / hybrid SSD drives work okay on certain workloads, mainly productivity stuff, but if you have a lot of games/media they tend to fill up really quickly, and I don't think any of the companies that write the algorithms, Intel included, can really figure out how to reliably decide, over long usage periods, what should be in the cache and what shouldn't.

I have a 24GB SSD cache (ExpressCache) in my notebook; I partitioned the OS/programmes to one partition, put all the media on the second partition, and set it to only cache the first partition. This setup works pretty well.
I also have a Hybrid SSHD in another laptop (only 8GB I think) that I mostly use as a background downloading PC, and after a few days of doing this any useful boot / OS / Chrome stuff that was in the cache has been evicted and it's back to booting at the same speed as a regular HDD.
Nice in theory, highly variable in practice. I never tried the Intel SRT out because larger SSD affordability improved a lot after it was released.
satai - Monday, April 24, 2017 - link
Can I just put it into a PCIe slot (via an adapter), boot linux from another SSD and use it as any other block device?

romrunning - Monday, April 24, 2017 - link
Per the article: "However, the Optane Memory can also be treated as a small and fast NVMe SSD, because all of the work to enable its caching role is performed in software or by the PCH on the motherboard. 32GB is even (barely) enough to be used as a Windows boot drive, though doing so would not be useful for most consumers."

DigitalFreak - Monday, April 24, 2017 - link
Are you also going to test Intel SRT with a ~$77 SATA SSD and the same WD HDD? I bet it would perform about the same, and SRT works with non-boot drives.

eddieobscurant - Monday, April 24, 2017 - link
How about using the same test setup as with the other ssds and running the same benchmarks for comparison?

I get you wanna please intel for giving you access to optane (which should be named "remote preview" by the way), but come on!!!
Also, the new graphs (probably suggested by intel, since tomshardware has something like these) are not easy to understand at a quick look.
Billy Tallis - Monday, April 24, 2017 - link
These two Optane reviews interrupted my work on putting together a new 2017 consumer SSD test suite to replace our aging 2015 suite. When the new test suite is ready, you'll get comparisons against the broad range of SSDs that you're used to seeing and more polished presentation of the data.

Shadowmaster625 - Monday, April 24, 2017 - link
Can this be used as a boot drive?

romrunning - Monday, April 24, 2017 - link
Per the article: "However, the Optane Memory can also be treated as a small and fast NVMe SSD, because all of the work to enable its caching role is performed in software or by the PCH on the motherboard. 32GB is even (barely) enough to be used as a Windows boot drive, though doing so would not be useful for most consumers."

BrokenCrayons - Monday, April 24, 2017 - link
A desktop Linux distro would fit nicely on it with room for local file storage. I've lived pretty happily with a netbook that had a 32GB compact flash card on a 2.5 inch SATA adapter that had Linux Mint 17.3 on it. The OS and default applications used less than 8GB of space. I didn't give it a swap partition since 2GB was more than enough RAM under Linux (system was idle at less than 200MB and I never saw it demand more than 1.2GB when I was multi-tasking). As such, there was lots of space to store my music, books, and pics of my cat.

ddriver - Monday, April 24, 2017 - link
And imagine how well DOS will run. And you have ample space for application and data storage. 32 gigs - that's what dreams were made of in the early 90s. Your music, books and cat pics are just icing on the cake. Let me guess, 64 kbit mp3s right?

BrokenCrayons - Monday, April 24, 2017 - link
I'm impressed at the level of your insecurity.

mkozakewich - Thursday, April 27, 2017 - link
I've made the decision to never read any comment with his name above, but sometimes I accidentally miss it.

DanNeely - Monday, April 24, 2017 - link
Looking at the size of it, I'm wondering why they didn't make a 48GB model that would fill up the 80mm stick fully. Or, unless the 3D XPoint dies fully fill the area in the packages, make them slightly smaller to support the 2260 form factor (after accounting for the odds and ends at the end of the stick, the current design looks like it's just too big to fit on the smaller size).

CaedenV - Monday, April 24, 2017 - link
Once again, I have to ask... who on earth is this product for?

So you have a cheap $300 laptop, which is going to have a terrible display, minimal RAM, and a small HDD or eMMC drive... are they expecting these users to spring for one of these drives to choke their CPU?
Maybe a more mainstream $500-900 laptop where price is still ultra competitive? What metric does this add that will promote sales over a cheaper device with seemingly the same specs? Either it will have an SSD onboard already and the performance difference will go un-noticed, or it will have a large HDD and the end-user is going to scratch their head wondering why 2 seemingly identical computers have 4GB of RAM and a 1TB HDD, but one costs $100 more.
Ok, so maybe it is in the premium $1000-2000 market. Intel says it isn't aiming at these devices, but they are Intel. Maybe they think a $1000-2000 laptop is an 'affordable' mass-market device? Here you are talking about ultrabooks; super slim devices with SSDs... oh, and they only have 1 PCIe slot on board. Just add a 2nd one? Where are you going to put it? Going to add more weight? More thickness? A smaller battery? And even after you manage to cram the part into one of these laptops... what exactly is going to be the performance benefit? An extra half a second when coming out of sleep mode? Word opens in .5 sec instead of .8 sec? Yes, these drives are faster than SSDs... but we are way past the point where software load times matter at all.
So then what about workstation laptops. That is where these look like they will shine. A video editing laptop, or desktop replacement. And for those few brave souls using such a machine with a single HDD or SSD this seems like it would work well... except I don't know anyone like that. These are production machines, which means RAID1 in case of HDD failure. And this tech does not work with RAID (even though I don't see why not... seems like they could easily integrate this into the RAID controller). But maybe they could use the drive as a 3rd small stand-alone render drive... but that only works in linux, not windows. So, nope, this isn't going to work in this market either.
And that brings us to the desktop market. For the same price/RAID concerns this product really doesn't work for desktops either, but the Optane SSDs coming out later this year sound interesting... but here we still have a pretty major issue:
SATA3 vs PCIe m.2 drives have an odd problem. On paper the m.2 drives benchmark amazingly well. And in production environments for rendering they also work really well. But for work applications and games people are reporting that there is little to no difference in performance. Intel is trying to make the claim that the issue is due to access time on the controllers, and that the extremely fast access time on Optane will finally get us past all that. But I don't think that is true. For work applications most of the wait time is either on the CPU or the network connection to the source material. The end-user storage is no longer the limiting factor in these scenarios. For games, much of the load time is in the GPU taking textures and game data and unpackaging them in the GPU's vRAM for use. The CPU and HDD/SSD are largely idle during this process. Even modern HDDs keep up pretty well with their SSD brethren on game load times. This leads me to believe that there is something else that is slowing down the whole process.
And that single bottleneck in the whole thing is Intel. It is their CPUs that have stopped getting faster. It is their RAM management that rather sucks and works at the same speed no matter what your RAM is clocked at. It is the whole x86 platform, stagnant and inefficient, that is the real issue here. It is time for Intel to stop focusing on its next die-shrink, and start working on a new modern efficient instruction set and architecture that can take advantage of all this new tech! Backwards compatibility is killing the computer market. Time to make a clean break on the hardware side for a new way of doing things. We can always add software compatibility in an emulation layer so we can still use our old OSs and tools. It's going to be a mess, but we are at a point where it needs to be done.
Cliff34 - Monday, April 24, 2017 - link
It seems to me that this product doesn't really make sense for your average consumer. Even assuming you don't need to upgrade your hardware to use Optane memory as a cache, why not just spend the money to get a faster and bigger SSD?

If that's the case, wouldn't it be limited to only a few specific cases where someone really needs the Optane speed?
mkozakewich - Thursday, April 27, 2017 - link
An extra 4 GB of DDR4 seems to be $30-$40, so getting 16 GB of swap drive for the same price might be a good way to go.

I agree that using it for caching seems a little pointless.
zodiacfml - Monday, April 24, 2017 - link
Wow, strong at random perf where SSDs are weak. I guess this will be the drive for me. Next gen please.

p2131471 - Monday, April 24, 2017 - link
I wish you'd make interactive graphs for random reads. Or at least provide numbers in a table. Right now I can only approximate the exact values.

Billy Tallis - Monday, April 24, 2017 - link
I've been considering interactive graphs. I'm not sure how easily our current CMS would let me include external scripts like D3.js, and I definitely want to make sure it provides a usable fallback to a pre-rendered image if the scripts can't load. If you have suggestions for something that might be easy to integrate into my python/matplotlib workflow, shoot me an email.

And once I get the new 2017 consumer SSD test suite finished, I'll go back to having labeled bar charts for the primary scores, because that's the only easy way to compare across a large number of drives.
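For the pre-rendered fallback, something along these lines is the idea (a rough matplotlib sketch; the drive names and numbers here are placeholders, not results):

    import matplotlib.pyplot as plt

    drives = ["Optane 32GB", "960 EVO", "MX300"]   # hypothetical entries
    iops_qd1 = [190_000, 14_000, 11_000]           # illustrative numbers only

    fig, ax = plt.subplots(figsize=(6, 3))
    bars = ax.barh(drives, iops_qd1)
    ax.bar_label(bars, fmt="%d")                   # label bars with exact values
    ax.set_xlabel("4KB random read IOPS, QD1")
    fig.tight_layout()
    fig.savefig("qd1_random_read.png", dpi=150)    # static image for the fallback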
watzupken - Monday, April 24, 2017 - link
I echo the conclusion that the cache is too little, too late. In a time where SSDs are becoming affordable compared to perhaps 5 years back, it makes little sense to fork out so much money for a puny 32GB cache along with the other hardware requirements. It's fast, but it is not a full SSD.

menting - Monday, April 24, 2017 - link
It's not aimed at replacing an SSD.

Morawka - Monday, April 24, 2017 - link
has chipworks or anyone else figured out the material science behind this technology?

zeeBomb - Tuesday, April 25, 2017 - link
Damn you guys killed the optane in a day

Ryan Smith - Tuesday, April 25, 2017 - link
As is tradition.

The manufacturers work hard, but SSD firmware development and validation is hard. There are a lot of drives out there that are better off today because we broke them first.
Reflex - Tuesday, April 25, 2017 - link
http://www.anandtech.com/show/9470/intel-and-micro...

I think people need to re-read this article. Going over it makes much of the disappointment seem a bit overdone. Intel spoke to the potential of the technology; they didn't promise it all in the first version. They also spoke to its long term potential, including being able to stack the die and potentially move to higher bit levels. I think it's fair to say this isn't a consumer level product yet, but to ship a brand new memory tech at production level that is significantly faster and higher endurance than the alternatives is a significant accomplishment. We have been stuck for more than a decade with a '3-5 year' timetable on new memory technologies; perhaps this will get other players to actually ship something (I'm looking at you, HP, and your promise of memristors two years ago).
Reflex - Tuesday, April 25, 2017 - link
Also, apparently typing comments at 11PM after a long day at the office isn't the best idea. Ignore my typos please. ;)testbug00 - Tuesday, April 25, 2017 - link
Problem is, Intel did not make this clear. Intel has now had multiple chances to clearly separate the potential of the technology from the first generation implementation. They chose not to take them.

This is slimy and disgusting.
The technology as a whole long term does indeed seem very promising, however.
Reflex - Tuesday, April 25, 2017 - link
Couldn't you say that about any company that talks about an upcoming technology and its potential, then restricts its launch to specific niches? Which is almost everyone when it comes to new technologies...

evilpaul666 - Thursday, April 27, 2017 - link
Everyone presumes that technology will improve over time. Talking up 1000x improvements, making people wait for a year or more, and then releasing a stupidly expensive small drive for the enterprise segment, and a not particularly useful tiny drive for whoever is running a Core i3 7000 series or better CPU with a mechanical hard drive for some reason, is slightly disappointing.

We wanted better stuff now, after a year of waiting, not at some point in the future, which is where we've always been.
Lehti - Tuesday, April 25, 2017 - link
Hmm... And how does this compare to regular SSD caching using Smart Response? So far I can't see why anyone would want an Optane cache as opposed to that or, even better, a boot SSD paired with a storage hard drive.

Calin - Tuesday, April 25, 2017 - link
Did you bring the WD Caviar to steady state by filling it twice with random data in random files? Performance of magnetic media varies greatly based on drive fragmentation.

Billy Tallis - Wednesday, April 26, 2017 - link
I didn't pre-condition any of the drives for SYSmark, just for the synthetic tests (which the hard drive wasn't included in). For the SYSmark test runs, the drives were all secure erased then imaged with Windows.

MrSpadge - Tuesday, April 25, 2017 - link
"Queue Depth > 1When testing sequential writes at varying queue depths, the Intel SSD DC P3700's performance was highly erratic. We did not have sufficient time to determine what was going wrong, so its results have been excluded from the graphs and analysis below."
Yes, the DC P3700 is definitely excluded from these graphs... and the other ones ;)
Billy Tallis - Wednesday, April 26, 2017 - link
Oops. I copied a little too much from the P4800X review...

MrSpadge - Tuesday, April 25, 2017 - link
Billy, why is the 960 Evo performing so badly under SYSmark 2014, when it wins almost all synthetic benchmarks against the MX300? Sure, it's got fewer dies... but that applies to the low level measurements as well.

Billy Tallis - Wednesday, April 26, 2017 - link
I don't know for sure yet. I'll be re-doing the SYSmark tests with a fresh install of Windows 10 Creators Update, and I'll experiment with NVMe drivers and settings. My suspicion is that the 960 EVO was being held back by Microsoft's horrific NVMe driver default behavior, while the synthetic tests in this review were run on Linux.

MrSpadge - Wednesday, April 26, 2017 - link
That makes sense, thanks for answering!

Valantar - Tuesday, April 25, 2017 - link
Is there any reason why one couldn't stick this in any old NVMe-compatible motherboard regardless of platform and use a software caching system like PrimoCache on it? It identifies to the system as a standard NVMe drive, no? Or does it somehow have the system identify itself on POST and refuse to communicate if it provides the "wrong" identifier?

Billy Tallis - Wednesday, April 26, 2017 - link
As long as you have Intel RST RAID disabled for NVMe drives, it'll be accessible as a standard NVMe device and available for use with non-Intel caching software.

fanofanand - Tuesday, April 25, 2017 - link
I came here to read ddriver's "hypetane" rants, and I was not disappointed!

TallestJon96 - Tuesday, April 25, 2017 - link
Too bad about the drive breaking.

As an enthusiast who is gaming 90% of the time with my pc, I don't think this is for me right now. I actually just bought a 960 evo 500gb to complement my 1 tb 840 evo. Overkill for sure, but I'm happy with it, even if the difference is sometimes subtle.
This technology really excites me. If they can get a system running with no DRAM or NAND, and just use a large block of XPoint, that could make for a really interesting system. Put 128 GB of this stuff paired with a 2c/4t mobile chip in a laptop, and you could get a really lean system that is fast for everyday use cases (web browsing, video watching, etc).
For my use case, I'd love to have a reason to buy it (no more loading times ever would be very futuristic) but it'll take time to really take off.
MrSpadge - Tuesday, April 25, 2017 - link
> no more loading times

Not going to happen, because there's quite some CPU work involved with loading things.
SanX - Tuesday, April 25, 2017 - link
Blahblahblah endurance, price, consumption, superspeed. Where are they? ROTFLOL. At least don't show these shameful speeds if you opened your mouth this loud, Intel. No one will ever look at anything less than the 3.5GB/s set by the Samsung 960 Pro if you trolled about superspeeds.
Is there any technical reasoning why this won't work with older CPUs?

I don't see this being any different than Intel RST.
KAlmquist - Thursday, April 27, 2017 - link
I think that Intel SRT caches reads, whereas the Optane Memory caches both reads and writes. My guess is that when Intel SRT places data in the cache, it doesn't immediately update the non-volatile lookup tables indicating where that data is stored. Instead, it probably waits until a bunch of data has been added, and then records the locations of all of the cached data. The reason for this would be that NAND can only be written in page units. If Intel were to update the non-volatile mapping table every time it added a page of data to the cache, that would double the amount of data written to the caching SSD.

If I'm correct, then with Intel SRT, a power loss can cause some of the data in the SSD cache to be lost. The data itself would still be there, but it won't appear in the lookup table, making it inaccessible. That doesn't matter because SRT only caches reads, so the data lost from the cache will still be on the hard drive.
In contrast, Optane Memory presumably updates the mapping table for cached data immediately, taking advantage of the fact that it uses a memory technology that allows small writes. So if you perform a bunch of 4K random writes, the data is written to the Optane storage only, resulting in much higher write performance than you would get with Intel SRT.
In short, I would guess that Optane Memory uses a different caching algorithm than Intel SRT; an algorithm that is only implemented in Intel's latest chipsets.
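To sketch the contrast (pure speculation on my part, not Intel's actual algorithm - the class names and batch size are made up):

    # A read-only cache can batch its metadata writes: losing the table on a
    # power cut only drops cache entries, since the HDD still holds the data.
    # A write-back cache must make each mapping durable before acknowledging
    # the write - affordable only if small writes are cheap, as on XPoint.
    class SrtStyleReadCache:
        def __init__(self, batch=64):
            self.table, self.pending, self.batch = {}, 0, batch
        def on_read_miss(self, lba, data_from_hdd):
            self.table[lba] = data_from_hdd
            self.pending += 1
            if self.pending >= self.batch:  # amortize one NAND page program
                self.flush_table()
        def flush_table(self):
            self.pending = 0                # stand-in for a page-sized write

    class OptaneStyleWriteBackCache:
        def __init__(self):
            self.table = {}
        def on_write(self, lba, data):
            self.table[lba] = data
            self.persist_entry(lba)         # byte-granular, in-place update
        def persist_entry(self, lba):
            pass                            # stand-in for a small media write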
That's unfortunate, because if Optane Memory were supported using software drivers only (without any chipset support), it would be a very attractive upgrade to older computer systems. At $44 or $77, an Optane Memory device is a lot less expensive than upgrading to an SSD. Instead, Optane Memory is targeted at new systems, where the economics are less compelling.
mkozakewich - Thursday, April 27, 2017 - link
I would really like to see the 16GB Optane filled with a system paging file (on a device with 2 or 4 GB of RAM) and then some general system experience tests. This seems like the perfect solution: the system is pretty good about offloading stuff that's not needed and pulling needed files into working memory for full speed, and the memory can be offloaded to or loaded from the Optane cache quickly enough that it shouldn't cause many slowdowns when switching between tasks. This seems like the best strategy, in a world where we're still seeing 'pro' devices with 4 GB of RAM.

Ugur - Monday, May 1, 2017 - link
I wish Intel would release Optane sticks/drives in 1-4TB sizes asap and sell them for 100-300 more than SSDs of the same size, immediately.

I'm kinda disappointed they do this type of tiered rollout, where it looks like it'll take ages until i can get an Optane drive at larger sizes for halfway reasonable prices.
Please Intel, make it available asap, i want to buy it.
Thanks =)
abufrejoval - Monday, May 8, 2017 - link
Well, the most important thing is that Optane is now a real product on the market, for consumers and enterprise customers. So some Intel senior managers don't need to get fired and can cross off items on their bonus score cards.

Marketing will convince the world that Optane is better, and most importantly that only Intel can have it inside: no ARM, no Power, no Zen based server shall ever have it.
For the DRAM-replacement variant, that exclusivity had a reason: Without proper firmware support, that won’t work and without special cache flushing instructions it would be too slow or still volatile.
Of course, all of that could be shared with the competition, but who wants to give up a practical monopoly which no competitor can contest in court before their money runs out?
For the PCIe variant, the Intel chipset and OS dependencies are all artificial, but doesn't that make things better for everyone? Now people can give up ECC support in cheap Pentiums and instead gain Optane support for a premium on CPUs and chipsets which use the very same hardware underneath for production cost efficiency. Whoever can sell that truly deserves their bonus!
Actually, I’d propose they be paid in snake oil.
For the consumer with a linear link between Optane and its downstream storage tier, it means the storage path has twice as many opportunities to fail. For the service technician it means he has four times as many test scenarios to perform. Just think on how that will double again, once Optane does in fact also come to the DIMM socket! Moore’s law is not finished after all! Yeah!
Perhaps Microsoft could be talked into creating a special Optane Edition which offers much better granularity for forensic data storage, and surely there would be plenty of work for security researchers, who just love to find bugs really, really deep down in critical Intel Firmware, which is designed for the lowest Total Cost of TakeOwnership in the industry!
Where others see crisis, Intel creates opportunities!