As part of this year’s Intel Developer Forum, we had half expected some more insight into the new series of 3D XPoint products that will be hitting the market, either in terms of narrower time frames or a deeper look at the technology. Last year saw the first outing of information, including the ‘Optane’ brand name for the storage version. Unfortunately, new information was thin on the ground, and Intel seemed reluctant to say any more about the technology than what had already been said.

What we do know is that 3D XPoint based products will come in storage flavors first, with DRAM extension parts to follow in the future. This ultimately comes down to the fact that storage is easier to implement and enable than DRAM, and the requirements for storage are not as strict as those for DRAM in terms of raw speed, latency, or read/write cycles.

For IDF, Optane was ultimately relegated to a side presentation held at the same time as other important talks, where we were treated to a discussion of a ‘software defined cache hierarchy’, whereby a system with an Optane drive can define its memory space as ‘DRAM + Optane’. This means a system with 256GB of DRAM and a 768GB Optane drive can essentially act like a system with ‘1TB’ of DRAM space to fill with a database. The abstraction layer in the software/hypervisor is aimed at brokering the actual interface between DRAM and Optane, but it should be transparent to software. This would enable some database applications to move from ‘partial DRAM and SSD scratch space’ into a full ‘DRAM’ environment, making programming easier. Of course, the performance compared to an all-DRAM database is lower, but the point is to move databases out of the SSD/HDD environment by making the available ‘DRAM’ space larger.
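
As a rough illustration of how such a tiering layer might behave (a minimal sketch of the concept only, not Intel's implementation; the class, capacities and LRU policy below are invented for clarity), the broker keeps hot data in a small fast tier standing in for DRAM and demotes cold data to a larger slow tier standing in for the Optane device, while the application sees a single key space:

```python
# Minimal sketch of a "software defined cache hierarchy" style broker.
# The fast tier stands in for DRAM, the slow tier for an Optane device;
# the caller only sees one put/get interface. Purely illustrative.
from collections import OrderedDict

class TieredStore:
    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity   # entries the "DRAM" tier can hold
        self.fast = OrderedDict()            # hot tier, kept in LRU order
        self.slow = {}                       # cold tier ("Optane")

    def put(self, key, value):
        self.fast[key] = value
        self.fast.move_to_end(key)           # mark as most recently used
        self._evict_if_needed()

    def get(self, key):
        if key in self.fast:                 # hit in the fast tier
            self.fast.move_to_end(key)
            return self.fast[key]
        value = self.slow.pop(key)           # miss: promote from the slow tier
        self.put(key, value)
        return value

    def _evict_if_needed(self):
        while len(self.fast) > self.fast_capacity:
            cold_key, cold_value = self.fast.popitem(last=False)
            self.slow[cold_key] = cold_value # demote least recently used entry

# The application just sees one big store, e.g. "1TB of DRAM".
store = TieredStore(fast_capacity=3)
for i in range(10):
    store.put(i, f"page-{i}")
print(store.get(0))   # transparently promoted back from the slow tier
```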

Aside from the talk, there were actually some Optane drives on the show floor, or at least what we were told were Optane drives. These were PCIe x4 cards with a backplate and a large heatsink, and despite my many requests, neither demonstrator would take the card out to show what the heatsink looked like. On top of that, neither drive was actually being used - one demonstration was showing a pre-recorded video of a rendering result using Optane, and the other was running a slideshow with results of Optane on RocksDB.

I was told in both cases that these were 140 GB drives, and even though nothing was running, I was able to feel the heatsinks – they were fairly warm to the touch, at least 40°C if I had to put a number on it. One of the demonstrators was able to confirm that Intel has now moved from an FPGA-based controller to its own ASIC, although it is still in the development phase.


One demo system was showing results from a presentation given earlier in the lifespan of Optane: rendering billions of water particles in a scene where most of the scene data was being shuffled back and forth between storage and memory. In this case, compared to Intel’s enterprise PCIe SSDs, the render time was reduced from 22 hours to around 9 hours.

It's worth noting that we can see some BGA pads in the image above. The pads are arranged in an H-shaped pattern, and there are several of them, indicating that these are likely the sites for the 3D XPoint ICs. Some of the pads are unpopulated, suggesting that this prototype shares its PCB with a larger-capacity model. Given that density is one of the touted benefits of 3D XPoint, we're hoping to see a multi-terabyte version at some point in the future.

The other demo system was a Quanta / Quanta Cloud Technology server node, featuring two Xeon E5 v4 processors and a pair of PCIe slots on a riser card – the Optane drive was installed in one of these slots. Again, it was practically impossible to see more of the drive than its backplate, but the on-screen presentation of RocksDB was fairly interesting, especially as it mentioned optimizing the software for both the hardware and for Facebook’s workloads.

RocksDB is a high-performance key/value store designed for fast embedded storage, used by Facebook, LinkedIn and Yahoo, and the fact that Facebook was directly involved in some of the testing indicates that, at some level, interest in 3D XPoint will reach the big seven cloud computing providers before it hits retail. The slides on screen showed a 10x reduction in latency as well as a 3x improvement in database GETs. There was a graph plotted showing results over time (not live data), with the latency metrics being pretty impressive. It’s worth noting that no results were shown for storing key/value data pairs.
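
For a sense of what such a point-lookup measurement involves (a minimal sketch only, assuming the third-party python-rocksdb bindings; the database path, key count and value size are arbitrary, and this is not Intel’s or Facebook’s benchmark), timing GETs against RocksDB looks roughly like this:

```python
# Rough GET-latency micro-benchmark against RocksDB, assuming the
# python-rocksdb bindings are installed. Numbers here say nothing about
# Optane; they only illustrate the kind of metric shown on the slides.
import time
import rocksdb

db = rocksdb.DB("bench.db", rocksdb.Options(create_if_missing=True))

num_keys = 100_000
for i in range(num_keys):                    # load key/value pairs
    db.put(f"key-{i}".encode(), b"x" * 512)

start = time.perf_counter()
for i in range(num_keys):                    # time point lookups (GETs)
    db.get(f"key-{i}".encode())
elapsed = time.perf_counter() - start
print(f"average GET latency: {elapsed / num_keys * 1e6:.1f} us")
```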

Despite these demonstrations on the show floor, we’re still crying out for more information about 3D XPoint: how exactly it works (we have a good idea but would like confirmation), Optane itself (price, time to market), as well as the generation of DRAM products for enterprise that will follow. The fact that Intel was comparatively low key about all this during IDF is a little concerning, and I’m expecting to see and hear more about it during Supercomputing16 in mid-November. For anyone waiting on a consumer Optane drive, it feels like it won’t be out as soon as you might think, especially if the big seven cloud providers want to buy every wafer from the production line for the first few quarters.

More images in the gallery below.

 

66 Comments

  • Omoronovo - Monday, August 29, 2016 - link

    Those aren't apples-to-apples comparisons, so you can't point to NVMe as the factor causing the difference in price.

    For example, you're comparing a 512GB Samsung 950 Pro (for completeness' sake I'll assume you meant the M.2 one) and an SM951. Not only are these devices going to cost a different amount regardless of their technical merits - one is an OEM device, the other is retail, for a start - they aren't even in the same market segment.

    The rest of your post doesn't have enough specifics to actually compare the drives, let alone enough to point to a specific feature as the differentiator in price. My example in my previous post is fair because it compares *two otherwise identical drives* where the only difference is that one is AHCI and the other is NVMe.

    Just for completeness' sake, don't forget that literally no 2.5" SATA drives use NVMe. Feel free to try to dispute that, though. This means that you *must* compare only M.2 drives if you want to find out what cost overhead NVMe specifically adds versus AHCI.

    2.5" drives can use substantially cheaper NAND in terms of cost per gigabyte; they can generally fit 16 packages onto the PCB, meaning that the full capacity can be made up of many smaller-capacity dies. These are cheaper to design and manufacture and have higher yields. An M.2 drive will have substantially higher-density NAND due to space constraints, which means that - at least for the moment - M.2 versus 2.5" SATA form factors are always going to have a substantial price disparity. This has nothing to do with NVMe.
  • ewitte - Sunday, August 28, 2016 - link

    Based off the prices they quoted compared to DRAM, I'd guess closer to $500. If they go too high, people will mostly just get a 960 Pro, which will have 2-3 times the IOPS performance and a new NVMe controller, and costs about the same as a 950 Pro.
  • Pork@III - Friday, August 26, 2016 - link

    I'll wait for 100+TB Optane in future generations. In these slow times of piece-by-piece tiny progress and big steps backward... I may have to wait until at least Y2027.
  • Vlad_Da_Great - Friday, August 26, 2016 - link

    "...we’re still crying out for more information about 3D XPoint, ..". That is all you guys are good for, crying and whining like babies, creating drama and conspiracy. I tell you what, go to Intel and say we are looking to buy 10 000, 1TB Optane SSD's here is $1M non refundable deposit. After that you will get any detail specification and abilities of those drivers. Can you dig it?
    Second, how low IBM has fallen to advertise on AnandTech, man-o-man, shaking my head, rolling my eyes.
  • Ian Cutress - Friday, August 26, 2016 - link

    Can't tell if troll or...
  • fanofanand - Monday, August 29, 2016 - link

    The last few comments made by Vlad on Anandtech all scream "Troll". Better to just ignore this one.
  • iwod - Friday, August 26, 2016 - link

    Once we get next-generation DDR5 and TSV-stacked DRAM, I presume we should be able to get 128GB to 512GB per DIMM. That is the same range as XPoint capacity, maybe at 2-3x the price per GB, but a massive win in latency. Not to mention effectively limitless read/write endurance; maybe the long-term TCO is better.

    That is reaching 4-8TB of memory per server, and then you add in the ever-increasing speed of SSDs as a second tier. What *exactly* does XPoint provide in this future?
  • Lolimaster - Friday, August 26, 2016 - link

    It seems SSDs are a dead end in terms of latency and 4K random read performance.

    On latency alone, DRAM is 1000x faster; XPoint should reduce that gap to 10-100x.
  • ddriver - Saturday, August 27, 2016 - link

    SSDs can go a long way in those regards, but they need more cache... hot data needs to be kept in cache and only flushed to NAND on shutdown. It may well turn out that Intel is doing the same thing with XPoint, hence the big secrecy. There is no word on the actual volume of touched data in the tests they present, and I suspect it is not all that much to begin with, and that a lot of the performance they are bragging about is just cache hits for a workload small enough to stay in cache in its entirety. And the only reason it is slower than DRAM is that it is on PCIe...
  • nils_ - Friday, September 9, 2016 - link

    This sort of technology already exists as NVDIMM. Basically you have the same amount of RAM and NAND on a DIMM module (there are also PCIe cards that present as block devices) plus an array of capacitors and a controller that will flush the RAM to NAND on power loss.
