
  • Anymoore - Monday, December 30, 2019 - link

    For SLC, SCM such as 3D XPoint is cheaper. Also, the diameter of the NAND channel cannot be shrunk below ~100 nm, whereas SCM is expected to go below ~20 nm. In other words, SCM achieves the same density with fewer layers than 3D NAND.
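A quick back-of-the-envelope sketch of the density claim above (Python, illustrative only): it takes the ~100 nm and ~20 nm figures from the comment, assumes a cell's footprint scales with its pitch squared, and assumes the SCM stores 1 bit per cell; everything else is my simplification, not data from the article.

```python
# Assumed figures (from the comment above): NAND channel pitch floor ~100 nm,
# SCM cell pitch ~20 nm. My own assumption: cell footprint scales with pitch^2.
nand_pitch_nm = 100
scm_pitch_nm = 20

# Cells per unit area in a single layer scale as 1 / pitch^2.
cells_per_area_ratio = (nand_pitch_nm / scm_pitch_nm) ** 2   # 25x in SCM's favour

for nand_bits_per_cell in (1, 3):   # SLC vs. e.g. TLC; SCM assumed 1 bit/cell
    layers_needed = cells_per_area_ratio / nand_bits_per_cell
    print(f"NAND at {nand_bits_per_cell} bit(s)/cell needs ~{layers_needed:.0f} "
          f"layers to match one SCM layer's bit density")
```

Under those assumptions an SLC NAND stack would need on the order of 25 layers to match a single SCM layer's bit density, which is the sense in which "SCM achieves the same density with fewer layers."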
  • Billy Tallis - Monday, December 30, 2019 - link

    You cannot use density as a proxy for cost when comparing 3D NAND against something like 3D XPoint, even if you stipulate that you're talking about designs with the same layer count. Adding layers to 3D NAND is simpler and involves fewer process steps than adding layers to 3D XPoint, so 3D NAND doesn't need to shrink the horizontal cell dimensions as much as 3D XPoint in order for 3D NAND to remain way ahead on cost-effectiveness.
  • Anymoore - Thursday, January 2, 2020 - link

    As SLC, 3D NAND is, in fact, more expensive per bit.
  • Billy Tallis - Thursday, January 2, 2020 - link

    The Samsung 983 ZET 960GB enterprise SLC SSD is cheaper than the Intel Optane 905P 960GB consumer 3DXP drive, by about 22%. Intel's Optane drives are also far more than 3x the price of consumer TLC SSDs, or 4x the price of consumer QLC SSDs. You must be basing your assertion on something other than real-world prices.
  • peevee - Monday, December 30, 2019 - link

    There will always be cases which are capacity-bound and speed-bound, and for hybrid cases there are memory/cache hierarchies. 3DXPoint over MLC over QLC is just another hierarchy.
  • nandnandnand - Monday, December 30, 2019 - link

    This is basically propaganda for Toshiba/Kioxia.

    There are many post-NAND candidates. Only one has to succeed to smash that graph.
  • Eliadbu - Tuesday, December 31, 2019 - link

    Flash storage is not going anywhere soon, just as hard drives won't phase out of the world anytime soon. But Kioxia seems to be overlooking technology improvements that may help 3D XPoint overcome the challenges of being both dense and cost-effective. And even if 3D XPoint isn't the fitting technology, there are plenty of other candidates to replace flash memory. They seem to put all their eggs in the flash basket and try to dismiss any other initiatives.
  • Fujikoma - Tuesday, December 31, 2019 - link

    As written in the third paragraph:
    2-bits per cell (00, 01, 10, 00)

    Shouldn't that be:
    2-bits per cell (00, 01, 10, 11)
  • Billy Tallis - Tuesday, December 31, 2019 - link

    Fixed. Thanks for pointing it out.
  • jjj - Tuesday, December 31, 2019 - link

    This is complete nonsense. XPoint does not scale well, but that does not say much about any other memory.
    NAND scaling is already slow; it's far from ideal at this point.
    The right memory scales well horizontally, vertically, and in bits per cell. At some point, someone will make that.
  • Billy Tallis - Tuesday, December 31, 2019 - link

    Intel's 3D XPoint memory is hardly the only SCM to use a crosspoint layout. Kioxia's point here applies generally to everything that uses a crosspoint layout, regardless of whether the specific materials and switching mechanism are a match for what Intel's 3D XPoint uses.
  • HaroldM - Wednesday, January 1, 2020 - link

    @Dr. Ian Cutress, what is your take on SCM solution "Non-filamentary interface switching ReRAM"?
  • HollyDOL - Wednesday, January 1, 2020 - link

    I have a bit of a feeling this is a case of "who wants, looks for ways; who doesn't want, looks for reasons"... maybe they are right and crosspoint is a fundamentally flawed principle, but it smells way too much like the result of a task to 'find ways to smear something we cannot do and the competition is better at'.

    Coming from an MIT research paper (or similar), I'd trust the claim much more.
  • FunBunny2 - Wednesday, January 1, 2020 - link

    The whole point of SCM is to eliminate a boundary and eliminate some hardware. What you get is single-level storage, really, not just an address space spanning RAM and 'disk'. To what degree cost/scaling/etc. are greater for SCM (XPoint or otherwise) is only 1/2 or 1/3 of the C/B analysis. Just think about all the parts of OS and app code that go away with real SCM. It adds up very, very quickly.
  • dropme - Wednesday, January 1, 2020 - link

    And meanwhile Sony is reportedly inching toward commercialization of a new type of SCM. At least we can easily tell who the winner of the next format war will be, since Toshiba is acting like a coward who refuses change.
  • twotwotwo - Wednesday, January 1, 2020 - link

    We talk about literal bit-addressability, but usually CPUs want to read/write at least a 512-bit cache line at a time. Wonder if wiring for accessing chunks at a time (much less than 4KB, more than...one bit) scales any better. And larger chunks could help ECC handle somewhat larger clusters of errors.

    I'm not sure how that applies/not to any particular products. Just, you don't inherently need literal bit-addressability to make something good enough in practice to sit between DRAM and Flash.
  • Billy Tallis - Thursday, January 2, 2020 - link

    You're right that bit-granular accessibility isn't on its own a particularly useful feature for real products. But it does have two important consequences: the lack of NAND's awkward page vs erase block dichotomy, and the flexibility to easily use whatever word size is most convenient—such as matching cache line sizes for direct-attached memory, or larger block sizes for SSDs.
  • edzieba - Thursday, January 2, 2020 - link

    So Kioxia's argument basically boils down to "NAND layer scaling has no limits, Ovonic memory cannot scale into layers", conveniently ignoring that NAND layer scaling is already slowing, and that we're still on Gen1 Ovonic memory, with no attempts yet to stack layers that would generate data.
    Or to strip out the marketing speak entirely: "we make NAND, we don't make Ovonic memory. NAND is better".
  • Billy Tallis - Thursday, January 2, 2020 - link

    The argument is not that 3D NAND has no limits, but that those limits don't become significant until layer counts far beyond any reasonable expectation for scaling up 3D crosspoint-style layer counts. Kioxia's estimates are that the sweet spot for a crosspoint memory's layer count is probably around 4 layers, and that beyond about a dozen layers, it ends up being more expensive per bit than a single layer. Even if Kioxia's estimates for SCM are too low by an order of magnitude, that still would mean that crosspoint memories won't be scaling up to the layer counts of 3D NAND that's already on the market.

    There's a big difference between noting that Intel 3DXP on the market is still one layer, vs claiming that there have been no attempts to stack crosspoint memories that would provide an indication of how much that costs.
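A minimal, purely illustrative model of the "sweet spot" shape described above (Python). The cost form and every coefficient are my own assumptions, not Kioxia's published model: each crosspoint layer is assumed to be patterned individually, each additional layer is assumed to cost slightly more than the last, and a fixed base cost is shared across all layers.

```python
def crosspoint_cost_per_bit(n_layers, base=2.0, layer0=1.0, growth=0.25):
    """Relative cost per bit of an n-layer crosspoint array (arbitrary units)."""
    # Assumed: layer i costs layer0 * (1 + growth * i), i.e. later layers are harder.
    layer_total = sum(layer0 * (1 + growth * i) for i in range(n_layers))
    # Bits scale linearly with layer count; the base cost is shared across all bits.
    return (base + layer_total) / n_layers

for n in (1, 2, 4, 8, 12, 16):
    print(f"{n:2d} layers: {crosspoint_cost_per_bit(n):.2f} relative cost/bit")
```

With these made-up parameters the per-bit cost bottoms out around 4 layers and climbs back to the single-layer value in the mid-teens, which reproduces the general shape of the argument rather than Kioxia's actual numbers.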
  • Anymoore - Sunday, January 5, 2020 - link

    The main correction needed for the equation is for Cv, which should be proportional to the pitch of the cell within the plane. This reflects how many cells cover the wafer area within one layer. The larger pitch would mean fewer cells, so each would be more expensive. The cell pitch of course can be shrunk for the 3D NAND; this would be ultimately limited by the aspect ratio that can be tolerated by the process.
  • Anymoore - Sunday, January 5, 2020 - link

    Correction: proportional to the cell area, so not the pitch but pitch^2.
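A rough sketch of how that correction would enter a per-bit cost comparison (Python). The functional form is my own simplification, not the article's equation; the point is only that the per-layer cell cost term scales with cell area, i.e. pitch squared.

```python
def cost_per_bit(layer_cost, pitch_nm, die_area_mm2=100.0, bits_per_cell=1):
    """Relative per-bit cost of one memory layer (arbitrary cost units)."""
    cell_area_nm2 = pitch_nm ** 2                           # footprint ~ pitch^2
    cells_per_layer = die_area_mm2 * 1e12 / cell_area_nm2   # 1 mm^2 = 1e12 nm^2
    return layer_cost / (cells_per_layer * bits_per_cell)

# At the same per-layer processing cost, shrinking the pitch from 40 nm to 20 nm
# quarters the per-bit cost, which is why pitch^2 (cell area) belongs in the model.
print(cost_per_bit(1.0, 40) / cost_per_bit(1.0, 20))   # -> 4.0
```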
  • WarthogARJ - Wednesday, July 22, 2020 - link

    Very interesting.
    And do you have a reference for this?
    As in, it's a Kioxia researcher, but who?
    I imagine that by now there's something more he's done.
    After all, Kioxia is still trying to sell things to other companies, so it needs to release some information.
  • WarthogARJ - Wednesday, July 22, 2020 - link

    In addition, looking at the first graph, you can see where they want to focus: Enterprise.
    In 2020 it's the largest sector, and they think it's going to almost triple in size in 5 years.

    Whereas PCs were the smallest segment by far in 2020, and they think it will grow faster, by maybe 4-5 times, but it still ends up 2nd to last in size.

    Entertainment and Mobile don't grow much.

    So if you consider what Enterprise is replacing, HDD, it doesn't have to be anything very fancy in terms of speed relative to other SSD NAND, but better than the current HDD it's replacing.

    So for either technology, SCM or NAND flash, the main advantage of more layers wrt HDD is COST, or I think more accurately, COST per "endurance unit". And that differs depending on the R/W ratio.

    As far as that graph is concerned, I think it's misleading to use a single function for it as they've done. Or perhaps CLAIM to have done.

    And especially to normalise everything, such that 1 layer of NAND cost = 1 layer of 3D SCM.

    If you reconstruct the curves to be TOTAL cost as a function of layers, then it looks very different.
    You get a very flat LINE for NAND, which shows a gradual rise in TOTAL cost as layers increase, but it's a VERY slow increase.

    And for SCM you get a quite rapid increase in cost, which tends to accelerate as you add layers.

    And then, to adjust for the fact that SCM is currently more expensive than NAND, you need to add an offset, say make 1 layer of SCM = twice the cost of NAND.
    So just add 1.0 to the SCM curve.

    Then if you go back to the normalised bit cost by dividing by the number of layers, the two curves look VERY different.

    The initial difference at one layer is indeed quite big, BUT as you add layers, the 3D SCM normalised bit cost does NOT behave the same way: it CONTINUES to drop until about 10 layers, and then does not rise very fast.

    I cannot add a graph in Comments, but do it yourself and you shall see.

    The point is that the Kioxia curves are misleading, because it depends very heavily on the cost per layer of each one. Assuming they are equal is NOT a "safe" assumption.

    I doubt that they don't realise this; I think they are trying to push this point to sell the idea of using NAND flash for data centres for as long as they can. It's smoke and mirrors, though.
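For what it's worth, here is a small sketch of the reconstruction described in the comment above (Python). The curve shapes, coefficients, and the 2x starting offset are illustrative assumptions in the spirit of the comment, not data from the article's graph.

```python
# Assumed total-cost curves (arbitrary units): NAND rises slowly with layer
# count, SCM starts at twice the 1-layer NAND cost and accelerates.
def nand_total(n):
    return 1.0 + 0.05 * (n - 1)

def scm_total(n):
    return 2.0 + 0.10 * (n - 1) + 0.02 * (n - 1) ** 2

# Normalised bit cost = total cost divided by the number of layers.
for n in (1, 4, 8, 10, 16, 32):
    print(f"{n:2d} layers: NAND {nand_total(n) / n:.3f}/bit  "
          f"SCM {scm_total(n) / n:.3f}/bit")
```

With these assumed shapes the SCM per-bit cost keeps falling until roughly 10 layers and rises only slowly afterwards, which is the behaviour the comment is pointing at; different coefficients would of course move that minimum around, which is exactly why the assumed cost per layer matters so much.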
