19 Comments
DanNeely - Tuesday, November 14, 2017 - link
I'm a little surprised they're only talking about 8Gb chips; current games are already bumping up against the 8GB limit. Especially with the higher bandwidth I'd think there'd be demand for 2GB chips to get the total ram capacity up without massively increasing bandwidth beyond what's needed and driving up costs with extra thick PCBs to support all the extra traces needed for 12/16 ram chips.
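A back-of-the-envelope sketch of the trade-off DanNeely describes; the chip counts, per-chip capacity and per-pin rate below are illustrative assumptions, not figures from the article:

    # Rough capacity/bandwidth math for a GDDR6 card, assuming the
    # standard 32-bit interface per chip. Chip counts and the 16 Gbps
    # per-pin rate are illustrative, not taken from the article.
    def card_config(chips, gbit_per_chip, gbps_per_pin):
        bus_width_bits = chips * 32
        capacity_gbytes = chips * gbit_per_chip / 8
        bandwidth_gbytes_s = bus_width_bits * gbps_per_pin / 8
        return bus_width_bits, capacity_gbytes, bandwidth_gbytes_s

    for chips in (8, 12, 16):
        bus, cap, bw = card_config(chips, gbit_per_chip=8, gbps_per_pin=16)
        print(f"{chips} x 8Gb chips: {bus}-bit bus, {cap:.0f} GB, {bw:.0f} GB/s")

    # 16Gb (2 GB) chips would double capacity at the same chip count and
    # bus width, avoiding the extra PCB traces that 12-16 chips require.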
"current games are already bumping up against the 8GB limit"Only if you take a naive look at vRAM utilisation. For years, games have aggressively cached texture and geometry data into vRAM up to the maximum possible (whether that's the limit of available vRAM, or until you run out of textures used in that level/stream chunk). The amount of vRAM actually NEEDED is a fraction of the amount used. That cached data can and is overwritten without any penalty, and can otherwise be treated as free space when it comes to live workloads.
zepi - Tuesday, November 14, 2017 - link
Samsung 16Gb GDDR6 Memory – The fastest and lowest-power DRAM for next generation, graphics-intensive applications. It processes images and video at 16Gbps with 64GB/s data I/O bandwidth....
I think it is a 2GB, 16Gbps chip.
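For what it's worth, the 64GB/s per-chip figure follows directly from the per-pin rate and a chip's 32-bit interface; a quick sanity check (assuming the standard 32-bit-per-chip GDDR6 interface):

    # Per-chip bandwidth: 16 Gbps per pin across a 32-bit interface.
    per_pin_gbps = 16
    interface_bits = 32
    print(per_pin_gbps * interface_bits / 8, "GB/s")  # 64.0 GB/s, matching the quote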
lefty2 - Tuesday, November 14, 2017 - link
Does anyone know whether GDDR6 will be bad for mining like GDDR5X was?
PeachNCream - Tuesday, November 14, 2017 - link
I didn't know GDDR5X was bad for mining. Is it just one specific type of cryptocurrency that it's not good for, and why is that? You'd think faster memory wouldn't hurt anything.
ImSpartacus - Tuesday, November 14, 2017 - link
GDDR6 works much like GDDR5X (but will hit higher speeds), so the answer is "probably".
MrSpadge - Tuesday, November 14, 2017 - link
I suppose Ethereum dislikes GDDR5X due to the higher latencies (as I haven't seen and can't think of a better explanation). At 16 Gbps the absolute latency of GDDR6 should be about the same as GDDR5 at 8 Gbps. I don't know if at that point Ethereum would benefit from the higher bandwidth, or if it's simply jumping around in memory like mad (and hence would be mainly latency bound).
voicequal - Tuesday, November 14, 2017 - link
Likely not the latency but rather the prefetch. Ethash uses 128B random memory accesses. Prefetch was doubled between GDDR5 and GDDR5X, which returns twice the data for a given address request (source: http://www.anandtech.com/show/9883/gddr5x-standard...). This can waste bandwidth for small random memory accesses, but it's fine for the large accesses for which most GPUs are likely tuned.
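A rough model of the granularity effect voicequal describes; the 32-byte (GDDR5, 8n prefetch) and 64-byte (GDDR5X, 16n prefetch) access granularities come from the linked AnandTech article, while the access sizes are just illustrative:

    import math

    def useful_fraction(access_bytes, granularity_bytes):
        """Fraction of the fetched data actually used by one random access."""
        fetched = math.ceil(access_bytes / granularity_bytes) * granularity_bytes
        return access_bytes / fetched

    # 32B granularity (GDDR5) vs 64B (GDDR5X); access sizes are illustrative.
    for access in (16, 32, 64, 128):
        g5 = useful_fraction(access, 32)
        g5x = useful_fraction(access, 64)
        print(f"{access:3d}B access: GDDR5 {g5:.0%} useful, GDDR5X {g5x:.0%} useful")

Note that under this simplified model a 128B access is served without waste by either granularity, so it only captures the "small random accesses" part of the comment.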
MrSpadge - Wednesday, November 15, 2017 - link
Thanks, that makes sense. So if they could make good use of 256 bit transfers, GDDR5X and GDDR6 would perform well again.
HighTech4US - Tuesday, November 14, 2017 - link
Quote: Given the transfer rates, it is reasonable to guess that we are dealing with 8 Gb chips – especially since Micron and Hynix have already announced their own 8Gb chips – but we do not know this for sure.
Really!! You are unsure. The press release clearly states "Samsung 16Gb GDDR6 Memory"
https://news.samsung.com/global/samsung-honored-fo...
DanNeely - Tuesday, November 14, 2017 - link
It's so obvious that I wonder if Samsung updated the PR between when Anton wrote this and now. Really hard to explain him missing it otherwise...
DanNeely - Tuesday, November 14, 2017 - link
... nope. I found a copy from the 10th on the wayback machine. Said 16Gb there too.
ImSpartacus - Tuesday, November 14, 2017 - link
No, it said 16Gb before. I think it's a typo and they meant 16Gbps since the article proceeds to talk extensively about the data rate and later even mentions 8Gb capacity.
HighTech4US - Tuesday, November 14, 2017 - link
No, it is not a typo; it is a 16Gb chip running at 16Gbps.
Bates123 - Tuesday, November 14, 2017 - link
What is does "pre-announce" mean? Either you announce or not...
ImSpartacus - Tuesday, November 14, 2017 - link
It's not even close to being ready. This is a proof of concept to get people hyped.
Expect 12-14 Gbps GDDR6 in 2018 and 16 Gbps stuff in 2-3 years.
Bates123 - Tuesday, November 14, 2017 - link
Should be: What does "pre-announce" mean? Either you announce or not...