From the first peak power chart, the 10700K consumes almost twice as much power as an equivalent AMD offering at that price point. The "65W" number is blatantly false advertising.
Intel is just lying at this point, as these are 'effectively' ~215W parts if you put them in a motherboard from Asus, ASRock, MSI, Gigabyte, etc. Only in an OEM system like an HP EliteDesk or Dell workstation will they run anywhere close to their TDP rating, but I'd guess those are using PL2 as well, because why not, Intel said it's OK.
It's become painfully obvious Intel has had to resort to extreme measures here to compete. And compete is a pretty loose definition as they are using almost double the power of the competition and still slower clock for clock, dollar for dollar. No wonder Intel has shaken up the ranks, this is embarrassing.
I snagged a Ryzen 3700X 8c/16t for $280 3 months ago. The price is at $325 or so these days until it is supplanted by a Ryzen 5700X. Makes the Core i7-10700 at 197w very, very sad.
Doesn't matter, 10700 isn't really a 65W part either. I can deal with a 105W Ryzen pulling 150W under full load but having a "65W" part pull 215W is just BS. That's over triple and will overstress crap VRMs.
Its 2.9GHz base clock uses 65W; that's basically what the TDP rating is. It would be nice if AnandTech posted the actual wattage during each test for each CPU. Not just how many fps it got, but how much wattage it used in that test.
Especially for games. I keep reading how some say that in games, Intel is still better than AMD when it comes to power usage, but I don't really see much data about it.
What I'm missing from this review is a benchmark run under Intel's recommended settings. From what I've seen, people often see the 65W rating and go on to combine the i7-10700 with cheap B460/H470 motherboards and basic coolers.
The problem here is that the turbo limit is not enforced by the chip, but by the mobo. So even cheap B460/H470 boards can set that limit to be higher than Intel's recommendations if they choose to. And no one that would be buying these boards will necessarily care to dig into the BIOS and set the limits themselves.
Yes, they would. There are lots of (admittedly niche) applications where outright sustained performance is less important than bursty performance in a limited thermal envelope, either due to space or ventilation issues. HTPCs, home servers, small industrial applications, etc.
So yeah, I agree with the OP; I would have liked to have seen performance numbers at the "suggested" 65W PL1.
I totally agree with your comment, but what you ask for is a different article. Performance numbers in a strictly power limited environment - from Intel and AMD both (although Intel will be unfairly penalized by being three or so lithography generations behind).
The product you test is the product they have on sale - that's not unfair in the context of a test designed to represent a specific real-world requirement.
I totally agree: a 5600X and a 10700 at their 65W TDP, and at their maximum performance, to give a baseline of what performance level is WARRANTED by their makers.
Setting the limits in the BIOS is very nice and all, but without the voltage regulation and thermal capacity they cannot sustain this performance for very long, regardless of the numbers set.
I very much doubt that on the cheap $60 four-phase VRM boards the manufacturers set the limits very high; they would get fried boards within the warranty period, and we know very well they can't have that.
That would be nice to see. Perhaps an article showing which of a representative selection of processors provide the best performance at a given set of fairly common power levels (65W / 95W / 125W).
Something for when Dr Cutress finds himself with infinite time and no impending deadlines!
So happy I waited patiently and got a Ryzen 5600x for my small form factor system. The fact it can hang with the i7's and only consumes 1/3 the peak power draw is great for heat output and playing nicely with SFX PSUs.
Yeah, if Alder Lake doesn't get things under control later this year, my next ITX build will also be Ryzen. There's just no sense in using a CPU drawing 200+ watts in SFF when cooling the current crop of GPUs is hard enough as it is!
In SFF I suspect you'll be operating well below 215W. If not because the mobo can't supply the power, but because your small form factor cooler can't handle the heat and you're limited to short turbo periods due to thermal throttling.
You should well be able to run the 215W mode if you're daring. Some folks over on r/sffpc use the big CPUs, because the mITX boards have power delivery similar to the ATX boards and do support them. The problem is they report idle temps around 50C and gaming temps around 70C.
Personally, I'm just not willing to run those temps. My current ITX build is using an older 95W CPU (one that actually drew 95W at turbo) and idles at 35C and games around 50C. A new CPU with an idle temp that's the same as my current load temp is just mind-boggling.
You do realize that 50 and 70c are not anywhere close to being worried about anything? You said personally you aren't willing to run those temps. Have you even looked into what modern processors are capable of? Have you ever used a laptop, which will hit that temp just by opening the lid?
I remember one processor (or maybe GPU) had temperature limits (internal temperature, as measured on-die) of a bit over 100 Celsius. It might have been an NVidia chip, however I remember Intel coming close to that. Compared to that, 50 Celsius at idle and 70 at load is positively arctic ;)
I too just picked up a 5600X for the list price of $299 to build a NAS. Paired it with a $120 B550 board, some extra RAM, and a video card I had collecting dust. Wish I would have researched the case more though. Ironically, it's considerably faster than my desktop with its pokey old i7-6800K. I'll wait until the 5950X becomes more readily available for an upgrade.
Using a 5600X for a NAS, I assume it is for home and not enterprise, is total overkill. You would be fine using an Athlon 200G in a home NAS and would never notice the difference.
Thanks for the review, as it basically shows what other reviews already show, namely if you set aggressive PL1 and PL2 values across K & non K SKUs then you'll get similar performance.
I am curious why you said the performance is much lower with a 65W power limit and then didn't include those results.
I feel like it is common knowledge, especially with 10th Gen Intel CPUs that you need to manually configure PL1 and PL2 in keeping with your cooling solution, but perhaps not.
Intel has no incentive to change their policy and label their products with the actual power draw they'll be using because they'll show how much more they suck up compared to AMD. People are constantly looking for metrics to compare the 2 "teams", and Intel getting to keep the labels of 65w and 125w lets the fans say "see it has the same power usage as AMD!"
From my observations of Zen 2 at stock operation, 65W TDP models tended to sit continuously at the 88W PPT limit under most all-core load conditions. Has this changed with Zen 3? Do they not hit 88W so easily, or is another (current) limiter taking over? Or is the limit a different value now?
Presumably, Zen 3 would operate under the same 'constraints'. The constraints are as follows:
• Package Power Tracking (PPT): The power threshold that is allowed to be delivered to the socket. This is 88W for 65W TDP processors, and 142W for 105W TDP processors.
• Thermal Design Current (TDC): The maximum amount of current delivered by the motherboard's voltage regulators when under thermally constrained scenarios (high temperatures). This is 60A for 65W TDP processors, and 95A for 105W TDP processors.
• Electrical Design Current (EDC): This is the maximum amount of current that can be delivered by the motherboard's voltage regulators in any instantaneous short period of time. This is 90A for 65W TDP processors, and 140A for 105W TDP processors.
"Looking at the total power consumption of the new 3700X, the chip is very much seemingly hitting and maintaining the 88W PPT limitations of the default settings, and we're measuring 90W peak consumption across the package."
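Side note: the PPT figures quoted above work out to roughly 1.35x the TDP for both classes. A quick sketch checking that (the limit values are copied from the list above; the 1.35 multiplier is an observed ratio, not an official AMD formula):

```python
# AMD socket limits quoted above, keyed by TDP class (watts).
AMD_LIMITS = {
    65:  {"PPT_W": 88,  "TDC_A": 60, "EDC_A": 90},
    105: {"PPT_W": 142, "TDC_A": 95, "EDC_A": 140},
}

def approx_ppt(tdp_w: float) -> int:
    """Estimate the PPT limit from TDP using the observed ~1.35x ratio."""
    return round(tdp_w * 1.35)

for tdp, limits in AMD_LIMITS.items():
    # Both TDP classes match the 1.35x rule of thumb.
    assert approx_ppt(tdp) == limits["PPT_W"]
    print(f"{tdp}W TDP -> PPT {limits['PPT_W']}W")
```

So under all-core load a "65W" Ryzen sitting at 88W is behaving exactly as documented.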
The motherboard has power limits - both in instant maximum current from the voltage regulation phase (remember, the mainboard receives 3.3 Volts, 5V, 12V and maybe -5V and -12V from the PSU and has to convert that to processor voltage), and in cooling capacity for the VRM (Voltage Regulation Module). Regardless of the power limits, the processor will slow down if its internal temperature is too great. So yes, the "my mainboard's power delivery module cannot deliver more than 80 amps" is a possible reason. Another would be "My case has bad cooling and I want to keep the processor colder". Another would be "As soon as the sustained power goes over 140 watts, the fans in the case start whirring and I hate the sound".
>This does come with a reasonably good default cooler.
No. Just no. The Ryzen coolers are utter trash and you're doing a disservice to your readers who may not have ever had a quiet cooler to say otherwise. I build PCs and I've had several Ryzens go through and I have never seen one where I would call the acoustics livable. My first sale was a 1700 with the stock cooler since I didn't have any other AM4 compatible ones at the time and I still feel bad about selling it that way. It was just terrible. The 212 EVO seems to be within its thermal envelope for quiet cooling up to a stock 3700X, so I'd highly recommend one of those over the stock cooler. Going above the ~85W of a 3700X you should spring for a Fuma 2.
A stock 3700X has a total package power of 88W and the 212 EVO is a 150W TDP cooler. Whereas the included Wraith Prism cooler with the 3700X is a 125W TDP cooler. One would expect that the larger capacity cooler with the larger fan would be quieter.
ΔT = P × R, where ΔT is the temperature rise (K), P is power (W), and R is thermal resistance (K/W).
Unless the temperature rise is known the only thing "150W cooler" tells you is that the heat pipes won't dry out at 150W with reasonable ambient temperature. (That's a thing that can happen. It's not permanent damage, but it does mean R gets a lot bigger.)
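To put numbers on that: steady-state die temperature follows delta-T = P x R, so a cooler's wattage rating alone doesn't tell you how hot the chip runs. The R values below are illustrative assumptions, not measured figures for any specific cooler:

```python
def die_temp_c(ambient_c: float, power_w: float, r_k_per_w: float) -> float:
    """Steady-state die temperature estimate from delta-T = P * R."""
    return ambient_c + power_w * r_k_per_w

# Two hypothetical coolers that might both be marketed as "150W" units,
# cooling an 88W PPT chip in a 25C room:
print(die_temp_c(25, 88, 0.30))  # low thermal resistance: ~51 C
print(die_temp_c(25, 88, 0.55))  # high thermal resistance: ~73 C
```

Same wattage rating, over 20 degrees apart, which is exactly why the rating alone tells you so little.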
The fact is the Wraith Prism is the same 92mm downdraft cooler AMD has been shipping with their CPUs since the Phenom II 965.
The Wraith Prisms are fine - the one that comes with the low-end ryzens (and I think now the 5600) aren't so great for noise, but they do let the CPU come within 95% of its peak performance, so not bad for a freebie.
I am not seeing the point of this article. 65W is an option, and then you blatantly ignore the actual stated TDP and produce a test? For the comparison to be fair, all the chips should be limited to their actual stated power and then run through the benchmarks. It's like saying we are testing CPUs at 125W and then including the LN2 FX AMD chip to see how much power you can actually run through it. Running these chips like this constantly will degrade them and eat up a considerable amount of power that you don't need to use. Then again, I shouldn't be surprised: yet again there are 12 articles on the front page regarding Intel and 3 regarding AMD. Guess Intel's media budget is bigger, hmm.
Their Turbo is literally built around it. It will lower clocks as the chip degrades. The degradation is all over Reddit. I'm surprised no tech site has followed up on the scandal.
I just spent a bit of time on Google and the majority of the results are people saying "I heard this, is it true?" - the rest are people talking about how they ran their chip way outside spec (significant overvoltage, overclock *and* high temperatures) and can no longer get the same overclock out of it.
It took me less than 15 minutes to confirm that this is a lie.
Incidentally, the only CPUs I've ever had "degradation" problems with were all Sandy Bridge - 2 i3s, one i5 and one i7. Only one of them was ever overclocked. They started to show strange issues after 3-5 years - stuff like frame-rate inconsistency in games, graphics artefacts, random crashes.
I've never gone around slamming Intel, though, because sometimes you just get a bad chip. It happens.
The Ryzen 5 5600X at $299 is a lie right now and has been for months. It's slowly coming down to $399 with general availability. It will be months before it's actually available at $299.
Please no one respond with stories of one-off deals that they happened to get from some rare and hard to find vendor, where the deal was only available for 10 minutes anyway.
The simple fact is that no Zen 3 processors have had general availability at anywhere near MSRP for months.
Micro Center is not "general availability" given that it's only accessible to a few million people who happen to be within driving distance of one of their stores so, you fail.
Well... one stockist (OCUK) in the UK has had the 5600X at £279 for at least the past 24 hours, whereas the average on Google seems to be about £310 to £320. Your mileage may obviously vary, I suppose.
The US MSRP is $299 which is 218 British Pounds. So the numbers you quote indicate a significant reseller mark-up which makes my point. So thanks for agreeing with me.
I wouldn't mind paying AMD a fair price ($399 apparently) for a 5600X, but I will NOT give $100 or more to scalpers. AMD will use my money to make me more of what I want (faster chips). Scalpers will use my money to just scalp me harder in the future. I will never buy a scalped product.
And so I continue to wait and wait to build a 5600X/RTX3080 gaming PC ... been waiting for months now ...
I think the UK price includes about 20% Value Added Tax, which is paid directly at the moment of sale. If I remember correctly, US prices do not include "state tax" and the like.
Exactly. Aside of any delivery costs from retailer to customer, we pay what's written on the price tag. That $299 isn't looking so cheap now. Funnily, the current conversion from dollars to pounds means a near 1:1 for comparison.
Everything is more expensive in the UK. Our MSRP is different to yours because of import duties. MSRP is still MSRP just that our MSRP is different from your MSRP.
We would not be expecting to be paying US pricing just as we wouldn't expect to pay the going rate in Australia, that is the case for every saleable item.
Then I would appreciate if the O.P. would indicate both what the MSRP is in his country as well as the price that he is quoting availability at so that all the details are known. Since he didn't say the MSRP was any different over there, I just assumed it was the same. It helps to seed the discussion with relevant information at the outset so that we don't have to devolve into useless bickering over unavailable data. I agree that I could have immediately asked what the MSRP was there instead of just assuming it, so that's on me, but even better would have been me not even having to ask.
It is very difficult to get these parts at MSRP in the USA. I think the safer assumption is that it is also difficult to get these parts at MSRP elsewhere.
And yet Anandtech will continue to show the USA MSRP in their CPU comparisons as if that is the realistically available price for the part, which is exactly the incorrect information I was trying to rail against when I posted my original comment that started this whole discussion.
But it's not incorrect information. The only reason hardware isn't anywhere near MSRP is that more people want the hardware than there are products available. Not to mention that MSRP is, for all intents and purposes, constant versus what the prices are in stores.
MSRP includes a mark-up for the retailer to already make the expected profit. A small additional profit is fine; but in the USA what you have is 'scalpers' buying up parts and then trying to resell them for egregious profits. Like a 33% mark-up is the minimum, and until recently 75% - 100% markups were the norm for the Ryzen 5 5600X.
I also bought a 5800X at microcenter for MSRP in early December.
The "bring up to counter to pick up" sheet they gave me showed they got 75 in on the shipment as well, so it wasn't like it was the one chip they got and I got lucky either...
Oh my god, how many times am I going to have to explain to posters on AnandTech that Micro Center is NOT general availability? They are limited to a few million people who happen to live within driving distance of one of their stores. I wish there were some way to put a disclaimer about Micro Center in my posts without just inviting further debate. I mean, the WHOLE REASON I wrote "general availability" in my comment and added the note about "hard to find vendor" was to try to head off the Micro Center comments, but apparently people who shop at Micro Center cannot fathom the idea that 95% of people in the USA do not have access to a Micro Center.
To clarify: 95% was an exaggeration. Micro Center has like 20 or 30 stores and is accessible by more than 5% of the USA population. But it's probably not more than 20%. Anyway any store that is not available to the majority of people cannot be called "general availability".
I mean seriously. If Micro Center were that easy to get through, would there even be a shortage of these chips at online vendors? Everyone would already be satisfied by the Micro Center supply. But a) the fact that there is online shortage means very clearly that most people can't get one from Micro Center (otherwise they already would have and there would not be zero supply online), and b) if a significant fraction of the population could actually get them from Micro Center, Micro Center would clearly already be sold out also.
I'd suspect it's probably well more than 50% of the US population that is within an hour's drive of a Micro Center. The names of the cities listed on their site aren't all easily recognizable, but the metro areas are almost all the large ones.
Atlanta Baltimore Boston Chicago Cincinnati Cleveland Columbus DC Detroit Denver DFW Houston Kansas City LA Minneapolis Newark NYC Philadelphia St Louis
They were actually available today at Amazon for a very long time at $299. Of course it's back up again after mass ordering at the time I'm posting this.
Nephew bought a 5600x for MSRP here in the UK from Currys which is by far the largest national chain of electronics and computer equipment.
You could try signing up for an alerts service, providing you have the funds ready to put down you could likely have one in the next week or so. Not as ideal as next day from any major retailer but the chips are coming in stock.
Your original post said that none of the processors have had general availability and that it would be months before prices hit MSRP. People have pointed out a few times that in various parts of the world, availability is there and prices are at (or very near) MSRP, yet you've still found reasons not to admit that maybe you overstated things a bit.
Well I guess it's a difference of opinion about what "general availability" means. If most people who are not within direct driving distance of a MicroCenter are having a hard time finding these chips, then I would call that not "general availability". I did admit that I overstated the low number of people who are in driving distance to a MicroCenter, but I still contend that it's a low number relative to the total number of people who want to buy these chips. We could start arguing now about who is TRULY within reasonable driving distance of a MicroCenter -- if you have to drive 1 hour to get the chip, is it really still generally available? If your time is even worth $15/hr minimum wage, that's adding a significant cost in the dollar value of your time to the cost.
But I really don't want to argue about it any more. I still think these chips are not generally available, and posting an article that compares one to another when one or both are not easy to get for the vast majority of people is disingenuous, as it presents choices that do not practically exist. If you disagree, then that's fine.
It's not a difference of opinion, it's a difference in perspective. I don't even care about MicroCenter because I'm not from the USA. These chips have "general availability" in the EU and UK at the very least, so I appreciate the comparison - as will many other people
This article will be around longer than demand continues to outstrip supply in the USA, and maybe even long enough to see $15 an hour become the minimum wage there.
Maybe they'll go the route of GPUs and just stop advertising TDPs entirely?
This whole system works because most desktop users don't seem to care about TDP or power consumption. Performance at the absolute extreme end of the frequency/voltage curve is all that seems to matter.
I appreciate the testing, showing that with most other things 'equal', there is effectively little difference between the 10700/K.
Would you consider running any tests with a 65W-class cooler to demonstrate exactly how performance is damaged by only matching the TDP on the box?
Having the full-bore tests for 10700 is quite useful if you can't find the K at a decent price, but if we're talking about quoted TDP 'limits', let's try limiting to that heat dissipation and see how quickly it flops.
I built a few Haswell systems back in the day, and whoo boy, were those stock coolers ineffective if you got any decent chip under it.
Eh? The stock cooler on the 4770 was fine. They only pulled 50W. I have no idea what you mean by "any decent chip under it" since there was only the 4770 and 4790 at the top and their wattage was about the same. (my 4790 was undervolted a tad and was 50W)
Running prime95 on a stock cooler with a 4770 sent temperatures skyrocketing in my experience (80+C). I might have had too-high standards for cooling, since I always recommended at least a 212 EVO or better when it would fit into the cases we used. My employer used cheap cases that didn't come with an exhaust fan by default - installing 1+ case fans was always a recommendation for anything generating a lot of heat if I could convince the customer how much it was needed.
Sounds like our use cases might be different? Or, you might have superior case cooling. All I know is we always had more tolerable temps once dumping the included cooler for something more suited to very heavy work.
You keep talking about how the TDP means nothing in the context of peak power draw, but it has nothing to do with peak power draw. That is only restricted by PL2, which is probably well over 200W for that chip, and like you say, motherboards can come up with their own PL2 values, especially if you slot this into the same high-end Z490 board you bought for your K CPU. Different story if you put it in a low-end B460 board.
Here's what "65W TDP" means: If you pair this CPU with a 65W cooler, it will work.
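A rough sketch of what those two limits mean in practice. The PL2 and tau values here are assumptions for a hypothetical "65W" part (Intel's actual mechanism uses an exponentially weighted moving average over the tau window, not a hard cutover, and motherboards routinely override all three values):

```python
def allowed_package_power(t_s: float, pl1_w: float = 65.0,
                          pl2_w: float = 224.0, tau_s: float = 28.0) -> float:
    """Power the chip may draw t_s seconds into a sustained all-core load.

    Simplified model: full PL2 turbo until the tau window expires,
    then the sustained PL1 limit.
    """
    return pl2_w if t_s < tau_s else pl1_w

print(allowed_package_power(5.0))   # inside the turbo window: PL2
print(allowed_package_power(60.0))  # turbo budget spent: back to PL1
```

With this model, a "65W" chip legitimately spends its first half-minute of load well over 200W; the marketing number only describes the steady state after that.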
Yes, it is sad that even well respected PhDs in the field can't seem to understand that TDP is not total consumed power. Never has been, never will be. TDP is simply the minimum power to design your cooling system around.
I actually think that Intel went in the right direction with Tiger Lake. It will do everyone a service to drop any mention of TDP solely into the fine print of tech documents because so many people misunderstand it.
Yes, TSMC has a fantastic node right now with lower power that AMD is making good use of. Yes, that makes Intel look bad. Lets clearly state that fact and move on.
Power usage matters for mobile (battery life), servers (cooling requirements and energy costs), and the mining fad (profits). Power usage does not matter to most desktop users.
Also don't forget that we are talking about 12 seconds or 28 seconds of more power, then it drops back down unless the motherboard manufacturer overrides it. The costs to desktop users for those few seconds is fractions of a penny.
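The "fractions of a penny" claim checks out on the back of an envelope. Assuming an extra ~150W above PL1 for a 28-second turbo window and an electricity price of $0.13/kWh (both figures are assumptions for illustration):

```python
extra_watts = 150.0    # assumed draw above PL1 during the turbo burst
turbo_seconds = 28.0   # tau window from the comment above
price_per_kwh = 0.13   # assumed electricity price, USD

# Energy of one turbo burst, converted W*s -> kWh.
extra_kwh = extra_watts * turbo_seconds / 3600.0 / 1000.0
cost_usd = extra_kwh * price_per_kwh
print(f"{cost_usd:.6f} USD per turbo burst")  # on the order of 0.015 cents
```

You would need thousands of full turbo bursts a day before the extra power draw showed up on a bill.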
Bji, no, that is not how engineering works. You need to know the failure limit on the minimum side. If your cooling system cannot consistently cool at least 65W, then your product will fail to meet specifications. That is a very important number for a system designer. Make a 60W cooling system around the 10700 chip and you'll have a disaster.
You can always cool more than 65W and have more and/or faster turbos. There is no upper limit to how much cooling capability you can use. A 65W cooler will work, a 125W cooler will work, a 5000W cooler will work. All you get with better cooling is more turbo, more often. That is a selling point, but that is it - a selling point. It is the 65W number that is the critical design requirement to avoid failures.
Minor correction on " Never has been, never will be": TDP and peak package power draw WERE synonymous once, for consumer CPUs, back when a CPU just ran at a single fixed frequency all the time. It's not been true for a very long time, but now persists as a 'widely believed fact'. Something being true only in very specific scenarios but being applied generally out of ignorance is pretty common in the 'enthusiast' world: RAM heatsinks (if you're not running DDR2 FBDIMMs they're purely decorative), m.2 heatsinks (cooling the NAND dies is actively harmful, cooling the controller was only necessary for a single model of OEM-only brown-box Samsung drives because nobody had the tool to tell the controller to not run at max power all the time), hugely oversized WC radiators (from the days when rad area was calculated assuming repurposed low-density-high-flow car AC radiators, not current high-density-low-flow radiators), etc. Even now "more cores = more better" in the consumer market, despite very few consumer-facing workloads spanning more than a handful of threads (and rarely maxing out more than a single core).
What's really sad is that you apparently prefer to write a long comment trying to dunk on the author, rather than read the article he wrote for you to enjoy *for free*.
"I actually think that Intel went in the right direction with Tiger Lake" You think poorly.
"Yes, TSMC has a fantastic node right now with lower power that AMD is making good use of. Yes, that makes Intel look bad. Lets clearly state that fact and move on." Aaaand there's the motivation for the sour grapes.
Spunjji, I must assume since you didn't have anything to actually refute what I said, that you have nothing to refute it and instead choose to bash the messenger. Thanks for backing me up!
Assuming makes an ass out of, well, just you in this case.
You replied to - and agreed with - a comment from someone who clearly didn't properly read the article, because their complaints were addressed within the article.
I can't "refute" your personal opinions because they were just that; opinions. I can - and did - freely imply that I think they're bad opinions and are based around motivated reasoning. If you think that backs you up, you're even less logical than this "debate me" shtick implies.
One of the things I wonder: given that ASRock's Turbo Unlock only allows 125W max (Z490M-ITX/ac), is it worth the money to purchase the K version just so we can tinker with the PL1 & PL2 values? I mean, compared to a 6-core which can make do with 125W, an 8-core should need around 160W. The motherboard used here might allow loads bigger than 125W, but what happens with boards that do limit them to only 125W?
-> "Also when it comes to benchmarking, because if we were to take an extreme view of everything, then benchmarking is pointless and I'm out of a job."
A very good article, but the above sentence is syntactic gibberish. There are several other grammatical errors/typos that an editor -- or even good editing software -- should have caught.
At any point, did anyone perhaps run HWmonitor to see what clock speeds were being achieved on the 10700? If only 100 MHz less than the 10700K, clearly we are NOT running in a true 65W TDP envelope...
Many folks still referred to the R7-1700X as a 65 watt CPU when running it at 3.9 GHz with it clearly drawing 140-165W from the wall at load... same as the 1800X. (I'm sure criticisms flew equally at Team Red back then, right?) :)
Nice FUD / JAQing off combo you're doing there!
The 1700 had a 65W TDP, and it drew ~45W in games and ~82W running Prime 95. The 1700X and 1800X had a 95W TDP and drew ~105W and ~113W in Prime 95, respectively
Those chips were pretty roundly criticised for their low power efficiency vs. Intel in lightly threaded workloads at the time.
This might be a too simple-minded approach, but even the most overdone power regulation will run into the declared thermal limit if the heatsink that is used is spec'd according to the declared TDP. So, if you gave a "65W CPU" a heatsink that can only dissipate 65W sustained, the CPU would run into its thermal shutdown temperature. That would unmask mislabeled CPUs quickly. Just a suggestion.
CPU will not shutdown. It will only throttle. As it is designed to do. As every laptop CPU does.
And performance will be much better than 65/200, because power vs. frequency grows much faster than linear, and very few workloads run anywhere near max power anyway.
I like making fun of Intel's poor efficiency as much as the next guy, but this is much ado about nothing.
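A back-of-envelope version of the "much better than 65/200" point: dynamic power scales roughly with frequency times voltage squared, and voltage roughly tracks frequency, so P proportional to f cubed is a common rule of thumb (an approximation, not a law, and it ignores static power):

```python
def freq_fraction(power_fraction: float, exponent: float = 3.0) -> float:
    """Fraction of peak frequency retained at a given fraction of peak
    power, assuming P ~ f**exponent (cube law as a rough approximation)."""
    return power_fraction ** (1.0 / exponent)

# Capping a ~200W chip to 65W is a 0.325 power fraction...
f = freq_fraction(65.0 / 200.0)
print(f"~{f:.0%} of peak frequency at 32.5% of peak power")
```

Under that assumption you keep roughly two-thirds of your clock speed at a third of the power, which is why power-limited results are far less grim than the raw wattage ratio suggests.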
I actually meant throttling; the point is that keeping the heat budget to the one stated will show just how fast a CPU can go if it's not allowed to pull way above its stated power envelope. Giving a CPU and MoBo a fancy cooling solution that can dissipate over 3x the rated power invites "cheating". To use an automotive analogy: the rated horsepower of a gasoline engine is for commercially available gas with all the required accessories connected, not with open headers, disconnected pumps and boosted with nitrous.
That's why I steer clear of Intel chips nowadays. The TDP and actual power consumption are worlds apart. People can argue that TDP = base clock speed, and that is true. But when Intel advertises that their CPU can beat the competition, they fail to mention that achieving this feat actually takes 200W or more, not the advertised TDP. If I were AMD, I would use power metrics to put Intel to shame in my marketing material. The same sort of power consumption will likely stick around with Rocket Lake.
It's actually likely to get slightly worse - larger, more complex cores running at similar clock speeds on the same lithography. We're likely to see as high as 65W+ for single core loads under turbo, and could get as high as 250W for all 8 on the 11900K.
Long story short, Intel just put 65W there without regard to anything except that they want this chip to compete squarely with the 5600X. Amazing to find Intel in this position vs AMD. I don't remember AMD being this bad in the past, even when it was two generations of process nodes behind Intel; as I recall, AMD offered slightly less performance at around twice the power of a competing Intel CPU.
"There is a large number of the tech audience that complain when DDR4-2933 memory is used" No kidding? You mostly run 2133, and can't be bothered with testing anything beyond, because of warranty? Maybe you should just review Dell and HP systems. The long video you linked basically explains nothing but opinion. Why do you even bother with a K processor or a Z motherboard? You are no longer an enthusiast site, and have made this place into a tweeting disaster that can't be bothered with any recent video card. A PhD may make you knowledgeable about a subject, but obviously not about your audience, who are the ones that pay your salary.
I kinda agree here actually, while I understand the argument for only testing at JEDEC specs, I would like to see tests at decent XMP speeds as well, at the very least, if one is not going to further tweak the RAM. Overclocking is more common I suspect these days, if one is an enthusiast. For a lot of chips, it is a huge waste not to OC.
I would agree as well that more recent GPUs should be invested in/obtained for reviews, though understandably this is not a great time to find them. On the plus side, kudos for using the MSI Meg Godlike, that is one heck of a board. I have the Ace myself.
Who's to decide which memory speeds they ought to test with, though? They can't rightly test with all of them, but what's "enough" - 3200, 3600, 4000?
The honest truth is you can get info about performance of a specific CPU with varying memory speeds elsewhere. For the few percent difference in performance it actually makes, I personally don't care whether or not they bother, and I've been coming here for well over 15 years now.
What? No. A reviewers job is to provide consistent and trackable data that holds the vendor to their claims. What you're looking for is a buyers guide.
Oh please. A review tells people how the equipment performs. Since Zen 2, for instance, is most efficient at 3600 RAM speed, running it at JEDEC speeds, a standard designed so that el-cheapo boards can support it, is not a good test. It does not tell readers how the equipment maximally performs.
Product reviews show customer experience with a product or brand. It's where customers give their opinion on what they think about your product, and this can be positive or negative. Positive reviews show the customer was satisfied with the quality of the product, while negative reviews indicate bad experiences.
Buying guides, on the other hand, help customers identify the right product to buy. A buyer's guide includes a product's specifications, how it compares with other similar products, the benefits, and such. Buying guides provide readers with the necessary information needed to make informed purchasing decisions.
What if I don't agree about the location of the "sweet spot" based on RAM prices local to me, or my own performance needs? Your response isn't the best solution to the problem, it's just a different answer.
My preference is for the reviewer to test a CPU at-or-near stock settings, then publish separate articles on things like the returns on faster RAM speeds/timings and overclocking. The minor performance difference from faster RAM really isn't enough to invalidate the review's conclusions.
It's not at all an attempt to change the subject. The point is that the issue of an "optimal" RAM speed is a moving target dependent on multiple variables.
As a part of his audience, I am actually satisfied with this review, as I am looking for the out of the box experience. I'm most likely not to tamper with the manufacturer's recommendations. So he tested the CPUs following both Intel and AMD recommendations. Something I don't see a problem with.
If it shouldn't be the case, shouldn't consumers take it up with Intel, and not with the reviewer? Why does Intel specify a lower maximum than current JEDEC standards allow (e.g., Intel at 2933 while AMD is at 3200)? I ask: if Intel was so confident about their products, why not up the official support?
This is really the crux of it. Intel wants to make it part of the difference between their Z boards and their non-Z boards, which adds more cost for the consumers, while at the same time washing themselves from this responsibility should something not work (you cannot RMA based on non-JEDEC compatibility). It is also fascinating that consumers think this is on the reviewers.
Honestly, this "everything must be treated exactly the same otherwise nothing is fair" rhetoric is ghastly and corrosive in whatever domain it's applied.
"That means no reviewing boards that violate the official base clock and turbo behavior." Considering that Intel doesn't make board makers conform to anything as far as what "default" would be, good luck with this.
Please go ahead and enforce both Intel and AMD to rate their memory controllers faster, then. It's not opinion, it's the literal standard, and the only way to ensure consistency for comparisons across generations. I'm not going to offer one CPU a higher DRAM overclock than another, that isn't fair, just in the same way I'm not going to overclock the cores. I regularly dive into AT's audience metrics, and have done for years. If you want data that's different, then please by all means either do your own testing or find other reviews. But you know, also get them to deep dive into microarchitecture as well as get all the behind-the-curtain info. Also, all our content is free at the point of use. By your logic, I'm also a consumer of content at AT, thus I also pay my salary. Please enjoy.
This site posted articles about overclocking that were done wildly, without true stability testing and with reckless amounts of voltage and you're going to now pretend that turning on XMP for RAM is some kind of terrible reckless matter?
The thing is... if you wish to take a stand about JEDEC and company standards that's fine. Just don't post a lot of nonsensical reasons for it, like "Most users don't know how to plug in a computer so we're going to skip the plug for this review".
Personally, here is all I'd say on the subject, were I to be taking your stand:
'We use JEDEC standards for RAM speed because those are what AMD, Intel, and other CPU makers use to rate their chips. Anything beyond JEDEC is overclocking and is therefore running out of spec.
Although motherboard makers frequently choose to run CPUs out of spec, such as by boosting to the turbo speed and keeping it there indefinitely, and by including XMP profiles for RAM with lists of 'compatible' RAM running beyond JEDEC, it is our belief that the best place for a CPU's maximum supported RAM speed spec to come from is the CPU's creator.
If anyone is unhappy about this standard we suggest lobbying the CPU makers to be more aggressive about officially supporting faster RAM speeds, such as by formally adopting XMP as a spec that is considered to be within spec for a CPU.
To complement the goal of our JEDEC stance, we are going to create reviews only using motherboards that fully comply with the turbo spec of vendors and/or disable all attempts by board makers to game that spec. If a board cannot be brought into full compliance we will refuse to post a review of it and any mention of it, with the possible exception of a list of boards that run out of spec and are non-compliant.'
Oxford Guy, you seem to be quite unhappy about this review and, judging by other posts, the site as well. So if this site is so bad, WHY do you keep coming here?
1) That wasn't an ad-hominem - if you're going to do the "master debater" thing, at least learn to distinguish between commentary on the person and their argument.
2) Re: "extreme vagueness" - that was my personal opinion stated as a colloquialism. I don't owe anyone an annotated list of every comment you made, metric measurements of precisely how far they went, an objectively-defined scale of how far is too far, and a peer-reviewed thesis on the precise moment at which you exceeded that point.
To answer your question: This site is not bad. This site is good because people are able to give their honest opinions instead of living in a disgusting echo chamber like on Ars Technica or Slashdot.
Why would I go there? This site is top notch when it comes to reviews and comp hardware news. "This site is good because people are able to give their honest opinions" Yes, but sometimes some go too far with the whining and complaining :-)
Would be good if we could have some temperatures to compare as well. I used to buy the mid-end non-K Intel CPUs since I don't overclock but I always ended up with temperatures about 10 degrees higher than what most people report. With my latest build (ok, not that new now that it's actually a i5-6600k), I went for the K variant and temperatures are much better and in line with what most users report.
You can infer temperature from wattage more accurately than via a temperature measurement, because that measurement depends on the configuration of the test system (cooler type, fan speeds, case airflow).
That's not true, because it's not proportional to power draw. A 10700K uses 200+W and runs at around 70C while a 5600X uses much less power running at around the same temps. Power draw is not a good indicator, and yes, it comes down to your setup. Intel is just not as efficient, but that doesn't make it a hot chip.
"A 10700k uses 200+W and runs at around 70C" Again, with what cooler and fan speeds? Even accounting for the different die sizes, the only way this comparison can really be true is if there isn't an equal amount of cooling between the two processors. As OxfordGuy said, that heat has to go *somewhere*; for the temperatures to be the same between different heat loads *something* must be causing more heat to be dissipated.
That's a point I had forgotten, and a fair one - but temps in a review still won't tell an end-user much about the temps they'll get, especially as variability can be quite high depending on the voltage an individual CPU requires to operate at its various speeds.
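For anyone wanting to reason about this sub-thread numerically, a steady-state thermal model makes the point: reported temperature depends on the cooler's effective thermal resistance as much as on package power. A minimal sketch, with illustrative resistance values that are assumptions, not measurements of any specific cooler:

```python
# Toy steady-state model: die temperature is roughly ambient plus power
# times the cooling path's thermal resistance (degrees C per watt).
# The R values below are illustrative assumptions, not measured figures.

def die_temp(power_w, ambient_c, r_c_per_w):
    """Steady-state temperature for a given heat load and cooling path."""
    return ambient_c + power_w * r_c_per_w

# The same 200 W load through two hypothetical coolers:
weak_cooler = die_temp(200, ambient_c=25, r_c_per_w=0.25)
big_tower   = die_temp(200, ambient_c=25, r_c_per_w=0.125)
print(weak_cooler, big_tower)  # 95.0... no: 75.0 50.0
```

Same wattage, 25 C apart, which is why a temperature chart in a review mostly characterizes the test bench rather than the chip.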
Nobody wants Intel until they ditch 14nm. I love Intel, but I'm getting their next CPU, as well as a desktop AMD 5000 w/APU. Seems they survived their diversity exercises and are back in the game, but not until 2022. Until then a few 4790Ks are still paying me.
Wow, you're really pinning Intel's faults on diversity? Meanwhile you are ignoring that AMD's success is led by an Asian American woman? You really need to check your bigotry at the door.
Yes. There is an issue with power consumption. And that is a lead into the real story. Intel has been at 14nm for 3 years now. Historically that time frame is unheard of. Some may say the complexity of the Intel CPU die is partly to blame. Some may say it is no wonder that Apple went to M1. Everyone will say Intel has dropped the ball.
Wow, here's a shock. Modern games get very little difference from CPUs as they are all GPU bound. And a good high end GPU is going to burn far more coal than a CPU ever will.
As a gamer, WTF do I care about CPU power usage for? When I run out of coal there is still lots of gasoline :-)
Is it fair to say that the 10700 is on par (at best) or slower (in most multi-threaded scenarios) than the Ryzen 5600X, despite using roughly 2X the power?
I just measured my "65W" i7-10700 non-K while stress testing it, and it eats 165 W at the wall plug. 64GB RAM, good quality Corsair 450W PSU.
Then I compared it to my "65W" Ryzen 3700X, 32GB RAM = 157 W. That one has an expensive fanless Seasonic 500W PSU with nominally better efficiency at these power draw levels.
So the difference is 10W and may as well be attributed to PSU quality, RAM consumption and whatnot.
If you are going to make wild speculations whose veracity anyone can check, you might want to go over your material a bit better.
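One concrete confounder in wall-plug comparisons like the one above is PSU efficiency: wall power is DC power divided by efficiency, so two systems with identical internal draw can read differently at the plug. A quick sketch using assumed efficiency figures, not the actual ratings of the Corsair or Seasonic units mentioned:

```python
# Wall reading = DC draw / PSU efficiency. The efficiency numbers here
# are assumptions for illustration (roughly Gold-tier vs Titanium-tier
# at mid load), not the measured curves of any particular unit.

def wall_watts(dc_watts, efficiency):
    """Power drawn at the outlet for a given internal DC load."""
    return dc_watts / efficiency

same_load = 140  # identical hypothetical internal draw, in watts
print(round(wall_watts(same_load, 0.87), 1))  # 160.9 at the wall
print(round(wall_watts(same_load, 0.92), 1))  # 152.2 with a better PSU
```

Nearly a 9 W spread from the PSU alone, which is on the order of the 10 W difference being attributed to the CPUs here.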
The i7-10700, in this article, pulls 197w to 214w. Ooops.
Psssst ___ By the way, my local MicroCenter (Duluth) offers the AMD Ryzen 3700X at $299 after $30 off, and the i7-10700 for $280 after $120 off. My-my-my, how the mighty has fallen . . .
Plausible explanations for the discrepancy, in order of likelihood: 1) The unspecified stress test you're using isn't actually stressing the 10700 very heavily. 2) You're not measuring like-for-like in some other way - be it components or configuration. 3) Your wattmeter is poorly calibrated (This level would be a reach). 4) You're simply not being honest (I don't like to assume this, but you seem aggressive about people questioning your implausible conclusions).
Implausible explanations: 1) Every review on the internet performed with calibrated equipment, specified configurations and specified software loads is somehow wrong and you are right.
I'll go as far as requiring/requesting/asking for their MB model (an exact model number and manufacturer thereof). Without that one key piece of information, I have concluded the following: Using a Z490 or other relatively high end LGA 1200 MB indicates that the i7-10700 will run at or significantly above 200W in continuous 24/7 operation.
Remember this user claims to be using a 450W PSU, so very likely not a Z490 MB, so indicative of a rather low end system (e. g. no medium to high end GPU, not that that matters as these are essentially CPU tests unless stated otherwise in this review).
I believe their power number but I don't believe that they are testing on a medium to high end LGA 1200 MB. In other words it is all about the MB default settings for PL1, PL2 and Tau and not the CPU itself.
Noteworthy points: It is an i7-10700F, on a Gigabyte B460M, populated with 4x16 GB DDR4-3000, with an ancient Quadro K2000 and an NVMe SSD. Hyperthreading is disabled. I use it for running FEA simulations beside my Ryzen workstation, and it performs like a champ. The cheap & old wattmeter hovers around 157 W or so during simulations. 100% CPU load.
So I take it that if I got the Z490, the CPU would draw 60W more. Would it go faster?
Annnnd... that's it. The rest I find are all "compare" sites listing numbers culled from manufacturer sites. And here comes Anandtech and tells me that my eyes are deceiving me and that my CPU is actually pulling twice as much as I am observing. The explanation of which would be that better mobos have a power setting that allows it to draw much more than default, with no obvious benefits? I don't get it.
Well, now you are almost there. Wherever "there" is, that is.
Power (watts) * time (seconds) = energy (joules) used; in more familiar billing units, kW * hours = kWh.
Power (W) versus frequency (Hz) is highly nonlinear (concave up, and more so the closer you get to the redline). Your cooling solution can only dissipate so much power per unit time in 24/7 continuous operation at a low enough core temperature.
This is all really basic stuff.
So, it will take longer to complete a fixed task at 125W than that same fixed task at 250W (all other things being equal), wherein the first task is running at 4GHz and the second task is running at 5GHz. These are only example numbers, btw.
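The fixed-task point can be made concrete. A minimal sketch using the example numbers above (125 W at 4 GHz versus 250 W at 5 GHz), under the simplifying assumption that runtime scales inversely with frequency, which is itself an idealization:

```python
# Fixed-task energy: running slower at lower power can cost *less* total
# energy even though it takes longer. The 600 s baseline runtime is an
# arbitrary assumption; the power/frequency pairs are the examples above.

def task_energy_wh(power_w, baseline_seconds, baseline_ghz, run_ghz):
    """Energy (Wh) to finish a fixed task, assuming runtime ~ 1/frequency."""
    seconds = baseline_seconds * (baseline_ghz / run_ghz)
    return power_w * seconds / 3600  # joules per hour -> watt-hours

fast = task_energy_wh(250, baseline_seconds=600, baseline_ghz=5.0, run_ghz=5.0)
slow = task_energy_wh(125, baseline_seconds=600, baseline_ghz=5.0, run_ghz=4.0)
print(round(fast, 2), round(slow, 2))  # 41.67 vs 26.04 Wh
```

Half the power but only 25% more runtime, so the slower run finishes the same work on roughly 60% of the energy. This is exactly why fixed-task benchmarks and fixed-time power charts tell different stories.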
That TechPowerUp review has plenty of fixed task benchmarks (on the other pages) wherein the total time (in seconds) is given. You might want to check out those pages also.
They use four settings on a Z490 MB. The one that is closest to the out-of-box MB tests mentioned here is their "The third data point (blue bar) sees us relaxing the power limits to enable the maximum turbo frequency available for this processor." or what those bar charts are labeled as "Core i7-10700 Max Turbo" ...
It is a real shame that more sites don't do thorough enough reviews. So, for example, on this review on the 2nd page ... https://images.anandtech.com/doci/16343/10700KInte... That is a fixed time test and not a fixed task test. That should have been explained in this review.
Maybe this site will do better next time, by using a low end out-of-the-box MB in addition to their high end out-of-the-box Z490 MB. Report frequency, power, energy and time for all tests/tasks. Use proper recording of all these to get a more complete picture of what the heck is going on (time series and integrals thereof even).
My formal and informal (or on the job) training in doing scientific experiments goes back almost fifty years now. Not that that means anything on the internet. :/
HarkPtooie: Tom's Hardware, Gamers Nexus, RedGamingTech, Moore's Law Is Dead all pretty much say Intel uses more power than AMD. In some cases, quite a bit more.
So either you have your system set up differently and are forcing it to use the power it does, while the rest let the board run as it sees fit. As you said: "The explanation of which would be that better mobos have a power setting that allows it to draw much more than default, with no obvious benefits? I don't get it." Actually there is a benefit: when Intel's CPUs are allowed to use as much power as they can and want, performance goes up. But whatever, you believe what you want.
Yes, they do - but they do not say that the i7-10700 non-K uses twice the power of an equivalent Ryzen. That is exclusive to this article, and the explanation is that here they use "boost max all the time" BIOS settings that are not quite the nominal default for this CPU.
This is overclocking.
Personally I turn it around and think "I am impressed at the performance Intel managed to squeeze out of this CPU at this power level, considering the process node disadvantage".
I am no fanboy. I usually buy AMD because bang/buck. This time I needed AVX-2 without having to tinker with experimental settings, which is the case with AMD+ANSYS.
Ah - so the thing is that my CPU runs default as Intel intended it out of the box, whereas this review uses special motherboard settings that overdrives into a "use any power you need" zone where the max turbo runs all the time?
Okay. That would explain things.
That Intel uses more power than AMD is not surprising since there is a substantial difference between 14 nm and 7 nm. And I am well aware that they cheat the numbers to look better, but that does not change the fact that nominally my 10700 draws about as much as my 3700X, and performs more or less equally. A bit faster single-thread, a bit slower multi-thread.
What this review amounts to is "If you reach inside your system and boost the shit out of your i7, it draws much more power than Ryzen." - why not go all the way and overclock them to 6 GHz and shriek about how the Intel draws 800W while the AMD only needs 600W?
What MB are you using, and/or can you set PL1/PL2 in your BIOS settings? The article is suggesting that on higher end MBs, or some such, the PL1/PL2 settings are set to infinity or can be changed in the BIOS settings (even on a non-K CPU). PL1 is 125W so it appears that your MB has that limit.
OK, I made a mistake: the i7-10700 has a PL1 value of 65W, a PL2 of 224W, and a Tau of 28s (those appear to be nominal or default values). Still curious as to the MB and accessible BIOS settings. Also, is there any system software to see these settings (e.g. like AIDA64)? TIA
I did not set any PL. The systems are default except for the RAM speed, which is set by XMP to 3000 and 3200 MHz respectively.
Should I interpret it as "during certain settings, the i7 can be made to consume vastly more power than it does by default"? That seems contrived.
All I know is that their power consumptions as measured for the whole system are roughly on par during conditions where incidentally the i7 also outperforms the Ryzen in single-thread applications. It is not a bad CPU.
Gigabyte B460M DS3H. Pegged at 100% CPU utilization on 8 cores (HT disabled), the wall meter says 149-163 W; CoreTemp says I use about 70 W core and 8 W uncore. The CPU multiplier bounces between 43-47x, though mainly resting at 46x. Temps are 65-66°C using a humble CoolerMaster TX3 Evo.
Just upped the PL1 to 250 W in BIOS. It made no discernible difference, so I suppose it doesn't work on B460 chipsets.
Enable HT. If not then why not? The battery of tests conducted here and everywhere else have HT enabled. So far, you are still at the apples != oranges stage. It is now time for you to step up or ... :/
So: I set all the PL limits to max (4090 W) and reran. 173 W. Up 10-15 W from default.
Then I enabled HT and reran. 213 W. +40 W compared to non-HT.
So I turned off the PL tweaking and reran, with HT on. 204 W initially, then after a while it went down to ca 140 W and the multipliers reduced to about 37x.
Kind of surprised that HT made such a difference. I was under the impression that HT "cores", being a small backpack alongside the "real" core, added a tiny percentage of transistors overall. I usually disable HT because the software I run doesn't benefit from it and actually loses performance with it.
So: mystery solved and I stand corrected.
Intel is not lying when they call this a 65 W CPU. They are however obscuring the fact that it does so with REDUCED PERFORMANCE. Its default behavior is to only run at 100% for half a minute.
When allowed by BIOS tweaks, it will double the power draw but run at 100% all the time. This is overclocking in the sense that default settings are overridden, but it is not in the sense that peak speed is driven above its intended levels; it is just maintained at a higher power draw.
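The default behavior being described in this sub-thread (PL2 for roughly Tau seconds, then a drop to PL1) can be sketched as a step function. Real firmware actually tracks an exponentially weighted moving average of power, so this is a simplification; the PL1/PL2/Tau values are the i7-10700 defaults quoted earlier in the thread:

```python
# Simplified sketch of the stock power-limit behavior discussed here:
# burst at PL2 (224 W) for about Tau (28 s) of sustained load, then fall
# back to PL1 (65 W). Real firmware uses a moving average of power rather
# than a hard timer, so treat this as an approximation of the idea.

PL1_W, PL2_W, TAU_S = 65, 224, 28

def package_power_limit(seconds_under_load):
    """Allowed package power after a given stretch of continuous load."""
    return PL2_W if seconds_under_load < TAU_S else PL1_W

for t in (0, 10, 27, 28, 120):
    print(t, package_power_limit(t))

# Boards that set Tau to "infinite" effectively hold the CPU at PL2
# forever, which is the ~200 W sustained behavior the review measured.
```

This is why the same CPU looks like a 65 W part in a half-minute burst and a 200+ W part in a long render, depending entirely on the board's Tau setting.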
Aight. I'm back to non-HT and free power. 173W is not that much.
Just did a compare of performance during my simulations, and they were more or less identical to the default settings.
At the same time, though, what Asus, ASRock and MSI have done isn't really overclocking, but more allowing the CPU to use its turbo states for longer than what Intel allows.
both of those links, could explain, at least partly, HarkPtooie, why you are getting the results you have.
"Turbo Power Limits Allows you to set a power limit for CPU Turbo mode. When the CPU power consumption exceeds the specified power limit, the CPU will automatically reduce the core frequency in order to reduce the power. Auto sets the power limit according to the CPU specifications. (Default: Auto)
Package Power Limit TDP (Watts) / Package Power Limit Time Allows you to set the power limit for CPU Turbo mode and how long it takes to operate at the specified power limit. If the specified value is exceeded, the CPU will automatically reduce the core frequency in order to reduce the power. Auto sets the power limit according to the CPU specifications. This item is configurable only when Turbo Power Limits is set to Enabled. (Default: Auto)
DRAM Power Limit (Watts) / DRAM Power Limit Time Allows you to set the power limit for memory Turbo mode and how long it takes to operate at the specified power limit. Auto lets the BIOS automatically configure this setting. This item is configurable only when Turbo Power Limits is set to Enabled. (Default: Auto)"
That same language can be found in all three MB manuals. So it would appear that PL1, PL2 and Tau are adjustable, as HarkPtooie has suggested (but to be sure, the latest BIOS version should be installed, imho).
The only question I have is, why did Gigabyte apparently update the B460M DS3H (rev. 1.0) to the B460M DS3H V2 (rev. 1.0) (maybe they are different in some hardware way that I have failed to notice).
The stress test should be the one that produces the highest temperatures together with the best cooling solution possible for these non-K parts. It sounds a bit circular but then these are non-K parts where we constrain the control knobs to just pl1, pl2 and tau.
It would be ironic if I were wrong, but I sort of trust my eyes here. And my point was that anyone possessing an i7-10700 and a $20 wattmeter can easily check this too.
I would have liked them to also test the processors with the heatsink that comes in the retail box; that would provide a sample of how the product behaves as the end user receives it when buying it. Obviously the use of a heatsink from a third-party manufacturer improves the performance of both, due to the superior ability to remove heat, which helps maintain the turbo frequencies for longer on both processors.
One thing I don't see is that the CPU is officially rated at 2.9GHz, not the 4.0GHz the graphs seem to suggest. We are getting 4.0 with proper cooling, but what if I gave it a 90W cooler? Would I end up back at 2.9GHz? We all know that frequency and power draw never follow a linear curve, so we might see 25% lower performance at 1/3 the power draw, and as such their claim about 65W could be true, except that it peaks if allowed to. I mean, don't get me wrong, it's shitty, but is it really that wrong though?
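The nonlinear frequency/power relationship mentioned above follows from dynamic power scaling roughly with C * V^2 * f, where voltage must also rise with frequency. A toy sketch with assumed voltage/frequency pairs and an arbitrary capacitance constant, not measured figures for this chip:

```python
# Dynamic CPU power scales roughly as C * V^2 * f, and V must climb with
# f, so power grows much faster than clock speed near the top of the
# range. All numbers below are illustrative assumptions.

def dynamic_power(volts, ghz, cap_const=20.0):
    """Rough dynamic power estimate; cap_const is an arbitrary scale."""
    return cap_const * volts**2 * ghz

low  = dynamic_power(0.90, 2.9)  # near a hypothetical base clock
high = dynamic_power(1.30, 4.6)  # near a hypothetical all-core turbo
print(round(low, 1), round(high, 1), round(high / low, 2))  # 47.0 155.5 3.31
```

In this toy model, about 59% more frequency costs roughly 3.3x the power, which is why a 65 W limit at base clock and 200+ W at full turbo can both be "true" for the same silicon.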
Dr. Cutress, thank you as always for these in-depth analyses.
I would really like to see how this compares with the previous 9th generation Intel parts (9900, 9700, 9600, etc). However, in your Bench tool the separate 2020 and 2019 test suites make this difficult. Couldn't the data be back-ported or forward-ported, with N/A for tests that aren't in both datasets? I love the Bench tool, but it's recently hamstrung by not allowing comparisons of 8th gen and 9th gen (and equivalent AMD parts).
I was under the impression that TDP was the maximum amount of thermal energy, measured in watts, that a CPU would ever produce and would need to be "removed" by the thermal solution, not the amount of energy measured in watts that a CPU consumes. Surely a processor is not converting all of the power it consumes into heat, else it would be a very efficient space heater and not a CPU.
Simply put, Intel bases its TDP on base clocks, with what they would consider default settings. AMD bases its TDP, for the most part, on max power draw. Same term, WAY different view of what TDP means between them.
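A rough sketch of how the two vendors' headline numbers map to actual power limits. The 1.35x multiplier is AMD's commonly documented desktop TDP-to-PPT ratio; the PL2 figure is the i7-10700 value quoted elsewhere in this thread; treat the helper functions themselves as illustrative, not an official API:

```python
# Illustrative mapping of "TDP" to real limits. Intel: TDP == PL1, the
# sustained base-clock power; the short-term PL2 is far higher. AMD: the
# enforced socket limit (PPT) is TDP * 1.35 on desktop Ryzen.

def intel_limits(tdp_w, pl2_w):
    """Intel-style limits: TDP is PL1; PL2 must be supplied separately."""
    return {"sustained PL1": tdp_w, "burst PL2": pl2_w}

def amd_limits(tdp_w):
    """AMD-style limit: the socket PPT cap derived from TDP."""
    return {"socket PPT": round(tdp_w * 1.35)}

print(intel_limits(65, 224))  # {'sustained PL1': 65, 'burst PL2': 224}
print(amd_limits(105))        # {'socket PPT': 142}
```

So a "65 W" Intel part may legitimately burst to 224 W, while a "105 W" Ryzen is hard-capped near 142 W, which matches the ~150 W full-load Ryzen figure cited earlier in the thread.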
Dear AnandTech team, thanks a lot for this great clarification of the difference between K and non-K Intel products. However, I would like to know what you think about these Geekbench results showing a gap of around 13% between the 10700 and 10700K on the multi-core bench: https://browser.geekbench.com/processor-benchmarks Since the difference in all-core turbo frequency between those two processors is around 2-3% (4.7GHz vs 4.6GHz), I cannot understand why there would be a 13% gap in this benchmark. Does it mean that the aggregated Geekbench data for the 10700 comes from OEM builds with entry-level motherboards which don't maximize turbo (probably because the VRMs are not great) and stay within the Intel-recommended turbo?
210 Comments
Marlin1975 - Thursday, January 21, 2021 - link
"65 watt" you keep using that word, I don't think it means what you think it means.
YB1064 - Thursday, January 21, 2021 - link
From the first peak power chart, the 10700K consumes almost twice as much power as an equivalent AMD offering at that price point. The "65W" number is blatantly false advertising.heickelrrx - Friday, January 22, 2021 - link
The 10700K is not a 65W part; the 10700 is the one labeled as 65W.
Samus - Friday, January 22, 2021 - link
Intel is just lying at this point, as they are 'effectively' ~215W parts if you put them in a motherboard from Asus, ASRock, MSI, Gigabyte, etc. Only in an OEM system like an HP EliteDesk or Dell workstation will they run anywhere close to their TDP rating, but I'd guess those are using PL2 as well, because why not, Intel said it's OK. It's become painfully obvious Intel has had to resort to extreme measures here to compete. And "compete" is a pretty loose definition, as they are using almost double the power of the competition and are still slower clock for clock, dollar for dollar. No wonder Intel has shaken up the ranks; this is embarrassing.
Smell This - Friday, January 22, 2021 - link
The AMD 3rd Gen Ryzen Deep Dive Review: 3700X (65W) and 3900X Raising The Bar
https://www.anandtech.com/show/14605/the-and-ryzen...
I snagged a Ryzen 3700X 8c/16t for $280 3 months ago. The price is at $325 or so these days until it is supplanted by a Ryzen 5700X. Makes the Core i7-10700 at 197w very, very sad.
Fully loaded (by Andrei & Gavin) was around 90w.
bananaforscale - Monday, January 25, 2021 - link
Doesn't matter, the 10700 isn't really a 65W part either. I can deal with a 105W Ryzen pulling 150W under full load, but having a "65W" part pull 215W is just BS. That's over triple, and will overstress crap VRMs.
III-V - Friday, January 22, 2021 - link
It's peak power... Not sustained power, which is what TDP deals with.shabby - Saturday, January 23, 2021 - link
Its 2.9GHz base clock uses 65W; that's basically what the TDP rating is. It would be nice if AnandTech posted the actual wattage during each test for each CPU. Not just how many fps it got, but how much wattage it used in that test.
Qasar - Saturday, January 23, 2021 - link
Especially for games. I keep reading how some say that in games Intel is still better than AMD when it comes to power usage, but I don't really see much about it.
scottlarm - Saturday, January 23, 2021 - link
flyingpants265 - Friday, January 22, 2021 - link
Right. I'm not even sure why this is an issue. TDP stands for "thermal design power"; it's how much power the chip uses, it's not debatable.
etal2 - Thursday, January 21, 2021 - link
What I'm missing from this review is a benchmark running under Intel's recommended settings. From what I've seen, people often see the 65W rating and go on to combine the i7-10700 with cheap B460/H470 motherboards and basic coolers.
Duraz0rz - Thursday, January 21, 2021 - link
The problem here is that the turbo limit is not enforced by the chip, but by the mobo. So even cheap B460/H470 boards can set that limit higher than Intel's recommendations if they choose to. And no one buying these boards will necessarily care to dig into the BIOS and set the limits themselves.
Cygni - Thursday, January 21, 2021 - link
Yes, they would. There are lots of (admittedly niche) applications where outright sustained performance is less important than bursty performance in a limited thermal envelope, due either to space or ventilation issues: HTPCs, home servers, small industry applications, etc. So yeah, I agree with the OP, I would have liked to have seen performance numbers at the "suggested" 65W PL1.
Calin - Friday, January 22, 2021 - link
I totally agree with your comment, but what you ask for is a different article: performance numbers in a strictly power-limited environment, from both Intel and AMD (although Intel will be unfairly penalized by being three or so lithography generations behind).
Spunjji - Friday, January 22, 2021 - link
"Unfairly penalized"?
The product you test is the product they have on sale; that's not unfair in the context of a test designed to represent a specific real-world requirement.
olde94 - Monday, January 25, 2021 - link
Yeah, I never heard anyone saying that AMD was "unfairly penalized" in 2015. They could just "suck it up".
Spunjji - Monday, January 25, 2021 - link
To be fair, some people did (GloFo's 28nm is terrible, I don't care about power, etc.) and I had no time for them either.
Samus - Friday, January 22, 2021 - link
I don't think ANYONE actually wants to see the numbers for these chips at 65W :)
Spunjji - Monday, January 25, 2021 - link
I love a good laugh!
iAPX - Saturday, January 23, 2021 - link
I totally agree: a 5600X and a 10700 at their 65W TDP, and at their maximum performance, to give a baseline of what performance level is WARRANTED by their makers.
etal2 - Thursday, January 21, 2021 - link
Setting the limits in the BIOS is very nice and all, but without the voltage regulation and thermal capacity they cannot sustain this performance for very long regardless of the numbers set. I very much doubt that on cheap $60 boards with 4-phase VRMs the manufacturers set the limits very high; they would get fried boards within the warranty period, and we know very well they can't have that.
Spunjji - Friday, January 22, 2021 - link
That would be nice to see. Perhaps an article showing which of a representative selection of processors provide the best performance at a given set of fairly common power levels (65W / 95W / 125W). Something for when Dr Cutress finds himself with infinite time and no impending deadlines :)
u.of.ipod - Thursday, January 21, 2021 - link
So happy I waited patiently and got a Ryzen 5600X for my small form factor system. The fact it can hang with the i7s and only consumes 1/3 the peak power draw is great for heat output and playing nicely with SFX PSUs.
Golgatha777 - Thursday, January 21, 2021 - link
And that's how you end up with graphs like this one:
https://cdn.mos.cms.futurecdn.net/i9W8M8HgGaTqRs4b...
Spunjji - Friday, January 22, 2021 - link
Oof.
Samus - Friday, January 22, 2021 - link
Oof.
magreen - Friday, January 22, 2021 - link
Pentium 4 Extreme Edition all over again.
Mr Perfect - Thursday, January 21, 2021 - link
Yeah, if Alder Lake doesn't get things under control later this year, my next ITX build will also be Ryzen. There's just no sense in using a CPU drawing 200+ watts in SFF when cooling the current crop of GPUs is hard enough as it is!
DanNeely - Thursday, January 21, 2021 - link
In SFF I suspect you'll be operating well below 215W - if not because the mobo can't supply the power, then because your small form factor cooler can't handle the heat and you're limited to short turbo periods due to thermal throttling.
Mr Perfect - Thursday, January 21, 2021 - link
You should well be able to run the 215-watt mode if you're daring. Some folks over on r/sffpc use the big CPUs, because the mITX boards have power delivery similar to the ATX boards and do support them. The problem is they report idle temps around 50C and gaming temps around 70C. Personally, I'm just not willing to run those temps. My current ITX build is using an older 95-watt CPU (that actually drew 95 watts at turbo) and idles at 35C and games around 50C. A new CPU with an idle temp that's the same as my current load temp is just mind-boggling.
Dug - Thursday, January 21, 2021 - link
You do realize that 50 and 70C are nowhere close to anything to worry about? You said personally you aren't willing to run those temps. Have you even looked into what modern processors are capable of? Have you ever used a laptop, which will hit that temp just by opening the lid?
Calin - Friday, January 22, 2021 - link
I remember one processor (or maybe GPU) had temperature limits (internal temperature, as measured on-die) of a bit over 100 Celsius. It might have been an NVidia chip, though I remember Intel coming close to that. Compared to that, 50 Celsius at idle and 70 at load is positively arctic ;)
at_clucks - Friday, January 22, 2021 - link
Perhaps idling at 50 is not outstanding, but full load at 70 definitely is, especially for a cramped SFF PC.
Spunjji - Friday, January 22, 2021 - link
It all depends on the fan speeds, really. 50 at idle is extremely impressive if the system is silent!
Wineohe - Thursday, January 21, 2021 - link
I too just picked up a 5600X at the list price of $299 to build a NAS. Paired it with a $120 B550, some extra RAM and a video card I had collecting dust. Wish I would have researched the case more though. Ironically it's considerably faster than my desktop with its pokey old i7-6800K. I'll wait until the 5950X becomes more readily available for an upgrade.
schujj07 - Friday, January 22, 2021 - link
Using a 5600X for a NAS, I assume it is for home and not enterprise, is total overkill. You would be fine using an Athlon 200G in a home NAS and would never notice the difference.magreen - Friday, January 22, 2021 - link
I run a Pentium M for my Ubuntu server/NAS.
blckgrffn - Thursday, January 21, 2021 - link
Thanks for the review; it basically shows what other reviews already show, namely that if you set aggressive PL1 and PL2 values across K and non-K SKUs then you'll get similar performance. I am curious why you said the performance is much lower with a 65W power limit and then didn't include those results.
I feel like it is common knowledge, especially with 10th Gen Intel CPUs, that you need to manually configure PL1 and PL2 in keeping with your cooling solution, but perhaps not.
blckgrffn - Thursday, January 21, 2021 - link
I mean, the published PL1 for the 10700 is 65W, and PL2 is 224W for 28 seconds. Running outside of those values is essentially turbo overclocking (yeah, I know Intel has also redefined that term).
If your motherboard auto-overclocks the CPU via a ridiculous PL1 value then ¯\_(ツ)_/¯
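The interplay of the published PL1/PL2/tau values can be sketched with a toy simulation. This is a simplification of Intel's documented scheme (an exponentially-weighted moving average of package power gating turbo), not their exact algorithm; the numbers are the 10700 defaults quoted above:

```python
# Toy model of how PL1/PL2/tau interact: the chip may draw PL2 while an
# exponentially-weighted moving average (EWMA) of package power stays
# below PL1. A simplification of Intel's scheme, not the exact algorithm.
PL1, PL2, TAU = 65.0, 224.0, 28.0  # watts, watts, seconds

def turbo_seconds(pl1, pl2, tau, dt=0.01):
    """How long a chip starting from idle can hold PL2 before the
    moving average of power reaches PL1."""
    avg, t = 0.0, 0.0
    while avg < pl1:
        avg += (pl2 - avg) * (dt / tau)  # EWMA decays toward current draw
        t += dt
    return t

print(round(turbo_seconds(PL1, PL2, TAU), 1))  # ~9.6 s from a cold start
```

Notably, the simple model only sustains PL2 for about 10 seconds from idle; the oft-quoted "28 seconds" is the averaging time constant tau, not a guaranteed turbo duration, and a board can effectively disable the limit by setting PL1 absurdly high.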
Tunnah - Thursday, January 21, 2021 - link
Intel has no incentive to change their policy and label their products with the actual power draw they'll be using, because that would show how much more they suck up compared to AMD. People are constantly looking for metrics to compare the two "teams", and Intel getting to keep the labels of 65W and 125W lets the fans say "see, it has the same power usage as AMD!"
yeeeeman - Thursday, January 21, 2021 - link
This looks just fine to me as long as it is clear for the user.
magreen - Friday, January 22, 2021 - link
But it is not.
porina - Thursday, January 21, 2021 - link
From my observations of Zen 2 at stock operation, 65W TDP models tended to sit continuously at the 88W PPT limit under most all-core load conditions. Has this changed with Zen 3? Do they not hit 88W so easily, or is another (current) limiter taking over? Or is the limit a different value now?
Smell This - Friday, January 22, 2021 - link
Presumably, Zen 3 would operate under the same 'constraints'. The constraints are as follows:
• Package Power Tracking (PPT): The power threshold that is allowed to be delivered to the socket. This is 88W for 65W TDP processors, and 142W for 105W TDP processors.
• Thermal Design Current (TDC): The maximum amount of current delivered by the motherboard's voltage regulators when under thermally constrained scenarios (high temperatures). This is 60A for 65W TDP processors, and 95A for 105W TDP processors.
• Electrical Design Current (EDC): This is the maximum amount of current at any instantaneous short period of time that can be delivered by the motherboard's voltage regulators. This is 90A for 65W TDP processors, and 140A for 105W TDP processors.
"Looking at the total power consumption of the new 3700X, the chip is very much seemingly hitting and maintaining the 88W PPT limitations of the default settings, and weāre measuring 90W peak consumption across the package."
Olaf van der Spek - Thursday, January 21, 2021 - link
Why would one want to limit turbo budgets? Thermals? If there's no thermal headroom the CPU won't turbo (as far). Efficiency?
Calin - Friday, January 22, 2021 - link
The motherboard has power limits - both in instantaneous maximum current from the voltage regulation phases (remember, the mainboard receives 3.3V, 5V, 12V and maybe -5V and -12V from the PSU and has to convert that to processor voltage), and in cooling capacity for the VRM (Voltage Regulation Module). Regardless of the power limits, the processor will slow down if its internal temperature gets too high.
So yes, the "my mainboard's power delivery module cannot deliver more than 80 amps" is a possible reason. Another would be "My case has bad cooling and I want to keep the processor colder". Another would be "As soon as the sustained power goes over 140 watts, the fans in the case start whirring and I hate the sound".
DominionSeraph - Thursday, January 21, 2021 - link
>This does come with a reasonably good default cooler.
No. Just no. The Ryzen coolers are utter trash, and you're doing a disservice to your readers who may never have had a quiet cooler by saying otherwise. I build PCs and I've had several Ryzens go through, and I have never seen one where I would call the acoustics livable. My first sale was a 1700 with the stock cooler, since I didn't have any other AM4-compatible ones at the time, and I still feel bad about selling it that way. It was just terrible. The 212 EVO seems to be within its thermal envelope for quiet cooling up to a stock 3700X, so I'd highly recommend one of those over the stock cooler. Going above the ~85W of a 3700X you should spring for a Fuma 2.
schujj07 - Friday, January 22, 2021 - link
A stock 3700X has a total package power of 88W and the 212 EVO is a 150W TDP cooler, whereas the included Wraith Prism cooler with the 3700X is a 125W TDP cooler. One would expect that the larger-capacity cooler with the larger fan would be quieter.
vegemeister - Friday, January 22, 2021 - link
Heat transfer does not work that way.
ΔT = P * R
where ΔT is temperature rise (K), P is power (W), and R is thermal resistance (K/W).
Unless the temperature rise is known the only thing "150W cooler" tells you is that the heat pipes won't dry out at 150W with reasonable ambient temperature. (That's a thing that can happen. It's not permanent damage, but it does mean R gets a lot bigger.)
The fact is the Wraith Prism is the same 92mm downdraft cooler AMD has been shipping with their CPUs since the Phenom II 965.
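To put that formula in concrete terms, here's a quick worked example. The 0.25 K/W thermal resistance is an illustrative guess for a mid-range tower cooler, not a measured value for either cooler discussed above:

```python
# Worked example of dT = P * R from the comment above. The 0.25 K/W
# thermal resistance is an illustrative guess, not a measured value.
def die_temp(power_w, r_k_per_w, ambient_c=25.0):
    """Steady-state temperature (C) for a given package power and
    cooler thermal resistance, over a given ambient."""
    return ambient_c + power_w * r_k_per_w

print(die_temp(88, 0.25))   # a 65W-TDP Ryzen at its 88 W PPT -> 47.0 C
print(die_temp(215, 0.25))  # a "65W" 10700 at ~215 W -> 78.75 C
```

The same cooler that keeps an 88 W chip comfortable lands a ~215 W chip in the 70s, which matches the SFF temperatures reported further up the thread.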
Spunjji - Friday, January 22, 2021 - link
The Wraith Prisms are fine - the ones that come with the low-end Ryzens (and I think now the 5600) aren't so great for noise, but they do let the CPU come within 95% of its peak performance, so not bad for a freebie.
alufan - Thursday, January 21, 2021 - link
I'm not seeing the point of this article. Is 65W an option? And then you blatantly ignore the actual stated TDP and produce a test. For the test to be a fair comparison, all the chips should be limited to their actual stated power and then run through the benchmarks. It's like saying we are testing CPUs at 125W, including the LN2 FX AMD chip, and seeing how much power you can actually run through it. Running these chips like this constantly will degrade them and eat up a considerable amount of power that you don't need to use. Then again I shouldn't be surprised: yet again 12 articles on the front page regarding Intel, 3 regarding AMD. Guess Intel's media budget is bigger, hmm.
DominionSeraph - Thursday, January 21, 2021 - link
It's AMD CPUs that degrade at stock clocks. Intel will run for decades even with moderate overclocks.
bji - Thursday, January 21, 2021 - link
AMD CPUs do not "degrade" at stock clocks or overclocks.
DominionSeraph - Thursday, January 21, 2021 - link
Their Turbo is literally built around it. It will lower clocks as the chip degrades. The degradation is all over Reddit. I'm surprised no tech site has followed up on the scandal.
bigboxes - Thursday, January 21, 2021 - link
I'm surprised there aren't more trolls like you.
Spunjji - Friday, January 22, 2021 - link
I'm not. I just spent a bit of time on Google, and the majority of the results are people saying "I heard this, is it true?" - the rest are people talking about how they ran their chip way outside spec (significant overvoltage, overclock *and* high temperatures) and can no longer get the same overclock out of it.
Take your FUD and cram it.
Spunjji - Friday, January 22, 2021 - link
It took me less than 15 minutes to confirm that this is a lie. Incidentally, the only CPUs I've ever had "degradation" problems with were all Sandy Bridge - 2 i3s, one i5 and one i7. Only one of them was ever overclocked. They started to show strange issues after 3-5 years - stuff like frame-rate inconsistency in games, graphics artefacts, random crashes.
I've never gone around slamming Intel, though, because sometimes you just get a bad chip. It happens.
magreen - Friday, January 22, 2021 - link
Man Spunjji, you are diligent. I was just going to ignore the obvious troll.
Spunjji - Monday, January 25, 2021 - link
It's a sensible policy - I just like debunking FUD.
bji - Thursday, January 21, 2021 - link
The Ryzen 5 5600X at $299 is a lie right now and has been for months. It's slowly coming down to $399 with general availability. It will be months before it's actually available at $299. Please no one respond with stories of one-off deals that they happened to get from some rare and hard-to-find vendor, where the deal was only available for 10 minutes anyway.
The simple fact is that no Zen 3 processors have had general availability at anywhere near MSRP for months.
Golgatha777 - Thursday, January 21, 2021 - link
I've personally purchased 2 5600Xs and 1 5800X for MSRP at Micro Center?
bji - Thursday, January 21, 2021 - link
Micro Center is not "general availability" given that it's only accessible to a few million people who happen to be within driving distance of one of their stores, so you fail.
silverblue - Thursday, January 21, 2021 - link
Well... one stockist (OCUK) in the UK has had the 5600X at £279 for at least the past 24 hours, whereas the average on Google seems to be about £310 to £320. Your mileage may obviously vary, I suppose.
bji - Thursday, January 21, 2021 - link
The US MSRP is $299, which is 218 British pounds. So the numbers you quote indicate a significant reseller mark-up, which makes my point. So thanks for agreeing with me. I wouldn't mind paying AMD a fair price ($399 apparently) for a 5600X, but I will NOT give $100 or more to scalpers. AMD will use my money to make me more of what I want (faster chips). Scalpers will use my money to just scalp me harder in the future. I will never buy a scalped product.
And so I continue to wait and wait to build a 5600X/RTX3080 gaming PC ... been waiting for months now ...
Calin - Friday, January 22, 2021 - link
I think the UK price includes about 20% Value Added Tax - which is paid directly at the moment of sale. If I remember correctly, US prices do not contain "state tax" and the like.
silverblue - Sunday, January 24, 2021 - link
Exactly. Aside from any delivery costs from retailer to customer, we pay what's written on the price tag. That $299 isn't looking so cheap now. Funnily, the current conversion from dollars to pounds means a near 1:1 for comparison.
jimbo2779 - Friday, January 22, 2021 - link
Everything is more expensive in the UK. Our MSRP is different from yours because of import duties. MSRP is still MSRP; it's just that our MSRP is different from your MSRP. We would not expect to pay US pricing, just as we wouldn't expect to pay the going rate in Australia; that is the case for every saleable item.
bji - Tuesday, January 26, 2021 - link
Then I would appreciate it if the O.P. would indicate both what the MSRP is in his country and the price that he is quoting availability at, so that all the details are known. Since he didn't say the MSRP was any different over there, I just assumed it was the same. It helps to seed the discussion with relevant information at the outset so that we don't have to devolve into useless bickering over unavailable data. I agree that I could have immediately asked what the MSRP was there instead of just assuming it, so that's on me, but even better would have been me not having to ask at all.
Qasar - Tuesday, January 26, 2021 - link
Most of the time, MSRP is based on US dollars.
Spunjji - Wednesday, January 27, 2021 - link
It was safe to assume that, as his reply was in contradiction to yours, £279 was at or near MSRP.
bji - Wednesday, January 27, 2021 - link
It is very difficult to get these parts at MSRP in the USA. I think the safer assumption is that it is also difficult to get these parts at MSRP elsewhere. And yet AnandTech will continue to show the USA MSRP in their CPU comparisons as if that is the realistically available price for the part, which is exactly the incorrect information I was trying to rail against when I posted my original comment that started this whole discussion.
Qasar - Thursday, January 28, 2021 - link
But it's not incorrect information. The only reason hardware isn't anywhere near MSRP is that there are more people wanting the hardware than there are products available. Not to mention that MSRP is, for all intents and purposes, constant versus what the prices are in stores.
Spunjji - Friday, January 22, 2021 - link
UK RRP is £280. We have VAT and get the British Tax. Even at £300, I personally wouldn't lose sleep over a retailer taking an extra £20 given the crappy margins they usually get.
bji - Wednesday, January 27, 2021 - link
MSRP includes a mark-up for the retailer to already make the expected profit. A small additional profit is fine; but in the USA what you have is 'scalpers' buying up parts and then trying to resell them for egregious profits. A 33% mark-up is the minimum, and until recently 75-100% markups were the norm for the Ryzen 5 5600X.
Spunjji - Thursday, January 28, 2021 - link
Yeah, I definitely wouldn't buy at those prices. Fortunately I'm in no hurry; the bank account lies empty.
drexnx - Thursday, January 21, 2021 - link
I also bought a 5800X at Micro Center for MSRP in early December. The "bring up to counter to pick up" sheet they gave me showed they got 75 in on the shipment as well, so it wasn't like it was the one chip they got and I got lucky either...
bji - Thursday, January 21, 2021 - link
Oh my god, how many times am I going to have to explain to posters on AnandTech that Micro Center is NOT general availability? They are limited to a few million people who happen to live within driving distance of one of their stores. I wish there was some way to put a disclaimer about Micro Center in my posts without just inviting further debate. I mean, the WHOLE REASON that I wrote "general availability" in my comment and put the note about "hard to find vendor" was to try to head off the Micro Center comments, but apparently people who shop at Micro Center cannot fathom the idea that 95% of people in the USA do not have access to a Micro Center.
bji - Thursday, January 21, 2021 - link
To clarify: 95% was an exaggeration. Micro Center has like 20 or 30 stores and is accessible by more than 5% of the USA population. But it's probably not more than 20%. Anyway, any store that is not available to the majority of people cannot be called "general availability".
bji - Thursday, January 21, 2021 - link
I mean, seriously. If Micro Center were that easy to get to, would there even be a shortage of these chips at online vendors? Everyone would already be satisfied by the Micro Center supply. But a) the fact that there is an online shortage means very clearly that most people can't get one from Micro Center (otherwise they already would have and there would not be zero supply online), and b) if a significant fraction of the population could actually get them from Micro Center, Micro Center would clearly already be sold out also.
magreen - Friday, January 22, 2021 - link
@bji: How do you feel about Micro Center?
calc76 - Friday, January 22, 2021 - link
I'd suspect it's probably well more than 50% of the US population that is within an hour's drive of a Micro Center. The names of the cities listed on their site aren't all easily recognizable, but the metro areas are almost all the large ones:
Atlanta
Baltimore
Boston
Chicago
Cincinnati
Cleveland
Columbus
DC
Detroit
Denver
DFW
Houston
Kansas City
LA
Minneapolis
Newark
NYC
Philadelphia
St Louis
Dug - Friday, January 22, 2021 - link
They were actually available today at Amazon for a very long time at $299. Of course it's back up again after mass ordering at the time I'm posting this.
I checked several times today. How did I miss that? I doubt it was for a very long time. Probably closer to that 10 minutes I mentioned.
jimbo2779 - Friday, January 22, 2021 - link
My nephew bought a 5600X for MSRP here in the UK from Currys, which is by far the largest national chain of electronics and computer equipment. You could try signing up for an alerts service; providing you have the funds ready to put down, you could likely have one in the next week or so. Not as ideal as next day from any major retailer, but the chips are coming in stock.
Spunjji - Friday, January 22, 2021 - link
UK here. They do mention low availability of that chip in the article, but I notice that didn't prevent you coming here to whinge about it anyway. From my perspective in the UK, they have 10+ in stock at Overclockers for £299.99 and unspecified stock at CCL for £309.
I must say I especially enjoyed how you elaborately set up the goalposts in your first post, then moved them around with every response. Good game!
bji - Saturday, January 23, 2021 - link
No goalposts were moved. If you'd care to elaborate, I'd respond. Or were you just whining about my whinging?
Spunjji - Monday, January 25, 2021 - link
That last bit. Your original post said that none of the processors have had general availability and that it would be months before prices hit MSRP. People have pointed out a few times that in various parts of the world, availability is there and prices are at (or very near) MSRP, yet you've still found reasons not to admit that maybe you overstated things a bit.
bji - Tuesday, January 26, 2021 - link
Well, I guess it's a difference of opinion about what "general availability" means. If most people who are not within direct driving distance of a Micro Center are having a hard time finding these chips, then I would call that not "general availability". I did admit that I overstated the low number of people who are in driving distance of a Micro Center, but I still contend that it's a low number relative to the total number of people who want to buy these chips. We could start arguing now about who is TRULY within reasonable driving distance of a Micro Center - if you have to drive 1 hour to get the chip, is it really still generally available? If your time is even worth the $15/hr minimum wage, that's adding a significant cost in the dollar value of your time to the cost. But I really don't want to argue about it any more. I still think these chips are not generally available, and posting an article that compares one to another when one or both are not easy to get for the vast majority of people is disingenuous, as it presents choices that do not practically exist. If you disagree, then that's fine.
Spunjji - Wednesday, January 27, 2021 - link
It's not a difference of opinion, it's a difference in perspective. I don't even care about Micro Center because I'm not from the USA. These chips have "general availability" in the EU and UK at the very least, so I appreciate the comparison - as will many other people. This article will be around longer than demand continues to outstrip supply in the USA, and maybe even long enough to see $15 an hour become the minimum wage there.
brucethemoose - Thursday, January 21, 2021 - link
Maybe they'll go the route of GPUs and just stop advertising TDPs entirely? This whole system works because most desktop users don't seem to care about TDP or power consumption. Performance at the absolute extreme end of the frequency/voltage curve is all that seems to matter.
Mr_Spock - Thursday, January 21, 2021 - link
I appreciate the testing, showing that with most other things 'equal', there is effectively little difference between the 10700/K. Would you consider running any tests with a 65W-class cooler to demonstrate exactly how performance is damaged by only matching the TDP on the box?
Having the full-bore tests for 10700 is quite useful if you can't find the K at a decent price, but if we're talking about quoted TDP 'limits', let's try limiting to that heat dissipation and see how quickly it flops.
I built a few Haswell systems back in the day, and whoo boy, were those stock coolers ineffective if you got any decent chip under it.
DominionSeraph - Thursday, January 21, 2021 - link
Eh? The stock cooler on the 4770 was fine. They only pulled 50W. I have no idea what you mean by "any decent chip under it", since there was only the 4770 and 4790 at the top and their wattage was about the same. (My 4790 was undervolted a tad and was 50W.)
Mr_Spock - Thursday, January 21, 2021 - link
Running Prime95 on a stock cooler with a 4770 sent temperatures skyrocketing in my experience (80+C). I might have had too-high standards for cooling, since I always recommended at least a 212 EVO or better when it would fit into the cases we used. My employer used cheap cases that didn't come with an exhaust fan by default - installing 1+ case fans was always a recommendation for anything generating a lot of heat, if I could convince the customer how much it was needed. Sounds like our use cases might be different? Or you might have superior case cooling. All I know is we always had more tolerable temps once dumping the included cooler for something more suited to very heavy work.
vegemeister - Friday, January 22, 2021 - link
80°C is safe. Read the Haswell datasheet. The embedded controller won't even tell you to start spinning up the fan until 80°C idle. https://www.intel.com/content/dam/www/public/us/en...
quiq - Sunday, January 24, 2021 - link
Now using a 4590 with an aftermarket cooler; ambient temperature now is 34°C/93°F. https://ibb.co/NKYXx6T
I had to upgrade because with the stock heatsink it ran over 60°C/140°F all the time.
geokilla - Thursday, January 21, 2021 - link
One reason to go for the K series is for faster RAM and the possibility of overclocking it to 5GHz. Shame there are no overclocking vs. turbo results.
Naine - Thursday, January 21, 2021 - link
You keep talking about how the TDP means nothing in the context of peak power draw, but it has nothing to do with peak power draw. That is only restricted by PL2, which is probably well over 200W for that chip, and like you say, motherboards can come up with their own PL2 values, especially if you slot this into the same high-end Z490 board you bought for your K CPU. Different story if you put it in a low-end B460 board. Here's what "65W TDP" means: if you pair this CPU with a 65W cooler, it will work.
That's literally what it means.
dullard - Thursday, January 21, 2021 - link
Yes, it is sad that even well-respected PhDs in the field can't seem to understand that TDP is not total consumed power. Never has been, never will be. TDP is simply the minimum power to design your cooling system around. I actually think that Intel went in the right direction with Tiger Lake. It will do everyone a service to drop any mention of TDP solely into the fine print of tech documents, because so many people misunderstand it.
Yes, TSMC has a fantastic node right now with lower power that AMD is making good use of. Yes, that makes Intel look bad. Let's clearly state that fact and move on.
Power usage matters for mobile (battery life), servers (cooling requirements and energy costs), and the mining fad (profits). Power usage does not matter to most desktop users.
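The desktop energy-cost side of this is easy to put in numbers. A back-of-the-envelope check, assuming a $0.12/kWh electricity rate (a rough US ballpark I'm supplying, not a figure from the article) and the 10700's published 65W/224W/28s limits:

```python
# Back-of-the-envelope: extra energy used by one 28-second PL2 burst at
# 224 W versus staying at the 65 W PL1. The $0.12/kWh rate is an assumed
# ballpark, not from the article.
extra_watts = 224 - 65
seconds = 28
extra_kwh = extra_watts * seconds / 3_600_000  # watt-seconds (J) -> kWh
cost = extra_kwh * 0.12
print(f"{extra_kwh * 1000:.2f} Wh extra, about ${cost:.5f}")
```

That works out to roughly a hundredth of a cent per burst - the cost argument really is negligible; the cooling and VRM arguments elsewhere in the thread are the substantive ones.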
dullard - Thursday, January 21, 2021 - link
Also don't forget that we are talking about 12 seconds or 28 seconds of more power; then it drops back down unless the motherboard manufacturer overrides it. The cost to desktop users for those few seconds is fractions of a penny.
bji - Thursday, January 21, 2021 - link
"minimum power to design your cooling system around" makes NO SENSE.You don't design any cooling system to handle the "minimum", you design it to handle the "maximum".
It sounds like you've bought into Intel's convoluted logic for justifying their meaningless TDP ratings?
iphonebestgamephone - Thursday, January 21, 2021 - link
Why are there low-end and high-end coolers then? Aren't the cheap ones for the minimum, in this case 65W?
Spunjji - Friday, January 22, 2021 - link
dullard's comments are, indeed, a post-hoc justification in search of an audience.
dullard - Friday, January 22, 2021 - link
Bji, no, that is not how engineering works. You need to know the failure limit on the minimum side. If your cooling system cannot consistently cool at least 65W, then your product will fail to meet specifications. That is a very important number for a system designer. Make a 60W cooling system around the 10700 chip and you'll have a disaster. You can always cool more than 65W and have more and/or faster turbos. There is no upper limit to how much cooling capability you can use. A 65W cooler will work, a 125W cooler will work, a 5000W cooler will work. All you get with better cooling is more turbo, more often. That is a selling point, but that is it - a selling point. It is the 65W number that is the critical design requirement to avoid failures.
edzieba - Friday, January 22, 2021 - link
Minor correction on "Never has been, never will be": TDP and peak package power draw WERE synonymous once, for consumer CPUs, back when a CPU just ran at a single fixed frequency all the time. It's not been true for a very long time, but it now persists as a 'widely believed fact'. Something being true only in very specific scenarios but being applied generally out of ignorance is pretty common in the 'enthusiast' world: RAM heatsinks (if you're not running DDR2 FBDIMMs they're purely decorative), m.2 heatsinks (cooling the NAND dies is actively harmful, cooling the controller was only necessary for a single model of OEM-only brown-box Samsung drives because nobody had the tool to tell the controller to not run at max power all the time), hugely oversized WC radiators (from the days when rad area was calculated assuming repurposed low-density-high-flow car AC radiators, not current high-density-low-flow radiators), etc.
Even now "more cores = more better" in the consumer market, despite very few consumer-facing workloads spanning more than a handful of threads (and rarely maxing out more than a single core).
dullard - Friday, January 22, 2021 - link
I'll give you credit there. I should have said "not since turbo" instead of "never has been". Good catch. I wish there was an edit button.
Spunjji - Friday, January 22, 2021 - link
What's really sad is that you apparently prefer to write a long comment trying to dunk on the author, rather than read the article he wrote for you to enjoy *for free*.
"I actually think that Intel went in the right direction with Tiger Lake"
You think poorly.
"Yes, TSMC has a fantastic node right now with lower power that AMD is making good use of. Yes, that makes Intel look bad. Lets clearly state that fact and move on."
Aaaand there's the motivation for the sour grapes.
dullard - Friday, January 22, 2021 - link
Spunjji, I must assume, since you didn't have anything to actually refute what I said, that you have nothing to refute it and instead chose to bash the messenger. Thanks for backing me up!
Spunjji - Monday, January 25, 2021 - link
Assuming makes an ass out of, well, just you in this case. You replied to - and agreed with - a comment from someone who clearly didn't properly read the article, because their complaints were addressed within the article.
I can't "refute" your personal opinions because they were just that; opinions. I can - and did - freely imply that I think they're bad opinions and are based around motivated reasoning. If you think that backs you up, you're even less logical than this "debate me" shtick implies.
Spunjji - Friday, January 22, 2021 - link
That is, in fact, mentioned in the article. If only people would read before commenting.
hansip87 - Thursday, January 21, 2021 - link
One of the things I wonder is, with ASRock Turbo Unlock only allowing 125W max (Z490M-ITX/ac), will it be worth the money to purchase the K version just so we can tinker with the PL1 and PL2 values? I mean, compared to a 6-core which can make do with 125W, an 8-core should need around 160W. The motherboard used here might allow loads bigger than 125W, but what happens with boards that do limit them to 125W?
Endymio - Thursday, January 21, 2021 - link
-> "Also when it comes to benchmarking, because if we were to take an extreme view of everything, then benchmarking is pointless and I'm out of a job."A very good article, but the above sentence is syntactic gibberish. There are several other grammatical errors/typos that an editor -- or even good editing software -- should have caught.
MDD1963 - Thursday, January 21, 2021 - link
At any point, did anyone perhaps run HWMonitor to see what clock speeds were being achieved on the 10700? If only 100 MHz less than the 10700K, clearly we are NOT running in a true 65W TDP envelope...
MDD1963 - Thursday, January 21, 2021 - link
Many folks still referred to the R7 1700X as a 65-watt CPU when running it at 3.9 GHz with it clearly drawing 140-165W from the wall at load... same as the 1800X. (I'm sure criticisms flew equally at Team Red back then, right?) :)
zeealpal - Thursday, January 21, 2021 - link
Everywhere I look references the R7 1700X TDP as 95W, not 65W, with ~108W full package power draw at load: https://www.anandtech.com/bench/product/2281
shabby - Thursday, January 21, 2021 - link
Fake news...
Spunjji - Friday, January 22, 2021 - link
Nice FUD / JAQing off combo you're doing, there!
The 1700 had a 65W TDP, and it drew ~45W in games and ~82W running Prime95.
The 1700X and 1800X had a 95W TDP and drew ~105W and ~113W in Prime 95, respectively
Those chips were pretty roundly criticised for their low power efficiency vs. Intel in lightly threaded workloads at the time. š
eastcoast_pete - Thursday, January 21, 2021 - link
This might be a too simple-minded approach, but even the most overdone power regulation will run into the declared thermal limit if the heatsink used is specced according to the declared "TDP". So, if you'd give a "65W CPU" a heatsink that can only dissipate 65 W sustained, the CPU would run into its thermal shutdown temperature. That would unmask mislabeled CPUs quickly. Just a suggestion.
vegemeister - Friday, January 22, 2021 - link
The CPU will not shut down. It will only throttle. As it is designed to do. As every laptop CPU does. And performance will be much better than 65/200, because power vs. frequency grows much faster than linear, and very few workloads run anywhere near max power anyway.
I like making fun of Intel's poor efficiency as much as the next guy, but this is much ado about nothing.
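Rough arithmetic backs this up. If package power scales roughly with the cube of frequency near the top of the voltage/frequency curve (a common approximation, not a measured figure for these chips), a 65W cap on a ~200W load costs far less than the naive linear share of clock speed:

```python
# Approximation: dynamic power ~ C * V^2 * f, and V rises roughly with f
# near the top of the V/f curve, so P ~ k * f^3. The cube-law exponent is
# an assumption for illustration, not a measurement of any specific CPU.
def throttled_frequency_ratio(p_limit, p_max, exponent=3.0):
    """Fraction of max frequency sustainable under a power cap."""
    return (p_limit / p_max) ** (1.0 / exponent)

ratio = throttled_frequency_ratio(65, 200)
print(f"frequency retained at a 65 W cap: {ratio:.0%}")  # roughly two thirds
print(f"naive linear expectation: {65 / 200:.0%}")
```

So a power-limited chip keeps around two thirds of its clock speed while drawing a third of the power, which is why the throttled performance is "much better than 65/200".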
eastcoast_pete - Saturday, January 23, 2021 - link
I actually meant throttling; the point is that keeping the heat budget to the one stated will show just how fast a CPU can go if it's not allowed to pull way above its stated power envelope. Giving a CPU and MoBo a fancy cooling solution that can dissipate over 3x the rated power invites "cheating". To use an automotive analogy: the rated horsepower of a gasoline engine is for commercially available gas with all the required accessories connected, not with open headers, disconnected pumps and boosted with nitrous.
watzupken - Thursday, January 21, 2021 - link
That's why I steer clear of Intel chips nowadays. The TDP and actual power consumption are worlds apart. People can argue that TDP reflects base clockspeed, and that is true. But when Intel advertises that their CPU can beat the competition, they fail to mention that achieving this feat actually takes 200W or more, not the advertised TDP. If I were AMD, I would use power metrics to put Intel to shame in my marketing material. The same sort of power consumption will likely stick with Rocket Lake.
Spunjji - Friday, January 22, 2021 - link
It's actually likely to get slightly worse - larger, more complex cores running at similar clock speeds on the same lithography. We're likely to see as high as 65W+ for single core loads under turbo, and could get as high as 250W for all 8 on the 11900K.
scineram - Monday, January 25, 2021 - link
No, we will not.
Spunjji - Monday, January 25, 2021 - link
Is this saying you don't think Rocket Lake will hit those power requirements, or that you won't buy it?
zodiacfml - Thursday, January 21, 2021 - link
Long story short, Intel just put 65W there without regard to anything except wanting this chip to squarely compete with the 5600X. Amazing to find Intel in this position vs AMD. I don't remember AMD being this bad in the past, even when two generations of process nodes behind Intel; as I recall, AMD offered slightly less performance at around twice the power of a competing Intel CPU.
Dug - Friday, January 22, 2021 - link
"There is a large number of the tech audience that complain when DDR4-2933 memory is used"
No kidding? You mostly run 2100, and can't be bothered with testing anything beyond because of warranty? Maybe you should just review Dell and HP systems. The long video you linked basically explains nothing but opinion. Why do you even bother with a K processor or a Z motherboard? You are no longer an enthusiast site, and have made this place into a tweeting disaster that can't be bothered with any recent video card. A PhD may make you knowledgeable about a subject, but obviously not about your audience, who are the ones that pay your salary.
Shmee - Friday, January 22, 2021 - link
I kinda agree here actually; while I understand the argument for only testing at JEDEC specs, I would like to see tests at decent XMP speeds as well, at the very least, if one is not going to further tweak the RAM. Overclocking is more common I suspect these days, if one is an enthusiast. For a lot of chips, it is a huge waste not to OC.
I would agree as well that more recent GPUs should be invested in/obtained for reviews, though understandably this is not a great time to find them. On the plus side, kudos for using the MSI Meg Godlike, that is one heck of a board. I have the Ace myself.
Oxford Guy - Sunday, January 24, 2021 - link
I don't think it's particularly credible to consider XMP overclocking in the same way manual overclocking is seen.
1. It comes from the vendor, guaranteed to run at those settings.
2. Motherboard makers post lists of RAM, at XMP speeds, for people to pick so they can have something that works.
3. Overclocking is putting 215W sustained into a 65W chip. Ask Intel and its motherboard maker friends about that.
Spunjji - Friday, January 22, 2021 - link
Who's to decide which memory speeds they ought to test with, though? They can't rightly test with all of them, but what's "enough" - 3200, 3600, 4000?
The honest truth is you can get info about performance of a specific CPU with varying memory speeds elsewhere. For the few percent difference in performance it actually makes, I personally don't care whether or not they bother, and I've been coming here for well over 15 years now.
Oxford Guy - Friday, January 22, 2021 - link
Who decides? The reviewer. Reviewers are supposed to know where the sweet spot is. It's not that difficult to figure out.
Oxford Guy - Friday, January 22, 2021 - link
It's also hilarious for Intel to allegedly be worried about "RAM" overclocking but not about pushing "65W" CPUs to 215W sustained.
I'll buy that for a dollar!
Spunjji - Monday, January 25, 2021 - link
On this point, we agree. Intel's definition of what does and does not constitute overclocking has become increasingly bizarre.
IanCutress - Saturday, January 23, 2021 - link
What? No. A reviewer's job is to provide consistent and trackable data that holds the vendor to their claims. What you're looking for is a buyer's guide.
Oxford Guy - Sunday, January 24, 2021 - link
The data comes from assembling the correct components.
Qasar - Sunday, January 24, 2021 - link
which, as Dr Cutress said, is called a buyers guide. this is a review.
Oxford Guy - Monday, January 25, 2021 - link
Oh please. A review tells people how the equipment performs. Since Zen 2, for instance, is most efficient at 3600 RAM speed, running it at JEDEC, a standard designed so that el-cheapo boards can support it, is not a good choice. It does not tell readers how the equipment maximally performs.
Qasar - Tuesday, January 26, 2021 - link
Product reviews show customer experience with a product or brand. It's where customers give their opinion on what they think about your product, and this can be positive or negative. Positive reviews show the customer was satisfied with the quality of the product, while negative reviews indicate bad experiences.
Buying guides, on the other hand, help customers identify the right product to buy. A buyer's guide includes a product's specifications, how it compares with other similar products, the benefits, and such. Buying guides provide readers with the necessary information needed to make informed purchasing decisions.
Spunjji - Monday, January 25, 2021 - link
What if I don't agree about the location of the "sweet spot" based on RAM prices local to me, or my own performance needs? Your response isn't the best solution to the problem, it's just a different answer.
My preference is for the reviewer to test a CPU at-or-near stock settings, then publish separate articles on things like the returns on faster RAM speeds/timings and overclocking. The minor performance difference from faster RAM really isn't enough to invalidate the review's conclusions.
Oxford Guy - Monday, January 25, 2021 - link
'What if I don't agree about the location of the "sweet spot" based on RAM prices local to me, or my own performance needs?'
Price vs. performance is a different topic. Nice attempt to change the subject, though.
Spunjji - Wednesday, January 27, 2021 - link
It's not at all an attempt to change the subject. The point is that the issue of an "optimal" RAM speed is a moving target dependent on multiple variables.
Makste - Saturday, January 23, 2021 - link
As a part of his audience, I am actually satisfied with this review, as I am looking for the out of the box experience. I'm most likely not to tamper with the manufacturer's recommendations. So he tested the CPUs following both Intel and AMD recommendations. Something I don't see a problem with.
Oxford Guy - Sunday, January 24, 2021 - link
Turning on XMP is hardly tampering, particularly since motherboard vendors provide lists of recommended RAM to use those XMP profiles with.
theqnology - Monday, January 25, 2021 - link
If it shouldn't be the case, shouldn't consumers take it up with Intel, and not with the reviewer? Why does Intel put a lower max than the current JEDEC standards (e.g., Intel at 2933 while AMD at 3200)? I ask, if Intel was so confident about their products, why not up the official support?
This is really the crux of it. Intel wants to make it part of the difference between their Z boards and their non-Z boards, which adds more cost for the consumers, while at the same time washing themselves of this responsibility should something not work (you cannot RMA based on non-JEDEC compatibility). It is also fascinating that consumers think this is on the reviewers.
Oxford Guy - Monday, January 25, 2021 - link
Sure, as long as the reviewer is consistent with the logic.
That means no reviewing boards that violate the official base clock and turbo behavior.
Things like that.
Spunjji - Wednesday, January 27, 2021 - link
Then he can't review any board at all.
Honestly, this "everything must be treated exactly the same otherwise nothing is fair" rhetoric is ghastly and corrosive in whatever domain it's applied.
Qasar - Thursday, January 28, 2021 - link
"That means no reviewing boards that violate the official base clock and turbo behavior."
considering that intel doesn't make board makers conform to anything as far as what "default" would be, good luck with this.
quiq - Sunday, January 24, 2021 - link
I want to see the 10700 with the stock box cooler and an H410/H470 motherboard; that's the real use for a non-K CPU. Versus the 10700K on a Z490 with an aftermarket heatsink or WC, OC'd memory, etc. A real enthusiast CPU.
IanCutress - Saturday, January 23, 2021 - link
Please go ahead and enforce both Intel and AMD to rate their memory controllers faster, then. It's not opinion, it's the literal standard, and the only way to ensure consistency for comparisons across generations. I'm not going to offer one CPU a higher DRAM overclock than another, that isn't fair, just in the same way I'm not going to overclock the cores. I regularly dive into AT's audience metrics, and have done for years. If you want data that's different, then please by all means either do your own testing or find other reviews. But you know, also get them to deep dive into microarchitecture as well as get all the behind the curtain info. Also, all our content is free at the point of use. By your logic, I'm also a consumer of content at AT, thus I also pay my salary. Please enjoy.
Oxford Guy - Sunday, January 24, 2021 - link
This site posted articles about overclocking that were done wildly, without true stability testing and with reckless amounts of voltage, and you're going to now pretend that turning on XMP for RAM is some kind of terrible reckless matter?
Oxford Guy - Sunday, January 24, 2021 - link
The thing is... if you wish to take a stand about JEDEC and company standards that's fine. Just don't post a lot of nonsensical reasons for it, like "Most users don't know how to plug in a computer so we're going to skip the plug for this review".Oxford Guy - Sunday, January 24, 2021 - link
Personally, here is all I'd say on the subject, were I to be taking your stand:
'We use JEDEC standards for RAM speed because those are what AMD, Intel, and other CPU makers use to rate their chips. Anything beyond JEDEC is overclocking and is therefore running out of spec.
Although motherboard makers frequently choose to run CPUs out of spec, such as by boosting to the turbo speed and keeping it there indefinitely, and by including XMP profiles for RAM with lists of 'compatible' RAM running beyond JEDEC, it is our belief that the best place for a CPU's maximum supported RAM speed spec to come from is the CPU's creator.
If anyone is unhappy about this standard we suggest lobbying the CPU makers to be more aggressive about officially supporting faster RAM speeds, such as by formally adopting XMP as a spec that is considered to be within spec for a CPU.
To complement the goal of our JEDEC stance, we are going to create reviews using only motherboards that fully comply with the turbo spec of vendors and/or disable all attempts by board makers to game that spec. If a board cannot be brought into full compliance, we will refuse to post a review of it and any mention of it, with the possible exception of a list of boards that run out of spec and are non-compliant.'
Qasar - Sunday, January 24, 2021 - link
Oxford Guy you seem to be quite unhappy about this review, and by other posts, the site as well, so if this site is so bad, WHY do you keep coming here?
Spunjji - Monday, January 25, 2021 - link
He's certainly not his own best advocate in that regard.
I'd always defend someone's right to criticise aspects of something they otherwise like, but sometimes it goes a bit far.
Oxford Guy - Monday, January 25, 2021 - link
'but sometimes it goes a bit far.'
Extreme vagueness + ad hom = extra fail.
Spunjji - Wednesday, January 27, 2021 - link
@Oxford Guy -
1) That wasn't an ad-hominem - if you're going to do the "master debater" thing, at least learn to distinguish between commentary on the person and their argument.
2) Re: "extreme vagueness" - that was my personal opinion stated as a colloquialism. I don't owe anyone an annotated list of every comment you made, metric measurements of precisely how far they went, an objectively-defined scale of how far is too far, and a peer-reviewed thesis on the precise moment at which you exceeded that point.
Oxford Guy - Monday, January 25, 2021 - link
Ad hom... how unsurprising.
To answer your question: This site is not bad. This site is good because people are able to give their honest opinions instead of living in a disgusting echo chamber like on ArsTechnica or Slashdot.
Perhaps that's where YOU should go instead.
Qasar - Tuesday, January 26, 2021 - link
why would i go there? this site is top notch when it comes to reviews and comp hardware news. "This site is good because people are able to give their honest opinions" yes, but sometimes, some go too far with the whining and complaining :-)
trenzterra - Friday, January 22, 2021 - link
Would be good if we could have some temperatures to compare as well. I used to buy the mid-end non-K Intel CPUs since I don't overclock, but I always ended up with temperatures about 10 degrees higher than what most people report. With my latest build (ok, not that new now that it's actually an i5-6600K), I went for the K variant and temperatures are much better and in line with what most users report.
Spunjji - Friday, January 22, 2021 - link
You can infer temperature from wattage more accurately than via a temperature measurement, because that measurement depends on the configuration of the test system (cooler type, fan speeds, case airflow).
Hxx - Friday, January 22, 2021 - link
That's not true, because it's not proportional to power draw. A 10700K uses 200+W and runs at around 70C while a 5600X uses much less power running at around the same temps. Power draw is not a good indicator and yes, it comes down to your setup. Intel is just not as efficient, but that doesn't make it a hot chip.
Oxford Guy - Sunday, January 24, 2021 - link
Yes, it does. That heat doesn't vanish into thin air. It exists.
Spunjji - Monday, January 25, 2021 - link
"A 10700k uses 200+W and runs at around 70C"
Again, with what cooler and fan speeds? Even accounting for the different die sizes, the only way this comparison can really be true is if there isn't an equal amount of cooling between the two processors. As OxfordGuy said, that heat has to go *somewhere*; for the temperatures to be the same between different heat loads *something* must be causing more heat to be dissipated.
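A first-order steady-state model makes the point: die temperature sits at ambient plus package power times the cooler's effective thermal resistance. The 0.2 C/W resistance and 25 C ambient below are made-up illustrative values, not measurements of any particular cooler:

```python
def steady_state_temp(power_w, ambient_c=25.0, r_theta_c_per_w=0.2):
    """First-order steady state: die temp = ambient + power * thermal resistance."""
    return ambient_c + power_w * r_theta_c_per_w

# Same (hypothetical) cooler, two different package powers. If two chips at
# very different powers really sit at the same temperature, the cooling
# between them cannot be equal.
print(steady_state_temp(200))  # 65.0
print(steady_state_temp(80))   # 41.0
```

In other words, equal temperatures at unequal wattages imply unequal thermal resistance (bigger cooler, faster fans, better airflow) somewhere in the comparison.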
vegemeister - Sunday, January 24, 2021 - link
No. IIRC all of the K variants have soldered IHS and shaved down dies. Not all of the non-K products do.
Spunjji - Monday, January 25, 2021 - link
That's a point I had forgotten, and a fair one - but temps in a review still won't tell an end-user much about the temps they'll get, especially as variability can be quite high depending on the voltage an individual CPU requires to operate at its various speeds.dsplover - Friday, January 22, 2021 - link
Nobody wants Intel until they ditch 14nm. I love Intel, but I'm getting their next CPU, as well as a desktop AMD 5000 w/APU.
Seems they survived their diversity exercises and are back in the game, but not until 2022.
Until then a few 4790Ks are still paying me.
Spunjji - Monday, January 25, 2021 - link
"Seems they survived their diversity exercises"
Intel are doing badly, it MUST be because they stopped almost exclusively hiring white men! /s
Motivated reasoning is a disease.
Hixbot - Saturday, February 13, 2021 - link
Wow, you're really pinning Intel's faults on diversity? Meanwhile you are ignoring that AMD's success is led by an Asian American woman? You really need to check your bigotry at the door.
Oxford Guy - Friday, January 22, 2021 - link
"Specifically on the sha256 tests, both AMD and Via pull out a lead due to a dedicated sha256 compute block in each core."
VIA, eh?
sjkpublic@gmail.com - Friday, January 22, 2021 - link
Yes. There is an issue with power consumption. And that is a lead into the real story. Intel has been at 14nm for 3 years now. Historically that time frame is unheard of. Some may say the complexity of the Intel CPU die is partly to blame. Some may say it is no wonder that Apple went to M1. Everyone will say Intel has dropped the ball.
DieselPunk - Saturday, January 23, 2021 - link
Wow, here's a shock. Modern games get very little difference from CPUs as they are all GPU bound. And a good high end GPU is going to burn far more coal than a CPU ever will.
As a gamer, WTF do I care about CPU power usage for? When I run out of coal there is still lots of gasoline
yankeeDDL - Saturday, January 23, 2021 - link
Is it fair to say that the 10700 is on par (at best) or slower (in most multi-threaded scenarios) than the Ryzen 5600X, despite using roughly 2X the power?
Makste - Saturday, January 23, 2021 - link
Put the number of cores into consideration as another factor, and then come up with your own conclusion.
HarkPtooie - Sunday, January 24, 2021 - link
I registered just to post this: you're nuts.
I just measured my "65W" i7-10700 non-K while stress testing it, and it eats 165 W at the wall plug. 64GB RAM, good quality Corsair 450W PSU.
Then I compared it to my "65W" Ryzen 3700X, 32GB RAM = 157 W. That one has an expensive fanless Seasonic 500W PSU with nominally better efficiency at these power draw levels.
So the difference is 10W and may as well be attributed to PSU quality, RAM consumption and whatnot.
If you are going to make wild speculations whose veracity anyone can check, you might want to go over your material a bit better.
Smell This - Sunday, January 24, 2021 - link
LOL
mmm ... Let me see.
Three feature writers at AT versus some 'anecdotal' FUD-peddling troll on the Internet. The Universe will make the call.
The 65w 8c/16t AMD Ryzen 3700X, fully loaded, pulls 90w. There is also a fancy multi-colored chart for you!
https://www.anandtech.com/show/14605/the-and-ryzen...
The i7-10700, in this article, pulls 197w to 214w. Ooops.
Psssst ___ By the way, my local MicroCenter (Duluth) offers the AMD Ryzen 3700X at $299 after $30 off, and the i7-10700 for $280 after $120 off. My-my-my, how the mighty has fallen . . .
HarkPtooie - Tuesday, January 26, 2021 - link
So you are saying that their wattmeters are right and mine is wrong because... appeal to authority?
It may be that my Ryzen draws 90 W, but from the looks of it, the i7 is not far off. 10 more watts, not 130.
The universe will indeed make the call.
Spunjji - Wednesday, January 27, 2021 - link
Plausible explanations for the discrepancy, in order of likelihood:
1) The unspecified stress test you're using isn't actually stressing the 10700 very heavily.
2) You're not measuring like-for-like in some other way - be it components or configuration.
3) Your wattmeter is poorly calibrated (This level would be a reach).
4) You're simply not being honest (I don't like to assume this, but you seem aggressive about people questioning your implausible conclusions).
Implausible explanations:
1) Every review on the internet performed with calibrated equipment, specified configurations and specified software loads is somehow wrong and you are right.
Everett F Sargent - Wednesday, January 27, 2021 - link
I'll go as far as requiring/requesting/asking for their MB model (an exact model number and manufacturer thereof). Without that one key piece of information, I have concluded the following: using a Z490 or other relatively high end LGA 1200 MB indicates that the i7-10700 will run at or significantly above 200W in continuous 24/7 operation.
Remember this user claims to be using a 450W PSU, so very likely not a Z490 MB, so indicative of a rather low end system (e.g. no medium to high end GPU, not that that matters, as these are essentially CPU tests unless stated otherwise in this review).
I believe their power number but I don't believe that they are testing on a medium to high end LGA 1200 MB. In other words it is all about the MB default settings for PL1, PL2 and Tau and not the CPU itself.
HarkPtooie - Saturday, January 30, 2021 - link
Noteworthy points:It is an i7-10700F
On a Gigabyte B460M
Populated with 4x16 GB DDR-3000
With an ancient Quadro K2000 and an NVMe SSD.
Hyperthreading is disabled.
I use it for running FEA simulations beside my Ryzen workstation, and it performs like a champ. The cheap & old wattmeter hovers around 157 W or so during simulations. 100% CPU load.
So I take it that if I got the Z490, the CPU would draw 60W more. Would it go faster?
Qasar - Saturday, January 30, 2021 - link
as Spunjji said Harkptooie, practically every review out there says the opposite of what you are saying.
so, who is correct then?
HarkPtooie - Sunday, January 31, 2021 - link
Oh, they do? Be a sport and link me to all those reviews.
TechPowerUp puts it at 2W above the 3700X at stress test.
https://www.techpowerup.com/review/intel-core-i7-1...
Annnnd... that's it. The rest I find are all "compare" sites listing numbers culled from manufacturer sites.
And here comes Anandtech and tells me that my eyes are deceiving me and that my CPU is actually pulling twice as much as I am observing.
The explanation of which would be that better mobos have a power setting that allows it to draw much more than default, with no obvious benefits? I don't get it.
Everett F Sargent - Sunday, January 31, 2021 - link
Well, now you are almost there. Wherever there is, that is.
Watts (power) * Time (seconds) = Energy (e.g. kWh) used
Power (W) versus frequency (Hz) is highly nonlinear (concave up, and more so the closer you get to the redline). Your cooling solution can only dissipate so much power per unit time in continuous 24/7 operation, at a low enough core temperature.
This is all really basic stuff.
So, it will take longer to complete a fixed task at 125W than that same fixed task at 250W (all other things being equal), wherein the first task is running at 4 GHz and the second is running at 5 GHz. These are only example numbers btw.
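To put numbers on those example figures (illustrative only, with a made-up task size): the 250 W run finishes 25% sooner but still uses considerably more total energy, which is exactly why fixed-time and fixed-task comparisons give different pictures:

```python
def task_energy_wh(power_w, freq_ghz, work_gigacycles):
    """Energy to finish a fixed amount of work at a given clock and power."""
    seconds = work_gigacycles / freq_ghz   # time to retire the cycles
    return power_w * seconds / 3600.0      # watt-seconds -> watt-hours

WORK = 72_000  # arbitrary fixed task size in gigacycles, for illustration
print(f"125 W @ 4 GHz: {task_energy_wh(125, 4.0, WORK):.0f} Wh over {WORK / 4.0:.0f} s")
print(f"250 W @ 5 GHz: {task_energy_wh(250, 5.0, WORK):.0f} Wh over {WORK / 5.0:.0f} s")
```

With these numbers, doubling the power buys a 25% speedup and costs 60% more energy for the same task.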
That TechPowerUp review has plenty of fixed task benchmarks (on the other pages) wherein the total time (in seconds) is given. You might want to check out those pages also.
They use four settings on a Z490 MB. The one that is closest to the out-of-box MB tests mentioned here is their "The third data point (blue bar) sees us relaxing the power limits to enable the maximum turbo frequency available for this processor." or what those bar charts are labeled as "Core i7-10700 Max Turbo" ...
It is a real shame that more sites don't do thorough enough reviews. So, for example, on this review on the 2nd page ...
https://images.anandtech.com/doci/16343/10700KInte...
That is a fixed time test and not a fixed task test. That should have been explained in this review.
Maybe this site will do better next time, by using a low end out-of-the-box MB in addition to their high end out-of-the-box Z490 MB. Report frequency, power, energy and time for all tests/tasks. Use proper recording of all these to get a more complete picture of what the heck is going on (time series and integrals thereof even).
My formal and informal (or on the job) training in doing scientific experiments goes back almost fifty years now. Not that that means anything on the internet. :/
Qasar - Sunday, January 31, 2021 - link
HarkPtooie: toms hardware, gamers nexus, redgamingtech, moores law is dead all pretty much say intel uses more power than amd, in some cases quite a bit more.
so either you have your system set up differently and are forcing it to use the power it does, and the rest let the board run as it sees fit. as you said: "The explanation of which would be that better mobos have a power setting that allows it to draw much more than default, with no obvious benefits? I don't get it." actually there is a benefit: when intel's cpus are allowed to use as much power as they can and want, the performance goes up.
but whatever, you believe what you want.
HarkPtooie - Monday, February 1, 2021 - link
Yes, they do - but they do not say that the i7-10700 non-K uses twice the power of an equivalent Ryzen. That is exclusive to this article, and the explanation is that here they use "boost max all the time" BIOS settings that are not quite the nominal default for this CPU.
This is overclocking.
Personally I turn it around and think "I am impressed at the performance Intel managed to squeeze out of this CPU at this power level, considering the process node disadvantage".
I am no fanboy. I usually buy AMD because bang/buck. This time I needed AVX-2 without having to tinker with experimental settings, which is the case with AMD+ANSYS.
HarkPtooie - Monday, February 1, 2021 - link
Ah - so the thing is that my CPU runs default as Intel intended it out of the box, whereas this review uses special motherboard settings that overdrive into a "use any power you need" zone where the max turbo runs all the time?
Okay. That would explain things.
That Intel uses more power than AMD is not surprising since there is a substantial difference between 10 nm and 7 nm. And I am well aware that they cheat the numbers to look better - but that does not change the fact that nominally my 10700 draws about as much as my 3700X - and performs more or less equally. A bit faster single-thread, a bit slower multi-thread.
What this review amounts to is "If you reach inside your system and boost the shit out of your i7, it draws much more power than Ryzen." - why not go all the way and overclock them to 6 GHz and shriek about how the Intel draws 800W while the AMD only needs 600W?
Everett F Sargent - Monday, January 25, 2021 - link
What MB are you using, and/or can you set PL1/PL2 in your BIOS settings? The article is suggesting that on higher end MBs, or some such, the PL1/PL2 settings are set to infinity or can be changed in the BIOS settings (even on a non-K CPU). PL1 is 125W so it appears that your MB has that limit.
Everett F Sargent - Monday, January 25, 2021 - link
OK, made a mistake, the i7-10700 has a PL1 value of 65W and a PL2 of 224W and a PL1 Tau of 28s (those appear to be nominal or default values). Still curious as to the MB and accessible BIOS settings. Also, is there any system software to see these settings (e.g. like AIDA64)? TIA
HarkPtooie - Tuesday, January 26, 2021 - link
I did not set any PL. The systems are default except for the RAM speed, which is set by XMP to 3000 and 3200 MHz respectively.
Should I interpret it as "under certain settings, the i7 can be made to consume vastly more power than it does by default"? That seems contrived.
All I know is that their power consumptions as measured for the whole system are roughly on par during conditions where incidentally the i7 also outperforms the Ryzen in single-thread applications. It is not a bad CPU.
Spunjji - Wednesday, January 27, 2021 - link
The review didn't say it is a bad CPU.
HarkPtooie - Tuesday, February 2, 2021 - link
Gigabyte B460M DS3H
Pegged at 100% CPU utilization on 8 cores (HT disabled) the wall meter says 149-163 W, CoreTemp says I use about 70 W core and 8 W uncore. CPU multiplier bounces between 43-47x, though mainly resting at 46x. Temps are 65-66°C using a humble CoolerMaster TX3 Evo.
Just upped the PL1 to 250 W in BIOS. It made no discernible difference, so I suppose it doesn't work on B460 chipsets.
Everett F Sargent - Tuesday, February 2, 2021 - link
Enable HT. If not, then why not? The battery of tests conducted here and everywhere else have HT enabled. So far, you are still at the apples != oranges stage. It is now time for you to step up or ... :/
Please post results with HT enabled.
Everett F Sargent - Tuesday, February 2, 2021 - link
Oh and the benchmark application that you are using (e.g. Prime95 or whatever), if you do not mind. Please. TIA
HarkPtooie - Wednesday, February 3, 2021 - link
So: I set all the PL limits to max (4090 W) and reran. 173 W. Up 10-15 W from default.
Then I enabled HT and reran. 213 W. +40 W compared to non-HT.
So I turned off the PL tweaking and reran, with HT on. 204 W initially, then after a while it went down to ca 140 W and the multipliers reduced to about 37x.
Kind of surprised that HT made such a difference; I was under the impression that HT "cores", being a small backpack beside the "real" core, added a tiny percent of transistors overall. I usually disable HT because the software I run doesn't benefit from it and actually loses performance with it.
So: mystery solved and I stand corrected.
Intel is not lying when they call this a 65 W CPU. They are however obscuring the fact that it does so with REDUCED PERFORMANCE. Its default behavior is to only run at 100% for half a minute.
When allowed by BIOS tweaks, it will double the power draw but run at 100% all the time. This is overclocking in the sense that default settings are overridden, but not in the sense that the peak speed is driven above its intended levels. It is just maintained at higher power draw.
Aight. I'm back to non-HT and free power. 173W is not that much.
Just did a compare of performance during my simulations, and they were more or less identical to the default settings.
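The behavior described above (full speed for about half a minute, then dropping back to the limit) is how PL1/PL2/Tau interact. A toy model, using the 65 W PL1 / 224 W PL2 / 28 s Tau defaults quoted earlier in the thread: the governor allows burst power while a moving average of draw stays under PL1, then clamps. This is a plain EWMA sketch for intuition, not Intel's actual firmware algorithm, so the exact burst length it produces won't match real hardware:

```python
import math

def simulate_turbo(pl1=65.0, pl2=224.0, tau=28.0, demand=200.0, dt=1.0, steps=120):
    """Toy PL1/PL2 governor: allow up to PL2 while an exponentially weighted
    moving average (time constant tau) of package power stays below PL1,
    otherwise clamp to PL1. Returns the per-step power trace."""
    alpha = 1.0 - math.exp(-dt / tau)  # EWMA weight per time step
    ewma, trace = 0.0, []
    for _ in range(steps):
        power = min(demand, pl2) if ewma < pl1 else pl1
        ewma += alpha * (power - ewma)
        trace.append(power)
    return trace

trace = simulate_turbo()
burst_s = sum(1 for p in trace if p > 65.0)
print(f"seconds of ~200 W turbo before settling at 65 W: {burst_s}")
```

A board at "Intel defaults" behaves like this trace: a short high-power burst, then a long tail at PL1. Boards that set PL1 = PL2 = unlimited simply never leave the burst phase, which is the "215 W sustained" case the review measured.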
Qasar - Thursday, February 4, 2021 - link
it is possible that the Gigabyte B460M DS3H that you are using (as per a previous post) could be holding the cpu back as far as overclocking, power usage and such goes, as the B460M doesn't support overclocking by intel, but asus, asrock and msi seem to have found a way to enable overclocking:
https://www.techpowerup.com/266489/asrock-enables-...
https://videocardz.com/newz/asus-asrock-and-msi-br...
at the same time, though, what asus, asrock and msi have done isn't really overclocking, but more of allowing the cpu to use its turbo states longer than what intel allows
both of those links, could explain, at least partly, HarkPtooie, why you are getting the results you have.
Everett F Sargent - Thursday, February 4, 2021 - link
Yes, I found those links also. Conspicuously absent from all those reports was Gigabyte. But... https://www.gigabyte.com/us/Motherboard/Intel-H470...
There you will find ...
B460M DS3H (rev. 1.0)
B460M DS3H AC (rev. 1.x)
B460M DS3H V2 (rev. 1.0)
(ranked oldest to newest afaik)
From the manual for the B460M DS3H (rev. 1.0) (page 25) ...
https://download.gigabyte.com/FileList/Manual/mb_m...
https://download.gigabyte.com/FileList/Manual/mb_m...
https://download.gigabyte.com/FileList/Manual/mb_m...
"Turbo Power Limits
Allows you to set a power limit for CPU Turbo mode. When the CPU power consumption exceeds the specified power limit, the CPU will automatically reduce the core frequency in order to reduce the power. Auto sets the power limit according to the CPU specifications. (Default: Auto)
Package Power Limit TDP (Watts) / Package Power Limit Time
Allows you to set the power limit for CPU Turbo mode and how long it takes to operate at the specified power limit. If the specified value is exceeded, the CPU will automatically reduce the core frequency in order to reduce the power. Auto sets the power limit according to the CPU specifications. This item is configurable only when Turbo Power Limits is set to Enabled. (Default: Auto)
DRAM Power Limit (Watts) / DRAM Power Limit Time
Allows you to set the power limit for memory Turbo mode and how long it takes to operate at the specified power limit. Auto lets the BIOS automatically configure this setting. This item is configurable only when Turbo Power Limits is set to Enabled. (Default: Auto)"
That same language can be found in all three motherboard manuals. So it would appear that PL1, PL2 and tau are adjustable, as HarkPtooie has suggested (but to be sure, the latest BIOS version should be installed imho).
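On Linux you can also read back what the firmware actually programmed without digging through the BIOS. This is a sketch assuming the intel_rapl powercap driver is loaded; `intel-rapl:0` is package 0 and the path may differ on other systems:

```python
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")  # package 0, if present

def microwatts_to_w(text: str) -> float:
    """powercap exposes limits in microwatts; convert to watts."""
    return int(text.strip()) / 1_000_000

def read_limits(root: Path = RAPL) -> dict:
    """constraint_0 is the long-term limit (PL1) with its time window (tau);
    constraint_1 is the short-term limit (PL2)."""
    return {
        "PL1_W": microwatts_to_w((root / "constraint_0_power_limit_uw").read_text()),
        "tau_s": int((root / "constraint_0_time_window_us").read_text()) / 1_000_000,
        "PL2_W": microwatts_to_w((root / "constraint_1_power_limit_uw").read_text()),
    }
```

On a board following Intel's recommended settings for the i7-10700 you would expect to see roughly PL1 = 65 W and a short tau; on the enthusiast boards discussed above, PL1 is typically raised to match PL2.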
The only question I have is why Gigabyte apparently updated the B460M DS3H (rev. 1.0) to the B460M DS3H V2 (rev. 1.0) (maybe they differ in some hardware way that I have failed to notice).
The stress test should be the one that produces the highest temperatures together with the best cooling solution possible for these non-K parts. It sounds a bit circular, but then these are non-K parts where we constrain the control knobs to just PL1, PL2 and tau.
Spunjji - Monday, January 25, 2021 - link
"If you are going to make wild speculations whose veracity anyone can check, you might want to go over your material a bit better." The irony of ending your FUD with this... it's glorious!
HarkPtooie - Tuesday, January 26, 2021 - link
It would be ironic if I were wrong, but I sort of trust my eyes here. And my point was that anyone possessing an i7-10700 and a $20 wattmeter can easily check this too.
Spunjji - Wednesday, January 27, 2021 - link
Good for you, but I don't trust your eyes - not when every objective review available on the internet contradicts you.
quiq - Sunday, January 24, 2021 - link
I would have liked them to also test the processors with the heatsink that comes in the retail box; that would show how the product behaves for the end user who just buys it. Obviously a heatsink from a third-party manufacturer improves the performance of both due to its superior ability to remove heat, which helps maintain the turbo frequencies for longer on both processors.
olde94 - Monday, January 25, 2021 - link
One thing I don't see addressed is that the CPU is officially rated at 2.9 GHz, not the 4.0 GHz the graphs seem to suggest. We are getting 4.0 GHz with proper cooling, but what if I gave it a 90 W cooler? Would I end up back at 2.9 GHz? We all know that frequency versus power draw is never a linear curve, so we might see 25% lower performance at 1/3 the power draw, and as such their claim about 65 W could be true, except that it peaks higher if allowed to. I mean, don't get me wrong, it's shitty, but is it really that wrong though?
noxplague - Monday, January 25, 2021 - link
Dr. Cutress, thank you as always for these in-depth analyses. I would really like to see how this compares with the previous 9th generation Intel parts (9900, 9700, 9600, etc). However, in your Bench tool the separate 2020 and 2019 test suites make this difficult. Couldn't the data be back-ported or forward-ported, with N/A for tests that aren't in both datasets?
I love the Bench tool, but it's recently hamstrung by not allowing comparisons of 8th gen and 9th gen (and equivalent AMD parts).
Nesteros - Wednesday, January 27, 2021 - link
I was under the impression that TDP was the maximum amount of thermal energy, measured in watts, that a CPU would ever produce and that would need to be "removed" by the thermal solution, not the amount of power a CPU consumes. Surely a processor is not converting all of the power it consumes into heat, else it would be a very efficient space heater and not a CPU.
Qasar - Thursday, January 28, 2021 - link
take a look at this: https://www.anandtech.com/show/13544/why-intel-pro...
Simply put, Intel bases its TDP on base clocks with what it considers default settings, while AMD bases its TDP, for the most part, on max power draw. Same label, very different views of what TDP means.
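One way to see which definition your own chip lives up to is to measure average package power directly under load. A rough sketch for Linux, assuming the intel_rapl powercap interface exposes the cumulative energy counter at the path below (counter wraparound handling omitted for brevity):

```python
import time
from pathlib import Path

ENERGY = Path("/sys/class/powercap/intel-rapl:0/energy_uj")  # package 0 counter

def watts(delta_uj: int, dt_s: float) -> float:
    # energy_uj counts microjoules; energy delta over time = average watts
    return delta_uj / 1_000_000 / dt_s

def average_package_power(seconds: float = 5.0, counter: Path = ENERGY) -> float:
    """Sample the cumulative energy counter twice and take the average rate.
    Run a sustained all-core load in parallel for a meaningful number."""
    e0, t0 = int(counter.read_text()), time.monotonic()
    time.sleep(seconds)
    e1, t1 = int(counter.read_text()), time.monotonic()
    return watts(e1 - e0, t1 - t0)
```

Note this reads the package's own accounting, so it will roughly match the PL1/PL2 discussion above rather than wall power; a $20 wattmeter at the plug will read higher because of VRM and PSU losses.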
Peter-fra - Saturday, February 13, 2021 - link
Dear AnandTech team, thanks a lot for this great clarification of the difference between K and non-K Intel products. However, I would like to know what you think about these Geekbench results showing a gap of around 13% between the 10700 and 10700K on the multi-core benchmark: https://browser.geekbench.com/processor-benchmarks
Since the difference in all-core turbo frequency between those two processors is around 2-3% (4.7 GHz vs 4.6 GHz), I cannot understand why there would be a 13% gap on this benchmark.
Does it mean that the aggregated Geekbench data for the 10700 comes from OEM builds with entry-level motherboards which don't maximize turbo (probably because the VRMs are not great) and stay within the Intel-recommended limits?
Scour - Monday, February 15, 2021 - link
That's the end of my 350W PSUs :(
stealth-katana - Friday, April 2, 2021 - link
The image with the text written with a Sharpie on the CPU is making me cringe. 🤣
briantim - Wednesday, September 8, 2021 - link
http://home.anandtech.com/show/16343/intel-core-i7...