...overall gains in cpu performance on the intel side of things haven't come in leaps and bounds since 2012.
compare the 3770k (2012) to the 7700k (2017): it's roughly a 30% performance increase...in 5 years.
AMD's record in the same timeframe is even worse.
adamsleath :
the graphics card side of things is much more impressive..
GPUs are throughput machines that work on parallel workloads. Increasing performance is as easy as adding moar cores.
adamsleath :
in this regard, as was mentioned many years ago with the move to dual and then more cores, multicore would pave the way for the future of cpus (and therefore programming as well) because of GHz limitations. finally in 2017 we have a cpu that can achieve 5GHz...i remember talking about this in 2008; 5GHz seemed beyond reach back then, when a good intel cpu was around 4.0GHz.
anyway my main point is cpu gains on the single core front have been limited since they started to reach GHz limitations / process limitations. an economy of scale way to increase processing power is obviously more cores.
Except that moar cores are limited by Amdahl's law. Otherwise, hundred-core CPUs would be the norm today.
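To put rough numbers on it, here is a minimal sketch of Amdahl's law in Python; the 95% parallel fraction is just an assumed example, not a measured figure:

```python
# Amdahl's law: speedup on n cores when a fraction p of the work parallelizes.
def amdahl_speedup(n_cores: int, p: float) -> float:
    return 1.0 / ((1.0 - p) + p / n_cores)

# Even at an (assumed) 95% parallel workload, returns diminish quickly:
for n in (2, 4, 8, 16, 100):
    print(f"{n:3d} cores -> {amdahl_speedup(n, 0.95):5.1f}x speedup")
# 100 cores yield only ~16.8x, which is why moar cores alone can't be the norm.
```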
We have talked about this multiple times: of course AMD wants us to use their CPUs at higher resolutions so they can hide their weak single-core performance, because at low resolutions the Intel advantage is significant, and it only trails off as you run games at higher resolutions and detail settings (because the graphics card becomes the performance bottleneck), duh. The Intel Core i7-7700K outpaced the 1700 by around 40fps in the CPU-intensive Ashes of the Singularity test.
The 7700K's good performance continued into single-threaded tests: its 472-point result in POVRay was easily ahead of the Ryzen chip's 315-point result, and it is nearly 60 points better in Cinebench. The Core i7 is a better overclocker than the AMD chip, too, and its power consumption isn't much higher than the Ryzen 7 1700's. The only area where the Core i7-7700K falls behind is in multi-threaded benchmarks; don't let fake reviews fool you.
Do you really want me to believe that, when we ran a test, my i7-990X past 4GHz (released Q1'11) gets equal and sometimes 20+ more FPS than my friend's new Ryzen 7, each of us using one of my two GTX 980 Ti graphics cards while playing online together!? Same games, same levels, walking side by side while talking through a headset. I just can't.
As the graph shows, results can vary depending on the title, and there were compatibility issues with RAM and optimization issues with games. Try comparing frame rates now. The AGESA 1.0.0.6 update has made things much better for RAM compatibility and tightened Ryzen's internal latency problem a bit. You do know that going from 2133MHz RAM to 3200MHz RAM offers a 15% FPS increase over what Intel gains with the same RAM. I'm sure your testing conditions were likely less than ideal.
Lol, I'm still using triple-channel DDR3 and he is using DDR4.
Your processor doesn't suffer internal latency problems. Faster RAM reduces the internal latency associated with Infinity Fabric. Also, as mentioned before, "the AGESA 1.0.0.6 update has made things much better for RAM compatibility and tightened Ryzen's internal latency problem a bit." Like I also said, "I'm sure your testing conditions were likely less than ideal."
Regardless of what you have said, that's still a SIX-year-old chip matching or beating a chip released this year.
Looking at benchmarks, it comes within 10% of Ryzen's single-core score with both chips overclocked. Sure, it gets pretty much demolished in MC, but that's a pretty unimpressive result.
I doubt there are viruses or malware on a brand-new computer, and in gaming, Ryzen having many more cores than needed for that task ensures that background tasks likely aren't getting in the way.
Stop using the slower RAM as an excuse, as it's entirely possible that the 990X was indeed faster at processing the assets required for that game. And even if that isn't the case, 3200MHz RAM is considerably more expensive than 2133MHz RAM.
Regardless of what you have said, that's still a SIX-year-old chip matching or beating a chip released this year.
Looking at benchmarks, it comes within 10% of Ryzen's single-core score with both chips overclocked. Sure, it gets pretty much demolished in MC, but that's a pretty unimpressive result.
You have already determined in your mind that what I've said, which was based on reason and factual evidence, doesn't matter to you, and you are going to ignore it. And you do just that with your next comment.
Looking at benchmarks, it comes within 10% of Ryzen's single-core score with both chips overclocked. Sure, it gets pretty much demolished in MC, but that's a pretty unimpressive result.
You dismiss and downplay the obvious strengths of Ryzen: the 1700 has 8 cores, is cheaper than the i7-7700K, more than doubles the 990X's multithreading score, and beats the i7-7700K's by ~50%. But you can't be bothered with that 'unimpressive' result.
that's still a SIX-year-old chip matching or beating a chip released this year.
Ryzen's single-thread performance is limited by its slower frequency and inability to reach higher clocks. Despite this, there are games in which Ryzen performs very close to, and even beats, the i7-7700K. Clock for clock, core for core, the Ryzen 1500X is only 8.5% slower than the 7700K, and 8.5% faster in multithreading. My 5-year-old i7-3770K@4.6GHz has a single-thread score of 171 vs. the 7900X@4.6GHz at 196, which is just 14.62% faster than my 5-year-old CPU.
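Just to show the arithmetic behind those percentages, here's a trivial sketch using the scores quoted above:

```python
# Percent advantage of score_a over score_b, relative to the slower chip.
def percent_faster(score_a: float, score_b: float) -> float:
    return (score_a - score_b) / score_b * 100.0

# Single-thread scores from my runs: 3770K@4.6GHz = 171, 7900X@4.6GHz = 196.
print(f"7900X vs 3770K: {percent_faster(196, 171):.2f}% faster")  # ~14.62%
```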
I doubt there are viruses or malware on a brand-new computer, and in gaming, Ryzen having many more cores than needed for that task ensures that background tasks likely aren't getting in the way.
I'm glad you were there to inspect their testing methodology. Wait, you are basing your opinions on hearsay. It won't hold up in court, and it won't hold up with me either! Now you choose to call the core count "unimpressive" in your argument because now multithreading capability matters to you when making your point. You just pick and choose what to ignore and what to acknowledge whenever it suits you. Clearly not a strong foundation of science on display here! And now for my thoughts on your final statement.
Stop using the slower RAM as an excuse, as it's entirely possible that the 990X was indeed faster at processing the assets required for that game. And even if that isn't the case, 3200MHz RAM is considerably more expensive than 2133MHz RAM.
No one denied that, but you don't even know what game it was. And you definitely don't know the testing conditions; regardless, you want to speculate away using biased opinions to eke out some kind of win with what amounts to more of a rant than scientific reasoning.
Edit: PCPartPicker part list: https://pcpartpicker.com/list/Gph9kT
Price breakdown by merchant: https://pcpartpicker.com/list/Gph9kT/by_merchant/
Memory: G.Skill - Ripjaws V Series 16GB (2 x 8GB) DDR4-3200 Memory ($138.99 @ Newegg)
Memory: Mushkin - Silverline 16GB (2 x 8GB) DDR4-2133 Memory ($119.99 @ Newegg)
Total: $258.98
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2017-09-22 22:54 EDT-0400
Clock for clock, core for core, the Ryzen 1500X is only 8.5% slower than the 7700K, and 8.5% faster in multithreading.
That is only true in a review that compares overclocked RyZen vs stock Intel, on a workload favorable to RyZen. Stock vs stock, and on general applications, the gap is 10-20% clock-for-clock and core-for-core.
...overall gains in cpu performance on the intel side of things haven't come in leaps and bounds since 2012.
compare the 3770k (2012) to the 7700k (2017): it's roughly a 30% performance increase...in 5 years.
whacky do.
the graphics card side of things is much more impressive..
in this regard, as was mentioned many years ago with the move to dual and then more cores, multicore would pave the way for the future of cpus (and therefore programming as well) because of GHz limitations. finally in 2017 we have a cpu that can achieve 5GHz...i remember talking about this in 2008; 5GHz seemed beyond reach back then, when a good intel cpu was around 4.0GHz.
anyway my main point is cpu gains on the single core front have been limited since they started to reach GHz limitations / process limitations. an economy of scale way to increase processing power is obviously more cores.
My i7-3770K@4.6GHz measures up against the stock i7-7700K fairly well in some tests: 171/194; 848/984. Mind you, my testing environment was less than ideal, with programs running while I was testing, and far from a fresh installation of Windows 10. Of course, that's why we get K processors: for longevity. I also agree that more cores increase throughput!
Edit: 171/196 vs. the similarly overclocked 7900X@4.6GHz
certainly speaks for the longevity of the cpu, yes. interesting results you show...just doing the math with your scores, it's about 13% behind the latest 7700k in single-thread in that test. fyi, the 30% figure i took from userbenchmark...while not regarded as particularly accurate, for me it gives a ballpark rating, and is at least based on a large sample of cpu scores / benchmarks / gaming etc. people seem to be remarking that there are unfair comparisons with biased tests and so on (comparing ryzens to intel etc), but i'm not arguing anything about that here...it's a mixed bag of results when they are compared anyway.
and yeah, gotta compare apples with apples: performance per clock. the extra oomph of the 5.0GHz-ish clocks sure doesn't hurt. there are some bench scores floating around for the 8700k @ 4.7GHz; i'm hoping to achieve this myself. i too have been caught up in the whole ryzen phenomenon, simply because it's refreshing to have more products on the market. i've had my eye on coffee lake, as i vowed to myself my next cpu would be a 6-core, and i'm too cheap to buy into HEDT. also, mainstream adoption of 6-core will potentially increase its relevance in day-to-day usage.
anyway, just my thoughts.
but a question i have is about the "ring-bus" issue and the mesh interconnect...something that affects latency. apparently the ringbus is still the faster of the two? and coffee lake cpus will have it? that's my question.
...and just to add that the 'more cores are better' argument is still valid, as it works when programs are designed to utilise it. can't just rest on your laurels (intel), relying on the ipc/architecture advantage.
are we really going to be discussing single-core performance in 5 or 10 or 15 years' time? for like 15-30% improvement? in cinebench? :lol: multithreading surely has more potential than that...
but it goes back to longevity....if improvements are too rapid, obsolescence becomes a factor....can't win
but a question i have is about the "ring-bus" issue and the mesh interconnect...something that affects latency. apparently the ringbus is still the faster of the two? and coffee lake cpus will have it? that's my question.
Ringbus has lower latency than the mesh interconnect. Overclocking the mesh can significantly help in some circumstances. I don't know which the new 6-cores will have. As far as benchmarks go, more is better! Give me as much data as you can, is my line of thinking. It helps keep you from being steered in one direction that can be misleading or just plain wrong. I like consistent results with proven scientific methodology across a wide range of tests. Holding onto 1 review and ignoring 5 others is just ridiculous. And seeking something out just because you want it to be true, then basing all your arguments on it, isn't the best way to go about it. Sometimes you are forced to make assumptions for lack of more data, and that's fine as long as you remain open to future or new evidence. As for the Ryzen phenomenon, I want them to be successful! I wish there were 10 more companies making CPUs! The market has been stagnant without competition, and ultimately unexciting. I want the best bang for my buck; like you said, HEDT isn't worth it for what I use my computer for. Intel mainstreaming more cores will apply more pressure on programmers to adjust for this as well.
Clock for clock, core for core, the Ryzen 1500X is only 8.5% slower than the 7700K, and 8.5% faster in multithreading.
That is only true in a review that compares overclocked RyZen vs stock Intel, on a workload favorable to RyZen. Stock vs stock, and on general applications, the gap is 10-20% clock-for-clock and core-for-core.
The comparison I made is with the 1500X@3.5GHz and the 7700K@3.5GHz in Cinebench single- and multithreaded workloads, which is fine.
but a question i have is about the "ring-bus" issue and the mesh interconnect...something that affects latency. apparently the ringbus is still the faster of the two? and coffee lake cpus will have it? that's my question.
...and just to add that the 'more cores are better' argument is still valid, as it works when programs are designed to utilise it. can't just rest on your laurels (intel), relying on the ipc/architecture advantage.
CoffeeLake uses the ringbus. CoffeeLake is based on the same design principles as SkyLake/KabyLake.
Moar cores are not always better. Adding moar cores, where each core has similar performance, is good.
Replacing a small number of strong cores with a large number of weaker cores is only better in some cases.
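A minimal sketch of when that trade-off works, using an Amdahl-style runtime model with per-core speed; the core counts and the 40% speed deficit are assumed numbers, purely for illustration:

```python
# Runtime model: serial part runs on one core of speed s; the parallel part
# (fraction p) spreads across n cores of speed s. Lower result is faster.
def runtime(p: float, n: int, s: float) -> float:
    return (1.0 - p) / s + p / (s * n)

# Hypothetical: 4 strong cores vs 16 cores that are each 40% slower.
for p in (0.5, 0.9, 0.99):
    strong = runtime(p, 4, 1.0)
    weak = runtime(p, 16, 0.6)
    winner = "strong" if strong < weak else "weak"
    print(f"p={p:.2f}: 4 strong = {strong:.3f}, 16 weak = {weak:.3f} -> {winner} wins")
# Only highly parallel workloads (large p) favor the many-weak-cores chip.
```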
The comparison I made is with the 1500X@3.5GHz and the 7700K@3.5GHz in Cinebench single- and multithreaded workloads, which is fine.
And the review is not comparing the 1500X@3.5GHz and the 7700K@3.5GHz, but the 1500X@3.5GHz (plus interconnect OC) and the 7700K@3.5GHz. That is how they managed to reduce the IPC gap from 11% to 8.5%.
Ringbus has lower latency than the mesh interconnect. Overclocking the mesh can significantly help in some circumstances.
Latency depends on the number of cores. Buses and rings scale nonlinearly with the number of cores attached; meshes and tori scale linearly. That is the reason why designs with large core counts (e.g. 128 cores or more) always use meshes and tori.
Overclocking the mesh on SKL-X reduces the latency, and this always improves performance on latency-sensitive benchmarks.
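A toy hop-count model makes the scaling point concrete (idealized uniform traffic, no contention; the constants are textbook approximations, not SKL-X measurements):

```python
import math

# Average shortest-path hops between two random nodes:
def ring_avg_hops(n: int) -> float:
    return n / 4.0            # bidirectional ring: ~n/4 hops on average

def mesh_avg_hops(n: int) -> float:
    k = math.sqrt(n)          # k x k 2D mesh
    return 2.0 * k / 3.0      # average Manhattan distance ~2k/3 hops

for n in (4, 16, 64, 256):
    print(f"{n:3d} cores: ring ~{ring_avg_hops(n):5.1f} hops, mesh ~{mesh_avg_hops(n):4.1f} hops")
# At small core counts the ring's short, simple path wins; at large core
# counts the mesh's sqrt(n) growth beats the ring's linear growth.
```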
And the review is not comparing the 1500X@3.5GHz and the 7700K@3.5GHz, but the 1500X@3.5GHz (plus interconnect OC) and the 7700K@3.5GHz. That is how they managed to reduce the IPC gap from 11% to 8.5%.
If you mean that simply putting in a stick of 3000MHz RAM and having it run at 2,933MHz counts as overclocking the interconnect, I will still consider the test valid, because this shows a real-world scenario, which the average person would encounter by simply popping in a stick of RAM. Note that Intel already beats Ryzen's internal latency by wide margins, and this only gets Ryzen closer to the latency Intel already enjoys.
Also, one must consider that the new AGESA 1.0.0.6 allows even greater compatibility with higher-speed RAM. 3200MHz is now much more widely compatible and typically used in all testing, and if used in this test it would have shifted the results to favor Ryzen by a greater margin. But I think it's fine for the comparison I wish to make here. The fact that Ryzen's Infinity Fabric uses RAM speed as its reference means the faster the RAM, the lower the latency of the interconnect. But again, in real-world scenarios this is as simple as someone popping in RAM and having it work at its rated speed. The byproduct of higher RAM speed is that the latency gets closer to the much lower latency Intel already enjoys, so it's more than a fair comparison.
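As I understand it, Zen runs the fabric clock at half the effective DDR4 transfer rate, so the relationship is easy to sketch (a simple illustration, not an official AMD formula):

```python
# Zen ties the Infinity Fabric clock (FCLK) to the memory clock:
# effective DDR4 rate / 2. Faster RAM therefore lowers inter-CCX latency.
def fabric_clock_mhz(ddr4_rate_mts: int) -> float:
    return ddr4_rate_mts / 2.0

for rate in (2133, 2666, 2933, 3200):
    print(f"DDR4-{rate} -> ~{fabric_clock_mhz(rate):.0f} MHz Infinity Fabric clock")
# DDR4-2133 gives ~1066 MHz fabric; DDR4-3200 gives ~1600 MHz, a ~50% jump.
```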
If you mean that simply putting in a stick of 3000MHz RAM and having it run at 2,933MHz counts as overclocking the interconnect, I will still consider the test valid, because this shows a real-world scenario, which the average person would encounter by simply popping in a stick of RAM.
No one said it is not a real-world scenario. The point was a different one: the chip is overclocked, but it is labeled as stock in the graphs.
It is also a real-world scenario to have the 7700K overclocked, but they only test the i7 at stock. It is also a real-world scenario to overclock the interconnect on SKL-X chips, but the Arstechnica guys only overclock the AMD RyZen and ThreadRipper chips.
BIASED!
goldstone77 :
Also, one must consider that the new AGESA 1.0.0.6 allows even greater compatibility with higher-speed RAM. 3200MHz is now much more widely compatible and typically used in all testing, and if used in this test it would have shifted the results to favor Ryzen by a greater margin. But I think it's fine for the comparison I wish to make here. The fact that Ryzen's Infinity Fabric uses RAM speed as its reference means the faster the RAM, the lower the latency of the interconnect. But again, in real-world scenarios this is as simple as someone popping in RAM and having it work at its rated speed. The byproduct of higher RAM speed is that the latency gets closer to the much lower latency Intel already enjoys, so it's more than a fair comparison.
Looking at the new AGESA 1.0.0.6 we can see how crippled Ryzen has been when tested in the past, and how beneficial faster RAM is for Ryzen.
As I have mentioned a couple of times, AGESA 1.0.0.6 doesn't provide performance improvements for stock RAM; it simply provides better overclocking of RAM. Intel also benefits from higher-clocked RAM.
As I have mentioned a couple of times, AGESA 1.0.0.6 doesn't provide performance improvements for stock RAM; it simply provides better overclocking of RAM. Intel also benefits from higher-clocked RAM.
When making a comparison clock for clock, the test is fine. And going from 2133MHz to 3200MHz offers 15% more FPS on top of what Intel gains, because Infinity Fabric uses RAM speed as its reference, thus closing the internal latency gap that Intel already enjoys by wide margins! So the test is more than fair to use as a comparison, despite your complaints of bias where none exists.
We have talked about this multiple times: of course AMD wants us to use their CPUs at higher resolutions so they can hide their weak single-core performance, because at low resolutions the Intel advantage is significant, and it only trails off as you run games at higher resolutions and detail settings (because the graphics card becomes the performance bottleneck), duh. The Intel Core i7-7700K outpaced the 1700 by around 40fps in the CPU-intensive Ashes of the Singularity test.
The 7700K's good performance continued into single-threaded tests: its 472-point result in POVRay was easily ahead of the Ryzen chip's 315-point result, and it is nearly 60 points better in Cinebench. The Core i7 is a better overclocker than the AMD chip, too, and its power consumption isn't much higher than the Ryzen 7 1700's. The only area where the Core i7-7700K falls behind is in multi-threaded benchmarks; don't let fake reviews fool you.
When you put out a CPU that is aimed at tasks that are not primarily gaming or "single threaded", it is *obvious* you will test it in such disciplines as an addition. Then, you will not care how the CPU actually achieves the performance, only that it achieves a particular threshold at the price range where it will sell. In this particular case, TR and the 7900X were both put through a shizzle ton of tests across 50+ disciplines (if not more), and in all they were trading blows. There is no clear winner, and the answer in every single review (as far as I have read) is that TR justifies its price with no issues facing the 7900X.
If you want to demerit AMD for achieving a milestone just because "it doesn't game as well as the Intel", that is on you, not on the data gathered nor the tasks at hand. If you can think of a test that is actually needed and should be added, why not make the suggestion for the next round of tests here at Toms or another site that is good at reviewing stuff?
Also, if you have doubts about methodology, that is another story. Not all sites are trustworthy, and some draw conclusions in a weird manner, but as long as they expose their testing methodology and you can reproduce their figures, you will know how those numbers came to be and have more information to take in and make a decision.
Hell, I still haven't seen a *SINGLE* site that has run OBS (or any streaming software) while testing games to see how the CPUs behave. People have whined about 4K SO FRIGGIN' MUCH, but I haven't seen a single tear shed about streaming. So, I'll have to shed those manly tears from now on.
My complaint is not about reviews whose tone shows an evident bias toward a given brand, where when Intel wins by a huge margin such as 45% it is dismissed with "Zen gives enough performance" or a similar claim, but when AMD wins by a margin of 30% the tone changes to something like "the 1950X completely smashes the i9-7900X".
My main complaint is when reviews use dirty tricks to favor one brand over the other. Since the Zen launch, and including AMD pre-release demos, we have seen the following dirty tricks:
■ quad-channel memory disabled on Intel chips
■ turbo disabled on Intel chips
■ testing with workloads that have a bug favoring Zen
■ most tests being workloads where Zen shines, such as rendering, rendering, rendering, and rendering
■ testing at 4K to generate huge GPU bottlenecks and favor Zen in games
■ testing memory-bound workloads with the memory subsystem crippled on the Intel side
■ testing with compiler/flag combinations that reduce Intel chips' performance by 40-60%
■ testing a concrete model of Intel CPU on motherboards with known compatibility issues with that concrete model of CPU
■ testing Intel engineering samples instead of retail chips, and labeling the chip on graphs as if it were retail
■ testing overclocked AMD chips vs stock Intel chips, and labeling the chips on the graphs as if both were stock
■ testing custom workloads that favor Zen instead of existing official workloads
■ testing Intel platforms with beta BIOS, and not retesting later with the final BIOS
■ And so on
1. Less than 1% performance difference in the relevant benchmarks from quad channel memory per testing at 3 different sources. It looks good on AIDA, but beyond that, it is mostly superfluous outside professional level workloads that need the raw capacity.
2. It was also disabled on Ryzen in those tests, are we going to make it fair, or edge one side over the other? That was per clock testing, and it was relevant because the clocks were locked (which is what any reviewer would do to test per clock).
3. Source? There were no workloads favoring Zen in any testing, and the sleep bug was discovered by a random guy on the internet. Clearly AMD never put their PCs to sleep.
4. What about the gameplay with OBS running? What about the photo editing and video editing workloads where Intel normally shines?
5. Both sides were tested at 4K (which is relevant), you can complain all you want about it, but I want to know how a top end system performs running top end settings. I could not care less how a $4k PC build runs @ 320x240 or some other insanely bad resolution.
6. The memory system was never crippled on Intel processors in any tests.
7. There were no compiler optimizations beyond accommodating the removal of libquantum, which heavily favors Intel.
8. If Intel has known compatibility issues in their product stack, that is not an issue of AMD's.
9. They tested retail chips...AMD does not get hands on with Intel engineering samples (I cannot even make a leap in logic to a world where Intel would legitimately send AMD engineering samples...they went to court over that very idea).
10. This is a fallacy, AMD never tested an overclocked chip against a stock Intel chip.
11. Uhh...SPEC is official.
12. They tested what was available...How many reviewers have gone back to test Ryzen since the AGESA update? None? No sympathy.
13. Considering everything listed to this point in this rant was bad information, or flat out wrong, I am curious to hear what "And so on" could possibly be.
1. It is difficult to accept that AMD disabled half the memory channels on the Intel platform only to get a less-than-1% advantage. Moreover, here is a real test where quad-channel provides a nice 7% performance advantage.
2. Turbo 3 was disabled on Intel chips; turbo was enabled on RyZen. That is why reviews couldn't reproduce AMD's performance claims.
3. Early leaks and benches used CPU-Z. A bug was found that affected scores on chips with 256KB L2 (such as Broadwell-E) and gave an extra performance advantage to RyZen over Broadwell-E. The bug was corrected later in a new version of CPU-Z.
There were also bugs that affected several claimed overclocking records made with RyZen, to the point that HWBOT banned all those submitted RyZen scores because the performance measured was fake.
5. Testing at 4K generates a GPU bottleneck and hides the performance deficits of RyZen. That is why the so-called "CPU tests" involve low resolutions. Those 720p tests aren't made because people play games at those resolutions, but for a different reason; a technical reason which has been given a dozen times.
6. It was done in AMD demos involving Broadwell-E and Broadwell Xeons.
7. The libquantum subscore was removed for both Intel and AMD. The cheating was in the rest of the subtests. For instance, the 403.gcc subscore was 40% slower on Broadwell-E Xeons and about 60% slower on Skylake Xeons thanks to special compiler/flag choices. Curiously, using those choices the biased review managed to reproduce AMD's official results for Broadwell chips.
8. No one said it is an issue for AMD. It is an issue for the reviewer that chose the only known incompatible X299 motherboard for his review.
9. Guru3d or HU/Techspot have used engineering samples of Intel chips in their reviews and comparisons of Intel vs AMD.
10. Guru3D, Techspot, Arstechnica do compare OC AMD vs stock Intel.
11. No one mentioned SPEC in this point. My claim was about AMD using a custom workload for Blender. Using one of the standard Blender workloads, the picture is different: RyZen loses instead of winning.
12. Virtually every review site has retested Zen with the latest AGESA/BIOS. Some have published special articles comparing new versions of AGESA/BIOS for RyZen, and a few have published special articles detailing the changes made in new BIOS/AGESA. On the other hand, I only know of a pair of sites that retested the i9 with the final BIOS. Everyone else only published the launch beta-BIOS results, and some didn't even mention the use of a beta BIOS.
1. You cherry-picked 7-Zip.
2. Several reviewers reproduced AMD's results...HardOCP, OC3D, and others.
3. CPU-Z is not considered a real world benchmark by anyone of note.
5. Testing at 4K shows what the performance is at maximum resolution. I do not care how a game plays at 720p on minimum settings. Those benchmarks are not indicative of a processor's performance at 1440p, 21:9, or 4K, because many tests have shown that performance at higher resolutions is not reflected by performance at lower resolutions. This is a classic fallacy that many Intel grognards put forth as justification for trying to create a situation that still shows some sort of optimal scenario for that processor.
6. Source? I have never seen any evidence of this.
7. Source? I have read AMD's required procedures for reproducing the benchmark, and it makes no mention of these specific requirements.
8. X299 board for X299 processor. If there is an issue there, then that is on Intel.
9. That is not an issue for AMD. AMD does not have Intel ES chips.
10. Reviewers do what reviewers do, which is why I like computerbase.de
11. Using whose blender settings? Blender is so ubiquitous that settings vary. If you are going to quote Gamers Nexus, I will dismiss their claims, same as you would dismiss Adored TV. Known shills are known shills.
12. Virtually any? LOL...ok. OC3D is the only one I know in mainstream media. There may be others that are fringe sites...but they do not reach most sets of eyes looking for reviews now do they?
When making a comparison clock for clock, the test is fine. And going from 2133MHz to 3200MHz offers 15% more FPS on top of what Intel gains, because Infinity Fabric uses RAM speed as its reference, thus closing the internal latency gap that Intel already enjoys by wide margins! So the test is more than fair to use as a comparison, despite your complaints of bias where none exists.
And my points have been ignored once again...
The review is biased because it labels an overclocked chip as "stock" in the graphs.
The review is biased because it overclocks the AMD chip but doesn't overclock the Intel chip.
The review is biased because it overclocks the interconnect on RyZen and ThreadRipper to reduce the latency, but doesn't overclock the interconnect on SKL-X chips.
The 720p testing has a different reason. The reason has been explained a dozen times. Ignoring the reason will not stop reviews from performing 720p and 1080p tests.
The 720p testing has a different reason. The reason has been explained a dozen times. Ignoring the reason will not stop reviews from performing 720p and 1080p tests.
It does have a different reason: to create an artificial benchmark that proves nothing. Performance there does not accurately indicate performance in anything.
It does have a different reason: to create an artificial benchmark that proves nothing. Performance there does not accurately indicate performance in anything.
Precisely; those tests are named "CPU tests" because they give us the true performance of the CPU...
It does have a different reason: to create an artificial benchmark that proves nothing. Performance there does not accurately indicate performance in anything.
Believe it or not, some people play at 1080p with high refresh rates. Mind blown.
Testing at 720p lets you see the maximum framerate the CPU can push. Let's say that in a year, 200Hz monitors become standard for 1080p. People upgrade GPUs more often than CPUs. While Ryzen might only be able to push 110fps, an Intel chip might push 160fps or more.
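A crude bottleneck model shows why those low-resolution numbers matter later; the CPU caps echo the 160/110 figures above, and the GPU caps are made up for illustration:

```python
# Delivered FPS is capped by whichever side is slower: min(CPU cap, GPU cap).
def delivered_fps(cpu_cap: int, gpu_cap: int) -> int:
    return min(cpu_cap, gpu_cap)

gpu_caps = {"720p": 300, "1080p": 180, "4K": 60}  # hypothetical GPU limits
cpu_fast, cpu_slow = 160, 110                     # hypothetical CPU limits

for res, gpu in gpu_caps.items():
    print(f"{res:>5}: fast CPU {delivered_fps(cpu_fast, gpu):3d} fps, "
          f"slow CPU {delivered_fps(cpu_slow, gpu):3d} fps")
# At 4K both CPUs read 60 fps (GPU-bound); drop the resolution, or buy a
# faster GPU later, and the CPU gap emerges.
```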
1. You cherry-picked 7-Zip.
2. Several reviewers reproduced AMD's results...HardOCP, OC3D, and others.
3. CPU-Z is not considered a real world benchmark by anyone of note.
5. Testing at 4K shows what the performance is at maximum resolution. I do not care how a game plays at 720p on minimum settings. Those benchmarks are not indicative of a processor's performance at 1440p, 21:9, or 4K, because many tests have shown that performance at higher resolutions is not reflected by performance at lower resolutions. This is a classic fallacy that many Intel grognards put forth as justification for trying to create a situation that still shows some sort of optimal scenario for that processor.
6. Source? I have never seen any evidence of this.
7. Source? I have read AMD's required procedures for reproducing the benchmark, and it makes no mention of these specific requirements.
8. X299 board for X299 processor. If there is an issue there, then that is on Intel.
9. That is not an issue for AMD. AMD does not have Intel ES chips.
10. Reviewers do what reviewers do, which is why I like computerbase.de
11. Using whose blender settings? Blender is so ubiquitous that settings vary. If you are going to quote Gamers Nexus, I will dismiss their claims, same as you would dismiss Adored TV. Known shills are known shills.
12. Virtually any? LOL...ok. OC3D is the only one I know in mainstream media. There may be others that are fringe sites...but they do not reach most sets of eyes looking for reviews now do they?
1. I chose 7-Zip to demonstrate that the performance difference is not less than 1%.
2. I didn't check whether the reviews you mention did what you say. But any review or person that disables Turbo 3 on the Intel chip can reproduce the performance. No mystery here! The point was that AMD crippled the performance of Intel chips with dirty tricks such as disabling Turbo 3.
3. CPU-Z may not be serious, but it was broadly used in pre-launch Zen leaks to give a false impression of its performance.
5. Testing at 4K is not testing the CPU, because of GPU bottlenecks. Testing at 1080p or 720p is the correct way to test the gaming performance of a CPU. This is all well-known and an industry standard. Your beloved computerbase.de tests games at 720p.
6. I don't have six-month-old links at hand. The links are available on earlier pages of the thread.
7. This was extensively discussed at RWT, where SPEC scores of non-crippled Xeons were given.
8. Once again: It is an issue for the reviewer that chose the only known incompatible X299 motherboard for his review.
9. Once again: it is an issue for the reviewers that tested engineering samples of Intel chips instead of using retail chips.
10. Yes, reviews can do weird things and can be biased, and we can mention how wrong or biased they are. So what is the problem here?
11. For instance, using bmw27 instead of the custom "RyZen" workload. What is more, third-party reviews using the "RyZen" workload found that BDW-E was faster than the 1800X, contrary to what AMD promised in demos. We now know that AMD used dirty tricks to cripple BDW-E performance.