Intel's Future Chips: News, Rumours & Reviews


juanrga



AMD's record in the same timeframe is even worse.



GPUs are throughput machines that work on parallel workloads. Increasing performance is as easy as adding moar cores.



Except that moar cores are limited by Amdahl's law. Otherwise, hundred-core CPUs would be the norm today.
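A quick sketch of the ceiling Amdahl's law imposes (the 95% parallel fraction below is just an assumed example, not a measured workload):

```python
# Amdahl's law: speedup on n cores when only a fraction p of the work is parallel.
# p = 0.95 below is a hypothetical parallel fraction, not a measured one.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for cores in (2, 4, 8, 16, 64, 256):
    print(f"{cores:>3} cores -> {amdahl_speedup(0.95, cores):.1f}x speedup")
# Even with 95% parallel work, 256 cores give under 19x, approaching the
# 1/(1-p) = 20x ceiling -- which is why hundred-core desktop CPUs aren't the norm.
```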
 

juanrga



I edited the post. Thanks for the correction.
 

goldstone77



Regardless of what you have said
You have already decided that what I've said, which was based on reason and factual evidence, doesn't matter to you and that you are going to ignore it. And you do just that with your next comment.
Looking at benchmarks it comes within 10% of Ryzen's single core score with both chips overclocked. Sure, it gets pretty much demolished in MC, but that's a pretty unimpressive result.
You dismiss and downplay Ryzen's obvious strengths: the 1700 has 8 cores, is cheaper than the i7-7700K, more than doubles the 990X's multithreading score, and beats the i7-7700K's by roughly 50%. But you can't be bothered with that "unimpressive" result.
that's still a SIX year old chip matching or beating a chip released this year.
Ryzen's single-thread performance is limited by its lower frequency and its inability to reach higher clocks. Despite this, there are games in which Ryzen performs very close to, and sometimes even beats, the i7-7700K. Clock for clock and core for core, the Ryzen 1500X is only 8.5% slower than the 7700K single-threaded, and 8.5% faster in multithreading. My 5-year-old i7-3770K@4.6GHz scores 171 single-thread versus 196 for the 7900X@4.6GHz, so the new chip is just 14.62% faster than my 5-year-old CPU.
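For reference, the arithmetic behind those percentages, using only the scores quoted above:

```python
# Relative single-thread performance from the scores quoted above.
old_score, new_score = 171, 196    # i7-3770K @ 4.6GHz vs. 7900X @ 4.6GHz
faster = (new_score - old_score) / old_score * 100   # how much faster the 7900X is
behind = (new_score - old_score) / new_score * 100   # how far behind the 3770K is
print(f"7900X is {faster:.1f}% faster; the 3770K is {behind:.1f}% behind")
# -> 7900X is 14.6% faster; the 3770K is 12.8% behind
```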
I doubt there are viruses or malware on a brand-new computer, and in gaming Ryzen has more cores than the task needs, so background tasks likely aren't getting in the way.
I'm glad you were there to inspect their testing methodology. Wait, you are basing your opinions on hearsay. It won't hold up in court, and it won't hold up with me either! Now you bring the "unimpressive" core count into your argument because suddenly multithreading capability matters to you when making your point. You just pick and choose what to ignore and what to acknowledge whenever it suits you. Clearly not a strong scientific foundation on display here! And now for my thoughts on your final statement:
Stop using the slower RAM as an excuse, as it's entirely possible that the 990x was indeed faster at processing the assets required for that game. And even if that isn't the case, 3200MHz ram is considerably more expensive than 2133MHz RAM.
Now you want to excuse handicapping or crippling Ryzen's performance by pointing out that 3200MHz RAM is more expensive, but so are Intel CPUs.
it's entirely possible that the 990x was indeed faster at processing the assets required for that game.
No one denied that, but you don't even know what game it was. And you definitely don't know the testing conditions, yet regardless of that you want to speculate away, using biased opinions to eke out some kind of win with what amounts to more of a rant than scientific reasoning.

Edit: PCPartPicker part list: https://pcpartpicker.com/list/Gph9kT
Price breakdown by merchant: https://pcpartpicker.com/list/Gph9kT/by_merchant/

Memory: G.Skill - Ripjaws V Series 16GB (2 x 8GB) DDR4-3200 Memory ($138.99 @ Newegg)
Memory: Mushkin - Silverline 16GB (2 x 8GB) DDR4-2133 Memory ($119.99 @ Newegg)
Total: $258.98
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2017-09-22 22:54 EDT-0400
 

juanrga



That is only true in a review that compares overclocked RyZen against stock Intel, and on a workload favorable to RyZen. Stock vs. stock, and on general applications, the gap is 10-20% clock-for-clock and core-for-core.
 

adamsleath



certainly speaks for the longevity of the cpu, yes. interesting results you show... just doing the math with your scores, it's about 16% behind the latest 7700k in single-thread in that test. fyi, the 30% figure i took from userbenchmark... while not regarded as particularly accurate, for me it gives a ballpark rating, and is at least based on a large sample of cpu scores / benchmarks / gaming etc. people seem to be remarking that there are unfair comparisons with biased tests and so on (comparing ryzens to intel etc), but i'm not arguing anything about that here... it's a mixed bag of results when they are compared anyway.

and yeah, gotta compare apples with apples: performance per clock. the extra oomph of the 5.0GHz-ish clocks sure doesn't hurt. there are some bench scores floating around for the 8700k @ 4.7GHz; i'm hoping to achieve this myself. i too have been caught up in the whole ryzen phenomenon, simply because it's refreshing to have more products on the market. i've had my eye on coffee lake, as i vowed to myself my next cpu would be a 6-core, and i'm too cheap to buy into HEDT. also, mainstream adoption of 6-core will potentially increase its relevance in day-to-day usage.

anyway, just my thoughts.

but a question i have is about the "ring-bus" issue and the mesh interconnect... something that affects latency. apparently the ringbus is still the faster of the two? and coffee lake cpus will have it? that's my question.

...and just to add that the 'more cores are better' argument is still valid, as it works when programs are designed to utilise them. can't just rest on your laurels (intel), relying on the ipc/architecture advantage.

are we really going to be discussing single core performance in 5 or 10 or 15 years' time? for like a 15-30% improvement? in cinebench? :lol: multithreads surely have more potential than that...

but it goes back to longevity....if improvements are too rapid, obsolescence becomes a factor....can't win :wahoo:
 

goldstone77



Ringbus has lower latency than the mesh interconnect. Overclocking the mesh can help significantly in some circumstances. I don't know which interconnect the new 6-cores will have.

As far as benchmarks go, more is better! Give me as much data as you can is my line of thinking. It keeps you from being steered in one direction that can be misleading or just plain wrong. I like consistent results with proven scientific methodology across a wide range of tests. Holding onto 1 review and ignoring 5 others is just ridiculous. And seeking something out until you find it, just because you want it to be true, and then basing all your arguments on it isn't the best way to go about it. Sometimes you are forced to make assumptions for lack of more data, and that's fine as long as you stay open to new evidence.

As for the Ryzen phenomenon, I want them to be successful! I wish there were 10 more companies making CPUs! The market has been stagnant without competition, and ultimately unexciting. I want the best bang for my buck; like you said, HEDT isn't worth it for what I use my computer for. Intel mainstreaming more cores will apply more pressure on programmers to adjust as well.
 

goldstone77



The comparison I made was between the 1500X@3.5GHz and the 7700K@3.5GHz using Cinebench single- and multithreaded workloads, which is fine.
 

juanrga



Coffee Lake uses the ringbus. Coffee Lake is based on the same design principles as Skylake/Kaby Lake.

Moar cores is not always better. Adding moar cores, where each core keeps similar performance, is good.

Replacing a small number of strong cores with a large number of weaker cores is only better in some cases.
 

juanrga



And the review is not comparing 1500X@3.5GHz and 7700K@3.5GHz but

1500X@3.5GHz (plus interconnect OC) and 7700K@3.5GHz.

That is how they managed to reduce the IPC gap from 11% to 8.5%.
 

juanrga



Latency depends on the number of cores. Buses and rings scale poorly as the number of attached cores grows, while meshes and tori scale much better. That is why designs with large core counts (e.g. 128 cores or more) always use meshes or tori.

Overclocking the mesh on SKL-X reduces the latency. And this always improves performance on latency-sensitive benchmarks.
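As a rough illustration (treating latency as proportional to worst-case hop count, which ignores per-hop clocks, queuing and routing):

```python
import math

# Worst-case hop count (network diameter) as a crude proxy for latency.
def ring_diameter(n):
    return n // 2                  # bidirectional ring of n nodes

def mesh_diameter(n):
    k = math.isqrt(n)              # square 2D mesh, n = k * k nodes
    return 2 * (k - 1)

for n in (4, 16, 64, 144):
    print(f"{n:>3} cores: ring diameter {ring_diameter(n):>2}, mesh diameter {mesh_diameter(n):>2}")
# Small core counts: the ring is competitive (and cheaper per hop).
# Large core counts: the ring's diameter blows up, which is why big many-core
# designs use meshes or tori instead.
```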
 

goldstone77


If you mean that simply putting in a stick of 3000MHz RAM and having it run at 2,933MHz counts as overclocking the interconnect, I still consider the test valid, because it reflects a real-world scenario the average person would encounter by simply popping in a stick of RAM. Note that Intel already beats Ryzen's internal latency by a wide margin, and this only gets Ryzen closer to the latency Intel already enjoys.

Ryzen 5 review: AMD muscles in on Intel's i5 sweet spot. Gaming performance still an anomaly, but the Ryzen 5 1600X and 1500X are great chips.
MARK WALTON (UK) - 5/8/2017, 9:35 AM

Specs for both systems used in the test.
[images: system spec table and benchmark charts from the review]

Also, one must consider that the new AGESA 1.0.0.6 allows even broader compatibility with higher-speed RAM. 3200MHz is now much more widely compatible and is typically used in testing; if it had been used in this test, the results would have shifted further in Ryzen's favor. But I think it's fine for the comparison I wish to make here. Because Ryzen's Infinity Fabric speed uses the RAM speed as its reference, faster RAM means lower interconnect latency. Again, in a real-world scenario this is as simple as someone popping in RAM and having it run at its rated speed. The by-product of higher RAM speed is that the latency gets closer to the much lower latency Intel already enjoys, so it's more than a fair comparison.
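A minimal sketch of that coupling, assuming the first-generation Ryzen behavior where the fabric clock tracks the memory clock (half the DDR4 transfer rate):

```python
# First-gen Ryzen ties the Infinity Fabric clock to the memory clock
# (MEMCLK = DDR4 transfer rate / 2), so faster RAM also speeds up the fabric.
def fabric_clock_mhz(ddr4_transfer_rate):
    return ddr4_transfer_rate / 2  # e.g. DDR4-3200 -> 1600 MHz fabric clock

for rate in (2133, 2400, 2933, 3200):
    print(f"DDR4-{rate}: fabric clock ~{fabric_clock_mhz(rate):.0f} MHz")
# DDR4-2133 -> ~1067 MHz vs. DDR4-3200 -> 1600 MHz: a ~50% faster fabric,
# which is where the lower interconnect latency comes from.
```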

Looking at the new AGESA 1.0.0.6 we can see how crippled Ryzen has been when tested in the past, and how beneficial faster RAM is for Ryzen. Ryzen is still an infant and they continue to work out the issues, but it's getting more promising by the day.
AGESA 1.0.0.6 update given workout by AMD's OC expert
by Mark Tyson on 17 July 2017, 10:01

[benchmark charts from the HEXUS article]
 

juanrga



No one said it is not a real-world scenario. The point was a different one: the chip is overclocked, but it is labeled as stock in the graphs.

It is also a real-world scenario to have the 7700K overclocked, but they only test the i7 at stock. It is also a real-world scenario to overclock the interconnect on SKL-X chips, but the Ars Technica guys only overclock the AMD RyZen and ThreadRipper chips.

BIASED!



As I have mentioned a couple of times, AGESA 1.0.0.6 doesn't provide performance improvements with stock RAM; it simply enables better RAM overclocking. Intel also benefits from higher-clocked RAM.

[chart: Arma III CPU performance vs. RAM speed]




Note that the biggest gain is obtained in Hitman. That is no coincidence, because RyZen performs very badly in Hitman.

[chart: Hitman CPU benchmark results]


So even if Zen gets a 15% gain in Hitman, it is still very far from Intel.
 

goldstone77



When making a clock-for-clock comparison the test is fine. And going from 2133MHz to 3200MHz offers up to 15% more FPS, because the Infinity Fabric uses the RAM speed as its reference, which shrinks the internal-latency advantage Intel already enjoys by a wide margin! So the test is more than fair to use as a comparison, despite your complaints of bias where none exists.
 

8350rocks



nobody plays games on HEDT @ 720p though...
 

8350rocks



1. you cherry picked 7 zip.

2. Several reviewers reproduced AMD's results...hard OCP, OC3D, and others.

3. CPU-Z is not considered a real world benchmark by anyone of note.

5. Testing at 4K shows what the performance is at maximum resolution. I do not care how a game plays at 720p on minimum settings. Those benchmarks are not indicative of a processor's performance at 1440p, 21:9, or 4K, because many tests have shown that performance at higher resolutions is not reflected by performance at lower resolutions. This is a classic fallacy that many Intel grognards put forth to justify constructing a scenario that still looks optimal for that processor.

6. Source? I have never seen any evidence of this.

7. Source? I have read AMD's required procedures for reproducing the benchmark, and it makes no mention of these specific requirements.

8. X299 board for X299 processor. If there is an issue there, then that is on Intel.

9. That is not an issue for AMD. AMD does not have Intel ES chips.

10. Reviewers do what reviewers do, which is why I like computerbase.de

11. Using whose blender settings? Blender is so ubiquitous that settings vary. If you are going to quote Gamers Nexus, I will dismiss their claims, same as you would dismiss Adored TV. Known shills are known shills.

12. Virtually any? LOL...ok. OC3D is the only one I know in mainstream media. There may be others that are fringe sites...but they do not reach most sets of eyes looking for reviews now do they?
 

juanrga



And my points have been ignored once again...

The review is biased because it labels an overclocked chip as "stock" in the graphs.

The review is biased because it overclocks the AMD chip but doesn't overclock the Intel chip.

The review is biased because it overclocks the interconnect on RyZen and ThreadRipper to reduce the latency, but doesn't overclock the interconnect on SKL-X chips.
 

juanrga



The 720p testing has a different reason. The reason has been explained a dozen times. Ignoring the reason will not stop reviewers from performing 720p and 1080p tests.
 

8350rocks



It does have a different reason: to create an artificial benchmark that proves nothing. Performance there does not accurately indicate performance in anything else.
 

juanrga



Precisely. Those tests are named "CPU tests" because they give us the true performance of the CPU...
 

KirbysHammer



Believe it or not, some people play at 1080p with high refresh rates. Mind blown.

Testing at 720p lets you see the maximum framerate the CPU can push. Let's say that in a year 200Hz 1080p monitors become standard. People upgrade GPUs more often than CPUs. While Ryzen might only be able to push 110fps, an Intel chip might push 160fps or more.
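One way to sketch that reasoning: delivered framerate is roughly capped by the slower of the CPU and the GPU, so the 720p result shows the CPU ceiling you hit once a future GPU stops being the bottleneck. The fps numbers below are the hypothetical ones from this post, not measurements:

```python
# Delivered fps is roughly capped by the slower of the CPU and the GPU.
# 110 and 160 fps are the hypothetical CPU ceilings from the post above.
def delivered_fps(cpu_ceiling, gpu_ceiling):
    return min(cpu_ceiling, gpu_ceiling)

cpu_a, cpu_b = 110, 160            # low-res (CPU-bound) framerate ceilings
for gpu in (90, 144, 200):         # successive GPU upgrades raise the GPU ceiling
    print(f"GPU ceiling {gpu:>3} fps: CPU A -> {delivered_fps(cpu_a, gpu)} fps, "
          f"CPU B -> {delivered_fps(cpu_b, gpu)} fps")
# With a weak GPU both CPUs deliver the same fps; once the GPU can push 144-200 fps,
# only the higher CPU ceiling keeps a 144Hz/200Hz monitor fed.
```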



 

adamsleath

it matters for 1080ti owners, basically. these are the only ones really pushing 144Hz ...particularly those who prioritize hitman performance :lol:

but you can argue that if you're the 5-year-investment type, the very best cpu will go the distance.
 

juanrga


1. I chose 7-zip to demonstrate that the performance difference is not less than 1%.

2. I didn't check whether those reviews you mention did what you say. But any reviewer or person who disables Turbo 3 on the Intel chip can reproduce that performance. No mystery here! The point was that AMD crippled the performance of Intel chips with dirty tricks such as disabling Turbo 3.

3. CPU-Z may not be serious, but it was broadly used in pre-launch Zen leaks to give a false impression of its performance.

5. Testing at 4K is not testing the CPU, because of GPU bottlenecks. Testing at 1080p or 720p is the correct way to test the gaming performance of a CPU. This is all well known and a standard in the industry. Your beloved computerbase.de tests games at 720p.

6. I don't have six-month-old links at hand. The links are available on earlier pages of this thread.

7. This was extensively discussed at RWT, where SPEC scores of non-crippled Xeon were given.

8. Once again: it is an issue for the reviewer who chose the only known incompatible X299 motherboard for his review.

9. Once again: it is an issue for the reviewers who tested engineering samples of Intel chips instead of using retail chips.

10. Yes, reviews can do weird things and can be biased, and we can mention how wrong or biased they are. So what is the problem here?

11. For instance, using bmw27 instead of the custom "RyZen" workload. What is more, third-party reviews using the "RyZen" workload found that BDW-E was faster than the 1800X, contrary to what AMD promised in its demos. We now know that AMD used dirty tricks to cripple BDW-E performance.

[chart: Blender benchmark, Ryzen vs. Broadwell-E]


12. Ars Technica, Guru3D, AnandTech, LegitReviews... aren't mainstream media? LOL
 