Intel's Future Chips: News, Rumours & Reviews



Strawman. I mentioned that GloFo (it was IBM with GloFo, in the end) had a working prototype in 2015, just like Intel. You were given evidence that this was the case. No hype there, just facts. I don't see anyone claiming one process is better or worse than the other (which is what "hype" would actually apply to).

Cheers!
 

juanrga



Is it so difficult to accept that other people think differently? There are lots of customers who preferred to build an X299 system rather than an AM4 or TR4 system.

Performance is not a linear function of design and fabrication costs, so claims of "overpriced" have to be taken with suspicion. Even allowing for that, your builds above only show a 20% price difference, and the performance gap must be around that. So the RyZen build doesn't have a clear performance/price lead, and, as noted above, not everyone is bound to that metric anyway. If everyone purchased according to performance/price, luxury products wouldn't exist... but they do.

Also, you are quoting American pricing, where the 7800X is 24% more expensive than the 1700. In my country the gap shrinks to 20%. For mobos you quote a 63% price gap, but in my country it is 55% for the same boards. And so on. The price gap is not as high as in American stores.

Regarding the 10-core Intel chip, it is clearly the best from an overall point of view. People who only do rendering will prefer the 16-core ThreadRipper. People who only play games will prefer a quad-core Kabylake. But overall the 10-core is king:

In the end Intel’s i9-7900X appears to offer the best combination of singlethreaded performance, multithreaded performance, and efficiency at the $1000 price point. It’s not as fast as AMD’s Ryzen Threadripper in well threaded tasks, but it offers significantly stronger performance in single and lightly threaded workloads while remaining more efficient than the competition. More to the point its performance in multithreaded workloads is really quite good. Given the massive disadvantage it has in core count, the gap in performance is smaller than one would expect.
 

juanrga



The 1700 is so good that AMD chose the 7700K for its marketing and public demos:

[Image: AMD marketing slide]


[Image: AMD public demo photo]

 

YoAndy



Actually, cpu.userbenchmark is pretty good and really ACCURATE (it tells you what you need to know).
But of course, some people would rather bring up graphs from different websites with different opinions, made by some fan guys in a basement (trying to pull other people's legs ;)), instead of real users' benchmarks.

Did I forget to mention that some people don't care about price when they are looking for the best performance, stability, and looks? Yes, looks: for most of us a computer is more than just a piece of hardware stuck on a shelf, it is more like a work of art (sorry, but AMD motherboards always look like crap). Why do you think most new cases are made of glass and most custom liquid cooling solutions are UV reactive? And Intel's platforms (always) have better features, like better audio, liquid-cooling readiness, faster memory, RGB, etc. I build my computer every 5+ years, so price is not an issue. At this moment I'm just waiting for the ASUS Republic of Gamers Rampage VI Extreme to be released.
 

juanrga



The average IPC gap between KBL and Zen is around 15%, with Cinebench being one of the cases where the gap is smaller, whereas it is higher in others; KBL has 24% higher IPC in Audacity, for instance:

[Chart: Audacity performance per clock]


The review you picked is misleading, because they compared overclocked RyZen chips against stock Intel chips. The RyZen chips are overclocked, but they don't mention this anywhere in the review, nor in the graphs. This is one of the typical tricks played by biased media to inflate the IPC of Zen and shrink the gap to 8.5% in Cinebench. Stock vs. stock, the IPC gap is 10-11% in Cinebench.
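For reference, IPC comparisons like this normalize a benchmark score by the clock it ran at. A minimal sketch of the arithmetic; the scores and clocks below are made up for illustration, not measurements:

```python
# Illustrative only: scores and clocks are assumed, not measured.
def ipc(score, clock_ghz):
    return score / clock_ghz

kbl = ipc(200, 4.5)  # hypothetical Kaby Lake single-thread score at 4.5 GHz
zen = ipc(161, 4.0)  # hypothetical Zen single-thread score at 4.0 GHz
print(f"KBL IPC lead: {kbl / zen - 1:.1%}")  # ~10.4%
```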
 

juanrga



The hype wasn't about who got the first test chip on a given node. I already acknowledged >>>here<<< that IBM fabricated a 7nm test chip in 2015. The hype is in pretending those foundries are leading the industry, when their commercial nodes are a disaster and customers are abandoning them for the competition.

· I don't care if IBM fabricated a 7nm toy chip in 2015. That means nothing when IBM still has not fabricated a real 14nm chip.

· I don't care if Samsung fabricated a 7nm toy chip in 2015. Qualcomm just broke its partnership with Samsung because it does not want to fabricate its chips on Samsung's 7nm. Qualcomm chose TSMC 7nm.

· I don't care if GloFo fabricated a 7nm toy chip in 2015. AMD is paying millions to GloFo for access to production nodes from other companies, such as TSMC 7nm.

IBM, Samsung, and Globalfoundries are leading the hype!
 


You were also given an example of a client leaving Intel's manufacturing. One of the few clients they had left them.

So, should I also consider their node a disaster just because one of their clients thought the node didn't suit them?

Cheers!
 


I find it disingenuous that you used such a broad price range. Why did you pick the "popular" board for Ryzen but the most expensive one for X299? They are not even comparable boards in anything but the CPU and overclocking.

For example, the cheapest X299 board compared to the $90 B350 has:

· double the memory channels and double the maximum memory support;
· x16/x16/x8 GPU slots, while the Ryzen board can only do x16 (x8 if it is an A-series APU);
· 2 M.2 slots and a U.2 slot vs. 1 M.2 slot;
· 8 SATA 6Gb/s ports vs. 4;
· a superior audio codec and a superior NIC (Intel on the X299 vs. Realtek on the B350);
· 4 USB 3.1 Gen1 Type-A, 1 Gen2 Type-A, and 1 Gen2 Type-C ports vs. 3 Gen1 Type-A and 1 Gen1 Type-C.

The X299 board is absolutely superior feature-wise. It is like a car: sure, the SE is nice and performs the same as the Limited, but the Limited has more bells and whistles.

Try using boards with comparable features instead. Anyone can grab the cheapest board and compare it to the most expensive one.



The Asus ROG Zenith Extreme only has 48 lanes for the PCIe slots, as it can do x16/x8/x16/x8; however, one of the PCIe x8 slots also shares its bandwidth with the U.2 slot.

And with Intel it depends on the board. The chipset has 24 PCIe lanes of its own to use, in addition to the 44 on the CPU. The cheapest board, the MSI X299 that Goldstone picked, for example, uses 40 of the CPU's 44 PCIe lanes to give you x16/x16/x8, and the other 4 are used for M.2.

The rest of the board's lanes are pushed into USB, SATA, etc. And it really isn't your choice, it is the board manufacturer's choice. Although, again, considering the poor scaling beyond two GPUs, what gamer would buy 4 GPUs? Especially since the current performance leader, Nvidia, doesn't support more than 2?
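As a sanity check on those lane budgets, here is a minimal sketch; the slot layouts are the ones mentioned above, while the helper function and the usable-lane counts are my assumptions:

```python
# Hypothetical helper: how many CPU lanes remain after a given slot layout.
def lanes_left(cpu_lanes, slot_widths, m2_lanes=0):
    used = sum(slot_widths) + m2_lanes
    if used > cpu_lanes:
        raise ValueError(f"layout needs {used} lanes, CPU only has {cpu_lanes}")
    return cpu_lanes - used

# Skylake-X 44-lane CPU: x16/x16/x8 slots plus one x4 M.2 uses all 44 lanes.
print(lanes_left(44, [16, 16, 8], m2_lanes=4))  # -> 0
# Threadripper (~60 usable lanes assumed): x16/x8/x16/x8 = 48 lanes for slots.
print(lanes_left(60, [16, 8, 16, 8]))           # -> 12 left for M.2/U.2, etc.
```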
 

goldstone77



Actually, I've read most of the reviews available in English. But since you trust userbenchmark.com, let's see what they have to say and how closely it matches the other websites. Btw, never put all your faith in one review; look at all of them, so you can compare.
Processors FAQ
Q How does IPC compare between AMD's Ryzen and Intel's Kaby Lake?
A Ryzen’s IPC nearly matches Kaby Lake but at worst lags by 9%.
Skylake and Kaby Lake have the same IPC.
Taking the Kaby Lake i5-7500 which has a quad core boost frequency of 3.6 GHz and the Skylake i5-6600 which also has a quad core boost frequency of 3.6 GHz, then comparing the quad core mixed performance shows that they score 452 and 451 respectively. This demonstrates near identical IPC between Kaby Lake and Skylake (averaged over 11,000+ samples).
Kaby Lake vs Ryzen IPC estimate (Feb 27, 2017).
Until we know the exact frequency profile of Ryzen under various loads we can only estimate its IPC.
Assuming the Ryzen 1700X operates at 3.4 GHz under quad core load (possible scenario):
(Ryzen 1700X = 416) vs (6700T with 3.4 GHz boost = 418) implies Ryzen IPC equals Kaby Lake.
Assuming the Ryzen 1700X operates at 3.6 GHz under quad core load (less likely scenario):
(Ryzen 1700X = 416) vs (7500 = 452) implies Ryzen's IPC lags Kaby Lake's by 9%.
Assuming the Ryzen 1700X operates at 3.5 GHz under quad core load (most likely scenario):
Extrapolating from the above two results implies that Ryzen's IPC lags Kaby Lake's by 4.6%.
Conclusion.
Based on the small handful of samples we have seen, Ryzen's IPC could match Kaby Lake but at worst lags by 9%. The big unknown is Ryzen's overclockability as this will determine the actual performance levels delivered. The Kaby Lake 7700K can comfortably hold 4.8 GHz so, under quad core workloads the 1700X would have to overclock to 4.8 GHz or better in order to beat it. On the other hand, on an eight core workload assuming a clock speed of 3.4 GHz (stock) for the 1700X and 4.8 GHz (overclocked) for the 7700K, the 1700X wins by a whopping 41%. Even though these preliminary results are based on a limited number of samples one thing is for sure: Christmas has come early for workstation users.
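The arithmetic behind that estimate is easy to reproduce. The scores and clocks below are the ones quoted in the FAQ; the way the 3.5 GHz case is interpolated is my assumption, which is why it lands near, rather than exactly on, their 4.6%:

```python
# Scores quoted in the FAQ above; the combining method is assumed.
ryzen_score = 416                  # 1700X quad-core mixed score
kbl_scores = {3.4: 418, 3.6: 452}  # Kaby Lake scores at matching boost clocks

for clock, score in kbl_scores.items():
    print(f"if Ryzen holds {clock} GHz: lags by {1 - ryzen_score / score:.1%}")

# 3.5 GHz case: take the midpoint of the two endpoint gaps.
mid_gap = ((1 - 416 / 418) + (1 - 416 / 452)) / 2
print(f"at 3.5 GHz (midpoint): {mid_gap:.1%}")  # ~4.2%, vs the FAQ's 4.6%
```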

We know that the 1700 will overclock to 3.8-4.1GHz depending on the chip; 4.15GHz is the highest I've seen. Most people buy a computer based on needs and price-to-performance. The 1700 offered multi-threading performance close to the 6900K, a $1,000 CPU, for $329.

Now, if you look at synthetics, Ryzen and Kaby Lake are fairly close to each other. Ryzen was designed for multi-threading, encoding, rendering, and streaming, and it does those tasks exceedingly well. Most programs/applications over the last 10 years have been optimized for Intel, and Ryzen is new and does lack optimization, which creates the appearance that Ryzen might not be that good in some programs. This may or may not change depending on the willingness to optimize for the processor. You would think that as the processor becomes more popular (sells) it will get optimization, or it may not.

Now that we have a good idea how good Ryzen actually is, people make a decision based on different factors. When it comes to gaming, do you buy the 4-core Kaby Lake 7700K for $339 to game on a 60Hz monitor with a GTX 1060, or do you buy a 6-core 1600 for $196 or an 8-core 1700 for $296 that offers twice the multi-threading and delivers much the same gaming experience? Depending on the game and the video card, the difference in average FPS is ~10%. Benchmarks show that Ryzen is better than Intel for streaming and offers ridiculously better multi-threading performance for the price. Most people will take a hit to gaming performance that might not even affect their gameplay at all, unless they are gaming at 144Hz on 1080p, which will require a GTX 1080 with the 7700K, or a GTX 1080 Ti with a 1600 or 1700.
Here is a reference to show you the differences in FPS just from game to game.
[Chart: FPS differences from game to game]

Click here for Link
Montreal-based tech YouTuber Karl Morin gave us the Coffee Lake scores when he ran across an HP Omen PC sporting an Intel Core-i7 8700K at the HWBot event. It didn’t have an attached monitor, so Morin grabbed one and ran Cinebench. He also ran some CPU-Z multi- and single-threaded tests.
Intel’s six core/12 thread CPU scored 1230 points in the Cinebench R15 multi-threaded test and 196 points for single-core performance. In our own Cinebench benchmarks, that places the 8700K above the four core/eight thread Core-i7 7700K (941) but just below AMD’s six core/12 thread Ryzen 5 1600X (1260) for multi-threaded performance. That single-thread score, meanwhile, is also an improvement over Kaby Lake.
Click here for link

Great single-thread score compared to the 7700K, but even the newest mainstream Intel 6-core i7, the 8700K, doesn't beat the 1600X in multi-threading, although it is very close. My guess is that, depending on your needs, the 6-core/6-thread 8600K will be the gaming CPU people go to.
 

YoAndy



I do look at a lot of reviews, and I laugh at a lot of them too ;). How exactly do you think optimization will increase Ryzen's low IPC?
 

goldstone77


I find it disingenuous that you used such a broad price range. Why did you pick the "popular" board for Ryzen but the most expensive one for X299? They are not even comparable boards in anything but the CPU and overclocking.

I didn't do this to be deceptive. I did it to show platform cost! If you purchase a 4-8 core CPU on the Skylake-X platform, you pay for features you will not be able to use, and the increased memory bandwidth will have very little effect on performance depending on your application needs, like gaming. Even the cheapest option is over $200, and it is probably the best option for those with 4-8 core Skylake-X processors, depending on your rendering needs. Content creators who require 4 video cards for rendering will need a more expensive board, which on X299 ranges up to $499.99 if you like RGB. And if you are buying Skylake-X with the ideology of upgrading in the future, the 10-core is the entry point for access to all those PCI-E lanes, and it requires a more expensive board plus the purchase of a ~$500 dongle to unlock NVMe RAID functionality. With Intel's 10nm and Ryzen's 7nm on the rise, how much sense does it make to sink money into Skylake-X when it will be easily beaten by the next node within a year or two? PCI-E is upgrading to PCI-E 4.0 in 2020 as well.



Skylake-X claims to be HEDT, and that is the root of the problem for the 4-8 core CPUs, as I've been trying to stress: they don't belong there. This has been expressed in a multitude of online rants and negative reviews. HEDT is mainly for professionals who need the PCI-E lanes for their workloads, NVMe high-performance drives, and 4-GPU rendering. If you are buying this platform to play games, I've tried to show how much cheaper the Ryzen platform is and how much more value you get from it.
Here is a higher-quality motherboard if you want 2 GPUs and M.2 on Ryzen:
ASRock X370 Killer SLI/ac AM4 AMD Promontory X370 SATA 6Gb/s USB 3.0 HDMI ATX AMD Motherboard $129.99 Sale Ends in 3 Days (Thu) Save: $20.00 (13%) $109.99 after $20.00 rebate
 

aldaia

Speaking about process density: according to Intel propaganda, Intel is a generation ahead and claims a 3-year lead over the foundries.

It is true that on PAPER (or should we say on slides) Intel is denser. According to Intel's slides, Intel 14 nm achieves 37.5 million Tr/mm2 on PAPER, while the foundries' 14 nm gets only 30.5 million and TSMC 16 nm 29 million Tr/mm2. But that is far from a generation ahead. If Intel 14 nm were a full generation ahead of the foundries' 14 nm, the density advantage should be 2x instead of 1.2x. So on PAPER, and according to Intel's own estimations, Intel 14 nm is only about one-fifth of a generation ahead (23% denser, when a full generation is a 2x step).

But what is the REAL density on REAL products?
The density on Intel products is actually way less than half what they claim on PAPER.
Broadwell Low Core Count is only 13.8 million Tr/mm2 (0.37x the PAPER density)
Broadwell High Core Count is only 15.8 million Tr/mm2 (0.42x the PAPER density)
Skylake i7 is only 14.3 million Tr/mm2 (0.38x the PAPER density)

The REAL density of foundry products is actually much better
Apple A10 (built on TSMC 16 nm) is 26.4 million Tr/mm2 (0.91x the PAPER density)

Yeah, I already know the excuse here: SoCs are much denser and cannot be compared with CPUs. So let's include a CPU too.
The AMD Ryzen die (built on GF/Samsung 14 nm) is 22.5 million Tr/mm2 (0.74x the PAPER density).

The question is: why is Intel's REAL density so bad compared to its PAPER density? (Let's talk about hype?)

At 10 nm Intel claims a PAPER density of 100 million Tr/mm2, but if they keep the same ratio (some people claim the ratio will be worse), that translates to ~40 million Tr/mm2 in real products. Meanwhile, the Kirin 970 SoC built on TSMC 10 nm delivers a REAL density of ~55 million Tr/mm2, and the SD 835 built on Samsung 10 nm has a REAL density of 41.4 million Tr/mm2. TSMC 10 nm has slightly better specs on PAPER, and they apparently do translate to real products.
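The paper-vs-real ratios above are one division each, and the "fraction of a generation" can be figured either linearly (23% of a 100% step) or on a log2 scale. A minimal sketch using only the figures already quoted:

```python
import math

# Densities quoted above, in millions of transistors per mm^2.
paper = {"Intel 14nm": 37.5, "GF/Samsung 14nm": 30.5, "TSMC 16nm": 29.0}
real = {
    "Skylake i7 (Intel 14nm)": 14.3,
    "Ryzen (GF/Samsung 14nm)": 22.5,
    "Apple A10 (TSMC 16nm)": 26.4,
}

ratio = paper["Intel 14nm"] / paper["GF/Samsung 14nm"]          # ~1.23x
print(f"linear fraction of a 2x generation: {ratio - 1:.2f}")   # ~0.23
print(f"log2 fraction of a generation:      {math.log2(ratio):.2f}")  # ~0.30

for chip, r in real.items():
    node = chip[chip.find("(") + 1 : -1]  # pull the node name from the label
    print(f"{chip}: {r / paper[node]:.2f}x of its paper density")
```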
 

goldstone77



I think Ryzen's IPC will remain the same until the next node. People's perception of its IPC will change with program/application optimization. I will give you an example of the negative stereotype attached to Ryzen's IPC: despite its lower IPC, it performs better than the 6900K in streaming, core for core. Why is this? Ryzen was designed to focus on multi-threading, encoding, and streaming. Not every program is optimized to do specific tasks on each CPU. So if the program or application isn't optimized for that CPU, you see a lower score. If two CPUs have similar IPC and the program is optimized for both, you will see similar performance. And if the optimization is "broken", you see a tremendously lower score. People use this to claim an IPC advantage or disadvantage. Synthetic test scores give us better insight into what a CPU is capable of. Remember the awful and broken FPS scores from Ryzen on opening day; people started making judgement calls about Ryzen's IPC right then. And remember Rise of the Tomb Raider: after optimization there was a 30% improvement in FPS. Other programs/applications would see similar benefits if they were optimized.
 


IPC... OK, I'll bite: you can always offset an IPC deficit with a higher clock speed or by easing other bottlenecks in your execution pipeline (broad sense). Ryzen (as an already shipping product) has a lot of them that can be improved without attacking "IPC" directly. I already mentioned a couple that I'd like to see for Zen v2: IF tuning (using a 2x or 1.5x clock relative to RAM signaling might be nice, but hard to pull off) and cache improvements; just adding more execution units to the design could contribute as well (this attacks IPC directly), or even shaving a couple of stages off for a shorter pipeline (specific sense). I already read Agner's guide and it actually gives a lot of interesting bits to think about.
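That first point is simple arithmetic: per-thread performance is roughly IPC × clock, so the clock needed to offset an IPC deficit is just the IPC ratio. A minimal sketch with an assumed 10% deficit:

```python
# Per-thread performance ~ IPC x clock, so matching a rival requires
# clock_own = clock_rival * (ipc_rival / ipc_own).
def clock_to_match(rival_ipc, rival_clock_ghz, own_ipc):
    return rival_clock_ghz * rival_ipc / own_ipc

# Assumed numbers: a 10% IPC deficit against a 4.0 GHz part.
print(clock_to_match(1.10, 4.0, 1.00))  # -> 4.4 GHz needed to match
```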

Cheers!
 

juanrga



And the goal changes again... Do you now mean the article that writes "So Intel can't make 10nm chips", when 10nm chips are in production and 10nm+ has just been announced?

[Slide: Intel 10nm manufacturing (Kaizad Mistry, 2017)]
 

YoAndy

Gaming isn't a strong point for AMD's fledgling architecture. For many, myself included, that is a massive disappointment; I was hoping to do my first AMD build in years (the same happened with Vega). AMD clearly sees an eight-core, even a 16-core, future, just as it did with Bulldozer. But developers still aren't there yet. For all the fuss made about DX12 and the PlayStation 4 and Xbox One ushering in a new era of multicore-optimised games, a 4C/8T CPU continues to be the best way to shovel data over to a graphics card.

AMD still has to prove it's worth the upheaval of a new motherboard, chipset, and socket, but the processors have proven themselves to be extremely quick, and at much lower prices than Intel's equivalents. Still, it isn't a slam dunk. Games aren't yet optimised for Ryzen or for that many cores, and Intel processors are still the best choice if you want the best performance in any application regardless of cost. Intel will remain dominant, and across mid-range and high-end processors there's an enormous amount of choice. For powerful everyday computing, even the Core i5K continues to serve everyday gamers well.
 

juanrga



The average OC of the 1700 is only 3873 MHz, which means many chips cannot even hit 3.8 GHz.



And here we go again. Let us go over that silly 'review' once more:

(1) They tested with frame-limiting and GPU-bound settings. That is why overclocking the 7800X by a huge 34% only brought 3% higher framerates: the performance of the Intel chips was crippled.

(2) The guy finally admitted he didn't even test a retail 7800X chip; he used an engineering sample or a qualification sample (he doesn't seem to grasp the difference).

(3) If this wasn't enough, he used a motherboard that was incompatible with the 7800X, and he managed to burn the 7800X thanks to that.

http://www.asrock.com/MB/Intel/X299%20Taichi/index.asp#BIOS

This review is useless. Please stop mentioning it.
 

juanrga



Why do you insist on confounding Skylake-X and Kabylake-X? Those 4-core CPUs are not Skylake-X.


 

juanrga



Not all transistors are the same, which makes transistor counts irrelevant, and your point moot. Density is measured in other ways; for instance, HD SRAM cell size:

TSMC "16nm": 0.070 μm² HD SRAM

GloFo/Samsung "14nm": 0.064 μm² HD SRAM

Intel "14nm": 0.0499 μm² HD SRAM
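Since density scales as the inverse of cell area, the implied SRAM densities are one division away. A minimal sketch from the cell sizes above:

```python
# HD SRAM bit-cell areas quoted above, in square microns per cell.
cell_um2 = {"TSMC 16nm": 0.070, "GF/Samsung 14nm": 0.064, "Intel 14nm": 0.0499}

for node, area in cell_um2.items():
    cells_per_mm2 = 1e6 / area  # 1 mm^2 = 1e6 um^2
    print(f"{node}: {cells_per_mm2 / 1e6:.1f} million cells/mm^2")

# Intel's cell is ~1.4x smaller than TSMC 16nm's: 0.070 / 0.0499 ~= 1.40
```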
 

juanrga



Apples and oranges. YoAndy was replying to a remark made by goldstone77 about how game developers will optimize games for RyZen. That is completely unrelated to what you are trying to discuss.

The software optimization that goldstone mentions falls into the same hype train as those magic BIOS updates that were promised at launch and never happened...
 

YoAndy

This is my personal opinion: if you are looking for a more cost-effective processor, AMD processors may be the best bet. For students, budget gamers, and individuals with straightforward computing needs, AMD processors are a great choice; they are powerful and fast enough for most while undercutting competitor pricing. And yet, you give up a little when going with AMD chips. They can be overclocked a bit for increased speed and are good for multithreaded applications.

Now, on the higher end of the spectrum, and for people who like to have the best because they don't upgrade that often (I'm still rocking the i7-990X), Intel processors still shine. If you are seeking performance at the upper range of the market, there is sure to be an Intel chip that suits your needs. You may pay more for it, and you may give up flexibility for this performance, but the processor itself will provide you with lightning speed, incredible capability, and beautiful graphics. In terms of outright power, it's hard to beat Intel processors.
 


What "goal"?

-> GloFo also had a proto ~2015 -> NO IT DID NOT -> proof it did -> WELL IT IS DUMB AND DOES NOT COUNT -> well, Intel has also lost a client to their manufacturing plant -> THAT IS NOT THE POINT.

I don't know why you bring 10nm into the discussion when my original point was that GloFo (via the IBM partnership) had a 7nm prototype in 2015 and will most probably have full capability by the timelines they've announced, just like the initial Intel projections that, well, stated the same thing. The difference is that the main manufacturing firms (TSMC, GloFo, and Samsung) have been closing the gap with Intel so fast that Intel no longer looks like the "only choice" for manufacturing.



So you have no use for extra cores... Ok, move along then?

EDIT:


No, it is not "apples to oranges". He mentioned IPC; I addressed that. But if you want me to answer in the particular context of games, see Battlefield 1. FPSes have, for a very long time now, been the worst examples of thread scaling out there (with the exception of id Tech 3+ games), but that game scales pretty damn well. If that is not optimization, what the hell is? DOTA 2? That miserable game no one uses for benchmarks but which has a player base the size of a small country?

Pff.

Cheers!
 

goldstone77



(1) You didn't complain about crippled benchmarks on Ryzen's opening day.
(2) It doesn't matter; the results are in line with the majority of other reviews.
(3) There is nothing wrong with the motherboard.

Ryzen 7 1700 vs. Core i7-7820X: 8-Core Royal Rumble
By Steven Walton on August 2, 2017

Buying an 8-core processor was a wallet ripping affair prior to the arrival of Ryzen. In absence of competition, the Intel Broadwell-E Core i7-6900K was ridiculously overpriced at $1,050. Intel has had to make changes to its HEDT platform by releasing the Core i7-7820X in response to the 8-core/16-thread Ryzen 7 1700.
However, I'm not quite sure Intel understands how competition works. Whereas AMD's solution launched with an MSRP of $330 and can be readily purchased online for as little as $290 today, Intel is still asking $600 for the i7-7820X. Worse still, the cheaper Ryzen 7 1700 can be happily paired with a $100 motherboard, while the pricier Core i7-7820X requires one that costs $220+.
Although it's clear that the R7 1700 is considerably cheaper than the Core i7-7820X, we've been wondering just how much faster Intel's solution is considering both chips have 8 cores and 16 threads. Granted, we've already run plenty of Ryzen and Core-X benchmarks here at TechSpot, so we have a pretty good idea of how these CPUs compare.

[Chart: Far Cry Primal benchmark]

Moving on we have Far Cry Primal which might seem like an odd choice given that it isn't a well optimized game for high core count CPUs, but it often delivers interesting results so I thought it was worth a look. Here we see when using the DDR4-2666 memory, the 7820X is 13% slower than the 6900K, but 8% faster than the R7 1700, at least when looking at the average frame rate performance.
The DDR4-3200 results show that faster memory increased the minimum frame rate for the 7820X by 5% while the R7 1700 enjoyed a 10% boost. The Ryzen CPU was now 8% faster when comparing the minimum frame rate and a few frames faster for the average.
After squeezing everything we could out of the 7820X, it saw an 11% leap in performance and overtook the R7 1700.
[Chart: Hitman benchmark]

Hitman is another title where the older Core i7-6900K slays both the Ryzen 7 1700 and Core i7-7820X when matched clock-for-clock. The R7 1700 made out quite poorly with the DDR4-2666 memory this time, being 10% slower than the 7820X and 17% slower than the 6900K when comparing the minimum frame rate.
Increasing the memory speed to 3200 didn't do much for the 7820X and we've seen this several times already. The R7 1700, on the other hand, enjoyed a 13% jump in minimum frame rate and a 15% increase for the average, so it's not far behind the 7820X here.
Turning up the heat with the Core i7-7820X at 4.5GHz helped improve performance by a further 8%, placing it comfortably ahead of the R7 1700.
[Chart: Total War: Warhammer benchmark]

Finally we have Total War: Warhammer, and this is a title that has been well optimized to take advantage of Ryzen thanks to a few handy updates. As you can see, even with DDR4-2666 memory Ryzen is able to lay waste to not just the 7820X but also the 6900K in this title. It's also worth noting that the Broadwell-E CPU was again faster than Intel's new 7820X, delivering an 8% greater minimum frame rate.
Upgrading to DDR4-3200 did improve the 7820X's minimum frame rate result by 10%, which is decent, but the R7 1700 saw a massive 16% performance bump here and stayed slightly ahead of the 7820X even when it was pushed to 4.5GHz and 3GHz mesh.
[Chart: Cinebench R15 results]

Before checking out memory performance and power consumption, I ran a few synthetic and application benchmarks. Cinebench R15 is good for measuring raw CPU performance and memory performance has little impact here. Interestingly, the Core i7-7820X outpaced the 6900K when matched clock-for-clock, albeit by only 4% faster, but this bucks the trend we've seen so far.
When it comes to multi-threaded performance, the Ryzen 7 1700 and 7820X are on par, though the Intel CPU does enjoy a 5% advantage in single-threaded scenarios. Once overclocked to 4.5GHz, the 7820X pulled well ahead and achieved a score of more than 1900pts.
[Chart: 7-Zip compression/decompression results]

Next up we have compression and decompression performance with 7-Zip. From what I've gathered, Ryzen's SMT feature helps massively with decompression work but isn't utilized for compression. I haven't looked into this properly yet but whatever the case, Ryzen is worlds better at decompression than it is compression, though that's not too concerning seeing as most users do significantly more decompression work anyway.
Clock-for-clock, the 6900K and 7820X are similar in this test while Ryzen was noticeably faster for decompression but significantly slower for compression. Increasing the memory speed helped Ryzen a bit but had little impact on the 7820X. After being overclocked to 4.5GHz, the 7820X took a large step forward and managed to match Ryzen's decompression performance.
[Chart: Blender render times]

The Blender render time is measured in seconds, so lower is better here. Memory frequency also has little to no impact on performance so this didn't help Ryzen close the gap on the Intel CPUs. Ryzen was 8% slower than the 7820X in this test when compared clock-for-clock and 23% slower once the 7820X was overclocked to 4.5GHz -- a pretty solid win for Intel here.
[Chart: Corona render results]

We find a similar story when testing with Corona. The 7820X is a little bit faster than the 6900K and a lot faster than the R7 1700. With both CPUs overclocked to the max, the 7820X was 16% faster, though of course it does cost more than twice as much so this is hardly a win in terms of value.
[Chart: memory bandwidth]

Before we get to the power consumption figures, here we see that Ryzen's dual-channel memory controller has limited bandwidth compared to Broadwell-E and Skylake-X. Whereas the 7820X can push 66GB/s for the read throughput, the R7 1700 was limited to 40GB/s. That gap only gets wider with overclocking, after which the 7820X hummed along to the tune of 81GB/s.
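(For context, not part of the quoted review: theoretical peak DDR4 bandwidth is channels × transfer rate × 8 bytes, which brackets the measured numbers above.)

```python
# Theoretical peak bandwidth: channels x MT/s x 8 bytes per transfer.
def peak_gb_s(channels, mega_transfers):
    return channels * mega_transfers * 8 / 1000

print(peak_gb_s(2, 2666))  # dual channel DDR4-2666 (R7 1700): ~42.7 GB/s peak
print(peak_gb_s(4, 2666))  # quad channel DDR4-2666 (7820X):   ~85.3 GB/s peak
```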
[Chart: memory latency]
What's interesting to note is that the Core i7-6900K is significantly better than both the R7 1700 and 7820X when it comes to memory latency. Ryzen does improve with higher clocked memory, as does the 7820X, but that's to be expected.
[Chart: L1 cache throughput]

Ryzen is well down on the graph when looking at L1 cache throughput -- it's basically half that of Intel's CPUs.
[Chart: L1 cache latency]

That said, while down on bandwidth, latency is much the same.
[Chart: L2 cache throughput]

Ryzen's L2 cache performance is excellent, smashing the 6900K clock-for-clock and even beating the 7820X. At 4.5GHz the 7820X does pull ahead but even so Ryzen is strong here.
[Chart: L2 cache latency]

Although its overall throughput is weaker, the 6900K's L2 cache offers considerably lower latency than the 7820X until the latter is overclocked. Ryzen couldn't quite catch the 6900K here but still offered strong results.
[Chart: L3 cache throughput]

When compared to the 6900K, the R7 1700's L3 cache throughput looks excellent, but not so much when seated next to the 7820X.
[Chart: L3 cache latency]

Throughput isn't everything of course and here we see that despite its big bandwidth, the 7820X's latency is lousy even when overclocked. In terms of responsiveness, Ryzen has a big advantage here.
[Chart: total system power consumption]

Since memory frequency has little to no impact on overall system power consumption, I only included the 2666 RAM figures here along with the 7820X's 4.5GHz overclock.
Clock-for-clock, the Core i7-6900K was very efficient pushing total system consumption to just 206 watts in Cinebench R15's multi-threaded test. The Ryzen 7 1700 also performed well at 248 watts while the 7820X was a bit hungrier, hitting 268 watts.
Once overclocked to 4.5GHz, the 7820X increased total system consumption by 26% and pulled 36% more power than the R7 1700.

Closing Thoughts

We have some interesting results to discuss. Let's start with the Core i7-6900K and 7820X.
It was shocking to find that when comparing clock-for-clock performance using the same memory speed on both setups, the older 6900K was faster in every single game we tested and significantly so in titles such as Civilization IV. Even if we give the 7820X the advantage of having faster DDR4-3200 RAM, which the 6900K doesn't support, it was rare for the Skylake-X CPU to take the lead.

When overclocked to 4.5GHz with a 3GHz mesh frequency, the 7820X was still only able to match the 6900K in most of the titles tested and realistically we could have squeezed a few hundred MHz more out of the Broadwell-E CPU. Adding insult to injury, the 7820X consumes significantly more power to deliver similar performance of the previous generation part, not to mention that you can expect to require a high-end liquid cooling setup to achieve the 7820X's 4.5GHz overclock without heavy throttling.
When it came to application performance, the 7820X did look much better, though even then it wasn't always superior to the 6900K. For example, we saw similar performance in 7-Zip, while Cinebench R15's numbers weren't drastically different. The 7820X was marginally better in our Blender and Corona tests but not to the degree where you would find yourself getting excited about the results.
The only advantage the 7820X has over the 6900K is the fact that it's around 35% cheaper ($600 versus $1,050). That's obviously a big deal, but if you made me choose between these two CPUs at the same price, I'd probably take the 6900K.
At $290, it's pretty clear that the Ryzen 7 CPU is in a different league when it comes to value and I don't think the most loyal Intel fanboy could argue otherwise. Factoring in the cost of a motherboard ($220+ versus ~$100), the 7820X is around 130% more expensive than the R7 1700, and of course it was never anywhere near that much faster.
When comparing the R7 1700 against the 7820X in terms of maximum overclocked gaming performance, the results were much the same overall. The 7820X enjoyed a win in Hitman while the R7 1700 was noticeably better in Civilization IV and the rest of the games were largely a wash. However, the 7820X was 23% faster in Blender and 16% faster in Corona, so it was hands down faster for these workloads, just not 130% faster, and to achieve that extra performance it consumed 36% more power.

Edit: For comparison, here is a link to the Tom's Hardware review to prove my point.
Intel Core i7-7820X Skylake-X Review
by Paul Alcorn July 26, 2017 at 6:00 AM
 

juanrga



The goal of derailing the Intel thread with hype about GloFo/Samsung/IBM again, and, once it is demonstrated that those foundries are not leading the industry, moving on to attacking Intel 10nm with false information from an article about Intel 10nm.



You did bring Intel 10nm into the discussion when you mentioned Charlie's article about Intel Foundry customers.



Yes, it is apples and oranges. He asked how game optimizations would increase Zen's low IPC, and you replied about how engineers could increase the IPC of Zen 2.
 


If you're using an Intel Core i7-7820X, then your argument is pointless.

Only elitists want to pay 75% more for some dinky little 20% performance increase. You can do so much more with that $400 than pay for what can be obtained for free. You can always overclock the cheaper CPU to make up that 20% performance, even if it means shrinking the 75% cost gap down to a 10% difference. After you overclock it to match the i7-7820X's stock speed, you'll find that you're still paying some 12.5% difference for a 0% performance increase. That is the meaning of bad value. Sure, some fanboys would still buy the i7 out of loyalty, but they know they're wasting money.

As for your i7 vs. i5 comparison, that's irrelevant. You can't make up the difference in multithreaded performance because you don't have the cores; that justifies the ~50% cost difference. But if you can overclock the i5-7400 to match the i5-7600K's performance, then it's not worth the ~30% cost difference, because you'll end up paying ~30% more for 0% performance gain.
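The underlying metric in both cases is just dollars per unit of performance. A minimal sketch with illustrative numbers (not actual street prices):

```python
# Illustrative prices and relative performance only, not actual street prices.
def dollars_per_perf(price_usd, relative_perf):
    return price_usd / relative_perf

print(dollars_per_perf(300, 1.00))  # cheaper chip at stock:      300.0 $/perf
print(dollars_per_perf(525, 1.20))  # 75% pricier, 20% faster:    437.5 $/perf
print(dollars_per_perf(300, 1.20))  # cheaper chip OC'd to match: 250.0 $/perf
```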

Also, you are not a professional *insert job here*. You aren't buying Intel CPUs for their reliability and support, and you don't need to pay extra for that when AMD's CPUs can catch up to Intel's once overclocked.

And I find it hilarious that "AMD boards look worse than Intel's boards". They're literally the same designs: the LEDs are located in the same places and the paint jobs are the same.

Last but not least, CPU UserBenchmark tells you nothing about a CPU. There are no set conditions for how the CPU is tested. Frankly, it's for elitists who think they don't have time to read reviews. You use review websites to tell where a CPU is having trouble; for example, CPU UserBenchmark cannot tell you whether a CPU will struggle in a game due to single-threaded or multi-threaded performance.

To all of you, though, stay on topic. This isn't for you to compare Ryzen to the Intel lineup.
 