Intel's Future Chips: News, Rumours & Reviews

Status
Not open for further replies.

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Yes, you are right. IBM did a 7nm test chip in 2015. But we are still waiting for IBM to launch a commercial 14nm chip.



No, GloFo didn't.
 

goldstone77

Distinguished
Aug 22, 2012
2,245
14
19,965
Beyond silicon: IBM unveils world’s first 7nm chip
With a silicon-germanium channel and EUV lithography, IBM crosses the 10nm barrier. SEBASTIAN ANTHONY (UK) - 7/9/2015, 9:44 AM

The picture below shows functional 7nm CPUs in 2015!
19298819448_d13bb30926_o-640x432.jpg

IBM, working with GlobalFoundries, Samsung, SUNY, and various equipment suppliers, has produced the world's first 7nm chip with functional transistors. While it should be stressed that commercial 7nm chips remain at least two years away, this test chip from IBM and its partners is extremely significant for three reasons: it's a working sub-10nm chip (this is pretty significant in itself); it's the first commercially viable sub-10nm FinFET logic chip that uses silicon-germanium as the channel material; and it appears to be the first commercially viable design produced with extreme ultraviolet (EUV) lithography.

First, the facts and figures. This is a 7nm test chip, built at the IBM/SUNY (State University of New York) Polytechnic 300mm research facility in Albany, NY. The transistors are of the FinFET variety, with one significant difference from commercialised FinFETs: the channel of the transistor is a silicon-germanium (SiGe) alloy, rather than just silicon. To reach such tiny geometries, self-aligned quadruple patterning (SAQP) and EUV lithography are used.

Somewhat extraordinarily, due to incredibly tight stacking (30nm transistor pitch), IBM claims a surface area reduction of "close to 50 percent" over today's 10nm processes. All told, IBM and its partners are targeting "at least a 50 percent power/performance improvement for the next generation of systems"—that is, moving from 10nm down to 7nm. The difference over 14nm, which is the current state of the art for commercially shipping products, will be even more pronounced.
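That "close to 50 percent" figure is roughly what ideal geometric scaling predicts. A quick sanity check, assuming (optimistically) that feature dimensions scale in proportion to the node name, which real processes only approximate:

```python
# Back-of-the-envelope check of IBM's "close to 50 percent" area claim.
# Caveat: node names (10nm, 7nm) are marketing labels that only loosely
# track real feature sizes, so this is an idealised scaling estimate.

def area_reduction(old_node_nm: float, new_node_nm: float) -> float:
    """Fractional area saved if both dimensions shrink with the node name."""
    return 1 - (new_node_nm / old_node_nm) ** 2

print(f"10nm -> 7nm: {area_reduction(10, 7):.0%} area reduction")  # ~51%
```

An idealised shrink from 10nm to 7nm saves about 51% of the area, so IBM's claim is consistent with near-ideal scaling.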
Technologically, SiGe and EUV are both very significant. SiGe has higher carrier mobility than pure silicon, which makes it better suited for smaller transistors. The gap between two silicon nuclei is about 0.5nm; as the gate width gets ever smaller (about 7nm in this case), the channel becomes so small that the handful of silicon atoms can't carry enough current. By mixing some germanium into the channel, carrier mobility increases, and adequate current can flow. Silicon generally runs into problems at sub-10nm nodes, and we can expect Intel and TSMC to follow a similar path to IBM, GlobalFoundries, and Samsung (aka the Common Platform alliance).

EUV lithography is a more interesting innovation. Basically, as chip features get smaller, you need a narrower beam of light to pattern those features accurately, or you need to use multiple patterning (which we won't go into here). The current state of the art for lithography is a 193nm ArF (argon fluoride) laser; that is, light with a wavelength of 193nm. Complex optics and multiple painstaking steps are required to print 14nm features with a 193nm light source. EUV has a wavelength of just 13.5nm, which will handily take us down into the sub-10nm realm, but so far it has proven very difficult and expensive to deploy commercially (it has been just around the corner for quite a few years now).
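The wavelength-vs-feature-size trade-off is usually summarised by the Rayleigh criterion, half-pitch ≈ k1 · λ / NA, where NA is the lens's numerical aperture and k1 a process-dependent factor. A rough sketch with commonly cited illustrative values (these NA and k1 numbers are assumptions, not figures from IBM):

```python
def min_half_pitch(wavelength_nm: float, na: float, k1: float) -> float:
    """Rayleigh criterion: smallest printable half-pitch, in nm."""
    return k1 * wavelength_nm / na

# 193nm ArF immersion: NA ~1.35, aggressive k1 ~0.28
print(min_half_pitch(193, 1.35, 0.28))   # ~40nm, hence multiple patterning below that
# EUV: 13.5nm light, NA ~0.33, relaxed k1 ~0.4
print(min_half_pitch(13.5, 0.33, 0.4))   # ~16nm in a single exposure
```

This is why 193nm tools need multiple patterning for sub-40nm pitches, while EUV can resolve them in one pass.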

I don't think that IBM, GloFo, and Samsung have magically found a way of making EUV commercially viable, but they are probably counting on the wrinkles being ironed out by 2017-2018—7nm's expected arrival date.
18734617954_450fe07427_o.jpg

Bulk 7nm transistors, with a 30nm pitch (the distance between the front edge of one transistor and the front edge of the next transistor).
While IBM isn't revealing too much at this juncture, we did manage to squeeze a little more information out of Mukesh V. Khare, IBM Research's point person for sub-10nm processes. When asked about whether this 7nm process is actually viable, and not just a one-off chip designed to strike fear into the hearts of Intel and TSMC, Khare told us:

The IBM Research alliance's work focuses on technology that can be used towards IBM’s and our partners’ product needs. The 7nm node defined by the IBM alliance and the test chip produced here are towards the same goal and is expected to meet technology requirements for products.
We also quizzed Khare on the commercial viability of the proposed 7nm node. Usually, as chip makers move to smaller processes, more chips can be crafted from one silicon wafer, which drives down the cost of each chip. Over the last few years, however, as we've moved past 28nm, new processes have been so complex that cost reductions have mostly dried up. Khare's response was somewhat ambivalent:

It's not a given that shrinking makes the next generation of chips less expensive. Given the performance improvements and power efficiencies achieved with 7nm chips, it is expected that the performance-per-cost trade-offs make it a viable technology.
And now, we wait. 10nm is currently being commercialised by Intel, TSMC, GlobalFoundries, and Samsung. It is much too early to guess when 7nm might hit mass production. Earlier this week, a leaked document claimed that Intel was facing difficulties at 10nm and that Cannonlake (due in 2016/2017) had been put on hold. In theory, 7nm should roll around in 2017/2018, but we wouldn't be surprised if it misses that target by some margin.

If IBM and friends actually get 10nm and then 7nm out of the door with relative ease, though, then Intel's process mastery might finally be in contention.

This post originated on Ars Technica UK
 

goldstone77

Distinguished
Aug 22, 2012
2,245
14
19,965
IBM unveils world’s first 5nm chip
Built with a new type of gate-all-around transistor, plus extreme ultraviolet lithography.
SEBASTIAN ANTHONY (UK) - 6/5/2017, 9:23 AM

NicolasLoubet-1440x960.jpg

IBM, working with Samsung and GlobalFoundries, has unveiled the world's first 5nm silicon chip. Beyond the usual power, performance, and density improvement from moving to smaller transistors, the 5nm IBM chip is notable for being one of the first to use horizontal gate-all-around (GAA) transistors, and the first real use of extreme ultraviolet (EUV) lithography.

GAAFETs are the next evolution of tri-gate finFETs: finFETs, which are currently used for most 22nm-and-below chip designs, will probably run out of steam at around 7nm; GAAFETs may go all the way down to 3nm, especially when combined with EUV. No one really knows what comes after 3nm.

2D, 3D, and back to 2D

For the longest time, transistors were mostly fabricated by depositing layers of different materials on top of each other. As these planar 2D transistors got shorter and shorter (i.e. more transistors in the same space), it became increasingly hard to make transistors that actually perform well (i.e. fast switching, low leakage, reliable). Eventually, the channel got so small that the handful of remaining silicon atoms just couldn't ferry the electricity across the device quickly enough.
FinFETs solve this problem by moving into the third dimension: instead of the channel being a tiny little 2D patch of silicon, a 3D fin juts out from the substrate, allowing for a much larger volume of silicon. Transistors are still getting smaller, though, and the fins are getting thinner. Now chipmakers need to use another type of transistor that provides yet another stay of execution.
Enter GAAFETs, which are kind of 2D, but they build upon the expertise, machines, and techniques that were required for finFETs. There are a few ways of building GAAFETs, but in this case IBM/Samsung/GloFo are talking about horizontal devices. The easiest way to think of these lateral GAAFETs is to take a finFET and turn it through 90 degrees. Thus, instead of the channel being a vertical fin, the channel becomes a horizontal fin—or to put it another way, the fin is now a silicon nanowire (or nanosheet, depending on its width) stretched between the source and drain.

In the case of IBM's GAAFET, there are actually three nanosheets stacked on top of each other running between the source and drain, with the gate (the bit that turns the channel on and off) filling in all the gaps. As a result, there's a relatively large volume of gate and channel material—which is what makes the GAAFET reliable, high-performance, and better suited for scaling down even further.
ibm-nanosheet-fabrication-tem.jpg

finfet-vs-nanosheet-transistors-width.jpg

Nanosheet5nm.jpg

Fabrication-wise, GAAFETs are particularly fascinating. Basically, you lay down some alternating stacks of silicon and silicon-germanium (SiGe). Then you carefully remove the SiGe with a new process called atomic layer etching (probably with an Applied Materials Selectra machine), leaving gaps between each of the silicon layers, which are now technically nanosheets. Finally, without letting those nanosheets droop, you fill those gaps with a high-κ gate metal. Filling the gaps is not easy, though IBM has seemingly managed it with atomic layer deposition (ALD) and the right chemistries.
One major advantage of IBM's 5nm GAAFETs is a significant reduction in patterning complexity. Ever since we crossed the 28nm node, chips have become increasingly expensive to manufacture, due to the added complexity of fabricating ever-smaller features at ever-increasing densities. Patterning is the multi-stage process where the layout of the chip—defining where the nanosheets and other components will eventually be built—is etched using a lithographic process. As features get smaller and more complex, more patterning stages are required, which drives up the cost and time of producing each wafer.

IBM Research's silicon devices chief, Huiming Bu, says this 5nm chip is the first time that extreme ultraviolet (EUV) lithography has been used for front-end-of-line patterning. EUV has a much narrower wavelength (13.5nm) than current immersion lithography machines (193nm), which in turn can reduce the number of patterning stages. EUV has been waiting in the wings for about 10 years now, always just a few months away from commercial viability. This is the best sign yet that ASML's EUV tech is finally ready for primetime.
different-transistor-topologies.jpg

So, how good are GAAFETs?

IBM says that, compared to commercial 10nm chips (presumably Samsung's 10nm process), the new 5nm tech offers a 40 percent performance boost at the same power, or a 75 percent drop in power consumption at the same performance. Density is also through the roof, with IBM claiming it can squeeze up to 30 billion transistors onto a 50-square-millimetre chip (roughly the size of a fingernail), up from 20 billion transistors on a similarly-sized 7nm chip.
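Those density numbers are easy to sanity-check (assuming, as the article implies, the same ~50 mm² die in both cases):

```python
# Sanity check of the quoted transistor counts, assuming a ~50 mm^2 die
# for both the 5nm and 7nm test chips (the article's figures).
transistors_5nm = 30e9
transistors_7nm = 20e9
area_mm2 = 50

density_5nm = transistors_5nm / area_mm2   # transistors per mm^2
density_7nm = transistors_7nm / area_mm2

print(f"{density_5nm / 1e6:.0f}M vs {density_7nm / 1e6:.0f}M per mm^2, "
      f"a {density_5nm / density_7nm - 1:.0%} density gain")
```

That works out to 600 million vs 400 million transistors per mm², i.e. a 50% density improvement from 7nm to 5nm.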
GAAFETs don't necessarily have the 5nm node sewn up, though. As always with the semiconductor industry, chipmakers prefer to tweak existing fabrication processes and transistor designs, rather than spending billions on deploying new, immature tech. Current silicon-germanium FinFETs will probably get us to 7nm, and the use of exotic III-V semiconductors might take the finFET a step further to 5nm.

At some point, though, it probably won't be worth the time, cost, and complexity of producing ever-smaller transistors and chips. Someone will realise that much larger gains can be had by going properly 3D: stacking dozens of logic dies on top of each other, connected together with through-silicon vias (TSVs). Intel has been looking at chip stacking to mitigate its slow progress towards the 10nm node since at least 2015. Maybe we'll soon see the fruits of that labour; though I doubt they'll be cooled with electronic blood just yet.
This post originated on Ars Technica UK

 

YoAndy

Reputable
Jan 27, 2017
1,277
2
5,665


Let's be clear: we all love competition and we all love what AMD is doing now, but AMD is not quite there yet. Ryzen is only better for people on a budget. It's like comparing a Corvette vs a Ferrari and complaining about price vs performance, or how expensive the parts and oil changes for the Ferrari are. You will always pay a premium for the best, and like it or not Intel is still the best; that's why Intel is still selling well.
 

goldstone77

Distinguished
Aug 22, 2012
2,245
14
19,965


Skylake-X is not selling well. More people are buying the Corvette, because the speed limit is 60. The Corvette comes with all the options. Unless you buy the best Ferrari, it's going to be sitting on blocks until you can afford wheels. All analogies aside: reports from manufacturers show Threadripper outselling Skylake-X, beating Skylake-X's first month in one day to one week. That's like a mom-and-pop burger joint outselling McDonald's (last analogy, I promise). All the bad media is justified. Intel "fans" might not like it, but a platform whose lineup includes 4-8 core parts is bad from an HEDT standpoint. And that is why Threadripper is outselling Skylake-X.

Edit: Intel might be able to suck in some fans with upgrade path ideology, but professionals and the HEDT market won't be fooled by marketing ploys.
 


Woah, woah. Bad car analogy!

The HEDT world is not akin to "exotic" cars. You don't buy a Ferrari (no matter the sports model) to tow your boat or help you work your fields. I'd say the "sports car" of the CPU world would be the mainstream K line: fast in straights and nimble in circuits, but not workhorses. You can use them as such (those V8s, V10s and V12s can most definitely tow a boat), but they are not meant for it.

This is how I see it for good car analogies:
- EPYC, and Xeons are proper trucks (18 wheelers).
- TR and X/XE models are heavy trucks (from Hilux all the way to F350s, RAM 2500HDs, etc... lifted bro-dude trucks as well).
- Ryzen and K models are a mashup between SUVs and sports cars, Ryzen being an SUV and the Ks different cars (I like to think of the i7 K's as Audi's S6/RS6).
- Everything lower is a station wagon Corolla: gets you places and lets you get work done, but you die a little inside each time you drive one.

Cheers!
 

YoAndy

Reputable
Jan 27, 2017
1,277
2
5,665


Yes, we all know that for gamers the speed limit right now is 4K, but if you want to do nothing but run a game as fast as possible, you're still better off with the Ferrari: an Intel Core i5 or i7, K or X, processor. For gaming at comparable price points, Intel's latest CPUs deliver somewhat higher frame rates than the new Ryzen chips do. Fewer but faster, stronger cores are just better for games and streaming right now. Who the :??: needs 16 cores and 32 threads for games and streaming? That's the most stupid thing AMD has said in a while: because they lack IPC, they are telling their customers to go ahead and buy Threadripper for gaming and streaming... We are not quite there yet; you don't need 32 threads. For most of us Threadripper is unnecessary (though that should change over time). At low resolutions the Intel advantage is very significant, and it trails off only as you run games at higher resolutions and detail settings (because the graphics card becomes the performance bottleneck).
;)
 

logainofhades

Titan
Moderator


I love my 6700k but if I had to buy new, there is no way I would consider an X299 system. Even going to an SLI capable X370 board, and not even the cheapest one, the price difference is still quite a bit. I chose a board that I would personally use, if I were to make an ATX Ryzen build, and have recommended to people that want SLI capable Ryzen rigs. (My current rig is mini-ITX.)

PCPartPicker part list / Price breakdown by merchant

CPU: AMD - Ryzen 7 1700 3.0GHz 8-Core Processor ($294.99 @ B&H)
Motherboard: ASRock - X370 KILLER SLI/ac ATX AM4 Motherboard ($131.98 @ Newegg)
Memory: G.Skill - Trident Z 16GB (2 x 8GB) DDR4-3200 Memory ($143.99 @ Newegg)
Video Card: Zotac - GeForce GTX 1080 Ti 11GB AMP Edition Video Card ($739.99 @ Amazon)
Total: $1310.95
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2017-09-11 12:19 EDT-0400

vs
(I did go cheapest board on the x299 build.)

PCPartPicker part list / Price breakdown by merchant

CPU: Intel - Core i7-7800X 3.5GHz 6-Core Processor ($365.89 @ SuperBiiz)
CPU Cooler: CRYORIG - R1 Universal 76.0 CFM CPU Cooler ($88.89 @ OutletPC)
Motherboard: MSI - X299 RAIDER ATX LGA2066 Motherboard ($214.98 @ Amazon)
Memory: G.Skill - Ripjaws V Series 16GB (4 x 4GB) DDR4-3200 Memory ($168.99 @ Newegg)
Video Card: Zotac - GeForce GTX 1080 Ti 11GB AMP Edition Video Card ($739.99 @ Amazon)
Total: $1578.74
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2017-09-11 12:23 EDT-0400


That price difference is enough to add in a 500gb 960evo and a Cryorig H5 to the AMD build. The 1700 doesn't really need a big R1 or water cooling, where the 7800x does.

PCPartPicker part list / Price breakdown by merchant

CPU: AMD - Ryzen 7 1700 3.0GHz 8-Core Processor ($294.99 @ B&H)
CPU Cooler: CRYORIG - H5 Universal 65.0 CFM CPU Cooler ($42.99 @ Newegg Marketplace)
Motherboard: ASRock - X370 KILLER SLI/ac ATX AM4 Motherboard ($131.98 @ Newegg)
Memory: G.Skill - Trident Z 16GB (2 x 8GB) DDR4-3200 Memory ($143.99 @ Newegg)
Storage: Samsung - 960 EVO 500GB M.2-2280 Solid State Drive ($220.98 @ Newegg)
Video Card: Zotac - GeForce GTX 1080 Ti 11GB AMP Edition Video Card ($739.99 @ Amazon)
Total: $1574.92
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2017-09-11 12:32 EDT-0400


The X299 platform is overpriced for what you get. The Threadripper platform, I admit, is a bit high in price, as the motherboard selection is still quite limited. But once you hit Intel's 10-core, the CPU price alone makes up the difference: you can get a 12-core Threadripper for a similar price to the 10-core Intel. If you need that kind of threading support, Intel is simply a bad buy right now. Once the board selection improves, and board prices come down, things will swing even further in AMD's favor.
 

YoAndy

Reputable
Jan 27, 2017
1,277
2
5,665


Not everyone wants to build the cheapest system. In these days of RGB, a lot more people want their PC to look nice; they also care about quality and performance. The mid-range boards are selling way more than the cheaper ones (they always do).
But did you know that if we compare the i7 7800X vs the R7 1700, after both have been overclocked to their peak, the i7 has a faster single-core speed, by +21%? Core for core the i7 is 21% faster, and that's a lot.

Comparing Intel vs AMD core per core, Intel is ahead. Like it or hate it, when you are the fastest kid on the block you can always charge a premium.

Have you compared the i7 7820X vs the R7 1800X? They are both 8 cores. And put price aside, because arguing that a product is better than another just because it is cheaper is nuts. That's why we have Ferraris and Corvettes.
http://cpu.userbenchmark.com/Compare/Intel-Core--i7-7820X-vs-AMD-Ryzen-7-1800X/3928vs3916
 

goldstone77

Distinguished
Aug 22, 2012
2,245
14
19,965


The 1700 is much better than the 7700K for streaming. Depending on the game, it's like night and day. I like numbers, but watching the 7700K's output stutter is just obvious, and it makes the winner an easy pick. Numbers can be deceiving depending on the tests being done. We know that average FPS isn't good enough to describe quality! That is why additional information like 1% and 0.1% frame times comes into play, to give us a better numeric picture of what's happening. Also, note the 1700 at stock consumes the same power as the 7700K at stock while streaming. This changes when the 7700K is overclocked.
Dirt Rally
DOTA2
 

YoAndy

Reputable
Jan 27, 2017
1,277
2
5,665


So funny how, despite the price, you are comparing a quad-core CPU with one with double the cores (8) for multitasking, and it's still a close call :lol: and you are calling it slightly better. No shit, Sherlock.

But is the R7 1700, 1700X or 1800X better than the i7 7820X for streaming? They all have 8 cores.
 

goldstone77

Distinguished
Aug 22, 2012
2,245
14
19,965


I'm glad you asked that question, and the answer is YES! I've posted the review of the 1800X beating the 6900K, and the 7820X does do a better job of competing with the 1800X, but overall the 1800X comes away with the win, although they trade blows in a few titles. And that leads us right back to the better-priced 1700, which costs $296! The value you get with the 1600 or 1700 is currently unmatched, unless you are gaming at 144Hz on 1080p.


Edit: Click here for the link to the review!

Also, click here for the link of the 1800X beating the 6900K
 

Phaaze88

Titan
Ambassador


This is what's been bothering me for a while on this thread and multiple others. If you're going to make a comparison, it SHOULD at least be between an equal number of cores. Why do I see battles of the 7700K (4c/8t) vs the 1600 (6c/12t) or 1700 (8c/16t), or the TR 1920X (12c/24t) vs the 7900X (10c/20t, granted that this one doesn't have an AMD equivalent ATM), and so on? It's not really fair. It makes more sense, to me at least, to do the 7700K vs the 1500X/1400, the 7800X (although it's not in the same market) or 8700K vs the 1600, and the 7920X vs the 1920X.
 

goldstone77

Distinguished
Aug 22, 2012
2,245
14
19,965


I understand you want Intel's single-thread performance dominance to work in Intel's favor. But for these multi-threaded tasks, Ryzen core for core is better, cheaper, and uses less power.

1800X vs 7820X vs 7700K | Streaming & gaming| PuBG CSGO GTA V

Ryzen is THE BEST CPU for Game Streaming? - $h!t Manufacturers Say Ep. 2
 

goldstone77

Distinguished
Aug 22, 2012
2,245
14
19,965
This is from a Ryzen post I made. Intel's high-performance node, which allows high frequencies, does give it an advantage in some applications. But Ryzen's IPC is still very good on a clock-for-clock basis.
Review-chart-template-2017-final.003-1440x1080.png

-8.5% 1500X@3.5GHz vs 7700K@3.5GHz
Review-chart-template-2017-final.002-1440x1080.png

+8.2% 1500X@3.5GHz vs 7700K@3.5GHz
Review-chart-template-2017-final.009-1440x1080.png

+4.3% 1500X@3.5GHz vs 7700K@3.5GHz in Physics, showing Ryzen's superior number crunching vs. Intel
-5.3% 1500X@3.5GHz vs 7700K@3.5GHz overall
But look at the 7600K, with a base clock of 3.8GHz, which is 8.5% faster than the 1500X's:
+30.9% 1500X@3.5GHz vs 7600K@3.8GHz
-3.4% 1500X@3.5GHz vs 7600K@3.8GHz
Review-chart-template-2017-final.012-1440x1080.png

-5.2% 1500X@3.5GHz vs 7700K@3.5GHz overall
But look at the 7600K, with a base clock of 3.8GHz, which is 8.5% faster than the 1500X's:
0.2% 1500X@3.5GHz vs 7600K@3.8GHz overall
 
No, you never compare on core count or single-threaded benchmarks alone (unless that is all you need). You compare based on price points and overall performance for the tasks you want (whole platform).

I already gave the caveat that my i7 2700K copes well with 1080p streaming and DX9c games at least (I tried DOOM as well; it works fine). I don't think the dominance of the 1700 (or 1700X or 1800X) will rest on games+streaming merit alone, but on their well-rounded nature and the platform choices at each price point you might need. Intel is expensive by nature, but I have to say you will always find a comparable system if you look hard enough and sacrifice features if you really want to force the comparison. I do know that, because in the i3 and Pentium segments, AMD and Intel are usually matched in price-to-performance+features ratios.

For the "all out bling bling", that is getting into "bro-truck" territory. If you have the money to pimp your ride, the ride quality is a secondary concern and you just go with your bias. Case in point: why do people keep buying Camrys when they could get a Focus/Mondeo or even a V6 Mustang (lol)?

Cheers!
 

Phaaze88

Titan
Ambassador


I was taking it from a performance standpoint only in my last post. But yeah, if judging both price and performance, Ryzen clearly wins... but for some folks price isn't an issue, to an extent (I'm looking at you, 6900K & 6950X).

@goldstone77
I'm still seeing it again (a performance-only view). Gaming+streaming, and other multi-threaded apps, still clearly favor more cores/threads (AMD) over Intel's current best mainstream offering. More but slightly weaker cores/threads (AMD) vs fewer but slightly stronger ones (Intel).
And as for the 7600K vs the 1500X... surely the 1500X's 4 extra threads account for the performance gap more than the clock-speed difference does, regardless of the 7600K's potentially higher clocks from OC.
But when price is in play AMD wins (not always), and when it's not, it's Intel. The 8700K should shake things up for the 1600, performance-wise, albeit costing more than the latter.
But Zen loses some points on value when it practically requires one to purchase 3200MHz RAM or faster for optimal performance (thanks, Infinity Fabric), whereas an Intel build doesn't suffer from this issue (so it's just a little bit cheaper in that department).
 

logainofhades

Titan
Moderator
I don't argue Intel's performance, but price/performance of the platform is poor, and that is my argument against going Intel at this time. Intel has not learned yet that their prices are not going to fly as well, now that there is competition. They will figure it out eventually. Old habits are hard to break.

99% of the builds I see here don't have the budget for x299, or even threadripper, but would love to be able to stream gameplay. Ryzen is the best deal going for these people. Even an R5 1600 would be better than a 7700k for such situations. Ryzen isn't the best at anything, but it is good enough at everything with a price to match.

Coffee Lake, if Intel prices it right, could turn the tables a bit, keeping the much needed competition going.
 

goldstone77

Distinguished
Aug 22, 2012
2,245
14
19,965


Intel has a higher single-thread score, but Ryzen has a higher multi-thread score core for core at the same clock. But Intel's ability to clock at 4.8-5.0GHz gives it a 20-31% performance advantage when compared to a 3.8-4.0GHz Ryzen. There is no denying that. Some applications, especially certain game engines, make use of these higher clocks, and benchmarks for those games show a huge difference at lower resolutions with high-end graphics cards that emphasize the CPU's single-core performance. Intel has had basically the same design for the last 10 years, and there is something to be said for stability, compatibility, and peace of mind.
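The 20-31% range quoted here is just the ratio of the typical achievable clocks (the clock figures are the post's own, not measured data):

```python
# The quoted 20-31% advantage is simply the ratio of achievable overclocks.
intel_clocks = (4.8, 5.0)   # GHz, typical Kaby Lake / Skylake-X OC range
ryzen_clocks = (4.0, 3.8)   # GHz, typical first-gen Ryzen OC range

low = intel_clocks[0] / ryzen_clocks[0] - 1   # 4.8 vs 4.0
high = intel_clocks[1] / ryzen_clocks[1] - 1  # 5.0 vs 3.8
print(f"Clock-speed advantage: {low:.1%} to {high:.1%}")  # 20.0% to 31.6%
```

So the advantage only materialises in workloads whose performance actually scales with clock speed.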
 


PCPartPicker part list / Price breakdown by merchant

CPU: Intel - Core i7-7820X 3.6GHz 8-Core Processor ($579.99 @ Amazon)
Motherboard: MSI - X299 GAMING PRO CARBON AC ATX LGA2066 Motherboard ($331.50 @ Amazon)
Memory: G.Skill - Trident Z 16GB (2 x 8GB) DDR4-3200 Memory ($143.99 @ Newegg)
Total: $1055.48
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2017-09-11 19:55 EDT-0400

PCPartPicker part list / Price breakdown by merchant

CPU: AMD - Ryzen 7 1700X 3.4GHz 8-Core Processor ($309.99 @ Amazon)
Motherboard: MSI - X370 GAMING PRO CARBON ATX AM4 Motherboard ($141.98 @ Newegg)
Memory: G.Skill - Trident Z 16GB (2 x 8GB) DDR4-3200 Memory ($143.99 @ Newegg)
Total: $595.96
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2017-09-11 19:58 EDT-0400

8 core vs 8 core. Let's assume your performance difference of 21%. You are paying $460 more for the i7 platform for a proper 8 cores and 16 threads, which is roughly 77% of the Ryzen platform's entire cost. You're paying 77% of the Ryzen platform's price again for a 21% improvement over it. Let's just say it's not worth it.
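Making that arithmetic explicit, using the two part-list totals above and the assumed 21% performance edge:

```python
# Price-vs-performance trade using the two platform totals quoted above
# and the (assumed, not measured) 21% performance edge for the Intel setup.
intel_total = 1055.48   # i7-7820X + X299 board + RAM
ryzen_total = 595.96    # R7 1700X + X370 board + RAM
perf_edge = 0.21

extra_cost = intel_total - ryzen_total
print(f"Extra cost: ${extra_cost:.2f} "
      f"({extra_cost / ryzen_total:.0%} of the Ryzen platform's price)")
print(f"Cost premium is {extra_cost / ryzen_total / perf_edge:.1f}x "
      f"the relative performance gained")
```

The extra ~$460 is about 77% of the Ryzen platform's price, so the cost premium is several times the assumed performance gain.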

Also, cpu.userbenchmark is the worst source for benchmarks. It tells you nothing.
 

adamsleath

Honorable
Sep 11, 2017
97
0
10,640
One thing I've seen is that the 7700K (4c/8t) is maxed out in BF1 (a multicore-optimised game), whereas the Ryzen 1600/1700 is not. And I'm sure many have experienced the problems when a CPU is maxed: it becomes a bottleneck and fps can stutter, despite the max fps being higher. The 8700 (6c/12t) will address this for me. Even though I like the Ryzen CPUs' great value, I'm almost sold on the 8700 at this point, and have been since I first heard about Coffee Lake. I always intended to upgrade to a 6-core once it hit the mainstream. I have a 5-year-old system now, so it's about time.

I'm back on track with my PC savings fund XD so I'm also keen on trading up to an Ice Lake CPU when it arrives. Just for fun. :D

Exciting times ahead, as the Ryzen 14nm+ arch and Ryzen 2 (7nm?) are also coming.
 
So, considering we're talking streaming now (funny how the discussion has moved from absolute gaming performance to gaming + streaming in the past week...), I'll point out that my 2600K at stock handles streaming at 4K just fine. Pretty sure any post-Sandy i7 can handle streaming just fine.
 

KirbysHammer

Reputable
Jun 21, 2016
401
1
4,865


"Let's just say it's not worth it"

Maybe it isn't to you, but a 20% performance improvement isn't nothing.

Otherwise, why buy an i7 over an i5? The i7 is only around 20% faster in multi-core; in single-core, with both overclocked, the i5 will beat the i7 by a few points clock for clock, and on thermals the i7 does significantly worse.

Yet the i7 costs 40-50% more.

Same thing with Ryzen: a 25% performance improvement for 50% more cost when going from the 6-core to the 8-core models, and you lose a few points on SC.

When you're buying high end sometimes people don't want what is the best value, they want what performs the best.

 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


3806850-7935486973-43696.jpg


IBM: IBM has still not produced a commercial 14nm chip... and their foundry business was so disastrous that they had to sell it. No foundry wanted IBM's technology, and in the end IBM had to hand the foundry business to GloFo after paying GloFo a multi-million-dollar sum. Yes, as weird as it sounds, IBM had to pay to sell its foundry, because no one wanted it even for free.

Samsung: They lost part of Apple's 14nm business, and Apple moved to TSMC for 10nm. One company that did choose Samsung's 10nm was Qualcomm. It is a disaster, and Qualcomm has already announced that it is moving to TSMC for the 7nm node.

GlobalFoundries: This is considered the worst foundry, a well-deserved label, because all GloFo nodes are either late and disappointing or simply cancelled. GloFo has the honor of being the only foundry to cancel its 10nm node twice: first it cancelled 10XM, then it cancelled 10LP. AMD has renegotiated the WSA and paid GloFo a multi-million-dollar sum for the option of fabricating chips at other foundries! AMD wants to make chips on TSMC's 7nm, for instance, surely GPUs.

Notice that it isn't Intel leading... the HYPE.
 