Will 8700k be absolute after Ice Lake?

Pijmzhchus

I always find it difficult to invest in PC hardware, because the progress is so fast. I'm asking if the 8700k will be absolute after the Ice Lake launch. Ice Lake is going to be 10nm sooooo... idk, I'm just curious
 
I assume you mean Obsolete. No, of course it won't be obsolete. The 8700k was released last week, and the 7700k gets the same fps in games today that it got a week ago. CPUs don't get slower; they get faster. So when Ice Lake comes out, the 8700k will still get the same fps that it gets today.

Now, will tech get better? I sure hope so. But just because something better and faster comes along does not mean that everything else is useless.

My advice is not to make purchasing decisions based on product releases that are more than a month or two out. There is always something new on the horizon, and if you always wait for bigger and better, you end up with nothing.

The 8700k is the fastest gaming processor in the world. If you want or need it and you have the available funds, go for it.
 
the 4790 wasn't significantly faster than the 4770
the 6700 wasn't significantly faster than the 4790
the 7700 wasn't significantly faster than the 6700
the 8700 is significantly faster than the 7700 in apps that are multi-threaded for 6 cores.
as for single-core performance, the 7700 is sometimes even faster than the 8700.

so no, I don't reckon the 8700 will become obsolete in 6-10 months
 
Ahh, obsolete. Things will always become obsolete; just get the best system you can afford at the time and enjoy yourself. The only time I would wait on a purchase is if there is going to be a new product release in a month or two. Beyond that, you are always waiting for the next big thing.
 

Is it possible Ice Lake will be 8 cores / 16 threads?

 


Was it another paper launch? I can't seem to find anything at the e-tail stores. Hopefully by the end of the year they will be able to keep up with supply and demand...
 

But how many times are we gonna have to buy a new motherboard? The old days were better: you stuck with a board for a few cycles of chips, but now on the Intel side you have to buy a new board with every new chip they announce... That's costly.
 
I wouldn't expect Ice Lake until 2019. Cannon Lake is next and brings the 10nm process. Ice Lake is supposed to be second-gen 10nm and bring a new architecture. Cannon Lake was originally supposed to come in Q3 2017, and instead we got Coffee Lake. Intel is going to want to give Coffee Lake and Cannon Lake CPUs time to sell, so that leaves Ice Lake in 2019 at best.

Also, a CPU is never obsolete until it no longer fits your needs. My CPU is a 1st Gen Core i7 and it performs perfectly for everything I do...even runs Forza Horizon 3 on high settings.

Tiger Lake is 3rd gen 10nm and depending on what it changes, I would say no earlier than 2020/2021.
 


Why are you asking me this? Did I say anything about availability? It's a big tech launch; of course there are gonna be availability problems. With any big CPU or GPU release there is always limited availability. Why would this be any different?
 
Intel will continue to release new sockets and CPUs every year FOREVER. You are lucky if you can even upgrade the CPU anymore, because the socket changes so quickly.

Almost every generation is roughly 10% faster than the last. So in 4-5 years the new i5 will be faster than your i7 8700K.
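Quick back-of-the-envelope math on that claim; the ~10% per generation and the i7-over-i5 gap below are rough assumptions, not specs:

```python
# Compound a hypothetical ~10% per-generation gain to estimate how
# many generations it takes a new i5 to overtake today's i7.
# Both numbers below are rough assumptions, not measured figures.
per_gen_gain = 1.10   # assumed ~10% uplift per generation (one per year)
i7_head_start = 1.25  # assumed lead of an i7 8700K over a same-year i5

years = 0
i5_perf = 1.0
while i5_perf < i7_head_start:
    i5_perf *= per_gen_gain
    years += 1

print(f"A future i5 catches the i7 after ~{years} generations")
# ~3 generations at 10%/year; closer to 4-5 if real gains are smaller
```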
 


Intel has supported two generations per motherboard since the Core series launched.
But since an i7 has had a lifespan of 5-7 years so far, not even AMD will provide an upgrade path for that long.
 
Next year's Ice Lake is indeed rumored to bring an 8-core/16-thread 9700K/9800K, but this will not suddenly make a hypothetical 8700K-based system seem 'slow' by any means...

(If an existing system is performing at least adequately, I always try to enjoy it as is; but if I needed a system, I wouldn't put it off because of something releasing in '2H2018', which *could be* as late as 14 months from now...)
 


Unless Intel has cancelled Cannon Lake or turns it into a Broadwell-type launch, I strongly doubt Ice Lake will arrive in 2018...
 


Obsolete.

Ice Lake is due out mid-to-late 2018 and will be 10nm; I would probably wait for that if you can.
 


I beg to differ; going from 14nm to 10nm is going to be a MASSIVE difference. Buying Coffee Lake now with Ice Lake on 10nm right around the corner is like buying a 980 Ti on 28nm three months before the 1080 Ti is released on 16nm, which is 50-60% faster with the same TDP and a comparable die size.

An i7 9700k on 10nm will absolutely demolish an 8700k. It won't be the incremental improvement we've seen from the 6700k to the 7700k and now the 8700k (essentially a 7700k with two additional cores). Skylake, Kaby Lake and now Coffee Lake are all 14nm.

I'm waiting for Ice Lake, you do whatever you want with your money.
 
Why would it be 50%?

A 4790k is basically the same in gaming as a 6700k

So when Intel shifted from 22nm to 14nm we didn't gain much. Why should the shrink from 14nm to 10nm make that much of a difference?

Not saying it won't, just curious about your sources concerning that one.
 


I'm not saying it will be 50% faster; I'm pointing out how much speed can come from simply reducing the process size, citing the ~50% difference between the 980 Ti and the 1080 Ti going from 28nm to 16nm.

A 4790k is NOT as fast as a 6700k; a 6700k is around 10% faster due to the process shrink, as can be seen in a CPU-intensive game such as Far Cry 4 at the 1-minute mark in the following video:

4790k 77 FPS
6700k 95 FPS

https://youtu.be/xlohn0JZ98M?t=1m


Edit:

Oh, and the video comparison above is with both processors at stock speed: the 4790k at 4.4 GHz and the 6700k at 4.2 GHz. Considering they both can do 4.9 GHz or so, a 6700k overclocked to the same frequency as an overclocked 4790k will be that much faster.

It's 200 MHz slower and still 10-20% faster in games.

You might have read somewhere that there is no difference in speed at the same frequency, but you have been misinformed. A process size reduction from 22nm to 14nm is a big deal, and hence you see nearly a 20 FPS difference in Far Cry 4 even with the 6700k clocked 200 MHz lower.
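If you want to normalize those Far Cry 4 numbers per clock, here's a rough sketch; it assumes FPS scales linearly with frequency, which real games only approximate:

```python
# Normalize the quoted Far Cry 4 results per GHz to estimate the
# per-clock advantage. Linear FPS-vs-frequency scaling is an
# approximation, not a guarantee.
fps_4790k, ghz_4790k = 77, 4.4   # stock boost clocks from the video
fps_6700k, ghz_6700k = 95, 4.2

per_ghz_4790k = fps_4790k / ghz_4790k   # ~17.5 FPS per GHz
per_ghz_6700k = fps_6700k / ghz_6700k   # ~22.6 FPS per GHz

print(f"Raw difference:       {fps_6700k / fps_4790k - 1:.0%}")          # ~23%
print(f"Per-clock difference: {per_ghz_6700k / per_ghz_4790k - 1:.0%}")  # ~29%
```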

I'm in the same boat. My hex-core i7 4930k @ 4.5 GHz is about 25% slower running Zelda: Breath of the Wild via Cemu than a 7700k at the same frequency (45 FPS vs 60 FPS, for example), and doing simple arithmetic, this correlates tightly with the 3DMark Firestrike CPU scores for both processors. My 4930k does 16.8k CPU at 4.5 GHz; a 7700k does 14k at 4.5 GHz. If I reduce my core count to 4, dividing 16.8 by 6 and then multiplying by 4, I come up with about 11.2k. Meaning, if it were a quad core and not a hex core, it would do about 11k CPU vs the 7700k's 14k, or about the 25% performance difference between 45 and 60 FPS. All of this boils down to the architectural differences between the processors, namely the reduction in process size.
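That per-core arithmetic, spelled out; the linear core-count scaling is an assumption (Firestrike's physics test scales well with cores, but not perfectly):

```python
# Estimate what the 6-core 4930k would score as a quad core, assuming
# the Firestrike CPU (physics) score scales linearly with core count.
score_4930k, cores_4930k = 16800, 6   # i7-4930K @ 4.5 GHz
score_7700k = 14000                   # i7-7700K @ 4.5 GHz, 4 cores

per_core = score_4930k / cores_4930k   # ~2800 points per core
as_quad = per_core * 4                 # ~11200 "if it were a quad core"

print(f"Hypothetical quad-core 4930k score: {as_quad:.0f}")
print(f"7700k advantage at equal cores:     {score_7700k / as_quad - 1:.0%}")
# ~25%, in line with the 45 vs 60 FPS Cemu difference above
```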

So you may think that your 4790k is as fast as a 6700k, but it isn't. And yes, Ice Lake will be AT LEAST 5% (probably more like 10%+) faster than Coffee Lake, and it's less than a year away. So if you DON'T need to build a PC right now, say you're in my position where you have to save anyway, the silver lining is that by the time you save enough, Ice Lake will be out.



 
Following up on my comment above, check out the logic transistor density difference here:

https://images.anandtech.com/doci/11722/kaizad-mistry-2017-manufacturing%281%29_08.png

22nm = 15 MTr/mm²
14nm = 37 MTr/mm²
10nm = 100 MTr/mm²

So there is really no telling how much faster Ice Lake will be compared to Coffee Lake with nearly 3x the transistor density. We might be able to extrapolate some: if 14nm lithography is ~2.5x the density of 22nm with a 10-20% performance difference, maybe another ~2.7x jump will be a 20-30%+ difference.
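Here's that density math worked out; the figures come from the slide above, while the performance extrapolation at the end is pure speculation, since density doesn't map directly onto clocks or IPC:

```python
# Logic transistor density figures from Intel's slide, in MTr/mm^2.
density = {"22nm": 15, "14nm": 37, "10nm": 100}

ratio_22_to_14 = density["14nm"] / density["22nm"]   # ~2.5x
ratio_14_to_10 = density["10nm"] / density["14nm"]   # ~2.7x

print(f"22nm -> 14nm: {ratio_22_to_14:.1f}x density")
print(f"14nm -> 10nm: {ratio_14_to_10:.1f}x density")
# If ~2.5x density came with a 10-20% gaming uplift, a similar ~2.7x
# jump *might* land in the same ballpark -- a guess, not data.
```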

Unless you're Mr. Moneybags and always upgrade just for the hell of it, I would wait for Ice Lake.

More: https://www.anandtech.com/show/11722/intel-reveals-ice-lake-core-architecture-10nm-plus

 
You're missing the point. It's not about upgrading now just for the sake of it. If your CPU needs an upgrade, waiting a year is a long time. And by the time Ice Lake is released, we might be talking about 7nm dies and starting the discussion all over.

But anyway, a few things to add here. Comparing the advancement of GPUs, which become much, much stronger with every generation, and extrapolating that to CPUs doesn't work anymore.
In the beginning you were suggesting 50%; now we're down to 5-10% in the later posts. So should someone wait for a year to gain 5-10%? For ultra-high-refresh-rate gaming, maybe. But for that you need a moneybag anyway. The question was whether the 8700k will be obsolete, and with 5-10% extra on new generations, that's a definite no. 5-10% doesn't render a CPU obsolete and doesn't justify an upgrade.

As for the performance difference -- why are you arguing with synthetic benchmarks? In Firestrike, an i9 or Threadripper CPU gets much better scores than an i7. But would you recommend a Threadripper CPU to anyone for gaming? Don't think so.

For gaming, yes, Far Cry 4 runs faster on an i7-6700k.


But other than that?
[Benchmark charts: CPU comparison, Metro: Last Light, Watch Dogs 2, Battlefield 1, GTA V, and Civilization FPS]

Differences are very small and often within the margin of error. There appears to be a small gap between games released before the Skylake launch and after it, but it is still pretty minimal.
The major speed difference is the iGPU, which improved significantly.

Synthetic benchmarks and transistor density are all nice and look really impressive, but they aren't important and don't have much of an impact if, at the end of the day, actual performance in real-life scenarios is roughly the same.
 


Firstly, I never said that the CPU performance difference would be 50%, now did I? I clearly pointed to the performance difference between a 980 Ti and a 1080 Ti, with everything else being mostly equal, going from 28nm to 16nm.

Second, the performance difference between a 4790k and a 6700k isn't "only 5-10%" as you still erroneously believe, and we can throw all of the benchmarks you linked to right in the trash, because they ALL, every single one of them, use games that are GPU intensive / bound (yes, Fallout 4 with Ultra God Rays will bring any system to its knees; my 980 Ti couldn't muster 60 FPS at 2560x1440 with this crap, and simply disabling it yielded 20 FPS). Per my last post, a 6700k pushes Far Cry 4 at 95 FPS at 4.2 GHz while a 4790k does 77 FPS at 4.4 GHz. That is much more than "5-10%"; 77 to 95 is an 18 FPS gain, over 20%. And considering a 6700k overclocks as well as a 4790k, a 6700k at the same 4.X GHz would be even further ahead, given it is already 20% or more faster in Far Cry 4, a CPU-intensive game, while running 200 MHz slower. Far Cry 4 is NOT a GPU-intensive game, nowhere near as intensive as Metro LL, BF1 or Fallout 4 with God Rays on Ultra (the latter not because of gorgeous graphics but because Bethesda applies AA to its God Rays for whatever reason).

Or we can use the following comparison. I have a 4930k, which is nearly identical to a 4790k in terms of single-core speed:

4790k at 4.4 GHz = 10.5k CPU Firestrike
4930k (with two cores removed) at 4.4 GHz = 10.5k CPU Firestrike

They are both 22nm; Haswell brought no real architectural improvements over Ivy Bridge-E, as we can see. Haswell can overclock to 4.8 GHz or so from what I gather, whereas 4.5 GHz is a good OC for a 4930k, and the most you will see is 4.6-4.7 GHz at or under 1.4v.

Here's Cemu running Zelda: BOTW on the latest 1.11 build, which introduced some new optimizations. In this video, a 7700k at 4.6 GHz is seen holding 37-40 FPS in a demanding area, "Dueling Peaks Stable", with the CPU as the bottleneck (this game isn't GPU-intensive; the creator is using a 980 Ti and pushing the game at 1440p).

For comparison, I get ~25 FPS here; the 7700k is at least 50% faster. Cemu is insanely CPU-intensive, so this is a fantastic benchmark.

Given that a 7700k isn't really that much faster than a 6700k, and given that your 4790k isn't any faster than my 4930k (it just has 4 cores instead of 6), it's easy to see that, yeah, we aren't talking about an incremental 5-10% performance difference here. We're really looking at 20-40% or more between a 4790k and a 6700k. But you can believe whatever you want; just be prepared to have your faulty thinking challenged if you go spouting nonsense on the internet.

https://www.youtube.com/watch?v=BzROgsZHreM&t=1s

My comment, which can be found in the comment section of the video:


Vincent Tuminello
2 days ago
I'm using the Patreon build (paid) of 1.11 and am seeing 25 FPS at Riverside Stable. I don't know if it's due to the architectural difference between my i7 4930k and your 7700k, but I am at 4.5 GHz as well (this CPU does have HT), High Performance / cores unparked, SSD (850 Evo) on Win 7 with nothing else open. Out in the open I see about 45-50 FPS, not the 60 you're seeing. Still an improvement over 1.10 nonetheless, but yeah, I wish I was seeing what you're seeing. I didn't want to upgrade my CPU this early (only 3 years into the build) but yeah, maybe it's time. Sucks. This CPU was basically the fastest thing you could go with in 2014.

https://steamcommunity.com/discussions/forum/11/626329186832966007

https://www.futuremark.com/hardware/cpu/Intel+Core+i7-4790K/review

https://www.3dmark.com/fs/13403818

 
Follow up:

The difference between a 4790k / 4930k (with 2 cores virtually removed) and a 6700k in 3DMark Firestrike CPU, with all processors running at the same frequency:

4790k / 4930k (4 cores): 10.5k
6700k: 13.8k

Hmmm, yeah, that's only 5-10%? Try 30%.

6700k @ 4.4 GHz
https://www.3dmark.com/fs/7712068

4790k @ 4.4 GHz
https://www.futuremark.com/hardware/cpu/Intel+Core+i7-4790K/review

The difference between a 4790k / 4930k running BOTW on Cemu:

4790k / 4930k: 22-25 FPS (I actually see dips down to 22 FPS)
7700k: 37-40 FPS

Yeah, that's only 5-10%? Try 50-60%.
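Putting exact numbers on both comparisons, using the scores and frame rates cited above (midpoints of the quoted FPS ranges are assumed for Cemu):

```python
# Percent differences for the two comparisons above.
def faster_by(new, old):
    """How much faster `new` is than `old`, as a percentage."""
    return (new / old - 1) * 100

# Firestrike CPU scores at the same frequency (links above)
print(f"Firestrike CPU: {faster_by(13800, 10500):.0f}% faster")  # ~31%

# Cemu BOTW, midpoints of the 37-40 and 22-25 FPS ranges
print(f"Cemu BOTW:      {faster_by(38.5, 23.5):.0f}% faster")    # ~64%
```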

Listen, I'm running a 1080 Ti, not a 980 Ti or something weaker, so I'm SEEING the CPU bottleneck in nearly every title, even at 2560x1440 with most of the settings turned all the way up / Ultra:

GTA 5 avg. GPU utilization (144 Hz monitor): 70-80%

Far Cry 4: 80-90%, with dips to 80%, everything Ultra except PCSS shadows, which look like garbage and aren't worth the resources

The Witcher 3, in certain areas (namely large cities): 80-90%

And on the list goes.
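A crude way to read those utilization numbers: if the GPU is only 75% busy while the CPU is pegged, the GPU could in principle deliver about 1/0.75 of the current frame rate. The sketch below assumes utilization maps linearly onto FPS, which is a simplification, and the 100 FPS figure is just for illustration:

```python
# Estimate the frame rate a faster CPU could unlock from average GPU
# utilization. Assumes FPS scales linearly with GPU busy time, which
# is only a rough approximation.
observed_fps = 100        # hypothetical frame rate, for illustration
gpu_utilization = 0.75    # e.g. the 70-80% average seen in GTA 5 above

potential_fps = observed_fps / gpu_utilization
print(f"Headroom: ~{potential_fps - observed_fps:.0f} extra FPS "
      f"if the CPU bottleneck were removed")   # ~33 FPS
```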

The benchmarks you referred to were conducted by outlets that might not know what I and many others with over 300 hours in Fallout 4 know: sure, I can run around outside near Sanctuary and see full utilization on my GPU, indicative of no CPU bottleneck, but as soon as I get near downtown Boston proper, excessive draw calls rear their ugly head and a CPU bottleneck is very present, with 70% GPU utilization the best I can expect. Did the outlets conduct said Fallout 4 benchmark in downtown Boston with God Rays not on Ultra? No, they didn't, because they are mostly clueless.

The reason you think that dropping the lithography from 22nm to 14nm (roughly 2.5x the transistor density, by the way) has only yielded "5-10% at best" is because THE GAMES THAT CLUELESS OUTLETS USE AS BENCHMARKS HAVE THEIR SETTINGS MAXED OUT, AND THEY HAVEN'T REMOVED THE GPU AS A VARIABLE. I mean, it's fairly intuitive, isn't it?

Let's go ahead and look again at what a benchmark looks like with the GPU variable removed:

6700k @ 4.4 GHz (13.8k CPU)
https://www.3dmark.com/fs/7712068

4790k @ 4.4 GHz (10.5k CPU)
https://www.futuremark.com/hardware/cpu/Intel+Core+i7-4790K/review

A 3.3k-point difference. Is this only "5-10%"? No, try 30%.

You don't compare one CPU to the next using a game that is known to be GPU-intensive and then crank all of the settings to Ultra. It's absurd, and it's high time hardware review outlets stopped doing so.
 
