Intel's Future Chips: News, Rumours & Reviews



Then in the same breath you have things like DisplayPort, which updates its spec nearly every quarter, with each new version pushing displays forward. On the opposite end, you have bodies like JEDEC that plod along.
 


If I were a gambling man, I would advise you not to hold your breath for USB 3.1 Type-C to catch on. The main draw right now is charging mobile devices more quickly...not much else. Regular USB 3.2 is likely the way forward, and there are already USB specs that match Thunderbolt for bandwidth without the proprietary schtick that comes with TB; they just aren't mainstream yet.
 


That's because you don't need updates to the base specification that often. Never mind the cost of retooling production for each individual spec revision. Unless there's a glaring need, there isn't much reason for all these specs to constantly add features at a breakneck pace.

And I'll say again: DisplayPort has no purpose. It's DOA outside of PC monitors, as HDMI is ubiquitous in home theaters now and offers pretty much the same feature set at a lower cost. I have no idea what problem DisplayPort was ever trying to solve.
 


Not having to pay a royalty to HDMI: DisplayPort is an open, royalty-free spec rather than one requiring a license.

Also, DP allows up to 8K through a single connection; even the newest HDMI can't do that without two cables.
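
For anyone curious about the actual numbers, here's a rough back-of-the-envelope sketch (assuming 8-bit RGB, no chroma subsampling, no DSC, and ignoring blanking overhead; the link figures are the usual effective payload rates after 8b/10b coding):

```python
# Back-of-the-envelope video bandwidth vs. link payload rates.
# Assumptions: 8-bit RGB (24 bpp), no chroma subsampling, no DSC,
# and blanking overhead ignored, so real requirements are a bit higher.

def video_gbps(width, height, hz, bpp=24):
    """Raw pixel-data rate in Gb/s."""
    return width * height * hz * bpp / 1e9

links = {
    "DP 1.3/1.4 (HBR3 x4)": 25.92,  # effective payload, Gb/s
    "HDMI 2.0": 14.40,              # effective payload, Gb/s
}

for name, res in [("4K60", (3840, 2160, 60)),
                  ("8K30", (7680, 4320, 30)),
                  ("8K60", (7680, 4320, 60))]:
    need = video_gbps(*res)
    verdict = ", ".join(f"{link}: {'fits' if need <= cap else 'needs compression'}"
                        for link, cap in links.items())
    print(f"{name}: ~{need:.1f} Gb/s -> {verdict}")
```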
 


And...who needs 8K again? Call me when 4K becomes standard. And by then, HDMI will have updated the spec to cover that case.
 
There are really no actual performance gains between Sandy Bridge and Kaby Lake.
It's all in clock speed, which you could achieve with Sandy Bridge.

Isn't it funny that Ivy Bridge, Haswell, and Broadwell couldn't hit the same clocks as Sandy Bridge, and now Skylake can? So they can create a fake performance increase by slowly working back up to the same performance they had before, while pretending that their specialized architecture extensions, which aren't used by 99% of applications, are "gains".
 


There is currently no USB spec that is faster than TB, short of the rumored USB 4.0 with 10 GB/s of bandwidth. The fastest available USB is 3.1 at 10 Gb/s, slower than just one direction of a TB link.
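
Worth keeping the units straight here, since lowercase b is bits and uppercase B is bytes. A quick sanity-check sketch, treating the 10 GB/s figure above as the rumored spec and Thunderbolt 3 at its 40 Gb/s link rate:

```python
# Unit sanity check: lowercase b = bits, uppercase B = bytes (1 B = 8 b).
usb31_gen2_gbps = 10        # USB 3.1 Gen 2 signaling rate, Gb/s
tb3_gbps        = 40        # Thunderbolt 3 link rate, Gb/s
rumored_usb_gbs = 10        # the rumored 10 GB/s figure

print(f"USB 3.1 Gen 2:        {usb31_gen2_gbps} Gb/s")
print(f"Thunderbolt 3:        {tb3_gbps} Gb/s")
print(f"Rumored 10 GB/s USB:  {rumored_usb_gbs * 8} Gb/s")
```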

I see what Intel is trying to do with TB. USB was supposed to be an "all in one" port, but there are still plenty of non-USB ports. TB is trying to tie everything into a single port.



I am more of a fan of DP than HDMI, though. There are still plenty of issues with it. I would prefer if DP replaced HDMI, but that probably won't happen since HDMI is backed by major electronics companies (like Panasonic and Sony) and major television/movie studios like Fox and Warner Bros.



http://www.anandtech.com/show/11083/the-intel-core-i3-7350k-60w-review

Yet a 2c/4t CPU is matching an i7-2600K in most areas.

There are plenty of performance gains outside of clock speed. The problem is that the majority of consumers will not see or utilize them. The other problem is that Intel doesn't thrive on consumer chips; its bread and butter is the professional and server/HPC market, which it owns over 90% of, and where it can make 2-10x the margins per CPU.
 
Skylake is 23% faster than Sandy Bridge, clock for clock.

 


About 20% faster on average, clock for clock. Up to 70-80% faster in some benches.

[Chart: clock-for-clock gains over Sandy Bridge]


Also, Kaby Lake can use higher-clocked DDR4 memory, which is useful in memory-bound workloads.
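
To put rough numbers on how IPC and clocks multiply together, here's a quick sketch using the ~20% clock-for-clock figure above and typical single-core turbo clocks as illustrative assumptions:

```python
# Composite uplift estimate: performance ~ (IPC gain) x (clock gain).
# Illustrative assumptions: ~20% IPC from the chart above, plus typical
# single-core turbo clocks (i7-2600K ~3.8 GHz, i7-7700K ~4.5 GHz).

ipc_gain  = 1.20   # Skylake/Kaby Lake vs. Sandy Bridge, clock for clock
clk_sandy = 3.8    # GHz
clk_kaby  = 4.5    # GHz

uplift = ipc_gain * (clk_kaby / clk_sandy)
print(f"Estimated single-thread uplift at stock: ~{(uplift - 1) * 100:.0f}%")
# An overclocked 2600K at 4.5+ GHz closes the clock half of that gap,
# which is why the difference feels small to people still on Sandy Bridge.
```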

The reduction in clocks you mention is a consequence of two things:

1) The whole market moving from desktop to mobile. Mobile drives the market, and foundries have optimized the new process nodes for mobile chips.

2) Die shrinks increase thermal density and make it more difficult to dissipate heat, which limits the max clocks that can be achieved with standard cooling.

Those points aren't exclusive to Intel. AMD has also seen an almost systematic reduction in clocks, and so has IBM.

Although the max clocks achievable on standard cooling have dropped a bit (5057 MHz for the i7-2600K vs. 4860 MHz for the i7-7700K), the clocks achieved on non-standard cooling (water and above) have improved a lot. The worldwide record for the i7-2600K is 6061.24 MHz, whereas Kaby Lake beats that by more than 1 GHz, at 7383.72 MHz.
 


You are correct. I have two main gaming rigs: mine is a 2008-2009 first-generation Core i7 965 Extreme running at 4.15 GHz, playing head to head with my wife's gaming rig (I built her an i7-6700K) and some of my friends' nearly new i7, i5, and Ryzen builds, getting almost the same FPS with the same GPU. On paper everything looks great, but in real-life gaming you won't notice any difference from any enthusiast-tweaked Core i7 running past 4 GHz; at least we haven't, and we have seven Core i7 builds between all of us. Don't look at my SSD performance, lol, it's kinda slow, which is why the desktop and workstation scores are so bad, but it's a gaming rig so I don't care :). Here are the links for my 8-year-old i7 965X build, my wife's, and my friend's days-old Ryzen build; btw I easily get 50+ FPS in every game we play, and he has the new RX 580, so yeah.

here is my build: http://www.userbenchmark.com/UserRun/3291099

my wife's: http://www.userbenchmark.com/UserRun/3457850

friend's new build: http://www.userbenchmark.com/UserRun/3457494
 


You need to read this:
https://www.cnet.com/news/intel-kaby-lake-7th-gen-7700-7600-7350/
 


Which has absolutely nothing to do with his claim.
 
Yes, Kaby Lake isn't any faster than Skylake. It's basically Haswell and Haswell Refresh all over again. :/

 


Not quite. There are multiple improvements in areas that we normal consumers do not benefit from as much. At least not until software takes advantage of them.

For example: cache speed and fetch improvements, memory speed improvements, instruction set improvements, and so on.

We don't see them because software doesn't utilize them. Imagine if software were memory-bandwidth-bound: Kaby Lake has almost 2x the memory bandwidth of Haswell.
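
Quick sketch of where that figure comes from, using theoretical peak bandwidth (transfer rate x 8 bytes x channels) with dual-channel configs; the exact memory speeds are just illustrative assumptions:

```python
# Theoretical peak DRAM bandwidth: transfers/s x 8 bytes per transfer x channels.
# Dual-channel assumed; the DDR4-3200 entry is an overclocked/XMP kit, which is
# roughly where the "almost 2x" comparison comes from.

def peak_gb_s(mts, channels=2, bytes_per_transfer=8):
    return mts * bytes_per_transfer * channels / 1000  # GB/s

for name, mts in [("Haswell, DDR3-1600", 1600),
                  ("Kaby Lake, DDR4-2400 (official)", 2400),
                  ("Kaby Lake, DDR4-3200 (XMP)", 3200)]:
    print(f"{name}: ~{peak_gb_s(mts):.1f} GB/s")
# -> 25.6, 38.4 and 51.2 GB/s respectively
```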
 
Read my post again. I said Kaby Lake isn't much of an improvement at all over Skylake. :/

 



So why should we care about those improvements if we don't benefit from them and they are minimal? It doesn't justify upgrading to a newer CPU.

 


Kaby Lake is faster. It is not Intel's or AMD's fault that software does not take advantage of the improvements available.



I never said you should. However, they are there, and it is always up to software to catch up. Up until now Sandy Bridge was still a viable CPU; software has caught up, and now it is starting to age a bit.

However, the biggest improvements now and for a while, I'd say 2-3 generations, will be around the CPU rather than in it. CPUs from both companies have hit a wall. I see people predicting AMD's next CPU will be a big boost, but I think we will see similar returns to what Intel is getting. The platform around the CPU, though, can improve. Hell, the biggest bottleneck is still storage; even the fastest SSD (Samsung 960) is not nearly as fast as the rest of the components.

I am still, and probably will be for a few more years, running an i5-4670K. I am happy with it and don't see the need for something more. I am sure in a few years software will catch up and an i7 or Ryzen equivalent will be the better choice.
 



You are right. I have a first-generation Core i7 Extreme past 4 GHz and I can play every game I have at ultra without any problems. When gaming, CPU usage is always below 60% with multiple programs running at the same time. That's an 8-year-old CPU. I also have an i7-6700K, and their performance in games is almost the same.
 
@Jimmy

The one thing I think people are overlooking when they say CPUs are hitting a wall is that multi-tasking and parallelism in software are starting to catch on in areas where they were previously not really considered. Software engineers will innovate to improve performance, and when the low-hanging fruit (single/dual-core optimization) is tapped out, it becomes time to start on the bigger work. Once software gains momentum with parallelism, there will be a broader shift overall.

Granted, there are some things that will require an entire ground-up framework rebuild to go parallel, and those will likely drag their feet moving forward, but even things like CAD are looking at going wider to gain boosts in performance due to necessity.
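
To make the "going wider" idea concrete, here's a minimal sketch of embarrassingly parallel work being spread across cores with nothing but Python's standard library; real applications are rarely this clean, which is the hard part:

```python
# Minimal sketch of spreading independent, CPU-bound work across cores.
from concurrent.futures import ProcessPoolExecutor
import math

def heavy_task(n):
    """Stand-in for an independent CPU-bound chunk of work."""
    return sum(math.sqrt(i) for i in range(n))

if __name__ == "__main__":
    chunks = [2_000_000] * 8              # eight independent work items
    with ProcessPoolExecutor() as pool:   # defaults to one worker per core
        results = list(pool.map(heavy_task, chunks))
    print(f"Processed {len(results)} chunks across all available cores")
```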
 
It's faster than Skylake because of its higher clock speed. Again, it's Haswell and Haswell Refresh all over again, with some very minor changes.
 


*cough* HDD speeds are exactly why we attempt to pre-load as much as we can into main system memory. And slow main system memory is why we keep cramming more and more cache directly on the CPU. There's only so much you can do in that area.
 


The problem is that that will not translate into consumer end products for a while. Servers moved past this a while ago, and now almost every single-box server runs multiple server tasks in their own VM environments. And that makes sense: no one would buy an 8-core server for AD, another 8-core server for Exchange, etc. It is better to have independent servers and roles on their own installations, so take an 8-core server and divide it into 2-4 VMs and you save not only money but power costs, which is huge for big companies.

My server room here has one large SAN and 3 VM host boxes. I have 6 Windows servers that handle AD, backups, and other tasks. I do have one independent server box that is mainly just our backup and archive store.

There are only two ways I can see consumer software taking advantage of multiple cores, and both require a pretty drastic rewrite of everything from the OS to the software. One would be to have the application itself utilize all the cores; it is doable, but it still has a lot of hiccups and there are diminishing returns. The second would be VMs: maybe get the OS to host a game or app in its own sandboxed VM, with two or so cores (depending on what you have) dedicated to heavier applications.
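
The diminishing-returns part is essentially Amdahl's law: overall speedup is capped by whatever fraction of the work stays serial. A quick sketch with assumed parallel fractions:

```python
# Amdahl's law: speedup(N) = 1 / ((1 - p) + p / N),
# where p is the fraction of the work that can run in parallel.

def amdahl(p, cores):
    return 1 / ((1 - p) + p / cores)

for p in (0.50, 0.75, 0.95):   # assumed parallel fractions
    row = ", ".join(f"{n} cores: {amdahl(p, n):.2f}x" for n in (2, 4, 8, 16))
    print(f"p = {p:.0%} -> {row}")
# Even at 95% parallel, 8 cores only yields ~5.9x, and most games are
# nowhere near 95% parallel today.
```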

Even so, we still have quite a few years until an 8-core is properly utilized for gaming. Workstation applications are getting there faster, but CAD work and the like are faster using GPU acceleration.



Which is why, even though the first iteration of Optane is meh, I am excited that Intel is starting to push it out. When NVDIMMs become a viable storage solution, they could easily solve a lot of our storage bottleneck issues, and the best part is that they will grow with the CPU: the faster the CPU's memory controller, the faster the storage.

The Athlon had 600 MB/s of memory bandwidth back in 2001. It is high time we erased storage bottlenecks.
 



Optane really isn't going to move the needle much. You still have relatively slow HDDs compared to main memory; there's a reason we pre-load as much as possible.

Now yes, Optane and other tech would help this pre-load process out a TON (and I'm speaking as someone who's played DA:I and had 2-3 minute wait periods on area transitions; stupid 7200 RPM HDD...), but the actual in-game performance impact is basically nil; everything you need should already be in main memory at that point.
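
For what it's worth, the pre-load trick is conceptually just a background thread pulling assets off the slow disk ahead of time; a toy sketch (the asset names and timings here are made up):

```python
# Toy sketch of pre-loading: a background thread pulls assets off the slow
# disk ahead of time so the main loop never stalls on the HDD.
import threading, queue, time

asset_queue = queue.Queue()

def slow_read(name):
    time.sleep(0.5)                       # stand-in for a slow HDD read
    return name, b"asset bytes"

def prefetch(names):
    for n in names:
        asset_queue.put(slow_read(n))

names = ["area2_textures", "area2_audio"] # hypothetical asset names
threading.Thread(target=prefetch, args=(names,), daemon=True).start()

# The main loop keeps doing useful work; by the time an area transition
# needs these assets, they are already sitting in main memory.
for _ in range(len(names)):
    name, data = asset_queue.get()
    print(f"{name} ready ({len(data)} bytes)")
```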
 