Intel's Future Chips: News, Rumours & Reviews



Intel already had an ASIC for video decode though, QuickSync, which was pretty damn good compared to a lot of the competition.

But you are probably correct. They have hit a wall, and until that wall can be taken down they will have to focus on ASICs.

I am sure though that they will break through the wall some day and continue gaining performance. They have to, otherwise we will never get our holodecks from Star Trek.
 


Who said that adding ASICs wouldn't improve performance? Intel isn't stopping performance gains, merely changing how those gains are implemented! Sure, Skylake is only 5% faster in IPC over Haswell, but it offers better ASIC compute. That works well for people with encrypted systems, who see massive improvements, or for video decoders, etc. New CPUs are still faster, just in different ways! It will mean that new CPU generations are less worth upgrading to for the sake of, say, gaming performance, and more for general compute or work-computer use cases.
 


That's been pretty much true since Nehalem. Even today, OC'd i7 920s hold up pretty dang well, even if they get thrashed in certain workloads. That's pretty much what we've been heading toward.

The REAL question, I think, is at what point you need so many specialized components on the CPU die that it starts to make sense to have a second chip specifically for those workloads, freeing up valuable CPU die space and dropping power consumption. I honestly think we could end up with something akin to the x87 days, where certain workloads are done off the CPU by another chip.
 
Well, that was the same with Haswell to Devil's Canyon:

i7 4770K 3.5GHz/3.9GHz

i7 4790K 4GHz/4.4GHz

So this is not surprising, especially since it is just a refresh of the CPU, giving us a clock boost from a more mature process.

I do hope Zen kicks Intel back into gear, but I don't think it will. Not that it won't be competitive, but rather that I think AMD will also hit the same wall Intel is at with Zen. Until they actually get some new tech to push CPUs to higher limits, I don't see performance improving the way it used to. I would also expect Intel to get there first, since Intel does its own fabrication while AMD will need to rely on GF, who will probably get their process tech from Samsung/IBM, whom I doubt will hand it over right away, much like Samsung had its 14nm for a while before GF got it.
 


The CPU + dGPU combo is handling our needs quite well, which is why it has survived so long. Most things that used to be on PCIe cards have been pulled into the CPU or dGPU. What is left, really? Intel's plan is to couple an FPGA with the CPU to give programmable acceleration. What AMD used for TrueAudio is apparently a generic DSP that could be used for other functions besides audio.

One of the bigger things left, power-wise, is the memory. AMD/Intel/Nvidia are all addressing that now. Within four years, system RAM could be shipping on a mainstream CPU package.
 


Oh ho... That might be interesting news. Well, if we can trust WTFBBQTech.

It seems like Intel hit a wall with IPC, so they are now fully dependent on clocks and specific instruction implementations. Is there anything else Kaby will bring to the table? Like AVX3 or something?

Cheers!
 


The Intel leaks have been fairly accurate. Intel has had a few years now to tweak the 14nm process. I'm expecting higher-than-usual IPC gains, but nothing massive (~10%).

What is even more interesting is that the Broxton platform is expected to show 50% IPC gains over Airmont (Cherry Trail). That is a ginormous gain for Intel's entry-level parts.

[Image: Intel Atom slide]


 
Well, considering how neutered Atoms have been, it doesn't really surprise me they can get 50% out of a new design. What is interesting is the TDP range that thing will be moving in. I wonder if they're specializing the core design around a few instruction sets. After all, Android is the target platform. I wonder if they'll try to grab Apple's attention as well.

Cheers!
 


Indeed, Kaby Lake is the codename for the Skylake refresh, just as Devil's Canyon was for the Haswell refresh. This is the new TICK-TOCK-TOCK strategy: die shrink --> new microarchitecture --> refresh.
 


They did a while back.



Clocks also hit a wall a while ago.

[Chart: CPU clock-speed scaling over time]


It is not exclusive to Intel; it applies to everyone using silicon:

[Chart: Horowitz, silicon frequency/power scaling trends]


The only way Intel currently has to improve performance is to move beyond x86. This is why Intel has spent money and time developing a lot of new ISA extensions: AVX-512F, AVX-512 CDI, AVX-512 {VL, DQ, BW}, ERI, and PFI.
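
Side note: if you want to see which of those extensions your own chip actually reports, they all show up in CPUID leaf 7. A minimal sketch in C, assuming GCC or Clang on an x86 machine (the bit positions are the ones Intel documents for leaf 7, sub-leaf 0):

```c
/* Minimal sketch: query CPUID leaf 7 to see which of the newer ISA
 * extensions (AVX-512 subsets, SHA) the running CPU reports.
 * Assumes GCC/Clang on x86, which provide <cpuid.h>. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Leaf 7, sub-leaf 0 holds the structured extended feature flags. */
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        printf("CPUID leaf 7 not supported\n");
        return 1;
    }

    printf("AVX-512F : %s\n", (ebx & (1u << 16)) ? "yes" : "no");
    printf("AVX-512DQ: %s\n", (ebx & (1u << 17)) ? "yes" : "no");
    printf("AVX-512PF: %s\n", (ebx & (1u << 26)) ? "yes" : "no");
    printf("AVX-512ER: %s\n", (ebx & (1u << 27)) ? "yes" : "no");
    printf("AVX-512CD: %s\n", (ebx & (1u << 28)) ? "yes" : "no");
    printf("SHA      : %s\n", (ebx & (1u << 29)) ? "yes" : "no");
    printf("AVX-512BW: %s\n", (ebx & (1u << 30)) ? "yes" : "no");
    printf("AVX-512VL: %s\n", (ebx & (1u << 31)) ? "yes" : "no");
    return 0;
}
```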
 
Yeah, as far as design goes, x86 reached its peak over a decade ago. Adding more cores, more cache, and more specialized instructions just hides the fact that there really isn't much more you can do with the architecture. You're at the point where the transistor budget you gain from die shrinks goes strictly toward performance that only matters for a minority of use cases, because that's all you can really do.

This is why Intel tried to kill off x86 with the much better designed Itanium lineup. Intel even backed an x86 layer for compatibility purposes, albeit at a 90% performance level. Unfortunately for us, AMD took that opportunity to come up with x86-64, extending the ISA rather than replacing it, leading to Itanium failing in the market.

The problem now remains that any successor to x86 must include both legacy x86 and x86-64 support, in order to achieve backward compatibility with 99.9% of the software people use. And at that point, you're basically just redoing x86. Because Itanium failed, this is an architecture we're basically stuck with for the next decade.
 


I am somewhat surprised that Intel has not tried to do an IA-64 again and see if they can't get x86 compatibility to 100% performance. I am sure it is nearly possible now.

Of course, we would have to wait for 128-bit before anything like that could happen, and I am sure the same thing would happen again. Instead we will just get x86-64-128.
 


But Intel has the same problem: x86-64 has taken root, and would itself require a compatibility layer to be forced out of the marketplace. It just isn't worth it.

Secondly: we're never moving beyond 64-bit at the OS level. There's no need.

I work with people who work with the two largest datasets on the planet: weather/climate prediction and DNA processing. Even they don't believe they'll need anywhere close to 2^64 bytes worth of RAM, let alone 2^128.
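
Just to put a rough number on that, here's my own back-of-the-envelope on how large a full 64-bit address space actually is:

```c
/* Back-of-the-envelope: a full 64-bit address space, in exbibytes. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double bytes = ldexp(1.0, 64);          /* 2^64 bytes */
    double eib   = bytes / ldexp(1.0, 60);  /* 1 EiB = 2^60 bytes */
    printf("2^64 bytes = %.0f EiB\n", eib); /* prints: 16 EiB */
    return 0;
}
```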

Unless Windows stops being the majority OS AND whatever OS replaces it has mainline support for alternative CPU architectures, we're basically stuck with x86-64 for the next few decades, at least. Unless Quantum Computing becomes a thing and everything needs to be re-done from scratch. *shrug*
 
Uhm... Instruction depth is not only about memory space.

Whenever you increase the depth of any operation (by making "bigger" ALUs or FP units and through instruction-set support), you increase precision. The more precision, the smaller the error margin in your calculations. The more complex the calculations, the more error they accumulate, so you compensate with precision depth; doing that protection in software carries a *massive* penalty to cover for error propagation. Think of SHA and other vector operations with *huge* bit widths: the instructions that support them get massive speedups once the accompanying hardware support is there.
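
A toy illustration of the precision point (my own quick sketch, nothing to do with SHA itself): sum the same series at two different bit depths and watch the narrower one drift.

```c
/* Toy demo of error propagation at different precisions: summing 0.1
 * ten million times in 32-bit float vs 64-bit double. The exact answer
 * is 1,000,000; the narrower type drifts much further from it. */
#include <stdio.h>

int main(void)
{
    float  f = 0.0f;
    double d = 0.0;

    for (int i = 0; i < 10000000; i++) {
        f += 0.1f;
        d += 0.1;
    }

    printf("float  sum: %f (error %g)\n", f, f - 1000000.0);
    printf("double sum: %f (error %g)\n", d, d - 1000000.0);
    return 0;
}
```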

So, the RAM argument is not wrong, but I think you're crossing the cables on that one, gamerk.

Cheers!
 


It is pretty evident to me that both the mod and gamerk were talking about the possibility of waiting for a future 128-bit (memory address) ISA and using that transition as an excuse to move away from the serial, non-scalable x86 ISA while providing a compatibility layer for current x86-64. I.e., something like EPIC/Itanium, but done well.

Your mention of FP precision and vector width is completely unrelated to what they were discussing.
 


Giving it a second read, you are correct. I missed the point of why "more RAM" meant "move away from the ISA".

Cheers!
 
The biggest problem Intel has (like the rest of us) is global warming. The emissions-reduction treaties won't leave a lot of room for the massive power increases future chips would require (let alone the hardware that would justify them). Intel was able to kick the can down the road for a while by shrinking their process, but now it looks like they are destined to become an R&D-less production company, like the ones producing toilet paper (unless they move into biotech, but good luck persuading people of its ethicality). I expect a tech crash no later than 2020, when neither Intel nor AMD can any longer hide the fact that their business model has run its course. I'm glad Intel was able to create Sandy Bridge and Skylake and reduce the footprint of our existing capabilities (and am frankly very disappointed in AMD for not doing the same), but time waits for no one, and now it's time to take stock of what we have and make the best of it.

The PC as we know it has been in decline for some time now. Once faster internet becomes common, more and more work will be done with thin clients and passed through data centers, which will no doubt pay Intel and AMD large sums for server farms and massive multiprocessing. I don't think AMD or Intel are dead, but their reputation as innovators must inevitably be shed.
 
The root problem is simple: assuming the fabs skip 10nm (which some are planning to do), 7nm is likely the last die shrink before physics prevents any further progress. Even if 4nm is possible, it's almost certainly not cost-efficient. In two or three years, we're at the end of the line as far as gaining performance simply by adding transistors.

More worrying is the fact that none of the alternative technologies out there are close to being ready yet. I suspect the 7nm node could hang around for well over a decade until quantum computing comes along. And I suspect performance won't move much (if at all) during that time.
 
Sort of a weird tangent, but GPUs will definitely reach that state too. If you think about it, the better and more complex graphics get, the more time and money it takes devs to make them. It's going to reach a point where devs can't cost-effectively make better graphics while keeping profit margins high, and then the GPU companies will slow GPU progress to near a halt because:

1) If low-end GPUs can max out every game, there is no point in a high-end GPU.
2) There is no reason to put a lot of R&D into progress when they can make more money with less R&D.
3) Node limitations, whether they actually cause this halt or are a mere excuse.
 
Technically speaking, we've been at peak graphics since the DX10 days. Sure, we have HDR lighting now, but for the most part the focus has been on new AA modes, variable refresh rate displays, pushing 4K resolution, and compute, rather than new graphical features. The problem is that all the advanced stuff (god rays being a good example) is INSANELY hard to compute, and even today it tanks FPS.

GPUs have progressed linearly with transistor count because that's how rasterization works; it's massively parallel, so more transistors = more performance. But after 7nm, GPUs, like CPUs, are going to hit a brick wall. I wouldn't be terribly shocked if the GTX 1080 is considered viable a decade from now, since I don't see more than two die shrinks for GPUs before they stall out.

Understand: We're hitting a computing brick wall in just a few years.
 
Perhaps indium gallium arsenide chips can help reduce the voltage needed across their junctions, with less static (leakage) power, and less voltage resulting in less dynamic power.
We might even achieve higher clock rates, up to 5 × (1/0.7)^3 ≈ 14.6 GHz, to supplement high IPC.
Nanotechnology brings wonders!
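
For what it's worth, the math I assume is behind those numbers: dynamic power follows the usual CMOS relation, and the 14.6 GHz figure comes from taking a 5 GHz starting point and compounding an ideal 0.7x-per-node frequency gain over three node shrinks.

$$P_{\mathrm{dynamic}} \approx \alpha\, C\, V^{2} f, \qquad f \approx 5\ \mathrm{GHz} \times \left(\tfrac{1}{0.7}\right)^{3} \approx 5 \times 2.92 \approx 14.6\ \mathrm{GHz}$$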
 


Only GloFo is skipping their "10nm" node. Intel, TSMC, and Samsung are all releasing "10nm" nodes. Also, "7nm" is not the end: the ORTC (Overall Roadmap Technology Characteristics) of the latest ITRS (International Technology Roadmap for Semiconductors) goes as far as a "1.8nm" industry label, and Intel is already working on "5nm".

 