Intel's Future Chips: News, Rumours & Reviews



That sounds a bit odd... They cut ties with Imagination to develop their own graphics IP, IIRC, so going to AMD for it now seems strange.

Cheers!
 

Cazalan

Distinguished
Sep 4, 2011
Xeon FPGA details

broadwell_fpga.jpg


http://www.theregister.co.uk/2016/03/14/intel_xeon_fpga/
 

Tommylincon

Commendable
Mar 30, 2016


Oh thanks man, I love AMD stuff.
 


I have to say, the Tower unit is mighty tempting. The specs make it an excellent "lab on wheels" for any project, and at that price it's not that unreasonable.

I am actually pleasantly surprised.

Cheers!
 

Cazalan

Distinguished
Sep 4, 2011


Yeah, the price is reasonable considering the prices Nvidia announced for their Pascal box. Those were something like $12K per unit.
 
At this point, the lack of software more or less dooms x86 on mobile. Throw in the inability to get power draw low enough (which I predicted years ago), the fact that growth in the segment is slowing, and the billion-dollar losses, and it makes sense why Intel would finally throw in the towel.
 


"(via wccftech)"

reliable_sources_small.jpg


So, they should start calling it "Tick, tac, toe"? :p

Cheers!
 


Personally, I couldn't care less. We're going to get our 5% performance boost and call it a day.
 

jed

Distinguished
May 21, 2004


No, the prices on E-model CPUs normally don't drop.
 

TehPenguin

Honorable
May 12, 2016


Thought so, as they still perform splendidly. Thanks!
 


Doubtful. Intel has pretty massive margins on their CPUs, especially since they do all their own fabrication, which makes it cheaper for them than for AMD or Nvidia, who use third-party foundries.
 
Guest
In my opinion, Intel will hit the limit one day. It's very hard to make transistors in ever-smaller dimensions, so they'll use crazy designs and different materials for more performance.
 


There's a reason why there's a TON of research into optical chips, and even into using relays to replace transistors [the idea being that, while relays take FOREVER to throw, you can trivially get processors into the hundred-GHz range]. Traditional CPU design basically dies off after the 7nm node, as we simply can't keep improving performance by adding transistors.

We're literally 4-5 years from the end of performance gains.
 


That is why they are looking at 3D-stacked designs like HMC and 3D XPoint, or even the FinFETs they currently use for CPUs.



Intel already made a fiber-optic silicon chip:

http://www.pcworld.com/article/114778/article.html

Isn't their Omni-Path part of that?
 
The main issue with optical is how you generate the light; it's very power-hungry and causes all sorts of cooling problems. Sure, in a lab you can do it, but on a consumer-grade chip? That's the problem.

I know DARPA has a 100 GHz CPU built around relays. The idea is to clock it insanely high and use very complex on-chip logic to limit, as much as possible, how many relays you need to throw [due to the rather INSANE latency you introduce each time you throw a relay].
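As a toy back-of-envelope (with made-up numbers, not anything from DARPA's actual design), here's why the throw count on the critical path would be the thing to minimise:

```c
#include <stdio.h>

int main(void) {
    /* Purely illustrative numbers, not real relay or DARPA specs. */
    const double clock_hz = 100e9; /* hypothetical 100 GHz clock        */
    const double throw_s  = 1e-9;  /* hypothetical 1 ns per relay throw */
    const double stall_cycles = throw_s * clock_hz; /* cycles lost per throw */

    /* Each extra relay throw on the critical path adds a full stall,
       so halving the throws per operation roughly doubles throughput. */
    for (int throws = 1; throws <= 8; throws *= 2)
        printf("%d throw(s)/op -> ~%.0f stall cycles/op\n",
               throws, throws * stall_cycles);
    return 0;
}
```

Even at a generous 1 ns per throw, every throw costs ~100 cycles at 100 GHz, which is why the surrounding logic has to be so aggressive about avoiding them.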

And of course, Quantum Computing.

My suspicion is that 7nm will be the last cost-effective node to produce on, and you'll see two generations where everyone goes wide before they figure out the software won't cooperate. That's when you'll see the big players [Intel/IBM] throw money at alternative designs.
 

juanrga

Distinguished
BANNED
Mar 19, 2013


Both Intel's and IBM's labs have been throwing money at alternative designs for decades.
 

Vogner16

Honorable
Jan 27, 2014


I believe that as Intel hits this performance wall (2.5% IPC increase each generation now), we will see Intel focus not on improving their core design for better FP or integer performance per se, but rather on ASICs for specialist tasks which take work off the CPU cores. Take AMD's APUs, for example: they integrate a UVD to move video decode from the cores to a specialist section of the silicon. That makes video decode faster than what the CPU could do and leaves the CPU open for other compute tasks.

Of course, this needs software that uses the specialist ASICs on a CPU die (see the sketch below), but Intel has the market by the balls... just snap your fingers!

Security decode ASICs? Already here. Video decode ASICs? Already here. A compression ASIC? This stuff is what's next, not improvements in per-core performance or adding more transistors per core! Sure, they will add a few, but you get my point. ASICs are the future of computing.
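Just to illustrate the "needs software" point above, here's a minimal runtime-dispatch sketch in C (the GCC/Clang x86 feature check is real; the two paths are just placeholders for a hardware-accelerated routine and its software fallback):

```c
#include <stdio.h>

/* Minimal runtime dispatch: take the fixed-function/ISA path only when
   the CPU actually has it, otherwise fall back to plain software.
   Uses GCC/Clang x86 builtins; both paths here are just stand-ins. */
int main(void) {
    __builtin_cpu_init();

    if (__builtin_cpu_supports("aes"))
        puts("AES-NI present: dispatch to the hardware-accelerated path");
    else
        puts("No AES-NI: dispatch to the software fallback");

    return 0;
}
```

The same pattern scales up to bigger offload blocks (video decode, compression): probe for the unit, then route the work to it.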
 