Intel's Future Chips: News, Rumours & Reviews



I guess one other thing to keep in mind is the interconnect. Omni-Path should become interesting, but consumers might not see it for a while since we don't quite need it yet.
 

JustEthan

What I don't understand is this: if performance is gained by adding more transistors, why do they keep shrinking the die? The smaller the die, the closer the transistors are to each other and the faster they heat up. If you can fit millions more transistors on a 14nm die, why go down to 7nm in the first place? I understand a smaller die costs less to make than a bigger one, but for one thing they charge the same or more every year for their smaller dies, and by 7nm the transistors are so close to one another that it seems more logical to just keep refining 14nm or 10nm.

This year's 14nm Kaby Lake runs at 4.2 GHz with a 4.5 GHz turbo. I have no doubt that in two more years on 14nm they could easily hit 5 GHz with less heat. I personally think 10nm chips are going to be so hot that I don't know if it will be worth it. Then again, 10nm might be the best node ever, and the wall might only come at 7nm when things get too hot.

I really like the idea of 6- and 8-core 10nm Cannon Lake chips on the LGA 1151 socket. I hope it stays there, because not only am I going to drop a Kaby Lake CPU into my Z170 Deluxe as soon as it's released, it would also be awesome as hell if I could update my BIOS and drop a Cannon Lake into it in a year as well.
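For what it's worth, the usual back-of-the-envelope answer to the heat question is that each shrink lowers switched capacitance and supply voltage, and dynamic power scales roughly as C·V²·f, so a denser chip can still end up cooler at the same clock. A toy sketch of that relation, using made-up scaling factors rather than real process data:

```cpp
// Toy model of dynamic power P ~ C * V^2 * f.
// The 0.7/0.9 scaling factors below are illustrative placeholders,
// not measurements of any real process node.
#include <cstdio>

int main() {
    double C = 1.0, V = 1.0, f = 4.2e9;           // hypothetical "14nm" baseline
    double p14 = C * V * V * f;

    double Cs = 0.7 * C, Vs = 0.9 * V;            // hypothetical shrink
    double p10_same_clock = Cs * Vs * Vs * f;     // keep 4.2 GHz
    double p10_5ghz       = Cs * Vs * Vs * 5.0e9; // push to 5.0 GHz instead

    std::printf("same clock: %.0f%% of baseline power\n", 100.0 * p10_same_clock / p14);
    std::printf("5.0 GHz:    %.0f%% of baseline power\n", 100.0 * p10_5ghz / p14);
    return 0;
}
```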
 

8350rocks



Room-temperature superconductors may not be as far away as we think: http://phys.org/news/2016-09-room-temp-superconductors.html

In addition... UTBB FDSOI is likely viable with FinFETs through ~5nm. Beyond that... well... by the time 5nm gets here and has seen a few revisions, either graphene will have broken through, or those guys messing with cuprates will have cooked up something better.
 

juanrga



This is about patent licensing. It is worth mentioning that Intel already has a similar patent-licensing deal with Nvidia. The point is that both AMD and Nvidia own lots of basic patents for graphics development, so if you want to develop your own graphics tech, you either invest lots of money and years trying to develop new tech that doesn't violate others' patents, or you pay to get the freedom to develop your own graphics hardware without being sued.

This is what the Nvidia-Intel agreement did: Intel paid a generous amount of money to Nvidia, and Nvidia didn't sue Intel over Intel's graphics tech.

On this occasion it seems that AMD is not getting money from Intel; instead, Intel gives AMD permission to license x86 to the Chinese joint venture.
 
Yeah, this is basically a licensing-for-cash deal. That being said, long term, this could end up hurting AMD's APU segment, simply because Intel would be more competitive.

I think this kinda highlights AMD's financial situation in a nutshell.
 

8350rocks



Personally, I doubt this actually hurts AMD, because it is for graphics hardware and not specifically for APU hardware. Considering where they are with a license from Nvidia, I doubt this changes much in that regard.

Additionally, I doubt Intel actually uses AMD tech; I think this is more about covering their bases against lawsuits over patents. AMD is not particularly lawsuit-driven in that regard, but with the bad blood between Intel and Nvidia, a deal with AMD seems to benefit AMD financially, and it also keeps Intel from being trolled in court over graphics patents it does not own.
 
I agree with 8350; further to his point, IIRC, Intel's idea is to build out its own graphics IP. They still need the basic building blocks to start building something, so licensing from AMD or nVidia is the only avenue they have until they have a bigger GFX portfolio to bring to cross-licensing.

Since all the "cool kids" are getting into AI, having a massively parallel architecture is like the entrance ticket to the party. I would imagine the easy path is to start with the current state and go from there?

I do believe it's a YUGE market, so might as well pursue it, right?

Cheers!
 

8350rocks



Well, considering that AMD and Nvidia have spent multiple decades developing their own tech, and the patents for it, the R&D Intel would have to burn to catch up with something new and different would be astronomical, just to build a different, and perhaps not better, mousetrap.
 
That I won't argue, because I'd say it is true.

What I will argue is "intent". Why does Intel want to develop GPUs? For consumers? No way in hell. The professional world? Maybe? They took a stab at parallel, GPU-like stuff with Larrabee and KL (is it out there already?). Now the market is shifting towards AI and massively parallel tasks that are actually very comfortable running on GPUs, so I would imagine Intel does indeed want to gain expertise there. And they still have the money. They already burned a huge amount on their mobile ventures and it made only a small dent in their financial records, so why not pursue something even more profitable in the long term? Big risks turn into big gains as well, right?

Cheers!
 

8350rocks



That I can see being a possibility... but if they went into professional GPUs, that would probably hurt Nvidia; I'm not sure it would help or hurt AMD, given their relatively small market share. I think it would hurt indirectly... but I am not sure it would be successful.
 

juanrga

Intel doesn't use GPUs for parallel compute workloads. Intel takes a different approach: the Phi line, which is successfully competing with Nvidia in HPC, and the Xeon line in servers.

https://software.intel.com/en-us/blogs/2014/02/27/how-intel-avx-improves-performance-on-server-application

https://software.intel.com/en-us/articles/how-intel-avx2-improves-performance-on-server-applications

Intel is not spending lots of resources developing the AVX-512F, CDI, AVX-512 {VL, DQ, BW}, ERI, and PFI ISA extensions and then adding them to CPUs just to move away and use GPUs (which are inferior). Artificial intelligence has been mentioned in the posts above, but Intel has already shared its strategy for that (a short sketch of what these vector extensions do in practice follows the quote below):

Our industry needs breakthrough compute capability — capability that is both scalable and open — to enable innovation across the broad developer community. Last week at the Intel Developer Forum (IDF), we provided a glimpse into how we plan to deliver the industry-leading platform for AI:

  • Commitment to open source with optimized machine learning frameworks (Caffe, Theano) and libraries (Intel® Math Kernel Library – Deep Learning Neural Network, Intel Deep Learning SDK).
  • Disclosure of the next-generation Intel® Xeon Phi™ processor, codename Knights Mill, with enhanced variable precision and flexible, high-capacity memory.
  • Today we completed the acquisition of Nervana Systems, bringing together the Intel engineers who create the Intel® Xeon® and Intel Xeon Phi processors with Nervana’s machine learning experts to advance the AI industry faster than would have otherwise been possible.

https://newsroom.intel.com/editorials/intel-deliver-leading-platform-artificial-intelligence/
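Coming back to the AVX point above, here is a minimal sketch (my own illustration, not taken from the linked articles) of the kind of loop those vector extensions speed up: one AVX2 instruction adds eight floats at a time instead of one. Assumes an AVX2-capable CPU and a compiler flag such as -mavx2.

```cpp
// Minimal AVX2 example: sum an array eight floats at a time.
// Build with e.g. g++ -O2 -mavx2. Illustrative only.
#include <immintrin.h>
#include <cstddef>
#include <cstdio>

float sum_avx2(const float* data, std::size_t n) {
    __m256 acc = _mm256_setzero_ps();
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8)                        // 8-wide vector adds
        acc = _mm256_add_ps(acc, _mm256_loadu_ps(data + i));

    float lanes[8];
    _mm256_storeu_ps(lanes, acc);                     // reduce the 8 lanes
    float total = 0.0f;
    for (float v : lanes) total += v;
    for (; i < n; ++i) total += data[i];              // scalar tail
    return total;
}

int main() {
    float xs[20];
    for (int i = 0; i < 20; ++i) xs[i] = 1.0f;
    std::printf("%.1f\n", sum_avx2(xs, 20));          // prints 20.0
    return 0;
}
```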
 
Yeah, Juan. They are not taking GPUs seriously initially because they have no patents to pursue that route. Intel, just like any other company out there, will not disclose what it internally considers "weaknesses" when presenting a strategy for market penetration and product line-up.

Also, what I read from those articles is that Intel, currently, is not considering expanding much beyond the Phi line in terms of parallel processing or multi-rack approaches. I'm not saying that is good or bad; I'm saying they are not implying, let alone explicitly stating, that it is the only avenue they will pursue. They'll keep trying to make Phi better in terms of processing capability, with extensions or even a potential overhaul of instruction prioritization.

There are a lot of avenues they can take, and Phi is just one of many.

That being said, yes, they could just be licensing from AMD or nVidia to keep producing lame GPUs with basic functionality for office and light professional use. That is also part of the scope, but why focus on the small, most probable stuff when you can think outside the box?

Cheers!
 

juanrga

Intel is not pursuing the GPU route to compute because that is a dead route. Patents aren't the reason, because Intel has had access to Nvidia GPU patents since 2004. Intel is attacking the HPC/compute market with the Xeon Phi line because this is much better than GPUs.

In any case, Charlie from SA has just stated that the rumor about Intel getting access to AMD GPU patents is fake, and that the source (Kyle) has other motivations.
 
Well, Charlie has his own motivations as well. I won't put my trust in either, TBH.

If you're dead set on thinking that, Juan, so be it. I won't try any further to justify why Intel could indeed want a bigger patent portfolio around GPUs.

Cheers!
 

8350rocks



You say that, but as a member at SemiAccurate you know quite well that ROCm is a vibrantly thriving movement in the world of HPC. You also know it is entirely based on GPU utilization, streamlining that aspect through heterogeneous compute and asynchronous compute.

For those unaware: http://semiaccurate.com/2016/11/14/amd-rocm-botlzmann-v1-3/
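For anyone who hasn't touched ROCm: below is a minimal sketch of the kind of GPU code it targets, a CUDA-style vector-add kernel written against AMD's HIP API and built with hipcc. This is just an illustration assuming a working ROCm install; error checking is omitted.

```cpp
// Minimal HIP (ROCm) vector add: illustrative only, assumes ROCm/hipcc is set up.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void vec_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one GPU thread per element
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    float *da, *db, *dc;
    hipMalloc((void**)&da, n * sizeof(float));
    hipMalloc((void**)&db, n * sizeof(float));
    hipMalloc((void**)&dc, n * sizeof(float));
    hipMemcpy(da, a.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, b.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // 256 threads per block, enough blocks to cover n elements.
    hipLaunchKernelGGL(vec_add, dim3((n + 255) / 256), dim3(256), 0, 0, da, db, dc, n);

    hipMemcpy(c.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    std::printf("c[0] = %.1f\n", c[0]);  // expect 3.0

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}
```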
 
Fair enough.

So the original rumor was blown out of proportion: it was spun as a licensing deal when it was really just a holiday bundle.

Still, that doesn't take away the fact that the licensing deal will end in 2017, and they must be in talks with either nVidia or AMD to renew it or make a new one.

Cheers!
 

juanrga



Kyle from HardOCP posted a rumor about Intel getting access to AMD GPU patents. The rumor was amplified by the usual hype sites (like WCCFTECH). Then Charlie at SA wrote that Kyle invented the rumor. And now the link above confirms that the rumor was fake and that the Radeon Group and Intel are only teaming up for a holiday game bundle.
 

8350rocks



Still does not address the expiring agreement with Nvidia.
 