AMD CPU speculation... and expert conjecture

So far AMD have shown what can be achieved on an APU, despite its limitations. The APU's performance is largely hindered by its antiquated bus and memory controller; if AMD finds a way to unlock it, doubling that bandwidth or going beyond, its APUs will be light years ahead of anything anyone else can produce of similar calibre. The Steamroller FX, as alluded to, doesn't exist because AMD are playing to their strength: the APU represents something feasible to pursue, because the potential is there waiting to be unlocked.
 


PCI-E 2.0? Heck, PCI-E 1.1 hasn't even been saturated for non-Titan cards yet. Bandwidth of the bus isn't a huge problem at this point.
 

logainofhades

Titan
Moderator
An HD 7970 and GTX 680 push the limits of 1.1, but the performance difference still isn't huge vs 2.0 or 3.0. I would want 3.0 for a Crossfired 290 or 290X, since the CF bridge is gone from those and the data passes across the PCI-E lanes instead.
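
For a rough sense of scale, here's the per-direction bandwidth of an x16 slot by generation (a back-of-the-envelope sketch that counts only line-encoding overhead, not protocol overhead):

```python
# Approximate per-direction bandwidth of a PCIe x16 slot, by generation.
# Assumption: only line-encoding overhead is counted (8b/10b for gens
# 1-2, 128b/130b for gens 3-4); real-world protocol overhead is ignored.

GENS = {
    # generation: (per-lane rate in GT/s, encoding efficiency)
    "1.1": (2.5, 8 / 10),
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
    "4.0": (16.0, 128 / 130),
}

def x16_bandwidth_gbs(gen: str, lanes: int = 16) -> float:
    rate, eff = GENS[gen]
    return rate * eff * lanes / 8  # GT/s -> GB/s (8 bits per byte)

for gen in GENS:
    print(f"PCIe {gen} x16: {x16_bandwidth_gbs(gen):.1f} GB/s per direction")
# PCIe 1.1 x16: 4.0, 2.0: 8.0, 3.0: 15.8, 4.0: 31.5 GB/s
```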
 

juanrga

Distinguished
BANNED


I am referring to the big plan of migrating away from current discrete cards to the ultra-high-performance APU that Dally and his team are designing. Irrelevant details, such as whether X is released in a given year or is delayed one or two years, don't change the overall long-term plan.

This is the same as with AMD and their plan to go from CPU+GPU to the "Step 3" APU (check the former AMD slide). The big plan has not changed, only details: Kaveri was not the successor of Trinity as initially planned by AMD; Richland was introduced in between due to Kaveri delays. Nvidia is introducing Pascal and Erista for similar reasons.



That is per lane. PCIe 4.0 will max at 256 GT/s aggregate for a 16-lane slot, about 31.5 GB/s, whereas NVLINK will max at 200 GB/s. That is up to 6x more than PCIe 4.0 and up to 12x more than current PCIe 3.0.

The difference is larger for multi-GPU configurations where NVLINK will max at 800GB/s (for four GPUs), whereas a PCIe switch divides the total bandwidth between the GPUs:

In a multi-GPU system, the problem is compounded if a PCIe switch is used. With a switch, the limited PCIe bandwidth to the CPU memory is shared between the GPUs.

Moreover, it provides other advantages, such as improved energy efficiency: about 2x.
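
Working those ratios out (a rough sketch; the 200 GB/s figure is Nvidia's claimed maximum, not a measured value, and the switch case assumes worst-case even sharing):

```python
# Rough check of the NVLINK vs PCIe ratios above.

PCIE3_X16 = 15.75   # GB/s per direction, PCIe 3.0 x16
PCIE4_X16 = 31.5    # GB/s per direction, PCIe 4.0 x16
NVLINK_MAX = 200.0  # GB/s, Nvidia's claimed maximum per GPU

print(f"vs PCIe 4.0: {NVLINK_MAX / PCIE4_X16:.1f}x")  # ~6.3x
print(f"vs PCIe 3.0: {NVLINK_MAX / PCIE3_X16:.1f}x")  # ~12.7x

# Behind a PCIe switch, the single upstream link to CPU memory is
# shared, so the worst-case per-GPU share shrinks with GPU count:
def per_gpu_share(upstream_gbs: float, gpus: int) -> float:
    return upstream_gbs / gpus

print(f"{per_gpu_share(PCIE4_X16, 4):.1f} GB/s per GPU, 4 GPUs on one uplink")
```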



The same source that you quote and whose image you link agrees with me:

To pull off the kind of transfer rates NVIDIA wants to accomplish, the traditional PCI/PCIe style edge connector is no good; if nothing else the lengths that can be supported by such a fast bus are too short. So NVLink will be ditching the slot in favor of what NVIDIA is labeling a mezzanine connector, the type of connector typically used to sandwich multiple PCBs together (think GTX 295). We haven’t seen the connector yet, but it goes without saying that this requires a major change in motherboard designs for the boards that will support NVLink. The upside of this however is that with this change and the use of a true point-to-point bus, what NVIDIA is proposing is for all practical purposes a socketed GPU, just with the memory and power delivery circuitry on the GPU instead of on the motherboard.

I already explained before why both Intel and AMD are migrating directly to APUs, whereas Nvidia needs the intermediate step of a socketed GPU:

Discrete GPU Card --> socketed GPU --> ultra-high-performance APU



I am not referring to the overall balance, which is still positive, but to the huge losses Intel takes per month due to their empty fabs, which forced them to open those fabs:

http://www.brightsideofnews.com/news/2013/11/21/intel-finally-opens-up-fabs2c-details-mobile-push.aspx
 

Cazalan

Distinguished



Yes, we get that circuits will become more integrated over time. That's just how computing has evolved since day one. It is not some big special plan that was devised; that's just how electronics is.




Yes, I specifically called out per-lane speeds. You are aware that you can make a 32-lane PCIe slot, right? It's supported by the standard. You are feeding into the marketing claims hook, line, and sinker. Nvidia is touting a late-2016 link that is slightly faster than PCIe 4.0. The higher-speed NVLink you're referring to is a 2.0, which will be even later, maybe late 2018.



Without a ZIF (zero insertion force) socket and latching arm, it's not a socket to me. That's a board-to-board connector, very much like 2x PCIe.




$9B pocketed after all expenses are paid is more than just positive. That's incredible.

How is an empty fab a huge loss? There are no employees in it. They have no expenses besides land taxes. The property still has value that will continue to grow as the USA becomes more populated.

The world doesn't stand still. Every business model changes over time. When every silicon vendor had their own fabs there was no need to offer services to other companies. Now that they've all gone fabless the opportunity presents itself. Why would Intel just give all that business away to TSMC/GF?
 

juanrga

Distinguished
BANNED


The PCIe bottleneck is only one of the problems, and the least of them. Upgrading to a new PCIe revision or to a new interconnect such as NVLINK doesn't solve the power-wall problem for exascale. I have already mentioned plenty of times what the solution is.

The Nvidia engineer who designed NVLINK is the same one who is not using any dGPU for Nvidia's exascale supercomputer project. AMD's chief engineer has also selected an APU and rejected any dGPU for AMD's supercomputer. Thus either you think you know more than both of them or, alternatively, you could read my dozen posts on the topic that explain why a dGPU doesn't work for exascale compute.

Their APUs are not connecting to each other "over ancient HT 3.0 or PCIe". The Nvidia APU docs mention an interconnect of 150 GB/s. The AMD docs mention one of 40--100 GB/s. This is bandwidth available to each APU, not shared among all of them.
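
The difference matters because a shared bus divides its bandwidth across all nodes, while dedicated links don't. A minimal illustration (only the 150 GB/s figure comes from the cited docs; the 4-node count is a hypothetical example):

```python
# Shared bus vs dedicated per-APU links (illustrative sketch).

def effective_bw(link_gbs: float, nodes: int, shared: bool) -> float:
    """Bandwidth each APU actually sees."""
    return link_gbs / nodes if shared else link_gbs

# A shared PCIe 3.0 x16 uplink diluted across 4 APUs:
print(effective_bw(15.75, 4, shared=True))   # ~3.9 GB/s each
# Dedicated links as in the Nvidia APU docs:
print(effective_bw(150.0, 4, shared=False))  # 150 GB/s each
```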

The NVLINK presented now seems to be derived from their future APU-APU interconnect.
 

juanrga

Distinguished
BANNED


Except that it is about how exascale is substantially different, instead of a mere extrapolation (integration) of current tech.



Even comparing NVLINK to a 32-lane PCIe slot, PCIe continues to be much slower. If you believe that IBM purchased NVLINK because Nvidia's marketing team fooled them, rather than because it is better than PCIe, then that is funny.



Both Anand and I know what a socket is. Wasn't that evident?



The link given above already explains why Intel is opening its empty fabs to competitors.
 

jdwii

Splendid
I like the 250X. I can now build a machine for $400 ($402) with all new parts, including a Haswell Pentium dual-core and a 250X, and it can play all games at 1080p medium with really good performance. Plus, Intel's cheap boards are now worth getting; I like the $50 Asus board (ASUS H81M-K) I got. This will make one heck of a PC for my brother, and it has an amazing upgrade path, so if he wants one of the fastest processors in the world he can get it.
I might run some gaming benchmarks later, but I can tell it's definitely a superior machine compared to the A10 7850K APU in gaming, while feeling a little slower in general multitasking and applications.
 

Cazalan

Distinguished

It's only 25% faster than PCIe 4.0. That's hardly anything.

An Infiniband link is 25 GT/s, so NVLink (20 GT/s) is actually slower than Infiniband, which has been around for several years now. So either NVidia didn't want to pay Intel for that, or they couldn't, and were forced to reinvent the wheel.



Anything containing a cavity into which an inserted part can fit is a socket. So yes, a mezzanine card (Pascal) can be called socketed, and a CPU is socketed, and even a PCI card is socketed.

In computing we tend to be a little more specific with terminology, but whatever floats your boat. Either way, it's an entire PCB plugging into another PCB, just like a dGPU card.


Yeah, I read the article when it came out. Nowhere did it mention huge losses. They're capitalizing on new opportunities.
 

Has AMD enabled Mantle support for the R7 250X? It could be a good test to see how much the Pentium bottlenecks the 250X with DX11. Right now, only BF4 and Thief are Mantle-enabled. According to recent Xbitlabs testing, Thief may be system-memory bound (in DX11; dunno about Mantle).

Of course it'll be better than the 7850K... in GPU performance, given current prices. But if you could get a 7850K from Microcenter (which only applies to people near an MC), the favor tips towards the Kaveri.
 

truegenius

Distinguished
BANNED


i7-4960X ? :miam:
 

juanrga

Distinguished
BANNED


It is more like 200% faster than even a single 32-lane PCIe 4.0 slot, and much more in multi-GPU configurations.
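
Checking that figure (a sketch; the hypothetical x32 slot is assumed to run at double the x16 rate, and 200 GB/s is the claimed NVLINK maximum quoted earlier):

```python
# "200% faster than a 32-lane PCIe 4.0 slot" check.
pcie4_x32 = 31.5 * 2   # ~63 GB/s for a hypothetical x32 slot
nvlink_claim = 200.0   # GB/s, claimed maximum
print(f"{(nvlink_claim / pcie4_x32 - 1) * 100:.0f}% faster")  # ~217%
```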

Years ago, Nvidia collaborated with Mellanox to reduce Infiniband bottlenecks.

Later, Nvidia rejected the usual Infiniband interconnect in favor of a newer Dragonfly interconnect for the modules in their exascale supercomputer. Of course, both Infiniband and Dragonfly are too slow for APU-APU interconnects, so Nvidia developed NVLINK for that and has now presented it as a CPU--GPU and GPU--GPU interconnect.

Before, you pretended that IBM is full of idiots who purchased NVLINK when PCIe is about the same. Now you pretend that IBM is full of idiots who purchased NVLINK when Infiniband is much better. Funny, because IBM has been using Infiniband for years, including in the latest Power7 HPC clusters...



Right, "huge losses" were mentioned by me. They use terms such as "doom and gloom" and "dismaying investors" that "have to worry less about Intel having empty fabs". The article summarize the problem already in the first paragraph. Thus I doubt that you did read the article.
 

8350rocks

Distinguished
@juanrga:

Ironically, you are telling me EXACTLY the opposite of what AMD is telling me: HEDT is NOT dead...they are just waiting for a better time to get things going.

I would trust word from AMD over your speculations any day of the week and twice on Sunday. They may not launch a new HEDT product this year...but it IS coming.
 

truegenius

Distinguished
BANNED

how you see it: [image: 47883684.jpg]

how I see it: [image: Oneeternitylater.jpg]

 

vmN

Honorable


I'm pretty sure it is still gonna be 28nm. That's all GloFo can offer currently.
 

jdwii

Splendid


I'm pretty sure it's going to be on 28nm this time around. I can't wait to see what's coming after Excavator. I'm actually guessing AMD won't be releasing any high-end processors until this module design is over with. I'm not too sure what's happening at AMD; I know some engineers did want a high-end processor out, but things did not work out. I see the business guys still overrule the engineering department at AMD; I guess that's better than the marketing guys doing it.
 

Cazalan

Distinguished

You're failing at simple math here. NVlink 1.0 is 20GT/s. PCIe 4.0 is 16GT/s. That makes it 25% faster.

Infiniband is 25GT/s. What's worse is that you're comparing tech 2+ years into the future with what has already been in shipping products for a while now.

The even-farther-future NVlink 2.0, where the 50GT/s speed comes in (the 200% you're talking about), is like 2019 tech. Infiniband already has 50GT/s slated for 2017. BTW, this is just one competing solution.
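
For what it's worth, the "25%" and "200%" figures in this argument come from different baselines: per-lane signalling rate vs aggregate link bandwidth. Comparing per lane only (a sketch using the rates quoted in this thread):

```python
# Per-lane signalling rates behind the "25%" vs "200%" dispute.

def pct_faster(a_gts: float, b_gts: float) -> float:
    return (a_gts / b_gts - 1) * 100

print(pct_faster(20, 16))  # NVLink 1.0 vs PCIe 4.0:  25.0% per lane
print(pct_faster(50, 16))  # NVLink 2.0 vs PCIe 4.0: 212.5% per lane
print(pct_faster(25, 20))  # Infiniband vs NVLink 1.0: 25.0%
```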

You're the only one talking about idiots here. Those are your words, not mine. NVidia wants to brand their own version of a high-speed serial interconnect; that's fine. It makes it easier to market, and it's a solid tie-in with IBM's powerful sales channels.

Just don't misrepresent the facts and make it sound like something completely new and unique to NVidia. There are downsides to NVLink as well; I'll just mention one minor one to start. 99.9999% of PCs in 2016 will not have an NVLink port for this thing to plug into, so they will have to make a PCIe version. Conveniently, NVLink ports can talk PCIe, so you see how similar they really are.


"While many have claimed doom and gloom on the PC market, Intel has proven that they haven't seen nearly as much volatility in the market as some of their competitors have."

Are you even reading the same article? That's saying despite all the fear mongering, they haven't really been that affected. Their revenue actually went up last year.

The phrasing "while many have claimed" means they are taking a neutral stance. People love doom and gloom articles. They get lots of hits, and apparently you feed right into them as well.

 


http://www.tomshardware.com/news/intel-haswell-e-devils-canyon-pentium-anniversary-edition-broadwell,26326.html

I can't find where I saw the frequency changes for the Haswell refresh, but Devil's Canyon is just part of that refresh. I doubt it is some insane OC CPU, but they might be improving its thermal transfer.



I think AMD made a smart move in removing the need for a bridge and finally using just the PCIe interface. NVidia should honestly do the same.
 

anxiousinfusion

Distinguished


Not my speculation but BSN's:
Given we know that Carrizo will be manufactured at the 28nm node, albeit on a different process than Kaveri, it is unlikely AMD adds more compute units to the GPU.

Apparently Global Foundries, as well as TSMC, are having difficulty producing 20nm/16nm processes that suit the needs of high-performance parts.
 