AMD CPU speculation... and expert conjecture

Has AMD enabled Mantle support for the R7 250X? It could be a good test of how much the Pentium bottlenecks the 250X under DX11. Right now, only BF4 and Thief are Mantle-enabled, and according to recent XbitLabs testing, Thief may be system-memory bound (in DX11 at least; not sure about Mantle).

I dislike Thief as a benchmark because AMD does so poorly compared to Intel without Mantle in the first place. I want to wait and see whether AMD/Eidos fixes AMD's performance issue before I trust the Mantle benchmarks for that title.
 


Given that the BD arch has gone through two refreshes now and the low-hanging fruit has largely been taken care of, is it POSSIBLE the delay in replacing PD on HEDT is due to AMD working on a totally new arch? A bit out there, I know, but it would explain the delay on that front.



Yields are ALWAYS bad at first. The real story would be if the node eventually settles down and starts getting acceptable yields for those parts.

Also, Intel isn't getting rid of its fabs, cost be damned. Just a decade ago there were dozens of companies running leading-edge fabs; now we're down to basically four: Samsung, Intel, GF, and TSMC. You think Intel is giving up that piece of the pie?
 
I love it when people agree with me:

http://techreport.com/review/26239/a-closer-look-at-directx-12

The question, then, almost asks itself. Did AMD's work on Mantle motivate Microsoft to introduce a lower-level graphics API?

When I spoke to AMD people a few hours after the D3D12 reveal, I got a strong sense that that wasn't the case—and that it was developers, not AMD, who had spearheaded the push for a lower-level graphics API on Windows. Indeed, at the keynote, Microsoft's Development Manager for Graphics, Anuj Gosalia, made no mention of Mantle. He stated that "engineers at Microsoft and GPU manufacturers have been working at this for some time," and he added that D3D12 was "designed closely with game developers."

I then talked with Ritche Corpus, AMD's Software Alliances and Developer Relations Director. Corpus told me that AMD shared its work on Mantle with Microsoft "from day one" and that parts of Direct3D 12 are "very similar" to AMD's API. I asked if D3D12's development had begun before Mantle's. Corpus' answer: "Not that we know." Corpus explained that, when AMD was developing Mantle, it received no feedback from game developers that would suggest AMD was wasting its time because a similar project was underway at Microsoft. I recalled that, at AMD's APU13 event in November 2013, EA DICE's Johan Andersson expressed a desire to use Mantle "everywhere and on everything." Those are perhaps not the words I would have used if I had known D3D12 was right around the corner.

The day after the D3D12 keynote, I got on the phone with Tony Tamasi, Nvidia's Senior VP of Content and Technology. Tamasi painted a rather different picture than Corpus. He told me D3D12 had been in the works for "more than three years" (longer than Mantle) and that "everyone" had been involved in its development. As he pointed out, people from AMD, Nvidia, Intel, and even Qualcomm stood on stage at the D3D12 reveal keynote. Those four companies' logos are also featured prominently on the current landing page for the official DirectX blog:

Tamasi went on to note that, since development cycles for new GPUs span "many years," there was "no possible way" Microsoft could have slapped together a new API within six months of Mantle's public debut.

Seen from that angle, it does seem quite far-fetched that Microsoft could have sprung a new graphics API on a major GPU vendor without giving them years to prepare—or, for that matter, requesting their input throughout the development process. AMD is hardly a bit player in the GPU market, and its silicon powers Microsoft's own Xbox One console, which will be one of the platforms supporting D3D12 next year. I'm not sure what Microsoft would stand to gain by keeping AMD out of the loop.

I think it's entirely possible AMD has known about D3D12 from the beginning, that it pushed ahead with Mantle anyhow in order to secure a temporary advantage over the competition, and that it's now seeking to embellish its part in D3D12's creation. It's equally possible AMD was entirely forthright with us, and that Nvidia is simply trying to downplay the extent of its competitor's influence.

So we're down to "he said, she said" on the timeline here, though I stand by my theory. There's no way AMD wasn't aware of DX12 cooking, and a three-year lead time on the API is typical given the time it takes to develop the hardware built around it.

As for the actual API changes:

Direct3D 12 . . . [unifies] much of the pipeline state into immutable pipeline state objects (PSOs), which are finalized on creation. This allows hardware and drivers to immediately convert the PSO into whatever hardware native instructions and state are required to execute GPU work. Which PSO is in use can still be changed dynamically, but to do so the hardware only needs to copy the minimal amount of pre-computed state directly to the hardware registers, rather than computing the hardware state on the fly. This means significantly reduced draw call overhead, and many more draw calls per frame.
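To make that concrete, here's a rough sketch of building one of those immutable PSOs, using the interface names Microsoft eventually shipped in the public D3D12 headers. The root signature and compiled shader blobs are assumed to exist already, and the state values are purely illustrative:

[code]
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Build one immutable pipeline state object up-front. Root signature and
// shader bytecode are assumed to exist elsewhere; values are illustrative.
ComPtr<ID3D12PipelineState> CreateOpaquePSO(ID3D12Device* device,
                                            ID3D12RootSignature* rootSig,
                                            D3D12_SHADER_BYTECODE vs,
                                            D3D12_SHADER_BYTECODE ps)
{
    D3D12_GRAPHICS_PIPELINE_STATE_DESC desc = {};
    desc.pRootSignature = rootSig;
    desc.VS = vs;
    desc.PS = ps;
    desc.RasterizerState.FillMode = D3D12_FILL_MODE_SOLID;
    desc.RasterizerState.CullMode = D3D12_CULL_MODE_BACK;
    desc.BlendState.RenderTarget[0].RenderTargetWriteMask = D3D12_COLOR_WRITE_ENABLE_ALL;
    desc.DepthStencilState.DepthEnable = TRUE;
    desc.DepthStencilState.DepthWriteMask = D3D12_DEPTH_WRITE_MASK_ALL;
    desc.DepthStencilState.DepthFunc = D3D12_COMPARISON_FUNC_LESS;
    desc.SampleMask = 0xFFFFFFFF;
    desc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
    desc.NumRenderTargets = 1;
    desc.RTVFormats[0] = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.DSVFormat = DXGI_FORMAT_D32_FLOAT;
    desc.SampleDesc.Count = 1;

    // The driver bakes this entire state block into hardware commands once,
    // at creation time; at draw time the app just calls SetPipelineState().
    ComPtr<ID3D12PipelineState> pso;
    device->CreateGraphicsPipelineState(&desc, IID_PPV_ARGS(&pso));
    return pso;
}
[/code]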

This is a huge one, and something I've been hoping for for a while now. Moving on:

Direct3D 12 introduces a new model for work submission based on command lists that contain the entirety of information needed to execute a particular workload on the GPU. Each new command list contains information such as which PSO to use, what texture and buffer resources are needed, and the arguments to all draw calls. Because each command list is self-contained and inherits no state, the driver can pre-compute all necessary GPU commands up-front and in a free-threaded manner. The only serial process necessary is the final submission of command lists to the GPU via the command queue, which is a highly efficient process.
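A sketch of what that submission model looks like in practice, again using the shipped header names; device, queue, allocator and PSO creation are assumed to have happened elsewhere, and the draw assumes a shader that generates vertices from SV_VertexID:

[code]
#include <d3d12.h>

// Each worker thread records its own self-contained command list against a
// pre-built PSO; only the final ExecuteCommandLists call is a serial step.
void RecordWork(ID3D12GraphicsCommandList* list,
                ID3D12CommandAllocator* allocator,
                ID3D12PipelineState* pso,
                ID3D12RootSignature* rootSig)
{
    list->Reset(allocator, pso);                 // start recording; no inherited state
    list->SetGraphicsRootSignature(rootSig);
    list->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    list->DrawInstanced(3, 1, 0, 0);             // draw arguments are baked into the list
    list->Close();                               // list is now ready for submission
}

void SubmitFrame(ID3D12CommandQueue* queue,
                 ID3D12CommandList* const* lists, UINT count)
{
    // The only serialized step: hand the pre-recorded lists to the GPU.
    queue->ExecuteCommandLists(count, lists);
}
[/code]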

D3D12 takes things a step further with a construct called bundles, which lets developers re-use commands in order to further reduce driver overhead:

In addition to command lists, Direct3D 12 also introduces a second level of work pre-computation, bundles. Unlike command lists which are completely self-contained and typically constructed, submitted once, and discarded, bundles provide a form of state inheritance which permits reuse. For example, if a game wants to draw two character models with different textures, one approach is to record a command list with two sets of identical draw calls. But another approach is to "record" one bundle that draws a single character model, then "play back" the bundle twice on the command list using different resources. In the latter case, the driver only has to compute the appropriate instructions once, and creating the command list essentially amounts to two low-cost function calls.
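The two-character example above comes out roughly like this as a sketch: the bundle is assumed to have been created with D3D12_COMMAND_LIST_TYPE_BUNDLE (with a matching bundle allocator), the frame list is assumed to already have the same root signature and descriptor heap bound, and the descriptor handles are placeholders:

[code]
#include <d3d12.h>

// Record one bundle that draws a character, then "play it back" twice with
// different textures bound on the calling command list.
void DrawTwoCharacters(ID3D12GraphicsCommandList* frameList,
                       ID3D12GraphicsCommandList* bundle,
                       ID3D12CommandAllocator* bundleAllocator,
                       ID3D12PipelineState* pso,
                       ID3D12RootSignature* rootSig,
                       D3D12_GPU_DESCRIPTOR_HANDLE textureA,
                       D3D12_GPU_DESCRIPTOR_HANDLE textureB)
{
    // Record the reusable draw work once.
    bundle->Reset(bundleAllocator, pso);
    bundle->SetGraphicsRootSignature(rootSig);
    bundle->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    bundle->DrawInstanced(36, 1, 0, 0);          // placeholder character mesh
    bundle->Close();

    // Play the bundle back twice; the descriptor table set on the calling
    // list is inherited by the bundle (root signatures match), so each
    // playback samples a different texture.
    frameList->SetGraphicsRootDescriptorTable(0, textureA);
    frameList->ExecuteBundle(bundle);

    frameList->SetGraphicsRootDescriptorTable(0, textureB);
    frameList->ExecuteBundle(bundle);
}
[/code]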

So now, instead of one massive command stream, we have both state-independent command lists and easily reusable bundles.

Instead of requiring standalone resource views and explicit mapping to slots, Direct3D 12 provides a descriptor heap into which games create their various resource views. This provides a mechanism for the GPU to directly write the hardware-native resource description (descriptor) to memory up-front. To declare which resources are to be used by the pipeline for a particular draw call, games specify one or more descriptor tables which represent sub-ranges of the full descriptor heap. As the descriptor heap has already been populated with the appropriate hardware-specific descriptor data, changing descriptor tables is an extremely low-cost operation.
In addition to the improved performance offered by descriptor heaps and tables, Direct3D 12 also allows resources to be dynamically indexed in shaders, providing unprecedented flexibility and unlocking new rendering techniques. As an example, modern deferred rendering engines typically encode a material or object identifier of some kind to the intermediate g-buffer. In Direct3D 11, these engines must be careful to avoid using too many materials, as including too many in one g-buffer can significantly slow down the final render pass. With dynamically indexable resources, a scene with a thousand materials can be finalized just as quickly as one with only ten.
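A minimal sketch of the heap-and-table flow described above; the root-signature layout, resource creation, and the descriptor writes themselves are assumed to have happened elsewhere:

[code]
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create one shader-visible heap; SRV descriptors get written into it up-front.
ComPtr<ID3D12DescriptorHeap> CreateSrvHeap(ID3D12Device* device, UINT capacity)
{
    D3D12_DESCRIPTOR_HEAP_DESC desc = {};
    desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
    desc.NumDescriptors = capacity;
    desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;

    ComPtr<ID3D12DescriptorHeap> heap;
    device->CreateDescriptorHeap(&desc, IID_PPV_ARGS(&heap));
    return heap;
}

// Per draw, point a root descriptor table at a sub-range of the heap. This is
// extremely cheap: the descriptors themselves were written long ago.
void BindMaterialTable(ID3D12GraphicsCommandList* list,
                       ID3D12DescriptorHeap* heap,
                       UINT firstDescriptor,   // start of this material's descriptors
                       UINT descriptorSize)    // from GetDescriptorHandleIncrementSize()
{
    ID3D12DescriptorHeap* heaps[] = { heap };
    list->SetDescriptorHeaps(1, heaps);

    D3D12_GPU_DESCRIPTOR_HANDLE table = heap->GetGPUDescriptorHandleForHeapStart();
    table.ptr += static_cast<UINT64>(firstDescriptor) * descriptorSize;
    list->SetGraphicsRootDescriptorTable(0, table);
}
[/code]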

Every engine that uses deferred rendering techniques just got a massive speed boost.

Think about memory management, for example. The way DirectX 11 works is, if you want to allocate a texture, before you can use it, the driver basically pre-validates that that memory is resident on the GPU. So, there's work going on in the driver and on the CPU to validate that that memory is resident. In a world where the developer controls memory allocation, they will already know whether they've allocated or de-allocated that memory. There's no check that has to happen. Now, if the developer screws up and tries to render from a texture that isn't resident, it's gonna break, right? But because they have control of that, there's no validation step that will need to take place in the driver, and so you save that CPU work.

A speed hack, but one that makes sense. That being said, I would expect more game-breaking bugs to slip through to release as a result, since fewer problems are going to get caught by the driver during development. Personally, I'd keep the validation around just for debugging, but have a way to disable it for release builds.
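Something like this purely hypothetical engine-side helper is what I have in mind (names invented for illustration, not part of any real API):

[code]
#include <cassert>
#include <unordered_set>

// Track which textures the engine has made resident and assert before use in
// debug builds; the check compiles away in release, so it costs nothing there.
class ResidencyTracker {
public:
    void OnMakeResident(const void* texture) { resident_.insert(texture); }
    void OnEvict(const void* texture)        { resident_.erase(texture); }

    void ValidateBeforeDraw(const void* texture) const {
#ifndef NDEBUG
        // Debug builds: catch "rendering from a non-resident texture" early,
        // instead of letting it break mysteriously on the GPU.
        assert(resident_.count(texture) != 0 && "texture not resident");
#else
        (void)texture;  // release builds: no validation, no CPU cost
#endif
    }

private:
    std::unordered_set<const void*> resident_;
};
[/code]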

Overall, I'm liking the improvements on the DX front, though it's clear nothing new graphically is getting pushed this go-around. Like DX10 and DX11, this is more or less a release to fix the low-hanging fruit within the API.
 

8350rocks



Considering how tight-lipped they are being about HEDT other than "it IS coming"...I would not write it off entirely...

They did hire back Jim Keller...for all we know, they turned the mad scientist loose in the laboratory and said, "go play..."
 

juanrga



It is not ironic; it is just ridiculous that you need to invent what others say. This is what I said to you about HEDT on 27 March 2014 at 11:46:47:



And then I explained to you why AMD has not abandoned HEDT. You ignored my question and ignored the rest of my post. You prefer to invent what others say.



Before, you claimed that we were fooled by Nvidia's marketing team. Now it is even worse: we cannot do (in your own words) "simple math". Poor IBM, they have purchased a completely outdated interconnect for OpenPower when a two-year-old interconnect was much better... :sarcastic:

Before, I believed that you simply hadn't read the article. Now, the fact that you continue ignoring the relevant parts (despite my mentioning localization) implies otherwise.
 

juanrga



28nm is confirmed by an AMD engineer working on the Carrizo SoC. Whether a 20nm update will be released later is unknown at this time.

 

juanrga



Ritche Corpus confirms again that Microsoft was informed about MANTLE "from day one" and that parts of Direct3D 12 are "very similar" to AMD's API. It also confirms that no game developer knew of DX12's existence. This explains why most game developers are currently supporting MANTLE; they wouldn't do it if DX12 had been on the menu "from day one". EA DICE's Johan Andersson confirms he didn't know of DX12's existence even at the MANTLE presentation: "Those are perhaps not the words I would have used if I had known D3D12 was right around the corner". This confirms AMD's old claim that Microsoft was not planning any DX12 back then, and that DX12 is Microsoft's answer to MANTLE.

A few days ago Nvidia said that DX12 is the "result of four years of collaboration between Microsoft and NVIDIA." Now Tamasi reduces the time span to "more than three years". Tamasi claims Microsoft couldn't have slapped together a new API within six months of Mantle's public debut. True, if DX12 was developed from scratch; but if DX12 is essentially MANTLE, then six months is enough, especially since Microsoft was aware of MANTLE before the public presentation, which extends the time span beyond six months.

"according to NVIDIA, like AMD's Mantle, DX12 requires AMD's GCN architecture". Again this agrees with DX12 being derived from MANTLE.

A few days ago we were told that any modern Nvidia card supports DX12, but now Nvidia admits that "full support of DX12" will require second-gen Maxwell and first-gen Pascal GPUs. This contrasts with AMD talking at GDC about full support of DX12 on any GCN card.

Interesting as well that Microsoft decided to demo the new DX12 by porting an Xbox One (aka GCN) game. Unsurprising if DX12 is essentially MANTLE.

LINKS:

http://www.techpowerup.com/199086/amd-demonstrates-full-support-for-directx-12-at-game-developer-conference.html

http://www.fudzilla.com/home/item/34353-directx-12-will-get-some-hardware-specific-features

http://www.overclockersclub.com/news/35864/
 

juanrga



You can believe that the Moon is made of cheese. No problem with that. But don't invent what others believe, especially when they can provide links showing that you are inventing.
 
Ritche Corpus confirms again that Microsoft was informed about MANTLE "from day one" and that parts of Direct3D 12 are "very similar" to AMD's API. It also confirms that no game developer knew of DX12's existence. This explains why most game developers are currently supporting MANTLE; they wouldn't do it if DX12 had been on the menu "from day one". EA DICE's Johan Andersson confirms he didn't know of DX12's existence even at the MANTLE presentation: "Those are perhaps not the words I would have used if I had known D3D12 was right around the corner". This confirms AMD's old claim that Microsoft was not planning any DX12 back then, and that DX12 is Microsoft's answer to MANTLE.

Look back at comments from devs following every DX reveal: they aren't told about the API changes until after the spec gets finalized. Hence the Winter 2015 date for support: figure 18-24 months for a game to go through development, so if the API specs were released in full TODAY, the first wave of games built around it would hit Winter 2015.

A few days ago Nvidia said that DX12 is the "result of four years of collaboration between Microsoft and NVIDIA." Now Tamasi reduces the time span to "more than three years".

...Really going to nitpick this?

Tamasi claims Microsoft couldn't have slapped together a new API within six months of Mantle's public debut. True, if DX12 was developed from scratch; but if DX12 is essentially MANTLE, then six months is enough, especially since Microsoft was aware of MANTLE before the public presentation, which extends the time span beyond six months.

No, you couldn't, actually. It takes a LOT of work to correctly handle the low-level internals involved in talking to devices. Even if you knew the exact high-level API calls and result sets, it would still take well over a year to reverse-engineer something OS-level that copies their behavior. Then again, I guess someone who's never been involved in API development or device drivers would know more than me on this topic... Never mind that DX12 includes FAR more than just the API improvements, as you'll see in the coming weeks...

"according to NVIDIA, like AMD's Mantle, DX12 requires AMD's GCN architecture". Again this agrees with DX12 being derived from MANTLE.

A few days ago we were told that any modern Nvidia card supports DX12, but now Nvidia admits that "full support of DX12" will require second-gen Maxwell and first-gen Pascal GPUs. This contrasts with AMD talking at GDC about full support of DX12 on any GCN card.

Reading fail. Specifically, the API improvements (the low-level processing, etc.) can be done on current HW via the driver, since they're mostly API-level and pipeline improvements. The other improvements to the API, which haven't yet been announced, will require new HW. Even on AMD cards.

Then again, I find it funny that AMD, who allegedly claimed they knew nothing about DX12 before it released (despite having their logo plastered all over DX during its reveal) now claims they support the entire spec, before the entire spec has even been announced.

Interesting as well that Microsoft decided to demo the new DX12 by porting an Xbox One (aka GCN) game. Unsurprising if DX12 is essentially MANTLE.

DX12 has been cooking for over three years. Wouldn't it make sense that, knowing DX improvements were coming, they chose a GPU that would work with the improvements the new API brings? Especially since MSFT has been gung-ho about getting all their platforms (including Windows Phone) on a unified platform? Hence why I know AMD knew about DX12: supporting it was likely a requirement for getting into the XB1 in the first place.

You can try and spin it any way you want, Juan: you're wrong, again. Then again, given how you were among the first to jump on the "there will be no new version of DX" bandwagon when AMD said so a few months back, I guess that really shouldn't come as much of a shock.
 

Cazalan


Well, I put up the numbers, so either correct them or show how they're wrong. That's normally how discussions go. BTW, you're the only one perpetuating the 12x claim. Most review sites dug deeper and found that was an NVLink 2.0 thing, meaning it's so far off it's hardly worth mentioning. It looks great on marketing slides, though, comparing 2018 tech with 2010 tech.

I didn't say it was completely outdated. The speed is more than adequate for what it needs to do. I just don't want people reading this to think that Nvidia is the only one that can do this. The technology is readily available. TSMC is already mass-producing 32GT/s serial ports TODAY. You can run whatever protocol you want over that: PCIe, XAUI, InfiniBand, Dragonfly, and in the future NVLink, or whatever custom protocol you want.

Besides, this is the OpenPower Consortium. Why do you keep saying IBM bought it? If IBM had bought it, wouldn't they call it PowerGPULink or something? It was an open collaboration between IBM and Nvidia. IBM licensed its technology to Nvidia, and Nvidia made an interface that complies with IBM's specifications. I'm guessing zero money changed hands, as it benefits both companies.
 

Cazalan

AMD worked on encapsulating HyperTransport over an InfiniBand connection. That may still be their high-end plan.

"HyperTransport Connector Specification: These specifications add a physical layer complement to the HNC specification, and define compact, high-performance connectors for use with specially designed high - performance twinax cables. The specifications define a right - angle female cable connector , male cable connector and mezzanine connector, allowing the implementation of HyperShare - native switchless network fabrics based on Torus topologies . The solution delivers a combination of node scalability, minimized implementation costs, power efficiency and low latency performance not possible with conventional switched networks."


 

juanrga



Nvidia's Tamasi: D3D12 had been in the works for "more than three years" (longer than Mantle), and "everyone" had been involved in its development.

AMD's Ritche Corpus: AMD shared its work on Mantle with Microsoft "from day one". AMD received no feedback from game developers that would suggest AMD was wasting its time because a similar project was underway at Microsoft.

EA DICE's Johan Andersson: Those are perhaps not the words I would have used if I had known D3D12 was right around the corner.



DX12 is MANTLE in disguise; no wonder the AMD logo is on it.

I understand the frustration of all those who have been talking nonsense about MANTLE for months. The same guys now praise [strike]MANTLE[/strike] DX12.
 

juanrga



The math was done and shown before: both raw bandwidths and percentages/ratios over PCIe 3 and PCIe 4, using both 16- and 32-lane PCIe slots. I suppose you missed all that math.



If you ignore all the links that I gave, and if you ignore the whole Internet, then yes, I am alone. Moreover, it is very easy to obtain the 12x by oneself: NVLINK is more than 12 times faster than the PCIe slot in your mobo, but Nvidia's marketing team rounded it to "12".
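For scale, using only the 16 GB/s figure for a PCIe 3.0 x16 slot quoted from AnandTech below, a 12x advantage would require an aggregate NVLink bandwidth of roughly

$$12 \times 16\,\mathrm{GB/s} \approx 192\,\mathrm{GB/s}$$

which presumably compares the bandwidth of several NVLink links added together against a single x16 slot.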



First you said that NVLINK is only 25% faster than PCIe and basically unneeded. Wrong. Then you said that InfiniBand is already faster than NVLINK and thus NVLINK is outdated. Wrong. You insist again, but you continue to be wrong. Let me quote the same source that you used before:

NVLink, in a nutshell, is NVIDIA’s effort to supplant PCI-Express with a faster interconnect bus. From the perspective of NVIDIA, who is looking at what it would take to allow compute workloads to better scale across multiple GPUs, the 16GB/sec made available by PCI-Express 3.0 is hardly adequate. Especially when compared to the 250GB/sec+ of memory bandwidth available within a single card. PCIe 4.0 in turn will eventually bring higher bandwidth yet, but this still is not enough. As such NVIDIA is pursuing their own bus to achieve the kind of bandwidth they desire.

[...]

To pull off the kind of transfer rates NVIDIA wants to accomplish, the traditional PCI/PCIe style edge connector is no good; if nothing else the lengths that can be supported by such a fast bus are too short. So NVLink will be ditching the slot in favor of what NVIDIA is labeling a mezzanine connector

[...]

With all of that said, while NVIDIA has grand plans for NVLink, it’s also clear that PCIe isn’t going to be completely replaced anytime soon on a large scale. NVIDIA will still support PCIe – in fact the blocks can talk PCIe or NVLink – and even in NVLink setups there are certain command and control communiques that must be sent through PCIe rather than NVLink. In other words, PCIe will still be supported across NVIDIA's product lines, with NVLink existing as a high performance alternative for the appropriate product lines.

I agree with him. NVLINK is much, much faster than PCIe. I also agree with the IBM engineers in their choice of NVLINK over the much slower future PCIe 4.0.



I purchased an AMD APU just last week. Would I call it JuanAPU or something? I guess not.

If you had read the links that I gave before, you would know that NVLINK was designed by Nvidia. The "NV" in NVLINK is from NVidia. I also gave the name of the person who designed it. IBM purchased NVLINK and is adding it to the OpenPower platform.

IBM is also praising NVLINK as a "significant contribution" to the OpenPower platform. I suppose you still believe that IBM had never heard of PCIe or InfiniBand before. LOL
 

griptwister

*drools* The blue side has DDR4 on their roadmap... Maybe I should switch to the dark side? After all, I'm sure the X99 boards are going to be pure eye candy.

I come here religiously. (I go to church at least once a week, just like I visit this topic.) Haha, but seriously, I keep hoping to find new info, and there's nothing. It's quite depressing. The only cool things I'm hearing about are that GPU prices inflated 100% and that I could buy two 780 Tis for half the price of a Titan Z, lol.

I really hope we hear something on Excavator over the summer. I'm stacking some cash for Black Friday :D

Meanwhile, I'll pick up a mechanical keyboard, Kraken 7.1s, a PSU, and possibly some bling for my PC: some sleeved cable extensions and some Corsair fans.

Any word on the next GPU processes? Are we going to see 20nm or 22nm for GPUs this year?
 

juanrga



And what about this link?

http://www.brightsideofnews.com/news/2014/3/18/exclusive-amds-carrizo-apu-details-leaked---soc-for-the-mainstream.aspx

I gave it before, and it mentions DDR4 support for the Carrizo APU (Excavator-based).
 
[strike]DX12[/strike] MANTLE is [strike]MANTLE[/strike] DX12 in disguise; no wonder the AMD logo is on it.

I understand the frustration of all those who have been talking nonsense about [strike]MANTLE[/strike] DX12 for months. The same guys now praise [strike]DX12[/strike] MANTLE.

Fixed it.
 

colinp



Let me try to understand what you're trying to say here.

You're saying that Microsoft went to AMD with their proposals for DX12 first. Then AMD, who are so poorly resourced and ineffective that they couldn't organise a booze-up in a brewery, somehow managed to steal MS's idea, develop their own version of it, make it look sufficiently different that MS can't sue them into oblivion (different enough that AMD have even made indications they want to open-source it), and make it available on current versions of Windows. And that they managed to do all this and release Mantle a full 18 months ahead of DX12, despite Microsoft's programming resources dwarfing AMD's?

Best April Fools I've heard all day.
 

truegenius

Pascal Islands Launching in 2016 – Nvidia Gets Acquired by AMD in a Brilliant Reverse TakeOver

http://wccftech.com/amd-private-aquires-nvidia-reverse-takeover-releasing-ipo/

What?

Does it mean no competition?
What's going on?
NSA and web encryption chaos, DX11 vs. Mantle vs. DX12, Facebook bought WhatsApp, some gaming tech, some gaming community, AMD and Nvidia got heterogenified (don't know what I just said)...
Now can anyone explain to me the meaning of life?
 


And an AMD engineer had lunch with an MS engineer:

AMD: "Dude, DX is so bloated... Do something about it".
MS: "Uhm... My Execs won't approve this... You know... Money".
AMD: "Maybe I can, since our CPUs suck at the moment, we need less CPU overhead. I'll call you back"

No?

The truth lies somewhere between AMD getting the XB1 contract for their APUs and the need for a less CPU-intensive API. The timing would be around the Durango prototypes with the APUs.

Cheers! :p
 