AMD CPU speculation... and expert conjecture


juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


AMD has stated that they want Mantle on Linux and OS X. During the APU13 Developer Summit, one of the slides stated: "Mantle + SteamOS = powerful combination!"

 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


I already disclosed Intel's, Nvidia's, and AMD's plans to do that... the technical details of each of their APUs, and when they expect them to be ready.



This was discussed before. You can continue ignoring the quotes, the links, and the slides from Intel, but Intel has already said that they will sell the new coprocessor discrete cards only to legacy customers who already have a discrete card and want to upgrade without changing the whole platform. The socket version will be the main product and will be faster than the discrete card version. Check again the slide that you keep ignoring. Check the part of the slide that says "future"...



CPUs are a kind of LCU; therefore CPUs are always homogeneous. The Xeon Phi is not an LCU because it has new cores optimized for throughput. This is the reason why Intel developed a new core: no existing CPU core was suitable for the Phi.

According to HSA/Intel/CUDA terminology, a heterogeneous system is a combination of LCU + TCU. A combination of two CPUs (LCU + LCU) is not a heterogeneous system.

I already said that I don't know why Samsung has joined OpenPOWER, and I don't want to speculate. But I do know about Samsung's server plans:

http://www.xbitlabs.com/news/cpu/display/20120403115803_Samsung_Hires_AMD_Server_Chip_Specialists_to_Build_ARM_Server_Offerings.html

http://www.fool.com/investing/general/2014/01/01/samsung-setting-up-to-challenge-intel-in-servers.aspx
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Yes, that is very old news.

Why do you believe that the old plans to develop Steamroller/Excavator Opteron CPUs were canceled?

Why do you believe that the Warsaw CPU is aimed only at legacy customers who want to upgrade their old Opteron CPUs without changing the whole platform?

Why do you believe that there are no Steamroller/Excavator FX CPUs?
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Funny how Microsoft and the OGL group are releasing updated APIs to reduce an "inexistent overhead" (you told us) and compete with a 'mantel' API that "nobody uses" (you also told us). You also told us that close-to-metal is evil, but now that Microsoft is talking about bringing close-to-metal capabilities to a future DX, you remain silent about it...

Check this:

We’ve spoken to several sources with additional information on the topic who have told us that Microsoft’s interest in developing a new API is a recent phenomenon, and that the new DirectX (likely DirectX 12) will substantially duplicate the capabilities of AMD’s Mantle.

http://www.extremetech.com/gaming/177407-microsoft-hints-that-directx-12-will-imitate-and-destroy-amds-mantle

Funny to see Microsoft replicating an API that you have been criticizing since day one.

About your claim that 'mantel' is dead, check this:

This has already been read in several circles as to be the death knell for AMD’s custom API, but such claims are short-sighted, for multiple reasons.

Do you recall when you criticized Mantle on grounds of backward compatibility? Check this:

if DirectX 12 closely maps to Mantle, it’s possible that today’s GCN GPUs will still support it. Alternately, if it doesn’t, then Mantle may become the preferred option for ensuring broad backwards compatibility.

And of course, check this part about what AMD's long-term plans have been since day one:

And if Mantle is ultimately subsumed by DirectX — so what? When I first talked to AMD about the next-generation API at APU13, the developers candidly told me that the long-term goal was to get Microsoft and the Khronos Group in charge of OpenGL to adopt a Mantle-like architecture. The entire point of Mantle was to spur game development and drive the adoption of a better standard.

I believe it was palladin who predicted something like this for Mantle. Kudos to him.
 
Funny how Microsoft and the OGL group are releasing updated APIs to reduce an "inexistent overhead" (you told us) and compete with a 'mantel' API that "nobody uses" (you also told us). You also told us that close-to-metal is evil, but now that Microsoft is talking about bringing close-to-metal capabilities to a future DX, you remain silent about it...

I said, specifically, that there was about a 10-15% worst-case overhead when using draw calls, but noted that *most* developers are smart enough to use as few as possible to get around that limitation. That's typical of the overhead for most APIs.

As for use, two games are adding it after the fact via patches, and only one is being developed for it natively. As long as NVIDIA and Intel don't support it, the API isn't going anywhere.

As for backward compatibility between Mantle and DX12, don't count on it. No matter what, MSFT will be adding API calls that current GPUs simply will not be able to understand, let alone process. Never mind that it's unlikely they could resist the temptation to throw a few more bells and whistles into the API...

And I again stress the drawback of APIs like Mantle: for all the extra performance, the more you have to write special code for a given architecture, the less likely the less popular architectures are to get supported. Point being, if you go too low-level, it's quite possible developers would start dropping support for cards at a much faster rate than they do now if they are required to go that low to make the API work, which would kill the PC gaming industry. The benefit of abstraction is that you can write code once that will support multiple architectures, and that abstraction is the reason APIs like DX and OGL were created in the first place.
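To make the draw-call overhead point concrete, here is a minimal sketch of the batching idea described above. Mesh, Material, drawInstanced, and drawBatched are hypothetical names for illustration, not from any real API: the idea is that grouping objects that share state into one instanced submission pays the per-call cost once per group instead of once per object.

#include <map>
#include <utility>
#include <vector>

// Hypothetical types standing in for real graphics-API objects.
struct Mesh {};
struct Material {};
struct Object { const Mesh* mesh; const Material* mat; };

// Stub for an instanced submission: in a real API, this is where the
// per-call driver overhead is paid, once for the whole group.
void drawInstanced(const Mesh&, const Material&,
                   const std::vector<const Object*>&) { /* submit batch */ }

// A naive renderer issues one draw call per object, paying the
// overhead N times; batching groups objects that share state and
// pays it once per group instead.
void drawBatched(const std::vector<Object>& scene) {
    std::map<std::pair<const Mesh*, const Material*>,
             std::vector<const Object*>> batches;
    for (const Object& o : scene)
        batches[{o.mesh, o.mat}].push_back(&o);
    for (const auto& [state, group] : batches)
        drawInstanced(*state.first, *state.second, group);
}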
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


So basically word games. What is a BlueGene/Q then? LCU or TCU? Because it's used for both.

Heterogeneous; I don't see how that common term can be misconstrued.

http://www.merriam-webster.com/dictionary/heterogeneous

<the seating in the hall was a heterogeneous collection of old school desk chairs, wood and metal folding chairs, and even a few plush theater seats>

Or Wikipedia, if you prefer.

"Heterogeneous computing refers to systems that use more than one kind of processor. "

 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780


I should have been a little clearer. I am an extreme case and am running Gentoo: bdver2 CFLAGS on my FX-8350 and btver2 on my A4-5000.

After enabling bdver2 on my Gentoo system and playing around with things, the worst-case scenario I found was a little over a 10% increase in x264, and I believe it has a lot of hand-tuned assembly, so compiler options shouldn't make a huge difference there. LAME was 60%+ faster than in Windows, Blender twice as fast, etc.
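For readers unfamiliar with what that tuning looks like: -march=bdver2 tells GCC it may use every instruction set Piledriver supports (AVX, FMA, XOP, etc.) instead of a lowest-common-denominator baseline. A minimal sketch, where the saxpy kernel is a generic illustration and not from any of the benchmarks above:

// Build generically:          g++ -O2 saxpy.cpp
// Build tuned for Piledriver: g++ -O2 -march=bdver2 saxpy.cpp
// With -march=bdver2, GCC may auto-vectorize this loop using AVX and
// contract the multiply-add into FMA instructions.
#include <cstddef>

void saxpy(float a, const float* x, float* y, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}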

I haven't done any objective measurements on my A4-5000 yet, but I can already tell you that between Gentoo and Windows, if you told me they were different APUs in each one, I'd absolutely believe it, with the slower one being in the Windows machine.

But the point I'm getting at is that the Windows ecosystem is seriously harming AMD's CPU performance, and being forced into it is not good for AMD at all. Even if you cannot compile the game itself, the entire base system is using AVX/SSE4+, etc., as opposed to Windows 7's minimum requirement of a "1 GHz CPU". There are 1 GHz Pentium IIIs out there which only support MMX and SSE1.

And generally (Haswell seems to be an exception in things like Dolphin, which shows the kind of gains I actually expect out of FX on Gentoo), Intel doesn't gain much from recompiling for its architecture. The Intel Gentoo guys generally don't expect more than a 10% performance increase, and I was surprised to see gains as low as 10% in some applications on FX.

I've ranted about this before, but the point I'm getting at is that with Mantle I had hopes of being able to game on a system compiled around my hardware. And now I have to listen to Nvidia guys talk about how vendor lock-in is ending with Mantle failing (which it doesn't actually seem to be), when the alternative to "mantel" is vendor lock-in to a specific OS.
 

8350rocks

Distinguished


Where do you think I got it from?
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Don't confuse the bare possibility of usage with optimal usage. You could run an OS on a modern GPU, but it would be terrible, because a GPU is a TCU.

I told you before that BlueGene/Q is a homogeneous system:

http://www.eedailynews.com/2011/08/ibm-blue-geneq-project-18-core.html

You can use LCUs for throughput workloads, as IBM has done, but you will be at a disadvantage against heterogeneous systems which have TCUs:

http://www.green500.org/lists/green201306

IBM has just joined with Nvidia to develop heterogeneous systems to compete against Intel. IBM will do the LCU and Nvidia the TCU. Without TCUs, IBM cannot compete in HPC, but IBM doesn't make any TCU. Solution? Join with Nvidia.

As a general rule of thumb, general dictionaries are not good for technical terms. The Wikipedia page that you found is better, but you omitted the relevant part. Read just past the point where you stopped:

Heterogeneous computing refers to systems that use more than one kind of processor. These are multi-core systems that gain performance not just by adding cores, but also by incorporating specialized processing capabilities to handle particular tasks. Heterogeneous System Architecture (HSA) systems utilize multiple processor types (typically CPUs and GPUs), usually on the same silicon die, to give you the best of both worlds: GPU processing, apart from its well-known 3D graphics rendering capabilities, can also perform mathematically intensive computations on very large data sets, while CPUs can run the operating system and perform traditional serial tasks.

The Wikipedia page lacks accuracy, because, as I said before, the HSA specification goes beyond GPUs and also considers other kinds of TCUs.

Return to the Wikipedia page and check the section "Hybrid-core computing", because this is what I explained Intel is doing with the new KL core. Of course, Wikipedia agrees with me that "It is a form of heterogeneous computing[7] wherein asymmetric computational units coexist with a 'commodity' processor." Go back in the thread, because I already told you which are the asymmetric computational units in the KL core. I already told you which part is the LCU and which part is the TCU.
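For what the LCU/TCU division of labor means in code terms, here is a rough sketch. The functions are illustrative, and standard OpenMP 4.x target offload is used only as a stand-in for HSA-style dispatch; without an offloading toolchain the pragma simply falls back to running on the host.

#include <cstddef>
#include <vector>

// Throughput part (TCU-friendly): uniform math over a large data set.
// With an offloading compiler this loop could run on a GPU.
void scale(std::vector<float>& data, float k) {
    float* p = data.data();
    std::size_t n = data.size();
    #pragma omp target teams distribute parallel for map(tofrom: p[0:n])
    for (std::size_t i = 0; i < n; ++i)
        p[i] *= k;
}

// Latency part (LCU-friendly): serial, branchy control flow with an
// early exit, which a wide throughput core would execute poorly.
int findFirstNegative(const std::vector<float>& data) {
    for (std::size_t i = 0; i < data.size(); ++i)
        if (data[i] < 0.0f) return static_cast<int>(i);
    return -1;
}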
 

ColinAP

Honorable
Jan 7, 2014
18
0
10,510


That's what I've been wondering. I watch Phoronix very closely, and all I've seen are articles like:

AMD's Mantle Launches, But No Linux Love Yet
AMD launched their new Windows graphics driver today that supports their new Mantle API as an alternative for game developers to Direct3D or OpenGL. However, there's still no indications of foreseeable Linux support.

Or:
AMD Ends Up Releasing Catalyst 14.2 Beta For Linux
While AMD's communication crew went to talk up Catalyst 14.2 for Windows due to its proper support for the Thief game and that Thief will soon sport a Mantle renderer. The Thief game and Mantle itself are both irrelevant to Linux gamers at this stage since there's no Mantle Linux graphics driver and this game is not natively available for Linux users at this time.

Could you point me towards the article on Phoronix that says that Mantle is coming to Linux by October this year?
 
They already have ones that outperform dGPUs. The current APUs actually outperform a lot of dGPUs.
 

logainofhades

Titan
Moderator



A bunch of old ones, maybe. The 7850K is about equal to an R7 240. Nothing worth writing home about. At the end of the day, a Haswell Pentium G and a dedicated card like a GTX 750 is a better buy at a similar cost.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


In this case I believe he was referring to the APUs in the PS4 and XBone. The PS4's is between an HD 7850 and HD 7870 in performance, roughly equivalent to a $175 card.
 

8350rocks

Distinguished


http://www.phoronix.com/scan.php?page=news_item&px=MTYxNDc

Most of these recent open-source Linux graphics driver improvements are terrific, but due to the structure of the open-source driver and the code being spread across multiple components with no easy and generic installation path (compared to the binary Catalyst driver on Windows and Linux being self-contained within a single binary file), it will be some months before most Linux users see these driver improvements. Ubuntu 14.04 LTS is carrying most of this new code but the Linux 3.14+ and other recent Mesa Git activity (VCE support, OpenMAX, etc), won't land until Ubuntu 14.10 in October. Only with rolling-release distributions will you really see these improvements in the near-term unless you resort to building the code yourself or relying upon unsupported, third-party repositories.

UPDATE: I received another email this morning from another AMD Global Communications representative talking up their Linux support... Mentioning the same points (and errata) as above and also talking up Catalyst 14.2 Beta and the new THIEF game. The email ended with, "Overall, it's a big week for the Windows and OSS Linux Radeon drivers! We hope you'll join us in playing, testing and enjoying THIEF --- the latest AAA title optimized for AMD Radeon."

The Thief video game is powered by Unreal Engine 3 but there's been no indication of Linux support coming at all. Even if there were Linux support, the open-source Radeon driver would be extremely unlikely to be able to handle running Thief -- if it managed to even render the game correctly with its OpenGL implementation, the performance would likely be horrid with the open-source driver. Thief is also going to sport a Mantle renderer to complement OpenGL, but there's still no AMD Linux graphics driver (open or closed-source) that even supports Mantle. And again, their mentioned Linux driver changes to the open-source driver aren't exactly brand new and are completely unrelated to the Catalyst driver code-base. At the end of the day there doesn't appear to be anything new for Linux users either in terms of brand new, major open-source driver features nor any Catalyst 14.2 beta Linux driver.

UPDATE 2: Further contradicting the AMD Linux PR talk with saying no Catalyst 14.2 Beta for Linux would be released due to their open-source driver improvements, Catalyst 14.2 Beta for Linux is now available.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


Good then, we agree that a CPU with SIMD is still homogeneous.

Granted, the BlueGene/Q systems have dropped, but they still hold the 12-34 spots. And that is with ancient 45nm cores. So yes, 28nm GPUs have quite the advantage there.



That's your speculation. I see x86 with AVX-512 extensions. Xeon Skylake will also have x86 with AVX-512 extensions. Piledriver cores have x86 with AVX extensions. Are all CPUs now hybrid-core because they support more than the original x86 ISA?

The new Phi, being Silvermont-based, is OoO and fully supports a modern server-grade OS. You can already buy an 8-core Avoton server. They have no need for a second type of core.
 

colinp

Honorable
Jun 27, 2012
217
0
10,680

Really must sort out my usernames across different PCs...

I think you are under the mistaken assumption that Catalyst for Linux is feature-equivalent to Catalyst for Windows. It's not. There is no Mantle for Linux. AMD haven't even carried out a feasibility study on porting it to Linux yet, no matter what their good intentions may be.

The part of your quote that you didn't bold, the part that specifically refers to Mantle for Linux:

The Thief video game is powered by Unreal Engine 3 but there's been no indication of Linux support coming at all. Even if there were Linux support, the open-source Radeon driver would be extremely unlikely to be able to handle running Thief -- if it managed to even render the game correctly with its OpenGL implementation, the performance would likely be horrid with the open-source driver. Thief is also going to sport a Mantle renderer to complement OpenGL, but there's still no AMD Linux graphics driver (open or closed-source) that even supports Mantle.

In summary: there is no Mantle for Linux, there is no Thief for Linux, and AMD's PR department have no idea what they're talking about when it comes to Linux.
 


Haswell's L1 cache does 982 GB/s, almost 1 TB/s. Even if this comes out, there will always be enhancements to cache, and cache will always be faster, as there is almost no latency compared to memory, even with very fast interconnects.



Haswell's L3 is around 180 GB/s, still faster than the 160 GB/s theoretical speed that this memory should do. There is also latency, which lowers the actual bandwidth. SB-E has a theoretical bandwidth of 51.2 GB/s with full quad channel, but IB-E pushes at best 41 GB/s, as does SB-E. Still a ton of bandwidth that you and I will probably not saturate for a few more years.
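As a rough sanity check on those figures (a sketch: the 4-core count and 3.4 GHz clock are assumed here, and measured numbers like the 982 GB/s above come from benchmarks, which always land below these theoretical ceilings):

#include <cstdio>

int main() {
    // Haswell L1D peak: per core, 2 x 32 B loads + 1 x 32 B store per
    // cycle (Intel's stated port widths); core count and clock assumed.
    const double bytes_per_cycle = 2 * 32 + 32;          // 96 B/cycle/core
    const double l1_peak = 4 * 3.4e9 * bytes_per_cycle;  // 4 cores @ 3.4 GHz
    std::printf("L1D theoretical peak: %.0f GB/s\n", l1_peak / 1e9); // ~1306

    // Quad-channel DDR3-1600: 4 channels x 8 B per transfer x 1600 MT/s.
    const double dram_peak = 4 * 8.0 * 1600e6;
    std::printf("Quad-channel DDR3-1600 peak: %.1f GB/s\n",
                dram_peak / 1e9);                        // 51.2
    return 0;
}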



GCN cards. So HD 7000 GCN and R9/R7 GCN cards. And the 7000 series has been out for 2 years, so it has a decent market share. The only problem is that so far only a handful of GPUs (the 290 series) have been fully optimized for Mantle.



At some point, I would not be surprised. As technology evolves, the CPU does too. I am sure people said the same thing when x86 was still new and memory controllers were first integrated. Now Intel is able to put a stack of memory on their CPUs, and some also have the entire south bridge on the CPU along with a GPU.



I think CPUs have not been just CPUs for a long time. A lot has been forgotten about what they used to do and not do. Remember the math coprocessor? That's been on the CPU for a while. Remember the northbridge? Z87 is just the southbridge, and that will go at some point too (it already has for certain Haswell CPUs), just like the VRMs did.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


IBM's problem is not the process; in fact, they are able to compete with Intel on that front. They could develop BlueGene/Q systems on 22nm SOI and would still be at a disadvantage compared to heterogeneous systems. IBM had two choices to remain competitive in HPC: either develop a TCU themselves or join with someone who makes TCUs. They chose the latter and joined with Nvidia to develop heterogeneous systems.

The Wikipedia link is not saying that a core with AVX extensions is heterogeneous computing. Neither am I. An FX Piledriver CPU is not a heterogeneous system; a Kaveri APU is. It is one thing that you don't understand what the wiki or I am telling you; it is another that you attribute to me things that I never said.

In fact, the HSA specification defines an HSA-enabled CPU as one that runs the native ISA (e.g. x86) plus the HSAIL ISA. But of course the CPU continues to be homogeneous; only when you add a TCU (e.g. an HSA GPU) is the system heterogeneous.

You already posted the argument about OoO and the operating system. I already told you why the first is wrong and the second irrelevant. I also gave you some links detailing the architecture of the new KL core.



You are right: neither AMD nor Phoronix has said that Mantle is coming in Ubuntu 14.10. However, AMD has stated its intention to port Mantle to Linux, OS X, and Android.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


The Nvidia APU has a DRAM bandwidth of 1.4-1.6 TB/s. There is no L3 cache, and there is no mention of bandwidths for the L1/L2 caches. But the L1 cache would have about 10x the bandwidth of main memory.



When Intel entered the HPC market, the existing players made similar claims. "Intel will never catch us," they said... Today 90% of HPC is based on x86 designs from Intel, and the former big players (e.g. MIPS, Alpha...) have all been killed off. Only IBM remains in the business with some success.

Haswell's Iris Pro is very close to AMD's Kaveri GCN. I already gave benchmarks of the Kaveri A10 vs the i5R. Broadwell will bring about a 40% improvement over Haswell, and thus Intel will have integrated graphics as powerful as AMD's. Then comes Skylake, which resurrects the old Larrabee plans.

I have no doubt that Intel will kill discrete graphics cards. Nvidia and AMD know it too. Nvidia is already preparing to abandon its discrete graphics card division; SemiAccurate has an article about that. I predict that discrete graphics cards will be killed off by 2018 or so.
 

jdwii

Splendid


I'd be surprised. It would be cool to spend $300 and be able to play games like a $100 CPU + $200 video card; however, I don't see that happening, unless we find a way to make latency matter a lot for GPU memory bandwidth in gaming, which it doesn't.
 

jdwii

Splendid


Actually, I was doing some builds comparing Intel and AMD for $400-1000, and I found that it doesn't make sense to use an i5 until you get to the $600-650 price point, and that is using a locked i5. For $400 you could build a dual-core Celeron Haswell with a Radeon 7770 GHz Edition; with AMD you could build an A10-7700K system. For $500 you can build a 6-core FX with a Radeon 7770 GHz Edition, while with Intel you can have an i3 and a 260X for that same price. Intel's cheaper boards have gotten a LOT better, I noticed, though I usually find you get more features per dollar from AMD. I would also never trust a dual core in a gaming machine; however, for a gaming build the Intel machine would still have a fantastic upgrade path.
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860
I can see I didn't miss anything during my 4-week ban.

Anyway back to ESO.

Btw, I wonder who predicted Mullins would be an x86 tablet APU, and who the other one is who said Jaguar is being replaced with ARM.
 