AMD CPU speculation... and expert conjecture


juanrga

Distinguished

This is close to a pair of things that I wrote in this thread some days ago:

The first is whether a new version of Dual Graphics for HSA APUs could use the full iGPU for compute while the dGPU is used for graphics.

The second is whether the iGPU of an HSA APU could be fused with an HSA-enabled dGPU and split dynamically between compute and graphics according to workload.

The former is technologically simpler, cheaper, and gives enough performance: the APU would work as a 'CPU' (with HSA-enabled software) with about 4x the performance of the FX-8350. The problem is that it goes against the hUMA architecture and returns, basically, to a traditional CPU+dGPU architecture.

The second is technologically complex, expensive, maintains hUMA, and has the potential to give still more performance: the dGPU could help the APU with some heavy compute workloads. The main problem would be the interface.
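
As a very rough sketch of the first option (every name below is hypothetical illustration, not an actual HSA or Dual Graphics API), the idea boils down to a dispatcher that routes compute kernels to the iGPU and render work to the dGPU:

[code]
# Toy sketch of the first option: compute on the iGPU, graphics on the dGPU.
# The queue class, device names and task tags are illustrative only,
# not a real AMD/HSA interface.
from collections import deque

class DeviceQueue:
    def __init__(self, name):
        self.name = name
        self.pending = deque()

    def submit(self, work):
        self.pending.append(work)
        print(f"{self.name} queued: {work}")

def dispatch(kind, work, igpu, dgpu):
    # HSA-style compute shares memory with the CPU, so it stays on the iGPU;
    # everything graphics-related goes to the discrete card.
    (igpu if kind == "compute" else dgpu).submit(work)

igpu, dgpu = DeviceQueue("iGPU"), DeviceQueue("dGPU")
dispatch("compute", "physics_kernel", igpu, dgpu)
dispatch("graphics", "draw_frame", igpu, dgpu)
[/code]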
 
Yes, Windows is poorly prepared for 6 cores.

Windows works just fine with 6 cores. It's up to the program/developer/compiler to segment tasks in such a way that the scheduler can use that many cores in a meaningful way.
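
To make "segment tasks" concrete: if the work is split into independent chunks, any modern scheduler (Windows included) will happily spread them over six or more cores. A minimal sketch, with the chunking invented purely for illustration:

[code]
# Minimal illustration: split a job into independent chunks so the OS
# scheduler can spread them across however many cores exist.
from concurrent.futures import ProcessPoolExecutor
import os

def crunch(chunk):
    # Stand-in for real work on one chunk of the data.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(100_000))
    n = os.cpu_count() or 1
    chunks = [data[i::n] for i in range(n)]   # one chunk per core
    with ProcessPoolExecutor(max_workers=n) as pool:
        total = sum(pool.map(crunch, chunks))
    print(f"{n} workers, total = {total}")
[/code]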

[quote]The discussion of ESRAM+DDR3 vs GDDR5 is an interesting one and reproduces the Xbox One vs PS4 war. I would prefer the GDDR5 solution, because the ESRAM solution would obligate developers to tune software for the small size of the ESRAM in order to maximize performance.[/quote]

Remember though, at any single point in time, only a few MB of data is going to be needed. As long as you can shuffle data in and out of the ESRAM quickly enough, you can ensure the data the CPU needs will be available for use.

In broader terms, think of the ESRAM as an off-die L4 cache, because that's essentially how it's used.

In one sense, the ESRAM breaks the single-memory-pool purity of the GDDR5 approach by introducing two pools: one small and fast, the other large and slow. I think game developers prefer the Sony approach of a single large, unified pool of very fast memory.

Sony's approach is cleaner; keeping the SPEs' local store (their embedded SRAM) fed was a major challenge on the PS3.

That being said, it's quite possible to keep the ESRAM implementation invisible to the OS/application as far as addressing goes, keeping the single memory pool intact.
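
A sketch of the "shuffle data in and out fast enough" idea: process a large working set in tiles that fit the small fast pool, copying each tile in before working on it. The tile size and the per-tile work below are made up for illustration; only the 32 MB figure matches the Xbox One's ESRAM.

[code]
# Sketch of software-managed tiling: stream a large working set through a
# small, fast staging buffer (a stand-in for the 32 MB ESRAM), tile by tile.
TILE_ITEMS = 64 * 1024   # illustrative tile size; the real pool is 32 MB

def process(tile):
    # Stand-in for the real work done on the tile while it is resident.
    return sum(tile)

def stream_through_scratchpad(big_buffer):
    total = 0
    for start in range(0, len(big_buffer), TILE_ITEMS):
        tile = big_buffer[start:start + TILE_ITEMS]   # copy the tile in ("DMA")
        total += process(tile)                        # work only on resident data
        # the tile goes out of scope here, freeing the scratchpad for the next one
    return total

print(stream_through_scratchpad(list(range(1_000_000))))
[/code]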

My only doubt would be about the performance gain from GDDR5 memory in pure CPU tasks. The Piledriver FX chips benefited from faster DDR3 memory, unlike Intel's Ivy Bridge.

Again, I worry about latency. While it's nice to be able to pump a few GB per second across the bus, those higher latencies will show in CPU-bound tasks that only need a few KB of RAM at a time, which favors a lower-latency architecture. [This same logic is the reason IEEE-488/HP-IB is still hanging around: low-latency communication is oftentimes better than high-latency, high-bandwidth communication.]
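
A back-of-the-envelope way to see the trade-off (the latency and bandwidth figures below are invented for illustration, not measured DDR3 or GDDR5 numbers): model a transfer as latency plus size over bandwidth, then compare a tiny request with a large streaming one.

[code]
# Toy model of a memory transfer: time = latency + size / bandwidth.
# All figures are invented for illustration, not real DDR3/GDDR5 specs.
def transfer_ns(size_bytes, latency_ns, bandwidth_gbs):
    # 1 GB/s moves roughly 1 byte per ns, so size/bandwidth is already in ns.
    return latency_ns + size_bytes / bandwidth_gbs

configs = {
    "lower latency, modest bandwidth": (50.0, 25.0),   # (ns, GB/s)
    "higher latency, high bandwidth":  (90.0, 170.0),
}

for size in (64, 64 * 1024 * 1024):      # one cache line vs a big streaming copy
    for name, (lat, bw) in configs.items():
        print(f"{size:>10} B  {name:<32} {transfer_ns(size, lat, bw):>14,.0f} ns")
[/code]

For a single cache line the lower-latency pool wins; for a large streaming copy the high-bandwidth pool wins by a wide margin, which is exactly the CPU-vs-GPU tension being discussed.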
 


 


I did a similar upgrade: moved from a 1440x900 monitor to a new ASUS VH238H, and got an ASUS DCUII GTX 760 to replace my old EAH4870 Dark Knight, plus a Corsair 500R for $70 (the black one without a top dust filter, but I managed to cut out part of the top of my old CM690 and screw it into my 500R so I can mount my three violet CM 80mm fans; free dust filter too :3). Still holding on to my 4-year-old C0 920 @ 3.7 and a Sabertooth X58 :D. Anyway, I think GDDR5 will be reserved for Carrizo or for mobile [strike]Socket 939[/strike] boards.
 

GOM3RPLY3R

Honorable


"Projected" Doesn't mean it will be. Well have to wait and see. I'd be surprised if it can get anywhere near 780 performance. Also, a fun thing will be to see how far they can go on overclocking. I know that if ASUS makes a DCUII version of the 9970, it'll be a great showdown. ^_^
 


ASUS and AMD Radeon X970 cards don't mix too well. I'm looking at YOU, MATRIX 7970!
 

GOM3RPLY3R

Honorable


As far as the 700 series goes, the price-to-performance is insanely better. The 780 is priced at $650, which is ~$200 less, if not more, than what it should be.

The 760 is roughly equivalent to the 670 in performance, but for $100 to $150 less. Also, the 780 at 1080p (the most commonly used resolution) is right on the Titan's heels, but for ~$350 less.

As for the Intel Extreme series... I just don't know. Comparison Chart

The only differences between the $530 3930K and the $1000 3960X and 3970X are:

- 15 MB of cache instead of 12 MB
- Bus/core ratio: 3930K - 32, 3960X - 33, 3970X - 35
- Higher stock clocks and turbo

Even here, look at this: http://processors.findthebest.com/compare/2-120/Intel-i7-3960X-vs-Intel-i7-3930K

Other than having 3 MB less cache, the 3930K has many things that the 3970X doesn't.

I just don't know. :??:
 

juanrga

Distinguished


Windows just works. Linux is optimized since the kernel was designed to scale well on multi-core supercomputers. Moreover, the Windows scheduler doesn't understand CMT well.



Again, I don't worry about the latency myth.
 


They managed to jack up the price of their x80 series by making the 780 look like a super good deal, which it really isn't. The 670 is around $300 nowadays. Personally, I'd recommend a 4GB 680 @ $400 over a 2GB 770 any day of the week.
 

GOM3RPLY3R

Honorable


In that case, the DCUII will be what pulls Nvidia ahead. And with the Hawaii-series x970 being only $50-$100 less than the 780, for almost-780 performance, that's even worse than the 7970 GHz vs the 680: the 680 pulls slightly ahead most of the time, but for $200 more. Since that gap is easily closed, and there's a big chance that when AMD releases their cards Nvidia will drop their prices by no less than $50, AMD should have a tough time this generation.

And about their CPU path (:vomi:), where are they going? From what I can see, they are going to rock bottom, as Intel leaves Bikini Bottom for the surface. ^_^
 

GOM3RPLY3R

Honorable


Comparing it to the Titan, it's a fantastic deal. I can agree that it's a little much compared to the lesser cards; however, like I just said in my last post, Nvidia should drop their prices a good amount when AMD releases their new cards.
 


CPU path? We don't know their next roadmap; just wait for their conference in October, goshdarnit! Considering I did find a 3GB 7970 with a 1GHz boost for $300, that > a 770. The 9970 will be under $600; I bet this will turn into Fermi vs Southern Islands déjà vu, where AMD caught Nvidia in the @$$. I admit I took the ASUS DCUII 760 over the ASUS 7950 (sorry EVGA and Sapphire, but I love ASUS) due to their 79XX series being rather lousy. That being said, I recommend EVGA or Sapphire over ASUS to the average user not fatally attracted to them.
 

8350rocks

Distinguished


Agreed. The Sapphire Toxic Edition will likely be the cream of the crop again...

EDIT For GOM3R:

The 780 is actually not a new architecture... in fact, it's because the Titan is selling so poorly that Nvidia is rebranding some chips, deactivating sections, and turning them into the 780.

FYI: for Nvidia, the 700 series is full panic mode... it wasn't supposed to release until Q2 2014 at the earliest. AMD was taking market share so aggressively, though, that Nvidia had to do something to stop the bleeding. So you got a rebadge with a slight spec bump. Do not be fooled: these are not dramatically different from the GTX 6XX cards in any way, except the 780, which is deactivated silicon.

The HD 9970 is a completely new architecture, and honestly... Nvidia can see impending doom coming over the hill. They have been trying so hard to keep the ship afloat: they have been offering employees who were about to jump ship (because they see what's coming) ridiculous money to stay around and watch the place go up in flames...

By the way... did I mention that Nvidia scrapped their plans for any more of the GK11X line? S|A reported on it a while back; some heads rolled over the issue as well...
 


inb4 12GB 9970

Man, I liked it when the X870 had some value in it :p 9870 and 9870X2, oh how I miss you 4870.
 
Windows just works. Linux is optimized since the kernel was designed to scale well on multi-core supercomputers. Moreover, the Windows scheduler doesn't understand CMT well.

And that's AMD's fault, and one I personally noted as a potential problem back at the beginning of the BD speculation thread. Windows did what it was designed to do: use a non-HTT core as a normal CPU. But because using the second core of a BD module incurs a performance penalty, this resulted in BD performing even worse than some of us expected.

I even gave a simple fix that could be implemented without a Windows change: enable the HTT bit via a BIOS update, so Windows would treat a CMT core the same way it treats HTT (avoid using it unless under heavy load). While not "perfect", it would have avoided the months-long wait for a Windows patch.
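
For what it's worth, the same "don't touch the second core of a module until you have to" policy can be roughly approximated from user space on Linux. A sketch, assuming (hypothetically) that even-numbered logical CPUs map to the first core of each module; the real numbering depends on the platform, and os.sched_setaffinity is Linux-only:

[code]
# Sketch: pin a process to one core per CMT module so the paired cores stay
# idle until explicitly needed. Assumes (hypothetically) that logical CPUs
# 0, 2, 4, ... are the first core of each module.
import os

def pin_to_primary_cores():
    all_cpus = sorted(os.sched_getaffinity(0))            # CPUs we may run on
    primaries = {cpu for cpu in all_cpus if cpu % 2 == 0}
    os.sched_setaffinity(0, primaries)                     # restrict this process
    return primaries

if __name__ == "__main__":
    print("now running on logical CPUs:", pin_to_primary_cores())
[/code]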

The real question should be "Why wasn't AMD working with MSFT to get the issue resolved prior to product launch?".



It's not a myth; you can only execute on data that you have access to. And if it takes a few extra ns to get the data you need, guess what? Your CPU is idling, doing nothing, or worse (from a benchmarking perspective), context-switching to some other task, causing your application to wait even longer to execute.

For anything memory intensive? GDDR5 is a huge speedup; potentially 200%+ performance gains. I'm more worried about those low-level system tasks that only need to read a few KB at a time, and the leeching effect the higher baseline latency will have on overall performance. Whether that extra latency is noticeable in usage I can't say, but I can say I've worked on systems where higher bandwidth but higher latency caused a subset of tasks to have lower absolute performance. And I wouldn't be shocked if that's the case here as well.
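
One way to feel the difference between a streaming, bandwidth-friendly workload and a dependent, latency-bound one is to compare a sequential pass over an array with a pointer chase through it in random order. A self-contained sketch (the effect is muted by interpreter overhead in CPython, but the access-pattern distinction is the point):

[code]
# Two passes over the same data: one sequential and independent, one a
# random-order pointer chase where every access depends on the previous one.
import random
import time

N = 2_000_000
order = list(range(N))
random.shuffle(order)
chase = [0] * N
for k in range(N):
    chase[order[k]] = order[(k + 1) % N]   # a single random-order cycle

t0 = time.perf_counter()
total_seq = 0
for i in range(N):                         # independent, predictable accesses
    total_seq += chase[i]
t_seq = time.perf_counter() - t0

t0 = time.perf_counter()
i, total_chase = 0, 0
for _ in range(N):                         # the next index needs the current load
    i = chase[i]
    total_chase += i
t_chase = time.perf_counter() - t0

print(f"sequential: {t_seq:.3f} s   pointer chase: {t_chase:.3f} s")
[/code]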
 

juanrga

Distinguished


The fault is on Microsoft's part. You don't need to download and apply patches on Linux, for instance.

And when Microsoft decided to release its first hotfix, it was buggy and they had to release another to fix the hotfix. Amazing!

Who told you that AMD wasn't working with MSFT to get the issue resolved prior to product launch? You are assuming too much.

The popular idea that GDDR5 latencies are astronomical and will affect CPU performance is a myth. Latencies are essentially the same, and any difference is outweighed by the bandwidth gain.
 
The fault is on Microsoft's part. You don't need to download and apply patches on Linux, for instance.

No, Linux just releases a new kernel to accomplish the same exact thing. Or the package that contains whichever scheduler is being used gets updated.

Who told you that AMD wasn't working with MSFT to get the issue resolved prior to product launch? You are assuming too much.

Based on the fact that BD was in development for what, FIVE YEARS, and a relatively simple change to the scheduler wasn't introduced, even after a major version change (Vista) and an additional minor version change (7) to the kernel were released while BD was in development? So yeah, that tells me either MSFT was too lazy to implement and test a few-line fix for their second-largest CPU vendor, or AMD simply didn't bother to go to MSFT ahead of time.

The second is more believable.

The popular idea that GDDR5 latencies are astronomical and will affect CPU performance is a myth. Latencies are essentially the same, and any difference is outweighed by the bandwidth gain.

Astronomical? No. But even minor latency degradation can force the CPU to wait around doing nothing, which is the last thing you want it doing. And extra bandwidth doesn't help if your application is only asking for a few KB worth of data. If that happens often enough, it will show in the final performance.
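
To put "waiting around doing nothing" in numbers (the latencies below are invented for the example, not measured DDR3 vs GDDR5 figures): every extra nanosecond of miss latency is several wasted cycles on a multi-GHz core.

[code]
# Illustrative only: how extra memory latency turns into idle CPU cycles.
CPU_GHZ = 4.0                                   # ~cycles per nanosecond

def stall_cycles(miss_latency_ns):
    return miss_latency_ns * CPU_GHZ

for label, lat_ns in (("baseline DRAM", 60.0), ("higher-latency DRAM", 90.0)):
    print(f"{label:<20} {lat_ns:5.1f} ns  ->  ~{stall_cycles(lat_ns):.0f} cycles stalled per miss")

extra = stall_cycles(90.0) - stall_cycles(60.0)
print(f"each miss costs ~{extra:.0f} extra cycles at {CPU_GHZ:.0f} GHz")
[/code]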
 

GOM3RPLY3R

Honorable


Well that sucks, no GK11X? Damn. lol

Anyway, you may say it was a panic move (and I could agree); however, it may actually turn out for the better. Call it a marketing "gimmick" if you like, since these are the 600 series revamped into a cheaper yet slightly more powerful lineup, but that could prove positive for anyone who needs a major upgrade from a 670 or below. Considering that the 770 is slightly more powerful than the 680, paying $400 for it wouldn't be a bad idea, as you can overclock greatly with the DCUII cards.

I can actually admit it's thanks to ASUS that Nvidia is able to pull ahead so much. The DCUII cards barely cost any extra, but they deliver much better performance and are easier to overclock without having to go water-cooled.

So with the 780 being the "Titan" of the 700 series, take this into account: before the 700 series was released, a 4GB GTX 680 from, say, EVGA would be ~$650, simply because the series had been out for a while and many people wanted the cards. Now that the 700 series is out, it has dropped to a mere $450; however, the 770 being about the same price for slightly better performance would easily get people's attention.

There is something else to keep in mind: when the new AMD series comes out, the prices are sure to drop a good amount.

EDIT:

The only thing really keeping AMD's head above water is OpenGL. OpenGL will run slightly better than DirectX unless you get a higher-end Nvidia card (770, 780, or Titan), which boosts DirectX performance greatly. The thing I find least realistic about games is the lighting: in almost every game I play, the lighting is very bright compared to reality, no matter what settings you use. DirectX does a better job of getting darker shades and colors while also producing better dynamic lighting, so the contrast between light and dark is better, along with better texture results.

Then again, it also comes down to how well the developer uses DirectX. If it is executed correctly, it will look much more fantastic and beautiful than OpenGL while keeping up with its frame rates.

AMD has the performance gain in OpenGL; however, DirectX does look better and is easier to code for. If developers write more optimized or leaner DirectX code, it is bound to be the same if not better quality, and run faster. CUDA is what helps boost this DirectX performance. Everyone says "it's a marketing gimmick", but it's really the architecture that gives it advantages for PhysX and DirectX. CUDA is the name given to the architectural units that run the engines; this is no different from what AMD uses to power OpenGL better, except they didn't give it a specific name.
 

griptwister

Distinguished
The Titan is the Titan of the 700 series. I personally like to think that Nvidia screwed up with the name of the Titan: they should have called it the Titanic. It was big, great, efficient, and expensive, and it was well worth the money... until it sank! LOL!

@The Q6660 Inside: I feel that I play enough fast-paced games to fully appreciate a 144Hz refresh rate. I was going to invest in a 1920x1200 monitor, but the 144Hz one was only $40 more. So, IMO, it's a better deal.

Also, what was wrong with the HD 7970 Matrix? I've heard lots of good things about the GPU.
 

GOM3RPLY3R

Honorable


All I can really do is agree. :miam: If they lowered the price to ~$700, and then the 780 to about $500-$550, it would probably fix a lot of problems and bring in better revenue. And really, no one buys them, simply because they are so expensive. There's no reason you should be paying the same price as a dual-GPU card for something that barely gets the performance of one. :pfff:
 


http://www.overclock.net/t/1318995/official-fx-8320-fx-8350-vishera-owners-club/20640#post_20551604 This guy stole my words. For such a high-end card, it feels so inferior to the competition... That being said, ASUS makes superb Nvidia cards.
 


690 / 680 SLI / 770 SLI > Titan 90% of the time, TBH.
 


Well, for AMD cards, I usually suggest Sapphire. I've always used Sapphire and they've never let me down. I swapped my MSI GTX 670 for a Sapphire 7970 Vapor-X for around USD $80. It runs cooler and it's at 1.1GHz.

I haven't changed the factory paste, so I could do that down the road. I didn't even bother with the GTX 670 (it was a reference design), but I can't say it was a bad card in any way. The 7970 feels faster by a long shot, except for decoding; CUDA was great for that with LAV, and they still haven't said they'd go the OpenCL route 8(

Anyway, I've heard Sapphire has terrible customer support, but I've never had to use it, so... :p

Cheers!
 

GOM3RPLY3R

Honorable


This is actually pretty funny, mainly because I have a (you could say REALLY old) low-profile Sapphire HD 5570. It doesn't exactly run games the best (you know, about 40 frames average on the lowest possible settings for today's graphics-intensive games), but the cooling is great for a card of this age running really GPU-heavy games.

The time I played Battlefield, however, the card ran at 90-105 ºC. Insane, right? The overheating problem I used to have was caused by a terrible case design by Gateway -_-.

By the way, it's obviously overclocked, but the voltages were locked since it's an ATI card :(. Stock vs overclocked core and memory clocks:

Stock: core 650, memory 900
Overclocked: core 735, memory 1040

After cutting a hole through the plastic top of the case, the GPU instantly ran anywhere from 50-70 ºC instead. The case is a low-profile case with the PCIe 3.0 slot at the very top; with the side cover on, it was really just sucking in hot air, blowing it out, and circulating it over and over. Now it runs open-case with this hole, better and cooler while overclocked, and it's amazing that the card is still alive after all this.

Kudos to Sapphire! Honestly, for AMD cards, they are one of the best. :D
 