AMD Richland APU Will Boost up to 4.4GHz

[citation][nom]silverblue[/nom]If Richland will work with the 7000 series in Dual Graphics, the lowest possible model looks to be the 7750. I think anything above this would defeat the object of Dual Graphics due to the 8670D being too weak, let alone the CPU cores. How does Resonant Clock Meshing stand to help at such high clock speeds? How are AMD planning on keeping power consumption at Trinity levels with the higher clock speeds all round... or is this going to prove an impossible task? So many questions...[/citation]

The FX-8350 lowered peak load power compared to the FX-8150 by around 60W while raising the base clock by 700MHz, plus a higher boost, all within the same TDP and on the exact same process. There is your answer right there: a mature process.
 
[citation][nom]blazorthon[/nom]Your incorrect use of the term IPC discredits any opinion that you have on the technology, but I'll also add a few things to that. AMD improved the core architecture of each APU over the desktop series before them, so it stands to reason that they will do so again. AMD managed to improve power efficiency with every APU release over the previous desktop version that they're based on, so it also stands to reason that AMD improved it yet again. Intel hit a wall because they had a huge front end and memory bandwidth bottleneck. If Intel wanted to keep using it, Netburst most certainly could still be used today with some tweaking, and it'd probably do just as well as Sandy and Ivy have been doing if implemented properly. AMD does not need to trash the architecture at all, and they won't need to any time soon if they don't want to. The base architecture with Piledriver still needs some work, but most of the work that needs to be done is not architectural, at least at the point of Steamroller, which should be out this year or early next year at the latest. Simple evidence for this is as follows: the architecture used in the Core/Core 2 CPUs and the architecture used in the Sandy and Ivy CPUs are extremely similar. Going from Nehalem to Sandy and Ivy, it's almost identical. The differences in performance are mostly from minor tweaks, cache improvements, and memory controller improvements. Have a look at the basic architecture used in each (diagrams and such can be found all over the internet) and you'll probably notice how the biggest differences between Core 2 and Sandy/Ivy Bridge in integer performance per core are the cache and memory. There's also feature support and such, but that's a different, albeit related, topic from hardware differences. As such, even without looking at the front end improvements planned in Steamroller (of which there are many), given the extremely poor front end situation with Bulldozer and even Piledriver, there is undoubtedly a lot of headroom for the modular architecture in performance per clock improvements without sacrificing clock frequencies. Furthermore, chasing clock frequencies isn't even necessarily a bad way to go about this. Just compare the first Netburst CPUs to current Piledriver CPUs for proof of that. The performance difference (even when you use modern DDR3 memory to alleviate the huge memory controller issue for the LGA 775 interface) is huge, to say the least. With comparable real-world memory bandwidth for both platforms, it also becomes clear how Athlon 64 actually wasn't a huge win over Netburst architecturally. Moving on to what you said about core count, it most certainly is extremely important so long as the software can utilize the cores. For example, when all cores are properly utilized, AMD's eight-core FX CPUs easily trump Intel's quad-core i5s in overall performance. That AMD opted for high core counts in a time where most software used by people on this site, i.e. gaming, is generally not able to scale across large numbers of cores (large, in this case, being more than four) is arguably a decision worth criticizing. However, that's not a good reason to say that the concept itself is flawed, especially since the greatest improvements in performance over the last few years generally involve increasing core and/or thread count. For example, although we've managed to increase performance per core from a roughly 3GHz Core 2 Duo to a similar price-point 3GHz Sandy/Ivy i5 by about 50%, doubling the core count had a far greater impact on performance for work that can scale across enough threads. The same can be said going from one of the top-end Core 2 Quads to a hexacore SB-E i7 where, again, both are around 3GHz with a roughly 50% performance per core increase, but a roughly 100% increase in multi-threaded performance, not counting the performance per core increase.[/citation]

Blazor, you are preaching a good sermon, but to the wrong choir. He is completely out of the loop when it comes to AMD. Why do you think he never comments on the AMD conjecture thread? Because he would be shot down in flames. He lives in a naive world created by the odd line picked up from Tom's or AnandTech, and he regurgitates them on the forums in an AMD-bashing tirade.

I can deal with people's choices, but simply put, he never has anything good to say about anything AMD, so it's pointless even wasting time responding to him.
 
[citation][nom]sarinaide[/nom]The FX-8350 lowered peak load power compared to the FX-8150 by around 60W while raising the base clock by 700MHz, plus a higher boost, all within the same TDP and on the exact same process. There is your answer right there: a mature process.[/citation]
Can you throw me a link, please? I was of the impression that Piledriver barely touched power consumption at all and that AMD just raised clocks so that it would perform better at the same power consumption.
 
For those who compare this to Netburst, keep this in mind:

-Netburst had a terrible branch predictor, which meant that a huge portion of the partially processed work had to be repeatedly flushed out of the pipeline, wasting processor time and severely hurting its IPC (there's a small C demo of this cost below).

-AMD had an integrated memory controller. Intel didn't, and paid the price when it came to the dual-core game.

There's nothing wrong with high clock rates if you can keep the power consumption and thermals under control...
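
On the branch predictor point above: here's a minimal C sketch (my own illustration, not a Netburst benchmark; the array size and the 128 threshold are arbitrary) showing how much a hard-to-predict branch can cost. The same loop runs much faster once the data is sorted, purely because the branch becomes predictable. Compile with something like gcc -O1; at higher optimization levels the compiler may replace the branch entirely.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 20)

static long sum_if_big(const int *data, int n)
{
    long sum = 0;
    for (int i = 0; i < n; i++)
        if (data[i] >= 128)   /* the branch the predictor has to guess */
            sum += data[i];
    return sum;
}

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    int *data = malloc(N * sizeof *data);
    if (!data) return 1;
    for (int i = 0; i < N; i++)
        data[i] = rand() % 256;          /* random: branch taken ~50% of the time */

    clock_t t0 = clock();
    long s1 = sum_if_big(data, N);       /* unpredictable branch: slow */
    clock_t t1 = clock();

    qsort(data, N, sizeof *data, cmp_int);

    clock_t t2 = clock();
    long s2 = sum_if_big(data, N);       /* same work, predictable branch: fast */
    clock_t t3 = clock();

    printf("unsorted: sum=%ld in %.3fs\n", s1, (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("sorted:   sum=%ld in %.3fs\n", s2, (double)(t3 - t2) / CLOCKS_PER_SEC);
    free(data);
    return 0;
}

Netburst's extra-long pipeline made every one of those mispredictions far more expensive than on its rivals, which is exactly why its IPC suffered.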
 
Question: If I make my budget gaming rig with an A10, will it be compatible with an ATI 7970 in Dual Graphics mode? Even more, can I add a second ATI 7970 and enjoy three-way GPU power?
 
[citation][nom]Wisecracker[/nom]GCN Cape Verde cores will not be on Richland.[/citation]

No, Dual Graphics is limited to HD6670 at most at this point in time.

You can run two 7970s, but they will only run in CrossFire.
 
Quick question: could they build a memory controller to utilize GDDR5? Is it possible to use this either in tandem with DDR3 or just by itself completely? I know the bandwidth is much higher in video memory.
 
[citation][nom]bustapr[/nom]Well, if that's the case, then look at this: http://www.youtube.com/watch?v=SDRL1CovGAc (720p on high settings and DX10, average 45 FPS).[/citation]

Hehe, thanks for this. I knew the desktop APUs performed quite well in games, but I didn't know they were this advanced. I forgot to add that I'm more interested in the laptop APUs. A light and energy-efficient laptop that uses an APU for gaming, now that's a dream :)
 
[citation][nom]A Bad Day[/nom]I've played L4D2 and Civ 5 on a friend's A10 laptop, and you can get fairly decent graphics at $600-$700. Beat that with an Intel laptop![/citation]
That's not saying much. I played L4D on Intel HD3000 graphics, 1920x1080 resolution with near-max settings. It ran flawlessly.

Trinity is supposed to be 4x more powerful than Intel HD3000. Just saying.
 
[citation][nom]rds1220[/nom]So they are pushing GHz down people's throats in the hope that people will think more speed is better; too bad that doesn't work. You would think they would have learned from Bulldozer and Piledriver that speed and cores don't mean anything anymore, it's all about IPC. They can keep increasing GHz to try and keep up, but Intel will always be one step ahead. They will eventually hit a wall just like Intel did with the crappy Pentium 4, and they will have no choice but to trash the architecture and come up with something that works.[/citation]
Single-threaded performance is a product of IPC x clock speed. Stock Trinity can already level with the i5s in single-threaded tasks because of its clock speed. This "speed bump" should be plenty to keep Intel at bay until Kaveri. If you think that clock speed doesn't matter at all, you don't understand how computers work.
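
To make the IPC x clock point concrete, here's a toy calculation in C (the IPC and clock figures below are invented for illustration, not measured Trinity or i5 numbers): a lower-IPC chip at a higher clock can land right next to a higher-IPC chip at a lower clock.

#include <stdio.h>

int main(void)
{
    /* Hypothetical figures purely for illustration; not measured data. */
    double trinity_ipc = 1.0, trinity_ghz = 4.2;   /* lower IPC, higher clock */
    double i5_ipc      = 1.3, i5_ghz      = 3.4;   /* higher IPC, lower clock */

    /* single-threaded throughput ~ IPC x clock (billions of instructions/s) */
    printf("Trinity-like: %.2f GIPS\n", trinity_ipc * trinity_ghz);  /* 4.20 */
    printf("i5-like:      %.2f GIPS\n", i5_ipc * i5_ghz);            /* 4.42 */
    return 0;
}

Neither number alone tells you anything; only the product does.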
 
[citation][nom]dozerman[/nom]Single-threaded performance is a product of IPC x clock speed. Stock Trinity can already level with the i5s in single-threaded tasks because of its clock speed.[/citation]
In most benchmarks I remember seeing, a stock i5-3570 beats just about any overclocked AMD chip for lightly threaded stuff. That is why i5 dominates most gaming benchmarks when graphics details are low enough to avoid GPU bottlenecking.
 
[citation][nom]DEY123[/nom]I think the real strength of the APU is the general-use laptop, where you want mobility as well as the ability to at least play games for around $600.[/citation]
It all depends if you can shop around.
Here in the UK, prices are like this:

Samsung NP355V5C-A05UK
Screen size - 15.6 in - 1366 x 768
Processor - AMD A10-4600M - 2.3 GHz
RAM - 8 GB
Hard Drive - 1 TB
Operating System - Windows 7 Home Premium (64-bit), with an upgrade to Windows 8 Pro for £14.99
Optical Drive - DVD SM
Graphics - AMD Radeon HD 7670M
Warranty - 1 year warranty
Cheapest ever price: £502.93

Dell 17R SE
Processor - 3rd Generation Intel® Core™ i5-3210M (3M Cache, up to 3.10 GHz)
Operating System - Windows® 7 Home Premium, 64-bit, English
Display - 17.3 in High Definition+ (900p) LED Display with TrueLife
Memory - 4GB DDR3 SDRAM at 1600MHz
Hard Drive - 1TB Serial ATA (5400RPM)
Optical Drive - 8x DVD+/-RW
Video Card - 2GB Nvidia GeForce GT 650M 90W
Cheapest ever price: £498.18
 
[citation][nom]Pherule[/nom]That's not saying much. I played L4D on Intel HD3000 graphics, 1920x1080 resolution with near-max settings. It ran flawlessly. Trinity is supposed to be 4x more powerful than Intel HD3000. Just saying.[/citation]

10 FPS?

I find that hard to believe. Running TF2 on HD3000 required 720p resolution and low graphics.
 
[citation][nom]lpedraja2002[/nom]Depends what kind of performance you're measuring. We all know they lose in CPU performance, but they shine in GPU performance, and some people, like me, care more about that.[/citation]
This is really funny, because AMD's GPUs work best with Intel, lol.
 
[citation][nom]nebun[/nom]This is really funny, because AMD's GPUs work best with Intel, lol.[/citation]

OK, so you're telling me that a 6670 will work better with an Intel CPU than with an A10-5800K? I think you will get better frames with the A10.

The big limit on APU performance is memory bus speed. Llano saw a large increase going from DDR3-1333 to DDR3-1600 and then a small increase to 1866. Trinity sees an even larger increase going from DDR3-1600 to DDR3-1866 and then to 2133. With 1866 and 2133 memory being so cheap now, there is no reason for an APU build not to include 8GB of dual-channel memory at 2133. This is actually one of my sore areas: notebook OEMs tend to put el-cheapo memory inside their units without realizing they're devaluing their product, and the same goes for desktop builders.

The A10-5800K (7660D) with 8GB of DDR3-2133 puts out really nice numbers, especially with a slight northbridge overclock.
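
For anyone who wants to check the bandwidth numbers themselves: theoretical peak is just transfer rate (MT/s) times 8 bytes per transfer times the channel count. A quick sketch (standard DDR3 arithmetic, nothing AMD-specific):

#include <stdio.h>

/* peak bandwidth in GB/s = transfer rate (MT/s) x 8 bytes x channels / 1000 */
static double peak_gbs(int mts, int channels)
{
    return (double)mts * 8 * channels / 1000.0;
}

int main(void)
{
    const int speeds[] = { 1333, 1600, 1866, 2133 };
    for (int i = 0; i < 4; i++)
        printf("DDR3-%d dual channel: %.1f GB/s\n", speeds[i], peak_gbs(speeds[i], 2));
    return 0;
}

That works out to about 21.3 GB/s at DDR3-1333 versus 34.1 GB/s at DDR3-2133 in dual channel, a roughly 60% bigger pipe for the iGPU to feed on.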
 
[citation][nom]silverblue[/nom]Can you throw me a link, please? I was of the impression that Piledriver barely touched power consumption at all and that AMD just raised clocks so that it would perform better at the same power consumption.[/citation]
The downvoting without offering a single counter-argument is getting a bit old. So, from a little research, it appears that over an entire benchmark suite (such as http://www.tomshardware.com/reviews/fx-8350-vishera-review,3328-16.html ) the 8350 is more efficient than the 8150. However, in http://www.anandtech.com/show/6396/the-vishera-review-amd-fx8350-fx8320-fx6300-and-fx4300-tested/6 we can see that power usage is pretty much at the same level as the 8150. So, it's definitely dependent on what you're running, but hardly a massive jump forwards from a very hungry predecessor. Clocked at the same frequency, however, it's clear that AMD did a good job here: performance per watt certainly went up.
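
To spell out the performance-per-watt arithmetic, here's a tiny sketch with placeholder figures (the scores and wattages below are made up for illustration, not the Tom's or AnandTech measurements):

#include <stdio.h>

int main(void)
{
    /* Placeholder numbers, NOT real measurements: assume ~15% more
       throughput at roughly the same load power. */
    double fx8150_score = 100.0, fx8150_watts = 195.0;
    double fx8350_score = 115.0, fx8350_watts = 195.0;

    printf("FX-8150: %.3f points per watt\n", fx8150_score / fx8150_watts);
    printf("FX-8350: %.3f points per watt\n", fx8350_score / fx8350_watts);
    return 0;
}

Same power budget, more work done per second: that's the whole efficiency claim.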

In addition, it appears that neither Trinity nor Vishera launched with any implementation of Resonant Clock Meshing, which makes Piledriver's gains even better. Richland could make some savings on top of the usual architectural improvements if it's implemented.

So, to part answer and part correct my original question, in some tests Piledriver is nearly as hungry as Bulldozer, but that's due to the higher clock, meaning for most other workloads, it's a faster and less hungry architecture.

There we go.

[Edit]I DID put links, but it took them out for some reason.[/EDIT]
 
Love the news, but the whole thing with HD8000 APU graphics cores supporting Dual Graphics mode with HD7000 discrete cards just makes me angry! Anybody else feel like this? HD8000 with HD7000? I mean, come on, for the love of all that is holy, why confuse us even more? And when will we see APUs that can pair with the HD8000 cards, which are coming out pretty soon? Six months before the HD9000 cards?

AMD... I love you guys, really, but I am brand agnostic when it comes to what I buy... How will I look myself in the face when I buy an APU that, although it works great as I'm sure it will, is destined to become obsolete the moment new APUs that support HD8000 graphics cards come out?

I mean, dang, man! Give a hardware enthusiast a break! Arrr!

THINK!
 
[citation][nom]The_Trutherizer[/nom]Love the news, but the whole thing with HD8000 APU graphics cores supporting Dual Graphics mode with HD7000 discrete cards just makes me angry! Anybody else feel like this? HD8000 with HD7000? I mean, come on, for the love of all that is holy, why confuse us even more? And when will we see APUs that can pair with the HD8000 cards, which are coming out pretty soon? Six months before the HD9000 cards? AMD... I love you guys, really, but I am brand agnostic when it comes to what I buy... How will I look myself in the face when I buy an APU that, although it works great as I'm sure it will, is destined to become obsolete the moment new APUs that support HD8000 graphics cards come out? I mean, dang, man! Give a hardware enthusiast a break! Arrr! THINK![/citation]

I think that "obsolete" is too strong a word to use in that way. Even Llano APUs are not obsolete, yet they use GPUs that are almost functionally identical to the old, low-end Radeon 5000 GPUs and CPU cores that are almost identical to the old Athlon II X4s.

Also, I've read different reports that the new graphics cards will arrive anywhere from Spring of this year to Fall of this year. I don't think that Radeon 9000 will be out less than a year after Radeon 8000 is out. If Radeon 8000 gets delayed, then chances are that Radeon 9000 will have a similar delay too.
 