
AMD Richland APU Will Boost up to 4.4GHz

I wonder how high their mobile Richland chips will go. I think right now their highest-end Trinity mobile APU is 2.3/3.2 GHz, which isn't very good. I might be wrong about that clock speed.
 
So they're pushing GHz down people's throats in the hope that people will think more speed is better; too bad that doesn't work. You would think they would have learned from Bulldozer and Piledriver that speed and core count don't mean anything anymore; it's all about IPC. They can keep increasing GHz to try and keep up, but Intel will always be one step ahead. They will eventually hit a wall just like Intel did with the crappy Pentium 4, and they will have no choice but to trash the architecture and come up with something that works.
 
Tom's, I think you got it wrong.
Richland will not be using GCN graphics; it's the same VLIW4 architecture, just with higher clocks. It has the 8xxx series name but not the actual GPU (it's a rebrand; did Trinity's 7xxx IGP use GCN? No).

It's basically a clock speed boost.

I doubt AMD has the money to rework their CPU like that and then launch Kaveri later in the year.
 
This will improve single-core performance... AMD's weakness. 4.4 GHz should fly. The 100 W figure seems like a stretch, though. Hopefully it's true.
 
Man, I can't wait to see how these APUs perform in games, especially the graphics part. I know the days of an IGP that can run Crysis at least on medium with good FPS are very, very near; I've been waiting for something like that forever.
 
[citation][nom]nebun[/nom]lol...no matter how fast they make their CPUs, AMD will never be able to compete with Intel when it comes to performance[/citation]


Depends what kind of performance you're measuring. We all know they lose in CPU performance, but they shine in GPU performance, and some people, like me, care more about that.
 
@nebun, some people said the same back in the K5 era :)
Now everything is going down the performance-per-watt and multicore route; AMD just needs a smaller node ASAP.
 
[citation][nom]wozza365[/nom]These chips are really good value, probably not much difference in GPU power between it and the consoles, great for a cheap gaming system, 3rd CPU, $60 motherboard, $30 RAM, a $30 case, and a cheap hard drive, bargain!![/citation]

There is a big difference in GPU performance between the consoles and these IGPs. AMD's IGPs are far more powerful as well as far more modern in feature support, and that's despite consoles being more optimized platforms.
 
[citation][nom]rds1220[/nom]So they're pushing GHz down people's throats in the hope that people will think more speed is better; too bad that doesn't work. You would think they would have learned from Bulldozer and Piledriver that speed and core count don't mean anything anymore; it's all about IPC. They can keep increasing GHz to try and keep up, but Intel will always be one step ahead. They will eventually hit a wall just like Intel did with the crappy Pentium 4, and they will have no choice but to trash the architecture and come up with something that works.[/citation]

Your incorrect use of the term IPC discredits any opinion that you have on the technology, but I'll also add a few things to that. AMD improved the core architecture of each APU over the desktop series before it, so it stands to reason that they will do so again. AMD managed to improve power efficiency with every APU release over the previous desktop version it was based on, so it also stands to reason that AMD has improved it yet again.

Intel hit a wall because they had a huge front-end and memory bandwidth bottleneck. If Intel had wanted to keep using it, NetBurst most certainly could still be used today with some tweaking, and it would probably do just as well as Sandy and Ivy have been doing if implemented properly.

AMD does not need to trash the architecture at all, and they won't need to anytime soon if they don't want to. The base architecture still needs some work as of Piledriver, but most of the work that needs to be done is not architectural, at least by the point of Steamroller, which should be out this year or early next year at the latest. Simple evidence for this is as follows:

The architecture used in the Core/Core 2 CPUs and the architecture used in the Sandy and Ivy CPUs are extremely similar; going from Nehalem to Sandy and Ivy, they're almost identical. The differences in performance are mostly from minor tweaks, cache improvements, and memory controller improvements. Have a look at the basic architecture used in each (diagrams and such can be found all over the internet) and you'll probably notice that the biggest differences between Core 2 and Sandy/Ivy Bridge in integer performance per core come down to the cache and memory. There's also feature support and such, but that's a different, albeit related, topic from hardware differences.

As such, even without looking at the front-end improvements planned in Steamroller (of which there are many), given the extremely poor front-end situation with Bulldozer and even Piledriver, there is undoubtedly a lot of headroom for the modular architecture in performance-per-clock improvements without sacrificing clock frequencies.

Furthermore, chasing clock frequencies isn't even necessarily a bad way to go about this. Just compare the first NetBurst CPUs to current Piledriver CPUs for proof of that. The performance difference (even when you use modern DDR3 memory to alleviate the huge memory controller issue for the LGA 775 interface) is huge, to say the least. With comparable real-world memory bandwidth for both platforms, it also becomes clear that Athlon 64 actually wasn't a huge win over NetBurst architecturally.

Moving on to what you said about core count: it most certainly is extremely important so long as the software can utilize the cores. For example, when all cores are properly utilized, AMD's eight-core FX CPUs easily trump Intel's quad-core i5s in overall performance. That AMD opted for high core counts at a time when most software used by people on this site, i.e. gaming, is generally not able to scale across large numbers of cores (large, in this case, being more than four) is arguably a decision worth criticizing. However, that's not a good reason to say that the concept itself is flawed, especially since the greatest improvements in performance over the last few years have generally involved increasing core and/or thread count.

For example, although we've managed to increase performance per core by about 50% going from a roughly 3 GHz Core 2 Duo to a similar-price-point 3 GHz Sandy/Ivy i5, doubling the core count had a far greater impact on performance for work that can scale across enough threads. The same can be said going from one of the top-end Core 2 Quads to a hexacore SB-E i7 where, again, both are around 3 GHz with a roughly 50% performance-per-core increase, but there's a roughly 100% increase in multi-threaded performance not counting the per-core increase.
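
To put rough numbers on that trade-off, here's a minimal sketch (Python, with made-up illustrative figures rather than benchmark data) of why doubling cores dwarfs a ~50% per-core gain for work that scales:

```python
# Illustrative throughput model for work that scales across all cores.
# The figures are made up for the example; they are not measured results.

def relative_perf(cores: int, per_core: float) -> float:
    """Overall throughput relative to a 1.0-per-core baseline."""
    return cores * per_core

core2_duo = relative_perf(cores=2, per_core=1.0)  # ~3 GHz Core 2 Duo baseline
quad_i5   = relative_perf(cores=4, per_core=1.5)  # ~50% faster per core, 2x cores

print(core2_duo)  # 2.0
print(quad_i5)    # 6.0 -> 3x overall, of which only 1.5x comes from per-core gains
```

Of course, this only holds for workloads that actually scale to all available threads; poorly threaded software sees just the per-core improvement.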
 
[citation][nom]anoldnewb[/nom]actually throughput = IPC * GHz[/citation]

Actually, it isn't remotely that simple, even ignoring your incorrect use of the term IPC. Still, yes, clock frequencies are a very important factor in CPU performance.
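
To spell out why the simple formula misleads: the identity only works if "IPC" means *achieved* instructions per cycle on the actual workload, which already bakes in cache misses, branch mispredictions, and memory stalls. A minimal sketch (Python, with assumed illustrative numbers, not measurements):

```python
# "Throughput = IPC * GHz" only holds for *achieved* IPC, which is
# workload-dependent. All numbers here are illustrative assumptions.

freq_ghz   = 4.0   # core clock
peak_ipc   = 4.0   # architectural issue width (instructions/cycle)
stall_frac = 0.6   # assumed fraction of cycles lost to cache/memory stalls

achieved_ipc = peak_ipc * (1 - stall_frac)  # 1.6, well short of the 4.0 peak
throughput   = achieved_ipc * freq_ghz      # billions of instructions per second

print(f"{throughput:.1f} G instr/s vs {peak_ipc * freq_ghz:.1f} G peak")  # 6.4 vs 16.0
```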
 
Hmm, interesting APU. A lot will depend on its maximum memory speed and how good OEMs are at making BIOSes that support it.

For those trolling and whatnot: APUs are most applicable in mobile systems under $1K and in small form factor PCs (Mini-ITX with no expansion slot).
 
[citation][nom]lpedraja2002[/nom]Man, I can't wait to see how these APUs perform in games, especially the graphics part. I know the days of an IGP that can run Crysis at least on medium with good FPS are very, very near; I've been waiting for something like that forever.[/citation]
Well, if that's the case, then look at this:

http://www.youtube.com/watch?v=SDRL1CovGAc

720p on high settings and DX10; average FPS of 45.
 
[citation][nom]nebun[/nom]lol...no matter how fast they make their CPUs, AMD will never be able to compete with Intel when it comes to performance[/citation]

AMD has competed historically, and even right now they most certainly are competing with Intel in CPU performance. Are they competing in every aspect of it? Not perfectly, but they are competing nonetheless.

For example, as Tom's has been pointing out repeatedly for at least several months, if not much longer, AMD most certainly can compete with Intel in many markets. AMD has nearly had the lower-end to mid-range netbook/notebook market cornered as far as price/performance goes with their APUs in many situations. Even in higher-end desktop situations such as gaming, AMD competes very well. A fairly new Tom's article (not that it's exceptional; Tom's has posted articles with similar results for a long time now) shows AMD at worst about 10-20% below Intel on average even in very high-end gaming with two Radeon 7970s, comparing an i7-3770K to an FX-8350 with and without CPU overclocking.

That most certainly is competitive. Also, for such gaming machines, even AMD's load power consumption disadvantage is generally overshadowed by PSU efficiency and the power consumed by the graphics cards, and AMD manages to beat Intel in idle power consumption with their Trinity APUs, so even for lower-end systems that don't rely on heavy graphics, AMD isn't blowing the power budget.

Furthermore, when it comes to multi-threaded performance, AMD is killing Intel in price/performance. With AMD having decent six-core models in the same price range as Intel's dual-core-with-HTT models, and AMD's eight-core models in similar price ranges to Intel's cheapest current quad-core models, that's hardly a surprise, but that doesn't detract from its relevance, nor from how it disproves your claim.
 
[citation][nom]bustapr[/nom]Well, if that's the case, then look at this: http://www.youtube.com/watch?v=SDRL1CovGAc (720p on high settings and DX10; average FPS of 45).[/citation]

That seems to prove lpedraja2002's point quite well. Sure, 720p isn't a very high resolution, but it's still an HD resolution, the settings in that test are supposedly on high rather than the medium lpedraja2002 specified, and the video's author states that the FRAPS recording decreases performance by around 10%. It stands to reason that at 720p and similar resolutions, such as those common on cheap laptops and monitors, switching from high to medium settings would get good performance even on today's higher-end APUs, and that this next generation could manage a more respectable resolution of, say, 1600x900. It also stands to reason that unless AMD made next to no improvements at all (which doesn't seem to be the case), even 1080p would be doable with good performance.
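
For some back-of-the-envelope math (Python; this assumes GPU-bound frame rates scale roughly inversely with pixel count, which is only an approximation):

```python
# Ballpark scaling estimate, assuming GPU-bound FPS ~ 1/pixel-count.
# Real scaling is messier (bandwidth, geometry, CPU limits), so treat
# these as rough figures only.

fps_with_fraps = 45
fraps_overhead = 0.10                              # ~10% loss claimed in the video
fps_720p = fps_with_fraps / (1 - fraps_overhead)   # ~50 FPS without recording

resolutions = {"720p": 1280 * 720, "900p": 1600 * 900, "1080p": 1920 * 1080}

for name, pixels in resolutions.items():
    estimate = fps_720p * resolutions["720p"] / pixels
    print(f"{name}: ~{estimate:.0f} FPS")          # 720p ~50, 900p ~32, 1080p ~22
```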
 
[citation][nom]UltimateDeep[/nom]I think it's hell about time that AMD APUs and even Intel's CPU go with Graphics of a GDDR5 Core rather than GDDR3/DDR3 core we've been seeing....[/citation]

Well, there's DDR4 being put into almost-full production in 2014, and adoption of it may happen in 2015.

Meanwhile, AMD has also nearly finished developing GDDR6...
 
Why does no one mention that AMD is probably suffering largely not because of Intel but because Nvidia won't allow PhysX code to run on AMD/ATI GPUs? That seems like a dirty tactic by Nvidia.
 
I love these. I built my friend a $300 gaming PC with one of these so he can play Battlefield and Guild Wars with me, and he loves it: a $50 motherboard, a 4 GB stick of DDR3-1866 RAM, an A10-5500, a cheap 500 GB hard drive, and a $20 Antec New Solution case that was on sale. It's a sharp-looking and well-performing build, and it looks nice on his 26-inch 1080p TV.
 
I don't care much about GHz, just performance per watt per dollar.
If the performance/watt of the 65 W CPU is near the quad-core Sandy Bridge parts and the muscle of the 8670D is superior to the Ivy Bridge IGP (or Haswell's), then I think AMD has a winner!
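
For what it's worth, here's a throwaway sketch of that figure of merit (Python; the scores, TDPs, and prices below are hypothetical placeholders, not real data):

```python
# Performance per watt per dollar as a single figure of merit.
# Every input below is a hypothetical placeholder, not real data.

def perf_per_watt_per_dollar(score: float, tdp_w: float, price_usd: float) -> float:
    return score / (tdp_w * price_usd)

apu = perf_per_watt_per_dollar(score=100, tdp_w=65, price_usd=130)  # hypothetical 65 W APU
cpu = perf_per_watt_per_dollar(score=120, tdp_w=77, price_usd=220)  # hypothetical quad i5

print(f"APU: {apu:.4f}  CPU: {cpu:.4f}")  # higher is better on this metric
```

Collapsing everything to one number hides a lot (idle vs. load power, GPU vs. CPU workloads), but it's a reasonable first-pass filter for budget builds.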
 
[citation][nom]silverblue[/nom]If Richland will work with the 7000 series in Dual Graphics, the lowest possible model looks to be the 7750. I think anything above this would defeat the object of Dual Graphics due to the 8670D being too weak, let alone the CPU cores. How does Resonant Clock Meshing stand to help at such high clock speeds? How are AMD planning on keeping power consumption at Trinity levels with the higher clock speeds all round... or is this going to prove an impossible task? So many questions...[/citation]
If Richland does indeed turn out to be VLIW4, that improves the situation. At the moment, pairing a relatively decent dGPU with an APU for Dual Graphics makes little sense, as the performance only marginally exceeds that of the dGPU alone, if at all, so I'm hoping they've put in some work to remedy this. If Richland had been GCN and locked to the 7000 series, having to buy at least a 7750 to run Dual Graphics would have been mostly pointless as well as expensive - you'd have been better served either running the discrete card on its own or getting a cheap FX/i3 instead of the APU - so having VLIW4 is a blessing at this time, though the GPU will only be marginally faster than Trinity's if this is indeed the case. When Kaveri appears, lower-end 8000 cards based on GCN will be available, which keeps Dual Graphics relevant and provides a cheap upgrade path, assuming AMD can sort out the drivers.

This will also be AMD's third generation of 32 nm products, so perhaps they've worked out a few more bugs in Piledriver and improved the die overall. Resonant Clock Meshing is still an important topic, and I wouldn't mind knowing how efficient it will be at 4 GHz, along with any improvements they may have incorporated into the design.
 