AMD Richland Series

Sorry, not meaning to cause any debate or anything. If I have absolutely nothing - no motherboard or CPU - and buy an A10-6800K upon release, can I purchase any FM2 motherboard and flash it? Or would that require me to have another FM2 socket APU, flash with that, and then put the A10-6800K in there? I'm not sure if this question makes sense, but I think that's what I'm looking for.

I've heard of motherboards that let you flash via USB. Is that something I could look into?
 


He's just being dramatic. You shouldn't have any issues.
 


What are you on about? You don't have to flash anything; it's plug and play... I know this because I've had some tender love with the 6800K.

AMD's socket stability means you can use old or new processors on the newest sockets without needing to flash.
 



So an A75 mobo can use an A10-6700 without flashing? My situation is as follows:

http://www.tomshardware.com/forum/365375-28-flash-newbios-with-richland

I have an FM2 A75 motherboard. I had planned to use an A10-5700, but now I can't find it on sale where I live, and I'm afraid my PSU would be overloaded by an A10-5800K (100W TDP)...

I'd prefer an A10-6700, but I don't have any old Trinity APUs. My mobo is a Biostar TA75MH2.

 


How well does it perform?
 
Better than the respective Trinity parts; some of the expert conjecture suggests a 7-10% increase on the CPU side and a 15-20% increase on the iGPU side. You will just have to wait until the NDA is lifted.
 


How far we advance is irrelevant. We can't make an IGP drawing a few dozen watts perform as well as a graphics card drawing up to several hundred watts, built on similar (or, as is usual compared with APUs, even more advanced) technology.
 
What is going to happen is that integrated graphics from Intel and AMD will completely replace entry-level graphics cards from AMD and Nvidia, i.e. $100 and under, while the mid-range and high-end cards will be a different market that remains unaffected by this.
 


Stay tuned Blaz 😉 AMD aren't targeting top-end GPUs; they're targeting the mainstream level, which would be 7770 OC-7850 OC performance on a die, and that's not as impossible as people make out. Kaveri will be mind-blowing, but the yet-to-be-named Excavator part with DDR4 and on-die memory will likely change a great deal about how iGPUs perform. They are being virgin tight about Excavator; rumors of a socket unification have come up, but AMD is reluctant to leak, for good reason. Still, it is very plausible that by Excavator in 2015 we will see a unified socket with high-end x86 CPUs and mainstream-level iGPUs with the benefits of hybrid cross-firing. It is far from impossible.
 


That isn't what smokeybravo was saying. That's why I replied about integrated not getting past the entry level on its own any time soon.



This is my point. Integrated graphics can't replace graphics solutions that can use much more power than the integrated solutions, simply because more power gives more headroom for performance, especially when you consider that APUs have been about a generation behind the discrete graphics of their time as far as GPU technology goes. Integrated graphics is entry level and that's not changing any time soon, and by soon I mean the next few years for certain and the next few decades quite likely. It's not until we advance to the point where integrated can do pretty much anything we could want from a graphics adapter (and we are something like two orders of magnitude away from that) that discrete cards will start to lose their importance among the mid-range to high-end desktop/laptop gaming community.

Kaveri is still not supposed to best the Radeon 7750 GDDR5, if even the 7750 DDR3. That'd be an incredible step for integrated graphics (granted, it'll still be entry level by the time it happens and is arguably entry level even now), but it won't shake the mid-range to high-end gaming community. That's not to say that it won't matter; it'll definitely matter for many things and may help AMD significantly, but it simply won't be mind-blowing except for what it is: an APU. It'll still be a lower-end offering, granted a great improvement over its predecessors. Maybe, just maybe, the Dual Graphics solutions offered by Kaveri will be good enough to push it into the lower mid-range in performance. I wouldn't bet on anything more, and at that point it's not replacing discrete, it's merely augmenting it.

AMD is not being all that tight about Excavator's architecture. You can look up most of it right at Anand where AMD gave details just like you can for Steamroller and could for Piledriver before it came out too.
 
Except desktop PCs as they exist today are not going to be around in 15 years. As soon as quantum engineering makes its way into mainstream computers, having a dedicated video card would be redundant. In 15 years (possibly earlier), computers will be no larger than a music CD case and be completely solid state. Circuit boards will be printed on plastic sheets and will be faster than anything you can build right now. If you think that sounds like science fiction, you are probably forgetting people used punch cards to input data into computers the size of a building not that long ago. The APUs are entry level into this model of total integration. You're going to see this technology start to take off in the next couple of years.

That's my prediction anyway. Take it with a grain of salt.
 


Sorry, but I think that your timing examples are off. Quantum computing won't come into play until standard electronics are no longer profitable, regardless of our capability to use it.

Punch cards were not common for PCs even fifteen years ago; it was quite a while before that when we still used them. The general populace also didn't have computers the size of buildings. You're thinking many decades ago, not a mere decade and a half ago. Maybe we'll be where you expect within the next forty to sixty years if you want to compare timetables with your examples, but much less than that is a stretch at best.

Technology's advancement has been slowing down, not speeding up. We can see where we'll be in the next few generations with relative ease and unfortunately, it's not unimaginably far ahead. We're well past the stage of being completely incapable of predicting where we'll be a few CPU/GPU generations ahead.

It's this simple. APUs are bound to about 100W TDPs. Their power consumption is generally bound somewhat under that. Only about half of these APUs' power consumption gets monopolized by the GPU. It simply can't compete with something that has up to several times more power available along with more advanced technology.

Desktops will still be around in fifteen years. Their current usage as a workhorse isn't changing any time soon. They may be phased even further out of the average user's life, but they will remain a part of life for anyone who wants mid-range to high-end performance and can't afford a far more expensive laptop with less or at best comparable performance.
 
I'm looking to build a box to run the A10-6800K in, and what I want to know is: will AMD change the socket from FM2 to something new to accommodate this new processor, or will existing FM2 mobos be sufficient?

Also when and where can I get this latest Richland processor?

Thanks in advance for all your help
 


All existing FM2 boards that support the 100W TDP will work, and they will also take the Kaveri APUs later in the year.
 


Comparing power utilization is a pretty poor metric of performance. Your argument in particular is, on the surface, particularly thin.

Let me take a stab at reworking your argument with the only appropriate compute analogy.. cars.

My sister's 1970 GTO with the 455ci option produced a fairly impressive 360 HP and a much more impressive 500 lb-ft of torque. It was hugely impressive when I was a little kid; I could barely lift my arms off the seat when my brother punched it. It was also rated at 12mpg city with no emissions controls, though my sister got 5-8mpg [depending on whether my brother got hold of the keys].

Fast forward several decades to the 2006 GTO. Though we think of old muscle cars as being huge, the 2006's curb weight is around 100lbs over the 1970's.
Smaller engine: 364ci. More HP: 400. Less torque: 400 lb-ft.
2006 performance? 0-60 nearly 1.9 seconds faster; quarter mile 2 seconds faster, despite the weight and torque disadvantages.
Oh yeah, don't forget it's loaded up with emissions equipment and is rated 15 city.

That's a long way to go, but I'm feeling chatty and I think it's a particularly good analogy because it illustrates many aspects you glossed over. Before digging in, let me point out that there's been a lot less room to grow in internal combustion efficiency between 1970 and 2006, compared to processor performance/watt between 1971 and 2006. CPU litho in 1971 was done at [I'm not exaggerating] 10,000nm. Today we're making processors at 22nm. Performance is well over 20,000x greater. As curious as I am, I'm not going to look up power consumption on 1970s era CPUs.. but I used to run code on an old IBM mainframe that needed chilled water with a roof heat-exchanger and it was probably many times slower than my phone.
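
Quick sanity check on those litho numbers (my arithmetic, purely illustrative): the feature-size shrink alone accounts for most of that performance gap, before clocks and architecture even enter the picture.

```python
# Back-of-the-envelope only: transistor density scales roughly with
# the inverse square of the feature size.
old_process_nm = 10_000   # ~1971-era lithography (e.g. the Intel 4004)
new_process_nm = 22       # current lithography

density_gain = (old_process_nm / new_process_nm) ** 2
print(f"~{density_gain:,.0f}x more transistors per unit area")
# -> roughly 200,000x, so a 20,000x+ performance gain is plausible
#    before even counting clock speed and architecture improvements
```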

OK.. I'm still going. Hopefully someone enjoys the story enough to keep reading. 🙂

why the analogy is, I think, a good one...
- as time goes on, performance [compute or speed] goes up as power [gas or electrical] goes down. As technology advances, efficiency goes up.
- Complex systems benefit from and are held back by all their components. How does a new car with slightly more HP and much less torque smoke an old car that weighs less and has a 100+ lb-ft advantage in 0-60? Suspension, tires, transmission, responsiveness [fuel injection vs. carb.. etc].
- and this one requires a bit of a stretch but the analogy still works.. If we scaled old tech to new tech.. would it matter? If GM put a 455ci engine with performance that scaled like the 364ci [over the old block] would it really matter in the end?

The last point is where your argument fails; perhaps not on absolute terms, but on the terms that matter.
Scale it up linearly and we get a 500HP motor; 364ci * 1.25 just happens to equal 455ci. Putting 500HP in a 2006 GTO isn't going to get you 3.8-second 0-60s. You won't have the grip. You might even blow the trans.
Look at it another way.. the top speed on the 2006 GTO was something like 167, 169mph. Scaling up to a 455ci w/ 500HP isn't going to also get you up to 210mph no matter what else you also upgrade. That one is probably more obvious.

Right now, discrete GPUs, at least the really big ones, are REALLY BIG and for that reason use lots of power. They're just one part in an even more complex system, though. They seem to have an advantage in bandwidth to local RAM, but remember that they're still swapping data into there from the main memory. Their main choke point is the PCIe bus and the fact that it operates in a different memory address space from the main system RAM. Integrated GPUs have a massive advantage here. An AMD APU's working space is on slower RAM, but it can directly access the same memory addresses as the CPU. There's no swapping. The benefit is immediately obvious when you look at the level of performance you get [especially in GP-GPU calc] between APU and CPU+GPU systems. A fairly small integrated core keeps up with or beats a mid-sized discrete core.
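
To put a rough number on that claimed choke point (the bandwidth and working-set figures here are my own, purely illustrative):

```python
# Illustrative: time to stage a working set over PCIe before the GPU
# can touch it. A shared-address-space APU skips this copy entirely.
pcie2_x16_gbs = 8.0   # GB/s, PCIe 2.0 x16 theoretical per direction
working_set_gb = 0.5  # hypothetical 512 MB of textures and buffers

copy_time_ms = working_set_gb / pcie2_x16_gbs * 1000
print(f"~{copy_time_ms:.0f} ms to stage {working_set_gb * 1024:.0f} MB")
# -> ~62 ms at theoretical bandwidth; real transfers are slower still
```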

Further, we can assume that both discrete GPUs and APU graphics will continue to scale, perhaps even at similar rates, for the foreseeable future. Will the choke points scale, though?
When APU and GPU graphics performance doubles, will PCIe bandwidth have doubled by then? If not, APUs will start to scale faster than discrete GPUs.
More interestingly, would it even matter? We're always going to be limited by the physical interface.
Even if we get 4K monitors in the next few years.. how long until there's no point in investing in faster GPUs for 99.9% of users because we're already running full-rez at 240Hz with every effect turned on? We'll get to a point, eventually, where we poorly constructed humans won't recognize the difference.. and far before that, we'll decide we won't pay a boat-load of money to barely be able to discern a difference.
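
For scale, here's the raw output rate of that hypothetical endpoint (my arithmetic, not a benchmark):

```python
# Raw pixel output needed for the "full-rez at 240Hz" endgame above.
width, height = 3840, 2160   # 4K
refresh_hz = 240

gpix_per_s = width * height * refresh_hz / 1e9
print(f"~{gpix_per_s:.1f} Gpixels/s just to refresh the display")
# Shading each pixel "with every effect turned on" multiplies the
# actual work per frame by orders of magnitude beyond this floor.
```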

OK.. finally ready to go to sleep. Just in time.
I can easily see APUs more or less obsoleting discrete graphics, though probably not totally at any point unless integration totally trumps size minus choke points.
We're already half-way there. I quoted a business-class Dell AIO for a client at work today; the only GPU options were Intel integrated. I wouldn't be at all surprised if half the machines shipping today world-wide had Intel or AMD integrated graphics.

But, but, but that's not good enough for gamers, right?

It's becoming enough right now for a lot of gamers. I've got a bunch of CHEAP compute nodes that I built with Llano A8s, and I just added a Trinity A10. I installed Win7 first to play with them.. they're impressive, hugely impressive, from a price/performance and size/performance perspective.
A Trinity A10 is around $100 and it'll run rings around a $150 CPU and a $100 graphics card.. possibly a $150 card. It's no GTX680, but how many of those are sold compared to total computer shipments?
More in line with your argument: an A10 is a 100W chip. It'll easily smoke a 65W CPU paired with a 100W [possibly more] GPU. There's a synergy in the integration that increases performance per watt and performance per transistor when you compare APU vs. CPU+GPU.
As the APUs get faster, you'll eventually find the GTX650 buyers saying.. why bother? Then it'll be the GTX660 market.. eventually it'll be the GTX670 (or the comparable product in that market segment at that time).. and when that happens, you're seriously biting into people who are willing to spend serious $$ on gaming.


 
OK.. one last point...
There is one area where we'll probably never see discrete graphics made obsolete by integrated graphics, until we see a big paradigm change: GP-GPU.
The future for Nvidia might not be in GPUs. It might be in Teslas. As the home and office market shrinks, where's the room for expansion? HPC.
For anyone doing HPC, or even HTC work, demand for CPU power is functionally limitless. I support people who chew through a million CPU hours in the blink of an eye on a national grid or one of the big BlueGenes.. but they need to do that every week of the year. ;-)
If I could get some of them a million hours a week for 52 weeks a year, they'd come back and say "that's awesome, I can do a billion samples/iterations/interactions/frames.. [whatever unit they happen to work in], but now we should try to run 2 billion samples/iterations/frames.", etc.

There are practical limits to the CPU and GPU power any home user can use or is willing to pay for.
In HPC circles, there is only one limitation. How much money can I spend?

Also, we're slowly but steadily seeing increased leverage of highly parallel resources, like GPUs and now Xeon Phi. Everyone would be running CUDA right now if their calculations were well suited, because the price/performance for highly threaded workloads is so massively tilted toward GPUs. It's the threading that keeps them from jumping. Writing threaded code is hard, but the tools are getting a lot better.
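
As a toy sketch of the workload shape in question (my own Python stand-in; on a GPU, each sample would map to one thread): millions of independent samples with no coordination between them.

```python
# Embarrassingly parallel: one independent function over many inputs.
# This shape is exactly what GPUs devour; here CPU cores stand in.
from multiprocessing import Pool

def simulate_sample(seed: int) -> float:
    """Stand-in for one independent Monte Carlo sample."""
    x = (seed * 2654435761) % 2**32   # cheap hash as a fake RNG
    return (x / 2**32) ** 2

if __name__ == "__main__":
    with Pool() as pool:   # fan the samples out across cores
        results = pool.map(simulate_sample, range(1_000_000))
    print(sum(results) / len(results))
```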

Discrete GPU silicon:
Shrinking markets.. increased demand for HPC.
Guess where ATI and Nvidia are going to shift attention?
 


Sorry, but your comparison to cars doesn't work, you misunderstand how GPUs use memory and PCIe, you incorrectly compared integrated to discrete, and you came to an incorrect conclusion as a result of those mistakes.

Your cars analogy doesn't work because cars, unlike computers, only need to increase in efficiency, if even that; they don't need to keep increasing exponentially in performance. Cars don't need to go any faster now on modern roads than they did several decades ago. GPUs need to keep increasing in performance to meet current demands, or to exceed current demands in preparation for future ones.

GPUs do not swap much between the main memory and their own memory. They only need to copy a few things over from main memory to their own much faster memory, kind of like how CPUs copy data from the main memory to their own much faster caches. They don't swap data back and forth much at all; it's mostly one way, and even then, the GPU's own memory is used much more. GPU memory bandwidth is a far greater bottleneck than the interface between the GPU and the CPU, just as the CPU's cache is a far bigger factor than its main memory. That means the shared memory of AMD's APUs is almost no benefit whatsoever compared to discrete graphics, beyond the integration itself, as far as gaming goes. It can be great for many non-gaming workloads, but gaming does not benefit much at all from this, and the sacrifice of great bandwidth makes the trade-off extremely counter-productive from a raw performance perspective.

PCIe is hardly any bottleneck whatsoever. GPUs are not very reliant on having low latency, and the huge bandwidth difference between even PCIe 2.0 x4 and PCIe 3.0 x16 (a difference of almost an order of magnitude) usually amounts to a negligible performance difference. Worst-case scenarios like that still don't make a big difference even for high-end systems such as Radeon 7970 Crossfire.
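
For reference, the theoretical numbers behind that comparison (a quick sketch using the usual published per-lane rates):

```python
# Approximate usable PCIe bandwidth per direction, per lane.
lane_gbs = {"2.0": 0.5, "3.0": 0.985}   # GB/s per lane

pcie2_x4 = lane_gbs["2.0"] * 4     # ~2 GB/s
pcie3_x16 = lane_gbs["3.0"] * 16   # ~15.8 GB/s
print(f"PCIe 2.0 x4: {pcie2_x4:.1f} GB/s; PCIe 3.0 x16: {pcie3_x16:.1f} GB/s")
print(f"ratio: ~{pcie3_x16 / pcie2_x4:.1f}x")   # close to an order of magnitude
```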

Compare the Radeon 6550D to the Radeon 5550 GDDR5. Despite nearly identical GPUs, the memory performance difference still lets the HD 6550D at best only match the DDR3 version of the Radeon 5550, while the GDDR5 version remains too fast to catch. Another comparison is the Radeon 6670 DDR3 and GDDR5 versus the A10-5800K's GPU. The A10's GPU performs similarly to the Radeon 6670's GPU and can roughly match the DDR3 version in performance at comparable clock frequencies, but the GDDR5 version of the 6670 is far faster. Memory is obviously a huge factor, far greater than the PCIe bus, which is only a problem when you go down to something like PCIe 3.0 x1 and worse.
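
To quantify the memory gap in those comparisons (theoretical bandwidths; the clocks below are typical retail figures, not exact for every card):

```python
# Theoretical memory bandwidth = transfer rate * bus width / 8.
def bandwidth_gbs(mts: int, bus_bits: int) -> float:
    return mts * 1e6 * bus_bits / 8 / 1e9

print(f"6670 GDDR5, 128-bit @ 4000 MT/s: {bandwidth_gbs(4000, 128):.0f} GB/s")
print(f"6670 DDR3,  128-bit @ 1600 MT/s: {bandwidth_gbs(1600, 128):.0f} GB/s")
print(f"APU, dual-channel DDR3-1600:     {bandwidth_gbs(1600, 128):.0f} GB/s")
# The iGPU also shares that last pool with the CPU, which is why it
# tracks the DDR3 card and can't touch the GDDR5 one.
```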

My power consumption argument was flawless. Power consumption on top-end graphics has not gone anywhere but up ever since such cards came out, and although it seems to have more or less plateaued, there is absolutely nothing to indicate that performance will stay the same as power consumption goes down. Quite the opposite, in fact, when we consider how difficult it seems to be to keep the rate of improvement in performance from dropping off too far (it's already several times slower than it used to be). That means that by the time integrated graphics gets anywhere near, say, the GTX 670, discrete will be several times faster. If I had to guess, that'll take at least five or six more generations anyway, since we're talking about an improvement of about an order of magnitude over current IGPs.
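
Back-of-the-envelope on that generation count (the ~1.5x-per-generation gain is my assumption, purely illustrative):

```python
import math

gap = 10.0          # roughly an order of magnitude short of a GTX 670
gain_per_gen = 1.5  # assumed average iGPU improvement per generation

generations = math.log(gap) / math.log(gain_per_gen)
print(f"~{generations:.1f} generations to close the gap")   # ~5.7
```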

I'll also point out that there has been little movement in average display resolution since 1080p came out, other than the *cheap* Korean IPS displays making decent IPS technology more affordable (but still fairly expensive for the average user, even the average gamer), and the general direction of refresh rate has gone down rather than up, granted there are a few decent reasons for that. It is extremely unlikely that we'll see high resolutions such as any 4K resolution become affordable within this decade, and it's even less likely that we'll see refresh rates over 120Hz be affordable in the next decade. Even less likely is that we'll find both in the same display at all.



We won't see integrated graphics make discrete obsolete for gaming until integrated graphics is fast enough that there is no advantage in getting discrete. At best, with current technology being what it is, we seem to be decades away from that.
 
Kaveri/Kabini/Temash will all feature GDDR5 and hybrid memory controllers, plus the addition of FM3 with integrated GDDR5 and DDR3/DDR4 support. The upside to this is that the FM3 boards will be backward compatible with existing FM2 parts.

I tested an unspecified DDR3-3600 kit with the 5800K I have. The bandwidth achieved on DDR3-1600 is roughly 22GB/s; the bandwidth with that kit reached 51GB/s. DDR4 will have 4200+ speeds and higher bandwidth; I wouldn't be surprised to see APUs hitting 60-70GB/s+.
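
Those measurements track the theoretical dual-channel numbers reasonably well (standard formula; ~85-90% efficiency is typical):

```python
# Theoretical dual-channel bandwidth: 2 channels * 64 bits * transfer rate.
def dual_channel_gbs(mts: int) -> float:
    return 2 * 64 / 8 * mts * 1e6 / 1e9

for mts in (1600, 3600, 4266):
    print(f"DDR-{mts}: {dual_channel_gbs(mts):.1f} GB/s theoretical")
# DDR-1600 -> 25.6 GB/s (~22 measured); DDR-3600 -> 57.6 GB/s (~51
# measured); DDR4-class speeds make 60-70 GB/s entirely plausible.
```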

If you don't own an APU and are curious, I would suggest waiting on Richland; if you are an existing APU owner, I suggest waiting for Kaveri and the new boards to come out. AMD have pushed the iGPU along at a rate of knots. It was said that AMD wanted mainstream-level integrated graphics by Excavator, and it is very possible, likely featuring a DDR4/GDDR6 interface with what AMD projects at well over the 100GB/s mark, and dual graphics support.
 


Do we have confirmation from AMD that the FM3 APUs will have GDDR5 memory?