Efficient AMD A10-6700T APU to hit Retail Next Week

Status
Not open for further replies.

ta152h

Distinguished
Apr 1, 2009
1,207
2
19,285
Is this really progress? It runs at 2/3 the speed, and uses 2/3 the power? This is a good thing?

I'm used to lowering the frequency by a few percent and getting a much larger percentage drop in power use. It's probably the GPU, since the CPU frequency wasn't lowered much, but 45 watts for a dual-core (and it is a dual-core, not a quad-core, unless you're an AMD zombie who really thinks adding an integer unit and nothing else turns one core into two) running at 2.5 GHz isn't that great. The scenarios for this processor are limited; the A10-6700 is going to be better for most people, being quite powerful while still reasonable on power.

I also hate it when they re-label processors with a "T" even though the unit runs much slower than the one it's named after. Intel has done this too. It's confusing to customers and leads to disappointment.
 

alextheblue

Distinguished
You have to bench it before you can determine what "speed" it runs at. Depending on how the turbo operates, it may end up being faster than its base clock implies. The listed turbo clock, for example, is 81% of the 6700's turbo clock. Also, the GPU will be almost as fast; it actually has the same top speed. In a low-power APU system, I'd say the GPU is going to be the biggest limiting factor anyway.

What I really want to know is if AM3+ is going to see any Steamroller chips.
 

Jaxem

Honorable
I'm really not getting why saving a bit of power on a desktop is a big deal. People want a fast, powerful desktop, not one that saves them $1.20 a month on their power bill.
 

sykozis

Distinguished
Dec 17, 2008
1,759
5
19,865


If you go by that line of thinking, the 8086 was only a half-core processor, as were the 286, 386 and 486SX as they all only featured an ALU and required a separate x87 "math co-processor" (better known as an FPU) for floating point operations. Having an FPU doesn't make a "core" a "core". It's the ALU that determines what's a processor core. Unlike the FPU, the ALU is a fully functional, stand-alone component. The FPU can't operate without the ALU.



APU isn't a "made up market by AMD". AMD uses the term as a reference to CPUs that have an integrated graphics processor, which is something Intel actually pushed to market first. The Core i3 and i5 processors on LGA1156 were the first x86 processors to feature an integrated graphics processor. Intel has no need for such references due to the fact that even the majority of their mainstream processors have integrated graphics processors.



Your understanding is quite wrong. If that were the case, Intel wouldn't waste their time putting an integrated graphics processor on every CPU package. An APU functions as both a CPU and a GPU (which is exactly what it is). The two components are completely capable of functioning independently of each other, even while sharing the same processor package. You code for the CPU portion the same as you would any other CPU, and for the GPU portion the same as you would any other GPU.
 

teh_chem

Honorable
Jun 20, 2012
902
0
11,010

I was about to call shenanigans, but it appears you're right.

A10-6700T 2.5GHz base, 3.5GHz turbo
A10-6700 3.7GHz base, 4.3GHz turbo

I don't understand why this is even designated as the "T" version of the 6700. It's just an under-clocked CPU (and hence, of course it has a lower TDP).
 

hakesterman

Distinguished
Oct 6, 2008
563
0
18,980
I'll put my FX 8350 up against any Intel chip remotely in its class. As far as APUs are concerned, they're excellent for an internet PC or for people who want to play simple games like Angry Birds and card games. They're not designed for high-end PCs; they're great for what they're made for.
 

knowom

Distinguished
Jan 28, 2006
782
0
18,990
AMD has fallen so far since its glory days of the AMD64, but I feel no real sympathy for them; when they were on top, they price-gouged heavily on those CPUs.

The company I'd like to see doing something is VIA; they were always keen on power efficiency and had some really innovative thinking, but I think that's a long shot.
 

silverblue

Distinguished
Jul 22, 2009
1,199
4
19,285
The only difference between this and the 5750M is the slightly higher GPU clock speeds, which would explain the TDP change from 35W to 45W. The GPU, therefore, is the same as the one in the 6800, except I'm not so sure about its base clock.
 

southernshark

Distinguished
Nov 7, 2009
1,014
4
19,295



It really depends on where you live and what your budget is.

EXAMPLE: I lived in a nice section of Guatemala City for two years. For the first six months there I was operating with just a laptop. My energy bill was around Q500 a month (about US$75). Guatemala City is quite cool, being up in the mountains, so I never ran an AC or anything like that.

That Christmas, I flew back to Florida and picked up my PC. After installing the PC and monitor, my light bill jumped to Q900 a month (about US$130), nearly doubling my energy bill. This is because Guatemala City uses a "scaling" energy tariff: if you use over X amount of energy per month, your rate is scaled up. So say I was paying 1x per kilowatt-hour; I was suddenly billed at 1.5x per kilowatt-hour for crossing that magical barrier. That's a bill of over US$600 per year just to run a PC.
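A tiered tariff like that can be sketched in a few lines of Python. The threshold and rates below are made-up illustrative numbers, not actual Guatemalan tariffs:

```python
# Hypothetical sketch of a tiered ("scaling") electricity bill like the one
# described above. Threshold and rates are illustrative, not real tariffs.

def monthly_bill(kwh_used, base_rate=1.0, threshold=300, multiplier=1.5):
    """Bill in quetzales: once usage crosses the threshold, ALL usage
    is charged at the higher rate (as described in the post)."""
    rate = base_rate * multiplier if kwh_used > threshold else base_rate
    return kwh_used * rate

# Staying just under the threshold vs. crossing it with a desktop PC:
laptop_only = monthly_bill(290)   # charged at the base rate
with_desktop = monthly_bill(310)  # the whole bill jumps to the higher rate
```

The jump is disproportionate to the extra usage, which is exactly why someone on local wages would want to stay under the cutoff.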

I still ran mine, because that's how I roll. But one can imagine how someone, especially if they had to live on Guatemalan wages, would want to stay below that magic cut off point.
 

HKILLER

Honorable
Jan 8, 2013
85
0
10,640
I love how people are trying to bash AMD while forgetting that both Sony and Microsoft gave their consoles to AMD this year instead of Intel or Nvidia. They're going to sell a massive number of consoles. Plus, AMD still sells graphics cards, and a lot of people on a budget go with AMD processors. So if you think AMD is going down, you can dream on...
 

hotbuddha

Distinguished
Sep 12, 2011
3
0
18,510
Actually, it is in fact more efficient. For one thing, you cannot look only at the processor speed dropping without taking into account that the graphics speed remains the same. Also, someone's math was off even on the processor comparison alone: it would not be 2/3 the speed for 2/3 the power; it would be a 25% performance drop for a 30% power-usage drop. Factoring in the unchanged graphics performance shows even more efficiency...

Graphics (same on both): 0.760 to 0.844 = avg 0.802 GHz

A10-6700: 3.7 to 4.3 = avg 4.0 + 0.8 graphics = 4.8
A10-6700T: 2.5 to 3.5 = avg 3.0 + 0.8 graphics = 3.8

Performance difference: 3.8/4.8 = 0.79, or about 20% less on the T
Power difference: 45/65 = 0.69, or about 30% less on the T

So, at the cost of a 20% reduction in total performance you get a 30% reduction in power usage. That, my friends, makes for a more efficient APU.
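For the curious, that arithmetic can be reproduced in a few lines (using the 6700's 65W TDP figure from this thread; as later replies point out, averaging base and turbo clocks is only a rough proxy for performance):

```python
# Reproducing the averaged-clock efficiency comparison above.
# Clock figures (GHz) and TDPs (W) are the ones quoted in this thread.

def avg(lo, hi):
    return (lo + hi) / 2

gpu = avg(0.760, 0.844)            # ~0.802 GHz, same GPU range on both chips

a10_6700 = avg(3.7, 4.3) + gpu     # ~4.8 "total" GHz
a10_6700t = avg(2.5, 3.5) + gpu    # ~3.8 "total" GHz

perf_ratio = a10_6700t / a10_6700  # ~0.79 -> about 20% less performance
power_ratio = 45 / 65              # ~0.69 -> about 30% less power
```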
 

teh_chem

Honorable
Jun 20, 2012
902
0
11,010


Look at the base operating frequencies: at stock, the T (2.5GHz) runs at only 67% the speed of the non-T (3.7GHz), yet it consumes about 70% of the non-T's TDP. That's less processing capability for the power (assuming TDP equates to electrical power, which isn't strictly true, but it's good enough for this comparison). Yes, it consumes less power overall, but it's not more power-efficient assuming both processors are running at full load. You can't easily factor the turbo speeds into it, because not all cores operate at the turbo speed, so averaging the turbo frequency doesn't make sense. And if you're at the max TDP for the CPU, no cores will be operating beyond the base frequency, so the non-T will still be more efficient.

It's just an under-clocked processor: same architecture and manufacturing process, only operating at a lower frequency and voltage to fit a different TDP envelope. Clock-for-clock it's not more power-efficient; that's not even physically possible.
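As a quick sanity check of the base-clock argument, here is the crude GHz-per-TDP-watt proxy spelled out (again assuming TDP stands in for power, which it doesn't exactly):

```python
# Base-frequency-per-TDP-watt comparison (a crude proxy; TDP is not
# actual power draw, as noted above). Figures are from this thread.

t_eff = 2.5 / 45      # A10-6700T: ~0.056 GHz per TDP-watt
non_t_eff = 3.7 / 65  # A10-6700:  ~0.057 GHz per TDP-watt

# At base clocks, the non-T comes out slightly ahead per TDP-watt.
better = "A10-6700" if non_t_eff > t_eff else "A10-6700T"
```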
 

hotbuddha

Distinguished
Sep 12, 2011
3
0
18,510


I understand your points here (except the part about not averaging the turbo, since turbo does have performance value and is a feature you lose if you drop below an A10 to an A8), but you're still not accounting for everything besides the core clock running on less power with the same architecture (the GPU, like I said, but also everything else: memory access, FPU, etc.).

The same architecture and the same manufacturing process do not produce the same processor from batch to batch, because quality varies. I'm sure you already know this, but I bring it up for the sake of argument and not your education, so please don't be offended. All A-series chips are essentially the same design, but each batch comes out differently: the best chips become A10-6800s; the next batch might not be stable at the initially targeted performance levels but can still be under-powered to 65W and stable, and those become A10-6700s; and so on, until a batch without functioning Turbo Core 3.0 gets downgraded to A8s, and batches with unstable cores become A6s or A4s, etc.

I would wager the A10-6700T is not (per se) an under-clocked, under-powered A10-6700, but perhaps something just above it and just below an A10-6800 (its own animal) that tested as able to perform better and more stably at 45W and was thrown into a different pile. So I say it is physically possible for the same chip to perform differently, because of the way manufacturers (AMD and Intel included) sell off what are technically defective chips as lower models. You would be correct to say it could not be more efficient (a better performance-to-power ratio) compared to the perfect chip of the same architecture, but it could in fact be more efficient than a different defective version of the chip, like the A10-6700 (and based on the numbers, I suspect it is).
 