AMD A10-4600M "Trinity" 3DMark 11 Performance Leaked

To be honest, I am extremely confused by their A10/A8/A6/A4 naming scheme. Why can't they just stick to a shorter, simpler naming scheme? It's not like the number A10 means anything when the 4600 is already a bigger number than Llano's.
 
[citation][nom]A Bad Day[/nom]I wonder, when will integrated GPUs come with their own VRAM? A mid-range GPU with mid-latency 1600 MHz RAM can stand toe-to-toe with a high-end GPU armed with high-latency 1066 MHz RAM. An integrated GPU with 256-512MB of GDDR5 can easily dominate other integrated GPUs.[/citation]It's already been done. They called it Sideport; it was like a cache. Unfortunately it has its drawbacks - chief among them cost. You've got a custom board design for Sideport, and it's squeezed on both sides. On one side you've got cheaper, simpler, higher-volume boards that use integrated graphics and NO Sideport. On the other side you've got entry-level discrete graphics. Kind of hard to market and sell something in between. "Well, ours is identical, but it costs more because it has SIDEPORT MEMORY!" Tough sale to most consumers.

Also, you have to realize that the CPU side of the APU doesn't really need all that bandwidth. Look at the differences in performance for these chips at various clocks: not a whole lot of scaling going on there. The GPU side, on the other hand, keeps scaling nicely. So if an OEM wants faster graphics, all they have to do is dump the DDR3-1333/1600 and replace it with 1866 or better. It's just as tough a sale as Sideport memory, but at least it's a lot cheaper and easier to do - no need for different board configurations!

Eventually they'll have more on-die/on-chip cache though, and maybe toss in more memory channels. Triple channel DDR5 for Trinity's successor? 😛
 
[citation][nom]Lackaflocka[/nom]Nice graphic! They used the lightest weight CPU with the most underclocked GPU Intel put out.[/citation]
Sure, but on the other hand, the A8 APU still outperforms the HD 4000 in all real-world situations, regardless of which Ivy Bridge chip you're looking at. And then the A10 is almost double the performance of the A8 on the graphics side, at least as far as 3DMark 11 performance goes.
 
[citation][nom]alextheblue[/nom]Eventually they'll have more on-die/on-chip cache though, and maybe toss in more memory channels. Triple channel DDR5 for Trinity's successor?[/citation]

I really hope so. I would like to see how much difference Sideport memory makes vs. fast RAM (1866) vs. 1600/1333. Know of any sites that have tested this with the current APUs?
 
Surprised no one mentioned DDR4, coming out a year and a half from now. Besides the lower wattage and higher clock speeds, it also offers a point-to-point memory topology instead of 2- or 3-channel setups. If the whole DDR4 specification passes, every stick of RAM that occupies a slot will add memory bandwidth; I think with 7 or more DIMMs in your system it would be faster than the 7970's 384-bit memory controller. That's going by every occupied DIMM contributing a 64-bit-wide interface, which in most systems today adds up to 128 bits, i.e. dual-channel memory.
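Rough math behind that comparison - a back-of-the-envelope sketch only. The per-DIMM transfer rate below is an assumption (the DDR4 spec isn't final), and the 7970 numbers are its stock 384-bit GDDR5 interface:

```python
# Back-of-the-envelope peak bandwidth: point-to-point DDR4 DIMMs vs. an HD 7970.
# Assumption: each DIMM keeps a 64-bit (8-byte) interface; DDR4-3200 is a guess.
DIMM_BUS_BYTES = 64 // 8            # 64-bit interface per DIMM
DDR4_RATE_MT = 3200                 # assumed transfer rate in MT/s

per_dimm_gbps = DIMM_BUS_BYTES * DDR4_RATE_MT / 1000.0        # GB/s per DIMM

# Radeon HD 7970: 384-bit bus, GDDR5 at 5500 MT/s effective -> ~264 GB/s.
hd7970_gbps = (384 // 8) * 5500 / 1000.0

for dimms in (2, 4, 7, 10, 12):
    total = dimms * per_dimm_gbps
    marker = ">" if total > hd7970_gbps else "<"
    print(f"{dimms:2d} DIMMs: {total:6.1f} GB/s {marker} 7970's {hd7970_gbps:.0f} GB/s")
```

At that assumed rate it takes more than 7 DIMMs to get there, but the scaling idea is the same: every occupied slot adds its own bandwidth instead of sharing a channel.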
 
The Korean coverage says a 56%+ IGP gain over Llano
and a 29%+ CPU gain over Llano, plus 3-monitor support and support for 2 discrete GPUs,
and maybe 2133 RAM.
That should make it a big hit.
 
The speed of system RAM itself is irrelevant when it comes to dedicated GPUs.
It might make a difference with integrated GPUs, which in turn can increase their performance in games... however, the effect is minimal (usually within a 5% margin most of the time).
 
[citation][nom]hasten[/nom]Who cares about the naming scheme. This is the second article today people were crying about naming. So basically your comment tells me you buy things only based on the number or name of the product?You realize this is a mobile product referenced in the article right? Or did u complain before getting through the sub title?[/citation]I don't, but the average Joe does.
 
[citation][nom]frozonic[/nom]I am not sure if I fully understood your comment but... you are saying that RAM matters a lot in GPU performance? That a high-end RAM kit can boost your mid-range GPU's performance to keep up with a high-end GPU?! If so, you are a noob[/citation]

You don't understand the comment. The point was that the IGPs should have their own high-speed memory, not the CPU. There's a reason video cards have dozens and even hundreds of GB/s of bandwidth: the GPU needs serious bandwidth to keep up with its number crunching. Also, high-end RAM kits DO help high-end gaming... just not a whole lot.

[citation][nom]Tomfreak[/nom]To be honest, I am extremely confused by their A10/A8/A6/A4 naming scheme. Why can't they just stick to a shorter, simpler naming scheme? It's not like the number A10 means anything when the 4600 is already a bigger number than Llano's.[/citation]

A4 is weaker than A6. A6 is weaker than A8. A8 is weaker than A10. How could that possibly be confusing? How could it possibly be a shorter naming convention when each family name is only two to three characters long? As for your last sentence, please don't be that stupid. Since when have bigger numbers meant much of anything between different families? Radeon X1950 is a much bigger number than GTX 680, but we should all know that the X1950 can only manage a mere fraction of the 680's performance, and even then, only in applications that it supports. Just going from the GeForce 9800s to the GTX 295 is a huge drop in numbers, yet a large increase in performance.

[citation][nom]deksman[/nom]The speed of system RAM itself is irrelevant when it comes to dedicated GPUs. It might make a difference with integrated GPUs, which in turn can increase their performance in games... however, the effect is minimal (usually within a 5% margin most of the time).[/citation]

Please, don't be stupid. Just going from 1333MHz to 1600MHz increases FPS in games by almost 20% with A8 Llano systems. Tom's did an article that included this already. Even in systems with discrete GPUs, having high speed RAM can help in many situations. For example, going up to about 1866MHz for AMD and 1600MHz for Intel almost always helps pretty much everything by a little. Tom's also showed us this in an even more recent article, among many others before it.

Going from 1333MHz to 2133MHz is around a 50% to 60% increase in A8 Llano gaming performance, strictly because of how memory-bottlenecked Llano is. Faster RAM in Llano systems shows huge performance gains, sometimes almost as high in percentage terms as the increase in bandwidth.
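For what it's worth, those percentages line up with the raw bandwidth ratio. A quick sanity check using only theoretical dual-channel peaks (nothing measured):

```python
# Theoretical dual-channel DDR3 peak bandwidth at the speeds discussed above.
def dual_channel_ddr3_gbps(rate_mt):
    """Peak GB/s for two 64-bit (8-byte) channels at the given MT/s."""
    return 2 * 8 * rate_mt / 1000.0

base = dual_channel_ddr3_gbps(1333)     # ~21.3 GB/s
fast = dual_channel_ddr3_gbps(2133)     # ~34.1 GB/s
print(f"DDR3-1333: {base:.1f} GB/s, DDR3-2133: {fast:.1f} GB/s, "
      f"raw gain: {100 * (fast / base - 1):.0f}%")   # ~60% more bandwidth
```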

[citation][nom]Tomfreak[/nom]I don't, but the average Joe does.[/citation]

Absolutely no naming convention would be simple enough for the average Joe who doesn't understand technology at least a little, except for an extremely simple one universally adopted by the entire industry to avoid confusion between brands and product families. Complaining about a naming convention for not doing this is ridiculous. You might as well be complaining about the names of every computer product in the industry.
 
[citation][nom]Kyuuketsuki[/nom]Sure, but on the other hand, the A8 APU still outperforms the HD 4000 in all real-world situations, regardless of which Ivy Bridge chip you're looking at. And then the A10 is almost double the performance of the A8 on the graphics side, at least as far as 3DMark 11 performance goes.[/citation]
Not true. The HD 4000 is trailing somewhere behind the HD 6630M.
[citation][nom]ThE_BrutE[/nom]I really hope so. I would like to see how much difference Sideport memory makes vs. fast RAM (1866) vs. 1600/1333. Know of any sites that have tested this with the current APUs?[/citation]
They tested it on a desktop version. There is a considerable difference between 1066, 1333, and 1600 chips, but not so much above that anymore.
 
With Trinity using a much faster clocked GPU, I feel that it's going to be much more sensitive to RAM speed. That 1866MHz DDR3 is really starting to look like a good idea, isn't it?

I wish it were that cheap and easy to add a 1GB GDDR5 chip to the die, but if it were, I think it would've been done long ago. Still, even assuming there was a huge bus connected to it, you'd go right back to being shader limited in a jiffy.
 
I feel the need to make a comparison, even if it's only based on a single benchmark (3DMark, too... yuck), to the ASUS U46SV. Check the following link:

http://www.pcworld.com/article/248867/asus_u46sv_review_fast_comfortable_enduring_but_too_much_software.html

Both CPUs are similarly clocked, and both GPUs appear to be as well. The 540M (see Wikipedia) has 1GB of DDR3 with 96 shaders (double speed, plus their shader counts aren't really comparable to those in AMD cards). The i5 and GeForce pairing appears to have a TDP approaching 70W.

At half the expected TDP (for what it's worth), it's really not looking bad for Trinity, is it?

Still, I can't help but wonder if they should've leaked something a little juicier. Skyrim/Civ5/HawX2/DiRT3 scores, perhaps.
 
In the meantime, they've been diligently working on APUs. They are seemingly as far ahead of Intel in the APU market as Intel is ahead of AMD in the desktop market.

I think the AMD approach of not going head to head with Intel on super high-end desktop performance is working. Their Opteron was always made mainly for webserver-type usage where high integer throughput is required. They ported it, as Bulldozer, to the PC desktop market. Everyone called it a failure, but they never intended it to be the giant-killer for Intel's Sandy Bridge. It just needed to be good enough price/performance-wise to be viable, which it was for long enough to bridge the gap until their APUs were released.
 
Bulldozer actually doesn't fail that hard; it just came out too late and with the wrong naming and marketing strategy. If they sold the FX-8 series as a quad-core, 8-thread CPU priced around the quad-core i5, then it would look shiny...
Prescott was a bigger failure than it.
 
[citation][nom]sonofliberty08[/nom]Bulldozer actually doesn't fail that hard; it just came out too late and with the wrong naming and marketing strategy. If they sold the FX-8 series as a quad-core, 8-thread CPU priced around the quad-core i5, then it would look shiny... Prescott was a bigger failure than it.[/citation]

The quad, six, and eight core CPUs in Bulldozer are just that... Not dual-threaded dual, triple, and quad core CPUs. AMD markets them this way because that is what they are. There are two conjoined cores per module, but they are two cores nonetheless.

[citation][nom]SuperVeloce[/nom]Not true. The HD 4000 is trailing somewhere behind the HD 6630M. They tested it on a desktop version. There is a considerable difference between 1066, 1333, and 1600 chips, but not so much above that anymore.[/citation]

Actually, Llano A8s still get big benefits from going beyond 1600MHz. It's CPUs that tend to see little benefit from going beyond 1600MHz and even then, only Intel CPUs because AMD CPUs can get significant benefits by going up to 1866MHz. Of course, the Intel CPUs stop getting benefits from faster memory a little sooner than the AMD CPUs because the Intel CPUs have more efficient memory controllers that don't need higher frequency RAM to get the same usable bandwidth. For example, Llano and Bulldozer-based FX get about 25% less bandwidth than Sandy Bridge does with the same number of channels at the same frequency with DDR3, as tested by Tom's and Anand.
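Putting rough numbers on that: the ~25% efficiency gap is the figure cited above, and the rest is just peak-bandwidth arithmetic, so treat it as an illustration rather than measured data:

```python
# Peak vs. rough "usable" dual-channel DDR3 bandwidth, applying the ~25%
# memory-controller efficiency gap mentioned above for Llano/Bulldozer-FX.
def dual_channel_peak_gbps(rate_mt):
    return 2 * 8 * rate_mt / 1000.0      # two 64-bit (8-byte) channels

AMD_EFFICIENCY = 0.75                    # ~25% less usable bandwidth (assumed)

for rate in (1333, 1600, 1866):
    peak = dual_channel_peak_gbps(rate)
    print(f"DDR3-{rate}: {peak:.1f} GB/s peak, "
          f"~{peak * AMD_EFFICIENCY:.1f} GB/s usable on Llano/FX")
```

Which is roughly why DDR3-1866 on the AMD side ends up in the same usable-bandwidth neighborhood as DDR3-1333/1600 at Intel-like efficiency.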
 
Hmmm... I'm not sure what to say. I know everyone's excited and all, but... we all know a present desktop A8 beats the desktop HD 4000 by a margin... and we know that the laptop HD 4000 will be about as good as the desktop one, just slightly under-clocked maybe... and here the A6 under-performs the HD 4000...

Somehow I feel that the A10's GPU performance is almost the same as the A8's... a look at the graphics hierarchy chart suggests the same, though I'll admit I'm not well versed in AMD's naming scheme as far as their APUs' graphics components are concerned.
 
The HD 4000 on the mobile platform beats the mobile A8 (the HD 4000 can clock up to 1.3 GHz). They are showing the lowest-clocked mobile HD 4000 on the weakest CPU in the comparison. The A10 will probably beat the HD 4000, but not by as much as the graphic would make you believe. I'm guessing it will be in the 10-15% range, depending on game titles, for 3D performance and compute when truly matching tit for tat. On the CPU side, though, it will be about half the performance of Ivy Bridge, is my guess. With video encoding and decoding going heavily to the HD 4000, the A10 will still be a tough sell without an extremely low price. An A10 for $150 might be the sweet spot that gets AMD back into the game.
 
[citation][nom]ILikeCPU[/nom]The HD 4000 on the mobile platform beats the mobile A8 (the HD 4000 can clock up to 1.3 GHz). They are showing the lowest-clocked mobile HD 4000 on the weakest CPU in the comparison. The A10 will probably beat the HD 4000, but not by as much as the graphic would make you believe. I'm guessing it will be in the 10-15% range, depending on game titles, for 3D performance and compute when truly matching tit for tat. On the CPU side, though, it will be about half the performance of Ivy Bridge, is my guess. With video encoding and decoding going heavily to the HD 4000, the A10 will still be a tough sell without an extremely low price. An A10 for $150 might be the sweet spot that gets AMD back into the game.[/citation]

Mobile Llano A8s still beat the mobile HD 4000, so Piledriver A8s and especially A10s should really beat the mobile HD 4000. Considering that the HD graphics is different for each processor family (even with the same name, i.e. HD 4000 on an i3 will be slower than HD 4000 on an i7), we don't even know for sure how well the mobile i3/i5 HD 4000 will stack up. Most laptop buyers who want integrated graphics will be buying below the i7s, so this is fairly important information and really helps AMD.
 
There seems to be a little confusion on the memory thing.

My understanding is AMD is 3/4 of the way to a unified memory controller and address space, where the CPU cores and GPU share and share alike. To me, that implies higher speed, greater bandwidth, and less latency, FTW, with your RAM.

Kaveri is the final step in the unification process -- Piledriver CPU cores and GCN GPU compute units in a SIMD array, with a unified memory controller and 64-bit address space.

For OpenCL :)

 
[citation][nom]Wisecracker[/nom]There seems to be a little confusion on the memory thing. My understanding is AMD is 3/4 of the way to a unified memory controller and address space, where the CPU cores and GPU share and share alike. To me, that implies higher speed, greater bandwidth, and less latency, FTW, with your RAM. Kaveri is the final step in the unification process -- Piledriver CPU cores and GCN GPU compute units in a SIMD array, with a unified memory controller and 64-bit address space. For OpenCL[/citation]

Llano already has the GPU and CPU sharing memory. The problem is that you have both a CPU and a GPU sharing a fairly low-bandwidth memory interface, and the poor bandwidth is compounded by the fact that the GPU's discrete equivalent (for the desktop A8s, the 6550D is roughly equal to the Radeon 5550 that it was based on) has several times more VRAM bandwidth than the CPU and GPU have all together. Llano's biggest problem is that it is memory bandwidth bottlenecked. If it had good enough memory, it would beat the HD 4000 (even in the mobile market). However, that would mean something like 512MB or 1GB of GDDR5 memory attached to Llano motherboards, and the Llano APU sockets would need more pins to facilitate this on-board VRAM (the proper name for it is Side-Port memory).

Also, sharing the memory like this actually increases latency and does not help bandwidth at all. Llano already supports 64 bit memory addressing too.
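A rough illustration of that bottleneck follows; the discrete-card figures are assumptions for a GDDR5 HD 5550-class board, not quoted specs:

```python
# Shared system-RAM bandwidth available to a Llano APU vs. the dedicated
# VRAM bandwidth of a small discrete card (illustrative numbers only).
def bandwidth_gbps(bus_bits, rate_mt):
    return bus_bits / 8 * rate_mt / 1000.0

llano_shared = bandwidth_gbps(128, 1600)   # two 64-bit DDR3-1600 channels, CPU + IGP
discrete_vram = bandwidth_gbps(128, 3200)  # assumed 128-bit GDDR5 at 3200 MT/s, GPU only

print(f"Llano shared DDR3-1600  : {llano_shared:.1f} GB/s (split between CPU and IGP)")
print(f"Discrete GDDR5 (assumed): {discrete_vram:.1f} GB/s (all for the GPU)")
```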
 