AMD, you were too slow, I'm going for Sandy Bridge now. Planning before April 2010. Unless you can give me a future refund for buying the LGA 1155 motherboard and the processor, no matter how fast your new processors will be, GG.
I just want to know what improvements they've made to the cores. I know about the doubling up, etc., but how much did they increase IPC? Doubling most of a core while sharing some parts could actually slow down IPC if the shared resources are contended at the same time. And unless they increased IPC by a lot, I don't see BD beating SB, or possibly even coming close.
This is awesome for low power things, but I don't really care about those.
[citation][nom]Winnick[/nom]Bunz_of_Steel: why would you up for a Llano APU for an HTPC, when the Zacate already handles 1080p Video smoothly, with lower power requirements and is available now?[/citation]
A Llano APU, though using more power, will provide more performance, which will allow you to use it for other tasks. An A8-3550 provides 400 stream processors and 4 physical cores, which would allow for better multi-tasking, like running a media server and HTPC simultaneously, something Zacate would struggle with. Also, the extra 320 stream processors would improve encoding performance.
That is the same number of shaders as a 5670; clock speeds will be lower, and there's no mention of memory bandwidth, but I presume it will be severely reduced if it's shared DDR rather than dedicated video memory. Still, this is the future, the way forward: it raises the baseline specification of mainstream machines, allowing more game development to happen because more machines will run 3D gaming. It could rejuvenate the PC gaming market.
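For anyone wondering how much the shared memory actually costs, here's a rough back-of-envelope sketch. The numbers are assumptions from public specs (the HD 5670's 128-bit GDDR5 bus at 4 GT/s effective, and dual-channel DDR3-1600 as a plausible Llano configuration), not anything AMD has confirmed for the APU:

```python
# Peak memory bandwidth = transfer rate (MT/s) x bus width (bytes).
# Spec numbers below are assumptions from public data sheets, not
# confirmed Llano figures.
def bandwidth_gb_s(transfer_rate_mt_s, bus_width_bits):
    """Peak bandwidth in GB/s for a given transfer rate and bus width."""
    return transfer_rate_mt_s * (bus_width_bits / 8) / 1000

discrete_5670 = bandwidth_gb_s(4000, 128)  # dedicated 128-bit GDDR5 @ 4 GT/s
shared_ddr3 = bandwidth_gb_s(1600, 128)    # dual-channel DDR3-1600, shared with CPU

print(f"HD 5670 GDDR5: {discrete_5670:.1f} GB/s")
print(f"Dual-channel DDR3-1600: {shared_ddr3:.1f} GB/s")
```

So the on-die GPU would have well under half the peak bandwidth of the discrete card, and it has to share even that with the CPU cores.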
If you look at the Radeon cores that are being used, several things really stand out. The HD 6370 is a 7-watt core as a discrete GPU, so they probably draw much less as an APU. Clearly this is headed for laptops. These cores also have one other HUGE common denominator.
They were all released in Nov 2010!!! If anybody doesn't believe that AMD Llano will eliminate the mid-price-point mass market for discrete GPUs, then they really need to have a hard look at the facts.
AMD is releasing the Radeon 6990 without quantity restrictions. Nvidia is releasing fewer than 1,000 GeForce 590s. The 590 is a cherry-picked dual-GPU board. It may perform equal to or even outperform the Radeon 6990. But what good is it if you can't buy it? Or is the market for bleeding-edge bragging rights also just not there?
The mass market supports new GPU core development. Without the sales of millions of discrete GPUs for legacy upgrades, the next generation doesn't get designed; or if it does, without the prospect of mass sales volume it becomes a very expensive piece of silicon.
A good example is the ATI FirePro and Nvidia Quadro brands. They simply do not have the mass sales volume to allow for a purchase price below the $2,000-$3,000 point; the demand is simply not there. Product refreshes are also not as frequent as in the mass market, again due to demand.
If AMD is using this year's top discrete GPU design for next year's Fusion APU, then the discrete GPU market is most certainly dead. Will there be a reason to upgrade a one-year-old Llano box with the latest discrete GPU? For what gain other than bragging rights? And what would discrete GPU demand look like going forward?
The real question becomes: is that AMD's plan? And if so, how does Nvidia plan to keep the discrete market open? Does Nvidia license core designs to Intel?
The other question is just what AMD plans to do with Bulldozer. It seems that Bulldozer will be the server, workstation, and high-performance desktop/gamer CPU. This is certainly not a mass-market CPU. As a server part, obviously graphics are not needed beyond a motherboard-integrated GPU. So there will be some demand for discrete GPU boards alongside Bulldozer.
The next question becomes: when does AMD release Bulldozer with an on-die graphics core? Because Bulldozer will be the only market left open for discrete GPUs.
Of course, just how Intel intends to answer AMD will determine the future of Nvidia graphics. Arguably, Intel cannot compete with the AMD/ATI library. Every few months AMD releases new graphics silicon; they are continually evolving that product to meet present market demand. Intel is not a graphics design house. But now they have to be to keep their CPU business competitive. That means they are designing GPUs to penetrate a market that is owned 100% by AMD and Nvidia.
AMD is now designing discrete GPUs with the intention of integrating that design on-die for an APU release ONE YEAR LATER! That has to be an optimized model, and as such, how can Nvidia compete with AMD if they don't have that kind of insight into Intel's future architectures? Nvidia's only market will be on an Intel Inside box.
Right now AMD is directing the future of CPU design. They have the edge over Intel with ownership of arguably the world's best graphics design portfolio and GPU design team. And they have the cost edge over Nvidia, as they simply sell a one-year-old core design on-die to millions of consumers as an APU. For Intel to remain competitive, they are forced into the same model, and this model shuts out Nvidia.
Some things are weird on that chart. Some deductions:
1) A6s will have 3 cores + 1 locked, and 1MB of cache will be locked.
2) E2s will have 1 core + 1 locked, and 1MB of cache will be locked.
3) A 6370 can become a 6410, because the number of SPs is the same, and the only thing that differs is the clock speed.
4) P comes from Power and will be the equivalent of "Black Edition": unlocked multipliers, while the others will not have them.
I didn't like a lot of things about that video. The major one: they listed a 2GHz Sandy Bridge but didn't give the clock of their own chip? One, a 2GHz Sandy is by no means one of Intel's top chips, and two, we don't know if the AMD APU was comparable in speed or price. Don't get me wrong, I'm an AMD fan, but that video left a lot to wonder about. It's easy to spin product videos. I myself will be waiting for an unbiased Tom's Hardware comparison.
God... they're taking so long, but they're all just going to be so freaking awesome. Makes me so happy to hear that AMD isn't rushing everything to get them out and compete with Intel. Take your time and do it right.
[citation][nom]Zeppelingcdm[/nom]I didn't like a lot of things about that video. The major one: they listed a 2GHz Sandy Bridge but didn't give the clock of their own chip? One, a 2GHz Sandy is by no means one of Intel's top chips, and two, we don't know if the AMD APU was comparable in speed or price. Don't get me wrong, I'm an AMD fan, but that video left a lot to wonder about. It's easy to spin product videos. I myself will be waiting for an unbiased Tom's Hardware comparison.[/citation]
The comparison was a mobile Intel chip versus a mobile AMD chip, both with the GPU integrated into the CPU. But how is this surprising at all? Intel has lacked in the graphics department for ages, and now that AMD releases an APU, you're skeptical that it's outperforming an Intel counterpart? Errr... what?
Llano notebooks need to be out during the summer to make good back-to-school items. Since I'm fine with my current i7 laptop (except for the part where it's kinda heavy), what I really care about is Bulldozer. Stop the fluff and give us Bulldozer already, AMD.
I found this "out there". It looks like the same thing:
To me, something doesn't seem right:
Is this for 3D on a laptop? The i7-2630QM sounds like a notebook part.
Are the two graphics implementations even on par with each other?
Are both on-chip or discrete (again, notebook compatibility)?
If you'll be using a discrete graphics card for rendering (either AMD or NV) in a desktop PC, does any of this matter?
[citation][nom]mateau[/nom]Intel is not a graphics design house. But now they have to be to keep their CPU business competitive. That means they are designing GPUs to penetrate a market that is owned 100% by AMD and Nvidia.[/citation]
That... Remember Larrabee?
With ARM gaining market share in the smartphone and tablet arena, Intel has to be smarter and keep an open mind for a change (not forcing x86 like in the old days >_>). They're losing market share because they're just too stubborn to make deals with either AMD (yes, them) or Nvidia for the patents that cover their deficiency. To be fair, though, I don't really know if they even want an ARM killer, because you can't kill it in the low-power market and keep the same performance with CISC; you'd need to be about three process shrinks ahead to be close or a little better (like ARM on 40nm now versus Intel on 22nm, and so on).
Anyway, on the AMD topic here... they're putting a new product on the table that seems to deliver a LOT in the mobile market. Let's see what Intel does here...