AMD CPU speculation... and expert conjecture


Cazalan

Distinguished
Most people just laughed at the Nintendo spec rumor, but they actually could fund that if they wanted to. I find it really doubtful, but if Nintendo were smart they could design a 20nm version of a PS4-style APU with even more powerful graphics and Puma cores. We know Sony/MS won't redesign for 5+ years.

Nintendo would be late but beefier hardware could convert people. As they say, the second mouse gets the cheese.
 

juanrga

Distinguished
BANNED


At the time of writing, the average OC of the A10-7850K is 4.6GHz on air.
 

juanrga

Distinguished
BANNED


I already said some time ago that Kaveri could hit 4.5GHz. Now we have data that confirms another of my predictions. :D

A10-5800K: 3.8GHz --> 4.5GHz [Trinity, first-gen 32nm SOI]
A10-6800K: 4.1GHz --> 4.9GHz [Richland, second-gen 32nm SOI]
A10-7850K: 3.7GHz --> 4.6GHz [Kaveri, first-gen 28nm bulk]

All values are average OCs on air. Kaveri has more OC headroom (24%) than Richland (20%) or Trinity (18%), despite not being fabricated on a mature process.
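A quick sketch of where those percentages come from (just the arithmetic on the average clocks quoted above, nothing more):

```python
# Average air-cooling OC headroom implied by the stock -> OC clocks quoted above.
chips = {
    "A10-5800K (Trinity)":  (3.8, 4.5),
    "A10-6800K (Richland)": (4.1, 4.9),
    "A10-7850K (Kaveri)":   (3.7, 4.6),
}

for name, (stock, oc) in chips.items():
    headroom = oc / stock - 1
    print(f"{name}: {stock} -> {oc} GHz (+{headroom:.0%})")  # +18%, +20%, +24%
```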

The fact that Kaveri hits a 4.6GHz average implies that some lucky users are obtaining higher OCs. Of course Kaveri will not break worldwide records, but it will offer enough OC headroom for most overclockers.
 

Cazalan

Distinguished


The document was talking about a monolithic 16 core Steamroller CPU, not these new models of Piledriver CPUs which have been shipping for over a year. Quite different.
 

juanrga

Distinguished
BANNED


On the contrary, it proves that it reflects real-world performance. It shows a ~6% gap between the 3770K and the 8350, and the average of dozens of tests (C-Ray, x264, NAS, JTR...) puts the 8350 ~10% behind the 3770K.

Of course, if you compare both chips using biased benchmarks optimized only for Intel, or if you use the ancient SuperPi (and without the BIOS patch), or if you use single-threaded software, then the difference will be much larger.
 

juanrga

Distinguished
BANNED


Thanks to you and to noob2222 for proving my point. When I made my predictions about Kaveri, nobody here wrote a word about the accuracy of the benchmarks used. Only now, when the final measurements show excellent agreement with my predictions, are both of you trying this "attack the benchmark" tactic. LOL

I don't know what programs you use. I don't care if you use programs whose code cheats or if you use programs that only utilize a fraction of the hardware potential.

I know what programs I use, and I know that the benchmarks I mentioned reflect performance well. In fact, what I used has been called "the best benchmarking platform".
 

juanrga

Distinguished
BANNED




The recent rumor, which noob insists on spreading and which I debunked before, is the rumor about AMD releasing a 16-core (on-die) Steamroller Opteron.

Both of you are linking to something different: the outdated 6300-series Opterons are 12/16-core parts in a multi-die package (2x6 and 2x8 respectively) and are based on Piledriver technology. As was mentioned in this thread (I did it at least 56 times), the 6300-series Opteron has been replaced by the new Warsaw CPU, which is also based on Piledriver and uses dual packaging.

colinp, this is the moment when you write "Opps".

P.S.: Thanks to cazalan for his additional reply to jed
 

juanrga

Distinguished
BANNED


16 cores for a console doesn't make sense, because it would require special programming techniques to use all the cores. 8 A57 cores @ 2GHz would be overkill.

But does AMD have GCN ready for ARM? A friend of mine says that AMD will release an ARM-GCN SoC this year.
 

Cazalan

Distinguished


It would make sense for the ARM server CPUs they're releasing to have some GCN in them, considering practically every ARM SoC has some minimal graphics capability to reduce BOM cost. Since they didn't tout it as a feature, the core count is probably on the low side though; one or two CUs, like Kabini/Temash.

Otherwise they would have to license the ARM GPU cores as well, which would be rather embarrassing.
 

jdwii

Splendid
"I don't know what programs you use. I don't care if you use programs whose code cheats or if you use programs that only utilize a fraction of the hardware potential."
Real-world programs such as Handbrake, games, etc.
 

jacobian

Honorable


Why? Is it not clear this architecture will trail Intel forever? The current market mantra is to chase the best performance per watt. AMD is losing here pretty badly and there is no fix in sight. Once a year AMD updates the architecture by increasing IPC a little, and then lowers the clock frequency to lower the TDP. But that process is way too slow; it's painful to watch. A bolder and probably much smarter decision would be to put all the Bulldozer-derived architectures on life support and design a new architecture with a view towards replacing them in 3-4 years. AMD sells so little right now, it has little to lose.


 
Nintendo would be late but beefier hardware could convert people. As they say, the second mouse gets the cheese.

Not Nintendo's style nor their market. Nintendo primarily targets the young demographic, 14 and younger: the kids who pester their parents to buy them stuff. Those customers don't care about performance, they care about the experience itself, as in whether the game was fun and entertaining. Hell, look at Pokemon; that series has made Nintendo billions of USD and it's not very advanced graphically. Then you have Zelda and Metroid, again two big series that bring in big money, and neither is very advanced compared to the competition. They would lose by trying too hard to chase the adult gaming segment, as their policies regarding violent content are restrictive compared to Sony and Microsoft. About the only problem Nintendo faced was trying to change the interface too much too fast; the Wii's control interface was unique and good for some games, but I got REALLY tired of using it for everything. The Wii U is just plain bad: too expensive, and it ruins a known formula regarding gamepads. There is a reason Sony hasn't changed their PlayStation controls.

That FPU comparison is kinda good, but it's Anand doing their old tricks: comparing products from vastly different market segments while hand-waving. If someone knows what they're looking at then the numbers are good, but anyone else who tries to read the article will be led to a very wrong conclusion regarding SIMD FPUs and vector processing.
 


o_O?

Kaveri is actually pretty amazing from a performance standpoint. The A8-7600 is gold, especially at 45W for a combined CPU and GPU. The problem is everyone is comparing the A10-7850K, which is obviously just a higher-clocked 7600 with the full iGPU enabled, and it's grossly overpriced currently. I'm hoping AMD eventually creates a 7650 with the full 8 CUs (512 GCN cores).

As for the BD uArch, there is a long story behind that. The key difference between the BD uArch and everything else is the modularity of the chip's design. I'm not talking about the shared components here, but about how cheap it is to create a custom chip based on the BD uArch: everything is plug and play on the design side. People often don't appreciate how insanely high the man-hour requirement is to design a high-performance IC. The sheer size and complexity mean you go over every trace and connection by hand to get optimum performance out of the chip, and any alteration to the chip means you have to go back to the drawing board and redo lots of work. Intel has the budget and R&D size to just throw money at the problem and hire tons of engineers to brute-force it. AMD lacks that budget and R&D size, so they needed to find a way to design chips on the cheap. The modularity of the design means it's cheap(er) for them to design the chip, but it comes with its own set of drawbacks, chiefly that you don't have a ton of engineers crawling over every nm of your design optimizing it. BTW, that's why we keep seeing 10~15% efficiency increases each generation: they keep finding things to fix and optimize.

Essentially AMD would never have won the design contracts with Sony and Microsoft if they hadn't used BD. They had a better offering on product performance (absolute performance + energy usage + space usage) than the competitors, due to how cheap it was for them to make custom CPUs.

The market itself is actually splitting into distinct refined segments. Desktop has become saturated and people only buy new ones to replace old ones, with gaming being the prime differentiator. Growth is currently seen in appliances and mobile, but mobile itself is a bunch of different categories, with smartphones being different from tablets and notebooks. There is money to be made in all of them; you just have to address them individually.
 

jacobian

Honorable


Kaveri's performance is not amazing, but it will sell if the price is right. Looking at the benchmarks, the A8 and A10 are roughly equivalent to Core i3s of various generations, setting game performance aside. I am not convinced about Kaveri as a good gaming APU unless we're talking about very casual gaming. It's hard to imagine that someone spending $50-100 on new game titles will be satisfied running the game at minimal settings and low FPS. Another important observation is how little you gain from the additional GCN cores on the Kaveri A10 compared to the A8. It could be that memory bandwidth is the bottleneck. To make these APUs big players in gaming, AMD needs to solve the memory bandwidth issue and do it cheaply. If you have to spend a lot of money on a custom motherboard with a memory architecture optimized for the APU, that may erase the savings you get from not buying a dedicated GPU.
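For what it's worth, a back-of-envelope sketch of the bandwidth gap behind that bottleneck suspicion (the dual-channel DDR3-2133 figure is straightforward; the GDDR5 number assumes a generic 128-bit card at 4.5 GT/s, not any specific product):

```python
# Peak theoretical memory bandwidth: GB/s = MT/s * bus-width(bytes) * channels / 1000.
def peak_bandwidth_gbs(transfers_mts, bus_bits, channels=1):
    return transfers_mts * (bus_bits / 8) * channels / 1000

apu_mem  = peak_bandwidth_gbs(2133, 64, channels=2)  # dual-channel DDR3-2133
dgpu_mem = peak_bandwidth_gbs(4500, 128)             # assumed generic 128-bit GDDR5 card

print(f"APU, dual-channel DDR3-2133:       ~{apu_mem:.0f} GB/s")   # ~34 GB/s
print(f"128-bit GDDR5 dGPU (assumed spec): ~{dgpu_mem:.0f} GB/s")  # ~72 GB/s
```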

But besides gaming, AMD needs to do something to catch up with Intel in the notebook market. The 35-watt Richland A10 is the flagship notebook APU, and it is still only roughly comparable to an Intel Core i3, apart from having better graphics and worse real-life power efficiency. Core i3 and AMD are what I see in $400 Best Buy special laptops, so there is very little profit there. Anything costing $600+ has at least a Core i5 or a ULV chip, or both.



 


Why in the world would you use an APU for a desktop product? Seriously, why are people even thinking about doing that? No wonder so many are butthurt; you're complaining that a Honda motorbike is bad because it can't carry a trailer and fit a small family.

i3s are horrible in actual real-world scenarios; a low-end i5 is the minimum anyone should use. I never recommend them. i3s make for great twink benchmarks: turn off everything else and you can get high single/dual-threaded numbers. Of course, when the consumer actually uses one, with all the services and security software running along with their usual web browser, etc., well, they get much less than they expected.

Anyhow, APUs are for the 90%+ of the population who use them for what they're made for, not the enthusiasts like us who build "gaming rigs". Also, your comment on "custom motherboard blah blah" is factually incorrect. There is nothing expensive or special about the motherboard or the memory. DDR3-2133 memory is cheap now, and that's all you need, as DDR3-2400 provides very little performance gain over it. It's actually at the same price point as DDR3-1600 and DDR3-1866, so you're not spending any extra on memory.

Your statement about people playing at "low~medium" settings is also horribly wrong; actually, your entire post is factually incorrect. You are part of what's known as the vocal minority: you're loud but not representative of the market. Listening to vocal minorities is how companies crash themselves. The market spends very little on desktops, only replacing them as the old ones break. They don't spend much and are perfectly fine with a system with low-to-medium graphical power. This segment represents the bulk of OEM sales. High-end desktops are comparatively rare but also gross higher margins than the value segments. There is a reason why consoles represent the vast majority of the gaming market: they aren't high performance, and the games they run operate at low details at 720p internal resolution.

In short, my statement stands.

Kaveri is actually pretty amazing from a performance standpoint. The A8-7600 is gold, especially at 45W for a combined CPU and GPU.

The whole-package performance of the 45W (or 65W) A8-7600 is amazing, especially for its cost of $120 USD. Pair it with 8GB of DDR3-2133 on a Mini-ITX board and you've got yourself a very powerful platform that takes up almost no space or power.
 

UnrelatedTopic

Honorable


I thought the PS4/XBone used cat cores? Or did they?
 

noob2222

Distinguished
@juan

If the scores at "stock" can vary by 800+ points, how reliable is it as a "real world" benchmark? Also consider that the FX-9590 is just above the 4770K; why would AMD abandon it in favor of a PassMark score at 50%?

If you want to prove me wrong on cosmology, then prove it. You claiming I'm wrong just by saying so doesn't prove anything. It only proves you have no clue and still want to say "I'm never wrong because my name is Juan."

Post some of your actual Kaveri benchmarks. Or are you here bragging about the greatest thing AMD ever made while not buying it yourself, with nothing but Google to back you up?

P.S. You need to adjust your fake "Trinity overclocks only to 4.5GHz" figures.

http://www.pureoverclock.com/Review-detail/amd-trinity-a10-5800k-review/9/
 

jacobian

Honorable


Now that the benchmarks show that the Kaveri A10 APU can't even decidedly best the Richland APU in non-gaming apps and benchmarks, much less Intel's entry-level Core i3, you have to resort to making the B.S. claim that the benchmarks are wrong and that the Core i3 is actually horrible in real-world applications. That's a far-fetched statement, isn't it? Tell that to the millions of notebook users with a Core i5 or low-power Core i7. Do you even know that pretty much every _mobile_ Core i5 and Core i7 (except a few quad-core MQ i7 models) is also dual-core with Hyper-Threading, just like the desktop i3? I don't have a desktop Core i3, but I have been using mobile Core i3 and Core i5 chips, and I do not find your statements true.






I am not talking about the expense of DDR3-2133 memory. I am talking about the expense of using GDDR5 memory or some other completely new memory architecture in the future, which will be necessary to improve APU performance. There was already talk that AMD was considering some kind of GDDR5 solution, but it was shelved. Tom's Hardware's review uses DDR3-2133 memory. Going from the A8 to the A10 buys you about 10% more FPS, even though both the clock speed and the TDP are increased and the number of GCN cores is increased by a third. And then there is the thought: what kind of FPS would you get with a Radeon 7730, which has the same 384 stream processors? It seems like DDR3-2133 memory still puts the brakes on the graphics core, and that's the problem AMD needs to address.
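To put rough numbers on that scaling observation (illustrative arithmetic only, using the shader counts and the ~10% figure quoted above):

```python
# How well the extra shaders translate into the quoted FPS gain (figures from the post above).
a8_shaders, a10_shaders = 384, 512   # A8-7600 vs A10-7850K GCN stream processors
observed_fps_gain = 0.10             # "~10% FPS" quoted above

shader_increase = a10_shaders / a8_shaders - 1       # +33% more shader resources
scaling_efficiency = observed_fps_gain / shader_increase

print(f"Shader increase:    +{shader_increase:.0%}")
print(f"Observed FPS gain:  +{observed_fps_gain:.0%}")
print(f"Scaling efficiency: {scaling_efficiency:.0%} (well short of linear, consistent with a bandwidth limit)")
```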



In short, my entire post wasn't incorrect as demonstrated above.


 

It's Jaguar, which uses the same modular technology that AMD used to create BD. It's modular in the sense that the components inside your PC are modular: you don't need to redesign your PC to add more memory, a different CPU, another HDD, or a different GPU. Everything is plug and play, which has made building and shipping PCs much cheaper than 30 years ago, when everything was custom (Amiga) or proprietary (Apple). For BD it means the L2 cache, MMU, integer cores, FPU, decoder, and scheduler can all be freely modified or altered. Jaguar is a four-core CPU, but AMD made special versions for the XBONE / PS4 that have additional integer cores on the die. AMD used the same stencils to create BD. The plus is that you can reuse each stencil on multiple chips without extensive redesign of the component; the downside is that you lose performance from the stencil not being tweaked and optimized for that chip.

Think LEGO, but with CPU innards.
 

You can choose to believe whatever you want to believe. The 65W A8-7600 has the same CPU performance as the 100W A10-6800K while having significantly better graphics performance. At 45W it's slightly less of both, but it does its job while requiring half the power. The 6800K is $140 USD while the 7600 should be $120 USD. Both of these provide greater graphics capability than any i3 while using less power and space than any i3 + dGPU combo. I also stated that the i3 is insufficient for modern desktop work and users should be using FX-6 or i5 CPUs (with the FX-4 barely usable in select cases); nowhere did I mention the APU being sufficient for desktop usage, as it's not. APUs are for low-power / SFF designs: living-room HTPCs, miniature gaming boxes, or kiosk-style set-top boxes, places where there isn't enough room, power, or cooling for anything bigger.

http://www.mini-box.com/M350-universal-mini-itx-enclosure

Something like that is what you put an A8-7600 inside. Install a Mini-ITX motherboard, 8GB of DDR3-2133 memory, and an 80~120W picoPSU + power brick. Slap in a 2.5-inch HDD (or SSD) and you're done.

http://www.mini-box.com/s.nl/it.A/id.417/.f

If your design allows you to go a little bigger then you can use this.

http://www.amazon.com/In-Win-200-Watt-Mini-ITX-Black-BP655-200BL/dp/B005ITOAL8

A bit more space, but you're still too power- and space-limited for a dGPU.
 

con635

Honorable

Yes, it assumes real-world software is going to use the full power of the chip, and most of the time it doesn't. I feel like a parrot, but I'll say it again: if you download the program you can see the exact spec of the system (even which drivers were used) that got 800 points more at "stock", and use that to claw your 800 points back with correct setup. If you look at laptop benches, e.g. my netbook, all scores are within a few points or identical with the same drivers etc. I doubled my 3D score and OC'd my system to "sensible" speeds with help from PassMark, and saw a real-world difference as well; it's worth the money IMO.

 

juanrga

Distinguished
BANNED


Agreed that GCN would make sense for servers, but both Seattle and its successor Cambridge are CPU-only SoCs. This is why I believe that GCN for ARM is still not ready. However, a friend of mine disagrees and claims that we will see ARM APUs this year.
 