AMD CPU speculation... and expert conjecture

Status
Not open for further replies.


But you also get a significantly faster CPU, nevermind the iGPU. Throw that into the equation, and the price could be considered well worth it.
 

8350rocks

Distinguished


Not when other significantly faster CPUs without iGPUs (if we're throwing that out anyway) are $130 less than that one as well. I cannot justify spending more than $180-190 on a CPU at this point in time with few exceptions.

Those being:
1.) Professional rendering companies (I would skip i7-3770k btw and go straight to something like i7-3930k if I was going to spend the money...though I likely wouldn't personally)
2.) A situation where I would need a heavy duty, dedicated server to do the job, at which point I would look at an Opteron solution of some sort.

Considering all of that, number one can have an argument made for it, but it boils down to preference...I would prefer not to spend more money in that case, but I could see the argument for it.

The i7-3770k is not $200 better than the A10-5800k, and it's most certainly not $130 better than the FX8350. Though the FX8350 is $70 better than the A10-5800k.

Really, the i7-3770k isn't $80 better than the i5-3570k (with a couple exceptions for specific dedicated purposes).

So, honestly, I cannot see that $200 being well worth it...at all. Unless...you just want to spend $300 for a $180 equivalent CPU...in which case I would say...go ahead.

Though I could make an argument about how many Davidoff cigars that extra $130 would buy as well...or how much more GPU you could get with an extra $130 to spend...etc.
 

juanrga

Distinguished
BANNED


The Windows 7 FX patch schedules some threads on the same module to improve performance.
 


CPU-wise, I don't agree. Assuming you already have a dedicated GPU, try running Crysis 3 maxed on an A10. Now try doing it with a 3770k.

That performance increase has people in the GPU forum spending ~$400 per upgrade cycle.
 

8350rocks

Distinguished


Crysis 3 can be played on mediumish settings on the A10-5800k with dual channel RAM at 1866/2133 MHz. The framerates are into the 40s with those settings @ 720p.

The HD 7970 will run it maxed out with full AA; what really confounds the lesser cards is all the heavy AA.
 

viridiancrystal

Distinguished

Two Bulldozer cores in a module are ~80% of two Bulldozer cores not in a module, or at least that is what was speculated to be the case.
An Intel core with hyperthreading is ~120% of a single Intel core.
Hyperthreading is a strange concept, and I still do not fully understand how it works, but I know it best as this: the "hyperthreaded" thread uses parts of the core that are not otherwise in use. If a task was using 100% of the core's integer resources but only 50% of its FPU resources, the "hyperthreaded" thread can use the other 50% of the FPU that would otherwise sit idle while the core is busy with the integer calculations.
I still do not fully understand it, but I know that there is no physical change to the die.
In theory, with proper scheduling, neither CMT nor HTT should affect the single-threaded speed of a core.
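The spare-FPU description above can be sketched as a toy model. This is a deliberate oversimplification, not how a real pipeline works; the unit counts and op mixes are made up purely for illustration:

```python
# Toy model of why SMT helps: assume a core that can issue one integer
# op and one FP op per cycle, thread A is pure integer work, and
# thread B is pure FP work.

def cycles_sequential(a_int_ops, b_fp_ops):
    """Without SMT: run thread A, then thread B. Each cycle the unused
    unit (FPU during A, integer during B) sits idle."""
    return a_int_ops + b_fp_ops

def cycles_smt(a_int_ops, b_fp_ops):
    """With SMT: A issues to the integer unit while B issues to the FPU
    in the same cycle, so the threads overlap until one runs out of work."""
    return max(a_int_ops, b_fp_ops)

seq = cycles_sequential(100, 50)   # 150 cycles, half the units idle
smt = cycles_smt(100, 50)          # 100 cycles, units shared
print(f"speedup from SMT: {seq / smt:.2f}x")  # 1.50x
```

When both threads want the same unit, the benefit shrinks toward zero, which is roughly why real HTT gains land nearer ~20% than the 50% this idealized case shows.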
 

8350rocks

Distinguished


Well, 2 Bulldozer cores in a module are the equivalent of 80% of 2 Intel cores, or if you prefer, 80% of 2 Thuban cores from the last-gen Phenom II series.

So in terms of effective core strength, 4 Intel cores with HTT ~4.8 cores, while 8 AMD cores ~6.4 cores.

Now, HTT can have a negative impact depending on how the software utilizes it. The AMD architecture cannot have a negative impact; it can simply be improperly utilized, which means less efficiency. With AMD you wouldn't really see a difference in performance by disabling cores (no tests of this I have seen yet show increases), whereas with Intel you sometimes see a difference from disabling HTT.
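The "effective cores" arithmetic above works out like this (the 1.2x and 0.8x scaling factors are this thread's rough estimates, not measured figures):

```python
# Back-of-envelope "effective cores" math. The per-core scaling
# factors (1.2 for HTT, 0.8 for CMT) are the thread's own estimates,
# not benchmark results.

def effective_cores(physical_cores, scaling_per_core):
    """Scale a physical core count by an estimated per-core efficiency."""
    return physical_cores * scaling_per_core

intel_i7 = effective_cores(4, 1.2)  # 4 cores + HTT -> ~4.8 effective cores
amd_fx   = effective_cores(8, 0.8)  # 8 CMT cores   -> ~6.4 effective cores

print(f"i7 with HTT: ~{intel_i7} effective cores")
print(f"FX-8350:     ~{amd_fx} effective cores")
```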
 
The 3770K reminds me of the 486DX versus the 486SX (the 3570K of way back): the same chip but with an FPU that worked, with HTT being the "new FPU". HTT is a strange concept and somehow manages to lower scores/FPS on unoptimized software, perhaps by hogging resources that are about to be used, but that is the most grossly simplistic way you could put it. One AMD module is ~70-80% of 2 Phenom II cores, due to cut-down features (as pointed out above), thus lowering the single-core/threaded performance. The module system was too far ahead of its time in a sense; almost nothing can make use of it, but that is how many technologies start. I mean, no gamer would have gotten a Phenom 1 or Core 2 Quad in 2007 unless they were that hardcore, yet today (at Q8400/OC'd Q6600 performance or above) they hold their own in today's quad-core gaming festival.
 

Cazalan

Distinguished


Hyperthreading is a clever hardware trick to optimize CPU resources with lightly threaded processes. It costs Intel practically nothing and yet there's a lot of inefficient code out there that can benefit from it.

From Intel's perspective it's genius. They get to charge more money for relatively few transistors.

Having used both i5 and i7, you can definitely tell a difference in the responsiveness of the operating system, in Windows 7 anyway. It's hard to put a price tag on what can be a minor annoyance to some, what I'd call "lag". Other people might not even notice.

I built an i5 system last year to save some cost. After using it for 8+ months, I probably would spend the extra for the i7 version if I were stuck with only one computer. As it is, I just rdesktop into the old PC for the things it does better.
 

Cazalan

Distinguished


That could be taken as a rough approximation of the execution resources, but it's only part of the picture. Keeping those cores fed is the issue.

Phenom II dual-core can decode 6 instructions per cycle
Bulldozer dual-core can decode 4 instructions per cycle
Sandy Bridge dual-core can decode 8 instructions per cycle

We've known this for some time; it's why Bulldozer can be really good at some workloads (high cache-hit rates) and not so hot at others.

Now Steamroller should bump that to 8 instructions per cycle, which is greatly needed. Where the new bottleneck will be is a guessing game; we'll just have to wait until more leaks come out.
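The decode widths listed above can be turned into a quick peak-throughput comparison. This is a sketch of the front-end ceiling only (real throughput depends on caches, branches, and execution resources), and the Steamroller figure is the speculated one from this thread:

```python
# Peak instructions decoded per cycle for a dual-core pair, per the
# figures quoted in the thread. Steamroller's 8-wide figure is
# speculation, not a confirmed spec.

decode_width = {
    "Phenom II":    6,  # 3-wide decoder per core x 2 cores
    "Bulldozer":    4,  # one shared 4-wide decoder per module
    "Sandy Bridge": 8,  # 4-wide decoder per core x 2 cores
    "Steamroller":  8,  # rumored: dedicated 4-wide decoder per core
}

cycles = 1_000
for name, width in decode_width.items():
    # The front end can never feed the cores more than width * cycles
    # instructions, no matter how strong the execution units are.
    print(f"{name:12s} peak {width * cycles:5d} instructions / {cycles} cycles")
```

This is why the shared decoder is often blamed for Bulldozer's weak per-module throughput: two Phenom II cores could decode 50% more per cycle than a whole Bulldozer module.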
 

8350rocks

Distinguished


Yes, and Steamroller will also improve internal latency and reduce mispredicted branches. Honestly, I think they're going to have this close to right by the time Steamroller hits. I normally do not upgrade every cycle, but if SR turns out to be even 80% of the hype...I will probably drop this 8350 into my wife's PC and buy a new 8-core SR...ha!
 

Let's hope that moar coars logic survives. Perhaps a 16-core 289W TDP FX 9550 :D Chances are the SR FX8550 8-core will have a 95W TDP.

 

8350rocks

Distinguished


SR version of Centurion, anyone?

On a more serious note though, they could add HTT to the architecture...I doubt they would, because the real-world gains would be virtually nothing. They would literally be better off going to some monster 12/16-core design as a high-end IB-E competitor. Wouldn't that be crazy though? FX12550...?
 
The HD4000 isn't even comparable to a 7660D; seriously, Intel is still years behind the established GPU makers. ATI (and Nvidia) have been making accelerated graphics processors for nearly two decades, and Intel has a lot of catching up to do before they're on the same footing. What the Intel iGPUs do provide is a pretty decent framebuffer for cheap computers. From a usability point of view, you're not buying an i7 and only using the HD4K anyway; it's mostly there as feature padding, whereas the 7660D on the A10 is functional in the market segment that would be buying A10s. Now if we saw the HD4K on an i3, then they might have something competitive.
 

Ahh, FX8XXX Centurion, IT BETTER HAVE 12MB of L2 Cache and NO L3! On a serious note, a 5GHz Vishera 8-core seems profitable, but only if they up the cache to something like 24MB L2 and 128MB L3 (an over-estimate) to compete with the monsters that are the 3930K and 3960/70X. This is the "rendering" market AMD was lacking in, since the Opterons are more for servers and the 83XX is (at most) comparable to the 3770K, a decent choice for rendering in a reasonable amount of time.
 

lilcinw

Distinguished


Can either one output to two or more screens? I see iGPUs as a value-add for gray OEM office boxes and about the only thing I have upgraded for in my office in the past 4 years is the ability to drive multiple monitors.

My regional manager was amazed at how much more productive he could be with dual screens and forced some others in the office to upgrade (under protest) soon after. :D
 

jdwii

Splendid


Knew I should've stayed in bed. Gamer is making AMD sound better this time.
 

jdwii

Splendid


Almost made me spit at my screen until you finished up. I like my Phenom with 93%-97% scaling.
Edit, since Tom's new scheme sucks:

But with the whole AMD vs Intel iGPU thing, Intel had me going until I saw the benches; never will I trust them again. Why, and I mean why, does Intel have to influence results? They have no real competition and lots of money; it's just sad. When will these tech companies learn to stop saying "up to" and other dumb statements? All it does is make me more mad and not want their product when it turns out to be fake.
 

Cazalan

Distinguished


There's a 15-core Ivy Bridge on the way with HT this year; is that enough cores? ;)

 

Cazalan

Distinguished


AMD has 12/16-core chips called Opterons. They're two FX dies in the same package.

http://www.cpu-world.com/CPUs/Bulldozer/AMD-Opteron%206380%20-%20OS6380WKTGGHK.html

At $1088 it's not bad. Same price as an i7-3960X Extreme (hex-core).
 


Both can drive two screens, though it really depends on the board maker. Technically you can get more, but it would require a board that supported three DP connections. APUs are really aimed at low-end entry-level stuff; they practically scream mini-ITX implementations.

The i3-3225 is an interesting product. Intel's next iteration might be worthwhile, especially if they can spin their drivers better.
 


8350rocks

Distinguished


Yeah, those Opterons are nasty...they even have a 16 core variant...The raw computing power is just crazy.
 