AMD CPU speculation... and expert conjecture

Page 153
Status
Not open for further replies.

kettu

Distinguished
May 28, 2009
243
0
18,710
I haven't read anything about that, but my guess is that they'll still run one thread per FPU. The dedicated FPU is widened from 128-bit to 256-bit, though. Since SR is an evolution of BD/PD, the FPUs can probably work together on a single thread as a 512-bit unit. Or perhaps the allocation could go 3:1 if the other thread needed one 128-bit FPU.
 

anxiousinfusion

Distinguished
Jul 1, 2011
1,035
0
19,360


I feel like that guy is everywhere. He has accounts on other sites where he does nothing but bash AMD, as though it were a full-time job. I'm actually impressed by his dedication to the cause.
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860


Entertaining at best. Gotta love the comments and the Intel fanboyism found there. That's just it though: even if AMD manages to win 80% of the benchmarks, they lose because of the other 20%.

And where did they come up with FX-9650 ... the Phenom (I) has the 9650.
 


His stupidity is commendable, if that is possible.

 

8350rocks

Distinguished
This guy is an uneducated, uninformed twit... or he's really... really... young. I'm having a hard time figuring out which. However, he has become quite the troll in a few of the threads I've been keeping up with... and it gets annoying after a while. His nonsensical posts are mind-blowingly bad, with references to technical terms he misuses because he doesn't understand them. Have you read any of his posts?

http://www.tomshardware.com/community/profile-1281705.htm
 

montosaurous

Honorable
Aug 21, 2012
1,055
0
11,360


This article seems a little off, especially with the numbering scheme and price estimates for the new chips.
 


As the SW guy here, I'm going to repeat myself: games do not parallelize well. The stuff we can parallelize, we already moved to the GPU (rendering + physics). That doesn't leave much to parallelize. That's why you still see 2 threads doing the majority of the heavy workload, one or two more doing some background work, and the rest doing trivial amounts of processing.

Now, you could see more core usage due to games DOING MORE WORK due to having more resources, but games themselves aren't going to be magically more parallel. What I personally expect to see is a greater focus on physics (helped by the fact consoles won't be nearly as crippled), leading to higher overall CPU usage. But not because anything is inherently more parallel.
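The diminishing-returns point above can be sketched with Amdahl's law. The 50% parallel fraction below is purely an illustrative assumption, not a measured figure for any real game:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only part of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# If only half of a game's CPU work parallelizes, extra cores stop helping fast,
# and the speedup can never exceed 2x no matter how many cores you throw at it:
for cores in (2, 4, 8, 16):
    print(f"{cores} cores -> {amdahl_speedup(0.5, cores):.2f}x")
```

That serial fraction is exactly the "2 threads doing the heavy workload" situation: no core count fixes it.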

IMO the GPU in the PS4 is more like a 7855 or 7860 in terms of configuration. I could be wrong, since AMD's model numbering is not always coherent.

Something close to that class, yeah. Basically, a 7850 with slightly higher clocks.

Programmers can program at a very low level on consoles, whereas on PCs, Windows is the main barrier. I am hearing rumors that Windows might be an obstacle for HSA unless, IMO, AMD and MS work things out beforehand. A functional HSA implementation will demand mutual cooperation from both.

Coding to the metal GREATLY improves performance; going through a high-level API is great for compatibility, crap for performance. You could probably get a 2/3 speedup if you removed the OS and coded directly to the hardware.

As far as HSA goes, it *should* be invisible to the OS. At the end of the day, it's the same exact data; it's just a matter of routing the data to the proper HW devices. There isn't some magical coding that's going to be involved to make HSA work; it should all be OS/driver side.
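As a rough illustration of the API-overhead point (a toy sketch, not a model of any real driver stack): the same work done directly versus behind a few wrapper layers, timed with Python's stdlib. The layer names are made up; the point is only that each indirection layer adds fixed cost per call:

```python
import timeit

def raw(x):
    # "to the metal": do the work directly
    return x * x

# "high-level API": the same work, buried under wrapper layers
def layer3(x): return raw(x)
def layer2(x): return layer3(x)
def api(x):    return layer2(x)

t_raw = timeit.timeit("raw(1000)", globals=globals(), number=100_000)
t_api = timeit.timeit("api(1000)", globals=globals(), number=100_000)
print(f"direct: {t_raw:.4f}s  layered: {t_api:.4f}s")
```

Both paths compute the same result; the layered one just pays call overhead on every invocation, which is the console-vs-PC argument in miniature.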



No, different OSes, drivers, and APIs. In the case of the XB1, given how it's running a modified Windows kernel, a port should be trivial. MSFT is heading in the direction of running any piece of software on any of its OSes, though. Porting from the PS4 is slightly more difficult (different OS/API), but should still be a heck of a lot simpler than it currently is.
 
I don't want to go back into the whole "parallelism programming paradigm" here, but whether the problems will be easier or harder to parallelize (is that even a word? haha) is up to how you wanna solve them.

In the specific case of games, there are so many ways to solve the same problem that you could have a good level of parallelism in an engine and still have the OS problem at hand. At the end of the day, most graphics engines are parallel in nature (except the OGL/DX calls, AFAIK) and are at the mercy of the OS scheduler. CPUs can do little to nothing there, even more so if you take compilers into the equation x_X.

The whole "the way it's meant to be parallel" (SEE WHAT I DID THERE? lol) escapes the CPU conversation IMO and should be left alone for a while. HSA will push in that area, but it's so far ahead (IIRC) that we won't see anything until the first PS4 and XB1 (Xboner as I read somewhere, haha) games are released and use some of the doctrines from the HSA group (if they do at all).

I just want to know about AMD's plans with AM3+ and SR. I'd love to change the platform yet again, but the i7 has been good so far. Want to know how they'll compare. We need MOAR LEEKS!

Cheers! :p
 

griptwister

Distinguished
Oct 7, 2012
1,437
0
19,460


Hahahaha! Agreed!

And AMD just announced they are going to be doing some announcing at Computex May 5th (pretty sure they meant June 5th). Hope they announce something Steamroller related!!! AMD is gonna slay this year if these benchmarks are somewhat on par.



Haha! The way it's meant to be parallel. Again, AMD said they'll be announcing some stuff. *fingers crossed*
 
Honestly, Gamer, I once said that in order for software to be parallel, the whole problem needs to be rethought and redefined in such a way that it's parallel by nature. Current code and methods cannot be made much more parallel; you are 100% correct in that. That does not preclude more parallel code from being created, and you even admit this in a roundabout way. The only thing about HSA is that the OS needs to provide some mechanism for the code to be scheduled onto a target. HSA is not x86 and won't be treated like it; the OS needs to be aware of the new opcodes and how to schedule them. This is similar to how the SIMD FPU is a completely separate co-processor from the OS's point of view, register stack and all.
 

jdwii

Splendid
If AMD thought HSA and multicore were the only things that mattered, we would have 16-core FX desktop CPUs at a clock rate of 1.6GHz. The truth is Steamroller is rumored to have a massive increase in performance per clock. If those rumored benchmarks are true, and I highly doubt they are (just because of the way it's benched and the names of the CPUs), I would be amazed and would love to buy their product. What I find funny is the comments from that article: even if AMD did improve that much, people are still going to hate?????? Really? That would be so massive, and with Intel lately barely improving performance, this would be a massive win for AMD. Not to mention they're not done with Steamroller, and all current AM3+ owners will be able to upgrade to it.
 

montosaurous

Honorable
Aug 21, 2012
1,055
0
11,360


Only thing I'm concerned about is that my board probably won't get a BIOS update for it. Other than that, I could wait.
 

mayankleoboy1

Distinguished
Aug 11, 2010
2,497
0
19,810


This.
Also, people usually assume parallelism == speed. This may not necessarily be true, because of mutexes and the other syncing mechanisms required in actual code.
A program could use more and more cores, but to make all the threads play well with each other, the synchronization needed is enough to give poor scaling. As said, in games the rendering and physics are the truly parallel workloads, and those should be done on the GPU anyway.
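A minimal sketch of that syncing point (a hypothetical workload, nothing from a real game): four threads sharing one counter behind a lock. The answer comes out correct, but every increment serializes on the lock, so the extra threads buy almost nothing:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:       # every thread serializes here, one at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # correct total, but the lock made the loop effectively serial
```

Drop the lock and it runs "more parallel" but the total becomes wrong; that trade-off is exactly why more cores don't automatically mean more speed.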

Though the newer game engines like UE4 and others should be more parallelized, because they were written from scratch. But then again, since they were started 4-5 years back, tech has changed much since then.

 

amdfangirl

Expert
Ambassador


Definitely a very good chip, given that "good enough" chips dominate now. After all, the Samsung ARM Chromebook is #1 on Amazon's best sellers. (One could be angry that it's running Chrome OS, but nevertheless.)



I'll keep an eye on him.



Ugh, tell me about it. I'm not even certain that my motherboard supports Piledriver. The Gigabyte techs said that BIOS F3 supported Piledriver, despite it predating Piledriver's launch by a year.

That, and BIOS F4e has been an experimental beta for a year now.
 

So far I've found the following info on the consoles.
Both consoles have 8-core CPUs based on Jaguar cores, strongly rumored to be arranged in 2 modules, each containing 4 cores (with individual L2 cache). Now the vague parts about the CPU: nothing further about the configurations. I assumed that the CPUs are connected through a customized UNB (possibly a 'crossbar', I dunno what that is) before the memory controllers. I assumed the CPUs couldn't have individual memory controllers for each module, since that could give rise to, maybe... syncing issues or lockups/downs. There is conflicting info on the process node used, from 40nm to 28nm, by GloFo and/or TSMC. The clock rate being mentioned varies from 1.6-2.0 GHz. AFAIK, 40nm at 1.6 GHz is slower than 28nm at 1.6 GHz (assuming linear clock-rate improvement with nodes).
The Xbone will run 3 OSes: one host OS and two virtualized ones, I assume. How much resource would that arrangement consume? According to my newb math, 1-3 CPU cores and around 2-4 GB of RAM (especially the Windows part :p). I dunno about the PS4's.
I read that developers often use the weaker console as the lowest common denominator. Will they use the Xbone as the baseline for console performance this time, or the PS4? IMO the PS4 is a console gamer's console. It can be turned into a home entertainment device through software, but the Xbone(r)'s weaker hardware cannot be brought up to the PS4's raw performance level.
If, by any chance, MS decided to use one of the Xbone(r)'s CPU modules for the OS and maintenance and the other one for gaming, will developers be programming for 4-6 CPU cores in the future? I dunno if MS put in an additional chip to run the OSes and give the whole CPU to games.


I assumed that since AMD's BD (uarch) CPU modules were supposed to appear as 2 cores to the OS (and they did), yet problems occurred with workload allocation(...?), Windows could have similar issues with HSA-capable APUs.


Kyoto Becomes the AMD Opteron X-Series
AMD Unleashes Jaguar onto the Micro-server Market
http://semiaccurate.com/2013/05/28/kyoto-becomes-the-amd-opteron-x-series/
http://www.amd.com/us/products/server/processors/2100seriesplatform/Pages/x2150seriesprocessors.aspx
NICE. (extra kudos for poking fun at intel's promo slides.)

AMD finally puts D*ck Port on a device
http://semiaccurate.com/2013/05/24/amd-finally-puts-dock-port-on-a-device/
sorry, couldn't help it...

 
Rumors so far:
The memory controller is unified for both consoles; both are 256-bit, but the PS4 will use GDDR5 while the One will use DDR3. The cores communicate through the L2, but the modules go through main memory. The Xbox will use 3GB of RAM, 2 CPU cores, and 1 CU of GPU to run background tasks such as apps. The PS4 will use 1GB of RAM for the OS, as well as 2 CPU cores.

The One will likely be the lead console for all Western multiplatform games. There will never be an OS chip just to run the OS. That's a stupid and inefficient idea.
 
Y'know, the new consoles have started to seem like cheap (dual-socket...?) HPC (micro...?) servers with dual processors (not to be confused with multithreading) and a large amount of GPGPU processing at their disposal. [strike]Only a matter of time before[/strike] Maybe some strange rumor will surface of crazy people buying 10k PS4s to build a supercomputer, and then Japan and the U.S. will temporarily suspend console sales out of fear. :whistle: :p :ange: :pt1cable: (Emoticons are used to pacify people who scare easily.)


I think the unified memory is one of the best things about the consoles, and something I wish to see on an everyday x86 PC. I assumed that since the memory subsystem communicates with the CPUs through the UNB, it shouldn't be an issue. I was wondering how many resources the non-gaming programs like the OS, social media and maintenance will consume, and how much will be left for the games. Although I totally forgot about how many graphics resources they'd use, haha.

What was that rumor about the PS4 using an ARM chip? Was that for 'connected standby' tasks only?
 
An ARM core would probably be integrated into the SoC for security, like in the current AMD APUs. Maybe a standby-mode chip could work too. Using a chip specifically for the OS, while the OS has to interact with applications, would make memory management very hard and keeping the applications secure almost impossible.
 

ah, got it.

Intel's Silvermont-core-related ones: right now, the Silvermont CPU core exists only on promo slides (like AMD's Steamroller), while AMD's Jaguar-based Opteron X SoC already exists.

Samsung talks about their 14nm FinFET process
14nm BEOL plus 20nm FEOL equals what again?
http://semiaccurate.com/2013/05/28/samsung-talks-about-their-14nm-finfet-process/

AMD bridges road to ARM with new low-power x86 server chips
http://www.pcworld.com/article/2040045/amd-bridges-road-to-arm-with-new-lowpower-x86-server-chips.html
Feldman candidly acknowledged that the Bulldozer failure cost AMD some credibility in servers. But the company is looking to rebound with a revamped management team led by CEO Rory Read and a new server roadmap comprising x86 and ARM chips for multiple server categories.

“Bulldozer was without doubt an unmitigated failure. We know it,” Feldman said.

“It cost the CEO his job, it cost most of the management team its job, it cost the vice president of engineering his job. You have a new team. We are crystal clear that that sort of failure is unacceptable,” Feldman said.
Well, if he says so. :lol: IMO the new team is taking the right steps.
 

abitoms

Distinguished
Apr 15, 2010
81
0
18,630


ah, got it
;-)
 