AMD Piledriver rumours ... and expert conjecture

We have had several requests for a sticky on AMD's yet to be released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post a question relevant to the topic, or information about the topic, or it will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame baiting comments about the blue, red and green team and they will be deleted.

Enjoy ...
 
6470M?

6750M and 6770M are what they used to offer. Now it's the 7690M and 7690M-XT which are just 28nm die shrinks of those two GPUs. These are offered if you want a box with a little more *oomph* and both do ACF with the 6620G inside the APU.

The onboard 6620G can game fine at 1366x768, which is the native resolution of the 15 inch LCD panel. The notebook also has the option of a 1920x1080 panel, which the 6620G will have issues gaming on. The 6620G is fine for pushing 1080p video to that screen, but rendering games at that resolution is too taxing for it. Thus you can order one of the dGPUs to link with it and game at 1080p. It becomes one of those "what do you need" questions, and you configure it accordingly.

The most recent offering of the DV6 is a 3550MX with a 7690M, 4GB (or 8GB) of DDR3 in 2 sticks (very important that their sizes match) and the 1080p screen, all for right under $900 USD. They don't offer the 3530MX unfortunately; the 3550MX is just an up-clocked version of the same die. They're all multiplier unlocked and can have their P-states controlled by K10.
 
As long as they can play their Facebook games and some online games well, they wouldn't even consider upgrading to a discrete card.

Facebook games are surprisingly CPU intensive and not multi-core friendly. The GPU offloading of browsers is not very good yet.

All my folks complain about is the lag in their Facebook games, and they have a Core2Quad.
 
with IE9 and the new FF they are getting better
eventually everything will go HTML5, and with Metro in Win8 and the browser-based apps
you will notice the GPU being used much more
 
Now that you mention it, if Microsoft actually does what it's thinking, the dynamic scaling will use a lot of resources, making the CPU useless for rendering it fluidly in real time.

Compiz in Linux has demonstrated that a GPU can do pretty neat stuff on the desktop as eye candy, plus some useful features for power users (like the smooth Android-style transitions and such). If Microsoft can do that as well, but using Kinect and GPU processing for it, I'd say our need for GPUs will skyrocket.

It's up to nVidia and AMD to make MS think about such possibilities... Although this is a very far fetched thought, hahaha. More like a dream at this time. And yes, I've seen Minority Report a lot 😛

Here's what I mean:

http://www.youtube.com/v/MN5VIVNSJ5I?version=3&feature=player_detailpage

Cheers!
 
AFAIK, the 7690 and below GPUs are just rebadged HD 6000-series GPUs with an OC; performance would be higher, but still... not 28nm.
At least that's better than Nvidia's straight rebadges. Worst of them all is the GT555M... I still have no f@cking idea which is which between the 2 GPUs of the same name...


I just found out there's a GTX670M.......

http://www.notebookcheck.net/NVIDIA-GeForce-GTX-670M.72197.0.html

turns out to be an OC'ed GTX570M....

What happened to Progress...... :fou:


-Edited-

Just did some deeper research and it looks like some bad news got out. The 7600s are 40nm parts with refinements and the 7700+ parts are 28nm. They had at one point listed the 7690M as a 28nm part when it's really a 40nm one.
 
Funny, I have Ubuntu 11.04 on a USB jump drive for disaster recovery and other handy uses :)
There is a reason you can't walk around military installations with a USB flash drive LOL
 
I got to thinking: even though BD cores are far from efficient or fast compared to Intel's, would it still be viable in situations where one could utilize all 8 cores?

For example: someone is exporting video and wants to do something else while that does its thing. So he sets 4 cores to do the exporting, and leaves the remaining 4 to play a game. On top of that, he might even start a small server that his friends can play on at the same time. He gives the server two cores, and the game two cores.

While that may come across as a rare scenario, it is a real-life one, and it is something I would do if I had the hardware for it. Would someone be able to do that on an i5? I can't say for sure, but I would think that halving the cores available to each process would not go smoothly.
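For illustration, here's a rough sketch of how that core split could be scripted with Python's psutil (the PIDs are made-up placeholders, not from any real setup):

[code]
import psutil

EXPORT_PID = 1234   # the video encoder (placeholder PID)
GAME_PID   = 2345   # the game (placeholder PID)
SERVER_PID = 3456   # the small game server (placeholder PID)

# Give the export 4 cores and split the remaining 4 between game and server
psutil.Process(EXPORT_PID).cpu_affinity([0, 1, 2, 3])
psutil.Process(GAME_PID).cpu_affinity([4, 5])
psutil.Process(SERVER_PID).cpu_affinity([6, 7])
[/code]

The same thing can be done by hand from Task Manager's "Set affinity" option; the script is just the automated version.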

Just a thought.
 
more cores is a good thing, especially for multitaskers
I would think BD would be a decent Folding at Home CPU, for example
AMD didn't really need killer IPC on BD, but it still should've been stronger than their older architecture
BD might have its place in workstations where more cores are crucial
everybody worries about gaming so much and forgets there is real work to be done
 
For those interested, just finished screwing around with my GF's laptop.

3530mx with 6GB (2 + 4) DDR3-1333 and 6620G for video.

CB 11 scores
Stock (1.9GHz base, 2.6GHz boost)
OpenGL: 23.17 fps
CPU: 2.31 pts
CPU (Single): 0.76 pts
MP Ratio: 3.06x

OC'd with K10 (1.7GHz, 2.6GHz boost)

OpenGL: 23.37 fps
CPU: 3.06 pts
CPU (Single): 0.79 pts
MP Ratio: 3.88x

For the OC run I used K10's "state locking" to lock the CPU at the P0 state, the 2.6GHz one.

Temperature at idle is 41C; the highest was 79C during the 2.6GHz locked run. It finished the bench, but I doubt it would survive a stress test. I will look into how high I can get its P1 state to go with all four cores state-locked.

The 3530MX has the following states:
B0: boosted state (2.6GHz default)
P0: standard state (1.9GHz default)
P1: 1.7GHz
P2: 1.6GHz
P3: 1.4GHz
P4: 1.2GHz
P5: 1.0GHz
P6: 800MHz

You OC it by changing the P0 state to 2.6GHz and upping its voltage a bit (I'm at 1.15v). You then undervolt all the states from P1 ~ P6 to lower its idle and low-load power usage. It will try to keep all cores at P0 and only drop them when temps start to go up, and you can adjust how fast it responds to this. I'm looking at upping P1 as the backup state for heavy multi-threaded loads. On the first run it tried to do 2.6 but immediately fell to 1.9GHz across all cores; I think I can get it to 2.1 ~ 2.2 stable.
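As a rough sanity check on how the multi-threaded score moved with the clock (back-of-the-envelope only, and it assumes the score scales linearly with clock, which it won't quite do given the throttling):

[code]
stock_mt, stock_st   = 2.31, 0.76   # stock run (1.9GHz base, 2.6GHz boost)
locked_mt, locked_st = 3.06, 0.79   # run with all cores locked at 2.6GHz

# MP ratio is just the multi-thread score divided by the single-thread score
print(round(stock_mt / stock_st, 2))    # ~3.04, close to the reported 3.06x
print(round(locked_mt / locked_st, 2))  # ~3.87, close to the reported 3.88x

# Perfect scaling from 1.9GHz to 2.6GHz would predict:
print(round(stock_mt * 2.6 / 1.9, 2))   # ~3.16 expected vs 3.06 measured
[/code]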
 
very interesting
as a comparison, my PhII X4 925 (stock 2.8, OC'd to 3.5GHz) with CnQ enabled
scores a 4.13
my old Core2Duo at 2.93GHz scored a 1.65
just to give a range of CPU benches, there is this site:
http://www.cbscores.com/

just so you can see how it scales,
my HD 5770 scores about a 59 in OpenGL
an HD 5670 scored in the lower 50s

your scores are respectable for a laptop, palladin
 
Especially when you consider it's the worst-case memory scenario possible: mismatched sticks with bad timings and only DDR3-1333. It should be 2x2 or 2x4 DDR3-1600 at 9-9-9-24, which improves performance significantly.

This guy was able to get much higher; gotta figure out what he did.

He got 2.61 at the stock 1.9GHz P0 state (vs my 2.31 at 1.9, and 3.06 with mine locked to 2.6).

http://forum.notebookreview.com/hp-pavilion-notebooks/597278-dv6z-qe-user-review-a8-3530mx-6755g2-undervolting-overclocking-results.html

http://imageshack.us/photo/my-images/810/cinebench1124ghz.png/

Something I've noticed while watching it do the single-threaded run is that it never stayed on one core for longer than a fraction of a second. The Windows scheduler is trying to balance the workload of a single thread across four cores, and that is a big no-no. You should lock it to one core to prevent unnecessary state changes and cache loads/reloads inside the CPU. I never realized how bad the NT kernel was at this. If your cache has high latency, then thread switching across cores will cripple your performance due to all the cache loads/reloads and misses.
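For what it's worth, a quick sketch of what locking it to one core could look like, again with psutil (the process name is just a placeholder):

[code]
import psutil

# Find the benchmark process by name (placeholder) and restrict it to core 0
for p in psutil.process_iter(['name']):
    if p.info['name'] == 'CINEBENCH.exe':
        p.cpu_affinity([0])   # pin the single-threaded run to one core
        print('pinned PID', p.pid, 'to core 0')
[/code]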
 
You can push it a lot higher than that, but at that point you're limited only by your board and the cooler. Some are getting 3GHz+ on this chip. Only the OEM samples work; if you bought yours on eBay or Amazon, the speeds reported by CPU-Z are false readings. It is a shame that there are no desktop-class FS1 boards, or the A8 35xxMX line could easily break 4GHz due to the higher-binned sample quality.
 
BD actually isn't that great at Folding at home, not compared with X6s and i7 920s at least. Doing approximately the same WU:

X6 1055T 3.2GHz: ~11000 PPD
i7 920 3.2 GHz: ~12000 PPD
FX 8120 4.0 GHz: ~15000 PPD
i7 970 3.6 GHz: ~32000 PPD

There is some variation of course, but that gives the idea.
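Normalizing those numbers per physical core and per GHz makes the comparison easier (rough math only: it ignores HyperThreading and the bonus point system, and uses the standard core counts for these chips):

[code]
# PPD, physical cores, and clock (GHz) from the figures above
chips = {
    "X6 1055T": (11000, 6, 3.2),
    "i7 920":   (12000, 4, 3.2),
    "FX 8120":  (15000, 8, 4.0),
    "i7 970":   (32000, 6, 3.6),
}
for name, (ppd, cores, ghz) in chips.items():
    print(f"{name}: {ppd / (cores * ghz):.0f} PPD per core-GHz")
# X6 1055T ~573, i7 920 ~938, FX 8120 ~469, i7 970 ~1481
[/code]

The 8120 comes out lowest per core-GHz even though it has the most cores.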
 
yes, that FX 8120 isn't scaling considering how many cores it has
if you are using the V7 client, then I wonder if it is properly optimized for the BD arch
I would love to see a V7 client running on Win8 with BD to see if there is a difference because of the thread scheduler
anybody got benches using Linux on BD?
curious if Linux handles BD better than Windows
I know recently I saw some Ubuntu 11.10 vs Win7 benches on Toms
fascinating how Ubuntu will outperform Windows
 
The problem is that BD's scheduler problems, at least those I know of, were mostly with just a few parallel threads (not a full CPU load). Folding launches 8 parallel threads, so that shouldn't be an issue. I have folded with BD on Ubuntu, though I don't remember the exact results. If I remember correctly, BD did better, but to be fair they all might have improved a little. I haven't really done a scientific analysis of it.
 