AMD Piledriver rumours ... and expert conjecture

We have had several requests for a sticky on AMD's yet-to-be-released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post a question relevant to the topic, or information about the topic, or it will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame baiting comments about the blue, red and green team and they will be deleted.

Enjoy ...
 
That is about far more than how many frames a CPU can render a motorbike at, or how many minutes it takes to render grandmother's birthday video. This is a fundamental shift in the way software is coded and hardware is utilized. Turning the GPU into an open-ended vector co-processor that shares address space with the CPU and can interact on the fly is just ... wow. This means you can interact with both the CPU and GPU in the same machine-level code stream; no need to context switch or segregate your code into "GPU land" and "CPU land". Create a construct in memory, do ~stuff~ to it, then reference that same construct in a GPU opcode without having to first re-create that construct in GPU memory.
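To make that concrete, here is a minimal sketch of the idea using CUDA's managed (unified) memory, simply because that is a public API that illustrates a shared address space; it is not AMD's HSA plumbing, and the kernel, names, and sizes are made up for the example:

```
#include <cstdio>
#include <cuda_runtime.h>

// Toy kernel for the example: doubles every element in place.
__global__ void double_all(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int n = 1 << 20;
    float *data = nullptr;

    // One allocation, visible to both CPU and GPU: the construct is never
    // re-created in a separate "GPU land" copy.
    cudaMallocManaged((void **)&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 1.0f;    // CPU touches it...
    double_all<<<(n + 255) / 256, 256>>>(data, n); // ...GPU works on the same pointer...
    cudaDeviceSynchronize();
    printf("data[0] = %f\n", data[0]);             // ...CPU reads the result directly.

    cudaFree(data);
    return 0;
}
```

The point is the single pointer: the CPU initializes the buffer, the GPU operates on it, and the CPU reads the result, with no explicit copy step in either direction.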

For the most part, I agree. There are a few things to watch for, though. I'd be interested to see the latency of the GPU accessing main memory (as opposed to local VRAM), and you still have the fundamental problem of GPUs being horrid at sequential logic compared to a CPU. Sharing address space is all well and good ... unless we still use 32-bit exes and have a hard 4GB cap in place. You also need the GPU to understand how to execute low-level machine code, which requires a significant amount of extra HW thrown in (decode logic).

If it can be pulled off, great. But I think this is still a few years away from being the norm.
 
Looking at the benchmarks from several different sites, I come to this conclusion (relative scores out of 10):

2600K 10
2500K 9.5
8150fx 6.5
1100T 6.3
980 6.0

The 1100T and the 8150fx can compete head-on with, or slightly beat, the 2500K in apps that use six or more cores, but they fall short when an app uses fewer, and they even lose in Resident Evil 5, which can use all six cores.

Now, per core per clock, I'm still going to say a BD core is only about half as strong as an Intel (Ivy) core.


In the next couple of years this design will most likely look better; no one, not even huge AMD haters, can deny that.

Side note

About two weeks before Bulldozer released, I knew something wasn't right when all the rumors about the processor were bad. The fanboy in me kept saying "it's AMD, they have great engineers, just bad marketing", but I started to wonder, and I said to myself "as long as it's at least 10% faster per core and faster than the 2600K in multithreaded apps, I'll buy it". Well, let me just say I'm not letting that happen again with Piledriver! I'm not even going to set the money aside until it comes out and has full reviews (not AMD-only ones).

But something tells me I'll be happy with Piledriver! If not, I think this is the end for me and AMD when it comes to their CPUs (not APUs or GPUs).

I'm sure I'm not the only one who feels this way. AMD will get one more chance with me; otherwise I'm jumping to Intel, and that makes me SICK to my stomach and also a little bit teary.

The issue is threefold: SW scaling, number of cores, and per-core performance. (I don't expect processor execution speed to vary significantly between Intel and AMD.) If you have two CPUs with the same number of cores and the same execution speed, then per-core performance wins out for a given benchmark. Since BD sacrificed per-core performance, the design has to offer BOTH more cores AND the appropriate SW scaling to USE those cores, or a significant speed increase to offset Intel's design (and, as I just said, I don't expect such a gap to ever manifest).

Point being, Intel has a design that, more often than not, will be able to use all its execution resources. AMD is basically banking on SW scaling, but with the advent of the GPGPU, I can foresee everything that would scale well being offloaded to the GPU. Thus, adding more cores is not a long-term solution.
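To put rough numbers on that trade-off (my own Amdahl's-law back-of-envelope, not benchmark data): effective throughput is roughly per-core speed / ((1 - p) + p/N), where p is the fraction of the work that actually scales across N cores.

p = 0.80: eight cores at 0.7x per-core speed give 0.7 / (0.20 + 0.80/8) ≈ 2.33, while four cores at 1.0x give 1 / (0.20 + 0.80/4) = 2.50, so the faster cores win.
p = 0.95: eight cores at 0.7x give 0.7 / (0.05 + 0.95/8) ≈ 4.15, while four cores at 1.0x give 1 / (0.05 + 0.95/4) ≈ 3.48, so the extra cores win.

The whole bet rests on p staying high, which is exactly what GPGPU offload undercuts.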
 
For desktop Trinity? First week of June.

For full PD? Last week of September.

*All information is half-remembered from supposed 'leaks' that I can't be bothered to find at the moment.

Can anyone else confirm this? I was under the impression that the desktop Trinity CPUs wouldn't be available until August.
 
How much longer do I have to wait? I am getting twitchy.

:cry: :heink: :fou: :ouch: :pfff: 😱 🙁 :??: :sweat:



You are at the beginning of the 100th page. I was trying to be in the 1st position on the 100th page of this thread; I was tracking this thread for days to be in the 1st position, so as to host the 100th page.

(To stay on topic 😛 and to avoid the ban hammer 😗: a GPU requires massive memory bandwidth, so an APU needs massive memory bandwidth to work effectively, and if the GPU is going to use all the bandwidth, then the CPU side will be waiting for data and thus see a lot more latency :??: )
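To put a rough number on that bandwidth gap (my own back-of-envelope, not from any leak): dual-channel DDR3-1866 tops out around 1866 MT/s x 8 bytes x 2 channels ≈ 29.9 GB/s, shared between the CPU and GPU halves of an APU, while even a mid-range discrete card with 256-bit GDDR5 has well over 100 GB/s all to itself.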
 
AMD has already announced that the company will be unveiling its new CPUs at Computex in two weeks' time, so unless there are some more leaks, we're going to have to wait until the 6th of June to find out the full details of the new platform.

Source. The article is talking about FM2, which I thought was the DT variant, so June 6th; however, their review of Trinity mentioned DT "next quarter", which matches up with August.
 
:cry: :heink: :fou: :ouch: :pfff: 😱 🙁 :??: :sweat:


http://thumbs.dreamstime.com/thumblarge_457/12597961935vRqX5.jpg
You are at the beginning of the 100th page. I was trying to be in the 1st position on the 100th page of this thread; I was tracking this thread for days to be in the 1st position, so as to host the 100th page.

Dealwithit.jpg. I am truly an evil person, and this was by design; don't mess with me or I will blow another Ivy ... don't tempt me.

Speaking of which, I have more IB benches with LN2 and dry ice, but this is the wrong place for them, I guess.

On the AMD side, I am actually quite positive for whatever reason, more so than with Zambezi; in the build-up months, that launch kind of created the feeling that something was wrong.
 

There is an AMD overclocking thread but no Intel overclocking one, so that means no liquid N2 or He, or solid CO2, here;
means I am in the safe zone 😀



Let's put another 0 after 100; new target: 1st post on the 1000th page 😗


😍
 
Because he's an AMD shareholder? 😛
It'd be funny if Rory Read used an Intel-powered ThinkPad, though.


Lol, no, I'm not crazy (plus I don't have the money to waste; I would rather gamble, it's safer :kaola: ). I wouldn't put shares in any technology company (plus our stock market is pretty bad), not even Intel; it's just not smart. The market changes too fast.


Over the years I have swapped from ATI to Nvidia, to ATI, from Intel to AMD to Intel, in electrical goods I have swapped from Panasonic to Sony to Panasonic, in cameras from Canon to Panasonic to Nikon, and throughout it all, not a single tear shed.

Why do you feel teary?

Lameness, I guess; I'm typically brand-loyal, but I'm slowly changing.
 
Brand loyalty is OK as long as you're happy with your product.
Asking any piece of HW to do something it obviously can't do, or struggles with, isn't OK.

This is where pricing comes in, as not everyone has a top rig, be it SSD, GPU, CPU, etc., or even your screen choices.
If it's better than what you currently have, and if it's good enough to make a noticeable difference, then you win, as long as it's within your spending limits.
 
For the most part, I agree. There are a few things to watch for, though. I'd be interested to see the latency of the GPU accessing main memory (as opposed to local VRAM), and you still have the fundamental problem of GPUs being horrid at sequential logic compared to a CPU. Sharing address space is all well and good ... unless we still use 32-bit exes and have a hard 4GB cap in place. You also need the GPU to understand how to execute low-level machine code, which requires a significant amount of extra HW thrown in (decode logic).

If it can be pulled off, great. But I think this is still a few years away from being the norm.

Read the interview again. He wasn't talking about a dGPU but about using the iGPU for these things. The iGPU shares the exact same memory as the CPU, the same bus and access paths. Currently the iGPU uses a different virtual address space for its memory, and copying data between the two requires page locks and framing. By allowing the GPU and CPU to talk in the same address space, you can treat data in system memory as data in GPU memory and vice versa. This is something that will be implemented at the kernel level first and foremost. You could also use this within the 2GB limit of 32-bit land, but it would require some form of address translation similar to what PAE does.

Honestly, we all need to get the hell away from the 32-bit world; 2GB is not enough anymore.
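For contrast with that shared-address-space model, this is roughly what the "page locks and framing" path looks like today; again a CUDA sketch purely as an illustration (the OpenCL/HSA equivalents differ in names, not in spirit), with the buffer size made up:

```
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *host = nullptr, *dev = nullptr;
    cudaHostAlloc((void **)&host, bytes, cudaHostAllocDefault); // page-locked host buffer
    cudaMalloc((void **)&dev, bytes);                           // separate GPU-side allocation

    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    // The data has to be re-created in GPU memory before any kernel can touch it,
    // and copied back afterwards: exactly the round trip a unified address space
    // is meant to remove.
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
    // ... launch kernels on dev here ...
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);

    printf("host[0] = %f\n", host[0]);
    cudaFreeHost(host);
    cudaFree(dev);
    return 0;
}
```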
 
Hey, I was just wondering: does Cyclos technology = bad overclocks??


Well, basically this Cyclos stuff aims at creating an LC tank-circuit oscillator throughout the chip to save energy.
However, an LC oscillator will resonate ONLY at one frequency, and the chip will be the most energy efficient at that particular frequency.




Since Power = C·f·V², going lower than the resonant frequency should not be much of an issue. On the other hand, pushing higher than the resonant frequency would lead to a sudden, dramatic (more than normal) increase in power consumption ... would it not?!!


Let's say AMD designs the Vishera chips to resonate at 4.4GHz: wouldn't there be a horrendous increase in power consumption when trying to push it past even a mild OC like 4.8GHz???
Is it fair to conclude that the Vishera chips will be bad overclockers, with excessive heat/power draw limiting the attainable OC??



PS: 1. Another implication of this is that if the same AMD mobile Trinity dies are launched as desktop Trinity chips, then going from 35W to 100W (a ~3x increase) will barely allow for a 60% increase in clock speeds?!! (See the rough numbers just below.)
2. Could this be why there are no 45W Trinity mobile chips: because they were optimizing the die for the lower frequencies of the 17W/25W parts?
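Some rough numbers on the PS, treating it as plain P ≈ C·f·V² scaling and leaving the resonant-mesh question aside (my own back-of-envelope, not anything AMD has said): if voltage has to rise roughly in step with frequency, power grows with about the cube of the clock, so a 100W / 35W ≈ 2.9x power budget buys only about 2.9^(1/3) ≈ 1.4x, i.e. roughly a 40-45% clock increase. If voltage could stay flat, power would scale linearly and a 2.9x clock increase would fit. The quoted ~60% sits between those two cases, which is about what you would expect when voltage rises more slowly than frequency.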
 