AMD Piledriver rumours ... and expert conjecture

We have had several requests for a sticky on AMD's yet-to-be-released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post questions or information relevant to the topic, or your post will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame-baiting comments about the blue, red, and green teams and they will be deleted.

Enjoy ...
 
Core disabling defeats the purpose of building the architecture on modular and shared resources and basically makes Bulldozer useless. It goes against what AMD is pushing for and makes the BD architecture look bad, or worse. On paper, the architecture didn't seem so bad that a CPU would need to be crippled to get more performance out of it.
If it yielded tangible performance benefits, some reputable hardware sites would have tested it already, especially for games.
 


All VSync does is try to sync the video output to when the monitor is ready to display a frame. Assuming a 60Hz display, that's one frame output every 16.7ms. However, if a 16.7ms rate can't be maintained, the previous frame is typically re-sent a second time to make up the difference. Hence why FPS in some Vsync implementations drops to the 30's if you can't maintain a constant 60 FPS rate. This also explains why Adaptive Vsync [NVIDIA] isn't the best solution either: even if FPS is significantly over 60, you will still end up taking a hit if any particular frame takes more than 16.7ms to actually be rendered.

Remember, 60 FPS simply means an average of 60 frames over a 1 second timespan. That does not mean a single frame will always be output every 16.7ms. Some frames will be ready sooner, others later. Hence why you can get 60 FPS and still stutter significantly [which I've noticed a LOT of people with C2Q's are starting to complain about... which makes sense, given the FSB would probably add significant latency. Might bear an investigation just to confirm that's what's going on.].

What techreport shows in the article I posted above is that AMD processors tend to spend more time taking more than 16.7ms to create a given frame compared to Intel. This indicates that even if the output FPS is equal between a given AMD and Intel processor [which can easily happen due to a GPU bottleneck; see BF3], Intel will tend to give a smoother experience due to lower latency and more consistent frame output. This further illustrates that Intel processors may be more powerful in gaming than traditional FPS benchmarks indicate.

Hence why I've always argued [sometimes VERY loudly] that we need to bench more than just FPS at max settings, as it doesn't show you the comparative strength of two given CPUs in a non-GPU-bottlenecked situation for gaming.
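For anyone curious what that kind of frame-time analysis looks like in practice, here's a minimal sketch along the lines of techreport's approach. The frame_times_ms list is made-up sample data, not from any real benchmark run:

```python
# Minimal frame-time analysis sketch. The frame times below are made-up
# sample data; a real run would log one entry per rendered frame.
frame_times_ms = [14.2, 15.8, 16.1, 25.3, 15.0, 33.4, 16.4, 15.2, 16.0, 18.9]

TARGET_MS = 1000.0 / 60.0  # 16.7 ms budget per frame on a 60 Hz display

avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))

# 99th-percentile frame time (naive index method): nearly all frames
# finished at least this fast.
sorted_times = sorted(frame_times_ms)
p99 = sorted_times[min(len(sorted_times) - 1, int(len(sorted_times) * 0.99))]

# Total time spent on frames that blew past the 16.7 ms budget.
time_beyond_budget = sum(t - TARGET_MS for t in frame_times_ms if t > TARGET_MS)

print(f"Average FPS:           {avg_fps:.1f}")
print(f"99th percentile frame: {p99:.1f} ms")
print(f"Time beyond 16.7 ms:   {time_beyond_budget:.1f} ms")
```

Two CPUs can post identical average FPS while one spends far more time beyond the 16.7 ms line, which is exactly the difference a plain FPS chart hides.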
 
Hence why FPS in some Vsync implementations drops to the 30's if you can't maintain a constant 60 FPS rate.
As in some games, like Lost Planet, the frame rate does not drop to half.
Maybe it only skips the late frame and displays it in another refresh cycle, thus maintaining more frames per second than 1/4th, 1/3rd, or 1/2 of the refresh rate, and fewer than the refresh rate over a complete second of display 😛
 


Yes, it varies by implementation. For example, some Vsync implementations I've seen do some REALLY weird things.

For example, if two frames were created during the 16.7ms interval, the FIRST would be displayed, and the second buffered. If one more frame were created, the buffered frame would be displayed and the most recent one buffered. Essentially, the most recently created frame would be displayed one frame later, so an extra frame would be available if it takes longer than 16.7ms to create a frame. Of course, this leads to another enemy of gamers: input lag. This method does do a better job of keeping the 60 FPS target though...

The ONLY thing Vsync mandates is that one frame is sent out every screen refresh (16.7ms). How that is accomplished varies by implementation.
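Just to make that queueing behaviour concrete, here's a toy simulation of the "show the oldest finished frame, buffer the newest" scheme described above. The per-frame render costs are invented numbers, and real drivers are obviously far more involved than this:

```python
# Toy timeline for the buffered Vsync scheme described above.
# Render costs are invented sample numbers; real drivers differ.
REFRESH_MS = 1000.0 / 60.0                            # 16.7 ms per scanout
render_costs = [10, 12, 30, 11, 13, 40, 12, 10]       # made-up per-frame costs

# Wall-clock finish time of each frame, assuming back-to-back rendering.
finish = []
t = 0.0
for cost in render_costs:
    t += cost
    finish.append(t)

completed = 0   # number of frames finished so far
queue = []      # finished frames waiting their turn on screen

for refresh in range(1, 10):
    scanout = refresh * REFRESH_MS
    # Move every frame that finished before this scanout into the queue.
    while completed < len(finish) and finish[completed] <= scanout:
        queue.append(completed)
        completed += 1
    if queue:
        frame = queue.pop(0)   # oldest buffered frame goes out, possibly "late"
        print(f"refresh {refresh}: show frame {frame}")
    else:
        print(f"refresh {refresh}: repeat previous frame (nothing ready)")
```

In this run the frame that got buffered ahead of time absorbs the 40 ms spike, so only the earlier 30 ms frame forces a repeated refresh; the cost is that buffered frames reach the screen one refresh later than they otherwise could, i.e. input lag.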
 


Well, I think the 2500K is overrated; either the 8120 or the 8150 can perform almost the same, and now at lower prices. It's up to the GPU guys!

Take a look, 8150 vs a 2500K:

http://www.youtube.com/watch?v=1kd4dvLJQP4
 
Well, sorry not to trust that spreadsheet shown in the video, but like gamerk said, there's a lot of proof out there that the FX doesn't do a very good job.

I've personally seen the i5 and i7 in rigs I've built for friends, and they do offer smoother gameplay most of the time than my own 965 pushed to 3.97 GHz under the same conditions.

Once PD comes out, we'll see how it behaves with games, but so far my expectations are low, and I think everyone else agrees 😛

Cheers!
 


Maybe this explains the weird screen tearing and stutter in Assassin's Creed 2 :??:
 


A person banned from every tech website with a member base starts a rumour to attract traffic; he is dubious at best... What has happened is that AMD have made clear they don't respond to rumours, let alone ones based on "secret documents".

A close source who is a tester and professional overclocker with AMD says AMD will start with Bulldozer, end with Excavator, and then move on to the next big thing... His opinion is that AMD will not be leaving the desktop industry, and this comes from someone who has become a full-on Intel endorsee.

 


If that is indeed what they plan to do (keep 'dozer for years), then by the time I need to upgrade, which won't be for another 3-4 years, I will hands-down go with Intel. In the old days, Intel would rack up gigahertz while AMD worked on architecture, and AMD was faster. Now the roles are switched: AMD racks up "cores" which, beyond 3, become more or less useless to gamers. And gamers are the main PC-building group. So it's a marketing issue on their side.



Actually, not THAT easy. The big boys have all the marketing connections, potential, scientists, patented tech, and NAMES. A new kid would have a difficult time in the beginning, especially in the current, competitive economy.
 


What makes you think that software, along with a refined architecture that includes high-end integrated graphics, won't be a reasonable alternative? Do people honestly believe that Bulldozer is going to be the end result from here on in? They posted a link about the refinements made to Piledriver: without changing the architecture too much from Bulldozer, it still ekes out 15% gains, and the next two will be radical upheavals, likely on a smaller die process using the concurrent tech of the time. To say that Intel is the only choice 4 years from now is rather premature; I would suggest that in 4 years Intel will have an iGPU solution to run with Trinity.
 


Not true. Current trends in transistor density point to approximately a 4x increase in the next 4 years. Assuming Haswell does in fact have 2.5x the number of EUs that IVB has, then if Intel dedicates the same die area to the iGPU in 4 years as they do in Haswell, we are looking at a 10x increase in graphics performance over IVB. This also ignores any driver optimization, frequency and turbo improvements, datapath widening (AVX2, etc.), or other architectural improvements. I think we can assume that in 4 years, Intel graphics will be roughly equivalent to current mid-to-high-end discrete GPUs in terms of raw throughput.

I would say a 1000% increase in Intel iGPU graphics power over the next 4 years would be conservative.
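For what it's worth, here's the back-of-the-envelope math behind that rough 10x figure as a tiny sketch. The 4x density and 2.5x EU-count numbers are the assumptions from the post above, not measurements:

```python
# Back-of-the-envelope iGPU scaling estimate using the post's own assumptions.
density_gain_4yr = 4.0   # assumed transistor-density increase over ~4 years
haswell_eu_ratio = 2.5   # assumed Haswell EU count relative to IVB

projected_gain_vs_ivb = density_gain_4yr * haswell_eu_ratio
print(f"Projected raw-throughput gain vs IVB: ~{projected_gain_vs_ivb:.0f}x")
```

That multiplication obviously assumes graphics performance scales linearly with the transistor budget, which is exactly the point the next reply pushes back on.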
 



Not saying it's the only option. I mean, only if AMD ends up that far behind.
 



The question is, how far can they go? They keep shrinking their die size, but it's getting harder and harder; look at how 22nm turned out. I heard from somewhere that Intel is holding back a little on their current designs (their iGPU area is small compared to AMD's), and if they wanted to, they probably could have made some beast Intel HD 5000 graphics.

Is there any way to shrink after 10-14nm?
 

You might be assuming that will lead directly to 2.5x performance, which isn't a very accurate assumption. Using that reasoning, a Radeon 7970 would perform more than 2x as well as a Radeon 7850: it has twice the shader cores and a higher clock rate (hence the "more than"). Add the extra memory and everything else, and you still don't come close to 2x the performance.

Intel also seems to be relying heavily on GPU clock rates, which will not turn out well for them in the long run.
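As a quick illustration of why paper specs don't translate linearly, here's a small sketch comparing the theoretical shader throughput of those two cards. The shader counts and core clocks are the commonly quoted reference specs, and the model deliberately ignores everything else:

```python
# Theoretical shader-throughput comparison of the two cards mentioned above.
# Reference specs only; the point is that paper throughput != game performance.
cards = {
    "Radeon HD 7970": {"shaders": 2048, "clock_mhz": 925},
    "Radeon HD 7850": {"shaders": 1024, "clock_mhz": 860},
}

def throughput(card):
    # Proportional to shaders * clock; ignores memory bandwidth, ROPs, drivers.
    return card["shaders"] * card["clock_mhz"]

ratio = throughput(cards["Radeon HD 7970"]) / throughput(cards["Radeon HD 7850"])
print(f"Theoretical throughput ratio: ~{ratio:.2f}x")
```

The math says roughly 2.15x on paper, yet as noted above the delivered gap in games is well short of 2x, because memory bandwidth, ROPs, drivers, and CPU limits don't scale along with the shader array. The same caution applies to extrapolating Intel's iGPU from EU counts and density alone.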


http://www.tomshardware.com/news/Vishera-Piledriver-FX-Series-cpu-apu,17188.html
I am rather impressed by the TDP on these chips, mainly the 6300.

http://www.tomshardware.com/news/AMD-Steamroller-Piledriver-Kaveri-processors,17217.html
Very good news, assuming it works out well.

Edit: adjusted wording
 