AMD Piledriver rumours ... and expert conjecture

We have had several requests for a sticky on AMD's yet-to-be-released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post a question relevant to the topic, or information about the topic, or it will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame baiting comments about the blue, red and green team and they will be deleted.

Enjoy ...
 
Nvidia is actually in a much better situation than AMD right now.
They have $3 billion in cash, almost no debt, and double the profit margins.

AMD is on the ropes, with more debt than cash. They need Trinity to succeed in a big way.

Nvidia is doing well as a company, but their cards need to be overhauled.
They need low-power designs ASAP;
something equivalent to a 560 Ti on a 128-bit bus at 80 watts would do it.
"On the ropes" wasn't a reference to their economic situation but to their product designs.
They ought to put more effort into their video cards and less into Tegra CPU designs.
 
Not saying AMD isn't in trouble, but Nvidia is in a bad spot too. They need to get Tegra sales up or they will be in the same boat as AMD.

Nvidia's GPUs are still selling very well despite the hardware issues, and they have had the professional market almost all to themselves. As for Tegra, I think of it as more of an experiment; it could work if they can get the costs down and push it into cheaper tablets and smartphones, but margins are already slim when it comes to ARM.
 
Nvidia is doing well as a company, but their cards need to be overhauled.
They need low-power designs ASAP;
something equivalent to a 560 Ti on a 128-bit bus at 80 watts would do it.
"On the ropes" wasn't a reference to their economic situation but to their product designs.
They ought to put more effort into their video cards and less into Tegra CPU designs.


Nvidia has Kepler (28 nm, same TSMC process as AMD) due in a month or two. How much more ASAP can you get? AMD has yet to ship its middle-of-the-road 78xx-series cards, which I think will see the highest volume and profits for AMD. If Nvidia ships their mid-range first, they'll be practically even.

They are positioned just fine in the graphics and phone/tablet markets with Tegra 2/Tegra 3 (shipping) and Tegra 4 (28 nm, in the queue).
 
...
Back on topic:
I think that in the long run AMD is making the right moves.
They can't compete in a straight-up performance desktop CPU battle;
what it would cost in R&D would nearly bankrupt them.
So they are finding niche markets where they can compete:
mobile, and budget OEM towers that don't need discrete GPUs to handle what the general public needs them to do (browsing, email, online gaming, etc.).
They even have a good marketing position with "more cores" and "more GHz",
which will appeal to the general computer user.
Even names like Bulldozer can be used well in commercials, like I stated before,
especially if they shift their R&D budget over to advertising.
Also watch out for ARM;
it could very well be a player in the desktop arena a few years from now.
x86 is still viable, but there will be a day when it is replaced;
that's just the way the tech world goes.
x86 has probably lasted longer than anybody at IBM during the 8086/8088 days ever imagined.
IMHO, AMD never has overtaken Intel in the CPU market and never can.
I still remember the days of PC-compatible CPU makers and how almost all of them are gone or were swallowed up.
AMD serves a purpose for Intel because of the US government and monopoly restrictions:
Intel needs to have a competitor.
They forgot about that with their OEM manipulations and paid a big price for it.
Now, if ARM CPU makers become a major factor, then AMD is not needed anymore.
This is all my opinion;
I could be wrong.....
You have said some things that are true.

I think that in the long run AMD is making the right moves.
They can't compete in a straight-up performance desktop CPU battle;
what it would cost in R&D would nearly bankrupt them.
While I believe it's true that AMD cannot spend on R&D what Intel spends, AMD can nevertheless do some R&D, and if it is the right R&D then AMD can indeed compete with Intel in the desktop/workstation CPU market. AMD is much smaller than Intel, so they must focus more. And since AMD is not running their own fabs any more, they could even have Intel manufacture their CPUs, an excellent idea. See how we can all work together? Even with that idea, perhaps staying with a larger, more refined process and a new design would give a better CPU? Is smaller always better?

Even names like Bulldozer can be used well in commercials, like I stated before,
especially if they shift their R&D budget over to advertising.
Condescending? I think a little, yes. No, I reject the idea that AMD should stop R&D and merely pick up market share where they can. What AMD does has its foundation in intellectual property. They have done really well with Bulldozer in many ways. It just has problems. Let's not hate; that's so last century.
 
Oh snap! We're in panic mode!? I didn't get the memo!

* Not speaking for Intel Corp *

With mobile sales in the lurch, Intel will be without a credible IGP solution;
nor will slapping lipstick on the HD 3000 help.
AMD is keeping Intel honest, or you would still have the HD 1000.


 
Nvidia has Kepler (28 nm, same TSMC process as AMD) due in a month or two. How much more ASAP can you get? AMD has yet to ship its middle-of-the-road 78xx-series cards, which I think will see the highest volume and profits for AMD. If Nvidia ships their mid-range first, they'll be practically even.

They are positioned just fine in the graphics and phone/tablet markets with Tegra 2/Tegra 3 (shipping) and Tegra 4 (28 nm, in the queue).

We will have to wait and see what Nvidia does whenever Kepler finally comes.
The key is how much power it draws:
can they make low-power designs?
 
Condescending? I think a little, yes. No, I reject the idea that AMD should stop R&D and merely pick up market share where they can. What AMD does has its foundation in intellectual property. They have done really well with Bulldozer in many ways. It just has problems. Let's not hate; that's so last century.
quote-r@ck3tm@n

To respectfully disagree: there was no hating on Bulldozer or AMD.
I think the BD/PD and future CPU roadmap looks promising.
I think they are thinking long term in CPU making:
the whole module concept, once refined, could give Intel a run later down the road.
A company doesn't plan ahead for only one year,
and trying to catch up with Intel in one architectural change isn't going to happen.

I just thought that Bulldozer is a catchy name for a CPU in the advertising world;
it lends itself to making a cool commercial,
i.e. the nerd on a big real bulldozer running over the macho guy representing Intel.
That statement I made was in admiration.
I would definitely not count AMD out;
if anything, the way they are going, in five years they could have a drastically bigger market share.
And I was referring to AMD not spending R&D money on trying to compete with Intel in the performance desktop market in the short term;
the money they would have to use to do that would be phenomenal.
To catch up to Sandy Bridge in the next year or two would cost billions, IMHO.
They still need R&D,
but if they are going to spend extra money it should be on advertising.
When AMD is known as well as Intel in the general consumer's mind, then AMD wins.
 
That is why AMD is modularizing their CPUs. They plan to eventually remove the FPU entirely and use the Radeon Stream SIMD cores as their FPU.

Which would be a disaster performance-wise.

Again, if a CPU can only handle 2 instructions per core [let's assume AMD sticks with its module approach for this discussion], what good do 400 separate SIMD cores do? Now, instead of one strong FPU, you have 400 weaker ones, and CPU FP performance WILL suffer as a result.

There's a reason why massive FP datasets [rendering, and more recently encoding] have moved to the GPU, but normal FP processing has remained CPU-bound.

This is the exact same reason why I called Larrabee DOA from the start: CPUs are good at doing one thing at a time REALLY quickly, and at rapidly swapping between tasks. GPUs are good at doing lots of simple math equations at the same time, but stink royally on equations that can't make use of their many individual FPU units [shaders]. Any attempt to mix the two will lead to severe performance degradation, for reasons that should be fairly obvious.

Go read it again and stop hating. This is about future developments, not BD.

AMD's touting heterogeneous computing as the next big thing. The entire purpose of the FMA3/FMA4/AVX instruction sets is to get away from classic CPU SIMD instructions and towards a more parallel, modular approach. Also, a GPU is just a giant specialized SIMD FPU, btw. The GPU manufacturers (Nvidia/ATI) have made more strides in SIMD FPU research than either AMD or Intel have. By going to the newer instruction sets and fusing the power of the GPU with the CPU, you end up with a ridiculous amount of potential.

Finally, there are different types of instructions. You're thinking of the x86 instruction set when you're talking about "two per core". Did you know that SIMD stands for "Single Instruction, Multiple Data", meaning one instruction but multiple operations? They're easy to multiplex and run in parallel. It's integer ops that are hard to multiplex.

Funny part is ... Intel's trying to do the same thing right now.

Now sit down and try to imagine what types of instructions get used for what. You're not doing integer addition / logic compares with SIMD, nor are you doing load / store operations. SIMD (what the FPU is now) is used for large, complex vector math and scientific calculations. These are things that lend themselves well to massively parallel operations. This is AMD (and Intel) wanting to move the GPGPU into the CPU instruction set, eventually. The overall idea is that in the future, your OS will analyze code and determine the best component to dispatch that code to. It's blurring the line between the CPU / FPU / GPU instruction sets.
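
To make the "one instruction, multiple operations" idea concrete, here is a minimal C sketch using SSE intrinsics (the baseline SIMD instruction set on both AMD and Intel chips). The array contents are made up for illustration; this is just a sketch of the concept, not anyone's actual implementation.

/* One packed SSE instruction (_mm_add_ps) adds four floats at once,
 * where the scalar loop needs four separate additions. */
#include <stdio.h>
#include <xmmintrin.h> /* SSE intrinsics */

int main(void)
{
    float a[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    float b[4] = { 10.0f, 20.0f, 30.0f, 40.0f };
    float scalar[4], simd[4];

    /* Scalar: four separate add instructions. */
    for (int i = 0; i < 4; i++)
        scalar[i] = a[i] + b[i];

    /* SIMD: a single packed add covers all four lanes. */
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(simd, _mm_add_ps(va, vb));

    for (int i = 0; i < 4; i++)
        printf("%.0f %.0f\n", scalar[i], simd[i]); /* identical results */
    return 0;
}

The same idea scales up: AVX widens this to eight floats per instruction, and FMA3/FMA4 fuse a multiply and an add into one.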
 
AMD buying ATI was a big a$$ gamble. They could have easily gone bankrupt during that time.

Thankfully it seems to have paid off, and both companies are stronger for it, as evidenced by the growing popularity of the APUs (especially in mobile devices) and ATI's newer cards. I'm mostly an Nvidia user myself, but I can appreciate the power of ATI's cards.
 
I think some people are really underestimating the strength heterogeneous computing has going forward. The thing is, only a handful of programs are set up to use it, because most people don't want to spend the money on a discrete GPU. Right now that makes sense, because it has almost no benefit for most people.

Whereas we are getting to the point where almost every chip is going to ship with an IGP. In 2-3 years, almost all of the market will have a decent amount of graphics power in the system. At that point, software will start taking advantage of the new opportunity to make programs faster and more efficient.

If you're building a house, your CPU is the architect who can quickly analyse the blueprints, which look like code to everyone else. He is essential. However, if you don't have the massive workforce (your GPU) to build the house quickly, you're wasting a lot of time that could be better spent.

In a short while, CPU+GPU will be the standard. Intel and AMD both know that, which is why they are both in such a rush to make their IGPs better.

AMD got the jump on Intel this time; as long as they can stay ahead, they will be in good shape.
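
For a taste of what handing work to that "workforce" looks like in code today, here is a minimal OpenCL sketch in C. OpenCL is the vendor-neutral GPGPU API that both AMD and Nvidia support; the kernel, variable names, and array size here are invented for illustration, and error checking is omitted to keep it short.

/* The "workforce": every GPU work-item squares one element in parallel. */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void square(__global const float *in, __global float *out) {"
    "    size_t i = get_global_id(0);"
    "    out[i] = in[i] * in[i];"
    "}";

int main(void)
{
    enum { N = 1024 };
    float in[N], out[N];
    for (int i = 0; i < N; i++) in[i] = (float)i;

    /* The "architect" (CPU) sets up the job... */
    cl_platform_id plat;  clGetPlatformIDs(1, &plat, NULL);
    cl_device_id dev;     clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_mem bin  = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                 sizeof in, in, NULL);
    cl_mem bout = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof out, NULL, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "square", NULL);
    clSetKernelArg(k, 0, sizeof bin, &bin);
    clSetKernelArg(k, 1, sizeof bout, &bout);

    /* ...then the whole crew (1024 work-items) runs at once on the GPU. */
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, bout, CL_TRUE, 0, sizeof out, out, 0, NULL, NULL);

    printf("out[10] = %f\n", out[10]); /* expect 100.0 */
    return 0;
}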
 
For those individuals who may not be very familiar with what exactly FPU / SIMD units do these days:

http://en.wikipedia.org/wiki/SIMD

http://en.wikipedia.org/wiki/Floating-point_unit

FPUs are very, very old. No modern CPU has an actual "FPU" anymore; they have a SIMD unit that can also handle FPU instructions. Old 8087 instructions have largely been replaced with SIMD equivalents (MMX/SSE/AVX/FMA/XOP). FPU/SIMD units are not accessed the same way the CPU integer units are: they don't share pipelines or schedulers, and they don't need to be decoded into micro-ops. They are dispatched directly to the SIMD units for execution, with some rearranging done beforehand for optimal execution time. Because SIMD does not include ~any~ logical conditional instructions (compare/jump), there is no need for branch prediction. They have their own directly addressable registers and do not share the x86 register file. It's best to treat the SIMD unit as a separate coprocessor, even though it inhabits the same die and uses the same memory controller as the integer units.

Due to this, SIMD instructions will always be highly parallel in nature. The whole purpose of the SIMD unit is to execute multiple operations simultaneously on large arrays of data.
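
As a concrete picture of "multiple operations simultaneously on large arrays", here is a small C sketch that walks an array four floats at a time with SSE. The array name and size are made up for illustration, and the length is kept a multiple of four so no scalar tail loop is needed.

/* Scale a large array four lanes at a time: each _mm_mul_ps
 * executes four multiplications in a single instruction. */
#include <stdio.h>
#include <xmmintrin.h> /* SSE intrinsics */

#define N 1024 /* multiple of 4, so there is no scalar tail */

int main(void)
{
    static float data[N];
    for (int i = 0; i < N; i++)
        data[i] = (float)i;

    __m128 scale = _mm_set1_ps(0.5f);      /* broadcast 0.5 to all 4 lanes */
    for (int i = 0; i < N; i += 4) {
        __m128 v = _mm_loadu_ps(&data[i]); /* load 4 floats */
        v = _mm_mul_ps(v, scale);          /* 4 multiplies at once */
        _mm_storeu_ps(&data[i], v);        /* store 4 results */
    }

    printf("data[10] = %f\n", data[10]);   /* expect 5.0 */
    return 0;
}

Note there is no branching inside the vector work itself, which is exactly why, as described above, the SIMD unit gets by without branch prediction.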
 
Nice one, veridian.
Is that your analogy, with the CPU/architect and GPU/workforce?
Is it okay if I use it elsewhere?

As somebody who has done encoding/rendering using my HD 5xxx series to accelerate with ATI Stream,
plus somebody who has just recently gotten into Folding@Home,
I appreciate that the GPU is going to have a huge role to play.
Really, the GPU has been sitting there being lazy for many years,
just doing 3D.
It is about time it was used for more than gaming.
Don't get me wrong,
I like to game too :)
but it was a wasted resource in its early years,
like many of us LOL
 
For those individuals who may not be very familiar with what exactly FPU / SIMD units do these days:

http://en.wikipedia.org/wiki/SIMD

http://en.wikipedia.org/wiki/Floating-point_unit

FPUs are very, very old. No modern CPU has an actual "FPU" anymore; they have a SIMD unit that can also handle FPU instructions. Old 8087 instructions have largely been replaced with SIMD equivalents (MMX/SSE/AVX/FMA/XOP). FPU/SIMD units are not accessed the same way the CPU integer units are: they don't share pipelines or schedulers, and they don't need to be decoded into micro-ops. They are dispatched directly to the SIMD units for execution, with some rearranging done beforehand for optimal execution time. Because SIMD does not include ~any~ logical conditional instructions (compare/jump), there is no need for branch prediction. They have their own directly addressable registers and do not share the x86 register file. It's best to treat the SIMD unit as a separate coprocessor, even though it inhabits the same die and uses the same memory controller as the integer units.

Due to this, SIMD instructions will always be highly parallel in nature. The whole purpose of the SIMD unit is to execute multiple operations simultaneously on large arrays of data.


Thank you.
I wasn't expecting homework :)
but it's greatly appreciated.
I'd better go study.
I hope there isn't going to be a pop quiz :) LOL
 
Nice one, veridian.
Is that your analogy, with the CPU/architect and GPU/workforce?
Is it okay if I use it elsewhere?

As somebody who has done encoding/rendering using my HD 5xxx series to accelerate with ATI Stream,
plus somebody who has just recently gotten into Folding@Home,
I appreciate that the GPU is going to have a huge role to play.
Really, the GPU has been sitting there being lazy for many years,
just doing 3D.
It is about time it was used for more than gaming.
Don't get me wrong,
I like to game too :)
but it was a wasted resource in its early years,
like many of us LOL
Use it as much as you want.
 
http://www.tomshardware.com/news/AMD-ATI-Nvidia-GPU-Tegra,14795.html
Report: AMD Considered Buying Nvidia Before ATI Purchase

" After the acquisition, AMD struggled to integrate its newly acquired graphics business as Nvidia unleashed a flood of strong products, consuming large chunks of market share. AMD eventually gathered its forces and fought back, but Nvidia had already moved aggressively into the mobile SoC market by introducing its ARM-based Tegra chip.

With Tegra installed in tablets and smartphones, Nvidia now has a market capitalization of $9.7 billion whereas AMD is worth just $5.2 billion. "


Now what were you saying again, in that imaginary world of yours...? :heink:
If Nvidia fails on Tegra, they will effectively be competing only with their GPUs; they would be in the same boat as AMD.
 