AMD Piledriver rumours ... and expert conjecture

We have had several requests for a sticky on AMD's yet to be released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post a question relevant to the topic, or information about the topic; anything else will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame baiting comments about the blue, red and green team and they will be deleted.

Enjoy ...
 
As far as I can tell, AMD shares could hit $12 to $15 by Q4 2012; by Q4 2013 they may well be back to $25 or more.

I only see 1 product AMD has with no competition, and that's Trinity. (benefit AMD)

Brazos 2.0 is now on par with or below the 32nm Atoms. (benefit Intel)
Bulldozer/Piledriver won't catch i5/i7/Xeons in the desktop/server space. (benefit Intel)

Graphics probably hasn't changed much. Nvidia/AMD have comparable products, with Nvidia having faster chips but volume issues. (benefit Neutral)

That's 1 positive, 2 negative, 1 neutral product family.

Not a strong recipe for 50% growth.
 
I only see 1 product AMD has with no competition, and that's Trinity. (benefit AMD)

Brazos 2.0 is now on par with or below the 32nm Atoms. (benefit Intel)
Bulldozer/Piledriver won't catch i5/i7/Xeons in the desktop/server space. (benefit Intel)

Graphics probably hasn't changed much. Nvidia/AMD have comparable products, with Nvidia having faster chips but volume issues. (benefit Neutral)

That's 1 positive, 2 negative, 1 neutral product family.

Not a strong recipe for 50% growth.
Atoms still can't do 1080p video. Brazos 1.0 is still more useful than the Atom at the low end.
 
OpenCL increases the speed of the A8-3530MX compared to not using OpenCL. Granted, this tech would still work with an AMD video card, but in an APU (Trinity, Steamroller?) it will still work with an Nvidia GPU installed alongside the APU. It may even work with an Intel CPU + AMD video card, unless AMD took notes from Intel (compiler) and Nvidia (PhysX) about making things not work when competition is detected.

There's a difference between "disabling a feature" and "not optimising". If Intel's proprietary compiler does not optimize for AMD, that's their business. If they actually DISABLED manually programmed SSE calls, however, that would be blatantly illegal. There's a difference between the two.

I thought I beat this one dead.

Intel disabled features of all non-Intel CPUs via code modifications and biasing. They were found to be selling a broken product (their compiler) and false advertising. This isn't up for debate; it's old history and already settled.

You don't get to decide this, I didn't get to decide this, JS didn't get to decide this, the government entity charged with the responsibility to review and ensure fair and ethical business practices gets to decide this. They did decide it and it wasn't what you posted.

Also ... do you not remember the last discussion about this? Intel's compiler disabled ALL SSE instructions, period, end of story. If your CPU didn't have GenuineIntel as its vendor ID, then the code dispatcher wouldn't pass SSE instructions to it and instead would use i386 / 8087 instructions. The only way around this was to patch the executable ~AFTER~ compile to disable the dispatcher's VendorID check.
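To make the mechanism concrete, here's a rough sketch of that kind of vendor-ID check in plain C (illustrative only, not Intel's actual dispatcher code):

#include <stdio.h>
#include <string.h>
#include <cpuid.h>   /* GCC/Clang helper; MSVC has a __cpuid() intrinsic instead */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = {0};

    /* CPUID leaf 0 returns the 12-byte vendor string in EBX, EDX, ECX */
    __get_cpuid(0, &eax, &ebx, &ecx, &edx);
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);

    if (strcmp(vendor, "GenuineIntel") == 0)
        printf("%s -> dispatcher hands out the SSE code path\n", vendor);
    else
        printf("%s -> dispatcher falls back to the generic i386 path\n", vendor);
    return 0;
}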

Unless you're saying that gamerk316 is smarter and a better expert than the entire FTC and Justice Department combined.

Or to put it another way.

By posting what you just did, you're knowingly posting false information. Do I need to post the links to the FTC's findings and the CL agreement between the two again?
 
The Cedar Trail 32nm chips can do 1080p and DX10.1, e.g. the D2700 (2C/4T).

http://www.cpu-world.com/CPUs/Atom/Intel-Atom%20D2700.html

The next node, 22nm, is what's gonna be nearly impossible to beat, unless TSMC/GF get their 20nm done much quicker.
http://uk.hardware.info/reviews/2700/cpu-shoot-out-intel-atom-d2700-vs-amd-e-450
The new D2700 does have integrated HD video decoding and it works as promised for the most part. Both 720p and 1080p HD video can be played as long as it's either MPEG2 or H.264, although 1080p can be a bit jerky at times. It's not a slideshow, but it's also not completely smooth. Especially when the bitrate is a bit higher, it starts to struggle.
 
http://uk.hardware.info/reviews/2700/cpu-shoot-out-intel-atom-d2700-vs-amd-e-450
The new D2700 does have integrated HD video decoding and it works as promised for the most part. Both 720p and 1080p HD video can be played as long as it's either MPEG2 or H.264, although 1080p can be a bit jerky at times. It's not a slideshow, but it's also not completely smooth. Especially when the bitrate is a bit higher, it starts to struggle.

Heresy! Heresy I say. Thou shalt not speak ill of thine lord

Atoms ... yeah ....

Intel deliberately neutered the Atoms to prevent them from competing with their other low end CPUs. They didn't want a repeat of the Celeron 300A days.
 
They need to do something to be strong in the mobile market, and Atom isn't it. Sure, Medfield looks great compared to a 1-2 year old phone; Samsung's and HTC's brand new phones make it look ... slow.

http://news.softpedia.com/news/TSMC-Shows-3-1-GHz-CortexA9-Dual-Core-CPU-and-Doubles-28-nm-Shipments-267543.shtml

A15 is going to hulk smash it, especially if they get 3+ ghz out of it.

Interesting side note, TSMC is going to try and dump 700M to get 20nm faster.

http://www.phonearena.com/news/TSMC-promises-20nm-chips-in-2013_id29519

But AMD isn't sitting still either; it looks like their tablet CPU is slated to hit with Windows 8. Going with 40nm seems odd, but it could be for a larger area to cool fanless (the 22nm plague).

http://www.theregister.co.uk/2012/05/09/amd_hondo_tablet_windows8/
 
Never said that. Remember when people were using an old Nvidia card alongside AMD GPUs to get PhysX support? What was Nvidia's solution? Lock PhysX out in the drivers whenever a non-Nvidia GPU was detected. Nvidia did that to their own customers, and it had very little to do with AMD other than to drive people back to buying Nvidia only. Somehow I don't see AMD stooping to their level.

And heaven forbid this caused some undocumented problem due to the presence of an AMD card; which tech support line do you think would be getting the bulk of the phone calls? It's an unsupported configuration.
 
I thought I beat this one dead.

Intel disabled features of all non-Intel CPUs via code modifications and biasing. They were found to be selling a broken product (their compiler) and false advertising. This isn't up for debate; it's old history and already settled.

Again, you fail to differentiate between the optimization and code gen stages of compilation. I read the settlement as meaning a failure to optimize.

Really simple to test though: take a piece of code, manually insert an SSE instruction, and compile with all optimizations disabled. Then compile with optimizations enabled.

If what you argue is true, then with no optimizations, the only code path would be the SSE instruction, and the app would instantly crash within the CPU dispatcher.
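Something like this would do as the test case (a minimal sketch; the intrinsic and the usual "optimizations off" flags, -O0 or /Od, are just illustrative):

#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics */

int main(void)
{
    /* hand-written SSE: with the optimizer off there is no other code path */
    __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
    __m128 b = _mm_set_ps(8.0f, 7.0f, 6.0f, 5.0f);
    __m128 c = _mm_add_ps(a, b);   /* ADDPS, a genuine SSE instruction */

    float out[4];
    _mm_storeu_ps(out, c);
    printf("%.1f %.1f %.1f %.1f\n", out[0], out[1], out[2], out[3]);
    return 0;
}

Build it twice - once with optimizations disabled, once enabled - and run both builds on an AMD box; if the unoptimized build runs fine, the SSE code itself was never being blocked.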
 
Wouldn't it make more sense if Intel took a single Ivy Bridge core, stuck HD 2500 graphics on it, made it a native die, and called it an Atom?

Pretty similar power consumption given that CULV IVBs get down to 18W.

IIRC Atoms are currently in-order only, as an OoO front-end takes a lot of power in more modern processors. I think Brazos does use OoO, but I'm not 100% sure. While going OoO can boost performance significantly, the battery-life tradeoff for things such as cellphones or tablets may not be worth it.

I believe the newer ARM designs are also going OoO, but both AMD and Intel have many years of experience in designing efficient front-ends, while ARM does not. I think Chris Angelini had an article on this several months ago here on THG. He was predicting ARM would be struggling by 2014 or so.
 
OK, found the article - by somebody named "Mark", not Chris: Mobile: Intel Will Overtake Qualcomm In Three Years

So, to make our point, we have to perform a magic trick. All magic tricks have three acts. The first part is called "The Pledge." That's where we do something ordinary: talk about CPU architecture. Any editorial team can do that. The second act is called "The Turn." We take our ordinary article and make it do something extraordinary. This is where we get into the details of chip fabrication and the history of mobile GPUs, something only a few editorial teams can do.
...
Let’s talk about raw performance before we discuss power consumption. There is no question whether Intel has the best resources to achieve the fastest processors. ARM and Qualcomm are going to face the same growing pains that the x86 world has already struggled through.

In the next processor generation, Qualcomm is transitioning from its partially out-of-order Scorpion architecture to Krait, a full out-of-order design. Krait should more effectively facilitate peak CPU utilization, maximizing efficiency.

At the same time, Qualcomm is now navigating uncharted territory, where its engineers have less expertise. ARM already has some experience with its Cortex-A9, which is out-of-order-capable. But even with the upcoming Cortex-A15, the company will be relying on dedicated reservation stations (the instruction queue) for each of the execution units. While Intel and AMD used dedicated reservation stations in the past, both now employ unified reservation stations to improve performance and utilization. Unlike ARM, Qualcomm is attempting to jump directly to a unified reservation station design. The original Pentium Pro used a unified reservation station, so it’s not inconceivable to think that a company could pull this off successfully.

The Atom architecture doesn’t incorporate any of Intel’s advanced technology. It’s a single-core, in-order design that is more reminiscent of the Pentium CPU than anything modern. But here’s the thing: it’s already faster than the ARM-based competition. As performance demands start to increase, Intel has access to decades of expertise to drop into Atom. We’ve heard that Atom would go to an out-of-order core within five years of its launch, landing it in the 2013 range. So, ignoring power consumption, there is little doubt that Intel can put out faster processor designs.
...
Intel, on the other hand, has always done pretty well with the performance of its platforms (just look at its current Sandy Bridge-E architecture). Again, the challenge for Intel is power consumption, rather than performance.

The reason ARM is dominant on the mobile side is that, to date, Intel has been unable to demonstrate a power-efficient MSoC. In this world, trumpeting impressive performance-per-watt numbers isn’t enough. You actually have to be able to show off a full day’s worth of talk time and impressive standby numbers in order to be functional. With Medfield, Intel demonstrates that its team has the technical know-how to produce an MSoC within striking distance of ARM. As Intel put it, Medfield buys the company a seat at the table.

What happens in the next three years, though? Given that Medfield is competitive with currently-shipping ARM MSoCs, we have to look to the next generation. On the previous page, we suggested that ARM and Qualcomm face at least as significant of a challenge scaling performance up as Intel faces in scaling power down.

We can be perhaps most objective in looking at manufacturing technology. Intel has the best chip fabs in the industry, which allowed it to out-compete AMD during the K6 and K7 era, and maintain its position when AMD introduced the successful K8-era processors. Medfield is currently based on a 32 nm node and is already competitive with ARM-based solutions. Intel’s next move is to make a jump to 22 nm on a 3D FinFET design, representing two steps forward in process technology. Intel has never failed to execute with a fabrication process, and it will already have plenty of experience from its Ivy Bridge-based processors. If the company stays on track, it’s about 18 months ahead of the competition in manufacturing. As soon as the competition starts shipping 28 nm, Intel will follow with 22 nm, and it will be even longer before competing fabs can implement FinFET. This gives Intel another 20-30% improvement in power consumption over its current technology, while basically doubling density.

And the article goes on (and on and on ... 😛). Anyway, of course this is just opinion, but an informed one apparently.
 
http://www.legitreviews.com/news/13067/

Ah, that's great news! So the reviews will start popping up in a few days then.

Again, you fail to differentiate between the optimization and code gen stages of compilation. I read the settlement as meaning a failure to optimize.

Really simple to test though: take a piece of code, manually insert an SSE instruction, and compile with all optimizations disabled. Then compile with optimizations enabled.

If what you argue is true, then with no optimizations, the only code path would be the SSE instruction, and the app would instantly crash within the CPU dispatcher.

Uhm... wasn't the question of which processor supports what already answered somewhere inside it? My point is, if the CPU says "I do support SSE", why would you not use it? It's not like the compiler will give you bad code for a certain CPU after the assembler errata are passed along, right?

OK, found the article - by somebody named "Mark", not Chris: Mobile: Intel Will Overtake Qualcomm In Three Years

And the article goes on (and on and on ... 😛). Anyway, of course this is just opinion, but an informed one apparently.

I kinda agree with it; I've always said Intel has the bigger R&D budget and the Fabs. Those 2 are big advantages to be feared.

Cheers!
 
I kinda agree with it; I've always said Intel has the bigger R&D budget and the Fabs. Those 2 are big advantages to be feared.

Well, as always, time will tell. IMO, Intel shot itself in the foot by not aggressively pushing Atom, I guess for fear of hurting their low- to mid-range mobile CPU sales. But ever since sometime last year, they appear to be serious about it. IIRC Atom will first be on 14nm a couple of years from now, with the successor to Haswell (whose name I forget 😛) coming later in 2014.

Off-topic, but I can see that I'll probably have to finally get a smart phone, instead of the dumb-arse 89-cent phone I got last time I renewed my Verizon contract 😀. My wife got an iPhone 4S the 2nd or 3rd day after its release last October, and although we recently had to replace it (it got bricked during a firmware update - apparently iTunes is not smart enough to update itself first and then the phone firmware, and tried to do both at the same time), it is obviously the wave of the future.

Verizon FIOS has an iPhone (and maybe Android as well) app for smart home control - you can get a video feed of somebody at your front door, unlock or lock the doors and garage door, turn appliances & lights on/off, set your furnace or AC temp, set your DVR to record, etc. - all from your smart phone. It also controls the alarm system. I'm seriously considering getting a system for my house as well as my wife's salon business. It would be great to pick up the iPhone and check the nighttime video cameras around the house or salon if the motion detector goes off. Last year the teenage son of some neighbors down the street had a wild party that the cops broke up at 2AM, and according to other neighbors there were a bunch of drunk teenage partiers hiding in the bushes in front of my house. They didn't break anything, but they dumped their bottles, cans & cups on my yard. And of course I slept through the whole thing 😛. And while the shopping center where my wife's salon is located has a nighttime security service, it would behoove us to upgrade the security system there, as she does have some valuable equipment, supplies, a cash register, etc. on the premises.

My brothers and I have a beach resort condo in Florida that we rent out year round. Right now it's a major expense & hassle using a local real estate company to hand out & collect the door key for each renter. So one brother is installing a smart deadbolt that is Wi-Fi-enabled, so that he can send a PIN to each renter to use, then discard/change it for a new one for the next renter. Seeing as how the real estate company gets something like 12% of the rental for basically the same service, plus advertising, etc., this will save thousands a year, since he is also using For Rent By Owner to advertise, make reservations, etc.
 
And heaven forbid this caused some undocumented problem due to the presence of an AMD card; which tech support line do you think would be getting the bulk of the phone calls? It's an unsupported configuration.
undocumented problem .... 😱

Just curious, is that your justification for it being OK to take a big dump on AMD? Oh, AMD sucks so bad that if anything actually tried to work it's broken, and that's why it was done this way. AMD CPUs are broken. AMD GPUs are broken. Anything AMD has is broken. ... AMD would be bankrupt from all the lawsuits against them if that were the case.

Why do you think people were thinking of doing Nvidia + AMD cards in the first place? The people actually doing it weren't idiots who didn't have a clue.

It's not broken. It's business, business to screw over the competition, i.e. AMD.

The thing is, if AMD even tried some of the stuff that has been pulled on them, they would be sued so fast their head would spin.

http://www.techradar.com/news/computing-components/intel-sues-amd-over-breach-of-agreement-585553?src=rss

AMD is under a microscope all the time, while Intel sits on Mount Olympus where no one is allowed to look down upon them. Taking a leak on AMD is encouraged. This mentality is complete BS. Too many people think this way.
 
Since Intel is sabotaging at the compiler level, why isn't AMD taking legal action? Has AMD already taken legal action? Or did I miss that bit...
Compiler popularity seems to be part strong-arming, part good vendor relationships, part lobbying.
AMD should take a two-pronged approach and push their own compiler as well as independent ones.
Edit: I mean after Intel and AMD settled on cross-licensing, with AMD letting Intel use AMD64 and stuff.
 
Uhm... wasn't the question of which processor supports what already answered somewhere inside it? My point is, if the CPU says "I do support SSE", why would you not use it? It's not like the compiler will give you bad code for a certain CPU after the assembler errata are passed along, right?

During compilation, multiple code paths are created whenever possible, so in theory, any x86 processor going back to the Pentium can run any application on a modern OS. You typically have several SSE paths, possibly an MMX path, and a generic x86 path built into the application.

The issue is one of code generation: because of the way compilers generate multiple code paths, it is possible to force CPU "X" down the generic x86 path. That's what Intel did. But what happens if you disable the optimizer and manually place some SSE code? Would the compiler try to force AMD/VIA processors down the x86 path [which doesn't exist], or would it happily execute the SSE code [no other path to take]?

Basically: is AMD crippled within the optimizer (a failure to optimize for a competitor), or is it done during code gen (totally disabling SSE support)? The easiest way to test is to manually insert some SSE code, disable the optimizer, and see if the program crashes on an AMD CPU. (Or, if the program is simple, just look at the assembly.)
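To picture what that looks like, here's a rough sketch of the multi-code-path idea (illustrative only; has_sse() stands in for whatever check the real dispatcher performs, which is exactly the point in dispute):

#include <stdio.h>
#include <xmmintrin.h>

static void scale_sse(float *v, float s, int n)      /* SSE path */
{
    __m128 factor = _mm_set1_ps(s);
    int i;
    for (i = 0; i + 4 <= n; i += 4) {
        __m128 x = _mm_loadu_ps(v + i);
        _mm_storeu_ps(v + i, _mm_mul_ps(x, factor));
    }
    for (; i < n; i++)                                /* leftover elements */
        v[i] *= s;
}

static void scale_generic(float *v, float s, int n)   /* plain x86 path */
{
    for (int i = 0; i < n; i++)
        v[i] *= s;
}

static int has_sse(void)
{
    /* a fair dispatcher would check the CPUID SSE feature bit here;
       the complaint above is that Intel's checked the vendor string instead */
    return 1;
}

int main(void)
{
    float data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    if (has_sse())
        scale_sse(data, 2.0f, 8);
    else
        scale_generic(data, 2.0f, 8);
    printf("%.1f ... %.1f\n", data[0], data[7]);
    return 0;
}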
 
undocumented problem .... 😱

When you get into untested configurations, yes, they pop up. And then you have to spend time and money to fix them.

Just curious, is that your justification for it being OK to take a big dump on AMD? Oh, AMD sucks so bad that if anything actually tried to work it's broken, and that's why it was done this way. AMD CPUs are broken. AMD GPUs are broken. Anything AMD has is broken. ... AMD would be bankrupt from all the lawsuits against them if that were the case.

I'm just pointing out that it's NVIDIA's proprietary standard, implemented via another NVIDIA proprietary standard. They can do whatever they want with it. AMD is free to license the tech if they so choose.


The thing is, if AMD even tried some of the stuff that has been pulled on them, they would be sued so fast their head would spin.

Just like Intel/NVIDIA have been? Sounds fair to me.
 
Since Intel is sabotaging at the compiler level, why isn't AMD taking legal action? Has AMD already taken legal action? Or did I miss that bit...
Compiler popularity seems to be part strong-arming, part good vendor relationships, part lobbying.
AMD should take a two-pronged approach and push their own compiler as well as independent ones.
Edit: I mean after Intel and AMD settled on cross-licensing, with AMD letting Intel use AMD64 and stuff.
AMD took legal action against the compiler in 2005, and the court ordered Intel to fix it in 2010. Intel fixed it with this switch:

/QxO

Enables SSE3, SSE2 and SSE instruction sets optimizations for non-Intel CPUs

So in order to enable it for AMD CPUs, you have to know to use that option. By default it's still disabled.
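For anyone who wants to try it, it's just an extra switch on the Intel compiler command line, something like the following (illustrative only - check your compiler version's docs for the exact spelling):

rem Windows syntax for the Intel C++ compiler driver
icl /QxO main.c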
 
AMD took legal action against the compiler in 2005, and the court ordered Intel to fix it in 2010. Intel fixed it with this switch:

/QxO

Enables SSE3, SSE2 and SSE instruction sets optimizations for non-Intel CPUs

So in order to enable it for AMD CPUs, you have to know to use that option. By default it's still disabled.
So if someone chooses to leave it as it is or forgets (however unlikely) to turn it on, optimizations for AMD stay disabled, yes?
This seems like an awareness problem for programmers and software vendors.


 
Somewhat. If Intel is supporting the developer and staffing them with a few software engineers, do you think they will allow that developer to compile with that option?

http://software.intel.com/sites/billboard/article/blizzard-entertainment-re-imagines-starcraft-intels-help

Blizzard worked with companies such as Intel to make sure that StarCraft II works efficiently on the latest PCs and supports the largest group of systems and video cards on the market today. Sigaty said his team had the foresight from the start of development to look to the future of processing power and technology.

“Intel’s current processors are definitely poised to help deliver that killer experience to any diehard gamer. No one enjoys playing a game if they’re subject to interface or gameplay lag, stuttering performance, or slowed load times. Owning a current generation Intel® processor will definitely help avoid those troubles and add to the fun and enjoyment of StarCraft II.”

“Intel engineers worked with Blizzard teams to get Blizzard’s games running fast and looking great on PCs,”

AMD is not in that "largest group of systems on the market today"
 