AMD CPU speculation... and expert conjecture

@palladin9479: Your "professional qualified expert opinion" is a collection of thoughts which are unrelated to what I wrote... especially the first "barrier".

@lilcinw: No I am not referring to project Denver.

@blackkstar: But do you know what kind of performance I am talking about? Besides that, there are many other issues with your post. For instance, HSA is not only about raw performance; it is also about power. HSA can reduce power consumption by 2x to 8x depending on the workload. Doubling the number of modules in a CPU to get a 4m/8c SR dCPU increases the power consumption.

@gamerk316: Nvidia and Intel openly disagree with you, and I suspect AMD does as well.
 


Idk about Broadwell, but I heard from an insider that they're on track for 14nm coming in 2014 (they're starting production).
 

Easy: don't reply to troll/bait posts. Start ignoring them and stop feeding them. Always works for me. 😀
 




There is none at this time.
 


I was posting something like this when I was pondering why GF would cancel FD-SOI; one theory was that GF didn't feel like buying the equipment since ST-Ericsson, who had been driving it, broke up.

The forums ate my post (error 404 - thanks)

My question was why AMD went for 28nm designs. Were they going 28nm (a massive 4nm shrink from 32nm SOI, which is known to work) because at the time GF was promising 28nm FD-SOI, then said fk it when ST-Ericsson pulled out ...

Would AMD have stuck with 32nm SOI had they known GF would fail to produce another SOI node?

Sure, 28nm sounds like a good thing, but think about it this way. SOI offers lower heat and lower power. So does 28nm, but apparently not at the same rate. This leads to increasing IPC by x% while dropping the clock speed by y%. Is 32nm SOI better than 28nm bulk?
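Back-of-envelope, net throughput scales roughly as IPC x clock, so you can put hypothetical numbers on that trade (the 20% and 10% below are made up purely for illustration):

```c
/* Rough model: relative performance ~ IPC change x clock change.
   The figures are hypothetical, just to show how the trade nets out. */
#include <stdio.h>

int main(void) {
    double ipc_gain  = 1.20;  /* suppose the 28nm design gains 20% IPC      */
    double clock_cut = 0.90;  /* but ships 10% lower clocked than 32nm SOI  */
    printf("net change: %.2fx\n", ipc_gain * clock_cut);  /* ~1.08x */
    return 0;
}
```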

Doesn't look like it to me, considering the clock drop going from Richland to Kaveri. Power numbers during CPU stress testing will shed some light on this. Temperature results should be interesting as well.

GF needs 28nm customers; did they offer AMD some sort of deal to make the switch?

We will likely never know the answers to these questions.

With GF only offering a low-power 20nm node, things don't look good for performance parts. It should be another interesting development if Samsung's Exynos 6 actually turns out to be a 14nm FinFET product in 2014: Intel and Samsung on 14nm, GF (possibly) and TSMC on 20nm.
 


First, it's an iGPU in the 7850K. Basically, it's lacking dedicated memory.

The closest thing Intel has would be the Iris Pro found in a few select laptop chips, which can't be purchased as a DT option. In that sense, Intel has no DT product to compete on the GPU side.
 
@gamerk316: Nvidia and Intel openly disagree with you, and I suspect AMD does as well.

Everything that's been added to the DX API going back to DX10 has been either API cleanups or post-processing effects (programmable shaders, new AA modes, tessellation, etc.). Nothing really new in terms of functionality. Hence the sudden interest in physics: we can't do much more graphically, so we'll push physics instead. Or new AA modes. Or tessellation. And so on.

We're not getting any groundbreaking jumps in graphical quality anymore; those days are long gone. For this generation, I expect stagnation. After this generation, I expect we'll move toward ray casting.
 


And there are workloads where you want performance over power consumption.

Look at what cryptocurrency is doing to AMD GPUs. You can't find them anywhere anymore, and used 7950s are commonly going for $350+ on eBay.

Cryptocurrency mining is a real-world example of a very successful consumer-level GPGPU application, and AMD is completely dominating.

Meanwhile, an APU is completely uncompetitive for mining. The same will happen as more GPGPU-based applications spring up.

I think you're speaking from a lack of experience with these sorts of workloads. If you're doing anything compute-heavy, you're not going to magically have every piece of software you use flip a switch and go "we HSA NAO!"

But my JPEG decoding example was spot on: AMD's APU focus right now is out of necessity, because they can't make a big chip, not by choice. HSA is good and it has potential, but some of the ways AMD is using and showing it off are a complete waste, where a standard dCPU would be better at it anyway.

And, as I've said before, if AMD dropped dGPUs and went APU-only, without a platform where you could have the power of a dCPU and a dGPU in a single product, everyone is just going to switch to Intel + Xeon Phi. And you're also forgetting that getting something running on Xeon Phi is a simple recompile, while HSA needs retooling.
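For context, a minimal sketch of what that recompile looks like, assuming Intel's compiler of the era: the same OpenMP C code is rebuilt natively for the MIC card with one flag change (whether it then runs fast is a separate exercise):

```c
/* Sketch: identical source for host and Xeon Phi (MIC).
   Compile commands per Intel's compiler docs of the time; exact
   flags may vary by version:
     host:  icc -openmp saxpy.c
     Phi:   icc -mmic -openmp saxpy.c   (binary runs on the card) */
#include <stdio.h>

#define N 1000000

int main(void) {
    static float x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    #pragma omp parallel for        /* same pragma either way */
    for (int i = 0; i < N; i++)
        y[i] = 2.0f * x[i] + y[i];  /* saxpy: y = a*x + y */

    printf("y[0] = %f\n", y[0]);
    return 0;
}
```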

What you're suggesting is absolute suicide for HSA and AMD. You need to back off the whole "Dell and HP are not shipping as many desktops, the desktop is DYING!" train, and the "ARM will take over everything, because what the desktop needs right now is a slower chip that uses less power and is not compatible with the entire x86 Windows software library" one.

 


Whoops, I meant to ask about the Nvidia equivalent of the iGPU in the 7850K.
 


More powerful than a GT 640, but less powerful than a GTX 650.*

As has been mentioned before, the 7850K's iGPU has the same number of GCN cores as the discrete Radeon 7750, but it will run slower than the Radeon 7750 due to memory bottlenecking and lower clock speed.


*Judging by TechPowerUp's charts.
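A rough peak-bandwidth comparison backs that up, assuming dual-channel DDR3-2133 on the APU and the stock GDDR5 7750 (theoretical peaks only, and the APU's pool is shared with the CPU):

```c
/* Theoretical peak bandwidth in GB/s = transfer rate (MT/s) x bus width
   (bytes) / 1000. Configs assumed: dual-channel DDR3-2133 and a
   reference 7750 with 128-bit GDDR5 at 4.5 Gbps effective. */
#include <stdio.h>

int main(void) {
    double apu   = 2133.0 * (8 * 2) / 1000.0; /* two 64-bit channels -> ~34 GB/s */
    double r7750 = 4500.0 * 16 / 1000.0;      /* 128-bit GDDR5       -> ~72 GB/s */
    printf("APU  DDR3-2133: %.1f GB/s (shared with CPU)\n", apu);
    printf("7750 GDDR5:     %.1f GB/s (GPU only)\n", r7750);
    return 0;
}
```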
 
After this generation, I expect we'll move toward ray casting.

That's prohibitively expensive, but I can see us heading that way in a few generations. We'll need at minimum an order-of-magnitude increase in vector processing power before we can get real-time rendering at anything resembling decent resolutions. But when it happens, it's going to be beautiful.
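A back-of-envelope ray budget shows the scale (the rays-per-pixel figure is made up for illustration):

```c
/* Rays per second needed for real-time ray tracing at 1080p60.
   16 rays/pixel (bounces, shadows, AA samples) is an illustrative guess. */
#include <stdio.h>

int main(void) {
    double pixels = 1920.0 * 1080.0;
    double fps = 60.0;
    double rays_per_pixel = 16.0;
    printf("rays/sec: %.2e\n", pixels * fps * rays_per_pixel); /* ~2e9 */
    return 0;
}
```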

The DT form factor isn't going away anytime soon; there is nothing capable of replacing it. APUs can't replace dedicated processors due to economics slamming into physics. dGPUs exist and won't suddenly die off dinosaur-style come next year. If you have a dGPU, then an APU is useless to you in the DT form factor. HSA might be able to salvage some performance, but only in dedicated applications, not in general processing power, aka scalar processing.

AMD not releasing another high-performance desktop CPU at a time when its target market is growing sounds more like a manufacturing problem than AMD just deciding to axe a revenue source.
 
@blackkstar: There are again many issues with your post. E.g., Phi requires much more than a simple recompile, unless you want your code to run slower than on a CPU. But I will repeat my question, which you avoided: "Do you know what kind of performance I am talking about?"

@palladin9479: There are at least two companies, Intel and Nvidia, that disagree with your "APUs can't replace dedicated processors due to economics slamming into physics" mantra.
 


There are two versions of the Radeon 7750. Graphically, Kaveri's iGPU will be faster than the DDR3 version but slower than the GDDR5 version, provided the game fits inside the GDDR5 card's VRAM.

From a compute perspective, the Kaveri iGPU could be faster because it avoids the PCIe bottleneck and has hUMA.
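To put a rough number on the PCIe side of that, here is a sketch assuming PCIe 3.0 x16 (~15.75 GB/s peak each way) and an illustrative 1 GB working set:

```c
/* Time to copy a buffer to a discrete GPU and back over PCIe 3.0 x16,
   versus a hUMA APU where the GPU reads the CPU's pointer (no copy).
   Buffer size and link rate are assumptions for illustration. */
#include <stdio.h>

int main(void) {
    double pcie_gbs = 15.75;  /* PCIe 3.0 x16 theoretical peak, GB/s */
    double buf_gb   = 1.0;    /* illustrative 1 GB working set */
    double ms = 2.0 * buf_gb / pcie_gbs * 1000.0;
    printf("dGPU copy in + out: ~%.0f ms\n", ms);  /* ~127 ms best case */
    printf("hUMA APU:            0 ms (zero-copy)\n");
    return 0;
}
```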
 


I think every enthusiast loves Phenom IIs. I have a 1075T, a 965 BE, and a 960T unlocked to X6. All of them still kick ass. Unfortunately, the big OEMs didn't share the same feeling.
 


Please, stop trolling. Haven't you read any of the posts? Looked at any of the benchmarks? AMD is making headway. They have limited resources and can't do everything at once.
 
PD already beats Phenom II in absolute single-threaded performance. For those asking for a "die-shrunk" Phenom II, they already made one: it's called Llano. And before anyone screams about L3, AMD equipped Llano with double the Phenom II's L2 cache to compensate for the missing L3. Thuban/Zosma had 512KB of L2 per core with 6MB of shared L3, while Llano had 1MB of L2 per core with no L3. Llano was 32nm, and the K-series parts had unlocked multipliers for decent overclocking.

People are just feeling nostalgia when remembering the Phenom IIs. My 970 BE is still running in my backup system, and it's far inferior to my FX-8350.
 


I run a 6300 in my main rig. I don't do any benchmarking, etc., but I have observed this: compared to the three other rigs I have (a 965 BE, a 960T unlocked to X6, and a 1075T, all running in excess of 4 GHz, all with very similar hardware otherwise), the 6300 just handles everything I throw at it noticeably smoother and faster. It's a shame AMD can't charge enough for their products to actually thrive.
 
OH MAI GAWD!!! I'm literally laughing my butt off! I thought this whole GTX 780 Ti GHz thing was a joke from The Q6660 Inside... turns out, they're doing a GHz edition...

http://www.eteknix.com/gigabyte-gtx-780-ti-ghz-edition-specs-released/
 