AMD CPU speculation... and expert conjecture


8350rocks

Distinguished
Honestly, I see this as a big win for AMD, primarily because if they have either bartered or negotiated some way to make all this kosher, it might eliminate the hefty penalties they were paying to GF for falling short of minimum wafer purchases. Apple would get away from Samsung, but use the same process... AMD gets access to Samsung's top node. I knew months ago that AMD was aiming for Samsung foundries and obviously had to sit on my hands... lol. However, I always wondered how AMD was going to outmaneuver GF on the deal... this makes perfect sense. Both chipmakers are pissed at their current contractual partners, and this solves all those issues while maintaining a semblance of familiarity. Very well played indeed.
 


That would be some of the worst engineering I've ever heard of, and that's saying something. Core locking should be done on the user side, never on the programmer side.

That being said, anything less than an i3 with HT is kinda useless for "gaming" these days. Users need at least one additional core free for miscellaneous OS-related tasks and UI, assuming the game itself is using one heavy render thread and one heavy core logic thread.
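For what it's worth, user-side core locking doesn't need anything from the developer; here's a minimal sketch using the psutil library (the process name is just a made-up placeholder):

```python
import psutil

# Pin a running game to logical cores 0 and 1 from the user side,
# with no changes to the game's own code. "game.exe" is a placeholder.
for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == "game.exe":
        proc.cpu_affinity([0, 1])                   # restrict to cores 0-1
        print(proc.pid, "->", proc.cpu_affinity())  # confirm the new mask
```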
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860
@8350

I remember stating, on one of the older threads about BD and Ivy, that AMD needs to find a way to get onto Samsung's facilities, but at that time they were strapped to GF's wafer agreement, and I didn't think they would ever be able to get out of it because of their financial problems.

All the Intel fans were trying to say that Samsung is behind GF because Samsung doesn't brag about their capabilities, but Samsung was throwing billions into their fabs, and that wasn't to build more 32nm capacity.
 

con635

Honorable
Oct 3, 2013
644
0
11,010
Posted on oc.net:
http://www.chiphell.com/thread-1182382-1-1.html

Apparently the 380x :/ wow if true!

[leaked benchmark charts]


 


I agree, but it certainly shouldn't be forced. Near as I can tell, my 2600K, at stock settings, with all but two cores disabled, runs fine at medium-ish settings in DA:I. So it's certainly not a hardware limitation.

Another worrying problem, at least for FC4: Core 3 is an HTT core on HTT-enabled Intel chips. So Intel chips are likely suffering a significant performance penalty, especially if Core 2 is being used at the time. Heck, if HTT is disabled, I'm not even sure the game would run (is Windows smart enough to re-allocate the core numbering if HTT is disabled?).

So yeah, bad programmers = bad port. Really that simple.
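If a port really must care about specific cores, it should at least query the topology instead of hardcoding indices; a rough illustrative check (again with psutil):

```python
import psutil

logical = psutil.cpu_count(logical=True)    # HTT threads count here
physical = psutil.cpu_count(logical=False)  # physical cores only

# With HTT off, logical == physical, and "core 3" may map to a different
# physical core than the developers assumed -- so never hardcode indices.
print(f"{physical} physical cores, {logical} logical cores, "
      f"SMT {'on' if logical > physical else 'off'}")
```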
 

con635

Honorable
Oct 3, 2013
644
0
11,010

Not sure if serious? You realise they're all multi-GPU above it? If this thing's priced like the 280X it will be a winner, especially since there are no mature drivers or Mantle in those tests; expect 10-20% more again.

 

8350rocks

Distinguished


R7-250X...?
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780
There are musings on the S|A forums that this chip is around 300mm^2 and 20nm. If those rumors are true, that means it's the 380X, as Fiji, which would be the 390X, is rumored to be a much larger die.

I told you guys it was awfully suspicious for Nvidia to release a new GPU that was faster than what AMD had and for them to only really talk about efficiency. Efficiency is a near meaningless metric for HEDT, and when you see a company (AMD, Intel, or Nvidia mostly) start talking about how their HEDT products are so efficient, you know there's something bad that's going to happen.

For Ivy Bridge, Intel talked up efficiency and we got a 5% to 10% IPC gain and a 5% to 10% loss in maximum clock speed. Similar situation with the IB-to-Haswell jump, but with smaller numbers.

Looking at what Nvidia, Intel, and AMD tell their tech "journalist" muppets to highlight as good about their products is a good indicator. If it's "wow, look at these frame rates!", it's probably going to be hot and use a lot of power. If it's "wow, such little power consumption!", it's probably going to be slow or disappointing. Tech sites don't want to call out bad hardware when they can avoid it. They like their free Titans, Intel EEs, etc.

I think people forget that the three of them know what each other is up to very well and that they're going to make marketing decisions that reflect the information they have that we don't.
 

jdwii

Splendid
Well, AMD can do the same if these results are true; it's quite efficient while beating a 980. I don't get hyped up much anymore, but if it's true and they have a $350 video card that is efficient, I will switch back. I wish they would get adaptive V-sync and allow me to use FXAA on older titles; I love that.
 
AMD Future of Compute Brief
http://www.iyd.kr/694

Exclusive Interview with AMD’s Robert Hallock – Future Prospects of AMD Explored
http://wccftech.com/interview-amds-robert-hallock-future-prospects-amd-explored/

Mantle API Revisited at Future of Compute – Will Allow Console Optimization to be Ported
http://wccftech.com/mantle-api-revisited-future-compute-console-optimiation-ported/

AMD delays financial analyst day to rethink product roadmap
http://hexus.net/tech/news/industry/77721-amd-delays-financial-analyst-day-rethink-product-roadmap/
 

Rafael Luik

Reputable
Sep 18, 2014
86
0
4,660

Which PSU doesn't come with a 6-pin PCIe connector (if not two of them), and what GPU doesn't come with a Molex-to-PCIe adapter?
 


Pico PSUs 8)

And that wasn't the point. Tourist is right. AMD hasn't put out a decent low-power graphics card for a good while now. Maybe they're just trying to squeeze APUs into that segment, which would make sense. The 7750 was roughly on par with Kaveri when using the DDR3 models. Since they're toying with stacked RAM and all that, maybe they won't need low-power GPUs anymore.

Cheers!
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Nothing could be further from reality! Efficiency and performance are related by

Performance = Efficiency x Power

Thus if you double the efficiency, you double the performance at the same power consumption level. This is true whether you are considering a 1W mobile device or a 300W HEDT device.
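A quick back-of-the-envelope illustration of that relation, with made-up numbers:

```python
# Hypothetical numbers, just plugging into Performance = Efficiency x Power
power_w = 200                             # fixed power budget in watts
efficiency = 50                           # performance units per watt (made up)

baseline = efficiency * power_w           # 10,000 units
doubled = (2 * efficiency) * power_w      # 20,000 units at the same 200W

print(baseline, doubled, doubled / baseline)  # -> 10000 20000 2.0
```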

Moreover, HEDT products are not designed in a vacuum; they are derived from the rest of the company's products. E.g. your FX-8350 is derived from a design aimed at servers and supercomputers. Servers and supercomputers don't consume 1000W; a top supercomputer can consume 20MW, and the electricity costs are correspondingly huge, so efficiency is even more important there.

AMD's main failure over the last years (pre-Rory) was to ignore efficiency as a key metric. It was the reason why AMD almost lost the server, supercomputer, and laptop markets and never entered the tablet/phone market, while surviving on the desktop only through low prices, which generated enormous debt for the company, which in turn led to the cancellation of many programs and cuts to R&D. The next figure is self-explanatory:

[figure]


Luckily for us, the new management understands that efficiency is key and this is the reason why Papermaster gave the "25x20" talk

http://www.amd.com/en-us/press-releases/Pages/amd-accelerates-energy-2014jun19.aspx

http://www.amd.com/en-us/innovations/software-technologies/25x20

At the recent Future of Compute conference, AMD again emphasized the efficiency goal:

[slide: AMD Energy Efficiency Improvement]


The 25x20 goal means that by 2020 you could have a 200W HEDT product which will be about 25x faster than current 200W products...
 

szatkus

Honorable
Jul 9, 2013
382
0
10,780


An APU in its current form (8 CUs), even with HBM, won't match a 7750. Low-end GPUs will still be moderately important.
But they can throw some HBM in to improve their efficiency.

 


Perf / Watt != Raw Performance.

I do agree that good efficiency relates to good performance, but the fine print is "at a power target".

Piledriver is a good example of that. The power target was ~125W and they settled on ~4GHz. Go beyond that and the perf/watt is stupid bad. The same happens with Intel designs. They envision a power target, then improve the efficiency of the implementation around that. Go out of that comfort zone and perf/watt gets dumb, but perf still goes up.

Cheers!

EDIT: This is in the context of HEDT, where I do agree Perf/Watt loses meaning for many of us.
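To put rough, made-up numbers on that comfort-zone idea, here's a toy model in which performance scales with clock while dynamic power scales roughly with frequency times voltage squared, and voltage has to climb past the design target:

```python
# Toy model, hypothetical numbers: perf ~ frequency, power ~ f * V^2.
# Past the ~4 GHz design target, voltage must rise sharply, so raw
# performance still goes up while perf/watt falls away quickly.
points = [
    # (freq_ghz, voltage)
    (3.5, 1.20),
    (4.0, 1.25),   # roughly the ~125W design target
    (4.5, 1.40),
    (5.0, 1.55),
]

k = 125 / (4.0 * 1.25 ** 2)    # scale so 4.0 GHz @ 1.25 V ~= 125 W

for f, v in points:
    power = k * f * v ** 2
    perf = f                   # pretend perf scales linearly with clock
    print(f"{f:.1f} GHz: ~{power:5.1f} W, perf/W = {perf / power:.3f}")
```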
 