AMD CPU speculation... and expert conjecture

Status
Not open for further replies.


Actually, the post is directly above this one...re-read it...and see what I stated. That's what I am going with until I get word otherwise from a reliable source. Not from juanrga speculation...
 


It seems that the 2GB of SDRAM just might be for the OS.

As for the 16x512MB sticks, that's a little much. They would need a pretty good memory controller for something like that.
 


16x512MB = 8 GB GDDR5

It would not be in "stick" form, but rather soldered on, and it would likely be using a unified memory controller.
 
Juanrga, I see two problems with your idea that any type of CPU beyond a 2M/4C SR part is no longer needed:

1. Traditional workloads will always exist. GPUs are horrible at crunching integers when latency is important. If AMD does go APU-only, then yes, the APU will compete just fine with the FX 8350 or even the 9590 in Mantle games, but what about games that don't use Mantle? There are games that people don't want to give up on. Look at how long Skyrim lingered around, and that was an abomination of an engine that used x87 instructions and scaled horrifically, like it was written by gamerk.

2. AMD is still making and selling HEDT parts as well as server parts. If the APUs were a real replacement, AMD would have simply cancelled Warsaw and no one would have wanted it because the APU Opterons would be better.

I've seen the whole "Warsaw for transitioning customers" line alluded to several times and every time it gets brought up it's implied that the only transition is to ARM and APUs. For all we know AMD might be talking about transitioning to a new high end server platform.

To me, the reason for not seeing an HEDT-class Steamroller dCPU is that there is nowhere to actually make one.

AMD has three choices:

1. Release SR on 32nm SOI in the year that Intel releases 14nm. Having a new chip at 32nm competing against a chip at 14nm is not a position anyone wants to be in; that's a massive gap.

2. Release SR on 28nm bulk, and watch clock speeds take a hit as well as overall performance.

3. Milk Vishera for all it's worth and translate your console wins in gaming into marketing wins through benchmarks, and into vast performance increases through software that will ride on the coattails of the APU and console developments.

Maybe it is just me, but releasing a brand-new product that's 300mm^2+, has to fit a sub-$200 price range, and has to compete with ~72mm^2 (very optimistic scenario) 14nm Intel quad cores seems like a completely futile effort. For the 22nm to 14nm shift, Haswell 4c with GT2 graphics is 177mm^2 at 22nm, so:

(14/22)^2 * 177 ≈ 72mm^2

It makes much more sense to wait until 20nm SOI comes out, so you can go from a 315mm^2 die at 32nm to roughly

(20/32)^2 * 315 ≈ 123mm^2

(very optimistic, of course, because of that formula) instead. At that point AMD could even squeeze a GPU onto a die in the 150mm^2 range, or add a module or two if HEDT dGPU + dCPU HSA is mature enough by then.
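Both of those estimates come from the same idealized area-scaling rule, which can be sketched as follows (the quadratic scaling is a best-case assumption, as the posts above admit; real shrinks never scale this well):

```python
def scaled_area(area_mm2, old_nm, new_nm):
    """Idealized die shrink: area scales with the square of the feature size."""
    return area_mm2 * (new_nm / old_nm) ** 2

# Haswell 4c + GT2 (177 mm^2 at 22nm) shrunk to 14nm:
print(round(scaled_area(177, 22, 14)))  # ~72 mm^2
# A 315 mm^2 die at 32nm SOI shrunk to 20nm SOI:
print(round(scaled_area(315, 32, 20)))  # ~123 mm^2
```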

Even if AMD wanted to focus on the HEDT platform and ignore everything else, they couldn't, because there is nowhere they can make the chips.

No one has even mentioned anything about the 22nm SOI that was supposedly coming to GF and got completely cancelled. It is at IBM now, supposedly.
 


GF was supposedly going to do FD-SOI; up until extremely recently they advertised the daylights out of it, especially at the SOI Consortium.

Now we're not hearing much about it. The following is all speculation, of course, but these are my thoughts:

1.) AMD desperately wanted to do FD-SOI after seeing SR (A) Core performance, so GF was working hard to get the product to ramp.

However...

Something at GF was not working; the ramp date for 28nm FD-SOI SHP kept getting pushed back further and further, until AMD finally said...screw it! They went with bulk. However, in order to get the performance gains they were looking for, so they would be even remotely close to the target performance envelope, they had to design the SR (B) core config, because the bulk process really hurt their ability to clock their APUs as high as PD.

I think the initial announcement earlier in the year about 4.0 GHz and 900 MHz on Kaveri was an optimistic projection assuming GF could finally deliver FD-SOI. Once they realized that wasn't going to happen, they scrambled to try to hit those goals on bulk...but they came up short. *SIGNIFICANTLY* short, in fact...

The result of that entire debacle is that they put the new HEDT platform on hold until they could figure out a way to do it that made sense for them. AMD knows they cannot put out something with a single-digit percentage performance increase and sell it well; to keep taking market share, they need a large increase.

This leaves us where we are, where basically we have to wait until they sort out GF and figure out what to do about HEDT. Which sucks...
 
I really don't care any longer if there's going to be any Steamroller high-performance CPU or if AMD is going APUs only; as long as the damn APU gets the job done with extra performance from HSA and MANTLE, I couldn't care less...

Currently Kaveri is castrated by the DDR3 RAM and the performance is not as expected, and I really don't want to grab any FX; they are good, just not that good in my opinion.

This Phenom II has lasted, and I've been gaming strong on it since Oct 2010; I am sure it can still do another year until the Excavator APUs/CPUs arrive, which should be a worthy and massive upgrade.
 


I see, that makes more sense. Thanks for clearing that up.

It would've been like old server boards if that were the case. lol ^_^
 


No, they are careful not to lie. It's clear you are easily impressed by these rather superficial tech demos, whereas the other people here who have actually written some code are much more skeptical.

Just to be clear, one doesn't have to lie to make misleading benchmarks or demos. Cue Hafijur's favorite, SuperPI.

This "10x overhead" you're repeating is just a generalization. There is overhead in ALL APIs. These are trade-offs made to speed development. The PS3/PS4 doesn't use DX, but they're not magically getting 10x faster games now, are they? Why? Because OpenGL has its own overhead.

Here's a rather candid video on Kaveri. He calls it a "low end" part and admits they have no clue how much speedup they'll get with Mantle BF4 on just the APU as they "haven't played with it yet."

http://www.youtube.com/watch?v=l3RY94zGda4

He did say "2 to 3 times faster" with a high end GPU. Which could be great but realistically who would drop $600 on a video card and only buy a $140 APU?
 

i didn't pretend, i just didn't 'read into' benchmarketing and delude myself with semantics, let alone stretch said semantics way out of proportion.
you're lying about the slide displaying fps. it showed an x-axis scale of fps range, not exact fps. i recall you avoiding (for days) posting your own calculation and results.

the lies just keep coming... you never said anything about mantle optimization on slide#13. the slide #13 showing bf4 sp bench didn't have mantle optimization, otherwise it would have been stated explicitly.

i know you never posted the roadmap so your hint(!) means nothing. actually it was another user who posted it first, then another user later. you only claimed to possess a copy, but never proved it like others did. i could correctly accuse you of lying in this regard.

if you knew it wasn't real and most were speculating, then why are you even arguing this?

what signals? there's now a server roadmap as well? LOL
at least you admit that slide#13 wasn't about servers (that's a start in the right direction).

you're the one who doesn't understand... if a 2x 4M die has 2 defective modules, it will have 6 working modules - 12M warsaw/interlagos (depends on how amd bins). if a 4M die has 2 defective modules, it will be a 2M/4C opteron (or an fx4000, if it performs even worse). if the 2x 4M die has 1 defective module (meaning 7 working modules - 14 possible cores), it's up to amd how to bin that. it will also depend on other parts like cache, imc etc. if a 4M die has 1 defective module, it'll be a 3M/6C opteron/fx depending on binning. you've been wrong from the beginning.
warsaw does not replace lower opterons, it only succeeds interlagos. you have a real problem distinguishing between "Succeed" and "Replace".
i asked for proof of warsaw being monolithic since you kept droning on about 12-16 core(instead of mentioning modules or mcm), you failed to provide that. this is the first time you mention modules and mcm after i pointed those out.
warsaw being an mcm package, it is simply not feasible to make fewer than 10 cores out of that package. you don't have to stretch it out as far as demand. i know that you are incapable of calculating demand. moreover, i never said amd would be making 2/4/6/8 core derivatives of warsaw. it's illogical to use a big 2x 4M mcm package to make 2/4/6/8 core cpus. there'd be too much wasted space, nothing to do with demand. amd knows that and they implement redundancies into the design so that yields are as close to 12-16 cores as possible.
you know what? someone better versed in servers should be the one explaining this to you, i use too many simple terms.

your poorly attempted sarcasm fails as a miserable diversion from your failure to provide proof (or maybe you were lying about this as well?). i asked for proof showing amd vp lisa su stating (this week, according to your claim) that amd is abandoning fx and that fx and opteron are eol products. don't avoid next time, please. and no trolling like "She did really mean "we are going to release a 10-core SR FX @ 5GHz tomorrow Stay tunned" :ROFL:" i know for sure i wasn't discussing anything like that.

evident where? do you have anything official? if you have, then please post it here. i thought that amd was carrying on with vishera/piledriver, but you claim the opposite, so official proof is necessary. the fm2+ platform getting improvements does not mean anything. llano apus got an integrated pcie controller when zambezi came out; that didn't mean amd was abandoning zambezi.
you're still misreading shipping roadmaps. i've started to think that you're doing it intentionally for shock value (according to toms rules of forum conduct, that is trolling).
please provide official proof that amd stated opteron is eol, semantics from shipping roadmaps don't count. and you seem to think that opteron 6300/4300's successor, warsaw, will be sold under the warsaw brand, not opteron. wrong.

yes, you confirm that you misinterpret shipping roadmaps that explicitly state on the roadmap image that they are intended for shipment timeline. before accusing me, why not improve your own reading comprehension?
the 'list' you 'provided' was full of mistakes and falsehoods that i've already pointed out.
what has temash's speculated failure got to do with your claim of temash being replaced by an arm soc? i asked for official proof, not speculation. the rest of your rant is worthless.
i only pointed out your switching. fabrication is different from shipment.
 

I thought this was actually the whole point of Mantle? As BF has been my favorite game for so long, if the marketing is right I will rethink my upgrades completely after Christmas. So, to answer that: guys who already have a $140 APU/CPU may go for a high-end card instead of a whole new system.

 


You said that the overhead was 10% in the poor case. I told you that was untrue, and now we see examples at APU13 showing 1000% overhead and a 3x improvement in performance.

Now your excuse is that they used too many calls... LOL

Months before APU13 I explained to you, with an example involving craters, how developers are forced to reduce the overhead by batching calls, which produces poorer games.

Developers cannot use more draw calls in current games (i.e. cannot create more realistic and complex games) because the hardware can't handle them.

Oxide showed an example with lots of objects being rendered independently. You couldn't draw all of them using DX, not even with a hypothetical i7-4770K @ 35GHz (yes, 35; it is not a typo). With MANTLE you can draw all those objects independently using hardware that you can purchase in a store.

MANTLE allows one to use the available hardware; DX can only use a fraction of it. Due to DX overhead, people are paying for hardware that games cannot use.



Not sure why you repeat what I am saying. I wrote "It will be funny to do the first HSA benchmarks..."

And of course, HSA software will appear in cases where the GPU scales well; otherwise nobody will write an HSA version. I already explained this before, but some people in this forum don't read. I recall saying some time ago that nobody will be doing an HSA-enabled version of a Word-like program, because current CPUs are more than enough for writing a letter. However, we will see HSA acceleration in Excel-like programs, photographic software, video encoders, web browsers...



In purely CPU workloads Kaveri will perform like an i5 for integer tasks, outperforming them in some specific cases (see the John the Ripper benchmark in my article).

Kaveri will perform rather poorly on floating-point-heavy tasks, because it has about half the resources (see the Himeno benchmark in my article). But this is not a problem, because the tendency is towards pushing those tasks to the GPU. Why do you believe Intel has developed the Xeon Phi accelerator? Because Intel CPUs are not fast enough to compete with GPUs in compute.

This was all shown in my article months ago, therefore I'm not sure why you feel the need to repeat the same...



😆 Everyone here knows that Kaveri is an APU=CPU+GPU. It is also explained in my article.

Moreover, pay attention to the above numbers. I gave a sum where the first number (118) is for the CPU and the second number (738) is for the GPU. The sum (856) is for the whole APU.
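For what it's worth, those three numbers are consistent with simple peak-FLOPS arithmetic, if one assumes a 3.7GHz quad-core CPU at 8 FLOP/cycle/core and a 512-shader iGPU at ~720MHz doing 2 FLOP/cycle (FMA). The clocks and widths here are my own assumptions chosen to match the quoted figures, not official specs:

```python
# Hypothetical peak-GFLOPS breakdown; assumed clocks/widths, not official specs.
cpu_gflops = 4 * 3.7 * 8      # 4 cores x 3.7 GHz x 8 FLOP/cycle   -> ~118
gpu_gflops = 512 * 0.720 * 2  # 512 shaders x 720 MHz x 2 FLOP/cycle -> ~737
print(round(cpu_gflops), round(gpu_gflops), round(cpu_gflops + gpu_gflops))
```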

For old games (poorly threaded) Kaveri will compete with the 8350, because it has four newer, stronger Steamroller cores. We will see Kaveri outperforming an 8350 in some older games.

For recent games (well threaded) the 8350 will be faster than Kaveri.

For future games (HSA, MANTLE) Kaveri will outperform the 8350 for two reasons: first, MANTLE eliminates the CPU overhead, allowing a less powerful CPU to feed the fastest possible GPU; second, MANTLE/HSA allows asymmetric multi-GPU rendering/compute. A future game could use the dGPU for rendering and the iGPU in Kaveri for post-processing, for example.
 


It looks more like both of you are trolling me.

The above article said that Temash was not competitive and was being replaced by an ARM SoC. Finally AMD is replacing Temash with Mullins.

The part about replacement was right; the part about the ARM SoC being the substitute for Temash was not. However, both of you pretend that there is no "replace" at all. LOL

The server roadmap says that Opteron-X is replaced by Seattle (ARM). Yes "replaced".



Is that a "yes" or a "no" to my question?

You do well not trusting my speculations. I was only correct regarding bulk, AM4, SR FX, the FX replacement... whereas your well-informed speculations about Kaveri being FD-SOI, drivers for a new AM4 socket mobo being released on a website, SR FX coming to AM3+, the FX replacement being announced for 2014... have vanished into thin air.

And let us not forget your other failed speculations: no PhysX for PS4, Jaguar cores clocked at 2.75GHz...
 


He said that the demo was up to 3x faster due to MANTLE. Your exact words were:



The bold part sounds like you are calling him a liar who could be using tricks and hidden optimizations to make MANTLE look faster than it is. But he is not lying, because we know that DX has a big overhead and that using low-level APIs provides at least a 2x performance gain.

I was not taking those demos the way you believe. If I were, I would have said that the MANTLE version of BF4 will be "2 to 3 times faster". However, I am not. I told you that I expect the MANTLE version of BF4 to be 30-50% faster.

The "10x overhead" is an approximate number for the overhead introduced by DX over a low-level API. The only way to eliminate all the overhead is the non-API approach, but reducing the existing DX overhead by a factor of ten is very important for games and other graphical applications.

You are also confounding draw-call overhead with speed improvement: 10x less API overhead doesn't translate to 10x faster. Pay attention to the APU13 talk that you are criticizing: the demo reduced the DX overhead by a factor of ten, but 'only' ran 2x-3x faster.
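The reason a 10x overhead reduction yields 'only' 2x-3x is essentially Amdahl's law: only the API/driver share of frame time shrinks. A toy model (the overhead fractions below are illustrative assumptions, not measured values):

```python
def frame_speedup(api_share, overhead_cut):
    """Amdahl-style speedup when only the API/driver fraction of frame time
    is reduced by a factor of overhead_cut."""
    return 1 / ((1 - api_share) + api_share / overhead_cut)

# If 40% of frame time is API/driver overhead and it is cut 10x:
print(round(frame_speedup(0.40, 10), 2))  # 1.56 - far from 10x
# A draw-call-heavy demo spending 60% of its time in the API:
print(round(frame_speedup(0.60, 10), 2))  # 2.17 - in the 2x-3x ballpark
```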

Funny that you mention the PS4 and its overhead, because I can quote the words of Timothy Lottes (the Nvidia guy who created FXAA):

The real reason to get excited about a PS4 is what Sony as a company does with the OS and system libraries as a platform, and what this enables 1st party studios to do, when they make PS4-only games. If PS4 has a real-time OS, with a libGCM style low level access to the GPU, then the PS4 1st party games will be years ahead of the PC simply because it opens up what is possible on the GPU. Note this won't happen right away on launch, but once developers tool up for the platform, this will be the case. As a PC guy who knows hardware to the metal, I spend most of my days in frustration knowing damn well what I could do with the hardware, but what I cannot do because Microsoft and IHVs wont provide low-level GPU access in PC APIs. One simple example, drawcalls on PC have easily 10x to 100x the overhead of a console with a libGCM style API.

We will see future PS4 games coded in the low-level API that will run about 2x--3x faster than those using the high-level API (OGL ~ DX).


And another misunderstanding-fest.

The slide #13 is displaying FPS. You keep confounding the concept of a slide displaying FPS in graphical form with the concept of a slide attaching labels to each bar. The labels are not needed. Labels can be added to each bar for people like you, but one can obtain the FPS from the slide, because they are there in graphical form. I got them. I posted them. I told you how to obtain them. Your problem is that you are still unable to obtain them.

You can continue calling others liars, but it would be better if you returned to school and learned how to read a bar graph.

The slide #13 was presented in October. The APU13 talks explaining MANTLE were given in November. Evidently I didn't mention MANTLE before, in October. I am doing it now. I am explaining to you now why what we know about MANTLE coincides with what I predicted the slide #13 was trying to convey. The slide #13 uses the ordinary version of BF4; I never told you that it uses the MANTLE version. You don't read, and when you do, you misunderstand everything.

I am tempted to explain all this to you again, but I feel you will misunderstand everything once again.

I see that you still don't get the stuff about servers. There is no new "2M/4C Opteron" for 2014. First, the Opterons are replaced by Warsaw/Berlin/Seattle. Second, Warsaw is only 12- and 16-core, which means that dies with two defective modules are trashed.

The FX4000 uses the old PD modules. There is no FX4000 with the new Warsaw-enhanced PD modules. Stop inventing things.

Nobody told you that Warsaw replaces the lower Opterons. In fact I gave you the following list:

Warsaw replaces Opteron 6300/4300
Berlin replaces Opteron 3300
Seattle replaces Opteron-X

Why not just read instead of inventing?

I have told you that Warsaw uses MCM technology (2x6 and 2x8). My exact words:

And another misunderstanding-fest.

AMD is making a big die with 4 modules based on a PD refresh. They will be using those for the new Warsaw 16-core (2x8 MCM). Dies with one defective module will be used for the 12-core version of Warsaw. Dies with two defective modules will be trashed. There is no 2/4/6/8-core version of Warsaw.
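The binning policy described in that quote can be written out explicitly. This is only a sketch of the claimed scheme for one 4-module die; the SKU mapping is the poster's claim, not AMD's actual binning rules:

```python
def bin_warsaw_die(defective_modules):
    """Sketch of the claimed binning for one 4-module Warsaw die:
    two perfect dies make a 16-core (2x8) package, two 3-module dies
    make a 12-core (2x6), and 2+ bad modules means the die is discarded
    (hence no 2/4/6/8-core Warsaw SKUs in this scheme)."""
    if defective_modules == 0:
        return "4M die -> half of a 16-core (2x8) Warsaw"
    if defective_modules == 1:
        return "3M die -> half of a 12-core (2x6) Warsaw"
    return "trashed"

print(bin_warsaw_die(0))
print(bin_warsaw_die(1))
print(bin_warsaw_die(2))  # trashed
```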

Again: why not just read instead of inventing?

According to your particular logic, we cannot conclude that AMD is abandoning Opterons, Trinity, and Temash because she didn't say those exact words during the opening keynotes. She also didn't mention a 10-core SR FX @ 5GHz; therefore, according to your logic, AMD could release one tomorrow because they didn't explicitly say the contrary :ROFL: Can the rest of us use our brains and think differently?

Did you see the 2014 desktop roadmap? Once again, the FX "don't even receive a refresh a la Warsaw and all the important stuff is being made in FM2+ platform for APUs".

Did you see the 2014 server roadmap? Once again, all the Opteron chips are being replaced. See the above server list and stop trolling.
 


What I have actually said is that Kaveri 4C will be able to compete with the i5 and FX-6000 for traditional workloads. See my article on BSN* comparing the Kaveri CPU to several SB/IB i5s using traditional workloads. From my article:

The above benchmarks use traditional software, which only uses the performance of the CPU and ignores the rest of the APU.

Then I have mentioned that Kaveri 4C will be able to compete with i7 and FX-8000 for HSA workloads. Again from my article:

The new Kaveri APU will show its real strength with HSA software, which will exploit the performance of both the CPU and the GPU. [...] With HSA enabled software, Kaveri has the potential of being much faster than an Intel i7 or an octo-core AMD FX.

I understand that noob still believes that the benchmarks in my article are about HSA or "APU optimized" (as he calls them), because he cannot read, but I don't understand why some other people insist on misunderstanding this. I confess it is a mystery to me. Are the above quotes not sufficiently clear and unambiguous?

In my article I didn't write about MANTLE, because MANTLE had only just been presented when I wrote it and I knew little about it. Now, after APU13, I know more. Please pay attention to the following slides:

IMG0043517_1.jpg


With MANTLE a blazing fast CPU is not needed. A "mid-range CPU will be sufficiently fast to drive the best GPU".

IMG0043521_1.jpg


With MANTLE the R9-290X is GPU-bound even with an FX-8350 downclocked to 2GHz.

FX-8350 @ 2GHz ~ 4 PD cores @ 4GHz < Kaveri CPU

MANTLE also introduces asymmetric multi-GPU coding. This means that future games can render the frame on an R9-290 while the iGPU in Kaveri does the post-processing of the scene.

With MANTLE games, Kaveri APU + dGPU will be overkill.

With HSA software Kaveri APU will be overkill.

With HSA + MANTLE... wow!

With ordinary software Kaveri will behave like an i5/FX-6000 approx.

Is my point understood now?

Regarding the rest of your post: AMD will be taping out 20nm bulk next year and will then move to 16nm bulk FinFET.
 

that slide isn't displaying fps. it is hiding the exact values while giving the illusion of a discernible fps difference. that's why exact fps values were needed. amd didn't provide the values, as part of their benchmarketing campaign to advertise the a10 6790k's non-existent performance advantage.

you only say this, but you never posted the values to me, even after days of avoiding.

others? i only accused you. even that's based on the evidence i've observed so far.
your claim on slide #13 was that amd was abandoning higher-core fx (hedt) for apus because the a10 6790k performed closely in an offline single-player bf4 benchmark. there was no relation to mantle (at least not when you started arguing with me on slide #13). in the previous post i replied to, you claim you predicted something about mantle. you never did, in reality. check your own posts.

i've already stated multiple times that i have little knowledge of servers. arguing with me about servers is like beating a paraplegic person. i only mistyped 12M instead of typing 12C in the reply post. that was my mistake. dies with 2 defective modules will likely become an fx cpu. (edit: simplified) this is amd's decision though. if they ditch dies with 2 bad modules, they waste the chance of selling them as a cheapo quad core cpu as long as they pass binning. the binning process has already been explained to you.

you're again putting words in my posts that i never said. did i ever say there will be an fx4000 cpu with the modules found on warsaw? please show me where i said that.

wow, this is so tangled i can't even trace backwards. i'll try. this started when you claimed, in your second hypothesis (which strongly reeked of a fact-based claim at the time of posting), that amd abandoned fx because fx didn't get revisions like warsaw (and you later admitted being wrong about both hypotheses).

i said amd doesn't waste silicon, and "the dual modules that pass binning will continue to be sold." <- this was about FX, in response to FX in Your post, not warsaw, because the dual-module cpus corresponded (in my post, as a response to yours) to fx4000 cpus. i know amd doesn't make dual-module cpus from dual-die packages, despite having little knowledge about servers. you responded-

since you didn't reply about FX, i thought it was another statement you failed to counter and switched again. i replied-

that was me trying to point out your fallacies.
you.are.the.one.who.started.arguing.about.warsaw.
you.are.the.one.who.brought.up."lower core".warsaw.versions. ->your own 'inventions', knowing amd won't make such sku from warsaw. and misunderstanding what i was replying to.
let's set my 'logic' aside for now. i asked for official proof of lisa su (per your claim) stating that amd is abandoning fx and opterons and that fx and opterons are e.o.l. just post the proof and we'll be done with this matter. please do not troll anymore and do not avoid.

i did, and vishera carries on, contrary to your claim of amd abandoning fx.
apus are meant to have more integrated components by design, as well as optimizations that facilitate apus. apus and cpus(fx) address two different market segments.

but where is the official proof you claimed to have, of opterons being e.o.l.? shipment roadmaps mean very little.

additionally, i asked for proof on amd replacing temash with an arm soc. please provide that as well.
 


Now it is "supposedly"? Wait. A couple of weeks ago GF 28nm SOI was the best thing in the universe and AMD going bulk was a bad decision (you told us). Now something at GF is "not working".

:ROFL:

If AMD had made the mistake of selecting SOI at GF, it couldn't be presenting Kaveri at APU13 and would probably have cancelled the entire project.

GF is a disaster, and always was. In the past, Cray lost some contracts because AMD couldn't provide Opterons due to GF delays and under-delivery.

The decision to go bulk was made many, many years ago. In fact, the guy who made it is no longer working at AMD. I recall 'experts' in this thread criticizing him for his decision, but he was right, and it has saved AMD from GF SOI.

In fact, GF is such a disaster that Samsung is taking over the SOI R&D just now.

About the rest of your speculation...

Your claim that AMD had to redesign the SR core because bulk couldn't provide the PD frequencies is nonsense. Kaveri is clocked at 3.7GHz; that is 100MHz less than Trinity (SOI) and 300MHz less than the FX-8350 (SOI). You present it as if it were 2GHz vs 4GHz, but it is not.

Also, Kaveri is not 100W. If it were 100W, they could raise the CPU clocks to 3.8GHz, matching Trinity's clocks on SOI.

Richland is 4.1GHz, but Richland is based on a mature process tweaked over years at GF, whereas Kaveri is their first product on the new 28nm high-performance process.

AMD never made an announcement "about 4.0 GHz and 900 MHz on Kaveri". Those were guesses made by people, including myself. In my article about Kaveri I added a table with three different combinations of frequencies, each of which gives the originally expected 1050 GFLOPS. The lowest frequency in my table is 3.8GHz. Kaveri is finally 3.7GHz. Wow! 100MHz of difference!

The reason why the Kaveri CPU is 3.7GHz and not higher is related to the fact that originally Kaveri was a 100W APU, but it now has a lower TDP. It is also related to the GPU being more powerful, eating more power, and generating more heat.

In Richland the iGPU was ~35% of the APU. In Kaveri it is ~50% of the APU.

The CPU in Kaveri will overclock beyond 4.5 GHz without problems. It will not break worldwide records either.

The reason why the GPU cores of Kaveri are not 900MHz but lower is related to bandwidth. The original Kaveri APU was planned to come with GDDR5 support (someone has told me that the final Kaveri die contains a disabled GDDR5 memory controller). If this is true, the decision to abandon GDDR5 was made very, very late.

Without GDDR5 the iGPU is memory-bandwidth limited, and increasing the frequency by 25% (or more) would increase the power and the heat much more than the graphics performance.
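That argument is essentially a roofline model: once the iGPU is bandwidth-bound, extra clock speed adds power but almost no throughput. A toy sketch, where the 600 GFLOPS bandwidth cap is a made-up illustrative number, not a measured Kaveri figure:

```python
def achieved_gflops(compute_peak, bandwidth_cap):
    """Roofline-style cap: achieved throughput is the lesser of the compute
    peak and what the memory bandwidth can sustain (both GFLOPS, illustrative)."""
    return min(compute_peak, bandwidth_cap)

# Suppose DDR3 bandwidth caps the iGPU at ~600 effective GFLOPS (made up):
base = achieved_gflops(737, 600)                # already bandwidth-bound -> 600
overclocked = achieved_gflops(737 * 1.25, 600)  # +25% clock -> still 600
print(base, overclocked)  # more power and heat, no extra graphics performance
```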
 


Therefore your reply consists of repeating the same mistakes, misunderstandings, lies, and insults... while continuing to ask for material that was given before. LOL
 

could you point those mistakes and misunderstandings out? it'd be really helpful.
what did i lie about? please point those out instead of making accusations.

i only asked for proofs of the following claims that you made:
that the slide #13 from a10 6790k intro event says that amd is abandoning fx cpus.
the calculations and intermediate fps results that you claim to have posted but never posted to me. i could not find those.
that lisa su stated (in this week) that amd is abandoning fx and opterons. <- official statement is needed.
that fx and opterons are e.o.l. <- an official eol announcement would be helpful.
that amd is replacing temash apu with an arm soc. <- again, official proof would be great.
 
I actually like gamerk's idea of an 8350 + GT630, haha. I wonder if the 8350 would be too hot for an HTPC case, hmm.... I think the FX4350 would be better suited for an HTPC. What will the price of Kaveri's top-of-the-line part be? Anyone know, or have an estimate?

If they are around $200, then the idea of an FX4300 with a cheap nVidia or AMD card would be a tough sell, unless the PCIe slot is needed for something else, like in my case.

Anyway, in regard to the GF situation, well, it sucks. Not even with the extra cash from AMD and the new buyers were they able to keep the roadmaps in the promised time frames. That is just sad. We all lose because of that, I guess.

So... When is BF4 MANTLE edition coming out? haha.

Cheers!
 
^^ right now, the top richland apu - the amd a10 6800k - sells for $140.
kaveri's (top apu) price depends on how amd perceives it. since the cpu and gpu cores appear significantly faster than their predecessors, amd might decide to undercut entry-level core i5 cpus instead of mid-range core i3s, e.g. start from $160-170 and drop (the core i3 4340 is $160 on newegg). usually apus started at $150 (msrp). supply will also be a factor in retail price.

on a slightly different note:
amd a10 7850k,
amd performance edition ddr3 1866 ram,
amd radeon hd 7730 or 7750 or 7770 with 1 GB ddr3/gddr5 vram whichever is able to dual gfx,
amd radeon ssd 128 GB,
Mantle api, drivers, and rest of the parts.
playing bf4 on it, with mantle support. - looks like amd is creating its own software and hardware ecosystem. mantle is the beginning on the software side. a sign of an upcoming closed-off world? i wonder...
if it was a laptop, would be awesome (gaming with mantle and directx both).
 


The above mentioned FX-8350 downclocked to 2GHz would be rather cold. haha!

The MANTLE update of BF4 is scheduled for December.
 


Fixed it for you.

2133 RAM, the 280X, and the 1TB HD is the configuration used by AMD in the infamous slide #13 :sarcastic:

Moreover, as far as I know there is no Radeon-branded SSD. Correct me if I am wrong.

But it seems more interesting to use an R9 290/290X with Kaveri. Why? Because those two cards already support "system unified addressing" and may be the perfect complement for a hUMA HSA APU.

Finally, it is worth noting that AMD has always made software; see the software section of their Wikipedia article. AMD, now reinventing itself as a "gaming company", is focusing its software development on gaming.
 

slide #13 has nothing to do with the system i was imagining.
i selected ddr3 1866 instead of 2133 because the a10 6800k's default compatibility is with 1.5v ddr3 2133 only. if kaveri retains that, 1866 would be more prevalent... unless 1.5v ddr3 2133 modules get cheaper in time after launch.
there's no radeon ssd right now. the radeon ssd is based on a s/a speculation/rumor.
http://semiaccurate.com/2012/11/28/amd-to-launch-radeon-branded-ssds/

yeah.. a more powerful gfx card will always deliver more gfx prowess. i combo'ed the most likely candidates for amd dual gfx.


edit: AAAGH!! i just realized that i missed a great chance to ask about radeon ssd in the recent amd ama. i know those guys are from gfx dept. what a wasted opportunity T_T.
 