AMD CPU speculation... and expert conjecture

Page 222
Status
Not open for further replies.

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


I did mean HSA-enabled applications.

Comparing an HSA-enabled APU to one of the faster Intel CPUs gives an idea of the potential of the technology. It is not very different from those reviews that compare OpenCL applications on a 3770K/4770K against an FX-8350 (which doesn't even have an iGPU).
 

GOM3RPLY3R

Honorable
Mar 16, 2013
658
0
11,010


I really do agree with you here. It is probably the way the CPUs are built in relation to their IPC. It's either:

A. Lower IPC somehow improves performance (which, you know, is kind of out there)

or B. The SB-E chips' physical design, and possibly other added attributes, cause the usual increase in performance. Simply, they were built more for efficiency, as that's what anyone would do when improving something.

Then we come to our answer, which is your statement:

The comparison is naturally flawed because we have no baseline to bridge between the 2 separate uarchs.

Right here, you are right, and yet I'm telling you you're completely wrong.

Put simply, creating a baseline is very hard. It requires many tests, tries and failures before it becomes your baseline. And that is with something as complex as our world of CPU technology: IPC, clock speed, core count, architecture... and so on.
---------------------------------------------------------------------------------------------------------------
There is an easy (on paper) way to address your statement.

Let's take a break and have a mind teaser.

Students A, B, and C each take a test 3 times:

Student A scored 90 on her first test, only 85 on her second, but 100 on the last. Her average is thus 92 (91.67 rounded; all results are rounded to the nearest unit).

Student B got 70, 75, and 60 on his. Avg: 68

Student C: 40, 50, 35 on his. Avg: 42

The average of all three students' averages is then: 67

Thus we have our "median" score on a 0-100 scale (strictly it's a mean of means, but I'll keep calling it the median).

Percentile Explanation:

So then, if someone does better, you need to do this. The percent increase or decrease depends on the distance to the top and to the bottom. 67 is 33 away from 100, our top, so that span of 33 = +100%; toward the bottom, the span of 67 = -100%. That is our percentile scale. For example, a few years ago I was in the 90th percentile for height growth; now I am in the 70th, meaning, if that average number didn't change, I'm growing less than before. In real life most percentiles use 50 as the median, but for this it'll be different, since it's easier this way.

Student A, with an average of 92, will be positive, but by how much?

With 33 being our +100% span, and 25 being A's distance from the median, we'll do some math.

Simply divide 25 by 33 (you could also cross-multiply and solve a multi-step equation).

25 / 33 = 0.7575 repeating. Let's round to the nearest tenth of a percent: 75.8%.

This means that Student A is at +75.8% (above average = +, below = -).
Student B: +3.0%
Student C (using 67 as the span, since he's below average): -37.3%
--------------------------------------------------------------------------------------------------
There you have it. Simply use stock CPU settings, plug and play basically, and collect their test scores (from trusted sources) to get their average "median". Then, if any CPU does better or worse, you can use this percentile method to see by how much. And obviously, comparing two CPUs, the better number wins: if negative, the smaller the magnitude the better; if positive, the higher the better.
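For reference, the scoring scheme described above can be sketched in a few lines of Python. This is a rough sketch only; the names `mean`, `baseline`, and `ppi` are my own labels, not terms from the post.

```python
# Rough sketch of the "median + percentile" scoring scheme described above.
# Names (mean, baseline, ppi) are my own labels, not from the post.

def mean(xs):
    return sum(xs) / len(xs)

# The three students' test scores from the example.
scores = {"A": [90, 85, 100], "B": [70, 75, 60], "C": [40, 50, 35]}

# Each student's average, rounded to the nearest unit.
averages = {s: round(mean(v)) for s, v in scores.items()}  # A: 92, B: 68, C: 42

# The "median" baseline: average of the averages.
baseline = round(mean(list(averages.values())))  # 67

def ppi(avg, base, top=100):
    """Signed percent index: +100% at `top`, -100% at 0."""
    if avg >= base:
        return (avg - base) / (top - base) * 100
    return -(base - avg) / base * 100

results = {s: round(ppi(a, baseline), 1) for s, a in averages.items()}
# A: +75.8, B: +3.0, C: -37.3
```

Running it reproduces the numbers worked out by hand above (+75.8%, +3.0%, -37.3%).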
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Thanks for this useful reply. I would say that by "A" and "B" I didn't mean threads but cores within a module.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


But this was not the case I was considering. I was considering something like this

350x700px-LL-d3796154_proz20amd.jpeg


where "moar cores" imply more performance because the engine is designed to go wide.

E.g. a dual core is loaded near 100%, and then a quad core gets its cores loaded above 94%, giving more performance; not two cores near 100% and two cores unused (which would give the same performance as the dual).

That is what I meant by "well-threaded", although I would like to see an engine loading all 8 cores above 90%.
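The loading argument can be put in numbers with a toy throughput model (my own sketch, not from the post; it treats per-core utilization as simply additive and ignores real-world cache and interconnect contention):

```python
# Toy model: effective throughput in "core equivalents" is the sum of
# per-core utilization. Ignores contention, so it's an upper bound.

def effective_cores(loads):
    """Throughput in core equivalents, given per-core utilization (0-1)."""
    return sum(loads)

dual = effective_cores([1.00, 1.00])                 # 2.0: both cores saturated
quad_wide = effective_cores([0.94] * 4)              # 3.76: engine goes wide, real gain
quad_narrow = effective_cores([1.0, 1.0, 0.0, 0.0])  # 2.0: two cores idle, no gain over the dual
```

The point it illustrates: a quad core at 94% per-core load delivers far more than a saturated dual core, while a quad core with two idle cores delivers exactly the same.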
 

szatkus

Honorable
Jul 9, 2013
382
0
10,780


I know you did. But it's only part of the puzzle AMD is assembling. HSA is a great technology, but only for some computations. We still need single-thread performance.
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860


If I had to guess, AMD got some pretty bad publicity for releasing 990FX AM3+ motherboards with no CPUs for them, even though they were compatible with Phenom II.

Unfortunately, it's the bad actors who make things like what you're wanting to do bad for a company's image. As you stated, it makes sense to want an upgrade path.
 

hcl123

Honorable
Mar 18, 2013
425
0
10,780


What you missed is that there is no HSA (Heterogeneous System Architecture) without a heterogeneous PE (processing element) present... otherwise it doesn't make sense to talk about HSA.

OpenCL IS NOT HSA.

OpenCL is only a way to do better multithreading with well-aligned, well-behaved data sets... OpenCL on the CPU alone misses a *LARGE* part of its potential. But it depends on how the SOFTWARE was designed; the CPU underneath, AMD or Intel or other, is agnostic. As a matter of fact it should be the other way around: OpenCL almost exclusively on an (i)GPU. Large, well-behaved, well-aligned data sets are typical of embarrassingly parallel workloads, which should feel very much at home on an iGPU, so here the CPU counts for almost nothing.

Of course, a $100 iGPU beating a $1000 CPU to the point of absolute ridicule is something the "powers of marketing" don't want you to be aware of; that is why Luxmark has been co-opted for the next season of the "premier league football championship" for morons only, lol.

In the end what you want is to discuss SOFTWARE and carve it into the responsibility of a single PE (processing element), in this case a CPU (to the delight of the powers of marketing, I'm sure)... and blindly hope that flies (either intentionally or out of deep ignorance).
 

hcl123

Honorable
Mar 18, 2013
425
0
10,780


Yes, that is CMT (cluster multithreading), where "one thread = one cluster, which is a CPU core"...

But this doesn't apply to SMT (simultaneous multithreading), and so not to Intel... so for the sake of comparability, when you talk about A or B you must talk "threads", not cores.

Imagine "horizontal multithreading"... you now have a cluster = core, each one being "2 ALU exec pipes" + "2 AGU (mostly) exec pipes".

Now divide that into 2 sub-clusters, somewhat independent in operation from each other... then you will have "2 new clusters" for each former one: one "2 ALU" and the other "2 AGU (+ some ALU + MOV)".

Now you can have 2 threads in a single former core and it IS NOT SMT; each thread sees all resources for itself, as if it were alone in a core (which is now = 2 sub-clusters), because switching is incredibly fast (like vertical multithreading on a fetch, branch or decode). It is "switched on event" (block multithreading) on clusters, in this case sub-clusters... and you can still call it CMT (cluster multithreading) all along, too.

What you have, with each "core" now divided into "2 PEs (sub-clusters)", one part doing almost exclusively memory operations and the other pure integer ALU execution, is a DAE (decoupled access/execute) architecture...

It's an old idea:
http://en.wikipedia.org/wiki/Decoupled_architecture
http://courses.engr.illinois.edu/ece512/papers/smith.1982.isca.pdf
http://personals.ac.upc.edu/jmanel/papers/euromicro98.pdf

And AMD found that, if well implemented, it can have plenty of advantage over a "traditional approach" (up to 113% better in one tested approach)...

http://www.xbitlabs.com/news/cpu/display/20120207135832_Engineers_Show_Way_to_Improve_Performance_of_AMD_Fusion_Chips_by_20.html

http://news.ncsu.edu/releases/wmszhougpucpu/

"CPU-Assisted GPGPU on Fused CPU-GPU Architectures"

Authors: Yi Yang, Ping Xiang, Huiyang Zhou, North Carolina State University; Mike Mantor, Advanced Micro Devices

Presented: Feb. 27, 18th International Symposium on High Performance Computer Architecture, New Orleans

Abstract: This paper presents a novel approach to utilize the CPU resource to facilitate the execution of GPGPU programs on fused CPU-GPU architectures. In our model of fused architectures, the GPU and the CPU are integrated on the same die and share the on-chip L3 cache and off-chip memory, similar to the latest Intel Sandy Bridge and AMD accelerated processing unit (APU) platforms. In our proposed CPU-assisted GPGPU, after the CPU launches a GPU program, it executes a pre-execution program, which is generated automatically from the GPU kernel using our proposed compiler algorithms and contains memory access instructions of the GPU kernel for multiple threadblocks. The CPU pre-execution program runs ahead of GPU threads because (1) the CPU pre-execution thread only contains memory fetch instructions from GPU kernels and not floating-point computations, and (2) the CPU runs at higher frequencies and exploits higher degrees of instruction-level parallelism than GPU scalar cores. We also leverage the prefetcher at the L2-cache on the CPU side to increase the memory traffic from CPU. As a result, the memory accesses of GPU threads hit in the L3 cache and their latency can be drastically reduced. Since our pre-execution is directly controlled by user-level applications, it enjoys both high accuracy and flexibility. Our experiments on a set of benchmarks show that our proposed pre-execution improves the performance by up to 113% and 21.4% on average.

... it can also be applied to typical CPU workloads, as I described, in those "decoupled architectures"... so when AMD has a "4 threads per module" or "2 threads per core" implementation, I'm NOT talking about "weak threads", but about each thread most probably being more performant than in a current BD/PD core, and so more performant than on Intel CPU cores too... and this on the same basic 2 ALU + 2 AGU config of today.

Yes... many will say I'm dreaming, that it's not possible... but I assure you it is possible. The traditional approach is just that, TRADITIONAL & OBSOLETE... it's time for something new and better.

So, "independent" of the performance on any particular benchmark, which is typical of and dependent on the SOFTWARE of that benchmark alone and not really generalizable... when I say AMD has, with its "decoupled BD uarch", a far better *potential* uarch than anything Intel has so far... I'm not saying it out of fanboyism, and I hope the long explanations help as arguments (lol)

... what happens most of the time is people parroting marketing slogans that have little bearing on reality... but I think it's a sign of the times, not that one company is better than another... real science, innovation and some risk-taking are gone... it's the time of marketing... the time for the "ism" aficionados to beat each other to death over ridiculous 10, 20 or 30% differences, and it's driven by marketing.
 

8350rocks

Distinguished

It makes no sense for an older architecture with lower IPC and less clock speed to be better at something that doesn't take advantage of its only advantage... 2 more cores! Core counts matter when you look at apples to apples, same architectures... you cannot deny that AMD and Intel architectures both see FPS increases when adding cores, up to 8 physical cores.
(quoting GOM3RPLY3R's baseline and percentile post above)

Look, you're getting way out into left field... if this were Wrigley Field, you might already be lost in the ivy on the outfield wall....

I laid out the conclusions we could logically arrive at based on the data... and you're trying to extrapolate results from it that are tangential at best and irrelevant at worst.

The question at this point is not AMD vs Intel; I think that's where you missed the boat...

The question is: do more cores make a difference in these highly threaded games?

The answer is, in anything but an RTS title... overwhelmingly YES! They do make a difference.

 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Steamroller will increase CPU performance up to Ivy Bridge levels, which is more than enough for the average user. HSA is an extra boost for lots of applications: browsers, games, video editing, image manipulation, spreadsheets...



I think you are confusing me with szatkus. He is the one who said that HSA = OpenCL.

Both 8350rocks and I told him that HSA is something much more important. In this same thread, I recommended to szatkus the Tom's Hardware article explaining the differences between HSA and OpenCL.



But I was not comparing with Intel. I was comparing BD/PD modules to Steamroller modules.

Meanwhile, I found an article that discusses the effect of doubling the decoders within a module, and it reaches the same conclusion I got above: two decoders working in parallel offer a theoretical limit of double the performance. Of course, the average improvement will be far from doubling, "since it's rare that a 4-issue front end sees anywhere near full utilization". Still, this was one of the main bottlenecks of the BD/PD arch and I am happy that it is being eliminated.
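One way to see why doubling decode sets a ceiling of 2x but rarely delivers it is an Amdahl-style bound. This is my own illustration; the fraction `f` (how often decode is actually the bottleneck) is a hypothetical number, not a measurement:

```python
# Amdahl-style bound: if decode limits the pipeline for a fraction f of
# cycles, doubling decode width only speeds up that fraction.

def decode_speedup(f, width_gain=2.0):
    """Overall speedup when the decode-bound fraction f gets width_gain faster."""
    return 1.0 / ((1.0 - f) + f / width_gain)

decode_speedup(1.0)  # 2.0: fully decode-bound, the theoretical doubling
decode_speedup(0.3)  # ~1.18: a front end that bottlenecks 30% of the time
```

So the "theoretical limit of doubling" only materializes when the front end is the bottleneck on essentially every cycle, which matches the quoted caveat about 4-issue utilization.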
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


SR will likely see the largest variance in % performance increase in the history of AMD architecture evolutions.

I would expect benchmarks where Bulldozer/Piledriver scaled very well, like media encoding, compression apps and rendering, not to see much improvement. In those apps the decoder already has all the micro-ops in the cache, and you don't see the backend (execution) being starved for work. There will be some minor increases due to cache changes and a probable clock speed increase. You may see only 5% here.

In productivity apps, games (hopefully) and general multitasking you'll see a much bigger improvement. These are scenarios where the instruction decoder couldn't keep up with the more varied instruction workload, and the backend is starved. These apps are where you'll see the 25-30% improvements.

In general, SR should act more like a true 4C/4T processor than a 2M/4T processor.

In retrospect, I still hate that they went with this module design. It limits your overclockability and TDP scalability by having the cores tightly coupled: the cache and FPU have to run at the higher of the 2 clock speeds, even when the B core is down-clocked. They could have scaled up Jaguar cores for higher IPC and clock speed.
 

cowboy44mag

Guest
Jan 24, 2013
315
0
10,810


+1 I know that all the speculation is that Kaveri APUs will feature 4C processors, but I wonder if that will carry over to Steamroller FX? Will Steamroller FX be 4C, or will they still offer 8C processors? With all the improvements of Steamroller over Piledriver, an 8C Steamroller FX processor should be amazing, possibly even a "game changer".

I know that there are a lot of Intel fans still saying a lot of people over-hyped BD (and therefore Steamroller is being over-hyped as well), but really, I can excuse AMD for rushing BD (I think everyone knows that PD is what BD should have been); you guys also have to remember that Intel just did the same thing with Haswell. For Intel, that is unacceptable. They have a huge R&D budget, and for a fourth-generation Intel processor the "gains" made are a joke.

That leads to Intel's biggest ace in the hole. Everyone talks about their R&D department, its huge budget and its talent, but their most important departments are their PR and Marketing departments. I wish I could get a couple guys or gals from their PR & Marketing departments to come to my ranch for a few days and do their magic. I have over 60 horses at my ranch, so what manure I can't spread I give away to local farmers or pay to have hauled off. If I could get Intel's Marketing Department magic on my side, I could be selling 10lb bags of it for $100:D If ANYONE can make a worthless pile of... manure seem like the greatest thing man has ever discovered, it's Intel's Marketing Department:lol:
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


The current roadmaps don't have any 8+ core SR in sight for 2013/2014. Perhaps in 2015 they will make a comeback or they could just skip to EC.

We'll probably see an 8 core Jaguar first for servers.
 

GOM3RPLY3R

Honorable
Mar 16, 2013
658
0
11,010

I feel that you are misunderstanding something here. This isn't about AMD vs Intel; this is about making an easy solution to see whether IPC or more cores works better. Obviously, there is not one CPU out there where, if you add cores, it won't improve.

The point of this is to make a very averaged, overall performance statement based on overall performance results. This is to see, "Okay, CPU X compared to Y is better, and when you overclock it, it has a better performance percent index (PPI, as I call this system) than CPU B."

Basically, it gives an interpretation of how well a CPU would do if someone did every kind of thing, from gaming to video editing, coding, and so on. Now yes, no one does EVERYTHING; however, giving these CPUs a general score will help show whether they improved overall from a core or IPC increase. They can also be sub-divided by category (gaming, video rendering, multi-threaded apps, etc.) and judged better there.

It's really to benefit the computer community as a whole, giving them straight results that show how the performance of our machines is coming along; and if someone wants to do something specific, say play games, they can say, "Okay, CPU X has a better performance increase than CPU Y." So on and so forth.

Don't be so quick to judge; take time to think it out first.
 

cowboy44mag

Guest
Jan 24, 2013
315
0
10,810


That's a little disappointing. I know that if Steamroller is as good as I think it will be, a 4C Steamroller FX will be a very good processor (equal to or greater than an 8C Piledriver), but an 8C Steamroller FX would be awesome.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810
Finally finished listening to the earnings report.

Those Kaveri delay rumors are pretty much confirmed now. There was absolutely ZERO mention of Kaveri or Steamroller.

The entire call basically revolved around Jaguar and the console wins, which is good for AMD's bottom line, but for those waiting for SR, the wait will be even longer now.

To get their operating costs down they must be cutting into R&D, and that's delaying the higher-end cores. :(
 

hcl123

Honorable
Mar 18, 2013
425
0
10,780



I think there will be 2 Kaveris, one for "mobile" (down to ultrathins)... and one for mainstream/desktop.

The first will be low-power 28nm "bulk", and is incredibly "late" compared with TSMC's 28nm, which debuted almost 2 years ago... the other one will be 28nm FD-SOI (which, as GloFo announced, is comparatively like a half-node shrink versus 28nm bulk).

So I think it is this last one that is supposedly late... but after all, GloFo officially announced that 28nm FD-SOI would only be ready for full production ramp-up in 2014... and though porting a bulk design to FD-SOI is quite straightforward (both processes are planar and share the same BEOL and some of the MEOL), I also think AMD will take advantage of the shrink to beef something up, especially if not exclusively the FlexFPU.
 

hcl123

Honorable
Mar 18, 2013
425
0
10,780


It's the same thing comparing with BD/PD. One of AMD's main mistakes was claiming 2 cores per module when the performance of 2 threads in a module wasn't anywhere near comparable to 2 cores. I think it's possible, with "vertical multithreading" as I described, to theoretically achieve 90% of the performance of 2 cores (real world, perhaps a little above 80%)...

In any case, it is better to talk about threads, not cores.

Twice the decode doesn't mean twice the performance in *ANY* circumstance. Not even 2 full cores means twice the performance; there are cache and interconnect constraints, and synchronization for multithreaded software to account for.

Only "vectorization" and fully "embarrassingly parallel" data sets on an embarrassingly parallel processing element (like a GPGPU) can sidestep Amdahl's Law; that reframing is called Gustafson's Law. It means data must be very well aligned and very close to the ALU pipes, in huge "register files" like GPGPUs have. Tahiti, as an example, has 12MB of on-chip "memory"; of that, 8MB are register files and another 2MB are scratchpad memory (register extension)... only ~2MB are "normal" caches.
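The two laws named here differ in what they hold fixed, and a short sketch makes the contrast concrete. These are the standard textbook forms; the 95%/64-processor numbers are illustrative, not from the post:

```python
# Standard textbook forms of the two scaling laws mentioned above.

def amdahl(p, n):
    """Fixed-size speedup with parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson(p, n):
    """Scaled speedup: the problem grows with n, so serial work stays small."""
    return (1.0 - p) + p * n

# With 95% parallel work on 64 processors:
amdahl(0.95, 64)     # ~15.4: capped by the serial 5%
gustafson(0.95, 64)  # ~60.9: scaled (embarrassingly parallel) workloads keep scaling
```

This is the distinction the post is gesturing at: for fixed-size workloads the serial fraction dominates (Amdahl), while workloads that grow with the hardware, like well-aligned data-parallel kernels on a GPU, keep scaling (Gustafson).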

This topology is kind of impossible for CPUs... there it's better to have much smaller, and so much faster, RFs, because the data is inherently sequential.

To me, "decode" being the "uber problem" is a marketing slogan (any CPU, or GPU for that matter, is already full of tradeoffs)... simply put, x86 has not much more to give here; it has "plateaued". CPU work is inherently very repetitive, small loops of instructions over relatively small data sets, so the best way to overcome decode constraints is with a "decoded cache" AFTER the decode engine, and if it traces, all the merrier... which Intel addressed by having a straight "decoded L0" (after decode) since Sandy Bridge.

Yet it was not because of that that performance doubled; it was more like 10 to 15 or 20% (depending on SOFTWARE)...

Performance can only double OR MUCH MORE if you break the "strong dependency model" of x86... one easy way to achieve that is fully vectorizing and parallelizing the workload (a more difficult one is SpMT (speculative multithreading), or "dataflow approaches" with HTM (hardware transactional memory)); this means the SOFTWARE must be designed for the effect (which is only mildly the case for the benchmarks in question, with no representativity of the software people actually use).

So I think AMD can achieve the same effect as a "decoded cache" with buffers... Jaguar already has them for the "fetch" section; SR can have additional (loop-caching) ones before dispatch and after decode.

This mitigates the sharing of those "vertical domains", and I think this is specifically what AMD is after: 30% more ops is if you count 2 threads on the same "module"; count only 1 thread per module and it's half that. They are after "greater parallelism"... but who cares about super-duper stupid obsolete single-thread performance, right?? ... only an idiot (marketing content creating masses of them)...

... continue with that single-thread approach and there is NO POINT AT ALL in making CPU chips with more than 4 cores. It's not rigged that a $1000 "extreme" CPU can lose to its 4-core $300 brethren (as since SNB compared with SNB-E): the benchmark code has only 4 threads or fewer (like most PC software), so running it on 6 cores (and more threads) is no benefit at all... and then the higher clock/turbo decides.

I hope this "illustrates" the stupidity well.

 

hcl123

Honorable
Mar 18, 2013
425
0
10,780


On the contrary: the cluster approach, by permitting the workload to be divided into "separated" types of instructions and data, is the feature that most helps "clock cycle" (academia simulations have shown this for a long time)... on an AMD module, only the FP code "type" is off the CPU code paths.

But it's already clever, prone to achieving higher clocks AS DEMONSTRATED... and prone to future enhancements... and no, BD/PD is NOT a P4; as a matter of fact BD has fewer pipeline stages than SB/HW, 15 compared to 17 for SB/Hasfail. They just took the possibility of separated code paths to streamline the CPU cores further (they went a little too far, I think).

 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860


I think the issue is slow GloFo. Seems they still haven't got good yields at 28nm, and AMD is paying per wafer, not per working chip. AMD is at their mercy right now as far as 28nm SOI timing goes.

Seems Samsung may be the way AMD needs to head ~2015 to get some low-power 14nm parts. If GF is still fighting 28nm, I don't see them getting to 14nm till 2016 or later.

http://www.macrumors.com/2013/07/14/apple-reportedly-signs-deal-with-samsung-for-14-nm-a9-chips-starting-in-2015/
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


And media encoding, compression apps, rendering... would benefit a lot from HSA.



AMD-Server-Roadmap.jpg


The server roadmap shows that AMD is either replacing the 4/8 Piledriver-core series with 4 Steamroller cores, or eliminating the 4/6/8 Piledriver-core series and releasing the new Warsaw CPUs with only 12/16 Piledriver cores. I suspect that Warsaw will be replaced by a future 6 Steamroller-core CPU/APU.

I doubt that AMD will be releasing an FX series with 8 Steamroller cores. Note that a 4-core Kaveri's single-thread performance would be about that of an i5-3350P, whereas its multi-thread performance would be like an FX-6300, more or less.

Moreover, Kaveri comes with HSA support: about a 500% increase in performance in parallel workloads. A hypothetical FX with 16 Steamroller cores wouldn't be so fast in those workloads.

About AMD/Intel: both posted their results for Q2 2013. AMD's results were better than analysts expected; Intel's were poorer than analysts expected. AMD stock went up as AMD gains confidence; Intel stock went down as Intel loses it.

Moreover, Intel's marketing dept. is having a hard time. Recently they launched a news offensive about how Intel's next mobile chip outperformed the fastest ARM processor... but experts showed that the AnTuTu benchmark used had been cheated. A corrected version of the AnTuTu benchmark showed, a couple of days ago, that Intel mobile chips are very far from the performance of the fastest ARM processors.

Doesn't this resemble what Intel did with SYSmark and AMD?
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


Till today, when the stock is in free fall (-14%). The analysts didn't sound too impressed. They've known about the XBone/PS4 wins for a while, and that's already been factored into the higher share price. They need another big win, like Google or Facebook looking for a custom APU.

The complete lack of any mention of Kaveri/Steamroller troubled me. In January they at least mentioned that Richland was shipping to select OEMs.
 