AMD: "No Doubt, We Lost Market Share in Q2"


clownbaby

Distinguished
May 6, 2008
I'm not sure how AMD competes at all in the consumer space anymore. Their processors lag their Intel counterparts badly in both processing power and power consumption.

The basic architecture of AMD's fastest processors is roughly equal, clock for clock, to Intel's Core 2 lineup that's 6+ years old. Obviously lots of cores, some software enhancements, and higher clock speeds make up some ground, but it's kind of like putting lipstick on a pig.

Their only way of competing is on price, where in certain situations they do offer more performance than Intel (in highly threaded applications and integrated graphics) while losing on power consumption. The combination of charging less and having higher production costs than Intel is a force multiplier of bad news for investors.

I would say that the graphics department could be the silver lining, but after years of putting out monster performance at any cost, Nvidia finally reined in its power consumption and noise and put out a truly superior product.

It's too bad, really, that AMD can't compete on a truly competitive level at the moment. Maybe they'll learn from Bulldozer, but more than likely they're too far behind Intel now to ever catch up in a meaningful way.
 
It seems to me that AMD has a few problems:
1) A confusing lineup. We have APUs, Athlons, Phenoms, and FX chips just on the consumer side of the market alone. Each product line has its own naming scheme, and there are huge overlaps in pricing and performance. I know this is not the end of the world, and it is not indecipherable if you do a little bit of research, but compare it to Intel: Intel has a bunch of different product lines, most of which share a single unified naming scheme, so even without the 'i3' or 'i7' name up front you can see something like the 2100 and the 3770K and know that the 2100 is a 2nd-gen Core processor (2), in the 1st tier of performance (1), at the lowest clock speed of that tier (0), with basic graphics for its level (0), while the 3770K is a 3rd-gen part (3), in the 7th tier (7), with a high clock speed (7) and basic graphics for its level (0), and the K denotes that it is overclockable. A CPU like the 2125 is going to be similar to the 2100 (because the 2 and 1 are the same) except that it has a higher clock speed (2) and the premium graphics load-out (5). It is a simple system that makes for easy comparisons between processors, which is something the average consumer relies upon to make decisions (see the rough sketch after this list). Having a naming convention that is arbitrary, or merely sequential by date of release, does not help anyone, and it scares off customers.

2) The APU branding is confusing. While most of us know that there is no real difference between an APU and a CPU, Joe consumer is very confused on the subject, or uneducated to the point of not even considering an APU. I mean, why bother calling it something else when it is just a CPU with premium graphics? The function it serves is the same, so don't make something more complicated than it has to be. There is a huge market of people who would be well served with an APU, but because there is no general understanding about what it is, people purchase what they feel they can understand (whether they understand it or not).

3) I can say with some certainty what the general lineup will be for Intel for the next 4 years (Haswell, Broadwell, Skylake, and Skymont, each released roughly 12-15 months apart). Every 2 years they come out with a new generation of CPU with an anticipated ~20% performance-per-watt increase, and in the off years we get a process shrink (smaller die size) with a more modest 5-10% performance increase. This allows professionals to plan their budgets and time, and to request the funding they need for their systems, well in advance of a product release. With AMD we do not get that. Regardless of how good or bad a product might be, a person cannot wait almost a year after an anticipated release date to upgrade their systems. When a product is being released we need more than a one-month window between the announcement and the product launch. It is OK to make a product announcement a year out and then be off by a month or two; we just need to know a general time frame so that we can plan our lives. Release Trinity, and then have a predictable release schedule that you can meet.

4) Again with public confusion: the FX 8-core processor does not have 8 cores. I realize that it is not HT technology, since software does not need to be written specifically to use the extra 'cores', but it is still much more similar to an HT core than to a full processor core. They would have been better served calling it a quad core with something attached than to have all the debate and confusion around the product, because it drives people away. Any time you advertise a product and even your fan base has to say 'but it isn't really...', that is a lost sale.

5) Performance per watt (!/W) is not nearly as important as our pop culture makes it out to be, but it is in the pop culture, so there needs to be more effort toward that end. Most of us don't care if our CPU takes a 77W load or a 120W load, just so long as it pushes out performance and the wattage stays below the level where cooling becomes an issue (like those poor Pentium 4s that ran so hot back in the day). However, in the mobile market it really is an issue, and they really need to work on that.

6) And most important: !/$ (performance per dollar). Here AMD has ALWAYS been king of the hill, until this last year. Yes, an argument can be made that the APUs have better !/$ once the graphics are considered, but honestly APUs have overkill graphics for office and home use and severely underpowered graphics for games. Even an HTPC playing Blu-ray content can run just fine on Intel HD graphics, so really, what is the point? I am not saying that there is absolutely no market for it, but they would be better served as a company either cutting back the IGP to make the chips cheaper and more competitive, or going nuts with the IGP to make a dedicated GPU unnecessary while staying cheaper than an Intel chip paired with a GPU.
On the other half of the spectrum you have the FX series, which performs worse while being more expensive than the old Phenom II line on many metrics. The new chips are complicated to make and have a high failure rate, which makes them expensive to produce. We now have perhaps the first time ever that Intel CPUs are cheaper than AMD CPUs while offering devastatingly better performance in most use cases. With the exception of a few high-end niche applications (3D work and video editing on dual-CPU rigs), Intel beats AMD on price and performance, and it is a wonder that anyone purchases their products at all (unless upgrading an old AM3 rig they already own).
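To make point 1 above concrete, here is a rough, purely illustrative Python sketch of the decoding described there. The field meanings follow my reading of the Sandy/Ivy Bridge model numbers as laid out in that point, not any official Intel specification, and the function name is made up for the example.
[code]
# Illustrative only: decode a Core i-series model string the way point 1 reads it.
def decode_core_model(model: str) -> dict:
    suffix = model[4:]                          # "K" = unlocked multiplier, "" = locked
    gen, tier, clock, gfx = (int(c) for c in model[:4])
    return {
        "generation": gen,        # 2 = Sandy Bridge, 3 = Ivy Bridge
        "tier": tier,             # 1 = entry (i3-class), 7 = high end (i7-class)
        "clock_level": clock,     # higher digit = higher clocks within the tier
        "graphics_level": gfx,    # 0 = basic IGP, 5 = premium IGP (per the post)
        "unlocked": suffix.upper().startswith("K"),
    }

for m in ("2100", "2125", "3770K"):
    print(m, decode_core_model(m))
[/code]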

I like AMD as a company: they have great customer service, they are loyal to their employees, they pushed the envelope of technology and innovation for several years, and they have always been the ones to bring a cheaper workhorse CPU to market; even when they did not have the speed crown, they were always the cheaper alternative. But now they are failing, specifically in marketing their products and giving clear choices, but also in providing products that meet the needs of the user at a lower budget than their juggernaut competition. They have damaged trust in their ability to put out compelling products on a time frame, and trust that their new products will be better than the old ones. They are not so far gone that they cannot recover, but they need the fighting, pioneering spirit the company seems to have lost in order to make a comeback.
 
[citation][nom]blazorthon[/nom]Actually, Windows 8 helps FX and presumably also Trinity more than the other CPUs and it does so quite significantly because it optimizes for Bulldozer's more unique scheduling characteristics. The difference should be quite noticeable. Beyond that, FX isn't overpriced at all right now. It was when it first launched, but not anymore. AMD refers to FX by their actual core count. A core is a math unit that processes integer math. Each module has two such cores. If you don't like it, well, oh well. Why would AMD sell the eight core FX-8120 for $125 when it beats the top i5s in highly threaded performance, the whole point of an eight core CPU? That makes no sense at all. The same goes for the FX-8150. The FX-6100 and the FX-6200 are more on-par with the i5s in highly threaded performance, but they should also not need to drop further at this time. Also, AMD's stock coolers with the FX CPUs are the best stock coolers of any current CPU in or even near their price range. They kick the crap out of Intel's junk coolers. Also, I'll yet again mention the fact that disabling one core per module grants a significant performance per Hz boost due to each module not needing to share resources between two cores, an increase of about 25%, while decreasing power consumption by 30%-40%. That's a huge increase in lightly threaded performance per watt that leaves a ton of overclocking headroom, not that they didn't already have a lot. I wouldn't do that on a quad core FX, but the six and the eight core FX CPUs are excellent for this.[/citation]
The key words are 'in highly threaded performance'. How many office programs and web browsers make proper use of multiple threads? How many games take advantage of more than 4 threads? If you are using professional applications for video editing or 3D work then yes, the 8 cores will (barely) beat the Intel chips. But the fact of life is that even power users rarely use more than 2-3 cores, gamers only use 4-5 cores 99% of the time, and when using those cores they are generally pushing them at ~20-50%, which means that a higher clock rate with fewer cores would be the cheaper and more cost-effective option. Under normal loads with an average gamer, a Core i3 will beat most FX chips, because the name of the game at the moment is per-core efficiency, not multi-threaded performance.

By the way, the fix is already in for the Win7 and Win8 schedulers, and was pushed through on a Windows update. Did you notice the 2-3% improvement in performance? I agree that the idea behind the architecture is genius, but the implementation is simply crap, and no amount of software fixes is going to fix a broken product. Hopefully Piledriver will bring the fix, whenever it comes out.

Book definitions of what a core or a module is do not matter; operationally it acts like a quad core with Hyper-Threading. It is not the same as HT technology, but they would have been much better served calling it a 'quad core with X' than calling it a partial 8-core. If it acts like a duck, and it quacks like a duck...
 
[citation][nom]sonofliberty08[/nom]Negative. You guys just don't realize a faster GPU is way more important than a faster CPU. The APU gives the average Joe the most balanced computing and saves cash by not adding a discrete card; today the best small-form-factor PC and HTPC will have a Fusion APU inside, not the crappy Atom or even an i3. Switching from an AMD E-450 to an Intel i5 won't make you feel much faster in day-to-day computing, and the E-450 can handle your day-to-day graphics programs, movies and gaming much better than an i3. If you intend to make your PC faster, the best way is to save your cash from the expensive Intel CPU and invest it in an SSD, or HDDs in RAID 0.[/citation]
True, but you can purchase a Celeron or Pentium G and pair it with a low-end GPU for the same price and get much better performance all around. It is only in laptops that the APU currently makes any real sense.
Balance of performance is not as important as threshold of performance. Balance of performance just means that the bottlenecks are not there; threshold of performance means that there may be a bottleneck, but the system can still handle specific tasks. APUs are overkill for most of what they are designed for (multimedia playback), while being too weak for most of the systems they are put in (HTPC game rigs).
On the high end balance is key; on the low end it is all about thresholds. AMD just got it backwards.
 

tomfreak

Distinguished
May 18, 2011
[citation][nom]fazers_on_stun[/nom]Well AMD's stock has plummeted over 12% today, on both the weak Q2 and poor outlook for Q3. Time to either buy or sell - just can't figure out which![/citation]Sell, until they fix their stupid pricing. I will not bother buying their overpriced CPUs that work poorly in most consumer apps.
 

JefferyD90

Honorable
Jun 1, 2012

I can agree up to a point with everything. The only thing I can say you're totally wrong on is them running hotter; they do not run NEAR as hot as Intel's. Also, you say that about the clock rates like it's a bad thing? I do agree, though, that they are designed a little weirdly: it's not 8 separate modules, but they are 8 true cores. Windows will always and forever recognize them as 8 integer processors. The only thing is that each core inside the module must work on the same workload, which, with threaded programs, kind of defeats the purpose of multi-core applications.
 

ashinms

Honorable
Feb 19, 2012
Why the hell are online conversations about computer processors and economics littered with people who can't spell and have no grip on reality? These are supposed to be intelligent conversations, not arguments between six-year-olds.
 

JefferyD90

Honorable
Jun 1, 2012

It's not that they created it, it's that they pushed Intel and provided them with the resources... Also, notice the similarity between their older CPUs' architecture and Sandy Bridge... Considering the age difference and the features, they are strikingly similar.
 
[citation][nom]caedenv[/nom]True, but you can purchase a Celeron or Pentium G and pair it with a low-end GPU for the same price and get much better performance all around. It is only in laptops that the APU currently makes any real sense. Balance of performance is not as important as threshold of performance. Balance of performance just means that the bottlenecks are not there; threshold of performance means that there may be a bottleneck, but the system can still handle specific tasks. APUs are overkill for most of what they are designed for (multimedia playback), while being too weak for most of the systems they are put in (HTPC game rigs). On the high end balance is key; on the low end it is all about thresholds. AMD just got it backwards.[/citation]

With the way things are going, I'm not sure there is a definitive answer for general-purpose performance. Although not everything can take advantage of it, GPGPU-accelerated programs really are becoming more common, and although people might go on about Llano and such being inferior to Intel's Celerons and Pentiums, people seem to forget that for quad-threaded work, something that has become increasingly common even among games, the Pentiums and Celerons often lose to even the quad-core Llano CPUs quite badly.

As for the graphics of AMD's APUs, well, it's not too weak for gaming. Llano is entry level and it does a fairly good job of that, granted it takes Trinity to handle some of the most graphically intensive games at above a minimal level.
 

ashinms

Honorable
Feb 19, 2012
The APU branding is confusing. While most of us know that there is no real difference between an APU and a CPU, Joe consumer is very confused on the subject, or uneducated to the point of not even considering an APU. I mean, why bother calling it something else when it is just a CPU with premium graphics? The function it serves is the same, so don't make something more complicated than it has to be. There is a huge market of people who would be well served with an APU, but because there is no general understanding about what it is, people purchase what they feel they can understand (whether they understand it or not).

They're called APUs because they were originally designed for traditional x86 workloads augmented by a GPU block that acts as an accelerator. Hence the "A". For the most part your whole argument is invalid.
 
[citation][nom]JefferyD90[/nom]I can agree up to a point with everything. The only thing I can say you're totally wrong on is them running hotter; they do not run NEAR as hot as Intel's. Also, you say that about the clock rates like it's a bad thing? I do agree, though, that they are designed a little weirdly: it's not 8 separate modules, but they are 8 true cores. Windows will always and forever recognize them as 8 integer processors. The only thing is that each core inside the module must work on the same workload, which, with threaded programs, kind of defeats the purpose of multi-core applications.[/citation]

SB runs cooler than any comparable AMD CPU ever made. IB runs hot, but that's only because of the paste between the CPU die and the IHS. Also, the cores within a module don't need to work on the same process; I have no idea where you heard that. When they do, there is a performance boost whose magnitude depends on how interdependent the workload is across different data sets and/or instructions, but it is not necessary.

I could run eight different programs, each with an affinity for only one core, all using a different core. The two programs per module wouldn't even need to be related for both cores to be utilized. Also, whether or not Windows says that there are eight cores isn't an accurate method of counting the number of cores: Hyper-Threading Technology threads are seen as actual cores within Windows even if the core operating system doesn't exactly use them as such.
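For what it's worth, pinning a process per core is easy to try yourself. A minimal Python sketch, assuming Linux (os.sched_setaffinity is Linux-only; on Windows you'd set affinity through Task Manager or SetProcessAffinityMask instead), and assuming the OS numbers the two cores of each module adjacently, which is a guess on my part:
[code]
import os
import subprocess

def run_pinned(cmd, cpu):
    """Start cmd, then restrict it to a single logical CPU (Linux only)."""
    proc = subprocess.Popen(cmd)
    os.sched_setaffinity(proc.pid, {cpu})   # affinity mask = one logical CPU
    return proc

if __name__ == "__main__":
    # Eight independent busy-loops, one pinned to each of logical CPUs 0-7.
    procs = [run_pinned(["python3", "-c", "sum(range(10**8))"], cpu) for cpu in range(8)]
    for p in procs:
        p.wait()
[/code]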
 
[citation][nom]caedenv[/nom]The key words are 'in highly threaded performance'. How many office programs and web browsers make proper use of multiple threads? How many games take advantage of more than 4 threads? If you are using professional applications for video editing or 3D work then yes, the 8 cores will (barely) beat the Intel chips. But the fact of life is that even power users rarely use more than 2-3 cores, gamers only use 4-5 cores 99% of the time, and when using those cores they are generally pushing them at ~20-50%, which means that a higher clock rate with fewer cores would be the cheaper and more cost-effective option. Under normal loads with an average gamer, a Core i3 will beat most FX chips, because the name of the game at the moment is per-core efficiency, not multi-threaded performance. By the way, the fix is already in for the Win7 and Win8 schedulers, and was pushed through on a Windows update. Did you notice the 2-3% improvement in performance? I agree that the idea behind the architecture is genius, but the implementation is simply crap, and no amount of software fixes is going to fix a broken product. Hopefully Piledriver will bring the fix, whenever it comes out. Book definitions of what a core or a module is do not matter; operationally it acts like a quad core with Hyper-Threading. It is not the same as HT technology, but they would have been much better served calling it a 'quad core with X' than calling it a partial 8-core. If it acts like a duck, and it quacks like a duck...[/citation]

The Windows 7 fix and the Windows 8 fix are different; Windows 7 optimizes for a different and much less effective scheduling method. Yes, the implementation ruined an excellent architecture. AMD relying on the scheduling fixes to help out a little is also kind of stupid, since they could simply have had the second core in each module be seen as a Hyper-Threading thread, so that it wouldn't take performance away from the first core in each module in lightly threaded workloads. The module approach is different, though: Hyper-Threading just uses spare parts of the processor that a currently running thread isn't using.

Maybe it stalled and is waiting for a memory access, maybe something else. Whatever. It increases performance maybe by around 30%, maybe a little more nowadays. Bulldozer modules are instead two tightly linked cores that share a front end, among other parts. Instead of being only significantly faster than a single core that has two threads, it is almost as fast as two distinct cores when it has two threads. It would be as fast as two distinct cores if the architecture was tweaked a little and some non-architectural implementation problems were solved, such as the very high latency problems. This is about an 80% boost instead of a mere 30% boost (which even then, 30% wasn't bad).

Like Hyper-Threading, the point is to increase the efficiency of usage per mm2 of die space and power consumption. It would have done a great job of this had AMD not implemented the architecture so incredibly poorly. As we've pointed out, there are very distinct differences between Hyper-Threading Technology and AMD's modular architecture, granted as you've said here, they really are just two very different methods for the same idea: getting more out of the same architectural size. I suppose it could be likened to saying that Intel went with increasing their front end to make better use out of their back end whereas AMD increased their back end to make better use out of their front end, kinda doing the opposite to get a similar result. It could be interesting to see a processor with both methods implemented into the architecture.
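Taking the rough figures above at face value (about +30% from a second SMT thread, about +80% from the second core of a module), the back-of-envelope comparison looks like this. The percentages are the ones quoted in this thread, not measurements of mine; this is just a Python sketch of the arithmetic:
[code]
# Rough throughput per pair of hardware threads, using the percentages
# quoted above (illustrative, not measured data).
single_core = 1.00                  # one core, one thread (baseline)
smt_pair    = single_core * 1.30    # Hyper-Threading: 2nd thread adds ~30%
cmt_module  = single_core * 1.80    # Bulldozer module: 2nd integer core adds ~80%
two_cores   = single_core * 2.00    # two fully independent cores

for name, t in [("1 core, 1 thread", single_core),
                ("1 core + HT", smt_pair),
                ("1 module (2 cores)", cmt_module),
                ("2 full cores", two_cores)]:
    print(f"{name:20s} ~{t:.2f}x baseline")
[/code]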
 

JefferyD90

Honorable
Jun 1, 2012

I was referring to the sharing of resources... Sorry, looking back at that, I worded it horribly... I meant that the two cores would have to be working on the same thing (so the resources, like the cache, are the same) to be at their best... not that they HAD to... That one was a total fail on my part trying to explain.
 

sonofliberty08

Distinguished
Mar 17, 2009
[citation][nom]caedenv[/nom]True, but you can purchase a Celeron or Pentium G and pair it with a low-end GPU for the same price and get much better performance all around. It is only in laptops that the APU currently makes any real sense. Balance of performance is not as important as threshold of performance. Balance of performance just means that the bottlenecks are not there; threshold of performance means that there may be a bottleneck, but the system can still handle specific tasks. APUs are overkill for most of what they are designed for (multimedia playback), while being too weak for most of the systems they are put in (HTPC game rigs). On the high end balance is key; on the low end it is all about thresholds. AMD just got it backwards.[/citation]
Celerons? No thanks... pairing a CPU with a discrete low-end GPU is last decade's kind of computing; more cards mean more power, more heat and more noise. We are in the process of CPU+GPU fusion right now, and the future of computing will be the complete fusion of CPU and GPU; that's why Intel is following AMD's footsteps. It's only a matter of time before software programmers write their software for GPGPU; all the computing power will rely on the more powerful GPU, and we will have a mini supercomputer in our houses by then. A Core i7 or even a Core i5 is overkill for most average users. The E-450 has already proven it can handle daily home and office tasks (Photoshop/AI/CorelDRAW) very well for the average user, and it can still play games at low to medium settings and play Blu-ray video smoothly. The A8 is good enough to handle most of the latest games without needing a discrete card; it can even handle dual displays, browsing and video playback at the same time side by side, or running Photoshop and AI side by side... that's what Intel can't offer at the moment.
 
[citation][nom]sonofliberty08[/nom]Celerons? No thanks... pairing a CPU with a discrete low-end GPU is last decade's kind of computing; more cards mean more power, more heat and more noise. We are in the process of CPU+GPU fusion right now, and the future of computing will be the complete fusion of CPU and GPU; that's why Intel is following AMD's footsteps. It's only a matter of time before software programmers write their software for GPGPU; all the computing power will rely on the more powerful GPU, and we will have a mini supercomputer in our houses by then. A Core i7 or even a Core i5 is overkill for most average users. The E-450 has already proven it can handle daily home and office tasks (Photoshop/AI/CorelDRAW) very well for the average user, and it can still play games at low to medium settings and play Blu-ray video smoothly. The A8 is good enough to handle most of the latest games without needing a discrete card; it can even handle dual displays, browsing and video playback at the same time side by side, or running Photoshop and AI side by side... that's what Intel can't offer at the moment.[/citation]

With a discrete card, Intel offers far more gaming performance (well, depending on the card) and has much more integer throughput per watt per core. It takes AMD some serious mods on FX CPUs just to match in performance per core and even then, a K edition LGA 1155 CPU can still stay ahead of AMD's best efforts while using less power. Piledriver might change this, especially in eight core versions with L3 cache, but that's only speculation until we see some Vishera CPUs out and benchmarked.

Furthermore, not all software can be run on the GPU any faster than on a CPU. Not all software can even be run on a GPU nearly as fast as on a CPU. It depends on how parallel a given program can be, among other possibilities. Beyond that, Llano is not more power efficient than an LGA 1155 CPU with a Radeon 7750 except maybe in a few GPGPU tasks that don't scale well with faster GPUs. However, they would be in different levels of performance and different budget ranges.
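The "depends on how parallel a given program can be" point is basically Amdahl's law. A quick Python sketch of my own, with made-up example fractions, of why GPGPU only pays off when most of the work can actually run on the GPU:
[code]
# Amdahl's law: overall speedup when a fraction p of the work can be
# offloaded to the GPU and that part runs s times faster there.
def amdahl(p, s):
    return 1.0 / ((1.0 - p) + p / s)

for p in (0.25, 0.50, 0.90, 0.99):
    print(f"GPU-friendly fraction {p:.0%}: 20x on that part -> {amdahl(p, 20):.2f}x overall")
[/code]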

Intel offers nothing in the budget range that Llano dominates that is comparable. Intel's dual-core CPUs in that price range are much faster gaming processors for most games, but they don't have even decent IGPs. Not even if they had HD 4000 would they be comparable there. Intel and AMD really are offering very different products at those very low budgets. It's hard to call one better than the other because they really service different purposes.

Also, Ivy Bridge can run three displays off of a single IGP, as can Trinity, so Intel and AMD are equal there with the newer products, granted desktop versions of both still remain to be seen in these budgets. Regardless, just because something is using a discrete video card doesn't make it inherently inferior nor outdated. Heck, it doesn't even necessarily mean more power consumption, heat, and noise. I guarantee that a Llano A8 will consume more power than an Intel Celeron G530 paired with a GT 640 DDR3 while also performing substantially worse in gaming than the Intel configuration. However, the GT 640 is extremely overpriced, so it's not a very comparable situation, but still. It could simply be switched out with a Radeon 6670 DDR3, granted the power consumption win would be lower, but still a win for Intel and it would now be in the proper budget range.

Also, photoshop has some things that require a helluva lot more performance than any current integrated solution offers, so that's not even a good point of reference for your argument. You also ignore the fact that not everyone likes playing at low/medium settings on low resolutions. Even the Trinity A10s are still low-end gaming processors, not even mid-ranged gaming processors. The Radeon 5770/6770/7750 cards are what I'd call the bottom of the mid-ranged graphics cards right now and they are incredibly faster than even an A10's IGP in graphics performance regardless of the memory bandwidth and latency for the A10's RAM.
 

sonofliberty08

Distinguished
Mar 17, 2009
[citation][nom]blazorthon[/nom]With a discrete card, Intel offers far more gaming performance (well, depending on the card) and has much more integer throughput per watt per core. It takes AMD some serious mods on FX CPUs just to match in performance per core and even then, a K edition LGA 1155 CPU can still stay ahead of AMD's best efforts while using less power. Piledriver might change this, especially in eight core versions with L3 cache, but that's only speculation until we see some Vishera CPUs out and benchmarked.Furthermore, not all software can be run on the GPU any faster than on a CPU. Not all software can even be run on a GPU nearly as fast as on a CPU. It depends on how parallel a given program can be, among other possibilities. Beyond that, Llano is not more power efficient than an LGA 1155 CPU with a Radeon 7750 except maybe in a few GPGPU tasks that don't scale well with faster GPUs. However, they would be in different levels of performance and different budget ranges.Intel offers nothing in the budget range that Llano dominates that is comparable. Intel's dual-core CPUs in that price range are much faster gaming processors for most games, but they don't have even decent IGPs. Not even if they had HD 4000 would they be comparable there. Intel and AMD really are offering very different products at those very low budgets. It's hard to call one better than the other because they really service different purposes.Also, Ivy Bridge can run three displays off of a single IGP, as can Trinity, so Intel and AMD are equal there with the newer products, granted desktop versions of both still remain to be seen in these budgets. Regardless, just because something is using a discrete video card doesn't make it inherently inferior nor outdated. Heck, it doesn't even necessarily mean more power consumption, heat, and noise. I guarantee that a Llano A8 will consume more power than an Intel Celeron G530 paired with a GT 640 DDR3 while also performing substantially worse in gaming than the Intel configuration. However, the GT 640 is extremely overpriced, so it's not a very comparable situation, but still. It could simply be switched out with a Radeon 6670 DDR3, granted the power consumption win would be lower, but still a win for Intel and it would now be in the proper budget range.Also, photoshop has some things that require a helluva lot more performance than any current integrated solution offers, so that's not even a good point of reference for your argument. You also ignore the fact that not everyone likes playing at low/medium settings on low resolutions. Even the Trinity A10s are still low-end gaming processors, not even mid-ranged gaming processors. The Radeon 5770/6770/7750 cards are what I'd call the bottom of the mid-ranged graphics cards right now and they are incredibly faster than even an A10's IGP in graphics performance regardless of the memory bandwidth and latency for the A10's RAM.[/citation]
I'm just talking about a budget PC that can do your office tasks (MS Office, Acrobat, Photoshop, CorelDRAW) and still give you enough performance for home entertainment (internet, Facebook, Blu-ray movies, music, casual gaming) for the average user, not a high-end gaming rig for the enthusiast. It takes time for software to catch up with the hardware; more software will run on GPGPU in the future for sure. I didn't mean that discrete cards are outdated; I have a discrete card myself. I'm just saying that a low-end CPU paired with a low-end GPU is the outdated kind of thing. If I wanted to go low, I'd pick the small-form-factor APU system that offers the most balanced computing power and the most value for the money. If I wanted to go high, of course I would go with a CPU plus a mid- or high-end discrete card (CrossFireX), but I won't spend my hard-earned money on an expensive Intel CPU. As I said before, going from an Athlon II to a Core i7 won't make you feel much faster; a quad-core Phenom II is already enough. I'd rather save the money for SSDs (32GB x 2) in RAID 0 for the system and HDDs (500GB x 2) in RAID 0 for programs/games; that's a better investment for making a PC faster.
 

kitekrazy1963

Distinguished
Feb 1, 2010
[citation][nom]Nikorr[/nom]Too bad, there is no third party on the market.That way Intel has no real reason to speed things up, in the CPU market.[/citation]

If you go too fast then you are stuck with older inventory that won't sell. That also means companies like Asus and Biostar are stuck with older parts.
They could also make the same mistake AMD did when they introduced a new socket (7??) that was discontinued rather quickly.
 

warpuck

Distinguished
Oct 29, 2011
The question is, do I want to spend $350 on an i5 system with GTX 460 SLI, or wait for Piledriver? I think I'll wait. I already have an 890FX motherboard; HD 7770 CrossFire and a Piledriver may just do better. I use a Vizio V320 for a monitor; I am satisfied with 1080p over HDMI, and I can watch TV if I want to. I have a GTX 460 and a 1045T in my 790GX system, and also have a 460 for the 960T on the 890FX. With the latest AMD drivers for the HD 6770/5750, CrossFire shuts off the 5750 when it is idle! These are not bragging-rights systems. The 790GX is still good for gaming and it overclocks well. I could just sell the 790GX and get an HD 7850. I am not ready to join the Intel socket-of-the-month club.
 
Guest
I read an investor note today that said the preliminary market share data is out: AMD lost share in all segments -- desktop, server, and notebook.
 