Is the AMD FX 8350 good for gaming

Page 8 - Seeking answers? Join the Tom's Hardware community: where nearly two million members share solutions and discuss the latest tech.
Status
Not open for further replies.


You did not get the point. It was not about comparing the overclocking capabilities of both chips, but about him pretending that you can only compare Intel to AMD at the same clocks: e.g. an AMD at stock speed vs. an overclocked Intel, which is not only ridiculous but also biased.



No. I was referring to the eight-core record:

http://news.softpedia.com/news/All-8-Cores-of-the-AMD-FX-8350-Vishera-CPU-Overclocked-to-8-176-GHz-Somehow-301992.shtml

And did you read what I said about the relation between a world record and what one can obtain at home?
 


Sorry, didn't see that. Yep, either compare at stock or with both chips at their max safe overclock.

Sorry, didn't see the eight-core record. The articles I looked up had only two cores at 8+ GHz.

It doesn't matter how superior the overclocking abilities are if it is not doable. The Cell CPU in the PS3 was supposed to be a beast with much greater power than the competition that ultimately fell flat because it was hard to code for. Any enthusiast is going to want an overclock that they can safely achieve at home (the on-air overclock is ultimately going to matter most, because that is what the great majority of overclockers are going to be using) over something that requires liquid nitrogen.
 




He hit the silicon lottery. I can show you videos of those lucky enough to get a 5.4 GHz+ Sandy stable for 24/7 use (and it's not very rare, BTW). But that hardly means anything. My point being that most people cannot even hit 5.2 GHz stable on an FX for 24/7 use, much less 5.6. Therefore it's not representative of what a consumer can reasonably expect.

I've followed the FX threads for a while and looked into it extensively. 99% of people are not hitting 5.6 GHz, and even IF they were, the power consumption would be absolutely horrendous. To the point where you might have to upgrade your PSU, lol.
 
You know, his voltage is only 1.48 V on the 5.6 GHz OC, and several others have OCs of 5.2 GHz. I can find other CPU-Z-verified OCs that high pretty easily, but that site is by far the most documented and well organized.
 
Well, if the i5-3570K is so much faster at everything, then please explain why:

http://openbenchmarking.org/result/1210227-RA-AMDFX835085

SMP NAS server - 100% faster in SMP NAS testing vs. the i5-2400. 60% faster than the i5-2500K. 35% faster than the i5-3470 (noted: 15% slower than the 3770K).

John the Ripper (password cracking) - it beats the i7-3770K by almost 20%, is more than twice as fast as the i5-2500K, and nearly twice the speed of the i5-3470.

Look at the Linux compile time. 8350 = 82 s. i5-2500K = 116.14 s. i5-3470 = 114.25 s. The 8350 is almost 40% faster than the i5-3470 here. Also faster than the i7-3770K (by just a few seconds).

These are threaded and well-balanced use cases.

Bottom line is, if you have either a multi-threaded application that balances the threads well, OR you have multiple things going on at once which will load more than 4 cores - the AMD tends to win.

On single-threaded tasks, yes Intel wins most of the time.

But single threaded is not the future, nor is it even the present.

We have been at the point where more well-coded highly threaded applications are necessary to continue to process increasingly complex problems and data more quickly for almost a decade. IOW the future is now.

This is a fairly famous article from ~8 years ago (2005); what the author here is talking about is in fact happening. Witness the very incremental single thread performance increases for the last 3 processor generations. The curve is flattening out. The only way to get more performance is to utilize more cores.

http://www.gotw.ca/publications/concurrency-ddj.htm

"...applications will increasingly need to be concurrent if they want to fully exploit CPU throughput gains that have now started becoming available and will continue to materialize over the next several years. For example, Intel is talking about someday producing 100-core chips; a single-threaded application can exploit at most 1/100 of such a chip’s potential throughput. “Oh, performance doesn’t matter so much, computers just keep getting faster” has always been a naïve statement to be viewed with suspicion, and for the near future it will almost always be simply wrong."
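Sutter's point about restructuring for concurrency can be shown with a toy sketch (my own illustration, not from the article): a CPU-bound sum split across worker processes so it can occupy more than one core at once.

```python
# Toy illustration (mine, not Sutter's): a CPU-bound job restructured so it
# can run on several cores at once via worker processes.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    # Split [0, n) into one chunk per worker and sum the partial results.
    step = n // workers
    chunks = [(k * step, n if k == workers - 1 else (k + 1) * step)
              for k in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    n = 100_000
    assert parallel_sum_of_squares(n) == sum(i * i for i in range(n))
```

A single-threaded version of the same job can only ever occupy one core, which is exactly Sutter's 1/100 point about a hypothetical 100-core chip.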
 


Yes, Intel CPUs are reliable, but Ivy Bridge chips have thermal issues due to a poor-quality IHS, and this makes overclocking difficult. Some people are able to overclock satisfactorily after manually removing the IHS from the chip, but others continue to have thermal issues even after the removal.

AMD chips are so reliable that they are used in servers and in top supercomputers such as Titan and Jaguar. AMD chips have better overclocking capabilities. The overclocking world record was obtained by an AMD FX-8350: eight cores above 8 GHz. No Intel chip can achieve that level of overclocking, due to a low-quality design.

 


The second paragraph is complete bull. Supercomputers use all types of chips. Overclocking to 8 GHz is like the MHz myth: it doesn't matter how high you can go in theory; what matters is how high you can go in practice for the average overclocker.
Low-quality design? Someone is forgetting exactly how much power an overclocked 8350 uses vs. an overclocked i7-3770K.

Ivy Bridge does have heat problems because of Intel's stupid decision to try to save a few dollars, but to call their design low-quality is simply wrong. If you take price out of the equation, the FX-8350 simply cannot compete with the i7-3770K.
 


Really? Including unreliable ones?



How many times do I need to explain this to you?



So you think that "Intel's stupid decision" is not part of the design of the chip? Does it belong to the marketing dept.?



Do you mean how the FX-8350 beats the i7-3770K in the performance tests provided to you before?
 


Yes, Intel actually had a slide a few years ago where they were boasting about how they could cut down the quality of the stock heatsinks and save a few dollars.

Please don't cherry-pick a few tests. The ones you gave me were, for the most part, very, very close.

And yes, take price out of the equation and most people are going to go with the i7. Much lower power consumption and fairly equivalent multithreaded performance, with much greater single-threaded performance. It also has integrated graphics and Quick Sync (so you can save a few dollars if you are not doing anything that requires a GPU).

Supercomputers use all sorts of chips. Jaguar, using Opterons, was succeeded by a computer using Xeons. Many supercomputers use chips from other companies such as IBM. Furthermore, the CPU is pretty much the most reliable part in a computer; the chance of a CPU failing in a consumer computer (assuming you are not overclocking) is pretty much nil. There is a much greater chance that the motherboard is going to go.
Supercomputers do not overclock their CPUs, so the argument that one is more reliable than the other because it can go to extreme frequencies is moot. What matters is reliability at the conditions they are operating in (stock, for a supercomputer).

And yes, 8 GHz under extreme conditions and extremely high voltages is worthless to the average person. Given two CPUs with equal IPC, I'd rather have one that could hit a max of 6 GHz under liquid helium but consistently hit 4.5 GHz under air than one that could hit 8 GHz under liquid helium but only 4 GHz under air (these are imaginary CPUs). Some might say that the first is better than the second because, even though max performance is lower, the extractable performance is higher.

This is kinda like the idea (the idea here, not the specifics) of a souped-up Corvette vs. an average car. The Corvette can travel much faster (higher max overclocks), but both are limited by the speed limit (an easy way to cool the chip; long-term stability, since 2 volts is not going to last long; and cost, since liquid helium/nitrogen is expensive). In the end, what matters is performance at the speed limit (the range where virtually everybody is going to be using the CPU).

FX is a good product at a great price, but in the end people are going to care about what THEY can get out of the chip, not what the chip is supposedly capable of under extreme conditions.
 
Well, under air cooling you can get the FX 8350 to 4.8 GHz; under water cooling it is feasible to break 5 GHz, and I have seen Prime95, Cinebench, and 3DMark benchmarks run on a 5.6 GHz 24/7 OC on an FX-8350. The voltage is only 1.48 V and the OC is very stable.
 


Generally speaking, you do not want to go over 1.4 volts on your CPU for longevity purposes. Generally, FX can go to higher clocks than Ivy Bridge. However, it uses much more power when overclocked. Most Ivy Bridge CPUs can go to 4.4-4.6 under air (vs. 4.6-4.8 for the 8350), and it is worth mentioning that their stock clock is lower. FX overclocks great, but it uses a ton of power.
 


Yes, all true, but AMD came out and said that for an "everyday" overclock you should not exceed 1.55 V, as the architecture is not designed to run above that... so the voltage is within their specs for the OC on the 8350 mentioned above. I know Intel is less friendly with high-voltage OCs.
 


"Yes" what? I asked two questions.



You cannot select a few biased tests {*} showing a 15% difference and say that the "FX-8350 lagged behind", while saying that the i7 was "very, very close" on tests where the FX was 30-70% faster.

{*} As shown before, the real difference is unnoticeable.



Jaguar (AMD based) was upgraded to Titan (AMD based).

#1 is Titan with a score of 17.590. You have to go to #5 in the ranking to find an Intel-based supercomputer (Xeon), and that one scores only 2.897. Titan is 6x faster.

http://en.wikipedia.org/wiki/TOP500#Top_10_ranking



Who gave you that argument?



And what about real processors? What about AMD FX being selected as the best CPU for overclocking? What about owners of AMD achieving 5.0-5.2 GHz with ease, while Intel owners report difficulties at 4.8? What about low voltages? What about the well-known thermal issues with Ivy Bridge?



Do you still not understand that people who buy FX chips use them for both work and gaming? Did you see the benchmarks I gave you? What do you believe the NAS, C-Ray, BZIP, 3DMAX, database search, JTR... benchmarks are for?
 




Anyone running a professional application is going to be leery of overclocking (not semi-professional, but real professional). Gamers will overclock, but people working and running NAS or database searches are, more often than not, going to run the chip at stock (and if they are going to be running the chip at full load, then power consumption comes into play). At stock we are using 75 watts more than the i7 and getting similar performance (within 5%). If that is running 24/7, then assuming electricity is 11 cents a kilowatt-hour, 0.075 kW * 24 * 365 * $0.11 = $72 a year in additional electricity consumption from the CPU alone, ignoring additional losses from the power supply (efficiency is a percentage of output) and additional costs to cool the case (AC). Now look at the overclocked power usage (generally going to be about 10% faster than the i7) and suppose power is twice the price (as it is in many countries). I use this argument to illustrate why anyone who is going to run professional applications 24/7 probably wouldn't overclock and is concerned about cost over the lifetime of the machine. For the casual professional (not someone who has a rendering project running 24/7) the 8350 is a good buy, but if power consumption comes into play then things change.
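The electricity arithmetic above can be sketched as a quick helper (the inputs are the post's assumed figures: 75 W extra draw, 24/7 operation, $0.11/kWh; swap in your own rates):

```python
# Annual cost of extra power draw, using the post's assumed figures.
def annual_cost_usd(extra_watts, price_per_kwh, hours_per_day=24.0):
    kwh_per_year = extra_watts / 1000.0 * hours_per_day * 365
    return kwh_per_year * price_per_kwh

print(round(annual_cost_usd(75, 0.11)))  # roughly $72/year, as computed above
```

Doubling the electricity price doubles the figure, which is the "twice the price" scenario mentioned for many countries.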

51144.png


x264-power-task-energy.gif


"Do you still not understand that people who buy FX chips use them for both work and gaming? Did you see the benchmarks I gave you? What do you believe the NAS, C-Ray, BZIP, 3DMAX, database search, JTR... benchmarks are for?"

They are going to care about extractable performance.


Intel is a penny pincher who likes to nickel-and-dime their buyers with motherboard changes every couple of years.

I said multiple times that FX overclocks well. To put it in perspective: what clocks is the average air-cooling enthusiast going to use? That is what matters most. No one is getting 5.2 GHz with ease. The 8350 tops out at 4.8 with an air cooler.

They can probably get about 4.7-4.8 on a good chip under air (from a stock 4.0, turbo 4.1); that is an overclock of 17% (4.8/4.1). The 3770K can get around 4.4-4.5 GHz on a good chip (from a 3.8 turbo, 3.5 regular) on four cores. That is an overclock of 16-18%, which is roughly the same. The FX can hit higher clocks, but the gain both chips get from overclocking is roughly the same (FX does not scale as well with frequency because the cache runs at a constant speed).
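The percentage comparison above, sketched out (the clock figures are the ones quoted in this post):

```python
# Overclock headroom expressed as percentage gain over the chip's turbo clock.
def oc_headroom_pct(oc_ghz, turbo_ghz):
    return (oc_ghz / turbo_ghz - 1.0) * 100.0

fx_gain = oc_headroom_pct(4.8, 4.1)   # FX-8350: ~17%
i7_low  = oc_headroom_pct(4.4, 3.8)   # i7-3770K lower bound: ~16%
i7_high = oc_headroom_pct(4.5, 3.8)   # i7-3770K upper bound: ~18%
```

The absolute clocks differ, but the relative headroom comes out nearly identical, which is the post's point.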


They got their i7 chip to 4.9 ghz (obviously an outlier).
http://techreport.com/review/22833/ivy-bridge-on-air-the-core-i7-3770k-overclocked-on-four-motherboards

Tom's Hardware review (bolding mine):

Using a 1.375 V CPU voltage and a 1.175 V northbridge voltage, I was able to get FX-8350 running stably at 4.8 GHz under full load. In the screen capture above, I'm running a single-threaded test to spin the chip up, but the highlighted maximum temperature is where our benchmark suite peaked.

The FX-8350 wanted to go even faster, but the key here is a voltage setting low enough that you avoid hitting 70 degrees Celsius. At that point, the thermal monitor starts cycling cores to throttle down (evidenced in the image above), keeping the chip from getting any hotter and negatively impacting performance. So long as I didn’t trigger any threaded workloads, I was even able to run benchmarks as high as 5.125 GHz (requiring a 1.4375 V CPU voltage and 1.2 V northbridge setting).

Their system builder (not all chips are created equal --FX requires good cooling).

This is the first time anyone at Tom's Hardware has tried his hand at overclocking a retail FX-8350. And, after reading Chris' experience taking his sample from AMD up above 5 GHz, I was looking forward to something similar. It turns out that I was being far too ambitious, though. Xigmatek's Loki doesn't have the headroom to keep the 125 W processor cool beyond its stock clock rates. Beyond performance, thermals are probably AMD's biggest disadvantage in this comparison. We really would need to spend a lot more on cooling to achieve any sort of meaningful overclock.

Regardless of the processor or northbridge voltages we used, we couldn't exceed 4.63 GHz. "Fair enough," I first thought. "If I disable Turbo Core and lock the chip in at 4.6 GHz, I should still see a reasonable speed-up." But a Prime95-induced load quickly demonstrated instability as the FX-8350 shot up over 80 degrees.

It seems as though I had underestimated the FX's ability to generate copious heat, and failed to budget enough for cooling. Even at the stock 1.35 V setting, and with the clock rate dialed in to the processor's peak Turbo Core frequency of 4.3 GHz, Prime95 caused the chip to falter. Simply nudging clock rate, without touching the voltage, results in a significant temperature increase. For example, operating at 4 GHz yields a maximum 60-degree reading, but 4.2 GHz sees that number jump to 70 degrees. Interestingly, I didn't see any throttling, as Chris did when his sample crested 70 degrees. Here's the thing, though: while his Tj. Max was reported as 70 degrees, the retail processors are capped at 90, though the chip is clearly unstable well before it gets that hot.

The best I could achieve with this build's heat sink was 4.33 GHz, forced by dropping the voltage to 1.3375 V, turning off Turbo Core, and increasing the multiplier. Prime95 didn't crash, and the temperature stayed under 75 degrees. We're hesitant to call this a bad sample when the cooler is seemingly barely adequate. Should we choose an FX in the future, we'll need to cut back elsewhere on our budget to leave more room for a higher-end air or closed-loop liquid solution.

Anandtech review

AMD's FX architecture was designed for very high clock speeds. With Piledriver we're able to see some of that expressed in overclocking headroom. All of these chips should be good for close to 5GHz depending on your luck of the draw and cooling. For all of these overclocking tests I used AMD's branded closed loop liquid cooler which debuted back with the original FX launch. I didn't have enough time to go through every chip so I picked the FX-8350 and FX-4300 to show the range of overclocks that may be possible. In my case the FX-4300 hit 5GHz with minimal effort, while the FX-8350 topped out at 4.8GHz (I could hit 5GHz but it wasn't stable through all of our tests). Both of these overclocks were achieved with no more than 10% additional core voltage and by simple multiplier adjustments (hooray for unlocked everything). The increase in performance is substantial:

Tech report

When you're overclocking a CPU that starts out at 125W, you're gonna need some decent cooling. AMD recommends the big-ass FX water cooler we used to overclock the FX-8150, but being incredibly lazy, I figured the Thermaltake Frio OCK pictured above, which was already mounted on the CPU, ought to suffice. After all, the radiator is just as large as the water cooler's, and the thing is rated to dissipate up to 240W. Also, I swear to you, there is plenty of room—more than an inch of clearance—between the CPU fan and the video card, even though it doesn't look like it in the picture above. Turns out the Frio OCK kept CPU temperatures in the mid 50° C range, even at full tilt, so I think it did its job well enough.

Trouble is, I didn't quite get the results I'd hoped. As usual, I logged my attempts at various settings as I went, and I've reproduced my notes below. I tested stability using a multithreaded Prime95 torture test. Notice that I took a very simple approach, only raising the voltage for the CPU itself, not for the VRMs or anything else. Perhaps that was the reason my attempts went like so:

4.8GHz, 1.475V - reboot
4.7GHz, 1.4875V - lock
4.6GHz, 1.525V - errors on multiple threads
4.6GHz, 1.5375V - errors with temps ~55C
4.6GHz, 1.5375V, Turbo fan - stable with temps ~53.5C, eventually locked
4.6GHz, 1.5375V, manual fan, 100% duty cycle at 50C - lock
4.6GHz, 1.55V, manual fan, 100% duty cycle at 50C - crashes, temps ~54.6C
4.4GHz, 1.55V - ok
4.5GHz, 1.55V - ok, ~57C, 305W
4.5GHz, 1.475V - errors
4.5GHz, 1.525V - errors
4.5GHz, 1.5375V - OK, ~56C
At the end of the process, I could only squeeze an additional 500MHz out of the FX-8350 at 1.5375V, one notch down from the max voltage exposed in the Overdrive utility. AMD told reviewers to expect something closer to 5GHz, so apparently either I've failed or this particular chip just isn't very cooperative.

I disabled Turbo Core for my initial overclocking attempts, but once I'd established a solid base clock, I was able to grab a little more speed by creating a Turbo Core profile that ranged up to 4.8GHz at 1.55V. Here's how a pair of our benchmarks ran on the overclocked FX-8350.

From openbenchmarking

In the end, we were able to take the FX-8350 up to a stable 4.7GHz. Unfortunately, due to time constraints and an incompatibility with AMD OverDrive and our test-bed’s motherboard, we don’t have accurate temperature data to share at this point. But considering how easy it was to take our CPU to 4.7GHz, we suspect that higher clocks will easily be possible with more exotic cooling and more aggressive voltage tweaking.

Under air, clocks top out at about 4.8 GHz on a good chip for the 8350 and 4.4-4.5 for the 3770K. The percentage overclocks are similar.

Intel's stupid decision was probably made by the marketing/accounting team to maximize profits. Or possibly the engineers were told to bring costs down to $x per CPU, and cutting the heat transfer material was the cheapest and easiest way to do that (considering that, as a percentage of the market, few people overclock). Anyway, to say the Ivy Bridge "architecture" is poor is incorrect; rather, the Ivy Bridge "implementation" is poor.

Edit:
"You cannot select a few biased tests {*} showing a 15% difference and say that the "FX-8350 lagged behind", while saying that the i7 was "very, very close" on tests where the FX was 30-70% faster."

What biased tests? On two of the three tests the FX isn't 30-70% faster; it was pretty much margin of error (<5%). On the other test the FX was significantly faster because, for some unusual reason, Hyper-Threading wasn't being used (which is unusual, but a fair victory for the 8350). I'm saying that we must look at all the tests. You are essentially showing me three tests from a sample where the FX is basically tied with or beating the i7-3770K. What about the other tests in that review? Where is the link?
 


I wonder if any of this bears some resemblance to what was being discussed.

Who told you that people working and running NAS or database searches will be overclocking?

You post another biased review from ALS. Once again he is not comparing like with like. Moreover, I notice how he puts the graph baselines at 50 W instead of 0 W, giving a false impression that the difference between Ivy and Piledriver is greater than it really is. The trick is well known; google "misleading graphs".

The FX consumes more at full load, but it finishes the work sooner and falls back to lower consumption than the Intel. The point is that the difference in total system power consumption for Vishera was less than 20 W. And desktop computers are at idle or low loads 90% of the time, which means that on a yearly basis the cost is tiny: from cents to a few dollars.

This was already remarked on before, by several posters, but good myths don't die.

I also find the excessively high consumption that those guys measured interesting. No need to mention again why one expects bizarre values from ALS, but the other review does not give enough info to know what they measured and how.

I can assure you that most of the overclockers are not using air. I think a list of overclockers and their settings was given before. Only a minority used air.

In any case, I am not impressed with the i7 overclocked on air using an EXPENSIVE aftermarket cooler. It is evident that it was a golden chip sent to the review site. In the list of overclockers given before, you can see people running FX up to 5.1 GHz with the STOCK COOLER.

Who told you that marketing depts. make decisions about profits?

Who told you that "design" = "architecture"?
 


I'm not sure who ALS is.

In the x264 Anandtech test, the two finished in roughly the same time (pass 1 Intel was ahead, pass 2 AMD was ahead). The 8350 used far more power than the i7. Look at the task energy from Tech Report.

I mean overclockers on average. How many people in this forum overclock on air vs. water? Most of the threads about building a computer are using air; rarely do you see water.

You are not going to get 5.1 GHz on an 8350 with the stock cooler.

I don't know where you are seeing 50 watts as the baseline.

Those "excessively" high power consumptions are consistent throughout pretty much all the reviews I can find of the 8350 on the internet.

Marketing (I mostly mean the people in charge of deciding how to sell the chip and maximize profit; I guess this is finance as well) makes many, many decisions about how the product is going to be sold to maximize profits.

I don't think this thread is going anywhere so I'm going to stop.
 


IDLE delta in Watts (FX-8350 vs i7-3770k)
==========================

Anandtech: 14.7
Tech Report: 22 (Abnormally high)
CPU Boss: 17
Toms: 16
Legit: 11
Xbits: 0
Bit Tech (*): -4

Variation found in reviews: 22W - (-4W) = 26W


LOAD delta in Watts (FX-8350 vs i7-3770k)
==========================

Anandtech: 75.4
Tech Report: 96 (Abnormally high)
CPU Boss: 54
Toms: 88
Legit: 56
Xbits: 87
Bit Tech (*): 47

Variation found in reviews: 96W - 47W = 49W


TYPICAL delta in Watts (FX-8350 vs i7-3770k)
============================

CPU Boss: 45

(*) Note: "the AMD chips were tested in an ATX motherboard, while the Intel LGA1155 chips were tested in a micro-ATX board. This difference can account for up to 20W, as we found in our Energy Efficient Hardware feature."

As shown, depending on the hardware, measuring methodology, and specific task, you can find up to a 50 W difference in claimed power consumption figures.
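A quick recomputation of the spreads from the per-review deltas listed above (values as quoted in the post; note the Bit Tech idle delta is negative, so the idle spread crosses zero):

```python
# Per-review FX-8350 vs. i7-3770K power deltas, in watts, as listed above.
idle_deltas = {"Anandtech": 14.7, "Tech Report": 22, "CPU Boss": 17,
               "Toms": 16, "Legit": 11, "Xbits": 0, "Bit Tech": -4}
load_deltas = {"Anandtech": 75.4, "Tech Report": 96, "CPU Boss": 54,
               "Toms": 88, "Legit": 56, "Xbits": 87, "Bit Tech": 47}

def spread_w(deltas):
    # Largest reported delta minus the smallest.
    return max(deltas.values()) - min(deltas.values())

print(spread_w(idle_deltas))   # 26 W between the extremes at idle
print(spread_w(load_deltas))   # 49 W between the extremes at load
```

The near-50 W load spread across reviews of the same two chips is what makes any single review's power figure hard to take at face value.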

Moreover, as ALS writes in his review:

Note that idle power consumption can be competitive, but will obviously vary depending on the motherboard used (the Crosshair Formula V is hardly the lowest power AM3+ board available)

In fact, they chose one of the highest-power boards possible for AMD:

power1.png


And take a look at the testing methodology of the Tech Report review with the abnormally high figures. Not only did they choose the power-hungry Crosshair for the AMD, but also a specific MSI motherboard for the Intel i5/i7, which curiously is the lowest-power board:

power-1.png


Therefore, take this also into consideration when reading their power consumption graphs...
 


Please keep in mind how PS4 games are going to be coded and what the actual engines and textures will be like.
 
I'm just going to say this. Both processors have their differences. One is better for video editing (AMD), and one is better for gaming (Intel). One is better in physical build and coding (Intel), and one is almost as good but just doesn't break the bulb (AMD). One has lower power consumption (Intel) and one has higher (AMD).

I'm not bad-mouthing either one. I am going by my research and my overall and final point on these two. The obvious better buy if they are the same price is Intel, but Intel's physical build quality, coding, and power consumption are what bring up the price (and also what give it the lead on AMD). I would really only choose AMD if I wasn't a big gamer, only had a couple of games, and did more video (GFX) editing and such.

However, since I'm a huge gamer and I want the best for my build, I'm spending that extra $100 or so to get the i7-3770K, which is what benefits me and my build (don't forget the kick-ass fan that comes with it). You can go for an i5-3570K for about the same price as an AMD, but get that better physical build and coding (even though there are a couple of capabilities it doesn't have compared to the 8350 or the 3770K).

It's really about what you go for and want to do. Overall, Intel is better for HARDCORE gaming, and AMD is good at it, but has a better advantage in the video and GFX department.

If you would like to go against what I am saying, I wouldn't mind replying. ^_^


EDIT: I forgot to mention. This assumes you have enough money for things like this. I don't criticize you if you get an AMD because of your budget; that's fine. AMD is good for it even if you get one of the not-so-high-end processors. But I'm mostly on the side of: if you go for AMD with all your life and get everything from them and think they're ever so amazing, I will eventually start to criticize (no offence).

For example: I may eventually build another small, cheap PC that I can do video editing with, so I could use it to record my gaming and then edit it to my needs. Then I would get an AMD so it fulfills that purpose.
 
Your assumptions are all false:

1.) Intel's build quality is poorer; their chips are not SOI... the reason they cost more is the onboard graphics that no one uses.

2.) The difference in power consumption over the course of a year is equivalent to turning on an additional 40W light bulb in your home.

3.) The gaming myth has been debunked already... games like Crysis 3, Planetside 2, Bioshock Infinite, Metro 2033, Tomb Raider, and others are all within margin-of-error difference between the i5-3570K and the FX 8350, and the FX 8350 even beats the i7-3770K in some games. Skyrim is the only outlier, so don't bother to cite it as an example...

4.) For the extra $130 difference between the FX 8350 and i7-3770K I can buy an H100i cooling system and still come out cheaper than the i7-3770K and its better stock cooler.

5.) If Intel had superior build quality, why does AMD hold EVERY world record for overclocking, where build quality really comes directly into play? They hold them by 1+ GHz, by the way (the highest record is 8.76 GHz, where Intel's is 7.18 GHz), not by some trivial margin of 100 MHz or so. AMD also holds the world record for the highest overclock with all cores active, by the way (8 cores on the FX 8350 @ 8.176 GHz).
 


1. Onboard graphics can be used to speed up certain tasks (OpenCL acceleration). To say the quality is poorer is wrong. They are using 22 nm tri-gate.

2. Very true. But if you are running a system 24/7 rendering/encoding, it will add up.

3. Yep; with the exception of older games, newer games will use four threads and be pretty much equal across the line. Future games should show no bias.

4. Yep, though an overclocked 8350 uses a shitton of power. You will probably need a better PSU.

5. I don't think Intel cares about this, because how many people, and what portion of their market, run their computers at that speed? Intel made a decision to prioritize power consumption over frequency (rightly so) because more people care about power consumption than about some overclock they will never attempt. This is why Intel is kicking AMD's butt in mobile and AMD has nothing that can touch a quad-core i7 in a laptop.

Please don't ever use CPU Boss. That site is pathetic. Look at their single-threaded performance ranking.

http://cpuboss.com/cpus/Desktop-CPUs-best-Single-Threaded-Performance-4310942

1. 3570S
7. Pentium G2130
11. 3770k

Does that make any sense? The site is horribly coded as well.

Good point about the motherboards.
 


Onboard graphics cannot be used in conjunction with the CPU when a discrete card is present. AMD has the same process on their iGPU (with the exception of a CrossFire setup, though Intel offers no such option). Yes, their tri-gate technology is interesting, but the quality of the wafer they use is only bulk. Additionally, even Intel has recently conceded that their tri-gate process is running out of room, and they will have to convert to SOI soon to shrink their process any smaller. They would have with Haswell if the cost to switch hadn't been so high to begin with.

I don't disagree; Intel doesn't care about that segment, or they clearly would have paid more attention to it. As far as mobile goes, in the laptop segment AMD is making headway again (though only creeping up in market share by fractions of a percent). I expect them to make a bigger splash there with the introduction of the Kaveri architecture, though.

Also, for smaller devices, their micro-architecture is doing quite well, and I expect big things from Temash in the handheld/tablet market. I think they do as well.
 