Why is the i3 always overlooked and underestimated?


Speedstang

Whenever ANYONE talks about building a gaming rig, everyone jumps straight to the i5s and i7s. Nobody ever even considers i3s. And when people want a budget build, everyone goes directly to the Pentium Anniversary Edition and AMD FX processors. Why are i3s so overlooked? My system has an i3, and I have NEVER been able to make it lag; I can throw anything you can imagine at it and it keeps going strong. The integrated graphics can of course lose a lot of framerate in more intensive games, but for integrated graphics it packs a lot of power. i3s cost about the same as budget FX CPUs, and they beat out the Pentium Anniversary Edition (NOTE: any i3 with a U, T, or other letter at the end doesn't count; I'm talking about REAL desktop i3s, not power-saving laptop ones), so why aren't they ever considered?
 
Just be careful pairing a G3258 with an H81/B85 or other low-end motherboard. Users on Win7/8.1 may not have noticed it if they skipped the optional updates, but Win10 includes it by default: microcode from Intel, pushed to users via Windows Update, that breaks overclocking on those boards. Many people are having issues even installing Win10 with a Pentium CPU, and those trying to overclock only get some of their settings back after disabling one of the two cores. I think it's b.s. that updates are killing features that already existed, but on the other hand people are overclocking on non-Z motherboards and there's no free lunch. Any time people rely on a workaround of any kind, it's subject to change, and in this case Win10 changed it. People on Win10 Home don't even have a choice; updates are forced on them. Yet another reason forced updates are iffy: not all updates are improvements.
 
That 50% price increase seems to resonate throughout the Intel lineup when it comes to adding HT: a 50% price jump from the G3258 to the i3, and a 50% jump from the i5 to the (quad-core) i7. Not saying HT is completely useless, but it can be hit or miss, and in most cases it only adds 12-15% performance. Only in a few does it add as much as 30%, and yet they charge a premium for it.

HT can be a lot more useful going from 2c/2t to 2c/4t than from 4c/4t to 4c/8t. The 5820k is only $10-12 more expensive than the 6700k and, HT aside, has 2 additional cores and 15MB of cache vs the 6700k's 8MB. They both run DDR4, yet the 5820k runs it in quad channel vs socket 1151's dual channel. Pricing the 6700k like that is like trying to sell an i3 for $200. Ouch.
 
Wrong, the Pentium Anniversary was $70 last I checked; the i3 is $100. And the i3 has more than just HT, it is simply more powerful, which is why the Pentium needs such a crazy overclock to do well. If you overclocked, say, my i3-3220 to around 4.3 GHz, it would be far more powerful than an overclocked Pentium Anniversary. The Pentium is only good because it's being pushed harder. It's like using nitrous on a car: it makes it faster, but you're still pushing the same hardware.
 
My initial plan was to build around a G3258 (purchased and returned before assembly), but I found a special one-day deal on an i3-4170 ($77) with the assumption that I would pair it with a GTX 960, which I didn't get. The only reason I'm considering an i5-4590 ($160) or Xeon E3-1231 v3 ($210) is to run a more balanced rig with the GPU I ended up with: an R9 290X. At $77 the i3 appears to have the best price-to-performance value, and I haven't noticed any bottlenecks in the games I run. If I choose to keep the i3 (the return window is closing), I hope it can remain gaming-relevant for the next 3-5 years.
 


Well, at minimum I'll be using my i3 for the next 3 years, so we're in the same boat if you decide to keep it. If you have the money, the best thing to do would be to get one of your other choices, but if not, just keep the i3; it shouldn't get outclassed THAT fast. I'm almost certain the G3258 will be outclassed within the next 4 years though 😛
 
An i3 can be outclassed even today if the programmers are crap. Look at Ubisoft's Far Cry 4: I don't know the current state of the game, but at launch it performed miserably on anything with fewer than 4 real cores.
 


As in it had shit code that didn't allow it to run on dual cores, but an i3 still averages well over 60 fps.
http://www.techspot.com/review/917-far-cry-4-benchmarks/page5.html

No minimum framerates are shown, unfortunately.
 


That proves the power of HT and the i3 😉
 
The i3 is a good budget choice, but it should be recognized that it's a budget CPU. Hoping it will remain relevant for the next 3-5 years is pretty optimistic; more realistic is the next 2-3 years. If it could handle gaming loads well for the next 5 years, what need would there be for anything above it? The nice thing is that with an i3 someone isn't dead stuck: they're not saddled with a motherboard that can't handle something stronger, and they can easily drop in an i5 or i7 even without a Z-series board (overclocking isn't mandatory).

It's kind of along the lines of saying, forget it, I'm buying one of the lowest-end economy cars they make and it needs to last me 20 years and perform well the entire time. It would only be natural to expect it to be limping along partway through that time frame; it's a big ask. If you're single, or it's just you and a partner, a 2-seater is great. Five years down the road you have 3 kids, a dog, and a cat; needs change as time progresses, and an upgrade is going to be needed. Games won't be sitting still either, and the Pentiums will be the first to be outgrown since they're the lowest tier of hardware. Next is the i3, with the i5 and i7 above it.

That benchmark doesn't prove much about FC4 other than that it's not very CPU-bound. Granted, the i3 is keeping up close to the i5/i7, but next on the list is an FX 4320, and we already know the huge gap in IPC between the i3 and the FX 4xxx. Just because my E8400 kept up in COD: Ghosts doesn't mean it's an all-around good gaming choice.

Take a game like GTA V; it depends on the benchmark. When you crank up the resolution and push the GPU, the i3 seems to do OK. Here the i3 fell behind a true quad core by 10fps with a Titan X at 2560x1600.
http://www.techspot.com/review/991-gta-5-pc-benchmarks/page6.html

Dial it back to 1920x1080 with the Titan X at max graphics, and the i3 lagged in at 28fps min / 42fps avg, while the i5 and i7 (both true quad cores) were about tied at roughly 35-40fps min / 68-70fps avg. That shows two things: one, HT isn't all it's cracked up to be; two, this is a game that's already been out several months and the i3 is showing its weakness against true quad-core CPUs, so where is that going to leave it in 3-5 years?
http://www.eurogamer.net/articles/digitalfoundry-2015-the-best-pc-hardware-for-grand-theft-auto-5

That's just one of the reasons so many people recommend an i5 for gaming more often than an i3. It may change in 3-5 years; there's no telling what games will require then. There are games out now that aren't as intense or hard on hardware as games from a couple of years ago. I'm not aware of any game that cripples an i5 but runs great on an i7; if it doesn't play well on an i5, it doesn't play well on the i7 or anything else. The same isn't true for the i3: some games will expose its lack of cores even right this moment. For the budget, it does respectably well and holds up in most games, but there's a difference between most games and all games. So long as it performs in the handful of games the user intends to play, and they need or want nothing more, it's a good match.
 

Because, I think, in AAA games things DO get intense, usually right out of the gate, and at THAT point even an R7 or GTX 6xx-series card smokes the integrated graphics on your i3. Do you play a lot of racing games? Heavy-duty FPS games (BF3/4 or the newer COD games)? Any graphically intense game will lose frames and usually begin tearing/splitting the screen due to its inability to keep a consistent stream going. Indie games, as a norm, run at lower framerates, and though many have really improved in graphics quality, they depend more on the CPU for rendering than the GPU. I think that's what he may have been going for.
 


None in the games I play: Shadow of Mordor, the DayZ mod, Supreme Commander, BioShock, GTA 5, Cities: Skylines, to name a few. The fps stays pretty consistent with a mid-range GPU like mine; minimums don't go below 45.
 


That's very interesting. I've seen a lot of people say that because games see it as 4 threads when it isn't actually as powerful as an i5, it stutters. Thanks for the response.
 


They work perfectly fine; I really don't know why all of this misinformation exists on these forums and the internet in general. People have to understand that even if a game can use 4 or 6 threads, the first 2 threads handle 90% of the total CPU load.

Games and programs also detect hyperthreading. HT has been out for a while, so it being utilized properly is no surprise.

I'm not saying you should do 4x 980 Ti SLI with it, but anything below a 970 will run just fine with minimal bottlenecks.
 
I think most of the overall confusion about the actual performance of the i3 is because people don't understand what hyperthreading is exactly and how it works (and trust me, I don't know how it works either). Hyperthreading is really difficult to find understandable information on. The most you ever get is that it "splits a processor into two virtual cores, which can improve performance by 20-30%". I have heard all the analogies for how hyperthreading works, but I still don't have a clear understanding of how the execution works. Also, is it possible for a core to hyperthread into 4 virtual cores?

Anyway, could someone explain what hyperthreading actually is, how it works, and what the gains are, without fancy analogies? Most people on this forum (myself included) are uneducated about how hyperthreading works, and I think that plays a big role in not understanding where the i3 and i7 really come into play. Simply hearing from some website that the i7 improves performance by X amount is not good enough.
 


Exactly. I don't even know how it specifically works.
 
Turkey3_scratch, there are a lot of sites that explain hyperthreading. Depending on the level of detail, you'll find anything from simple analogies to multi-page reports on exactly how it interacts with various types of software. Not all software makes the best use of hyperthreading, so you can have a program that runs no differently than without HT, or 5-10% better, or 25% better, or even 5% worse. HT doesn't split a core into 4 virtual cores, only 2 (unless Intel changes their architecture). There are shared resources which, if a piece of software is coded properly, it can take advantage of; if not, they can hurt performance. Each core with HT is capable of processing 2 instruction streams (threads) simultaneously. If one thread stalls waiting on data to catch up, the core can continue executing the other thread, keeping the core busier than it would otherwise be. There's some performance improvement, but the ideal/best-case scenario is about 25%, rarely 30%.

I have a feeling it would take ages of nit-picky detail, breaking down how various programs are coded, to determine how the CPU interacts with HT involved. That's why it often gets condensed to 'x amount better performance'. Knowing for the sake of knowing would take some research and would likely require understanding coding, since the two interact with one another; that's a whole different avenue of research. The trouble is that whether HT works, and how well, is dynamic and code-dependent. It's not as simple as 'bingo, 30% improvement in everything'.
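If you'd rather see it on your own machine than take a spec sheet's word for it, here's a minimal Python sketch that just asks the OS how many real cores vs hardware threads it sees (assumes Python 3 with the third-party psutil package installed via `pip install psutil`):

```python
# Ask the OS how many real cores vs. hardware threads it sees.
# Assumes Python 3 with psutil installed (pip install psutil).
import os

import psutil

physical = psutil.cpu_count(logical=False)  # real cores only
logical = os.cpu_count()                    # hardware threads the OS schedules on

print(f"physical cores:  {physical}")
print(f"logical threads: {logical}")

if physical and logical and logical > physical:
    print(f"HT/SMT is on: {logical // physical} threads per core")
else:
    print("no HT/SMT detected")
```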

This benchmark page shows how different programs and even different versions of the same program utilize hyperthreading differently.
http://www.extremetech.com/computing/133121-maximized-performance-comparing-the-effects-of-hyper-threading-software-updates

https://pubs.vmware.com/vsphere-4-esx-vcenter/index.jsp?topic=/com.vmware.vsphere.resourcemanagement.doc_41/managing_cpu_resources/c_hyperthreading.html

https://www.pugetsystems.com/labs/articles/Hyper-Threading-may-be-Killing-your-Parallel-Performance-578/

http://www.pcworld.com/article/2971612/software-games/windows-10s-radical-directx-12-graphics-tech-tested-more-cpu-cores-more-performance.html

These are only a handful of programs, and others may use HT more effectively, but they're good comparisons since they took the i5 4690k and the i7 4790k and overclocked them to the same speed (4.7GHz). Many of the problems with comparing an i5 4690k and an i7 4790k come from the number of variables: one has 6MB cache, the other 8MB; one is clocked at a stock 3.5GHz (3.9 turbo), the other at a stock 4GHz (4.4 turbo); one has hyperthreading, one doesn't. With that many variables it's really apples and oranges when trying to compare performance. Clocking both at the same speed removes one of the bigger variables, the speed difference; now we're down to the variation in cache size and hyperthreading.
http://www.anandtech.com/show/8227/devils-canyon-review-intel-core-i7-4790k-and-i5-4690k/3

For instance, look at the page above at encoding with Handbrake. Comparing the two CPUs out of the box, the i5 achieved 510fps while the 4790k achieved almost 583fps. It appears hyperthreading gave the i7 a ~15% performance advantage -- or did it? If you look at the i7 4790 (non-k), which has a stock clock closer to the i5's, the difference is 8fps, or 1.6%. With both clocked the same as the 4790k, the difference is 3%. It's important to consider all the factors when comparing CPU performance in real-world programs, since the larger gains of an i7 over an i5 aren't always HT-related.
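To make that arithmetic explicit, here's a throwaway script re-running the percentages (the fps figures are the ones quoted from the review above; the rest is just math):

```python
# fps figures quoted from the anandtech handbrake comparison linked above
i5_4690k_stock = 510.0        # i5 4690k out of the box
i7_4790k_stock = 583.0        # i7 4790k out of the box ("almost 583")
i7_4790_nonk = 510.0 + 8.0    # i7 4790 (non-k) was ~8fps ahead of the i5

def gain(new, base):
    """Percentage improvement of `new` over `base`."""
    return (new - base) / base * 100

print(f"4790k vs i5 at stock: {gain(i7_4790k_stock, i5_4690k_stock):.1f}%")  # ~14%
print(f"4790 (non-k) vs i5:   {gain(i7_4790_nonk, i5_4690k_stock):.1f}%")    # ~1.6%
# With both chips forced to the same clock the review measured about 3%,
# so most of the stock "HT advantage" was really the 4790k's higher clock.
```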

How HT works on an i3 compared to a Pentium G is a bit different. The fundamentals of HT are similar throughout the Intel lineup, but you're comparing 2 simultaneous threads vs 4 instead of 4 vs 8, and there's a larger difference at the lower end in most cases. That's also why even multithread-aware programs that can properly utilize HT show smaller margins of improvement on a 6c/12t 5820k vs a 4c/8t 4790k. It's possible programs aren't 'that' heavily multithreaded, but increasing a CPU's hardware by 50% doesn't net 50% improvements.
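One way to see why 50% more hardware doesn't mean 50% more speed is Amdahl's law. This little sketch is my framing, not anything from the benchmarks above, and the 80% parallel figure is purely hypothetical:

```python
def amdahl_speedup(parallel_fraction, n_threads):
    """Amdahl's law: overall speedup when only part of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_threads)

# Pretend 80% of a program parallelizes cleanly (hypothetical figure):
for threads in (4, 8, 12):
    print(f"{threads:2d} threads: {amdahl_speedup(0.80, threads):.2f}x speedup")
# 4 threads:  2.50x
# 8 threads:  3.33x  <- double the threads, nowhere near double the speed
# 12 threads: 3.75x
```

The serial part of the work caps the gain no matter how many threads you add, which matches how the 5820k's extra cores show diminishing returns.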

With all the specs and details that can make someone's head spin, it's typically a lot easier just to look for a relevant benchmark. Say a person does video encoding and uses Handbrake for it (Handbrake used more often than not because it's open source, free, and available to anyone). Now compare CPUs on Handbrake, since that's the program in use: how much faster is one over another? Forgetting all the details, someone can see what all that fancy mumbo jumbo really means. Is it shaving the encoding time in half, or merely knocking a few seconds off every hour of video? All the impressive theory in the world means squat if it doesn't translate into actual gains. No one can 'see' whether their CPU uses modules, straight cores, HT, etc.; what they will actually notice is the difference in encoding time. 2 seconds here and 8 seconds there means very little; reducing a 30-minute encoding project to 20 minutes, on the other hand, is something the user will appreciate. (Just one example.)
 


Thanks, that was the most helpful explanation! What type of data would need to "catch up" though? Like grabbing a value from a memory address?
 
That I don't know; I'm not a professional coder. It could just be a matter of part of a program's code being processed while the CPU waits for additional data to be fetched from the storage drive or RAM. As fast as RAM is, information in system memory is much slower to reach than the data kept in the extremely fast L2/L3 cache on the processor. And as many tricks as are used (pipeline depth, prefetch, etc.), there will always be some delay waiting on input from the user. You can't predict everything, so it can't always be preloaded and ready to run. You may reach a point in an open-world game where the running code can't possibly know what your next move will be: forward, left, left then right, and so on. There may be a wait there where the software (the game) is stalled until you choose which way to go so it knows how to proceed. Compare that to a more automated task like video encoding: once you select the method, output file type, filters, etc., the program loads it all up, and once you click 'start' it runs through an automated algorithm that's much more efficient since it's not directly waiting on user input. It runs the same code from beginning to end with the chosen options, and since it's so predictable it can be better tuned to work with the CPU and memory for best performance.
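For whatever it's worth, here's a rough software analogy in Python. It is not how HT literally works (HT lives inside the core, below the OS), but it shows the same idea: work that's stalled waiting on something slow can overlap with work that keeps executing, so the total time is close to the longer task rather than the sum of the two:

```python
# Rough analogy only: a stalled task (time.sleep standing in for a memory or
# disk wait) overlaps with a task that never waits, so the wall-clock time is
# roughly the longer of the two, not their sum.
import threading
import time

def stalled_task():
    time.sleep(1.0)              # "waiting on data to catch up"

def busy_task():
    total = 0
    for i in range(10_000_000):  # pure computation, never blocks
        total += i

start = time.perf_counter()
a = threading.Thread(target=stalled_task)
b = threading.Thread(target=busy_task)
a.start(); b.start()
a.join(); b.join()
print(f"overlapped wall time: {time.perf_counter() - start:.2f}s")
```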
 
Honestly, Speedstang, I'll just throw my experience out there for whatever it's worth. I have two computers in my near vicinity. Here are their specs and how long I've owned them:

Laptop - Asus (2013)
Intel i5-2410M
HD 3000 graphics
6GB 1600MHz
500GB SATA HDD

Desktop (current computer, 8/2015)
AMD A6-5400K
Nvidia GT 730 1GB GDDR5
4x2GB HyperX 1866MHz
750GB SATA HDD
480W 80+ PSU

My experience on the laptop ranged from Dota 2 and Diablo 3 to Skyrim and Prototype 2. My average FPS was always 20-30, with a few exceptions dipping lower, and my experience was pleasant; I have no issues at those framerates. However, I wanted to play BF: Hardline and other games my laptop couldn't support, so I built my current rig. It ran me around $230, which is why it's a bit mismatched: normally you'd see the Pentium Anniversary in there instead of a strange A6-5xxx-series processor, which is only a dual core. But as a combo with the mobo at under $50, it was a deal I couldn't pass up, not for true budget gaming anyway.

Long story short, I play BF: Hardline on high at 30-40 fps, Rust on high at 30 fps, Dota 2 on max at 50-60 fps, CS:GO on max at 60-100 fps, and several other titles at comfortable framerates, all on high settings with few exceptions.

To me, budget gaming comes down to one simple definition: being able to do what you want to do at the lowest price possible. It doesn't matter if that's 5fps; if that's what you want, you've achieved what you set out to do. For me, 30 fps is the goal. I like 40+ because it reduces some of the loading jitter from textures. 20 generally means that either textures are overloading my VRAM, or constant action is overloading my CPU or GPU, depending on the type of action.

Newer games just have more stuff going on. Many of them don't even look better; there's just MORE of everything. More independent leaves, more blades of grass, more strands of hair; more things for your two processors to load, more things for your RAM to remember. To answer the original question, I think one simple answer to why people don't use i3s for gaming, for the most part, is L2 and L3 cache size. More stuff to do means more stuff for everyone to remember, and the dedicated memory stored on the CPU itself is more important for gaming than people tend to realize. My 1MB of L2 cache is the BIGGEST bottleneck in my rig; everything else is good. I'm running a dual core, but it's stable at 4GHz, and my GPU is stable overclocked to 1030MHz, both staying under 60 degrees in most situations. The FX processors are known for their large L2 caches, and the i7s carry L3 caches well beyond the petty 1MB I offer at L2. They simply work better for doing more stuff.
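As a footnote to the cache point, here's a toy Python sketch (assumes numpy is installed) of the effect being described. Both passes touch exactly the same data and do the same work, but the shuffled access order defeats the caches and the prefetcher, so it runs noticeably slower on most machines:

```python
# Toy demonstration of cache locality: identical work, different access order.
# Assumes numpy is installed (pip install numpy).
import time

import numpy as np

n = 20_000_000                      # ~160MB of int64, far bigger than any CPU cache
data = np.ones(n, dtype=np.int64)

seq_idx = np.arange(n)              # walk the array front to back
rnd_idx = np.random.permutation(n)  # walk the same array in a random order

t0 = time.perf_counter()
data[seq_idx].sum()                 # cache/prefetcher-friendly gather
t_seq = time.perf_counter() - t0

t0 = time.perf_counter()
data[rnd_idx].sum()                 # every access is likely a cache miss
t_rnd = time.perf_counter() - t0

print(f"sequential: {t_seq:.2f}s   random: {t_rnd:.2f}s")
```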