News Intel Launches Tiger Lake: Up to 4.8 GHz, LPDDR4 Memory, Iris Xe Graphics up to 1.35 GHz

Why would you choose to play any of those on a laptop? Gaming on a laptop is terrible, no matter how fast it is.
A lot of people (more than you would think) buy "gaming laptops" and plug a gaming screen into them, together with a gaming keyboard and mouse. Gaming laptops are very much the entry point for console users looking to get into PC gaming.
 
Since you want me to do the work for you, a simple Google search will provide those answers.



https://hardware.slashdot.org/story...l-engineer-claims-skylake-qa-drove-apple-away



Those are three links to the same interview. And he's a little confused. He even says himself in that interview that he doesn't really know what's going on, but if he had to guess...
 
Brand new, just released Project Cars 3.
[Chart: Project Cars 3 CPU benchmark]


Look at how pathetic that Intel i3-9100 quad core is. Let's not sugarcoat this: the i3-9100 is nowhere near Intel's fastest quad core. It's a $110 Coffee Lake 4-core/4-thread part (no Hyper-Threading) with a single-core boost of 4.2 GHz. It loses to the $430 12-core/24-thread 3900X by less than 1 fps: 3x the cores and 6x the threads for 0.7% more performance. There are settings in this review where the i3 actually beats the 3900X. Also note that the 6-core/6-thread 9600K comfortably beats the 3900X, which has twice its cores and four times its threads, by 14%.

Brand new MS Flight Simulator.
[Chart: MS Flight Simulator CPU benchmark]

The 3900X has a huge win over the i3 here of almost 14%. Almost 4x the cost and 6x the threads for 14%; AMD should be proud of themselves. Except that the 9600K beats the 3900X by that same 14% margin with half its cores and a quarter of its threads.

Now realize, the top-end Tiger Lake CPUs just announced would pound the i3-9100 across the board. The i7-1185G7 is a 4-core/8-thread CPU with an IPC uplift of probably 20-25% over Coffee Lake, a 4.8 GHz single-core turbo, and a 4.3 GHz all-core turbo. If you could drop a 2080 Ti into a Tiger Lake motherboard, this low-wattage mobile CPU would undoubtedly beat the 3900X at any setting in Cars 3 and would give it a serious run for its money in Flight Simulator. If Tiger Lake is a pathetic quad core, what does that make the 12-core AMD CPU that loses to it?
I would suggest you read the Ryzen 5 3600X vs. Core i5-9600K article on this site.
I think you are going to be shocked to see that 12 threads can beat 6 threads, and that the Ryzen 5 3600X took the gaming crown with both at stock settings.
While you're educating yourself, look for people having fun in Battlefield with Intel's amazing quad-core CPUs.
After you do some research and watch videos comparing 4- vs. 6- vs. 8-core CPUs, I want to welcome you to 2020.
It is time to wake up.
MS Flight Simulator gives 50 fps at 1080p and at 2160p with an i9-9900K. Almost no CPU, even paired with an RTX 3090, will be able to break that 50 fps limit. At least learn how to read the article that you link to. MS FS is one of the worst-optimized games there is.
I am truly sorry I burst your Intel fanboy bubble. Welcome to 2020, where quad cores give unplayable framerates in certain games, and 12 threads actually do perform better than 6.
 
1. Some people on here have probably never heard of IPC, frequency, or power efficiency, and probably think that all quad cores are the same, as if an i7-7700K were the same as a Core 2 Quad Q6700.

2. This is a quad core, but across the board it destroys AMD's newest 8-core/8-thread offering (the 4700U) as well as Intel's own 10th-gen 6-core/12-thread 15 W CPU (the i7-10710U), not just in single-threaded workloads but in multithreaded ones too. And against the 8c/16t 4800U, there are plenty of workloads where Intel wins, and in some it wins big.

3. These CPUs have a nominal TDP of just 15 W. Just 3 years ago, if you wanted a quad-core mobile CPU you needed to go for 45 W SKUs (e.g. the 7700HQ). Compare the i7-7700HQ in Geekbench 5 here (ST: 829, MT: 3417) with the i7-1165G7 here (ST: 1533, MT: 5769). So now you get about 70% more multithreaded CPU performance (not to mention 85% more single-threaded performance) for a third of the power! And we are also talking about a large laptop versus an ultraportable.

4. Funny that some people mention gaming. These CPUs pack immensely better integrated graphics: a 4x improvement over the UHD Graphics of Comet Lake CPUs and 2x over Ice Lake CPUs (which we know were on par with AMD's APUs).

5. These CPUs are primarily meant for (i) thin-and-light ultraportable laptops and/or (ii) laptops without a discrete GPU. They are not meant to be paired with powerful discrete graphics cards. But even if you were to pair them with a discrete GPU in beefier laptops, Intel wins in gaming CPU performance anyway; it wins even with its last-gen quad cores, let alone these. And unless you are going to be playing games at 720p, you would need a lot more GPU horsepower for such a quad core to bottleneck the GPU. A mobile RTX 2060 isn't such a GPU.

6. Even if you were to pair it with a better GPU, you wouldn't leave much performance on the table even at 1080p. And of course there's no bottleneck at 4K.

7. Speaking of pairing it with better GPUs, a great thing about Intel mobile CPUs is that they have Thunderbolt. So you can buy a GPU-less laptop and use an eGPU over Thunderbolt whenever you want to do heavy gaming. As has been shown, the PCIe 3.0 x4 interface of Thunderbolt, even with a 2080 Ti, contrary to popular belief, only incurs about a 10% GPU performance loss.

8. In any case, if mobile high-fps gaming is your focus, you are better off buying a laptop with a powerful discrete GPU and a 45 W H-series CPU (the Tiger Lake H-series will be unveiled at CES in January), paired with the right high-refresh-rate screen.
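For what it's worth, the percentages in point 3 can be checked directly from the quoted Geekbench 5 scores (the power ratio uses the nominal TDPs; this is just the arithmetic, not new data):

```python
# Rough perf-per-watt check of point 3's Geekbench 5 numbers.
def pct_gain(new, old):
    """Percent improvement of new over old."""
    return (new / old - 1) * 100

i7_7700hq = {"st": 829, "mt": 3417, "tdp_w": 45}   # 45 W quad core, ~2017
i7_1165g7 = {"st": 1533, "mt": 5769, "tdp_w": 15}  # 15 W Tiger Lake

st_gain = pct_gain(i7_1165g7["st"], i7_7700hq["st"])   # ~84.9%
mt_gain = pct_gain(i7_1165g7["mt"], i7_7700hq["mt"])   # ~68.8%
power_ratio = i7_1165g7["tdp_w"] / i7_7700hq["tdp_w"]  # one third

print(f"ST: +{st_gain:.1f}%, MT: +{mt_gain:.1f}%, at {power_ratio:.2f}x the power")
```

So "about 70% more multithreaded performance" and "85% more single-threaded" both hold up against those scores.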
I am not going to reply to all the nonsense that you wrote, but I just wanted to remind you of Intel's recent history in benchmarks:
  1. Using an industrial chiller while overclocking their CPU to 1000 W to show their CPU is better, while forgetting to mention it.
  2. Using a stronger Nvidia GPU while claiming their CPUs are a lot stronger in gaming.
  3. Using a mobile CPU with twice the TDP.
  4. Using different RAM to show an improvement, while using the slowest RAM possible on the AMD CPUs.
So let's calm the warrior fanboy inside us and wait for a review that does not use LN2, a nuclear reactor, a stronger Nvidia GPU, or DDR4-600,000.
 
Good technology, but who games on mobile nowadays? And if they do, they're spending $3k on a nice gaming laptop to begin with. You could spend half of that on a nice desktop and the other half on a good video card, monitor, mouse, keyboard, and so on.
I agree with what you said here, but I think the number of people gaming on laptops is likely quite high.
Reviews don't tell you everything. The problem with these U-series parts is that their performance is entirely dependent on cooling and TDP. Although it's a 15 W part, you can run it at 25 W provided you have enough cooling capacity. At 25 W it will be a lot faster than at 15 W due to higher clockspeeds. Look at the cooling capacity of the two laptops below. And then, Intel CPUs (at least in their NUCs) are able to run at 45 W for brief periods of time (the PL2 limit). This makes them seem a lot faster, especially if the benchmark completes within the PL2 window.

Both the i5-8565U and the Ryzen 7 4800U are 15 W parts, configurable to a 25 W TDP. So, if both CPUs are at their max 25 W TDP and 100% load, which one do you think will throttle first due to heat?

See this Ryzen 4800U, look at the cooling capacity.

https://cdn.mos.cms.futurecdn.net/q2SzpEGVbkHTjbyGVPg7KC-650-80.jpg

Now look at this Dell 5300. Look at the cooling.

https://preview.redd.it/dqaoy3s4a63...bp&s=34da26f204b30120424fde5087b71b847ed2f009
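The burst behavior described above can be sketched with a toy power-limit model. All constants here are illustrative, and Intel's real algorithm (an exponentially weighted power window with per-SKU tuning) is more involved, but the shape is the same: run at PL2 until the power budget is spent, then fall back to PL1.

```python
# Toy model: draw up to PL2 while an exponentially weighted moving
# average (EWMA) of package power stays under PL1; afterwards, PL1.
def simulate(pl1=15.0, pl2=45.0, tau=28.0, load_w=45.0, seconds=120, dt=1.0):
    """Return per-step package power for a sustained heavy load."""
    avg = 0.0  # EWMA of power; starts near idle
    trace = []
    t = 0.0
    while t < seconds:
        # Burst to PL2 while the averaged budget is below PL1.
        power = min(load_w, pl2) if avg < pl1 else pl1
        avg += (power - avg) * (dt / tau)  # update the moving average
        trace.append(power)
        t += dt
    return trace

trace = simulate()
print(trace[0], trace[-1])  # bursts at 45.0 W first, settles to 15.0 W
```

With these numbers the burst lasts roughly ten seconds, which is why a short benchmark can finish entirely inside the boost window and flatter the sustained numbers.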
I agree that sustained performance is subject to a few factors, and the cooling solution is one of them. In your example of the Intel i5-8565U and Ryzen 7 4800U, assuming the same cooling solution and TDP, it is difficult to say which will throttle first. Intel's strategy is to go all out on clockspeed, which gives them the single-core advantage. AMD's is to go wide: more cores at moderate clockspeeds. Either way requires more power and generates more heat, but high clockspeed is more inefficient, in my opinion and observation. Moreover, you are comparing a more efficient 7 nm process against Intel's overclocked 14nm+++, and the latter is not as power efficient, as seen in so many reviews.

And lastly, throttling is one thing, but depending on what you are running, the 8-core Ryzen may have accomplished more even if it throttles earlier than the 4-core Intel chip.
 
1440p, but I'm a flight-sim nut, and that's always been, and will probably continue to be, CPU intensive. I edit video, but nothing longer than 30 minutes. And I've always been more patient with long renders than with low framerates; at least I can go have a snack. For the first time in my life, I realized performance could be overshadowed by other factors. To put it in car terms: buying an Intel CPU right now feels like buying a 1995 Ford Mustang with a turbocharger and a questionable oil/cooling system. Hot to trot and ready to rot.
CPU intensive is fine. But being CPU intensive yet optimized for only 4 cores is silly when more than 4 cores are common nowadays.

Also, I don't disagree that taking things slow and going to get a cup of coffee or a snack while waiting is nice. But when you compare processors, you think about doing more, faster, not the other way round. I will certainly not pick Intel just because it gives me time for a snack; I can still get a snack or coffee even when it renders fast. And at 1440p, the difference between Intel and AMD processors is not great. If a game is unplayable on an AMD CPU due to CPU limitations, an Intel processor will not miraculously make it playable.
 
I think you've missed the point of this discussion. We're not talking about video card performance. Someone said that quad cores are pathetic for gaming today. If that's the case, then lowering the resolution should prove the point, as it shifts the bottleneck from the GPU to the CPU. Even with the odds stacked against Intel's quad core, it's still hanging tight with AMD's 12-core CPU. That would indicate these mobile Tiger Lake CPUs would be faster than AMD's 3900X in gaming despite the huge core deficit.

You want higher resolution? Here's Cars 3 at 4K.
[Chart: Project Cars 3 benchmark at 4K]


The i3 is now 0.3 fps slower than the 3900X. A quad core again, and not looking too pathetic to me. If I were building a rig to play this game, why would I pay $430 for 12 cores when a $110 quad core would give me the exact same performance?

Flight Simulator is not the exception. The exceptions are games that gain anything worthwhile from more than 6 cores. There are far more games that can't properly use more than 4 cores/8 threads than there are games that can utilize more than 6/12.
I think you missed my points, not the other way around. The main point is this: MS FS is an outlier purely because of its poor optimization. Some have pointed out that flight sims are very CPU intensive. If that is the case, this game would have been fine had it launched 5 years ago, when 4 cores were the in thing. With this limitation, it is no surprise that a 4-core processor gets results very close to processors with higher core counts. It is akin to someone imposing a speed limit on your otherwise fast car.

In fact, if anything, MS FS just proves that 4-core processors are starting to become insufficient for gaming, because in a well-optimized game it is usually the GPU that is the bottleneck. Instead, this is one of the rare cases where we see a very bad CPU bottleneck.
 
I agree with what you said here, but I think the number of people gaming on laptops is likely quite high.

I agree that sustained performance is subject to a few factors, and the cooling solution is one of them. In your example of the Intel i5-8565U and Ryzen 7 4800U, assuming the same cooling solution and TDP, it is difficult to say which will throttle first. Intel's strategy is to go all out on clockspeed, which gives them the single-core advantage. AMD's is to go wide: more cores at moderate clockspeeds. Either way requires more power and generates more heat, but high clockspeed is more inefficient, in my opinion and observation. Moreover, you are comparing a more efficient 7 nm process against Intel's overclocked 14nm+++, and the latter is not as power efficient, as seen in so many reviews.

And lastly, throttling is one thing, but depending on what you are running, the 8-core Ryzen may have accomplished more even if it throttles earlier than the 4-core Intel chip.

Err, no. The Dell 5300 has a lot worse cooling than the Lenovo laptop.
I agree with what you said here, but I think the number of people gaming on laptops is likely quite high.

I agree that sustained performance is subject to a few factors, and the cooling solution is one of them. In your example of the Intel i5-8565U and Ryzen 7 4800U, assuming the same cooling solution and TDP, it is difficult to say which will throttle first. Intel's strategy is to go all out on clockspeed, which gives them the single-core advantage. AMD's is to go wide: more cores at moderate clockspeeds. Either way requires more power and generates more heat, but high clockspeed is more inefficient, in my opinion and observation. Moreover, you are comparing a more efficient 7 nm process against Intel's overclocked 14nm+++, and the latter is not as power efficient, as seen in so many reviews.

And lastly, throttling is one thing, but depending on what you are running, the 8-core Ryzen may have accomplished more even if it throttles earlier than the 4-core Intel chip.

My comparison was actually to show the terribly small heatsink in the Dell 5300. The i5 will hardly be able to run at a 25 W TDP, and cannot sustain it for long.
 
Brand new, just released Project Cars 3.
[Chart: Project Cars 3 CPU benchmark]


Look at how pathetic that Intel i3-9100 quad core is. Let's not sugarcoat this: the i3-9100 is nowhere near Intel's fastest quad core. It's a $110 Coffee Lake 4-core/4-thread part (no Hyper-Threading) with a single-core boost of 4.2 GHz. It loses to the $430 12-core/24-thread 3900X by less than 1 fps: 3x the cores and 6x the threads for 0.7% more performance. There are settings in this review where the i3 actually beats the 3900X. Also note that the 6-core/6-thread 9600K comfortably beats the 3900X, which has twice its cores and four times its threads, by 14%.

Brand new MS Flight Simulator.
[Chart: MS Flight Simulator CPU benchmark]

The 3900X has a huge win over the i3 here of almost 14%. Almost 4x the cost and 6x the threads for 14%; AMD should be proud of themselves. Except that the 9600K beats the 3900X by that same 14% margin with half its cores and a quarter of its threads.

Now realize, the top-end Tiger Lake CPUs just announced would pound the i3-9100 across the board. The i7-1185G7 is a 4-core/8-thread CPU with an IPC uplift of probably 20-25% over Coffee Lake, a 4.8 GHz single-core turbo, and a 4.3 GHz all-core turbo. If you could drop a 2080 Ti into a Tiger Lake motherboard, this low-wattage mobile CPU would undoubtedly beat the 3900X at any setting in Cars 3 and would give it a serious run for its money in Flight Simulator. If Tiger Lake is a pathetic quad core, what does that make the 12-core AMD CPU that loses to it?
FWIW, I really want to get a Core i3-10100 for testing. (Actually, screw it: I just ordered one!) It will probably be weeks before I have time to test it properly with some GPUs, but it's on my list now! $133, plus the cost of a Z490 (or at least B460/H470) board. Still, that's basically less than $250 for the CPU+mobo upgrade.
 
1. Some people on here have probably never heard of IPC, frequency, or power efficiency, and probably think that all quad cores are the same, as if an i7-7700K were the same as a Core 2 Quad Q6700.

2. This is a quad core, but across the board it destroys AMD's newest 8-core/8-thread offering (the 4700U) as well as Intel's own 10th-gen 6-core/12-thread 15 W CPU (the i7-10710U), not just in single-threaded workloads but in multithreaded ones too. And against the 8c/16t 4800U, there are plenty of workloads where Intel wins, and in some it wins big.

3. These CPUs have a nominal TDP of just 15 W. Just 3 years ago, if you wanted a quad-core mobile CPU you needed to go for 45 W SKUs (e.g. the 7700HQ). Compare the i7-7700HQ in Geekbench 5 here (ST: 829, MT: 3417) with the i7-1165G7 here (ST: 1533, MT: 5769). So now you get about 70% more multithreaded CPU performance (not to mention 85% more single-threaded performance) for a third of the power! And we are also talking about a large laptop versus an ultraportable.

4. Funny that some people mention gaming. These CPUs pack immensely better integrated graphics: a 4x improvement over the UHD Graphics of Comet Lake CPUs and 2x over Ice Lake CPUs (which we know were on par with AMD's APUs).

5. These CPUs are primarily meant for (i) thin-and-light ultraportable laptops and/or (ii) laptops without a discrete GPU. They are not meant to be paired with powerful discrete graphics cards. But even if you were to pair them with a discrete GPU in beefier laptops, Intel wins in gaming CPU performance anyway; it wins even with its last-gen quad cores, let alone these. And unless you are going to be playing games at 720p, you would need a lot more GPU horsepower for such a quad core to bottleneck the GPU. A mobile RTX 2060 isn't such a GPU.

6. Even if you were to pair it with a better GPU, you wouldn't leave much performance on the table even at 1080p. And of course there's no bottleneck at 4K.

7. Speaking of pairing it with better GPUs, a great thing about Intel mobile CPUs is that they have Thunderbolt. So you can buy a GPU-less laptop and use an eGPU over Thunderbolt whenever you want to do heavy gaming. As has been shown, the PCIe 3.0 x4 interface of Thunderbolt, even with a 2080 Ti, contrary to popular belief, only incurs about a 10% GPU performance loss.

8. In any case, if mobile high-fps gaming is your focus, you are better off buying a laptop with a powerful discrete GPU and a 45 W H-series CPU (the Tiger Lake H-series will be unveiled at CES in January), paired with the right high-refresh-rate screen.


where is the dislike button?
 
I think you missed my points, not the other way around. The main point is this: MS FS is an outlier purely because of its poor optimization. Some have pointed out that flight sims are very CPU intensive. If that is the case, this game would have been fine had it launched 5 years ago, when 4 cores were the in thing. With this limitation, it is no surprise that a 4-core processor gets results very close to processors with higher core counts. It is akin to someone imposing a speed limit on your otherwise fast car.

In fact, if anything, MS FS just proves that 4-core processors are starting to become insufficient for gaming, because in a well-optimized game it is usually the GPU that is the bottleneck. Instead, this is one of the rare cases where we see a very bad CPU bottleneck.

Parallelization would help flight sims, but it's not as easy to implement as with tile-based rendering or frame-to-frame rendering, where one part of the image does not depend on another part in order to execute. The sequencing of data in a flight sim presents a different challenge: you need to know the control inputs plus the current aerodynamic environment before the aircraft's movements can be calculated, and the rendered view is then based on those combined factors. Coordinating that timing and data presents hurdles to multi-core scaling.

"Not optimized" has become a catch-all phrase when the realities of programming are more nuanced than that. Even as core counts grow, single-core performance will remain a relevant aspect of CPU performance.
 
I agree with what you said here, but I think the number of people gaming on laptops is likely quite high.

I agree that sustained performance is subject to a few factors, and the cooling solution is one of them. In your example of the Intel i5-8565U and Ryzen 7 4800U, assuming the same cooling solution and TDP, it is difficult to say which will throttle first. Intel's strategy is to go all out on clockspeed, which gives them the single-core advantage. AMD's is to go wide: more cores at moderate clockspeeds. Either way requires more power and generates more heat, but high clockspeed is more inefficient, in my opinion and observation. Moreover, you are comparing a more efficient 7 nm process against Intel's overclocked 14nm+++, and the latter is not as power efficient, as seen in so many reviews.

And lastly, throttling is one thing, but depending on what you are running, the 8-core Ryzen may have accomplished more even if it throttles earlier than the 4-core Intel chip.

One more thing: the 4-core i7-8559U is actually faster than the 6-core i7-10710U in many benchmarks.

Higher clockspeed is inefficient only if you have to increase voltage considerably; dynamic power scales linearly with clock but quadratically with voltage. More cores are never as efficient either: more silicon means more transistors, and every transistor you add draws power. And scaling across cores is never perfect.

Lastly, how fast the Ryzen or the Intel can go depends a lot on settings and cooling. I mentioned before that Intel CPUs can hit 45 W for short periods. I am not sure if the Ryzen can do that; if it can, it's about which CPU can hold on longer.
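The voltage-versus-clock point can be made concrete with the standard dynamic-power relation P ≈ C·V²·f. The constants below are illustrative, not from any real chip; the ratios are what matter:

```python
# Dynamic CPU power: linear in clock frequency, quadratic in voltage.
def dynamic_power(cap_f, voltage, freq_ghz):
    """P = C * V^2 * f (illustrative units)."""
    return cap_f * voltage**2 * freq_ghz

base = dynamic_power(1.0, 1.0, 4.0)        # baseline: 1.0 V at 4.0 GHz
clock_bump = dynamic_power(1.0, 1.0, 4.4)  # +10% clock, same voltage
volt_bump = dynamic_power(1.0, 1.1, 4.4)   # +10% clock AND +10% voltage

print(clock_bump / base)  # +10% power
print(volt_bump / base)   # ~+33% power
```

So a 10% clock bump alone costs about 10% more power, but if it also needs 10% more voltage, the cost jumps to roughly a third more power, which is the regime where chasing clocks gets inefficient.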
 
I would suggest you read the Ryzen 5 3600X vs. Core i5-9600K article on this site.
I think you are going to be shocked to see that 12 threads can beat 6 threads, and that the Ryzen 5 3600X took the gaming crown with both at stock settings.
While you're educating yourself, look for people having fun in Battlefield with Intel's amazing quad-core CPUs.
After you do some research and watch videos comparing 4- vs. 6- vs. 8-core CPUs, I want to welcome you to 2020.
It is time to wake up.
I sincerely hope this isn't the article you were referring to. Not sure why you couldn't post a link yourself, though after browsing through the results, I have a feeling I know why you didn't.

The following tests were considered "wins" for the 3600X:
[Charts: per-game benchmark results counted as 3600X wins]


It doesn't take a very astute person to realize THG only considered stock performance. If you're spending the extra money for an Intel K CPU, you're going to overclock it. Looking at the overclocked results, the 9600K came in 1st in 9 out of 10 games among this group of CPUs, which also included an overclocked 3800X, often finishing out by itself and only getting beaten in AotS. Here is the summary chart.
[Chart: overclocked gaming performance summary]

Stock vs. stock, the 3600X is 0.9% faster than the 9600K on average. OC vs. OC, the 9600K is 10.5% faster on average. If I'm reading this comparison and trying to decide which CPU to buy for a gaming rig, what idiot is going to pick the 3600X? Or the 3700X? Or even the 3800X over the 9600K? Just to rub salt in the wound, the 9600K is also the cheapest CPU of this group, currently $15 cheaper than the 3600X on Amazon. Enjoy your dyno queen; I'm spending my money on the option that is clearly fastest in the real world.
 
I am not going to reply to all the nonsense that you wrote, but I just wanted to remind you of Intel's recent history in benchmarks...

So let's calm the warrior fanboy inside us and wait for a review that does not use LN2, a nuclear reactor, a stronger Nvidia GPU, or DDR4-600,000.
Sigh. I think the one benchmark where AMD fanboys would top the charts is ignorance. Anyway, below are some Geekbench 5 results.
Here is the Tiger Lake i7-1165G7 in an Acer Swift SF314: it does ST: 1486, MT: 5827.
Here is a Ryzen 7 4700U in an Acer Swift SF314: it does ST: 1068, MT: 4637.

So basically the 1165G7 is running circles around the 4700U: 39.1% higher single-threaded performance and 25.6% higher multi-threaded performance. Same chassis/cooling, same memory speed. You may now offer your apology.
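For transparency, here is the arithmetic behind those percentages, using only the scores quoted above (the MT gap computes to about 25.7%, which the post rounds down to 25.6%):

```python
# Percentage lead of the first score over the second.
def lead(a, b):
    return round((a / b - 1) * 100, 1)

tiger = {"st": 1486, "mt": 5827}  # i7-1165G7, Acer Swift SF314
ryzen = {"st": 1068, "mt": 4637}  # Ryzen 7 4700U, same chassis

print(lead(tiger["st"], ryzen["st"]))  # ST lead: 39.1
print(lead(tiger["mt"], ryzen["mt"]))  # MT lead: 25.7
```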

where is the dislike button?
I get it: you don't like hearing the truth. But rather than asking for a dislike button, why don't you ask for an "in denial" badge under your username? Or do you prefer the FB one?
 
Sigh. I think the one benchmark where AMD fanboys would top the charts is ignorance. Anyway, below are some Geekbench 5 results.
Here is the Tiger Lake i7-1165G7 in an Acer Swift SF314: it does ST: 1486, MT: 5827.
Here is a Ryzen 7 4700U in an Acer Swift SF314: it does ST: 1068, MT: 4637.

So basically the 1165G7 is running circles around the 4700U: 39.1% higher single-threaded performance and 25.6% higher multi-threaded performance. Same chassis/cooling, same memory speed. You may now offer your apology.

I get it: you don't like hearing the truth. But rather than asking for a dislike button, why don't you ask for an "in denial" badge under your username? Or do you prefer the FB one?
In the presentation, Intel used LPDDR4-4266. Go and check the speed of the memory on the AMD system.
I will give you a hint: 1,000 MHz slower.
Calm the warrior fanboy inside you and let's wait for an honest review.

You may now offer your apology, and I want it 10 pages long.
 
I sincerely hope this isn't the article you were referring to. Not sure why you couldn't post a link yourself, though after browsing through the results, I have a feeling I know why you didn't.

The following tests were considered "wins" for the 3600X:
[Charts: per-game benchmark results counted as 3600X wins]


It doesn't take a very astute person to realize THG only considered stock performance. If you're spending the extra money for an Intel K CPU, you're going to overclock it. Looking at the overclocked results, the 9600K came in 1st in 9 out of 10 games among this group of CPUs, which also included an overclocked 3800X, often finishing out by itself and only getting beaten in AotS. Here is the summary chart.
[Chart: overclocked gaming performance summary]

Stock vs. stock, the 3600X is 0.9% faster than the 9600K on average. OC vs. OC, the 9600K is 10.5% faster on average. If I'm reading this comparison and trying to decide which CPU to buy for a gaming rig, what idiot is going to pick the 3600X? Or the 3700X? Or even the 3800X over the 9600K? Just to rub salt in the wound, the 9600K is also the cheapest CPU of this group, currently $15 cheaper than the 3600X on Amazon. Enjoy your dyno queen; I'm spending my money on the option that is clearly fastest in the real world.
RDR2 is an example of a game that stutters on the 9600K and does not stutter on the 3600; maybe they have fixed it by now.
The point was that some games are unplayable on 4 cores, and on very rare occasions on 6, so your fanboy quoting of Intel's "a quad core is all you need" is silly. Games these days can utilize 12 threads; that was the point, not that AMD is best for gaming with an RTX 2080 Ti at 1080p. And once again, I welcome you to 2020, where a quad core is not enough.

What I find pathetic is that the Core i3 and i7 have the same number of cores. I understand that a die-hard fanboy like you would rather have the i7, but who cares about 720p low-settings gaming and will spend money on an i7 that has the same core count?
At least they could have added some cores to the i7 instead of giving it the same specs with a better boost.
I would not choose a quad core; I already have a quad core, and this is not an upgrade even with the IPC increase.
Only fanboys like you would consider upgrading from one quad core to another.
 
RDR2 is an example of a game that stutters on the 9600K and does not stutter on the 3600; maybe they have fixed it by now.
The point was that some games are unplayable on 4 cores, and on very rare occasions on 6, so your fanboy quoting of Intel's "a quad core is all you need" is silly.
RDR2 had a weird bug where any CPU that lacked Hyper-Threading/SMT would stutter. For instance, an 8C/8T 9700K had much worse 1%/0.1% lows than a 4C/8T 7700K, which obviously makes no sense unless there's a bug. So it's not so much that 6 cores/threads weren't enough; that particular game just required SMT to run properly, regardless of how many cores you had. It still ran fine on a 4C/8T CPU. In general, 4C/8T CPUs still seem to be holding up pretty well for gaming.
 
RDR2 had a weird bug where any CPU that lacked Hyper-Threading/SMT would stutter. For instance, an 8C/8T 9700K had much worse 1%/0.1% lows than a 4C/8T 7700K, which obviously makes no sense unless there's a bug. So it's not so much that 6 cores/threads weren't enough; that particular game just required SMT to run properly, regardless of how many cores you had. It still ran fine on a 4C/8T CPU. In general, 4C/8T CPUs still seem to be holding up pretty well for gaming.
How can a 4C/8T run it perfectly fine while a real 8-core doesn't? Are you sure about that?
Maybe the game required more than 6 threads, real or not?
 
View: https://www.youtube.com/watch?v=z_ty-gajwoA

Look around 6:30.

Needing >6 threads wouldn't explain why the 9700K still had stuttering.
Worth noting is that RDR2 had some really poor optimization/coding at launch, which was fixed within about two weeks. It was spawning too many threads and simply overloading anything with fewer than 12 threads at the time. There was a workaround that improved performance as well, and then the first or second patch took care of things. Today, the 6-core/6-thread i5-9600K is going to perform about the same as the 4-core/8-thread i7-7700K at the same clocks.
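The oversubscription problem described above is essentially a thread-pool sizing issue. A minimal sketch (hypothetical workload, nothing from RDR2's actual code): size the pool to the machine's logical core count instead of spawning a fixed large number of threads.

```python
# Size a worker pool to the logical CPU count to avoid oversubscription.
import os
from concurrent.futures import ThreadPoolExecutor

def frame_task(i):
    # Stand-in for a unit of per-frame work.
    return i * i

n_workers = os.cpu_count() or 4  # logical CPUs, with a fallback
with ThreadPoolExecutor(max_workers=n_workers) as pool:
    results = list(pool.map(frame_task, range(8)))
print(results)
```

On a 6-thread CPU this caps concurrency at 6 workers; spawning 12+ threads instead just adds context-switch overhead, which matches the stuttering symptom described.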
 