Question Bottleneck query: i3-8100 and RTX 4070?

Hi, currently gaming on a Dell 22-inch monitor at 1680 × 1050 (i3-8100, GTX 1060 6GB, 32GB RAM, Corsair CX550M PSU).
I am thinking of taking things a notch up by upgrading to a 27-inch 1440p monitor and, consequently, an RTX 4070. How much of a bottleneck will I be looking at?

The games I like to play are graphically intensive (The Witcher 3, Red Dead Redemption 2...), but not very fast-paced FPSes.
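
For a rough sense of what that monitor jump means, here is a quick pixel-count comparison in Python, using only the two resolutions mentioned above:

Code:
# Pixel-count comparison between the current and planned monitors.
old = 1680 * 1050   # current 22" panel
new = 2560 * 1440   # planned 27" 1440p panel
print(f"{new / old:.2f}x")   # ~2.09x the pixels per frame

Roughly twice the pixels per frame shifts more of the per-frame work onto the GPU, which is why CPU limits generally matter less at 1440p.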
 
With a PC he can accept the issues and aim to upgrade the rest when he can. He can accept that the GPU will lose approx. half its performance. Some games will run fine and others poorly.
" Intel Core i3-8100 is too weak for NVIDIA GeForce RTX 4070 on 1920 × 1080 pixels screen resolution for Red Dead Redemption 2. This configuration has 41.2% of processor bottleneck . "


If you're quoting the bottleneck calculator...we're done here.

I don't debate junk science.
 
Just for funsies, comparison of benchmarks on my 5600X system with 2 cores disabled
For the most part, there's almost no difference except in more CPU intensive tests. So again, stop looking at core counts. They don't really mean anything unless the whole processor is known.
 
" Intel Core i3-8100 is too weak for NVIDIA GeForce RTX 4070 on 1920 × 1080 pixels screen resolution for Red Dead Redemption 2. This configuration has 41.2% of processor bottleneck . "


If you're quoting the bottleneck calculator...we're done here.

I don't debate junk science.
Then the CPU he should have for that GPU is a 13700K. Anything less is a waste of money. He should get an RTX 2060 or RTX 3060 instead. There are no benchmark resources for an 8100 paired with a 4070 because no one would pair the two.

The rest of the system needs upgrading.
 
Just for funsies, comparison of benchmarks on my 5600X system with 2 cores disabled
For the most part, there's almost no difference except in more CPU intensive tests.
Do you still have SMT enabled? I've tested 4 cores/4 threads against 4 cores/8 threads and there is a big difference in lots of titles. What average numbers are not very good at illustrating is the stuttering within games. Something like Cyberpunk, for example, on 4 cores/4 threads is a stuttery mess.
 
Just for funsies, comparison of benchmarks on my 5600X system with 2 cores disabled
For the most part, there's almost no difference except in more CPU intensive tests. So again, stop looking at core counts. They don't really mean anything unless the whole processor is known.
Fire Strike Combined Test
47.40 FPS (2 cores disabled)
65.46 FPS (all 6 cores)
+38.1%
gg, that shows I'm right. 4 cores in Cyberpunk 2077 will hitch and stutter. Remember he has a hard disk, low RAM bandwidth, and a much lower CPU frequency, plus a reduced cache amount.

Accept the issues with the build and upgrade, aiming to fix them later or make the system more balanced.
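
For anyone checking the arithmetic on those Fire Strike figures, the quoted uplift holds up; a quick Python check using only the two FPS numbers above:

Code:
fps_4c = 47.40   # Combined test, 2 cores disabled
fps_6c = 65.46   # Combined test, all 6 cores
print(f"+{(fps_6c - fps_4c) / fps_4c * 100:.1f} %")   # +38.1 %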
 
Then the CPU he should have for that GPU is a 13700K. Anything less is a waste of money. He should get an RTX 2060 or RTX 3060 instead. There are no benchmark resources for an 8100 paired with a 4070 because no one would pair the two.

The rest of the system needs upgrading.
Is the 4070 a good pair with the existing i3-8100?
No.

But "OMG! Bottleneck!" is a completely ridiculous concept.
 
Is the 4070 a good pair with the existing i3-8100?
No.

But "OMG! Bottleneck!" is a completely ridiculous concept.
You are better served with a much slower and cheaper GPU. His system already has the correct GPU: the GTX 1060. I bet he will be back later on the forums and you all will be telling him to replace his PSU, then the rest of the system.

Tell him the truth: it will be faster, but we cannot guarantee it will be a good experience. There will just be games that won't like his CPU, and he will likely lose most of the GPU's performance. He also has the minimum PSU.
 
Fire Strike Combined Test
47.40 FPS (2 cores disabled)
65.46 FPS (all 6 cores)
+38.1%
gg, that shows I'm right. 4 cores in Cyberpunk 2077 will hitch and stutter. Remember he has a hard disk, low RAM bandwidth, and a much lower CPU frequency, plus a reduced cache amount.

Accept the issues with the build and upgrade, aiming to fix them later or make the system more balanced.
I agree. I've played Cyberpunk with 4 cores and Hyper-Threading disabled, and it does, as you say, stutter to the point of being unplayable. 4 cores/8 threads is the absolute bare minimum, but even then it's not the smoothest experience. Unless it's an i3-12100; that's pretty good, but it's a lot faster than the previous Intel quad cores.
 
This all being said, I did upgrade a 4930K @ 4.5 GHz / DDR3-2400 system with an RTX 2080 and got near-playable frame rates in games. I also did have a clear CPU bottleneck, but that was many years ago.

It's the 4 cores and 4 threads that are the biggest deal.

Here you can see that the CPU loses over half its FPS compared to a 5600X. Look at the 0.1% lows: 12.6 FPS on the 4930K and 58 FPS on the 5600X. Cyberpunk 2077 is 5 FPS at the 0.1% lows.
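
Putting those 0.1% lows side by side in Python (the figures are the ones quoted above):

Code:
low_4930k = 12.6   # 0.1% low on the 4930K
low_5600x = 58.0   # 0.1% low on the 5600X
print(f"{low_5600x / low_4930k:.1f}x")   # ~4.6x higher lows on the 5600X
# Averages can look fine while lows like these are what you feel as stutter.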
 
You are better served with a much slower and cheaper GPU. His system already has the correct GPU: the GTX 1060. I bet he will be back later on the forums and you all will be telling him to replace his PSU, then the rest of the system.

Tell him the truth: it will be faster, but we cannot guarantee it will be a good experience. There will just be games that won't like his CPU, and he will likely lose most of the GPU's performance. He also has the minimum PSU.
In the entirety of this thread, no one said this was a "good idea".
 
Fire Strike Combined Test
47.40 FPS (2 cores disabled)
65.46 FPS (all 6 cores)
+38.1%
gg, that shows I'm right. 4 cores in Cyberpunk 2077 will hitch and stutter. Remember he has a hard disk, low RAM bandwidth, and a much lower CPU frequency, plus a reduced cache amount.
3DMark Fire Strike's physics test scales to at least 16 threads and runs constantly; however, most games don't do this, so it shouldn't be taken as "zomg, this is definitive proof".

This is also ignoring that none of the graphics scores changed significantly. If the game logic load is light enough, then the GPU has plenty of room to flex its muscles.

Accept the issues with the build and upgrade, aiming to fix them later or make the system more balanced.
I don't see any issues with it, unless the whole goal is to have a "balanced, no bottleneck machine" in which case, you'll be chasing bunnies until the end of time because such a thing doesn't exist. And the whole idea of a "balanced machine" to me is ridiculous.

There's only one factor that matters: what performance can the system get? If that performance is met, then does it really matter? And sure, we could pretend that we're trying to make sure people spend their money wisely, but let people spend their money however they want. If someone still wants to blow money on a 4090 but has an i3, even after telling them to not do it, let them do it. It's not your money and it's not your problem.
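
A toy model of that point, with made-up FPS caps purely for illustration (none of these numbers are benchmarks):

Code:
# Toy model: delivered FPS is capped by whichever side finishes a frame last.
def delivered_fps(cpu_cap: float, gpu_cap: float) -> float:
    return min(cpu_cap, gpu_cap)

# Hypothetical caps for an i3-class CPU with two GPU tiers:
print(delivered_fps(cpu_cap=70, gpu_cap=60))    # 60: GPU-limited, "balanced"
print(delivered_fps(cpu_cap=70, gpu_cap=140))   # 70: CPU-limited, yet faster
# The second rig is "bottlenecked" but never slower than the first; the only
# question that matters is whether the delivered FPS meets your target.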
 
3DMark Fire Strike's physics test scales to at least 16 threads and runs constantly; however, most games don't do this, so it shouldn't be taken as "zomg, this is definitive proof".

This is also ignoring that none of the graphics scores changed significantly. If the game logic load is light enough, then the GPU has plenty of room to flex its muscles.


I don't see any issues with it, unless the whole goal is to have a "balanced, no bottleneck machine" in which case, you'll be chasing bunnies until the end of time because such a thing doesn't exist. And the whole idea of a "balanced machine" to me is ridiculous.

There's only one factor that matters: what performance can the system get? If that performance is met, then does it really matter? And sure, we could pretend that we're trying to make sure people spend their money wisely, but let people spend their money however they want. If someone still wants to blow money on a 4090 but has an i3, even after telling them to not do it, let them do it. It's not your money and it's not your problem.
The Combined test is the real-world game sim, thus it's the most important result in Fire Strike. Disabling just two cores reduced your game performance by approx. 40%, and you would have still had 8 threads. The standard Time Spy CPU test scales to approx. 10 cores; that's why a 10900K can be overclocked and perform at the same levels as a 5950X. With my full overclock I get 16k in the Time Spy CPU test. Time Spy Extreme is a different story. Anyway, this is off topic.
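
For a rough model of why core scaling differs between a benchmark and a game, here is a quick Amdahl's-law sketch in Python; the parallel fractions are assumptions for illustration, not measurements:

Code:
# Amdahl's law: speedup on n cores for a workload with parallel fraction p.
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p, label in [(0.95, "heavily threaded benchmark"), (0.50, "lighter game logic")]:
    loss = 1.0 - speedup(p, 4) / speedup(p, 6)
    print(f"{label}: ~{loss * 100:.0f}% slower on 4 cores vs 6")
# heavily threaded benchmark: ~28% slower; lighter game logic: ~7% slower

The more of its frame a workload can spread across threads, the harder it gets hit when cores are removed, which is the crux of the disagreement here.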
 
Based on the benchmark of 6 games, the Intel Core i7-4770K Bottlenecks the RTX 4070 at 1080p - ultra settings by 36.9% on average.
Take a game like Control.
How is this relevant?
The thread poster said 2560 × 1440.
The bottleneck very likely drops.
With ray tracing on maximum (or close enough), the bottleneck with an 8700K might disappear completely.
Such a thing happened when I did that on my former rig with a Ryzen 5 5500 (weaker than an 8700K, I think) + 32GB DDR4 + RTX 3080 12GB in Cyberpunk 2077.
The 3080 12GB is probably around 4070 level.
Possibly a bit stronger.
Well, until DLSS 3 is unleashed, I guess.
Then the 3080 12GB loses, I imagine.
 
How is this relevant?
The thread poster said 2560 × 1440.
The bottleneck very likely drops.
With ray tracing on maximum (or close enough), the bottleneck with an 8700K might disappear completely.
Such a thing happened when I did that on my former rig with a Ryzen 5 5500 (weaker than an 8700K, I think) + 32GB DDR4 + RTX 3080 12GB in Cyberpunk 2077.
The 3080 12GB is probably around 4070 level.
Possibly a bit stronger.
Well, until DLSS 3 is unleashed, I guess.
Then the 3080 12GB loses, I imagine.
There is an 8700K up for bid on eBay for £2 atm; maybe the OP can get lucky and grab a cheap CPU upgrade. But this argument is starting to hijack the thread. It's been done to death, so let it drop.
 