[SOLVED] Performance of i7 5775C is much worse than expected (turned out I was wrong)

HeartOfAdel

Commendable
Apr 7, 2021
I purchased a 5775C for my MSI H97 Gaming 3 hoping to see that '7700K' level of performance in games, but I was disappointed. The CPU is maybe 20-30% better than my previous i5 4570 at best, and the performance is nowhere near what I saw in YouTube benchmarks. I saw a benchmark of this CPU with a GTX 1080 getting 70-75 fps on ultra settings, while I top out at 50 fps. 50 fps! I didn't use extra details or anything; it's barely above my previous CPU and even drops into the 40s. AC Odyssey bottlenecks at 50-60 fps OUTSIDE cities, where the 6700K, 7700K and even the freaking 4790K stay above 60 in benchmarks. There's a test on YouTube with a 1080 Ti in AC Unity where it gets 70-80 fps. Where are all those bloody frames of that famous CPU?
Maybe the eDRAM isn't working? I don't understand; it sets itself to 1800 MHz in the BIOS, but if I disable it, nothing changes with the fps. C-states are enabled. Why am I experiencing this?
 
Last edited:
Obviously, when I talk about the CPU's fps I mean bottlenecked scenarios.
RTX 3060 Ti at 1440p.
It's not obvious what your card is when you don't post system specs, which, I might add, is the decorum of this forum. This forum is not here so you can vent your frustration onto other people. Please, we want to help you out; just take a chill pill.
 
It's not obvious what your card is when you don't post system specs, which, I might add, is the decorum of this forum. This forum is not here so you can vent your frustration onto other people. Please, we want to help you out; just take a chill pill.
No, no! I don't want to vent my frustration, for sure. I just thought it would be obvious.
The CPU's performance is fine overall. In half the scenarios the GPU is maxed out, but it's also frequently slightly bottlenecked at around 80-90% usage. It works well enough; I just expected it to be a bit better. Maybe I overestimated the performance of the 6700K and 7700K. But I still want to figure out why there was no impact on fps when I disabled the eDRAM.
 
I'm not sure what leads anyone to think an i7-5775C (an older 65 W TDP CPU) with 4c/8t at 3.7 GHz (something less with all cores active?) was going to equal or exceed the 7700K at 4.5 GHz / 4.2 GHz all-core turbo?

FPS differences will often be minimal with the resolution cranked up (1440p and above, versus 1080p or below) and with mid-range GPUs, depending on the game's visual complexity of course (CS:GO graphics do not equal those of Battlefield 2042, for instance)... and both conditions apply here.

Crank your resolution and/or details/quality down... does FPS increase? If so, the GPU is the limit...

Not much to almost no increase? The CPU might indeed be the limiting factor...
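
If it helps, here's that decision spelled out as a tiny sketch; the FPS numbers and the 15% threshold are made-up examples, not measurements:

Python:
# Minimal sketch of the test above: compare average FPS at normal settings
# vs. with resolution/quality lowered. The threshold and FPS values are
# hypothetical examples, not measurements.
def diagnose_bottleneck(fps_normal: float, fps_lowered: float,
                        threshold: float = 1.15) -> str:
    # If easing the GPU load frees up frames, the GPU was the limit;
    # if FPS barely moves, the CPU is likely the ceiling.
    if fps_lowered >= fps_normal * threshold:
        return "GPU-limited: lowering resolution/details raised FPS"
    return "CPU-limited: FPS barely changed"

# e.g. 50 fps at 1440p ultra vs. 52 fps at 1080p low -> CPU-limited
print(diagnose_bottleneck(50.0, 52.0))
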
 
  • Like
Reactions: HeartOfAdel
I'm not sure what leads anyone to think an i7-5775C (an older 65 W TDP CPU) with 4c/8t at 3.7 GHz (something less with all cores active?) was going to equal or exceed the 7700K at 4.5 GHz / 4.2 GHz all-core turbo?

FPS differences will often be minimal with the resolution cranked up (1440p and above, versus 1080p or below) and with mid-range GPUs, depending on the game's visual complexity of course (CS:GO graphics do not equal those of Battlefield 2042, for instance)... and both conditions apply here.

Crank your resolution and/or details/quality down... does FPS increase? If so, the GPU is the limit...

Not much to almost no increase? The CPU might indeed be the limiting factor...
I exaggerated the issue a bit. It turns out I had Nvidia sharpening on, which required a bit more CPU power. Now in AC Odyssey I'm getting 60-80 fps with GPU usage in the 90s, in Watch Dogs 2 I'm at around 60 fps, The Division 2 runs at 70-80 fps (DX12) in the city, and AC Unity easily reaches around 75 fps. I also tested the L4 cache in AIDA64 and it is indeed working. In Cyberpunk 2077 the 3060 Ti is almost constantly at 99% at around 60 fps. I did some comparisons with other results on YouTube, and it now indeed seems close to the 6700K and 7700K (plus the benchmarks I saw had memory at around 3000 MHz).
And if you didn't know why people make such a big deal of this CPU: it's the 128 MB L4 cache, which helps boost gaming performance a bit.
 
Last edited:
  • Like
Reactions: helper800
I'm not sure what leads anyone to think an i7-5775C (an older 65 W TDP CPU) with 4c/8t at 3.7 GHz (something less with all cores active?) was going to equal or exceed the 7700K at 4.5 GHz / 4.2 GHz all-core turbo?

The i7-5775C has 128 MB of eDRAM working as a kind of L4 (it's a different caching scheme from the usual L1->L2->L3 hierarchy, but it surprisingly shows groundbreaking performance improvements in PC games, especially games that aren't optimized for a specific CPU's cache lines). Look up AnandTech's November 2020 re-review of Broadwell and see for yourself. A March 2022 YouTube video on the i7-5775C shows that an i7-4790K at 4.9 GHz with 2200 MHz CL8 RAM is still 5-10% slower than an i7-5775C overclocked to 4.3 GHz with the same RAM. There's also a Polish review from 2018, IIRC, that showed an i7-7700K overclocked to 5.0 GHz was still slower in many games than the 5775C at 4.2 GHz.

The reason is that the eDRAM has two 256-bit buses, one for reads and one for writes, and both operations can happen simultaneously between the L3 and the eDRAM. At the default 1800 MHz clock that's ~56 GB/s for reads and the same for writes, roughly 112 GB/s aggregate bi-directional throughput (remember, reads and writes travel on separate buses). Latency is around 40-42 ns at stock settings. On some boards the eDRAM overclocks to 2200 MHz (~137 GB/s aggregate) and latency drops to 35-37 ns. Intel claims the cache hit ratio is over 95% on this CPU (source: AnandTech), so the CPU only has to wait on main memory, whose latency is much higher, for roughly 5% of the running time of an application using a cache-oblivious algorithm. Dual-channel DDR4-3600 delivers about 57.6 GB/s, comparable to one direction of the eDRAM bus, at around 45-55 ns depending on the CPU. That seems fine at first, but it's still not enough in typical scenarios because only one DRAM-to-L3 operation, a read or a write, can happen at a time: there is no simultaneous read and write with DRAM, and the latency is always higher.
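
If you want to sanity-check those figures, here's a quick back-of-the-envelope script (the 70 ns DRAM miss latency is my own assumption for illustration; the raw math lands slightly above the rounded numbers quoted above):

Python:
# Bus width, clocks and the 95% hit ratio are from the post; the 70 ns
# DRAM latency is an assumed illustrative value.
def bus_gbs(width_bits: int, clock_mhz: float) -> float:
    # bytes per transfer * transfers per second, in GB/s
    return width_bits / 8 * clock_mhz * 1e6 / 1e9

per_dir = bus_gbs(256, 1800)                   # ~57.6 GB/s each way
print(f"eDRAM @1800 MHz: {per_dir:.1f} GB/s per direction, "
      f"{2 * per_dir:.1f} GB/s aggregate")     # ~115 GB/s bi-directional
print(f"eDRAM @2200 MHz: {2 * bus_gbs(256, 2200):.1f} GB/s aggregate")

# Average access time with a 95% eDRAM hit ratio (latencies in ns):
hit, edram_ns, dram_ns = 0.95, 41, 70
print(f"average latency: {hit * edram_ns + (1 - hit) * dram_ns:.1f} ns")
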

But future DRAM speeds like DDR5 10000+ MT/s should be much better.
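
And if anyone wants to see the L4 in action on their own machine, a crude working-set sweep like this sketch (assumes numpy; sizes chosen to straddle the 6 MB L3 and the 128 MB eDRAM) should show the cost per access jumping only once the array outgrows 128 MB:

Python:
# Crude working-set sweep: random gathers over arrays of growing size.
# Vectorized gathers overlap cache misses, so this under-reports true
# latency, but the step past the 128 MB eDRAM should still be visible.
import time
import numpy as np

def ns_per_access(size_mb: int, accesses: int = 2_000_000) -> float:
    n = size_mb * 2**20 // 8                   # 8-byte elements
    data = np.ones(n, dtype=np.int64)
    idx = np.random.randint(0, n, size=accesses)
    t0 = time.perf_counter()
    data[idx].sum()                            # forces the random reads
    return (time.perf_counter() - t0) / accesses * 1e9

for mb in (4, 16, 64, 112, 256, 512):          # below L3 ... beyond eDRAM
    print(f"{mb:4d} MB: {ns_per_access(mb):6.1f} ns/access")
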
 
Last edited:
  • Like
Reactions: CompuTronix
A bit late to write this here, but I don't want to confuse people with the wrong conclusions at the beginning.

Basically, a 4 GHz 5775C with 2133 MHz+ RAM is about 50-100% faster than my i5 4570, depending on the game. That's crazy.

I've also done a couple of comparisons against another CPU, the Ryzen 5 3600, in a few games in similar scenarios, and the i7-5775C outperformed a stock 3600 in Cyberpunk 2077 by about 5-10%. I've also checked AC Odyssey, The Matrix Awakens, SW Jedi: Fallen Order, and at the very least it came super close if it didn't outperform it. But I understand there may be a margin of error depending on game patches, background applications, etc., so I'm in no hurry to draw conclusions. The fact that a 4c/8t DDR3 CPU holds up against a more recent 6c/12t CPU with 3200 MHz RAM (and possibly even 3600 MHz, with a higher core clock) is just insane.
 
Last edited:
Solution
