News Intel offers new guidance on 13th and 14th Gen CPU instability — but no definitive fix yet


TheHerald
We've just been through this. You can't cherry-pick a performance measurement in one configuration and an efficiency measurement in another! For each configuration (125W, 200W, etc.), you have to take performance and efficiency together!

Another bad thing that graph does is compare tuned Intel CPUs against stock AMD. Anyone who's going to tweak their Intel CPU would almost certainly also tweak their AMD CPU, if they had one! So how does it make any sense to compare only against the competition at stock?
Power limit isn't tuning. Power limiting doesn't increase the inherent efficiency of the CPU. It's not an undervolt. "Tuning" an AMD CPU, whatever that means, has no relevance to the data. You already have the 7950X3D, which is a "tuned" 7950X.
First, that's one of your classic bad-joke matchups, since even the R5 7600X outperforms the i5-13400F on CineBench MT.
They are close together in CB, so? Most Intel CPUs beat AMD in Cinebench MT; we can still compare them. What do you even mean, man?

Oh, you want to talk about double standards? How about not comparing an Intel CPU in a modified configuration against an AMD one at stock! Again, if someone is going to buy a CPU and tune it for better efficiency, why wouldn't they base that decision on a comparison of both CPUs running with modified settings, like ComputerBase did?
The 7950x 3d is already a low power 7950x. What would modifying it even more achieve? I don't get what your point is. The data is there, and in the only segment where AMD has a lead, turns out it ain't big. It's actually tiny. What more do we need to figure out here?

Your graph makes it pretty clear: the bottom two CPUs in your efficiency chart are both AMD's, and that's in MT performance, where they aren't THAT far behind. In ST the graph would be even more tragic. So why is this still a thing? Intel is just the go-to for people who care about efficiency. AMD is mainly for out-of-the-box gaming with the 7800X3D.
 
I agree that the 3D chips are more efficient in gaming, but there are some caveats here.

1) It only applies to the 3D chips. The non-3D chips are not, at least not in (yeah, I'm going to bang that drum again) ISO-wattage testing. In fact, I'm fully confident that Intel chips are going to be both faster and more efficient in gaming than Ryzen chips (the non-3D parts).
Go look at the results from the article you pulled those efficiency graphs from. To be more performant than AMD with Intel, the best options are the 14900K at 95W or the 14600K, and those two CPUs are only more efficient than the 7900X and 7950X (the 95W + undervolt setup is slightly better than the 7600X, but undervolting is never guaranteed). If you do something other than just gaming, you'd need multiple power profiles to maximize efficiency, since even the 125W 14900K has worse gaming efficiency than the 7900X/7950X.
2) The difference in gaming efficiency is just not something that could ever possibly register on any radar. Even though I personally fully admit that a 14900K, if left completely unchecked, will draw a ***load of power in games - when we are talking about 4K and 4090 PCs, the end result will be a less than 10% power draw difference. A PC + monitor with a 4090 will draw what, 400W for the GPU and another 150W for the rest of the PC and the monitor, so that's 700W from the wall after PSU losses. So okay, the 14900K will draw 100W at 4K vs 50W for the 7950X3D, so 700 vs 750 is just not that big a deal imo. And again, that's with a stock, out-of-the-box 14900K, which is not the proper way to run it if your interest is purely gaming.
This is pure whataboutism and not an actual argument. Sure, percentage-wise it'll be a small share of overall system consumption, but it's still energy you're paying for and heat being pumped into the room for no advantage.

Personally speaking I wouldn't notice the cost difference, but during hot weather I already drop my video card 100W to lower room temp so it'd be a problem there. That being said the reason I didn't upgrade to a 7950X3D is because of the software approach to CCD assignment. I love tweaking my systems, but once I'm done I want them to just work unless something has literally failed so that became a non-starter.
Efficiency @ 200 W still isn't looking that great for it. To get good efficiency out of it, you have to dial it all the way back to 125 W, but that comes at a 19.2% performance penalty and puts it closer to the 7900X than the 7950X. I'm not sure people are going to do that, when they could just get one of those CPUs.
At 200W the 14900K is still going to blow through power in lightly threaded workloads, but in MT it comes out ahead of both by a good margin. You can see the lightly threaded problem in action in the game testing. This is also why I worded what I said the way I did: it's about dialing in the settings for the use case.

AMD, like Intel, does tend to blow past their power budget, but the MT efficiency gains for Intel, short of bottoming out the power budget, are very high. Nobody I've seen doing power limit testing other than TPU measures power at the CPU, which makes the other results less reliable. They unfortunately didn't do a 7950X scaling article, but chances are that with a simple power limit (which is all they did for Intel, aside from the one undervolt run) the 7950X's MT efficiency is not going to be better than Intel's until you get down to power levels low enough that the ring bus gets in the way.
 

TheHerald
Go look at the results from the article you pulled those efficiency graphs from. To be more performant than AMD with Intel, the best options are the 14900K at 95W or the 14600K, and those two CPUs are only more efficient than the 7900X and 7950X (the 95W + undervolt setup is slightly better than the 7600X, but undervolting is never guaranteed). If you do something other than just gaming, you'd need multiple power profiles to maximize efficiency, since even the 125W 14900K has worse gaming efficiency than the 7900X/7950X.
No, not really. At 125W the 14900K pulls 106W in gaming vs. 89W for the 7950X, but it's also 10.5% faster. So there's basically a 10% efficiency difference in the 7950X's favor, while the 14900K delivers more frames.

This is pure whataboutism and not an actual argument. Sure, percentage-wise it'll be a small share of overall system consumption, but it's still energy you're paying for and heat being pumped into the room for no advantage.

Personally speaking I wouldn't notice the cost difference, but during hot weather I already drop my video card 100W to lower room temp so it'd be a problem there. That being said the reason I didn't upgrade to a 7950X3D is because of the software approach to CCD assignment. I love tweaking my systems, but once I'm done I want them to just work unless something has literally failed so that became a non-starter.
Again - that's the worst-case scenario: using a stock, out-of-the-box 14900K for gaming. You shouldn't do that. My point is, even if you do that, it doesn't really matter; the difference is negligible.
 

bit_user
Power limit isn't tuning.
Call it what you want. The point is that the equivalent adjustment wasn't done on the competing CPUs.

Power limiting doesn't increase the inherent efficiency of the CPU.
The data you posted says otherwise.

We're not talking about efficiency of the architecture, here. These benchmarks measure actual application performance, on a consumer-facing website, with the implication that they're informing end users about what to buy and how to adjust it. At that point, it doesn't matter how efficient your CPU can be, what matters is how efficient it is.

Again, it's like fuel economy with cars. If someone drives a car at a constant speed of 25 mph and finds that it achieved 45 mpg, that's not relevant to me, when most of my driving is at 65 mph. I need to know the efficiency of the product in the way I intend to use it!

They are close together in CB, so? ... we can still compare them. What do you even mean, man?
I don't consider a 40% performance difference "close". If you want to compare their efficiency, you can't just sweep that performance difference under the rug.

The 7950x 3d is already a low power 7950x. What would modifying it even more achieve?
It's half low-power. As for your question, that's exactly what should be answered via data.

I don't get what your point is.
It's that you & the article are all about apples vs. oranges comparisons.

The data is there,
It's not. That's precisely the problem. Don't act as if I didn't explain this more than clearly enough!

Your graph makes it pretty clear: the bottom two CPUs in your efficiency chart are both AMD's,
First, let's acknowledge the fact that there are 4 AMD and 3 Intel models in that graph, so counting them up like that is a little misleading. I could have taken the 7600X out, because it doesn't have a proper matchup from Intel in there, but I left it because I think it's interesting to see their power scaling at just 6 cores.

Second, they are not both at the bottom until you get above 88 W. Up to that point, the 7700X is actually beating or equaling the i5-13600K! The whole point of a graph is that the rankings and differences change as you move along the power axis. It's a graph precisely because it cannot be reduced to a simple ranking!
 
No, not really. At 125W the 14900K pulls 106W in gaming vs. 89W for the 7950X, but it's also 10.5% faster. So there's basically a 10% efficiency difference in the 7950X's favor, while the 14900K delivers more frames.
No. Taking this example: the 7950X at 89W delivers 89.5% of the performance of the 14900K at 106W (taking the 14900K as the 100% baseline). That 17W gap means the 14900K uses 19.1% more power for "10%" more performance, so the 7950X is around 9% more power efficient per frame rendered, assuming the numbers above are based in reality.
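
To make the arithmetic explicit, here is a minimal sketch (Python) using only the figures quoted above. The wattages and the 10.5% figure are rounded, so treat the output as illustrative:

```python
# Figures quoted above, from TPU's 125 W power-limit gaming test (rounded):
# i9-14900K: 106 W and ~10.5% faster; R9 7950X: 89 W (performance baseline).
w_intel, w_amd = 106.0, 89.0
perf_intel, perf_amd = 1.105, 1.0

power_ratio = w_intel / w_amd        # 14900K power draw relative to the 7950X
eff_intel = perf_intel / w_intel     # efficiency = work / energy, i.e. relative fps per watt
eff_amd = perf_amd / w_amd

print(f"14900K draws {power_ratio - 1:.1%} more power")          # -> 19.1%
print(f"7950X is {eff_amd / eff_intel - 1:.1%} more efficient")  # -> ~7.8%, "around 9%" with rounding
```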
 
No, not really. At 125W the 14900K pulls 106W in gaming vs. 89W for the 7950X, but it's also 10.5% faster. So there's basically a 10% efficiency difference in the 7950X's favor, while the 14900K delivers more frames.
Not quite sure how you're doing the math there, but the 14900K is using 19.1% more power for ~10% more performance.

This is also classic goalpost moving, as you said the X3D CPUs were the only ones more efficient, and that's simply not true.
 

TheHerald
I don't consider a 40% performance difference "close". If you want to compare their efficiency, you can't just sweep that performance difference under the rug.
Huh? The 13400 is faster than the 7600 and a tiny bit slower than the 7600x. What 40% are you talking about?

Call it what you want. The point is that the equivalent adjustment wasn't done on the competing CPUs.
What more adjustments do you need? The 3d is already an "adjusted" 7950x.

First, let's acknowledge the fact that there are 4 AMD and 3 Intel models in that graph, so counting them up like that is a little misleading. I could have taken the 7600X out, because it doesn't have a proper matchup from Intel in there, but I left it because I think it's interesting to see their power scaling at just 6 cores.
Why doesn't the 7600x have a proper matchup? It was meant to compete against the 13600k. Yes - it failed and so the price dropped dramatically, but that doesn't matter. You don't gain efficiency by dropping prices, lol.

Second, they are not both at the bottom until you get above 88 W. Up to that point, the 7700X is actually beating or equaling the i5-13600K! The whole point of a graph is that the rankings and differences change as you move along the power axis. It's a graph precisely because it cannot be reduced to a simple ranking!
Yes, the R7 is barely hanging around the i5 - until you hit 88w and the i5 kisses it goodbye. Precisely my point.
 

TheHerald
No. Taking this example: the 7950X at 89W delivers 89.5% of the performance of the 14900K at 106W (taking the 14900K as the 100% baseline). That 17W gap means the 14900K uses 19.1% more power for "10%" more performance, so the 7950X is around 9% more power efficient per frame rendered, assuming the numbers above are based in reality.
Not quite sure how you're doing the math there, but the 14900K is using 19.1% more power for ~10% more performance.

This is also classic goalpost moving, as you said the X3D CPUs were the only ones more efficient, and that's simply not true.

Do you understand - both of you - that that's exactly what I said? That the 7950x is 10% more efficient but the 14900k is actually faster? What the actual heck lads...

And no, read my post again. I said at ISO power Intel chips ARE more efficient. This is not ISO power. Think about it the other way. Trying to boost the 7950x to reach the gaming performance of the 14900k will make it consume a lot more than 106w. Got it?

We don't even need to guess. Configured at 95w PL the 14900k draws 80w while still being faster than the 7950x, lol.

[Image: power-games.png]
 

bit_user
Huh? The 13400 is faster than the 7600 and a tiny bit slower than the 7600x. What 40% are you talking about?
7700X, because you compared it to 8-core CPUs, if you follow the chain back a few posts.

What more adjustments do you need? The 3d is already an "adjusted" 7950x.
First, it's not. Half of it is limited, while the other CCD is about the same as a normal 7950X. Second, they limited it based on the thermal properties of the CCD with the 3D cache die, not based on a particular efficiency target.

It's telling how you're afraid of having more data on AMD CPUs. If you're not biased, then you should always consider more data points to be a good thing. Then again, you've made so many disingenuous arguments that I think your bias has now been well-established.

Why doesn't the 7600x have a proper matchup? It was meant to compete against the 13600k.
No, the i5-13600K didn't exist yet. AMD launched the 7600X against the 12th gen lineup and priced it accordingly.
 

bit_user
I said at ISO power Intel chips ARE more efficient.
Nope.

[Image: cj1qY3F.png]


Think about it the other way. Trying to boost the 7950x to reach the gaming performance of the 14900k
Why would you do that? Why not pick a TDP naturally in both of their ranges and test them there? This idea of boosting the slower CPU to try and keep up with the faster one is more of your funny business.

We don't even need to guess. Configured at 95w PL the 14900k draws 80w while still being faster than the 7950x, lol.

[Image: power-games.png]
Where is "faster"? That graph only shows power.
 

TheHerald
7700X, because you compared it to 8-core CPUs, if you follow the chain back a few posts.


First, it's not. Half of it is limited, while the other CCD is about the same as a normal 7950X. Second, they limited it based on the thermal properties of the CCD with the 3D cache die, not based on a particular efficiency target.

It's telling how you're afraid of having more data on AMD CPUs. If you're not biased, then you should always consider more data points to be a good thing. Then again, you've made so many disingenuous arguments that I think your bias has now been well-established.


No, the i5-13600K didn't exist yet. AMD launched the 7600X against the 12th gen lineup and priced it accordingly.
You said the 7600 is faster than the 13400f. That's what I responded to.

I'm afraid? Of what? WTH are you even talking about? What more power limits do you want to see? I just don't get your point. You already have a 140W-limited 7950X in the 3D.

The 13600k launched literally 2 weeks later man, just stop.
 

bit_user
You said the 7600 is faster than the 13400f. That's what I responded to.
Please don't misquote me. What I actually said was:

"... even the R5 7600X outperforms the i5-13400F on CineBench MT."

The R5 7600 and R5 7600X are different products, as I'm quite sure you're aware.

I'm afraid? Of what? WTH are you even talking about?
I don't know why else you would be arguing against having more data points for AMD. How can you not want more data, unless you're afraid of what it would show you?

The 13600k launched literally 2 weeks later man, just stop.
Nope. Pricing history says they went on sale almost exactly a month apart.

And you need to stop with this fixation on launch pricing. AMD didn't magically know the exact specs and pricing of Raptor Lake when they set the launch pricing of their 7000X models. The lineup was priced to sell against Alder Lake, and they must've planned to readjust after Raptor Lake launched.
 
It's telling how you're afraid of having more data on AMD CPUs. If you're not biased, then you should always consider more data points to be a good thing. Then again, you've made so many disingenuous arguments that I think your bias has now been well-established.
Nope.
[Image: cj1qY3F.png]
If you're not biased, then you should always consider more data points to be a good thing.

Instead you base everything on this one benchmark, which tests a grand total of one type of workload... face it, 6 out of the 10 tests are 3D rendering, and of the other 4, one only tests cache (7-Zip) while the other three are the rest of the things AMD is good at. It's too limited to be relevant or to be used as a blanket statement the way you use it.

And you had to single out one result of those ten to get to a 4% difference to support your claim that Intel loses performance when not running above its limit.

Then again, you've made so many disingenuous arguments that I think your bias has now been well-established.

Please don't misquote me. What I actually said was:
"... even the R5 7600X outperforms the i5-13400F on CineBench MT."​

The R5 7600 and R5 7600X are different products, as I'm quite sure you're aware.


I don't know why else you would be arguing against having more data points for AMD. How can you not want more data, unless you're afraid of what it would show you?
Why not have more data points?!
Oh look, Cinebench and only Cinebench.
 
Do you understand - both of you - that that's exactly what I said? That the 7950x is 10% more efficient but the 14900k is actually faster? What the actual heck lads...

And no, read my post again. I said at ISO power Intel chips ARE more efficient. This is not ISO power. Think about it the other way. Trying to boost the 7950x to reach the gaming performance of the 14900k will make it consume a lot more than 106w. Got it?

We don't even need to guess. Configured at 95w PL the 14900k draws 80w while still being faster than the 7950x, lol.
I was not misunderstanding anything as far as I can tell. I was just trying to put the numbers you used in a different perspective that may or may not have been useful to the conversation. I have not been able to catch up fully on the conversation and was skimming it. I saw an opportunity to do a modicum of math to potentially bring some clarity to what was being said. I may be able to read the whole conversation later today, no skimming.
 

bit_user
you base everything on this one benchmark, which tests a grand total of one type of workload...
No, it's their composite multithreading score.

Then again, you've made so many disingenuous arguments that I think your bias has now been well-established.
That, coming from you, is...
🤣 🤣 🤣

You know very well that I'm not flat-out anti-Intel. I say plenty of good things about them and negative things about AMD. You've never said one good word about AMD, nor a single bad thing about Intel.

If I have a bias, it's against disinformation like you and Harold are putting out. If the only things you guys said were fair and accurate, then you'd hardly get an argument from me! In a certain way, putting out a message that's so heavily biased is actually self-defeating, because it draws a lot of criticism that wouldn't necessarily get posted, otherwise.

Why not have more data points?!
Oh look, Cinebench and only Cinebench.
I'm not opposed to having more data. I use what's available. I don't have the resources to run all these benchmarks, myself.
 

TheHerald
Please don't misquote me. What I actually said was:
"... even the R5 7600X outperforms the i5-13400F on CineBench MT."​

The R5 7600 and R5 7600X are different products, as I'm quite sure you're aware.


I don't know why else you would be arguing against having more data points for AMD. How can you not want more data, unless you're afraid of what it would show you?


Nope. Pricing history says they went on sale almost exactly a month apart.

And you need to stop with this fixation on launch pricing. AMD didn't magically know the exact specs and pricing of Raptor Lake when they set the launch pricing of their 7000X models. The lineup was priced to sell against Alder Lake, and they must've planned to readjust after Raptor Lake launched.
I don't have a problem with more data. I'm saying the data we already have is enough to reach conclusions.

A month apart. Okay, obviously not similar products then, they'd have to come out on the same day, I guess.
 
We were talking about gaming.
It doesn't change much, but the gaming power graphs are based on the 1080p not 720p results.
Do you understand - both of you - that that's exactly what I said? That the 7950x is 10% more efficient but the 14900k is actually faster? What the actual heck lads...
I blame it being after 3am and my eyes half glazing over 🤣
And no, read my post again. I said at ISO power Intel chips ARE more efficient. This is not ISO power. Think about it the other way. Trying to boost the 7950x to reach the gaming performance of the 14900k will make it consume a lot more than 106w. Got it?
It doesn't work that way, though, because you can't reach the same level of performance. It's not like AMD is power restricted in a way that's limiting performance. That means you can realistically only compare Intel on the way down if you're considering efficiency. When you do that, the 14900K can certainly surpass the 7900X/7950X (which I already said), but not the single-CCD ones, just through power limiting.
 

TheHerald
It doesn't change much, but the gaming power graphs are based on the 1080p not 720p results.

I blame it being after 3am and my eyes half glazing over 🤣

It doesn't work that way, though, because you can't reach the same level of performance. It's not like AMD is power restricted in a way that's limiting performance. That means you can realistically only compare Intel on the way down if you're considering efficiency. When you do that, the 14900K can certainly surpass the 7900X/7950X (which I already said), but not the single-CCD ones, just through power limiting.
Well, you can't be exact, but you can theorize how much more power a CPU will need to reach X amount of performance. For example, overclocked, my 12900K is indeed 10% faster than it is at stock, but it literally uses twice the power in games, going from around 70W to 150W.
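
Running those rough 12900K numbers through the same perf-per-watt arithmetic shows how steeply efficiency falls off at the top of the voltage/frequency curve; a quick illustrative sketch:

```python
# Rough in-game figures quoted above for a 12900K:
# ~70 W at stock, ~150 W overclocked for ~10% more fps.
w_stock, w_oc = 70.0, 150.0
perf_stock, perf_oc = 1.0, 1.10

eff_stock = perf_stock / w_stock   # relative fps per watt at stock
eff_oc = perf_oc / w_oc            # relative fps per watt overclocked

print(f"Overclocked config is {eff_oc / eff_stock:.0%} as efficient as stock")  # -> ~51%
```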

That's why I said that even with the 14900K drawing 20% more power, the fact that it is 10% faster means it's inherently more efficient - because closing that 10% gap will require a ton of power.

It can also surpass the single-CCD ones: the 7600X uses 37% more power than the 65W 14900K for 3% more performance, and it also surpasses the 7700X at that limit. It shows what the power of monolithic is: even on an inferior node, without the best architecture around, Intel is just the efficiency GOAT because it lacks the power draw of the interconnect and the multiple dies.
 

bit_user
That's why I said that even with the 14900K drawing 20% more power, the fact that it is 10% faster means it's inherently more efficient - because closing that 10% gap will require a ton of power.
Again, this is a flawed conclusion. TechPowerUp computed gaming power efficiency and the i9-14900K got only 1.78 fps/W, as compared to the R9 7950X's figure of 2.47 fps/W. Efficiency is simply work divided by energy, not whatever you'd like it to mean. The stock Raptor Lake CPU is faster and less efficient.

[Image: efficiency-gaming.png]
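
Since the metric is literally frames per watt, the relative efficiency falls straight out of TPU's two published figures; a one-line sketch:

```python
# TPU's stock gaming-efficiency figures quoted above, in fps per watt.
fps_per_w = {"i9-14900K": 1.78, "R9 7950X": 2.47}

ratio = fps_per_w["R9 7950X"] / fps_per_w["i9-14900K"]
print(f"Stock 7950X is {ratio - 1:.0%} more efficient in gaming")  # -> ~39%
```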


You can pose all sorts of other what-ifs and hypothetical experiments, but every one of them needs an asterisk by it. An experiment involving different configurations of one or both CPUs is only relevant to people doing nearly identical things with their CPUs.

Just because a CPU can be more efficient in one configuration doesn't change the fact of how efficient it is in the configuration you're actually using!

It shows what the power of monolithic is: even on an inferior node, without the best architecture around, Intel is just the efficiency GOAT because it lacks the power draw of the interconnect and the multiple dies.
There are way more variables at play than monolithic vs. chiplet.
 

TheHerald
Again, this is a flawed conclusion. TechPowerUp computed gaming power efficiency and the i9-14900K got only 1.78 fps/W, as compared to the R9 7950X's figure of 2.47 fps/W. Efficiency is simply work divided by energy, not whatever you'd like it to mean. The stock Raptor Lake CPU is faster and less efficient.
[Image: efficiency-gaming.png]

You can pose all sorts of other what-ifs and hypothetical experiments, but every one of them needs an asterisk by it. An experiment involving different configurations of one or both CPUs is only relevant to people doing nearly identical things with their CPUs.

Just because a CPU can be more efficient in one configuration doesn't change the fact of how efficient it is in the configuration you're actually using!


There are way more variables at play than monolithic vs. chiplet.
With the 125W power limit they were almost identical in efficiency while the 14900K was faster. With the 95W power limit it was both faster and more efficient.
 

bit_user
With the 125w power limit they were almost identical in efficiency while the 14900k was faster.
I would not say they were almost identical in efficiency. The stock 7950X was still 5.1% more efficient while being just 90.1% as fast.

With the 95w power limit it was both faster and more efficient
Yes. To put it in the same terms as above, the stock 7950X was 77.8% as efficient and 94.2% as fast as the i9-14900K running at 95 W.
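
These "X% as efficient, Y% as fast" pairs also pin down relative power draw, since efficiency = performance / power. A sketch of that conversion using the 95 W comparison above; note that TPU's fps/W chart is a separate measurement from its average-power chart, so the implied figure won't exactly match the per-game wattages cited elsewhere in the thread:

```python
# From the comparison above: stock R9 7950X vs. i9-14900K at a 95 W limit.
rel_perf = 0.942  # the 7950X is 94.2% as fast...
rel_eff = 0.778   # ...and 77.8% as efficient (fps/W)

# efficiency = performance / power  =>  relative power = relative performance / relative efficiency
rel_power = rel_perf / rel_eff
print(f"Implied: the stock 7950X drew ~{rel_power - 1:.0%} more power in this test")  # -> ~21%
```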

I think it's clear that both of these CPUs can be more efficient than their stock settings. Again, it does bug me to see one CPU being power-restricted, while the other is run at stock, because I think a user who would restrict an i9-14900K would probably also restrict a R9 7950X. Leaving one at stock is implicitly posing a false dichotomy.

The i9-14900K is also really fast, especially at lightly-threaded tasks where its P-cores can shine.
 

TheHerald
I would not say they were almost identical in efficiency. The stock 7950X was still 5.1% more efficient while being just 90.1% as fast.


Yes. To put it in the same terms as above, the stock 7950X was 77.8% as efficient and 94.2% as fast as the i9-14900K running at 95 W.

I think it's clear that both of these CPUs can be more efficient than their stock settings. Again, it does bug me to see one CPU being power-restricted, while the other is run at stock, because I think a user who would restrict an i9-14900K would probably also restrict a R9 7950X. Leaving one at stock is implicitly posing a false dichotomy.

The i9-14900K is also really fast, especially at lightly-threaded tasks where its P-cores can shine.
The only false dichotomy is yours. The 7950X is already using less power in gaming than a stock 14900K, so what would restricting it achieve? Whether you restrict it or not, the 14900K will be both faster and more efficient at the same power in gaming. Heck, it's even faster while drawing less power.

Why you're getting bugged, nobody knows. The results make it obvious what's what.
 

bit_user
The only false dichotomy is yours.
No. Let's not get petty.

The 7950X is already using less power in gaming than a stock 14900K, so what would restricting it achieve?
We know the 7950X @ stock is running well outside of its efficiency window. We've seen abundant data showing that clipping its wings just a little bit hardly has any performance impact. So, it would make a lot of sense to someone who's concerned about efficiency - which is exactly the type of person who would be restricting an i9-14900K.

Whether you restrict it or not, the 14900K will be both faster and more efficient at the same power in gaming.
You might be right, but I'd really need to see the data.
 

TheHerald
No. Let's not get petty.


We know the 7950X @ stock is running well outside of its efficiency window. We've seen abundant data showing that clipping its wings just a little bit hardly has any performance impact. So, it would make a lot of sense to someone who's concerned about efficiency - which is exactly the type of person who would be restricting an i9-14900K.


You might be right, but I'd really need to see the data.
Man, are you serious? The 7950X at stock pulls 89 watts in gaming. It's already slower than the 14900K pulling 80 watts, so no amount of playing around with the power limits will make it more efficient.

The data presented is already enough to conclude, without a shadow of a doubt, that the 14900K is the more efficient part in games. When you're slower while pulling more power, it's kinda over. The only thing you can do is start turning off CCDs.
 