News AMD CTO Mark Papermaster: More Cores Coming in the 'Era of a Slowed Moore's Law'


bit_user

Titan
Ambassador
Windows already has that implemented; it still needs developers to do it right.
I don't know that Windows has sufficient visibility into the way that a game is dispatching tasks to worker threads. I'm not a Windows game developer, so maybe there's some technology to facilitate this I'm not aware of, but traditionally that's not been the case.

I do get the impression that we're talking about different things.

You don't understand: anything that is left over, after the parts you describe, is the main loop. Basically, in a perfect world it would be

get input
calculate response(s)
draw graphics accordingly

and that's what you can't split up any more. The faster you are able to run this loop, the more FPS you will get; even if you are on a dual core, if that part runs faster, everything will run faster.
It's that main loop that's exactly what we're talking about parallelizing. Sure, drawing graphics depends on updating the world state - I'm not saying it doesn't - but there's a lot within both of those stages that you can usually parallelize.

As for input, I'd read any pending input events but that should be quick and non-blocking.
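Just to make that concrete, here's a rough, hypothetical sketch (not from any real engine; Entity and update_entity are placeholders) of how the "calculate response(s)" stage of that loop can be fanned out across worker threads while the loop itself stays sequential:

```cpp
#include <algorithm>
#include <cstddef>
#include <future>
#include <thread>
#include <vector>

struct Entity { float x = 0, vx = 1; };

// Placeholder for per-entity simulation work (AI, physics, animation, ...).
void update_entity(Entity& e, float dt) { e.x += e.vx * dt; }

// The "calculate response(s)" stage: split the entity list into chunks and
// update the chunks concurrently, then join before the draw stage reads them.
void update_world(std::vector<Entity>& entities, float dt, unsigned workers)
{
    workers = std::max(1u, workers);
    const std::size_t chunk = (entities.size() + workers - 1) / workers;
    std::vector<std::future<void>> tasks;

    for (std::size_t begin = 0; begin < entities.size(); begin += chunk) {
        const std::size_t end = std::min(begin + chunk, entities.size());
        tasks.push_back(std::async(std::launch::async, [&entities, begin, end, dt] {
            for (std::size_t i = begin; i < end; ++i)
                update_entity(entities[i], dt);
        }));
    }
    for (auto& t : tasks) t.get();  // the outer loop stays sequential: input -> update -> draw
}

int main()
{
    std::vector<Entity> world(10000);
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    update_world(world, 1.0f / 60.0f, cores);  // one "update" step of the main loop
}
```

The loop still runs input -> update -> draw in order each frame; it's the work inside the update (and parts of the draw preparation) that gets spread across cores.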
 

Gurg

Distinguished
So, I'm not sure we're going to continue seeing clock speed gains as node sizes drop.

Core counts are going up for all market segments. Look back a few years and most Steam users will have had only dual-core. So, it does make sense for developers to invest in utilizing more cores.

I wouldn't tell someone they need 16 cores, but they might find themselves increasingly utilizing them over the lifespan of that CPU. If you just buy a CPU to meet today's needs, you might regret it tomorrow.

Blindly increasing cores that are never optimally loaded only increases cost and creates energy (resource) inefficiencies.

Current Intel 9900k CPUs have 40% processor efficiency available in the current circuitry design layout. The barrier to accessing it is the thermodynamics of dissipating the heat produced. Some solutions are simpler, like shaving and flattening the CPU surface and using better pastes. Thinning and shrinking the circuitry to use less material, creating less resistance and heat absorption, is another, but that is generally only available roughly every 3-5 years, after equipment manufacturers make corresponding tolerance improvements. Chip economics, which favor cramming more individual chips onto each wafer, conflict with heat transfer, which is more efficient across a wider surface area. The advantages of process shrinks to thermodynamics can also be lost with inclusion of excessive cores, or encroaching on other functions (i.e. RAM, GPU) that can be performed efficiently separate from the CPU. All-in-one processing makes sense for small electronics, but not for larger laptops or desktops.
 

cryogenic

Distinguished
The only issue I have is software takes a very long time to catch up. Core utilization in the mainstream consumer market is not going to scale as fast as HPC markets do.

I still don't see an advantage to 16 cores for the mass majority and very little for enthusiasts outside of gamers who also want to stream.

What's needed is a major boost to IPC until software can actually, at the core level like the OS, utilize multiple cores efficiently enough.

How many cores do you need to be able to display some input boxes on the screen? Because the mainstream consumer market is mostly dumb form-based apps.

The HPC market works with huge data sets and the software is highly parallelizable because the problem space is parallelizable. There's limited parallelization you can do to opening a document from a file on disk and displaying it on screen. On the other hand, processing billions of data points from terabytes of data can be highly parallelizable (most of the time).

Therefore consumer PCs will stop taking advantage of additional cores at 8 or 16 cores at most (even with multiple background apps). Workstations (non-graphical) will stop somewhere around 32-64.

You cannot build a house 1000 times faster using 1000 workers, because parts of the construction depend on previous milestones, so you can have at most 100 (or far fewer) people working at a time. The same thing happens with software: you cannot exceed the level of parallelization allowed by the problem domain. If you only have a single file to read, 1000 cores won't make that process faster, nor will 1000 cores make displaying something on the screen 1000 times faster.
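That construction analogy is basically Amdahl's law. A rough illustration, with made-up numbers rather than anything measured, using a tiny helper function:

```cpp
#include <cstdio>

// Amdahl's law: speedup(N) = 1 / ((1 - p) + p / N),
// where p is the parallelizable fraction of the job and N the number of workers.
double amdahl_speedup(double p, double n) { return 1.0 / ((1.0 - p) + p / n); }

int main()
{
    // Even if 90% of the job can run in parallel, 1000 workers
    // give less than a 10x speedup overall.
    for (double n : {10.0, 100.0, 1000.0})
        std::printf("p = 0.90, N = %4.0f -> speedup %.2fx\n", n, amdahl_speedup(0.90, n));
}
```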
 
It is silly that we are in almost 2020 and programmers still have to code their software to take advantage of more cores manually.
One problem with automating tasks at a high level is that the results often tend to be less efficient than what a good programmer could accomplish at a lower level. So, what might make programming a bit easier may also result in the hardware delivering less performance. You see similar scenarios with modern desktop software written in high-level programming languages, where the performance on modern hardware isn't really any better than how older software performed on much older and slower hardware. Developers back then would take greater care to manually optimize their software, while now lots of software is just written in scripting languages that might be easier to program for and maintain, but tend to be less efficient.

Both have their uses, but I still find it hard to see a direct use for more than 16 cores for most enthusiasts, even on the extreme end.
High core count usage scenarios are kind of niche, but if someone fits into that niche of frequently benefiting from having access to more than 8 cores, they will probably benefit from having more than 16 cores as well. Things like video encoding and other forms of compression are often highly parallel tasks, as are things like 3D rendering, since it's pretty easy to evenly divide such workloads across many cores.

Of course, anyone not regularly using that kind of software probably won't see much benefit from having a ton of cores anytime soon. If someone is just gaming, 6 cores with SMT, or 8 cores without are both likely to be fine options for a while, and 8 cores with SMT may be worth considering to give a higher-end system some additional longevity, but beyond that it might be a bit of a stretch for gaming at the moment.

75% of Steam users have 4 cores or less and 95% have 6 cores or less. Are game developers to incur massive costs to develop game programs specialized for 8+ core CPUs that only 5% of gamers now own?
The publicly-available data that Steam's survey provides is not really detailed enough to draw any definitive conclusions from. They combine too many demographics from all across the world into a single pool of results, and those demographics are constantly changing as Steam expands into new markets. How many of those systems running Steam are even buying new AAA game releases? Lots of them are likely older systems or lower-end laptops that couldn't run newer games anyway, so they're not even in the market for such titles.

But even so, we can see that the percentage of systems running Steam with 6+ cores has more than doubled within the last year, moving up to around 25% now, and that number had been below 5% just two years ago. If that trend were to continue, in another couple years we might see the percentage of Steam users with 8+ cores nearing that level, while 6+ cores might be getting up near 50%. As the percentage of systems with higher core counts increases, it makes more sense for developers to optimize their games to make better use of those additional cores.

And I'm just imagining there probably aren't too many CPU-based features that you could use in a multi-player context that wouldn't somehow be an unfair advantage or put you at a disadvantage. It's not like a 16-core user can run with a more sophisticated physics engine - their view of the game world must match everyone else's.
While the base game physics would need to remain the same for everyone in a multiplayer title, there are various physics effects that could be enhanced without significantly affecting gameplay. Things like more realistic water ripples, or cloth simulation on player models, or leaves and other inconsequential debris blowing around, or smoke and cloud effects. And of course, single-player games don't need to concern themselves with keeping things similar between players. And maybe a developer will decide that having something like a heavily-threaded AI system drop performance down to 30fps at times on systems with a "mid-range" processor might be reasonable for their game, while more performance could potentially be achieved with higher core counts. Or a game might utilize procedural generation when generating levels or other content, where additional cores could potentially reduce load times or stutters in some hypothetical future game. Again though, I do think for gaming, high core counts like that are probably overkill, and likely will be for some years to come.
 

InvalidError

Titan
Moderator
How many of those systems running Steam are even buying new AAA game releases? Lots of them are likely older systems or lower-end laptops that couldn't run newer games anyway, so they're not even in the market for such titles.
If you look at global rankings for the most played PC games of 2019, most of the top 20 is stuff that still runs fine on five-year-old mid-range systems.
 

bit_user

Titan
Ambassador
Blindly increasing cores that are never optimally loaded only increases cost and creates energy (resource) inefficiencies.
You've got it backwards. It's actually more energy-efficient to spread a load across a larger number of cores. That's why cellphone SoCs went multi-core, so early and aggressively, and why server CPUs have so many (even at the expense of clock speed).

Consider the energy efficiency of an EPYC 7742: 64 cores @ 2.25 GHz = 225 W. Compare that to a Ryzen 3800X: 8 cores @ 3.9 GHz = 105 W. They're the same cores! The explanation is that power increases as a nonlinear function of clock speed, so just dropping your clock speed can yield dramatic power savings. Running at 57.7% of the clock speed gives you 4.62x as many CPU cycles for only 2.14x the power!

And, in case you haven't noticed, the Ryzen 3950X features the same 105 W TDP. To accommodate 2x the core count, they simply dropped the base clock to 3.5 GHz. So, you get 2x the cores, each running 90% as fast. That's a pretty good deal, if you can use them. However, even if you can't saturate them all, at least the load will get distributed over more silicon, which leads to both better thermal transfer and lower total power dissipation.
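Spelled out, using the same figures quoted above:

```cpp
#include <cstdio>

int main()
{
    // Figures quoted above: EPYC 7742 (64 cores @ 2.25 GHz, 225 W TDP)
    // vs. Ryzen 3800X (8 cores @ 3.9 GHz, 105 W TDP) - same Zen 2 cores.
    const double epyc_cycles  = 64 * 2.25;  // aggregate GHz across all cores
    const double ryzen_cycles =  8 * 3.9;

    std::printf("clock ratio: %.1f%%\n", 100.0 * 2.25 / 3.9);         // ~57.7%
    std::printf("cycle ratio: %.2fx\n", epyc_cycles / ryzen_cycles);  // ~4.62x
    std::printf("power ratio: %.2fx\n", 225.0 / 105.0);               // ~2.14x

    // Same idea for the 3950X: 2x the cores of a 3800X at 3.5 GHz instead of
    // 3.9 GHz, i.e. each core running at ~90% speed in the same 105 W TDP.
    std::printf("3950X clock: %.0f%% of 3800X\n", 100.0 * 3.5 / 3.9); // ~90%
}
```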

There is a catch, however. More cores create more overhead in maintaining cache coherency and routing data between cores and memory. So, I don't want to sound like more cores are only a positive. Obviously, having way more cores than you need is not going to be worthwhile.

Current Intel 9900k CPUs have 40% processor efficiency available in the current circuitry design layout.
Efficiency of what? Measured how?

The advantages of process shrinks to thermodynamics can also be lost with ... encroaching on other functions (i.e. RAM, GPU) that can be performed efficiently separate from the CPU. All-in-one processing makes sense for small electronics, but not for larger laptops or desktops.
Agreed. iGPUs are not ideal and HBM wouldn't help the CPU's thermal situation. The difference is that you're talking about increasing the die area with independent circuitry vs. the effect of more cores, which is to take an existing workload and spread it out over a larger area of lower-clocked circuitry.
 

bit_user

Titan
Ambassador
How many cores do you need to be able to display some input boxes on the screen? Because the mainstream consumer market is mostly dumb form-based apps.
That's pretty irrelevant, though. I think most of us are talking about gaming, as the sort of thing that a typical PC user would do that's at all demanding.

Nobody is saying people should get a 16-core box for sending email and editing word docs.

Therefore consumer PCs will stop taking advantage of additional cores at 8 or 16 cores at most (even with multiple background apps). Workstations (non-graphical) will stop somewhere around 32-64.
I tend to agree with this, but primarily due to the scaling overheads I mentioned, above.

I think AMD is getting a bit ahead of the market, with 16-core CPUs. I'm not convinced it makes sense for them to go further, in a mainstream platform. Not for quite a while, anyway.
 

bit_user

Titan
Ambassador
While the base game physics would need to remain the same for everyone in a multiplayer title, there are various physics effects that could be enhanced without significantly affecting gameplay. Things like more realistic water ripples, or cloth simulation on player models, or leaves and other inconsequential debris blowing around, or smoke and cloud effects. And of course, single-player games don't need to concern themselves with keeping things similar between players.
Sure, but if you target effects at the very high-end, then the market share of users who can benefit is very small. So, I think a lot of game developers would have difficulty justifying spending much time on such effects.

Also, a lot of those sorts of effects are more efficiently generated on the GPU.

And maybe a developer will decide that having something like a heavily-threaded AI system drop performance down to 30fps at times on systems with a "mid-range" processor might be reasonable for their game, while more performance could potentially be achieved with higher core counts.
With 16-core CPUs, we're facing a situation where there's literally more than an order of magnitude between the CPU performance of a typical user vs. the high end. Other than HEDT - a very tiny sliver of the market - I'm not sure we've been here, before.

I guess it's not new, if you look at iGPUs vs. high-end dGPUs, but given how pixel count scales as a square of resolution, that's slightly different.
 

InvalidError

Titan
Moderator
Nobody is saying people should get a 16-core box for sending email and editing word docs.
By the time we get to 3nm, 16 cores may be the smallest practical CPU to manufacture (a 40-50 mm² chiplet, not much space for power and IO pads, ridiculous power density too) and with TSMC saying it is still on-track for 3nm, we may get there surprisingly soon.

While most people may not need 16 cores any time soon, they may get it regardless once process shrinks make anything less impractical.
 

bit_user

Titan
Ambassador
By the time we get to 3nm, 16 cores may be the smallest practical CPU to manufacture (a 40-50 mm² chiplet, not much space for power and IO pads, ridiculous power density too) and with TSMC saying it is still on-track for 3nm, we may get there surprisingly soon.

While most people may not need 16 cores any time soon, they may get it regardless once process shrinks make anything less impractical.
Would it be practical to put some DRAM on the same die, or does that use different manufacturing processes?

We're already seeing Ice Lake include things like bigger iGPUs and AI accelerators. So, more of that would be one option.
 
While most people may not need 16 cores any time soon, they may get it regardless once process shrinks make anything less impractical.
If it's limited to ~3GHz or even lower for the cheapest one, you can be assured that people will avoid them and buy older CPUs with fewer cores.
I mean, look at it: Intel sales are at an all-time high ever since AMD came out with Ryzen. Not that Ryzen is doing badly - it sells great as well - but more cores with lower clocks is not what will help games, and mainstream = gaming.
 

Gurg

Distinguished
Is further increasing CPU frequency an industry issue or is this an AMD issue?

The AMD expert (whose CPUs are having trouble running at claimed boost speeds, and are far below the standard 5GHz Intel all-core OC frequencies) is saying that increased frequency performance for gaming isn't attainable and that future performance increases will only come from more cores and software that utilizes more cores. I'm skeptical of that, and believe the recent TH OC of a 9900K from 5GHz all-core on closed-loop water cooling to 6.95GHz with liquid nitrogen shows an available frequency performance increase potential of 40%.

I believe that if Intel addresses reducing CPU thermal dissipation, doesn't get distracted by adding superfluous cores (more than 8), GPUs, and other functions to its gaming CPUs, and doesn't try to take full advantage of the decrease in CPU die area from upcoming shrink technology to put more CPUs per wafer, it might be able to increase its CPU frequencies from the current 5GHz standard OC closer to 6GHz in the next cycle using closed-loop water cooling. When I look at CPU reviews, I focus on relative gaming performance (where Intel excels) and couldn't care less about the specialized programs where AMD outperforms.

Having 6 cores in my current and previous PC was not a constraint in my usage, and my build last year was to utilize a 5GHz OC 9600K over my 4.2GHz OC 5820K and take advantage of faster DDR4 memory. At this point the only enticement for me to build a new PC would be a similar increase in frequency/performance, or a large jump in GPU performance only available with a new motherboard requiring a new-socket CPU.
 

nofanneeded

Respectable
One problem with automating tasks at a high level is that the results often tend to be less efficient than what a good programmer could accomplish at a lower level. So, what might make programming a bit easier may also result in the hardware delivering less performance. You see similar scenarios with modern desktop software written in high-level programming languages, where the performance on modern hardware isn't really any better than how older software performed on much older and slower hardware. Developers back then would take greater care to manually optimize their software, while now lots of software is just written in scripting languages that might be easier to program for and maintain, but tend to be less efficient.

The increasing number of cores will make up for the lost performance ... and a programmer can't address 64+ cores with ease. I expect the number of cores to increase tenfold in the coming 10 years if they introduce a new programming system.

I know that low-level programming is faster, but what I am talking about is different from using high-level languages today; it is not the whole language, it is just dividing the work across "n" cores. A mix - that is, you can include C++ libraries in it when you want.
 

Gurg

Distinguished
Would it be practical to put some DRAM on the same die, or does that use different manufacturing processes?

We're already seeing Ice Lake include things like bigger iGPUs and AI accelerators. So, more of that would be one option.
The more computer processes included/operating on the CPU chip, the slower the CPU will be due to the higher thermals. That is why Intel chips with the GPU function disabled are faster than the same ones with the GPU activated.
 

bit_user

Titan
Ambassador
I believe that if Intel addresses reducing CPU thermal dissipation, doesn't get distracted by adding superfluous cores (more than 8), GPUs, and other functions to its gaming CPUs, and doesn't try to take full advantage of the decrease in CPU die area from upcoming shrink technology to put more CPUs per wafer, it might be able to increase its CPU frequencies from the current 5GHz standard OC closer to 6GHz in the next cycle using closed-loop water cooling.
So, you're prepared to be content with a 20% speed up, when doubling the core count could offer about 90% more clock cycles? And what if games improve their threading, so they can actually utilize 16 cores?

I'm just saying that's what's on the table. The easiest way to increase CPU clock cycles is by adding cores. Sure, software needs to catch up. But, if/when it does, you might regret going with that 250 Watt 6 GHz 8-core option.
 

bit_user

Titan
Ambassador
I expect the number of cores to increase tenfold in the coming 10 years if they introduce a new programming system.
I'm not saying it won't happen, but you should be aware that cache coherency starts to become a really big problem, as you continue scaling up.

https://semiengineering.com/how-cache-coherency-impacts-power-performance/

I think cache coherency is overrated, and we might start to see some new architectures revisit those decisions, in the coming decade. This will require both new hardware and software technologies, to manage effectively.
 

InvalidError

Titan
Moderator
The AMD expert ... is saying that increased frequency performance for gaming isn't attainable
As I already wrote earlier, the last time AMD and Intel pursued clock frequency at any cost, they both ended up producing chips with horrible IPC, horrible power dissipation and passable overall performance. Clock frequency scaling is mostly dead, that's why all CPU designers regardless of ISA put a whole lot more design effort on IPC and TLP than clocks.

Higher clocks are simply too inefficient to be worth pursuing as a primary or even secondary design goal, so clocks only go up as fast as whatever timing margin is left over after adding all of the other architectural features will allow.
 
So, you're prepared to be content with a 20% speed up, when doubling the core count could offer about 90% more clock cycles? And what if games improve their threading, so they can actually utilize 16 cores?
That's the same argument people were making for 4, 6, 8, etc. cores; games just don't work that way. More cores can only prevent slowdowns (stutters) by making sure that you can deal with the massive amount of data the multithreaded part of the game needs, but preventing slowdowns is not speeding up the game.
The only way to get faster games is to be able to run the main game loop more times per second, and while you do need enough threads for the multithreaded part, it's the clocks/single-core efficiency that will give you more loops and thus frames per second.
 
Sure, but if you target effects at the very high-end, then the market share of users who can benefit is very small. So, I think a lot of game developers would have difficulty justifying spending much time on such effects.

Also, a lot of those sorts of effects are more efficiently generated on the GPU.
Many physics effects should scale relatively easily. So, additional cores could be used to perform more accurate or denser effects, for example. The same routines could be used with different numbers of threads, depending on how many cores are available, so it wouldn't necessarily be exclusive to processors with lots of cores, but able to scale up to them when available. Of course, a lot of this stuff can be done efficiently on the GPU, but if additional CPU resources are available, those could be made use of to free up the graphics hardware for rendering tasks.

And of course, most actual game developers wouldn't be touching the multithreading code anyway, since most games these days are primarily built on a handful of existing game engines and libraries. The game developer would just add their effects to the game as desired, leaving the physics engine to worry about how that is handled in the background.
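As a rough sketch of what that might look like in practice (hypothetical names and budgets, not from any particular engine), the quality of a purely cosmetic effect can simply be picked from the core count at runtime, while the simulation routine itself stays the same:

```cpp
#include <cstdio>
#include <thread>

// Hypothetical quality tiers for purely cosmetic physics (ripples, cloth,
// debris): the simulation routine stays the same, only the particle budget
// and the number of worker threads handed to it scale with the core count.
struct EffectBudget { unsigned worker_threads; unsigned particle_count; };

EffectBudget pick_effect_budget(unsigned cores)
{
    if (cores >= 16) return {12, 200000};  // plenty of spare cores: dense effects
    if (cores >= 8)  return { 6, 100000};
    if (cores >= 6)  return { 4,  50000};
    return {2, 20000};                     // low-end: keep the effects cheap
}

int main()
{
    const unsigned cores = std::thread::hardware_concurrency();
    const EffectBudget b = pick_effect_budget(cores ? cores : 4);
    std::printf("cores = %u -> %u worker threads, %u particles\n",
                cores, b.worker_threads, b.particle_count);
}
```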

When I look at CPU reviews, I focus on relative gaming performance (where Intel excels)
Relative compared to what? With a mid-range graphics card at 1080p or a high-end card at 1440p or higher resolutions, performance tends to be very similar between all current-generation CPUs, provided they have enough threads to handle a game well. In terms of average frame rates in today's games at 1440p with a 2080 Ti, even a Ryzen 3600 priced under $200 typically performs within 5% of a $600+ i9-9900KS. At 4K, gaming performance becomes practically identical between the two. And that trend carries down to less expensive graphics cards at lower resolutions as well.

And compared to the i5s at similar price points to the 3600, I fully suspect the Ryzen's additional threads should keep it relevant further into the future for AAA games. Sure, there will undoubtedly be some games that might benefit more from the slightly higher per-core performance of something like an i5-9600K, but lots of other games will run smoother on the Ryzen 5 due to its inclusion of SMT, so I would hardly say the i5 is the "better" option for gaming.
 

Gurg

Distinguished
So, you're prepared to be content with a 20% speed up, when doubling the core count could offer about 90% more clock cycles? And what if games improve their threading, so they can actually utilize 16 cores?

I'm just saying that's what's on the table. The easiest way to increase CPU clock cycles is by adding cores. Sure, software needs to catch up. But, if/when it does, you might regret going with that 250 Watt 6 GHz 8-core option.
So if the next iteration from Intel sports 6 or 8 cores at 6GHz and PCIe 4.0 with 2 NVMe ports, at a similar or reasonably increased price compared to the 9600K/Z390 combo I purchased a year ago, it will be in my rig soon after it's available, to better run the existing games I already own and like to play.

Whether I would switch and buy a similarly priced 12+ core optimized game system would depend on the titles and genres available.
 

InvalidError

Titan
Moderator
So if the next iteration from Intel sports 6 or 8 cores at 6GHz
Clock frequency alone does not matter, you need IPC too, as single-threaded throughput = clock x IPC. If you sacrifice 45% of your IPC for 90% higher clocks, you barely break even on throughput and will likely get worse performance under most circumstances. That's pretty much what Intel did when it transitioned from the P3 to the P4, where the 1-1.3GHz Coppermine and Tualatin P3s continued winning benchmarks against the 1.6-2.4GHz Willamette P4 until the 2.4+GHz Northwood finally pulled away under all circumstances.
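Put another way, single-threaded throughput is just the product of those two factors; a tiny illustration with made-up multipliers (not measured P3/P4 figures):

```cpp
#include <cstdio>

// Single-threaded throughput scales as clock x IPC, so a large clock gain
// can be cancelled out by the IPC given up to reach it.
double relative_throughput(double clock_mult, double ipc_mult)
{
    return clock_mult * ipc_mult;
}

int main()
{
    // Made-up multipliers, for illustration only.
    std::printf("+90%% clock, -45%% IPC -> %.2fx throughput\n",
                relative_throughput(1.90, 0.55));  // ~1.05x: barely ahead
    std::printf("+20%% clock, same IPC -> %.2fx throughput\n",
                relative_throughput(1.20, 1.00));  // 1.20x
    std::printf("+90%% clock, -50%% IPC -> %.2fx throughput\n",
                relative_throughput(1.90, 0.50));  // 0.95x: a net loss
}
```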

Most CPU designers agree that as processes get smaller, clock frequencies are likely to go down to keep power density manageable. ~5GHz may very well be the end of the road for practical consumer CPUs.
 

Gurg

Distinguished
Relative compared to what? With a mid-range graphics card at 1080p or a high-end card at 1440p or higher resolutions, performance tends to be very similar between all current-generation CPUs, provided they have enough threads to handle a game well. In terms of average frame rates in today's games at 1440p with a 2080 Ti, even a Ryzen 3600 priced under $200 typically performs within 5% of a $600+ i9-9900KS. At 4K, gaming performance becomes practically identical between the two. And that trend carries down to less expensive graphics cards at lower resolutions as well.
Read:
https://www.tomshardware.com/news/nvidia-explains-why-high-frame-rates-matter-in-competitive-games
Then look at the Avg FPS Entire Test Suite chart comparing CPUs on:
https://www.tomshardware.com/reviews/amd-ryzen-5-3600x-review,6245-11.html
Perhaps you will want to rethink your comment? Facts matter?
 

nofanneeded

Respectable
By the time we get to 3nm, 16 cores may be the smallest practical CPU to manufacture (a 40-50 mm² chiplet, not much space for power and IO pads, ridiculous power density too) and with TSMC saying it is still on-track for 3nm, we may get there surprisingly soon.

While most people may not need 16 cores any time soon, they may get it regardless once process shrinks make anything less impractical.

True, smartphones are proof ... tons of cores and memory, and it is just a phone; 90% of people never need that power in their phones, they just use them for maps, Facebook/WhatsApp/etc., and cameras.
 

InvalidError

Titan
Moderator
True, smartphones are proof ... tons of cores and memory
Smartphones and tablets are somewhat of a special case since they are under extreme pressure for the highest performance per watt due to very limited battery capacity, and more cores is the most power-efficient way of increasing total throughput. You may not need eight full-speed cores to check emails, but you can certainly use 4-6 low-power cores to delegate all of the non-performance-critical collateral activity and background stuff to, so the 2-4 high-speed, high-power cores can spend more time in lower-power states.
 

bit_user

Titan
Ambassador
That's the same argument people were making for 4, 6, 8, etc. cores; games just don't work that way. More cores can only prevent slowdowns (stutters) by making sure that you can deal with the massive amount of data the multithreaded part of the game needs, but preventing slowdowns is not speeding up the game.
I think there's really not a whole lot that can't benefit from concurrency. It's just a question of how motivated game developers are to utilize it. That payoff is increasing, as the market moves to higher core-counts. So, I don't know if the past is a very good guide, in this regard.

The only way to get faster games is to be able to run the main game loop more times per second, and while you do need enough threads for the multithreaded part, it's the clocks/single-core efficiency that will give you more loops and thus frames per second.
Sure, I get the appeal of high clock speeds. All else being equal, if you can increase the clocks 10%, then you should get 10% better throughput (assuming a good cache hit rate and that other bottlenecks don't arise). And the same often isn't as true of adding cores. The issue is just that clock speed wins are a lot harder to achieve.

Anyway, there's an uncomfortable compromise that AMD is dealing with. They're using the same cores for both server and desktop applications, and that creates some real tension. As I noted above, lower clock speeds are more energy efficient, and therefore preferable to cloud operators (also, they should lead to better reliability and longer lifespan). However, the mainstream still really cares about single-thread performance. AMD could make cores that clock higher, but when you turn down the clock speeds in server chips, the extra design tolerances added to reach higher frequency targets would effectively be "wasted", leading to longer-than-necessary pipelines and sub-par server performance. So, AMD is naturally trying to do a balancing act, and perhaps that's one reason even the Zen2 cores don't clock as high as Intel's.

Intel, on the other hand, has somewhat distinct cores for both desktop and HEDT/server. So, they could at least theoretically tune their pipelines better for single-thread vs. efficiency targets.
 