AMD's Future Chips & SoC's: News, Info & Rumours.

Considering AMD was close to dead prior to Ryzen's release, this is amazing. Intel really needs to get it together ASAP.
Once again, keep in mind that Intel's market is much bigger than desktop PCs and it is still selling everything it can possibly make, with most of its fabs at ~100% capacity and months of back-orders on the books.

Aside from sorting out its 10nm and 7nm issues so it can ramp those up in eligible fabs to increase throughput and catch up with back-orders, there isn't much more "get togetherness" that Intel can do.
 
Which market? There's more than a few. Intel/AMD selling directly to consumers is moot; nobody off the street calls and orders a mainstream CPU from Intel. Nor from AMD.

So you have the contract buyers like Dell and HP. You also have the boutique builder market like iBuyPower and CyberPower, and the retail market like Amazon and Newegg. And that's just the mainstream consumer side, not including the commercial or industrial markets.

A few years ago I helped upgrade the lighting in Cracker Barrel's corporate headquarters: over 1000 PCs in use there, 90% Intel, 10% AMD, and 4 Macs. And you can bet that Intel, AMD and Nvidia are all signed up for maintenance/replacement contracts.

So in which market is AMD currently king, and only by a hair? It's one of many; there might even be 2 or 3 markets that nobody posts or writes about, but if AMD is king in 3, Intel is still king in 10 others.

Don't get me wrong, I'm loving the fact that Intel no longer gets to throw "90% of the world's internet" commercials at me, somebody finally snatched the monopoly pedestal out from under Intel, but AMD has its eggs in multiple baskets, unlike Intel, and doesn't Need to be king. AMD just needs to Be there to keep the king in check.
 
Intel has eggs in many more baskets than AMD does. Just because you don't care nor hear about it as a PC enthusiast does not make it cease to exist.

Maybe, you could be right, I dunno exactly; I don't think more than a handful of people actually know the full extent of what either AMD or Intel are into. But if Intel quit making CPUs altogether, they'd be sunk. AMD, maybe not so much.

And there's more to me than pc enthusiast 😎
 
Once again, keep in mind that Intel's market is much bigger than desktop PCs and it is still selling everything it can possibly make ...

True, in the short to medium term Intel has no shortage of orders, and they also have a huge cash reserve. They are in no danger from a business point of view. That said, there have been some quite big shifts recently. AMD getting their act together is one thing, but other large firms are now fielding ARM processor designs with equivalent performance (specifically the Apple M1, which can match Zen 3 in single core), and others are working on their own in-house ARM-based CPU designs for servers (Microsoft, Google etc), so Intel does have some issues to deal with long term. Heck, the M1 can even emulate x86-64 instructions faster than most low-power Intel parts can run them natively.

For a long time Intel were on top of their game: CPU releases were regular and supply was plentiful, so there was little need for other industry players to develop an alternative. With Intel being stuck on the same process and core for so long though it's forced multiple large partners to move into new areas. AMD have done well as a result; however, I don't think they can supply the volumes some of these customers are looking for - hence the in-house designs. I think this is where the real issue is for Intel: they are behind on the fab game, and Apple have just proven there is no need to stick to x86 processors (even providing strong performance in existing non-native software). That opens the floodgates for a lot of possible competitors to enter the market on an architecture Intel doesn't control - and for the first time ever those competitors are building on more advanced process tech than Intel can currently field.

It's hard to know for sure at the moment, but I think more damage has been done to Intel than is currently apparent - the true outcomes of all this are going to take a few years to play out. On a more positive note, I am confident Intel will get back into the game - they have historically been at their best when put under pressure.
 
Intel could just remove the stupid iGPU from mainstream (and up) CPUs and leave them for the low end.

AMD has done a good job of kicking the Intel cage, so I hope we see some competition and not just Intel increasing prices AGAIN to squeeze consumers more instead of competing.

Cheers!
 
With Intel being stuck on the same process and core for so long though it's forced multiple large partners to move into new areas.
Intel has only "been stuck with the same core" because it keeps running into process issues: pushing more complex cores without compromising on power and clocks requires a more advanced process. If Intel hadn't run into process issues year after year, Cannon Lake would have launched four years ago, Ice Lake two years ago, and it would be Golden Cove launching in 2021 instead of a backported Sunny Cove / Ice Lake.

Intel is doing fine design-wise. Process hiccups are holding everything back.

On the plus side, were it not for Intel's 10nm delays, AMD would be facing imminent bankruptcy right now due to Zen not coming close to being considered credible competition except maybe in the high core count and massive IO arenas.
 
True. To a point. But it kinda depends on which angle you look at it from. Intel's core and process issues aren't on their end, it's our fault. The consumer. We created the Intel known today and are also responsible for AMD's decline after the K5's success. Intel went left, specializing in single-core speeds, and we lapped it up, ending up with uber-high fps in games like CSGO. AMD went right, tried the multiple-core thing with the FX, and got kicked to the curb as a result.

Public demand: we want more. More power, faster CPUs, cooler temps, plug and play, and more speed. Intel happily obliged and we made them #1. Now they cannot go backwards. They've pushed CPUs to 5.3GHz turbos inside TDP values. But public demand in games and other software requires more threads, so Intel obliged us again with the 10th gen, adding hyperthreading and doubling the thread counts. But they can't go backwards; they can't sell an 11th gen with slower turbos and less power just to be able to keep it cooled by anything less than a custom loop.

We created the monster. Intel just hasn't figured out how to cage the beast and not end up with an Infinity Fabric-based CPU. We won't see 11th gen until it's better than 10th gen; Intel isn't about to revisit Broadwell.
 
True. To a point. But it kinda depends on which angle you look at it from. Intel's core and process issues aren't on their end, it's our fault. The consumer. ...

I'm not sure I agree with this - the demand for faster compute has always been there (otherwise we would all still be running on IBM XT clones, happy with our green-phosphor-on-black MDA screens, working under DOS). The issue is that Intel has gotten stuck; the high turbos and core counts are a result of Intel pulling a defensive action to stay relevant. They haven't really given us anything new that wasn't possible when Skylake processors first appeared (you could overclock a 6700K to approaching 5GHz, and higher core count options were offered on HEDT and server platforms long before they became consumer parts; all Intel have done is move these initially high-end parts down into the mainstream space). What they have done is very similar to AMD's actions when they were in this situation with FX, which simply missed the mark performance-wise (i.e. the 9590: overclock the hell out of it to make it competitive-ish and forget about power use).

I think the issue for Intel, however, is that they were too slow to react when they ran into problems (or, likely, the true state of 10nm and 7nm was hidden from the top execs by people fearful for their jobs). Whatever the reason Intel didn't prepare to stay on the 14nm node and thus didn't develop anything new on that basis. It's similar to what happened between Nvidia and AMD on graphics: Nvidia realised the next node was going to be a long time coming, so they did some top-notch design work to squeeze a full process-shrink-like performance jump out of the same process (the 900 series) whilst AMD waited. The end result was AMD wound up playing catch-up for 3 generations.

I mean, Intel's whole business model has been based on being the best - the danger to them now is that that's no longer the case, and it's no longer just AMD they have to contend with. I think the biggest risk for Intel is actually in the server space, where on the one hand you have very strong offerings in the form of Epyc, and on the other lots of companies are experimenting with Arm-based solutions.
 
Intel has been a victim of their own greed. Their market strategy has been to segment the market artificially, creating as many niches as possible in order to squeeze them hard. They kept the "mainstream consumer" market at 4 cores because they already had an absurdly more expensive platform alternative in case you wanted or needed more, and also because AMD was struggling to get something out to compete. Due to that extreme segmentation, when one of their main assets kind of crumbled (fab'ing), they could not keep feeding the artificial segments they had created in the mainstream market, and AMD took the chance. I can assure you, we'd still have 4-core parts in the mainstream if it weren't for AMD. Maybe 6-core parts, maybe.

For context: https://arstechnica.com/gadgets/202...es-intel-for-lack-of-ecc-ram-in-consumer-pcs/

Cheers!
 
Intel has been a victim...

Intel's no victim. Even with severely dented pride in the desktop segment, they're not suffering financially since they can at least fill customer orders. As long as AMD remains fab-less they can't control their own supply chain, and so will never catch up to Intel. Not that it necessarily will happen, but I can see Intel quite merrily ceding the 'gaming king' crown to AMD, taking the 'value leader' crown from them, and retaining an overwhelming sales leadership position for a long time. It's the large-scale customer orders that make the big money, and those customers need assurances they can get product delivered, in numbers and in timely fashion.

I have to think ECC memory in the desktop arena really isn't that important, making Linus Torvalds' rant largely just self-serving. I note that some Ryzen processors at least have supported ECC, but motherboards that implement it are few and far between. So Intel's segmentation probably isn't such a bad idea in this case.
 
Nahh. BF4 abused that notion. The FX-8350 was second only to the i7-4790K (not counting HEDT platforms) in FPS, surpassing even an OC'd i5-4690K. That game was simply optimized to use threads far better, rather than relying on speed alone.

2 cores at 3.6GHz can get more work done than a single core at 5GHz. That's why we have Xeons in the first place, consoles that ran 8 threads at 1.8GHz, etc. With BF4 there was a shift in thinking among game devs: by using more threads they could get more inside a frame and still get good frame counts. They could separate AI, physics etc. from the long single code string into multiple strings that got processed faster as a result.
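
To put that in rough code terms (a minimal sketch, not how BF4 or any real engine actually works; update_ai and update_physics are hypothetical placeholders, and in CPython the GIL limits true CPU parallelism, so this only shows the structure of the idea):

```python
# Minimal sketch of per-frame task splitting, assuming the AI and physics
# updates are independent enough to run in parallel.
from concurrent.futures import ThreadPoolExecutor

def update_ai(state):
    # pathfinding, decision making, etc. for this frame (placeholder)
    return {"ai": "updated"}

def update_physics(state):
    # collision detection, rigid-body integration, etc. (placeholder)
    return {"physics": "updated"}

def run_frame(state, pool):
    ai_job = pool.submit(update_ai, state)
    physics_job = pool.submit(update_physics, state)
    # The "master" thread waits for both results, then assembles the frame
    # data that gets handed off to the renderer/GPU.
    return {**ai_job.result(), **physics_job.result()}

with ThreadPoolExecutor(max_workers=2) as pool:
    print(run_frame({"entities": []}, pool))
```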

The days of quads were numbered, just as the days of Pentiums and dual cores were numbered. It was just a matter of time. AMD was counting on that shift; they just jumped the gun and brought out the FX way too early. Now, with games using 8 threads more popular, Ryzen was a perfect fit. Intel's answer was to add hyperthreading to more than just the top mainstream CPU, and to drop the prices on comparable CPUs. They had to, just to remain competitive with not only public demands, but game demands. GTA V proved that: a 4-thread minimum, and quads still bog down, anywhere up to 100% usage.

It was time. Just this time Intel was the runner-up.
 
2 cores at 3.6GHz can get more work done than a single core at 5GHz.

If you can scale the workload and are free from storage/memory performance constraints and other potential system bottlenecks, true. But not all workloads scale in that manner. And for most non-server workloads, there's diminishing returns after a few cores since the workloads you are attempting to run simply don't scale to that many processors.
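
A rough way to see those diminishing returns is Amdahl's law: if some fraction of the work is inherently serial, extra cores only help with the rest. The 20% serial figure below is just an illustrative assumption, not a measured number:

```python
# Amdahl's law: speedup = 1 / (serial_fraction + (1 - serial_fraction) / cores)
def speedup(serial_fraction, cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (2, 4, 8, 16, 32):
    print(f"{cores:2d} cores -> {speedup(0.20, cores):.2f}x")
# 2 -> 1.67x, 4 -> 2.50x, 8 -> 3.33x, 16 -> 4.00x, 32 -> 4.44x
# With just 20% serial work, going from 8 to 32 cores barely moves the needle.
```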
 
... And for most non-server workloads, there's diminishing returns after a few cores since the workloads you are attempting to run simply don't scale to that many processors.
People always think in terms of a single workload; Windows and (especially) Linux are multi-tasking, and some users actually put them to work running multiple apps, each with its own workloads launching multiple threads. In particular: what about developers and others who run multiple VMs? Each VM can have its own workload(s) requiring more cores for optimum performance.

I'm also confident that developers will continue producing apps that scale with more cores, even if done wastefully. That, simply because they can. Minimum memory configurations, even for the average home desktop, have climbed steadily over the years as developers assumed you'd just get more since it was readily available and (usually) ever cheaper. Why won't that happen with highly threaded applications?
 
Gotta start with the code strings. Back in the day when 16-bit was prevalent, code strings were relatively short. AI was basically non-existent, graphics were simplistic; it wasn't hard to write all you needed for single-thread usage.

Then things got complicated, and get more so with every new game and its use of photo-realism, CGI, physics, AI, ambient occlusion, etc. That's far easier to write as several separate lines of code instead of trying to cram all of it into just 1 or 2 threads. The CPU will process each line as it can, if it has the threads to do so, combining it all post-process in the master thread to get shipped to the GPU.

Without that ability to process multiple lines simultaneously, you'll get code strings long enough to show up as seriously diminished fps. It's the entire reason GTA V went back to specifying a quad minimum, which hadn't been seen in a game for several years.
 
Whatever the reason Intel didn't prepare to stay on the 14nm node and thus didn't develop anything new on that basis.
There is nothing new to develop on an old process; Intel has already squeezed just about everything it can possibly squeeze out of a given process after one or two "optimization" cycles. Pushing forward requires process advancements, be it process shrinks or significant refinements to an existing process like 14nm+++, which open up the opportunity to squeeze in some more logic while mitigating the impact on TDP and clocks.

You can only ever do so much within a given power and die area budget on a given process.
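
To give a feel for why the power budget bites so hard on a fixed process: dynamic power scales roughly with C·V²·f, and higher clocks generally need higher voltage, so power climbs much faster than frequency. The voltage/frequency pairs below are illustrative guesses, not real silicon data:

```python
# Rough dynamic-power scaling: P ~ C * V^2 * f, with capacitance C held constant.
def relative_power(volts, ghz, base_volts=1.0, base_ghz=4.0):
    return (volts / base_volts) ** 2 * (ghz / base_ghz)

print(f"{relative_power(1.00, 4.0):.2f}x")  # baseline
print(f"{relative_power(1.15, 4.8):.2f}x")  # ~20% more clock -> ~1.59x the power
print(f"{relative_power(1.30, 5.3):.2f}x")  # ~33% more clock -> ~2.24x the power
```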
 
Intel's no victim.
It's an idiom... It doesn't mean they're the "poor victims", but it's meant to reflect something akin to "they were stupid enough to get in that situation".

As for the technology stack for Intel... Credit where credit is due: 14nm has been an amazing node for them and, at the very least, it's been carrying their delays like a champ. If they just weren't so darn stubborn and actually tried to compete on technology alone, they could have pushed their designs a bit more to accommodate 14nm for longer. They are using more power and have slightly lower-performing cores, but they are still "up there". For instance, why didn't they just adapt X299 for the mainstream and stop being so DARN stubborn about keeping the market segmentation? Like, WHY Intel, WHY. No, I know why: the DIY market is too small to care about and AMD is not that far away.

Anyway, I'm sure they'll need a couple more years to make their process better, at the very least. Let's see how they cope with designing their uArchs on what is, on paper, the lesser node.

EDIT: Are they biting the bullet and obeying their investors to drop their fabs? https://hardware.slashdot.org/story...smc-samsung-to-outsource-some-chip-production

Cheers!
 
People always think in terms of a single workload; Windows and (especially) Linux are multi-tasking ... I'm also confident that developers will continue producing apps that scale with more cores, even if done wastefully.

Multiple applications will obviously scale (assuming no other bottlenecks exist), but even then your gain is limited as there's a point where thread overhead overtakes the advantage of running multiple apps in parallel. You also need to consider memory access (especially if paging gets involved) and other IO bottlenecks for this case, which start to eat away at your potential gains fairly quickly.

There's also the odd case (which I'm seeing come up a LOT more often in recent years) where developers start to make assumptions about CPU resource availability and code their applications to use as much CPU as is available, without bothering to check whether anyone else is using said CPU resources. Thread thrashing is absolutely a thing, and it's a mess to clean up when it happens.
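
One small mitigation, sketched under the assumption of a Linux host where the process affinity mask can be queried: size the worker pool from the CPUs the process is actually allowed to use, rather than blindly grabbing every core in the machine.

```python
# Sketch: size a worker pool from the CPUs this process may actually use
# (os.sched_getaffinity is Linux-only), instead of assuming the whole machine
# is ours. It doesn't eliminate thread thrashing, it just avoids the worst
# "grab every core by default" behaviour.
import os
from concurrent.futures import ProcessPoolExecutor

def available_cpus():
    try:
        return len(os.sched_getaffinity(0))  # CPUs in our affinity mask
    except AttributeError:
        return os.cpu_count() or 1           # fallback on other platforms

if __name__ == "__main__":
    workers = max(1, available_cpus() - 1)   # leave a core for everyone else
    with ProcessPoolExecutor(max_workers=workers) as pool:
        print(list(pool.map(pow, [2, 3, 4], [10, 10, 10])))
```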

I also note that we aren't going to be able to continue scaling CPU cores forever; CPU yields are already starting to become a limiting factor in CPU design, and that's not going to get better as we keep shrinking manufacturing nodes. Also consider that adding additional CPU cores is expensive (both in terms of cost and power/die space). I suspect we'll likely top out at 32 cores, then have to go back to incremental IPC/Clock increases again until someone comes up with a new CPU architecture.
 
Multiple applications will obviously scale (assuming no other bottlenecks exist), but even then your gain is limited ... I suspect we'll likely top out at 32 cores, then have to go back to incremental IPC/Clock increases again until someone comes up with a new CPU architecture.

The chiplet strategy AMD have been employing does mitigate the yield issue to some extent. I do agree, though, there is an upper limit to what makes sense in the mainstream desktop market, as other bottlenecks start to appear as core counts go up. Memory bandwidth jumps out - the reality is desktop systems need to stick with 2 memory channels (OEMs are already pretty bad about not even fully populating a dual-channel setup), so bandwidth per core is an issue. I think 16 cores looks to be the practical limit for DDR4; DDR5 might allow that to increase again.
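
Some rough arithmetic on the bandwidth-per-core point, assuming dual-channel DDR4-3200 and ignoring the (large) effect of caches:

```python
# Dual-channel DDR4-3200 peak bandwidth: 2 channels * 8 bytes * 3200 MT/s = 51.2 GB/s
channels, bytes_per_transfer, mega_transfers_per_s = 2, 8, 3200
peak_gb_s = channels * bytes_per_transfer * mega_transfers_per_s / 1000  # 51.2 GB/s

for cores in (4, 8, 16, 32):
    print(f"{cores:2d} cores -> {peak_gb_s / cores:.1f} GB/s per core")
# 4 -> 12.8, 8 -> 6.4, 16 -> 3.2, 32 -> 1.6 GB/s per core
```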

The truth is, though, for most consumers 4 cores or 32 cores won't make a difference - most standard tasks don't need that much compute power. I just read a question on this forum from a guy going from an Ivy Bridge i5 to a 10400K and wondering why he didn't 'notice a difference' - for Windows, email, web browsing and such, the i5 was already more than adequate (as would be even older hardware in most cases, provided it's paired with sufficient RAM and a decent SSD). It's only really when you start getting into rendering, simulation and heavy video work that the higher core counts make sense (and even then it's only a part of the workflow that is that demanding).
 
What people forget is that AMD was dead. Now they are challenging Nvidia and Intel at the same time. I think Lisa Su did a hell of a job for AMD. Intel's releasing the 11 series soon, but first, it's gonna run hot. Second, Intel's more expensive (here a 5900X costs 2.4k and a 10900K costs 14k). The good thing is it's on the same board, so you can use your Z490 or whatever. I couldn't care less who's faster. I have been with AMD for many years, even through FX. I still have my first dual-core AMD motherboard, CPU and RAM (I upgraded the mobo last month, but the old one still works and I will use her for another project). Fanboys will always b@#th and moan and cry to prove the brand they chose is the fastest.
Wake up, it's 2021, who really gives a flying F@#$. Buy what you can afford. I came from an FX to a Ryzen 5 2600 and I am happy with my choice, paired with a 1070 Ti. I will maybe upgrade in 5 years' time, as she does everything I want to do. I don't have many games, but those I do have all run at 100fps+. I personally am impressed how fast AMD caught up with its 2 competitors. Even if you don't like AMD, you have to be impressed with Lisa Su. She pulled a rabbit out of a hat and changed gaming forever.
PS: If Intel's claims are true then they are 5% faster in gaming, but with only 8c/16t, AMD's gonna slaughter them in multi-core.
 
....
I also note that we aren't going to be able to continue scaling CPU cores forever; CPU yields are already starting to become a limiting factor in CPU design, and that's not going to get better as we keep shrinking manufacturing nodes.
....
Isn't that what the Zen2/3 chiplet architectural approach solves? Or, if not solve exactly, mitigates? Rumors making it to us lay people have it that Intel is going down the path of something like 'chiplets' too...perhaps for similar reasons?

Also, it sounds like a lot of things can't continue indefinitely in semiconductor design, as we've found with 7nm for instance. Relying on a geometry reduction for performance improvement simply isn't feasible when clocks become limited because the heat can't be removed fast enough with conventional cooling methods. The power density problem is real and won't go away, only be put off a little bit.

It's curious to note how Zen 2 and 3 CPUs are perfectly capable of enormously high clock speeds; they just have to be cooled to sub-zero temperatures to do so. Impractical, of course, but it does seem to indicate the architecture is capable of much more if only there were a way to cool it well enough.

The bulk of the desktop market is 8 cores or less, so AMD having an advantage beyond 8 cores is of little material importance.
Surely, though, the bulk of the HEDT market, where time is money, must increasingly be going beyond 24 cores. This market isn't worried about things like 'latency' nearly as much as the gamer crowd, so the compromises of bringing 32 cores (and more) to bear on complex computational problems aren't much of an issue.

I believe the bulk of the 12- and 16-core DT market is people who dabble (as in, don't do it as their primary source of revenue) in things that benefit from the cores but prioritize raw low-latency performance for things like gaming. Either that, or they just like knowing they have such beastly systems, made possible by them being so affordable.
 