Intel Introduces New Mesh Architecture For Xeon And Skylake-X Processors


Dugimodo

Distinguished
Competition is good, but Intel hasn't just plucked this design out of the air in response to AMD releasing Ryzen. The truth is, if all Intel wanted to do was compete with Ryzen, they already have products that match or better it, and all they'd need to do is drop the prices - admittedly hugely. I'm impressed by Ryzen, but it only wins on price.
 


I have to disagree a bit. If you compare Ryzen to the fastest non-HEDT CPU, the 7700K, there are a lot of workstation and scientific tasks where Ryzen is faster than Intel. Threadripper is coming to compete in the HEDT line, so I think we can have more apples-to-apples discussions instead of comparing Ryzen to Intel's HEDT line, which is a bit disingenuous. Will your statement hold true once all the chips are on the table? Maybe, but I think before we jump to conclusions we should wait to see Intel's and AMD's HEDT chips and then call it.
 


Well, if you factor in price AND performance, then it comes out ahead. Yes, in raw speed Intel has faster cores, but you pay through the nose for those faster cores. Everyone except the independently wealthy or large enterprise customers needs the best bang for their buck - and that's what Ryzen provides.

(disclosure - I have a Ryzen 1700 - and boy it has some processing power)

 

ClusT3R

Distinguished
Lol, like always, AMD is the one innovating and the other has been the copycat. The funny thing here is that they were forced to change the architecture - I don't know by how much - but now you're going to see apps that need to be optimized for this new design, and that's when people are going to understand what the innovation process has cost AMD all these years.
 

bit_user

Titan
Ambassador
After seeing this on Xeon Phi v2 (KNL), I was wondering whether we'd see it in E5/E7 Xeons. Rings only get you so far.

BTW,
trips to fetch data in adjacent CCXes incurs a latency penalty due to the trip across the Infinity Fabric. Communication between threads on cores located in disparate CCXes also suffers.
This is redundant. Threads communicate via caches & memory.
 

homeles

Distinguished
The "mesh" thing has been in the works for ages. David Kanter over at Realworldtech predicted that Skylake-EX would use it back in January 2014. The TeraScale project was Intel's first (at least publicly) exploration of a 2D mesh, and that presentation was back at ISSCC 2007. Knight's Landing already uses it. It has zero to do with AMD.

Now, is Intel making changes in response to AMD? Surely they are. Where you'll see this is with marketing, SKUs, and pricing -- you can change those on the fly. Something like this "2D mesh", however, has been in the works for over a decade, and to claim "it must be AMD!" is perhaps the single most asinine claim I have read in my entire time of following hardware (a good 7 years or so).

- III-V
 

kinggremlin

Distinguished


Intel demonstrated a 48-core CPU that utilized a mesh topology back in 2009, LONG before AMD was talking about Infinity Fabric. Intel didn't just magically come up with this in an instant in response to AMD.
 

InvalidError

Titan
Moderator

Intel didn't put Skylake-X together overnight. For its mesh arrangement to be hitting production silicon now, Intel must have made the design decision somewhere in the neighborhood of two years ago.

Also, that's the third time Intel has changed its server CPUs' internal interconnect arrangement despite its monopoly position in the server space. This newest scheme bears a striking resemblance to Altera's FPGA scheme, where extra D-flops are buried in the routing fabric, so I strongly suspect the new routing arrangement is closely related to Intel's acquisition of Altera a few years ago: buy a programmable logic chip company, integrate some programmable-logic-related architecture in new CPUs.

Additionally, although Intel may have been slacking off in the low(er) margin mainstream and HEDT markets, its higher-end server CPUs have continued scaling as Intel needs more powerful and power-efficient chips to get repeat sales from companies that have already bought their previous-gen chips at those crazy E5/E7 prices and profit margins.

AMD's Ryzen/TR/EPYC have nothing to do with it, that's where Intel was going on the server side regardless.
 

ClusT3R

Distinguished
One thing is starting research, another is getting it done, and AMD just got it done. If Intel really had that technology a long time ago, they would have released it three years back. And look at the history: first to 64-bit, first to dual core, first to quad core, and the list goes on. I'm not saying Intel doesn't have the capacity to innovate - they just don't want to; they've played it safe all these years, like Nvidia. Like I said, innovation has big problems these days because the development process takes years to understand and make very good use of the hardware. Intel knows that very well, and so does Nvidia. And by the way, Mesh and Infinity Fabric are two completely different things.
 

bit_user

Titan
Ambassador
AMD's design held a scalability advantage over Intel's ring bus architecture -- the company can simply infuse more CCXes onto the package to increase the core count.
Traffic between dies supposedly traverses PCIe. Even though it's still managed using the Infinity Fabric protocol, it's susceptible to bottlenecks and latencies you wouldn't see between two CCXes on the same die.
 

InvalidError

Titan
Moderator

If you come up with a brand-new idea for a CPU optimization, it will still take you at least two years from concept to first working silicon. If you have multiple competing solutions to a design challenge, the first one you pick may not yield the scaling properties you initially expected, but you won't know that for certain until you get the first real results two years later. From there, it'll be another two years before you can correct that mistake with an updated design using one of the other candidates.

Most significant ASIC re-designs don't happen overnight.
 

bit_user

Titan
Ambassador

Yeah, sounds familiar. Rings are pretty common, as are meshes. The PS3's Cell processor also used a ring bus, for instance.

It seems like meshes would be a bit more difficult or require more transistors to implement, possibly explaining why Intel hadn't done it sooner. If rings scaled well enough to the core counts they had 'till now, why bother with meshes?
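
As a very rough back-of-the-envelope sketch (my own toy model, not anything from the article or this thread): on a bidirectional ring, the average hop count between two stops grows linearly with the number of stops, while on a 2D mesh it grows roughly with the square root of the core count, which is basically why rings stop being attractive past a couple dozen cores.

```python
# Toy model: average hop distance on a bidirectional ring vs. a 2D mesh.
# Assumes uniform all-to-all traffic and ignores contention, link width, etc.
import itertools

def ring_avg_hops(n):
    # Shortest path between stops i and j on a bidirectional ring of n stops.
    dists = [min(abs(i - j), n - abs(i - j))
             for i, j in itertools.combinations(range(n), 2)]
    return sum(dists) / len(dists)

def mesh_avg_hops(rows, cols):
    # Manhattan (XY-routing) distance on a rows x cols mesh.
    nodes = list(itertools.product(range(rows), range(cols)))
    dists = [abs(a[0] - b[0]) + abs(a[1] - b[1])
             for a, b in itertools.combinations(nodes, 2)]
    return sum(dists) / len(dists)

for cores, (rows, cols) in {8: (2, 4), 16: (4, 4), 32: (4, 8), 64: (8, 8)}.items():
    print(f"{cores:>2} cores: ring ~{ring_avg_hops(cores):.1f} hops, "
          f"mesh ~{mesh_avg_hops(rows, cols):.1f} hops on average")
```

Under that toy model the ring's average latency roughly doubles every time the core count doubles, while the mesh grows much more slowly - so at the core counts Intel shipped until now, the ring's simplicity was still a reasonable trade-off.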
 

InvalidError

Titan
Moderator

I'm guessing the internal CCX interconnect is a direct connection between cores for cache snoops, with the rest being done over the shared L3. That classic scheme only works well for modest core counts, since the snoop network grows proportionally to N^2. It was prohibitively expensive for multi-socket configurations back in the days of single-core CPUs, where anything beyond two sockets sharing the FSB was a problem, but it's manageable on a single chip with maybe up to eight cores.
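
Just to put a number on that N^2 growth (my own illustration, assuming a naive all-to-all snoop fabric where every core keeps a dedicated link to every other core):

```python
# Point-to-point links needed if every one of n cores snoops every other
# core directly: n choose 2 = n*(n-1)/2, i.e. quadratic growth.
def snoop_links(n_cores: int) -> int:
    return n_cores * (n_cores - 1) // 2

for n in (2, 4, 8, 16, 32, 64):
    print(f"{n:>2} cores -> {snoop_links(n):>4} links")
# 2 -> 1, 4 -> 6, 8 -> 28, 16 -> 120, 32 -> 496, 64 -> 2016
```

At 8 cores (28 links) it's still tolerable; by the time you get to the 28 or more cores these server chips target, the wiring and snoop traffic of a direct scheme would be absurd, which is exactly the pressure that pushes designs toward rings and then meshes.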
 

kinggremlin

Distinguished


You're wrong on pretty much all counts. AMD wasn't remotely close to the first 64-bit processor in 2003. DEC Alpha was 64-bit in 1992 and had Windows support at that point. IBM POWER was 64-bit in 1998. MIPS went 64-bit in 1999, and Intel Itanium was released in 2001. AMD was dead last in the race to 64-bit among major architectures, which is why they could only come up with the band-aid 64-bit extensions for x86, which ended up screwing us all and handicapping us with all the outdated garbage that was in x86.

AMD was not first to dual core either. IBM was first with their POWER CPU, which targeted the same server market as AMD's initial dual-core Opteron. PowerPC did, however, make it into consumer-targeted Apple Power Macs. Intel was the first to sell a consumer-targeted dual-core x86 CPU with Smithfield. IBM also beat AMD to quad core by a few years.

I'm not sure what it is about AMD fans that they think AMD invented everything when they've not really innovated much of anything. You don't see that with fanboys of other brands.
 


You had me until "You don't see that with fanboys of other brands." because that is just not true. Yeah, we see fanboyism with a lot of other brands.
 

bit_user

Titan
Ambassador

This is definitely wrong. According to this, MIPS 64-bit R4000 was launched in 1991.

http://processortimeline.info/proc1990.htm

I personally remember the 64-bit race among RISC CPUs of the early 1990's, with MIPS among them. In fact, the Nintendo 64 launched in 1996 and had a legitimately 64-bit CPU.


This definitely makes you sound like an AMD-hater. Let's not forget that Intel had plenty of time to extend x86 to 64-bit however they wanted. Instead, they chose to use it as an opportunity to force their new IA-64 architecture on the mainstream computing world.

And if you want to talk about band-aid solutions, how can you overlook the hack that Intel bestowed upon the x86 ecosystem that is PAE - extending 32-bit x86 to address 36 bits of physical memory? This was purely to breathe a bit more life into x86 until their Itanium CPUs could take hold (but IA-64 was short on more than just time). Had AMD not launched x86-64, maybe Intel would've just updated this to support 40 bits, etc.
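
Just to put numbers on that (my own arithmetic, not from the post): 32-bit addressing tops out at 4 GiB of physical memory, PAE's 36 bits get you to 64 GiB, and a further bump to the hypothetical 40 bits mentioned above only buys another factor of 16.

```python
# Physical address space reachable with various address widths (GiB = 2**30 bytes).
for bits in (32, 36, 40):
    print(f"{bits}-bit physical addressing -> {2**bits // 2**30:,} GiB")
# 32-bit -> 4 GiB, 36-bit (PAE) -> 64 GiB, 40-bit -> 1,024 GiB
```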

You might nit-pick a few decisions AMD made in x86-64, but I didn't hear much criticism at the time. IMO, it was eminently pragmatic, and included a number of worthwhile improvements, beyond just 64-bit addressing.

As for the rest of your CPU history lesson, I'd suggest anyone truly interested in the timeline of microprocessor developments would be much better served by reading this:

http://processortimeline.info

From my perspective, discovering it was the best thing to come of your post.


I dunno which is worse: fanboys or haters.

A lot of the innovators of the computing world are no longer with us, or are shadows of their former selves. It turns out that market timing & execution are things which matter just as much as an idea's genius and originality. The fact is that virtually all of the ideas underpinning today's computing products had their origins decades earlier. I knew a computer engineer who worked at Data General & Stratus, in the 1980's and 1990's. He said he thought they were innovating, only later to discover pretty much all of the original ideas they thought they had were already done by IBM and others, in the 1960's and 1970's.

Credit should go where it's due, but I don't really worry too much about who invented what. The things that really matter are bringing good products to market, pricing them reasonably, and supporting them properly. How much of their IP is truly original is fairly moot, IMO.

If you want to see real innovation in computing hardware, I think you'll need to look beyond semiconductors. For example, I find quantum computers fascinating - like cutting-edge physics experiments, except you can use them to run algorithms!
 

rwinches

Distinguished
Wow! So many Intel apologists - typical Tom's. 'In either case, the mesh architecture has a "tremendous" scalability advantage over its predecessor' - can we wait to see real numbers?

AMD's server lineup does show they are real competition for the server farm market:
http://wccftech.com/amd-epyc-7000-series-server-cpu-family-specifications-price-performance-leak/
 

kinggremlin

Distinguished


Read it again. I didn't say other brands don't have fanboys; I said that fanboys of other brands don't claim the brand they're married to invented everything. Yet you see that frequently with AMD, on both the GPU side and the CPU side.
 

kinggremlin

Distinguished


I don't get your point. Nothing in your post disputes that AMD was beaten to market with commercially viable products for everything ClusT3R claimed AMD was first for.

So, I'll take that to mean you agree with my point. I appreciate the support.
 