Intel Fires Back, Announces X-Series 18-core Core i9 Skylake-X, Kaby Lake-X i7, i5, X299 Basin Falls


kinney

Distinguished
Sep 24, 2001
2,262
17
19,785


The i5-7640X, 4C/4T, is HEDT now? And at a 112W TDP. The i7-7820X is 8C/16T at a 140W TDP; Ryzen does that at 65-95W. Yes, it's impressive that you can fit Ryzen in SFF while Intel can barely keep its new CPUs cool without mandating watercooling. These are a complete joke, and it's just embarrassing for Intel.
 

bit_user

Polypheme
Ambassador

Don't count on it.

I'm sure MS would appreciate you buying a copy of Windows Server. I don't know if that's any better for you than Win10.
 

bit_user

Polypheme
Ambassador

YES. I don't know where you got the idea that a platform with such a big socket, quad-channel memory, up to 44 PCIe lanes, 10x SATA 3, and 3x NVMe should be appropriate for Small Form Factor, but you didn't have to down-vote all my posts just for pointing that out.


Not at the same performance level, and not with all the above lanes and ports. You're comparing apples and oranges here, completely missing the point of this platform.


The bad joke here is the i5-7640X and i7-7740X. The rest is just a continuation of what Intel's done with their HEDT platform since Sandy Bridge.

If being corrected when you're wrong makes you angry, just unsubscribe from the thread and continue to believe falsely. Down-voting all of someone's posts for trying to educate you just makes you look childish.
 


Intel is also using TIM instead of solder under the IHS on these HEDT CPUs, so expect to delid if you're going to run anything above stock clocks. I just don't understand Intel right now: they have a great CPU line, yet they cripple the PCIe lanes on the lower-core-count products and make all of them run hot by using TIM under the IHS. Seriously, fixing both would be almost free across the HEDT lineup. Intel, wtf are you thinking here? I just don't get it.

 

Philphoto

Distinguished
Feb 24, 2009
24
0
18,510
I don't think Intel will be able to sell those chips that well, since they want your $$$$. The one chip I bought was an i7-4790K, and it's not much faster. I'm thinking about getting an AMD chip, the 1800X, for less $$. I still have an old AMD 3500 running in a PC; it's more than 13 years old.
 

InvalidError

Titan
Moderator

Intel does not care that its CPUs run hot by overclockers' standards. All it cares about is that the CPU runs reliably within spec. Although Intel unlocks its top-spec CPUs, its official stance is still that overclocking is not endorsed; people who want to push their luck are on their own.
 

bit_user

Polypheme
Ambassador

Really? Then why do they provide a tool called Intel (R) Extreme Tuning Utility? And what's this about?
Intel offers its performance tuning protection plan for the new series, which is essentially an additional warranty that covers damage due to overclocking.
Sounds to me like Intel is changing its tune (the extra $$$ helps, no doubt).
 

InvalidError

Titan
Moderator

Yes, Intel makes the tools available and offers a separate warranty. That doesn't change the fact that overclocking isn't covered under the standard warranty. Even the tuning protection plan officially doesn't encourage it:

https://click.intel.com/tuningplan/faq
Does this mean that Intel is supporting or encouraging overclocking?
No. While we will, under the Plan, replace an eligible processor that fails while running outside of Intel’s specifications, we will not provide any assistance with configuration, data recovery, failure of associated parts, or any other activities or issues associated with the processor or system resulting from overclocking or otherwise running outside of Intel’s published specifications.

Unless it has changed since its introduction, the protection is only valid once per original CPU purchase. If you blow your replacement, you'll have to buy your next CPU retail. I'm guessing the plan does not cover physical damage from delidding, extreme mounting force and other unusual causes.
 

mapesdhs

Distinguished
Fact is, Intel knows full well that the core appeal of this class of CPU is to overclockers and similar enthusiasts, and they've been happy to use PR in the past to appeal to such people, yet ever since IB they've been doing things that make this aspect of PC tech a total pain (e.g. the delidding gains seen on numerous CPUs). I don't believe Intel doesn't care that their CPUs run hot by OC standards; that would mean they're blind to a key target market for this type of product. It's just that until now they haven't needed to care, and even so, these recent changes are blatantly things they could have done a long time ago, as Tom's itself pointed out when SB-E was first released.

It's nuts that for IB-E everyone thought it perfectly normal to have a 4-core chip with effectively 10 PCIe lanes per core, yet now Intel is touting a 10-core chip with barely more than 4 lanes per core. The need for bandwidth has gone way up, but Intel didn't bother improving the design because they didn't have to.
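(To put rough numbers on that, with example chips I'm assuming rather than ones named above: the quad-core IB-E i7-4820K exposed 40 PCIe 3.0 lanes, i.e. 40 / 4 = 10 lanes per core, whereas the 10-core SKL-X i9-7900X tops out at 44 lanes, i.e. 44 / 10 = 4.4 lanes per core.)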

I expect the 12+ core SL-X chips will have more than 44 lanes, but at the prices they're asking it's going to be well into AMD territory. Who's going to OC SL-X to lofty heights when the financial cost of a failure is so high? And at those price levels, as Tom's said before, it begins to make more sense to think Xeon and dual-socket boards instead, or indeed Naples.

So now Intel is playing catch-up while pretending it doesn't have to. But I think the readership of tech sites is going to tire pretty quickly of commentators who make excuses for what Intel has been doing for so long, when the sites' own articles were criticising Intel for the same issues 5 years ago.

Effectively, Intel's new options don't really change anything; the ones that make any kind of sense as an upgrade over existing tech just cost too much and don't provide what people need these days. It seems to be AMD that's listened and strived to produce what actually makes sense.

Ian.

 

InvalidError

Titan
Moderator

I can think of a very simple reason for that: since there is no easy way to tell that a CPU has been OC'd, making significant OCs nearly impossible to achieve without physically modifying the CPU (delidding) makes it much easier to deny warranty claims.

With so many chips clocking 200-300MHz higher with next to no effort, I'm more surprised that Intel didn't simply launch higher-speed-bin SKUs and cut most of the OC margin off that way; then we'd have a Ryzen-like situation where next to no chip OCs much beyond the top SKU's stock boost. Maybe Intel figured that it is already making filthy rich margins on its chips and couldn't be bothered to make them any more expensive than they already are.
 

mapesdhs

Distinguished


Just sorted my bro's 3930K @ 4.7, i.e. 1200MHz higher with next to no effort. :D In CB R15 it scores 1211 (ASUS P9X79 Deluxe, 16GB @ 2133 CL10, GTX 980), the same result as a stock 6850K and 8% better than a 7700K @ 5.1. I could run it higher, probably 5+, but at 4.7 it's sweet. 850 Pro for a C-drive; today it gets a 960 EVO for game data.





Sounds plausible!

Ian.

PS. Anyone got any X79 mbds spare or for sale? Looking for at least one for a charitable cause.

 

InvalidError

Titan
Moderator

I had Intel's newer chips, which already have stock boosts over 4GHz, in mind when I wrote 200-300MHz; not that it changes anything for HEDT CPUs, which have no clock bins and are differentiated mainly by core count instead. SKL-X has stock boosts up to 4.5GHz, so it seems Intel is aiming closer to the limit this time around. I bet ThreadRipper gets a good chunk of the credit for that, as a low-clocked i9 would look bad next to a cheaper ThreadRipper with more cores unless it had a clock frequency advantage to at least score wins in less heavily threaded workloads.
 


There is absolutely no way Intel changed everything up. We were going to get this platform no matter what. Of course, without inside knowledge all we can go on is what we can find, and I can find information on X299, Skylake-X and Kaby Lake-X as early as July 2016:

https://hothardware.com/news/intel-skylake-x-kaby-lake-x-lga-2066-processors-2h-2017

That is almost a year before Zen even launched. Again, the amount of testing they have to put in to make sure everything works, not to mention the driver work they have to do, is not something easily accomplished in a few months. Even for Intel, which has one of the largest software dev teams in the world, it takes a long time. Considering that this is coming out in 2H 2017, there would not have been enough time to test and debug everything. They planned these cores out years ago. Because of Zen? Maybe, but I'd bet it's more that they planned it out because higher-end workstations benefit from more threads.

Just because AMD hired back Jim Keller doesn't mean anything either. To be fair, the majority of the K7/K8 tech came from DEC.

I personally do not think Intel was being overly conservative. I think they had legitimate issues in the new process tech that were holding them back more than anything. 10nm is hard as hell; otherwise we would have had it already. One thing Intel did not let up on from Phenom through Piledriver was die shrinks, as that is a major advantage in many ways. And yes, Phenom, mainly Phenom II, was more competitive than BD/PD, but it still lost heavily to 45nm C2Q and Nehalem/SB.

I honestly do not think Intel would have done much differently whether Zen turned out better or worse.
 

InvalidError

Titan
Moderator

Since Intel uses the same socket from 1S HEDT to 8S servers, the option of taking a Xeon E5 and putting it into a "mainstream" board has always been there, Intel simply had no reason to do so until now. No need for much additional R&D as the chip is already there in the form of previously server-only SKUs.
 

bit_user

Polypheme
Ambassador

I think the reason this didn't happen is quite clear: Turbo Boost 3.0. Without it, the speed of lightly-threaded workloads on high-core-count CPUs might've been too low to satisfy gamers & would certainly look much worse than their corresponding i7-xxxK.

There's also the fact that Broadwell-EP becomes NUMA above 10 cores. So care is needed, or you're going to get poor scaling when you go from <= 10 to >= 12 cores.

http://www.anandtech.com/show/10158/the-intel-xeon-e5-v4-review/2

Up through Haswell, the NUMA threshold was > 8 cores.

http://www.anandtech.com/show/8423/intel-xeon-e5-version-3-up-to-18-haswell-ep-cores-/4


Well, there's also price and market size. These are big chips, and their 14 nm process was newer and probably had lower yields than it does now, meaning a higher price floor. Once they cross the NUMA threshold, the die size jumps, giving yet another reason to draw the line below that point.

Now, more software is better threaded. Windows 10 has (potentially) better, NUMA-aware scheduling, and they probably also want to try and foreclose a market to AMD's ThreadRipper. All of which are probably reasonable answers to the question: "Why now?"
 

InvalidError

Titan
Moderator

Well, ThreadRipper prices are out and the 16C32T model will be $850. Half of Intel's 16C32T price. Whatever performance advantage Intel may have is unlikely to be worth a 100% higher price in many applications.
 

H4rdware

Reputable
Sep 6, 2014
9
0
4,510
I am glad to see AMD come back from the brink of obsolescence and bring some products to market that you can call competitive without being sarcastic, putting it in quotation marks, or comparing them to two-generation-old Intel products.

Without question AMD's latest offerings are formidable. Regardless of where your preference, or loyalties, lie, we all win and will enjoy a period of substantial gains and slashed prices as AMD strives to increase their market share and Intel strives to retain theirs.

While I don't see die-hard Intel fans abandoning their preferred brand en masse and becoming AMD fans overnight, I do see renewed respect for, and interest in, AMD's new CPU lines, and I can't think of a single example of how this is a bad thing.

I came to the X99 party a little too late to be willing to go "all in" on a HEDT platform upgrade, knowing that within a year we would be looking at the next-generation HEDT platform. Now that X299 is here, I couldn't be happier with my decision to wait.

Now it's just a matter of deciding on the particulars.
 

InvalidError

Titan
Moderator

You may want to wait some more to see whether the rumors that the medium-core-count SKUs may require different motherboards are true. All the motherboard manufacturers said they had never officially heard of the 12-18 core models before Intel's Computex announcement and never had such chips to test their boards with. With the higher-end SKUs possibly slipping into 2018, it could be a while before motherboard manufacturers get to properly test their boards against the whole SKL-X range.
 

bit_user

Polypheme
Ambassador

I'm still not seeing a good use case for > 10 cores for non-professional users. In Haswell/Broadwell, above 10 cores is where the CPU becomes NUMA. Even highly-threaded workloads can scale poorly, if they're not written to be NUMA-aware.

Anyway, I have difficulty seeing what would actually fail that's not fixable via BIOS updates, unless they're just concerned about TDP-related issues (especially when overclocking these beasts).
 

InvalidError

Titan
Moderator

Programmers already need to apply NUMA-style programming to get around Ryzen's dual CCX arrangement if they want to minimize performance penalties from crossing the fabric more often than absolutely necessary.
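As a rough illustration of that kind of placement (a minimal Linux/libnuma sketch added for clarity, not anything from this thread; the node index and buffer size are arbitrary, and it only does anything useful on parts the OS actually exposes as multiple NUMA nodes, which a single-die Ryzen is not):

/* Minimal sketch: keep a worker thread and the buffer it touches on the
 * same NUMA node, using libnuma on Linux (build with: gcc numa_demo.c -lnuma).
 * The node index and buffer size below are purely illustrative. */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "libnuma: NUMA API not available on this kernel\n");
        return EXIT_FAILURE;
    }

    int node = 0;                    /* illustrative: place everything on node 0 */
    size_t len = 64UL * 1024 * 1024; /* 64 MiB stand-in working set */

    /* Restrict the calling thread to CPUs that belong to that node... */
    if (numa_run_on_node(node) != 0) {
        perror("numa_run_on_node");
        return EXIT_FAILURE;
    }

    /* ...and back its working set with memory local to the same node, so the
     * hot path never pays the penalty of crossing the fabric to remote memory. */
    unsigned char *buf = numa_alloc_onnode(len, node);
    if (buf == NULL) {
        fprintf(stderr, "numa_alloc_onnode failed\n");
        return EXIT_FAILURE;
    }

    memset(buf, 1, len);             /* stand-in for the real workload */

    numa_free(buf, len);
    return EXIT_SUCCESS;
}

On a box the OS reports as a single node (check with numactl --hardware), there's nothing for this to place; per-CCX placement on Ryzen has to fall back to plain CPU affinity, since both CCXes share the same memory controller.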

As for what may not be fixable with Intel's MCC chips: since all the specs are TBD and motherboard manufacturers didn't hear about those chips earlier, they had no VRM spec for them when they put their first wave of X299 boards together. Even if the existing VRMs can deliver the power, they could fail on other parameters such as transient response and power sequencing.
 

bit_user

Polypheme
Ambassador

Only in regard to L3 thrashing (which is really just an extension of the concerns you'd have for the other levels of cache in any other modern multi-core CPU), but not much else. I can only speak to Sandy Bridge through Broadwell, since I don't think Intel has disclosed this info about Skylake yet, but they actually distribute the memory controllers between the different rings. In Ryzen, the memory controller isn't tied to a CCX.

Thus far, I think NUMA optimizations haven't even been an afterthought for most developers. Maybe that'll start to change, as 12+ core Intel CPUs and Threadripper start to appear in high-end enthusiasts' setups.


Is that true? I read the article to mean that Intel put out the platform spec and that hasn't changed. So if the mobos are built to spec, they should just work (though I certainly appreciate the need to actually test things). Maybe that's not true, but then it seems like Intel really dropped the ball on messaging.


All of which I was lumping under the power spec.
 

InvalidError

Titan
Moderator

If you read Computex coverage, many people interviewed motherboard manufacturers about the 12-18 core CPUs and all of them said these weren't part of the spec when they designed their boards.
 


I am surprised AMD is going that low on the price. Maybe they don't feel they have gained enough to price it higher? That, or the performance is pretty low.

I mean, AMD is only known for cheaper products because most of the time they don't have the overall better platform. Their CPUs can be either competitive or just bad (Phenom, Phenom II), yet normally when they have a competitive product they price it accordingly. The Fury X came out priced pretty high compared to where it should have been.

I guess we will see what the reason is. I don't think AMD would shy away from bigger margins; they didn't during the K8 days or when they ruled the server realm.
 

InvalidError

Titan
Moderator

AMD has lost its premium shine, and Ryzen's launch was far from smooth. AMD doesn't have the reputation needed to unduly inflate its prices, and with ThreadRipper CPUs allegedly costing AMD only $120 to manufacture, AMD's gross margins will still be greater than they've been in a very long time, even at $850.

As far as I am concerned, ThreadRipper at $850 is merely AMD partially correcting 10 years of Intel stagnation and monopolistic price inflation. Had Intel followed Moore's law at half of its historic rate, doubling transistor count every three years instead of 18 months, 16C32T would have become mainstream two or three years ago instead of "coming soon" on the HEDT market in mid-2017.
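(Worked out with an assumed baseline, since the post doesn't give one: take a mainstream quad-core around 2009. Doubling every three years gives 4 -> 8 cores by 2012 and 16 cores by 2015, i.e. roughly "two or three years ago" as seen from mid-2017; at the historic 18-month cadence, 16 mainstream cores would have arrived around 2012.)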
 