Intel Core i9-7900X Review: Meet Skylake-X


PhoneyVirus

Distinguished
Sep 24, 2008
AMD will own about 35% of the market share come this time next year, and Intel has no idea what to do about it, or for that matter how to handle it. Intel thought they had to fight AMD on the graphics side, all while treating their loyal fans as if they were locked in. AMD is back like the Athlon 64 days, and I for one think they're here to stay. Oh, we have Cannon Lake... no wait, please don't leave, we also have Ice Lake!

Intel hasn't talked about their architecture in years, let alone the one after it. Intel is going to know what it feels like to own less than 50% market share when they hit 10nm and 5nm and AMD decides to throw in the graphics. Intel worried too much about ARM's architecture a few years ago, and about AMD's graphics, only to end up throwing out hot cores and completely ignoring what their fans want regarding TIM. You'll be able to purchase a 10-core processor for as little as $260 come 2019; take that with a grain of salt, mind, because AMD is Thread Ripping it all up.
 

AgentLozen

Distinguished
May 2, 2011


You say this so matter-of-factly. Do you have a source?

I'm happy that AMD has made a comeback and Intel has some decent competition again. Realistically though, Intel sells way more than AMD does and probably will continue to for the foreseeable future. Even if AMD does gain a bit more market share, 35% is an unrealistically BIG number.
 

bloodroses

Distinguished


Exactly. While I'm quite happy that AMD is competitive again vs Intel, there are certain facts. Right now, AMD wins on cost and core count only. Intel still has a more efficient core that can clock higher, and adding extra cores or cutting prices is much easier than improving a CPU design. Also, Intel has a much higher revenue and R&D budget than AMD does. AMD caught Intel with their pants down back in the Athlon days because Intel was counting on CPUs ramping up in frequency much better than they did (they predicted 10GHz+ by now). I don't see Intel making that mistake again.

Hopefully AMD can pull a rabbit out of their hat when the next generation of Ryzen comes out. Same with Vega vs the Nvidia competition. Chances are, though, that they'll just focus on their upcoming APUs, since that is one market where they have a definite advantage.
 


AMD's rabbits, if they have them, will come with 7nm Zen and with Navi, the release after Vega. If you look at what they did, Ryzen is mostly hurt by clocks and somewhat lower IPC. The clocks, which are affected a good deal by GlobalFoundries' 14nm process, will get a bump with 7nm, as that process is geared towards frequency. I expect the 7nm node from GlobalFoundries to be extremely good; it's the first node where they are using the IBM engineers and patents they acquired two years ago. Then with Navi they will be gluing multiple dies together with Infinity Fabric, like they did with Zen, so think CrossFire-ish with no CrossFire penalty/drivers, which is of course why they have built around HBM2. Zen and Vega are their first steps and will be competitive, but their big rabbits come the release after each. With all that said, Nvidia and Intel both know this and have bigger budgets, so we shall see how it turns out, but one thing is for sure: progress is about to speed up.



 
Large businesses don't make the decision to switch platforms lightly. Even if it's clearly a good idea, they're going to take their time and study it, try a sample first, and watch what everyone else is doing. (And then, like as not, wind up going with some company the CEO's nephew works for so he can score a commission...) Just look at OS adoption rates. Not the same thing, but still reflective of the conservative approach.
 


I assume that was in response to the 35% market share statement. I totally agree with you that it is way, way off base. If AMD ends up having a great, stable platform including the CPU, motherboards, etc., it will take 1-2 years for companies to certify it and start switching. AMD will not gain significant market share for at least another full year even if EPYC is the bee's knees, as it takes a while for these big "battleships" to turn.
 


Minor correction: the cheapest 16-core part is rumored to be $850, and all the 10-, 12-, and 14-core parts are less. Nothing official, but if we are going off the rumors, the $850 16-core was the only figure out there. That paints a very different picture.
 

Petefromthe90s

Prominent
Jul 14, 2017
Well then, for $999 you can now get a 10-core/20-thread CPU from Intel. About time; shame that AMD just shat on them with a 12-core/24-thread for $799 or a 16-core/32-thread for the same $999.

Intel keeps the IPC advantage, but they've negated it by giving up their usual frequency advantage. Until recently it was a given that you could take their CPU and push it to 4.7 GHz easily with an AIO or a beefy air cooler, but with this chip we are stuck near stock clock speeds. Granted, that's still better than the 4 GHz cap on current AMD chips (due to their 14nm process, look it up), but it's no longer enough.

same price
20 threads vs 32 threads
4.3 GHz vs 4.0 GHz
44 PCIe lanes vs 64 PCIe lanes
13.75 MB cache vs 32 MB cache

The choice seems obvious now. AMD coming out of nowhere with their new lineup was an unexpected but hugely pleasant surprise.
 


Agreed, and Intel's higher core counts, when they arrive, should not be able to hit very high frequencies due to the heat unless you have some exotic cooling. I am curious how this all unfolds, as I have a suspicion AMD's 16-core will have higher clocks than Intel's 16-core, which is where AMD's more efficient design starts to pull ahead.
 

mapesdhs

Distinguished
Another problem Intel has is that even if they could respond a bit later (say, next year) with something more sensible than X299, that would mean yet another socket, etc. By releasing X299 now, they've locked themselves into a rather silly platform that they'll have to stick with for a while, unless they're not bothered about really annoying people by releasing yet another socket/chipset in the near term.

By contrast, AMD is ticking some good boxes here, such as ECC support and features not being disabled on lesser products. To a large extent, the things AMD is doing right are the very things people have been criticising Intel for doing wrong in recent years.

Ian.



 

bloodroses

Distinguished
Personally, I don't think it's that bad that Intel changes sockets with nearly every CPU release, since I usually don't replace my CPU unless I plan on upgrading the entire system anyway. I usually try to get 3-5 years out of the CPU, then reuse it for odd projects after that. I do know some people, though, who continuously upgrade, which has definitely been to AMD's benefit given their socket choices.
 


Why do you assume a new socket? How many uArchs did LGA 2011 support? If it is an optimized uArch or a die shrink, chances are X299 will support it. X99 supported two uArchs, Haswell-E and Broadwell-E, which is pretty much the normal support range per Intel chipset.

The one benefit of not holding onto a socket/chipset for so long is that the platform around the CPU evolves too. AMD just now got PCIe 3.0 and native NVMe M.2 support; Intel has had them for quite a bit longer.

Also, who on the consumer desktop market needs ECC? The only people who benefit are people who need workstation-class systems, which tend to have Xeons anyway and support ECC if needed. If anything, the majority of consumers would be at a disadvantage using ECC, since it is slower than non-ECC RAM.

I guess we will see, in the (what is it?) four years AMD expects to get out of the current socket, whether they keep their platform up to date as well. To me it matters if you cannot get the latest ideas, the ones that will be moving things forward.
 

mapesdhs

Distinguished
jimmysmitty wrote:
> Why do you assume a new socket? ...

I don't see how else they can fit in the extra PCIe lanes, and given how many times they've changed sockets already it's a perfectly logical assumption.


> ... which is the pretty normal support range per Intel chipset.

It's bad that the dumb things they did starting with IB and the 5820K are considered normal.


> The one benefit to not holding down a socket/chipset for so long is the platform around the CPU evolves too.

But it didn't evolve; that's the problem. Read the reviews of X79: tech sites (including Toms) were complaining about the lack of native USB 3.0. Indeed, X79 is to a large extent just a mild evolution of X58. Since then, the number of cores went way up, but no more PCIe lanes, no significant max-RAM increase (why don't pro X79 boards get BIOS updates to support 128GB??), and now Intel's walloped the price right up if one wants at least 40 lanes. They're crazy.


> ... Intel has had it for quite a bit longer.

That ignores the total mess of whether one can use it or not depending on which CPU is fitted, which is even worse with X299.


> Also who on the consumer desktop market needs ECC? The only people who benefit are people who need workstation class systems ...

There is no such thing as pure consumer vs. workstation. There's a huge prosumer crossover: solo pros who cannot afford Xeon-type workstations. I've specialised in helping such users for years.

Point is, AMD is doing what Intel could have done but didn't, because Intel thought they were top dog and could just stroll around not innovating. Now they've panicked, and what they've produced is a mess. I was warning this could happen years ago. Intel chops up its product stack with a plethora of different models: this feature on here, that feature off there, buy higher up the stack if you want this or that. I come back to the PCIe-lane crippling Intel has been doing: why did we let them get away with that? It's completely nuts that the 4820K, an old 4-core chip that's 50 quid or so 2nd-hand, can do things on an X79 board that are not possible with X299 and an 8-core 7820X that costs ten times as much. :D


> If anything the majority of consumers would be at a disadvantage to use ECC since it is slower than non ECC RAM.

The "majority" of consumers don't constitute that part of the market which is most lucrative.


> To me it matters if you cannot get the latest ideas that will be the ones moving forward.

Intel did entirely the opposite though; they deliberately left out the latest tech even though they *knew* it was what people wanted, e.g. USB 3.0. And as I've said many times before, remember that the 3930K was a deliberately crippled 8-core (something tech sites mentioned at the time); this nonsense started a long time ago with the poor TIM in IB. I've spent the past few weeks rereading a lot of older reviews, and it all comes across as if the last real push forward Intel made was X58 and Nehalem, with SB a good refinement. Since then they've been treading water.

It's not just the CPUs they release, or the chipsets that crawl along with barely sensible updates; it's elsewhere in the PC space as well. Why can't I buy a simple PCIe add-in card using an Intel SATA3 controller? Because if one could, people with older mbds would buy them by the truckload: native driver support, proper speed and reliability, unlike the poor Marvell/ASMedia controllers. Ditto support for native PCIe booting on older mbds: perfectly doable, but mbd vendors won't do updates to add such support (at least one can use SSDs that have native boot ROMs). One could say this is a vendor issue, but they have the same vested interest in pushing sales of newer stuff even though the previous stuff could have been sensibly updated in the first place. They feed off each other; it's a chicken-and-egg mess that could have been prevented.

Intel could have given its designs greater longevity, releasing CPU products which provided sensible and useful performance boosts. Instead, we've gone through a plethora of chipsets and CPUs without that much of a real overall performance improvement, because of all the various issues involved. Overclocking a 3930K was pretty easy, and the cost wasn't so insane that the risk was too much; the price gap between it and the 3960X was a good temptation. Yet Intel never released an 8-core for X79 (even though the 3930K was an 8-core internally), and now the options for X299 are just stupid. Only 28 lanes for a chip that costs 600 UKP? Double the cores but fewer lanes than a 4820K?

What bugs me is that tech sites used to comment on all this a lot more than they do now. Oh sure, plenty have chimed in criticising the obviously daft aspects of X299, but my point is that if the tech media had been a lot more vocal about what was obviously going rotten in recent years, maybe Intel wouldn't be in such a potential pickle now, and maybe the PC market would be much healthier. For years people have been posting on forums that each new CPU/chipset release just didn't seem to offer much that made upgrading worthwhile, something made worse by the dribbling IPC increases as each new chip came along:

http://www.tomshardware.co.uk/core-i7-4770k-haswell-review,review-32699.html

People hailed the 6700K when it first came out, but when I saw the price my jaw hit the floor (over 350 UKP).

I'm tired of the jargon-laden talk of Intel's uarch product cycles, as if they're done with some kind of logical rationale held back only by what's technically possible, when all it's really been for at least five years is them sitting on their behinds because they simply didn't have to do anything better. Well, we all know what karma is, eh? :D

Don't get me wrong, btw; I certainly don't want Intel to fail as a company (though there will inevitably be people who'll say stuff like that on forums). I'd just like them to once again become a company that genuinely innovates and pushes the bounds of what is possible, rather than crawling along at a snail's pace just because the competition is so weak. They have the money, they have the fabs; they ought to be the Olympic champion leading the way. Competition is what made them surge forward with Nehalem, so I hope they can come back with something good eventually, but X299 and SL-X are definitely not it, not by miles.

Ian.

 

bit_user

Titan
Ambassador

No. Need for ECC does not correlate directly with high computational needs. I use ECC in my NAS. There are plenty of low-powered machines used by businesses, where ECC makes sense. Not only industrial control and robotics, but any case where an error could be persisted. So, client machines used to update databases. There are other scenarios where errors could be costly.

These are some of the reasons why Intel offers ECC in their E3-series Xeons, which share the same socket as their mainstream desktop parts. It's a reliability feature they charge extra money for, but it's not only a workstation feature.

Also, ECC RAM is not appreciably slower than non-ECC. At least, not in my experience. When I ran benchmarks on my systems, I observed comparable performance to equivalent systems with similar non-ECC memory.
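
Something along these lines is all it takes to sanity-check memory throughput: a timed streaming copy. This is a minimal sketch in Python (my own illustration, assuming numpy is installed; the array size is arbitrary). Absolute numbers depend on the whole platform, so only the comparison between otherwise-identical ECC and non-ECC configurations means anything:

import time
import numpy as np

N = 256 * 1024 * 1024 // 8          # 256 MiB worth of float64s
src = np.random.rand(N)
dst = np.empty_like(src)

best = float("inf")
for _ in range(5):                  # best-of-5 to dodge run-to-run noise
    t0 = time.perf_counter()
    np.copyto(dst, src)             # streaming read + write through RAM
    best = min(best, time.perf_counter() - t0)

gib = 2 * src.nbytes / 2**30        # bytes read plus bytes written
print("~%.1f GiB/s" % (gib / best))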
 

bit_user

Titan
Ambassador

It really does bug me that I'd have to go all the way to 10 cores just to get the full contingent of PCIe lanes and fully enabled AVX-512. I don't need 10 cores, I don't want to cool this beast, and I'm not going to spend $1k on a CPU. I'd just like a non-crippled 8-core CPU, thank you. I'd pay up to ~2x the price of their unlocked 4-core CPU. That's it.
 

mapesdhs

Distinguished


Financial transaction processing is another example, a task that's also often accelerated with CUDA. Someone I know who worked on such code told me they can use Titans for initial dev work, but production environments have to use Quadro/Tesla cards in order to have ECC. Same issue with the host system: not so much a factor for dev work, but ECC is essential for final deployment.

Basically mission critical stuff, where errors are costly as you say. Doesn't matter in games or movie renders.





I keep meaning to try some tests with my Dell T7500 (X58-based), but never seem to have the time. It has two X5570s and scores about 10.9 in CB 11.5, which isn't too bad, but I've always wondered how it would behave running Viewperf with more powerful GPUs. Threading-wise, two 6-core CPUs in that system would score about 16.5, which is roughly between a 6900K and a 1700X.





That's why the 4820K has ended up being such a peculiar CPU from a historical perspective. Only 4 cores, yet it has PCIe 3.0 and a full 40 lanes. It makes SL-X look ridiculous. A 4820K costs diddly squat these days (cheapest I've seen is 41 UKP at normal auction), yet I could put it in my P9X79-E WS, slap on three Titans and a 960 Pro, and have a very nice CUDA box with sweet I/O (I tested with an R4E: 3GB/sec with a 960 Pro 512GB). You can't do this with X299 without spending serious money. And of course the 4820K overclocks like crazy, offering very nice equivalent IPC (a larger TDP-per-core budget to play with). It kicks KL-X's butt for I/O potential. Likewise, the i7 3820 was a popular CPU; not even an unlocked model (though strapping allows for excellent overclocks), yet it too has 40 PCIe lanes.

Ian.

 

aldaia

Distinguished
Oct 22, 2010


ECC memory just has an extra 8-bit-wide chip storing 7 Hamming bits + 1 parity bit, and this chip is read/written in parallel with the eight data chips. (For a 64-bit word, 2^7 = 128 >= 64 + 7 + 1 = 72, so 7 Hamming bits plus one overall parity bit give single-error correction with double-error detection.) When writing, the parity and Hamming bits have to be computed, but that just requires XOR gates whose delay can be measured in picoseconds. When reading, data must be corrected if a 1-bit error is detected, which may require a couple more levels of gates, but that is still on the order of picoseconds. The delay introduced is much smaller than the typical variability across several runs of a benchmark.
Other people have also observed no appreciable performance degradation:
https://www.pugetsystems.com/labs/articles/ECC-and-REG-ECC-Memory-Performance-560/
Overall, ECC RAM was just .25% slower than standard RAM.
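
If you want to play with the mechanics, here is a toy sketch of the same SEC-DED scheme in Python (my own illustration; real DIMMs do this in hardware on 64-bit words, while this demo uses a single byte). Check bits sit at power-of-two positions, an extra overall-parity bit adds double-error detection, and correcting a single-bit error just means flipping the bit whose position equals the syndrome:

from functools import reduce

def hamming_encode(data_bits):
    # Check bits go at power-of-two positions (1-indexed), data bits
    # everywhere else; index 0 holds an overall parity bit for DED.
    m = len(data_bits)
    r = 0
    while (1 << r) < m + r + 1:     # smallest r with 2**r >= m + r + 1
        r += 1
    n = m + r
    code = [0] * (n + 1)
    bits = iter(data_bits)
    for pos in range(1, n + 1):
        if pos & (pos - 1):         # not a power of two -> data position
            code[pos] = next(bits)
    for i in range(r):              # check bit 2**i covers every position
        p = 1 << i                  # whose index has bit i set
        code[p] = reduce(lambda a, b: a ^ b,
                         (code[k] for k in range(1, n + 1) if k & p), 0)
    code[0] = reduce(lambda a, b: a ^ b, code)   # overall parity
    return code

def hamming_decode(code):
    # The syndrome is the XOR of the positions holding a set bit;
    # for a valid codeword it is zero by construction.
    n = len(code) - 1
    syndrome = reduce(lambda a, b: a ^ b,
                      (pos for pos in range(1, n + 1) if code[pos]), 0)
    overall = reduce(lambda a, b: a ^ b, code)   # 0 if parity holds
    if syndrome and overall:        # single-bit error: correctable
        code[syndrome] ^= 1
        status = "corrected bit %d" % syndrome
    elif syndrome:                  # even parity but nonzero syndrome
        status = "double-bit error detected (uncorrectable)"
    elif overall:                   # only the parity bit itself flipped
        code[0] ^= 1
        status = "parity bit corrected"
    else:
        status = "no error"
    data = [code[pos] for pos in range(1, n + 1) if pos & (pos - 1)]
    return data, status

data = [1, 0, 1, 1, 0, 0, 1, 0]     # one "stored" byte
word = hamming_encode(data)
word[5] ^= 1                         # simulate a flipped memory cell
print(hamming_decode(word))          # -> original byte, 'corrected bit 5'

Flipping any one bit of the codeword gets corrected; flipping two gets flagged as uncorrectable, which is exactly the SEC-DED behaviour server memory controllers implement.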
 


Why would they add PCIe lanes? While 64 is a ton to compete with, I personally see no purpose for them. Most of the configurations people come up with are in the same realm as an "I have way too much money" build. The majority of people will not use 64 PCIe lanes for just GPUs.

The most likely event is that next year will bring a die shrink (10nm) using the same socket, and then a new socket in 2019. That might not even happen, though, since LGA 2011 lasted six years (it launched in 2011).

Every socket since Intel started the "Tick-Tock" strategy has supported two generations per socket: a new uArch, then a die shrink, normally along with two chipsets. The only one that "skipped" this was the LGA 1156 socket and the 5 series chipset, as that was the first time Intel split their mainstream and high-end lines. From the 6 series and LGA 1155 on, each socket has supported two chipsets and two CPU generations:

LGA 775 - Chipsets: 915 to the 4 series; CPUs: Pentium 4/D and Conroe through Penryn
LGA 1155 - Chipsets: 6 and 7 series; CPUs: Sandy Bridge and Ivy Bridge
LGA 1150 - Chipsets: 8 and 9 series; CPUs: Haswell and Broadwell (bonus: Devil's Canyon)
LGA 1151 - Chipsets: 100 and 200 series; CPUs: Skylake and Kaby Lake
LGA 2011 - Chipsets: X79 and X99; CPUs: Sandy Bridge-E, Ivy Bridge-E, Haswell-E and Broadwell-E

So it has been this way since at least 2005; I'm not sure how that support range is bad.

I am not sure how X79 not having USB 3.0 is an issue, considering the first Intel chipset to officially support USB 3.0 was the mainstream 7 series, launched in April 2012. Hell, AMD didn't even have USB 3.0 in the 990 series chipsets and had to rely on a third-party USB 3.0 controller, just like an X79 or 6 series chipset user.

And you are wrong; there was quite a difference between X58 and X79. With X58 the chipset held all the PCIe lanes, not the CPU, and the I/O hub was separate from it. X79 was a PCH: it took the NB functions that were left over and not absorbed by the CPU (such as the memory controller and PCIe lanes, which now resided on the CPU) and combined them with the ICH.

And to top it off, the PCIe lane counts were not the same from X58 to X79. The X58 chipset supported 36 PCIe lanes in total; X79 had 8, but a CPU could add up to 40, for 48 total PCIe lanes, PCH and CPU combined.

As for why X79 doesn't support 128GB RAM? Plenty of reasons. The main one is that this platform was not designed with ECC in mind (even the Asus X79 WS board is non-ECC), and the largest non-ECC DDR3 module you can buy is 8GB, thus the max is 64GB (8GB x 8 slots). Besides, it is not up to the chipset; remember, the memory controller has been on the CPU since Nehalem.

And how does the CPU affect anything but the PCIe lanes? You assume that features like USB 3.0 and NVMe M.2 rely only on the CPU. Remember, the chipset has its own PCIe lanes, which are normally spent on these features before the CPU lanes get touched.

Intel has hit multiple walls in many ways. Their original plan was for us to be on 7nm by now; that changed, especially with the issues 10nm is showing. But could you imagine if Intel did not have those technical limitations? If they were going full steam ahead, do you think a company like AMD would have survived? If Intel had been able to do what you want, where would AMD be? If AMD hadn't had the time they've had to develop and bring Zen to a successful launch, they would have died. AMD has not been competitive; sure, there were spurts of OK gains, but nothing like this or K8. They have been on the sidelines, with Intel playing by themselves.

I personally am fine that they didn't plow full steam ahead. If they had, I am sure they would have been broken up again, adding another bone to the mix.

BTW, Nehalem wasn't the answer to competition; Conroe was. AMD scrambled then too, to compete with the C2Q. Remember QuadFX, the dual-socket desktop setup pulled from the servers because AMD did not have a quad-core desktop model? To make it better, much like now, Intel had better pricing at the time to go along with the performance. Nehalem was just Intel showing that they could do what AMD did, only better (IMC, fast bus link).

And do you not consider Thunderbolt innovation? A true all-in-one interface? NVMe? 3D tri-gate transistors? Intel innovates in many areas; it's just that not all of them are directly involved in the CPU. Remember, Intel has a hand in EVERYTHING. Unlike AMD, Intel helps define many standards that come into use. That's not AMD's fault; they don't have the capital to invest in R&D like Intel does.



So the majority of consumers have a need for ECC? Because as far as I know, the majority of consumers (not businesses) have no need for ECC RAM in their desktops. It is slower than normal RAM due to the ECC logic, and while the difference is not a ton, nor enough to notice overall, it is there, and the cost is higher thanks to the added components, since ECC is normally geared towards servers, where stability, reliability and data integrity are vastly more important.

And yes, higher computational workloads where the data has to be correct do correlate with ECC RAM. Most decent workstations will have ECC as an option due to the need for it in some design and engineering jobs.

I assume you have a self-built NAS, but those are purpose-built machines. I mean, if a consumer goes out today and buys a system for personal use, there is no doubt in my mind that 99.9% of those people wouldn't ask about ECC vs non-ECC RAM. Hell, that percentage barely understands the difference between RAM and an HDD/SSD.


 

mapesdhs

Distinguished
Wow, so many inaccuracies in your post there. I give up. :D I'd rather spend my time more fruitfully elsewhere. As one simple example though, X79 does support 128GB RAM; e.g., that's why ASUS released BIOS updates in some cases (but annoyingly not all); the Deluxe update was released in 2014.

Ian.

 


What inaccuracies? Besides the one board that supports 128GB, that is; I did miss that one and stand corrected. I also found 16GB DDR3 DIMMs. However, 16GB DDR3 DIMMs are scarce (only one brand on Newegg) and very costly ($179) compared to 16GB DDR4 DIMMs. If you can find more for less, let me know. Everything I find is either ECC or doesn't exist.

Still, the X79 chipset was consumer-minded, and it was nothing like the X58 as you claim. X58 still had a separate ICH, the ICH10R, while X79 combined the NB and ICH. They were very different chipsets.

Still, what is wrong there? The mainstream 7 series chipset was Intel's first consumer chipset with USB 3.0. The 9 series chipset from AMD did not support native USB 3.0; the A series did, however.

Intel's socket strategy has been similar for over 10 years, which is why your assumption that Intel would toss this socket so quickly seems off to me.

I can see one reason why Asus did not update all of them: X99 and DDR4 launched the same year as that update. Considering that DDR4 was set to double density and make 16GB DIMMs easier and more affordable, I can see why X79 was not fast-tracked to 128GB support. Again, it has to do with support beyond the board; I can't find any major RAM manufacturer who provided such DIMMs outside of the server/ECC market.
 

bit_user

Titan
Ambassador

He's annoyed that to get > 28 lanes, you have to spend $1k. I like that my current workstation has 40 lanes, but I'm only using 24 or 25 (not counting DMI). I also like that my old AMD Phenom II had 38 lanes, but I think I never used more than 24.

I'd spend a bit more for 44 lanes vs. 28, but not nearly as much as Intel is charging for the privilege.



Nope. LGA 2011 and LGA 2011-3 are distinct sockets, with no compatibility. Both lasted exactly 2 generations, just like their desktop sockets since Tick-Tock began.


I never said that. What I said is that it's needed in plenty of non-workstation/server class machines.


Regarding the performance issue, see @aldaia's post. Nobody here is saying that you should be forced to use it, or that motherboards should be forced to support it; just that it makes sense for platforms other than workstations and servers to support it.

Intel and AMD both implemented it in their mainstream desktop processors. The only difference is that Intel disabled it in their non-Xeon CPUs, while AMD left it enabled. That's one of the reasons I previously bought a Phenom II (see also: 38 lanes of PCIe). And when I replace that system, the same reasons will probably push me in the direction of a Zen-based CPU or APU.


It was a machine I built for the purpose (back when i3s still had ECC support), but ECC is commonly found in business-oriented NAS devices. I would never recommend that a business use any NAS or file server that didn't have ECC. Even home users, if they care about their data, should use a NAS with ECC. As you say, though, most don't understand the issue, or they probably would.
 