But the first 6C/12T parts (i7-8700K?) for the mainstream platform are not going to be using 10nm, right? I thought they were going to be on 14nm++++++++ or whatever its name is, but still 14nm by the end of this year, and the 10nm parts would be 8C/16T for mainstream with Ice Lake?
I got confused yet again.
Cheers!
My understanding is:
8th gen chips are Coffee Lake 14nm++(+?) for desktop and should be out Oct. 5th or 10th.
Laptop chips are Cannon Lake 10nm and out now.
Desktop Ice Lake 10nm chips will probably be 3rd quarter 2018 if they can get it to work, but I've read 1st quarter as well.
I'm waiting for i5-8600K reviews and will make a buying decision between that and Ryzen next month based on comparos.
Thanks, that helps.
So it's worth waiting to see how the 14nm desktop parts turn out.
Even if I trust the numbers he gives: we get double the number of cores for $10 extra (8% more) on the new i3 models and 50% moar cores (plus higher clocks) for $40 extra (10%) on the new i7 models.
But I have some difficulty believing those numbers. $400 for the i7-8700K? I can purchase it for less than that,
which is about the same pricing as the current i7-7700K.
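For perspective, here's the value math using only the percentages quoted above (a quick sketch of my own arithmetic; none of these prices are confirmed):

```python
# Using only the percentages quoted above: doubling the cores for 8% more
# money (i3) or 1.5x the cores for 10% more (i7) slashes the price per core.
for name, core_mult, price_mult in [("i3", 2.0, 1.08), ("i7", 1.5, 1.10)]:
    per_core = price_mult / core_mult
    print(f"{name}: price per core falls to {per_core:.0%} of the old model's")
# i3: 54%; i7: 73% -- a sizeable value jump either way, if the rumors hold
```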
Like with anything, until it's released everything is just a rumor. The wording on the last slide says specifications and prices have not been confirmed.
The link I have given is not a rumor. It is the price of the i7-8700k in that store.
When you have a CPU that is aimed at tasks that are not primarily gaming or "single threaded", it is *obvious* you will test it in such disciplines as an addition. Then, you will not care how the CPU actually achieves the performance, only that it achieves a particular threshold at the stipulated price range it will sell at. In this particular case, TR and the 7900X were both put through a shizzle ton of tests across 50+ disciplines (if not more), and in all of them they were trading blows. There is no clear winner, and the answer in every single review (as far as I have read) is that TR justifies its price with no issues facing the 7900X.
If you want to discredit AMD for achieving a milestone just because "it doesn't game as well as the Intel", that is on you, not on the data gathered or the tasks at hand. If you can think of a test that is actually needed and should be added, why not make the suggestion for a next round of tests here at Toms or another site that is good at reviewing stuff?
Also, if you have doubts about methodology, that is another story. Not all sites are trustworthy, and some draw conclusions in a weird manner, but as long as they expose their testing methodology and you can reproduce their figures, you will know how those numbers come to be and have more information to take in and make a decision.
Hell, I still haven't seen a *SINGLE* site that has run OBS (or any streaming software) while testing games to see how the CPUs behave. People have whined about 4K SO FRIGGIN' MUCH, but I haven't seen a single tear shed about streaming. So, I'll have to shed those manly tears from now on.
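If any reviewer wants to pick that up, here's a minimal sketch of the logging half of such a test (assuming the third-party psutil package; you launch the game and OBS yourself, and this just records per-core load to a CSV):

```python
# Log per-core CPU utilization while a game and OBS run side by side.
# Assumes the third-party `psutil` package (pip install psutil).
import psutil

SAMPLES = 60        # one minute...
INTERVAL_S = 1.0    # ...of 1 Hz samples

with open("cpu_load.csv", "w") as f:
    f.write("t," + ",".join(f"core{i}" for i in range(psutil.cpu_count())) + "\n")
    for t in range(SAMPLES):
        # cpu_percent blocks for INTERVAL_S and returns one value per logical core
        per_core = psutil.cpu_percent(interval=INTERVAL_S, percpu=True)
        f.write(f"{t}," + ",".join(f"{p:.0f}" for p in per_core) + "\n")
```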
My complaint is not about reviews whose tone shows an evident bias towards a given brand, where when Intel wins by a huge margin such as 45%, this is dismissed with "Zen gives enough performance" or a similar claim, but when AMD wins by a margin of 30%, the tone changes to something like "the 1950X completely smashes the i9-7900X".
My main complaint is when reviews use dirty tricks to favor one brand over another. Since the Zen launch, and including AMD's pre-release demos, we have seen the following dirty tricks:
■ quad-channel disabled on Intel chips
■ turbo disabled on Intel chips
■ testing with workloads that have a bug favoring Zen
■ most of tests being workloads where Zen shines such as rendering, rendering, rendering, and rendering
■ testing at 4K to generate huge GPU-bottlenecks and favor Zen on games
■ testing memory-bound workloads with memory subsystem crippled on Intel side
■ testing with compiler/flag combinations that reduce Intel chips' performance by 40-60%
■ testing a specific model of Intel CPU on motherboards with known compatibility issues with that specific model
■ testing Intel engineering samples instead of retail chips, and labeling the chip on graphs as if it were retail
■ testing overclocked AMD chips vs stock Intel chips, and labeling chips on the graphs as if both were stock
■ testing custom workloads that favor Zen instead of existing official workloads
■ testing Intel platforms with a beta BIOS, and not retesting later with the final BIOS
■ And so on
1. Less than 1% performance difference in the relevant benchmarks from quad-channel memory, per testing at 3 different sources (see the bandwidth sketch after this list). It looks good in AIDA, but beyond that, it is mostly superfluous outside professional-level workloads that need the raw capacity.
2. It was also disabled on Ryzen in those tests; are we going to make it fair, or edge one side over the other? That was per-clock testing, and it was relevant because the clocks were locked (which is what any reviewer would do to test per clock).
3. Source? There were no workloads favoring Zen in any testing, and the sleep bug was discovered by a random guy on the internet. Clearly AMD never put their PCs to sleep.
4. What about the gameplay with OBS running? What about the photo editing and video editing workloads where Intel normally shines?
5. Both sides were tested at 4K (which is relevant); you can complain all you want about it, but I want to know how a top-end system performs running top-end settings. I could not care less how a $4k PC build runs at 320x240 or some other insanely low resolution.
6. The memory system was never crippled on Intel processors in any tests.
7. There were no compiler optimizations beyond accommodating for the removal of libquantum, which heavily favors Intel.
8. If Intel has known compatibility issues in their product stack, that is not an issue of AMD's.
9. They tested retail chips... AMD does not get its hands on Intel engineering samples (I cannot even make a leap in logic to a world where Intel would legitimately send AMD engineering samples... they went to court over that very idea).
10. This is a fallacy; AMD never tested an overclocked chip against a stock Intel chip.
11. Uhh...SPEC is official.
12. They tested what was available... How many reviewers have gone back to retest Ryzen since the AGESA update? None? No sympathy.
13. Considering everything listed to this point in this rant was bad information, or flat out wrong, I am curious to hear what "And so on" could possibly be.
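On the quad-channel question from point 1, the theoretical peaks are easy to work out (standard DDR4 arithmetic, my own illustration rather than numbers from any review): raw bandwidth doubles on paper, but few desktop workloads saturate even two channels, hence the tiny real-world deltas.

```python
# Theoretical peak bandwidth = channels x 8 bytes per transfer x transfer rate.
def peak_gbs(channels, mega_transfers_per_s):
    return channels * 8 * mega_transfers_per_s * 1e6 / 1e9

for ch in (2, 4):
    print(f"{ch}-channel DDR4-2666: {peak_gbs(ch, 2666):.1f} GB/s")
# 2-channel: ~42.7 GB/s; 4-channel: ~85.3 GB/s on paper
```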
That's how I traditionally handle my builds too. If you are on a budget (I'd say ~$900 for parts is one of the most common) and want a balanced system that is a capable gamer, you are typically allocating around $500 for your CPU+MB+GPU. In that scenario I usually end up spending ~$200 each on CPU and GPU, which has meant the best bang for the buck has usually been an unlocked Core i5 on sale for the past 7+ years or so. Ryzen's release and Intel's new pricing due to their core-count increases might lean me more toward the AMD route if I were assembling a new build in the next 6 months.
That's one of the main reasons I've typically tried to stay current on the latest Intel news.
No doubt in my mind that Ryzen pushed Intel into releasing Coffee Lake; Intel does not exist in a market vacuum.
https://segmentnext.com/2017/09/19/intel-coffee-lake-8-core-cpus/ - I would take this rumour with a grain of salt; however, it is highly possible, unless there is a specific Intel roadmap to the contrary (such as the new Ice Lake architecture, which we know is coming).
A Coffee Lake 8-core could be a potential stopgap measure prior to Ice Lake production... but I don't really understand the practicalities of manufacturing...
As some commenters on that article pointed out, that store has been terrible at predicting MSRP in the past. You also have to account for the fact that you're looking at prices in a different country, which are bound to be different.
Like Yuka said, who can keep track of all these rumors... Lack of clear communication from Intel doesn't help.
1. It is difficult to accept that AMD disabled half the memory channels on the Intel platform only to get less than a 1% advantage. Moreover, here is a real test where quad-channel provides a nice 7% performance advantage.
2. Turbo Boost Max 3.0 was disabled on the Intel chips while turbo was enabled on RyZen. That is why reviews couldn't reproduce AMD's performance claims.
3. Early leaks and benches used CPU-Z. It was later found that a bug affected scores on chips with 256KB of L2 (such as Broadwell-E) and gave an extra performance advantage to RyZen over Broadwell-E. The bug was corrected in a newer version of CPU-Z.
There were also the bugs that affected several claimed overclocking records set with RyZen, to the point that HWBOT banned all those submitted RyZen scores because the measured performance was fake.
5. Testing at 4K generates a GPU bottleneck and hides the performance deficits of RyZen. That is why so-called "CPU tests" involve low resolutions. Those 720p tests aren't made because people play games at those resolutions, but for a different, technical reason which has been given a dozen times.
6. It was done in AMD demos involving Broadwell-E and Broadwell Xeons.
7. The libquantum subscore was removed for both Intel and AMD. The cheating was in the rest of the subtests. For instance, the 403.gcc subscore was 40% slower on Broadwell-E Xeons and about 60% slower on Skylake Xeons thanks to special compiler/flag choices. Curiously, using those choices the biased review managed to reproduce AMD's official results for Broadwell chips.
8. No one said it is an issue for AMD. It is an issue for the reviewer who chose the only known incompatible X299 motherboard for his review.
9. Guru3D and HU/Techspot have used engineering samples of Intel chips in their reviews and comparisons of Intel vs AMD.
10. Guru3D, Techspot, and Ars Technica do compare overclocked AMD vs stock Intel.
11. No one mentioned SPEC in this point. My claim was about AMD using a custom workload for Blender. Using one of the standard Blender workloads, the picture is different: RyZen loses instead of winning (see the timing sketch after this list).
12. Virtually every review site has retested Zen with the latest AGESA/BIOS. Some have published special articles comparing new versions of AGESA/BIOS for RyZen, and a few have published special articles detailing the changes made in new BIOS/AGESA releases. On the other hand, I only know of a pair of sites that retested the i9 with the final BIOS. Everyone else only published the launch beta-BIOS results, and some didn't even mention the use of a beta BIOS.
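On point 11, anyone can rerun the comparison with a standard workload instead of a custom one. A minimal timing sketch (assumes a blender binary on PATH; the scene file name is a placeholder for one of the standard community benchmark scenes, e.g. the well-known BMW scene):

```python
# Time a standard Blender scene render from the command line.
# `blender` must be on PATH; the .blend file below is a placeholder.
import subprocess
import time

SCENE = "bmw27.blend"  # placeholder path to a standard benchmark scene

start = time.perf_counter()
# -b: run headless (no GUI), -f 1: render frame 1, then exit
subprocess.run(["blender", "-b", SCENE, "-f", "1"], check=True)
print(f"Render time: {time.perf_counter() - start:.1f} s")
```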
We have talked about this multiple times. Of course AMD wants us to use their CPUs at higher resolutions so they can hide their weak single-core performance, because at low resolutions the Intel advantage is significant, and it trails off as you run games at higher resolutions and detail settings (because the graphics card becomes the performance bottleneck), duh. The Intel Core i7-7700K outpaced the 1700 by around 40fps in the CPU-intensive Ashes of the Singularity test.
The 7700K's good performance continued in single-threaded tests: its 472-point result in POVRay was easily ahead of the Ryzen chip's 315-point result, and it is nearly 60 points better in Cinebench. The Core i7 is a better overclocker than AMD's chip, too, and its power consumption isn't much higher than the Ryzen 7 1700's. The only area where the Core i7-7700K falls behind is in multi-threaded benchmarks; don't let fake reviews fool you.
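For what it's worth, the resolution argument both sides keep circling boils down to this: a frame takes roughly the longer of its CPU and GPU costs, and resolution inflates only the GPU cost. A toy model (my own numbers, not from any review):

```python
# Frame rate is limited by whichever of the CPU or GPU takes longer per frame.
def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_a, cpu_b = 5.0, 7.0  # hypothetical per-frame CPU cost in ms (A is 40% faster)
for res, gpu_ms in [("720p", 4.0), ("1080p", 8.0), ("4K", 20.0)]:
    print(f"{res}: CPU A {fps(cpu_a, gpu_ms):.0f} fps vs CPU B {fps(cpu_b, gpu_ms):.0f} fps")
# 720p: 200 vs 143 fps -- the CPU gap is fully visible
# 4K:    50 vs  50 fps -- GPU-bound, the gap vanishes
```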
Do you really want me to believe that? When we ran a test, my i7-990X (released Q1'11) running past 4GHz got equal and sometimes 20+ more FPS than my friend's new Ryzen 7, each of us using one of my two GTX 980 Ti graphics cards while playing online together. Same games, same levels, walking side by side while talking through a headset. I just can't.
I can... There is a plethora of different things that could be happening in the background that you don't know about, from the usual "did he apply the thermal paste correctly?" to "could he have malware running?" and everything in between.
To compare correctly, you need lab conditions; what you're talking about is, like it or not, anecdotal.
Cheers!
Did he apply the thermal paste correctly? LOL. We are both running an XSPC EX360 high-performance radiator just for the processor, and we have been building our own computers for as long as we can remember. Like I said, my single-core performance is 5+% better than his Ryzen 7 in CPU-Z; mine was at 4.5GHz when testing (I keep it at 4.2GHz for normal use, no need for more), and his is at 3.9GHz because we can't get it to run stable past that, not even with the XSPC EX360 high-performance CAR radiator :lol:
And that is fine and all. What about the rest of the variables?
As the graph shows, results can vary depending on the title, and there were compatibility issues with RAM and optimization issues with games at launch. Try comparing frame rates now: the AGESA 1.0.0.6 update has made things much better for RAM compatibility and has tightened Ryzen's internal latency problem a bit. You do know that going from 2133MHz to 3200MHz RAM gives Ryzen around a 15% FPS increase, a bigger gain than Intel sees from the same RAM upgrade. I'm sure your testing conditions were likely less than ideal.
When the X299 platform and its associated motherboards debuted, extreme overclockers like der8auer remarked that the new boards' mostly-decorative VRM heatsinks were actually interfering with cooling the hot hardware underneath. A couple of weeks later, the aforementioned Deutschlander showed that you could resolve that problem just by grinding a few grooves into the heatsinks. Asus has apparently taken that advice to heart, as the new ROG Strix X299-XE Gaming is identical to its "X"-deprived forebear save for the grooved VRM heatsink.
The new board carries forward the same bounty of functionality as its predecessor. Asus says the X299-XE Gaming will handle four channels of DDR4 memory at up to 4133 MT/s. The mobo has three PCIe x16 slots that can run in an x16/x16/x8 configuration with a 44-lane CPU installed, and the pair of M.2 sockets can simultaneously run in PCIe 3.0 x4 mode.
There's on-board 802.11ac Wi-Fi and Bluetooth 4.2 connectivity, a trio of on-board USB 3.1 Gen 2 ports (in both Type-A and Type-C flavors), and Asus' top-tier SupremeFX audio setup with a Realtek S1220A codec and Japanese capacitors. The board naturally has RGB LED lighting, and Asus includes a 12" (30 cm) light strip in the box.
The ROG Strix X299-XE Gaming board has only just appeared on Asus' website, and the only listing we found for it is for a third-party Amazon seller, going for $430. As more stores have the board in stock, we expect its price to more closely follow the $345 of the existing ROG Strix X299-E Gaming.
Lol, I'm still using triple-channel DDR3 and he is using DDR4.
Intel has reportedly rescheduled the releases for some of its next-generation Cannon Lake-based processors, mostly ones with an integrated GPU, to the end of 2018, which has already affected notebook brand vendors' new projects and their suppliers, according to sources from the upstream supply chain.
Some vendors are even considering skipping Cannon Lake to wait for the release of its successor, the Ice Lake CPUs, which according to Intel's roadmap, should be available shortly after the specific processors' rescheduled launch, the sources said.
In response, Intel said that the company will be shipping its first 10nm products near the end of 2017 beginning with a lower volume SKU followed by a volume ramp in the first half of 2018.
After experiencing five consecutive years of shipment declines, demand for notebooks has grown stable in 2017. Industry players hope that Intel's new 10nm Cannon Lake CPUs - which are expected to see up to 25% performance improvement and 45% less power consumption compared to existing 14nm Kaby Lake processors - can rejuvenate the notebook market, the sources pointed out.
Most notebook vendors have already begun their request-for-quotation (RFQ) processes for 2018 notebook orders, but they now may have to revise their notebook plans, the sources said.
Your processor doesn't suffer from internal latency problems. Faster RAM reduces the internal latency associated with Infinity Fabric. Also, as mentioned before, the AGESA 1.0.0.6 update has made things much better for RAM compatibility and has tightened Ryzen's internal latency problem a bit. And like I also said, I'm sure your testing conditions were likely less than ideal.
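The mechanism in rough numbers (my own arithmetic: on first-generation Ryzen the Infinity Fabric clock tracks the memory clock, i.e. half the DDR4 transfer rate):

```python
# On Ryzen, the Infinity Fabric clock equals MEMCLK (half the DDR4 rate),
# so faster RAM directly speeds up communication between the CCXes.
for ddr in (2133, 3200):
    print(f"DDR4-{ddr}: fabric clock ~{ddr / 2:.0f} MHz")
# DDR4-3200 runs the fabric ~50% faster than DDR4-2133, which is where a
# good chunk of the reported FPS gains comes from.
```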
...overall gains in CPU performance on the Intel side of things haven't been improving in leaps and bounds since 2012.
Compare the 3770K (2012) to the 7700K (2017): it's roughly a 30% performance increase... in 5 years.
Whacky do.
The graphics card side of things is much more impressive.
In this regard, it was said many years ago, with the move to dual and then more cores, that multicore would pave the way for the future of CPUs (and therefore programming as well) because of GHz limitations. Finally, in 2017, we have a CPU that can achieve 5GHz. I remember talking about this in 2008; 5GHz seemed beyond reach back then, when a good Intel CPU was around 4.0GHz.
Anyway, my main point is that CPU gains on the single-core front have been limited since they started to reach GHz/process limitations; an economy-of-scale way to increase processing power is obviously more cores.
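Put that headline figure in annual terms and the stagnation is even clearer; quick arithmetic on the number quoted above:

```python
# 30% total improvement over 5 years, expressed as a compound annual rate
cagr = 1.30 ** (1 / 5) - 1
print(f"~{cagr:.1%} per year")  # ~5.4% per year
```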
My i7-3770K@4.6GHz measures up against the stock i7-7700K fairly well in some tests: 171/194; 848/984. Mind you, my testing environment was less than ideal, with programs running while I was testing, and far from a fresh installation of Windows 10. Now of course that's why we get K processors: for longevity. I also agree that more cores increase throughput!
Edit: 171/196 against the similarly overclocked 7900X@4.6GHz.
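In relative terms those score pairs work out as follows (scores copied from the post above; the benchmark isn't named, so treat this as ratio math only):

```python
# Relative gap for each score pair quoted above (mine/theirs).
pairs = {
    "single-thread vs stock 7700K": (171, 194),
    "multi-thread  vs stock 7700K": (848, 984),
    "single-thread vs 7900X@4.6":   (171, 196),
}
for label, (mine, theirs) in pairs.items():
    print(f"{label}: {(1 - mine / theirs):.0%} behind")
# roughly 12-14% behind in these runs
```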
Regardless of what you have said, that's still a six-year-old chip matching or beating a chip released this year.
Looking at benchmarks, it comes within 10% of Ryzen's single-core score with both chips overclocked. Sure, it gets pretty much demolished in multi-core, but that's a pretty unimpressive result for the newer chip.
I doubt there are viruses or malware on a brand-new computer, and in gaming, Ryzen having many more cores than the task needs means background tasks likely aren't getting in the way.
Stop using the slower RAM as an excuse; it's entirely possible that the 990X was indeed faster at processing the assets required for that game. And even if that isn't the case, 3200MHz RAM is considerably more expensive than 2133MHz RAM.