Intel's Future Chips: News, Rumours & Reviews

Status
Not open for further replies.


My understanding is:
8th gen chips are Coffee Lake 14nm++(+?) for desktop and should be out Oct. 5th or 10th.
Laptop chips are Cannon Lake 10nm and out now.
Desktop Ice Lake 10nm chips will probably be Q3 2018 if they can get it to work, though I've read Q1 as well.

I'm waiting for i5-8600K reviews and will make a buying decision between that and Ryzen next month based on the comparisons.
 


Thanks, that helps.

So it's worth waiting to see how the 14nm desktop parts turn out.

Cheers!
 


The link I gave is not a rumor. It is the actual price of the i7-8700K in that store.
 


1. Less than 1% performance difference in the relevant benchmarks from quad channel memory per testing at 3 different sources. It looks good on AIDA, but beyond that, it is mostly superfluous outside professional level workloads that need the raw capacity.

2. It was also disabled on Ryzen in those tests, are we going to make it fair, or edge one side over the other? That was per clock testing, and it was relevant because the clocks were locked (which is what any reviewer would do to test per clock).

3. Source? There were no workloads favoring Zen in any testing, and the sleep bug was discovered by a random guy on the internet. Clearly AMD never put their PCs to sleep.

4. What about the gameplay with OBS running? What about the photo editing and video editing workloads where Intel normally shines?

5. Both sides were tested at 4K (which is relevant), you can complain all you want about it, but I want to know how a top end system performs running top end settings. I could not care less how a $4k PC build runs @ 320x240 or some other insanely bad resolution.

6. The memory system was never crippled on Intel processors in any tests.

7. There were no compiler optimizations beyond accommodating the removal of libquantum, which heavily favors Intel.

8. If Intel has known compatibility issues in their product stack, that is not an issue of AMD's.

9. They tested retail chips...AMD does not get hands on with Intel engineering samples (I cannot even make a leap in logic to a world where Intel would legitimately send AMD engineering samples...they went to court over that very idea).

10. This is a fallacy, AMD never tested an overclocked chip against a stock Intel chip.

11. Uhh...SPEC is official.

12. They tested what was available...How many reviewers have gone back to test Ryzen since the AGESA update? None? No sympathy.

13. Considering everything listed to this point in this rant was bad information, or flat out wrong, I am curious to hear what "And so on" could possibly be.
 


That's how I traditionally handle my builds too. If you are on a budget (I'd say ~$900 for parts is one of the most common) and want a balanced system that is a capable gamer you are typically allocating around $500 for your CPU+MB+GPU. In that scenario I usually end up spending ~$200 each on CPU and GPU, which has meant best bang for buck has usually been an unlocked Core i5 on sale for the past 7+ years or so. Ryzen's release and Intel's new pricing due to their core count increases might lean me more toward the AMD route if I were assembling a new build in the next 6 months.

That's one of the main reasons I've typically tried to stay current on the latest Intel news.
 
No doubt in my mind that Ryzen pushed Intel into releasing Coffee Lake. Intel does not exist in a market vacuum.

https://segmentnext.com/2017/09/19/intel-coffee-lake-8-core-cpus/ - I would take this rumour with a grain of salt; however, it is highly possible, unless there is a specific Intel roadmap to the contrary (such as the new Ice Lake architecture, which we know is coming).

An 8-core Coffee Lake could be a potential stopgap measure prior to Ice Lake production... but I don't really understand the practicalities of manufacturing.



 


As some commenters on that article pointed out, that store has been terrible at predicting MSRP in the past. You also have to account for the fact that you're looking at prices in a different country, which are bound to differ.
 




http://www.tomshardware.co.uk/forum/id-1581001/intel-future-chips-news-rumours-reviews/page-49.html#20194068

2017:
mobile --> Cannon Lake (10nm)
desktop --> Coffee Lake (14nm++)

2018:
mobile --> Ice Lake (10nm+)
desktop --> Ice Lake (10nm+)
 


1. It is difficult to accept that AMD disabled half the memory channels on the Intel platform only to gain less than a 1% advantage. Moreover, here is a real test where quad-channel provides a nice 7% performance advantage.

memory_bandwidth_7zip_938-100613932-orig.png


2. Turbo 3 was disabled on the Intel chips while Turbo was enabled on Ryzen. That is why reviews couldn't reproduce AMD's performance claims.

3. Early leaks and benches used CPU-Z. It was found that a bug affected scores on chips with 256KB of L2 (such as Broadwell-E) and gave Ryzen an extra performance advantage over Broadwell-E. The bug was corrected later in a new version of CPU-Z.

There were also bugs that affected several claimed overclocking records set with Ryzen, to the point that HWBOT banned all those submitted Ryzen scores because the measured performance was invalid.

5. Testing at 4K creates a GPU bottleneck and hides Ryzen's performance deficits. That is why so-called "CPU tests" involve low resolutions. Those 720p tests aren't run because people play games at those resolutions, but for a different, technical reason, which has been given a dozen times.

6. It was done in AMD demos involving Broadwell-E and Broadwell Xeons.

7. The libquantum subscore was removed for both Intel and AMD. The cheating was in the rest of the subtests. For instance, the 403.gcc subscore was 40% slower on Broadwell-E Xeons and about 60% slower on Skylake Xeons thanks to special compiler/flag choices. Curiously, using those choices the biased review managed to reproduce AMD's official results for Broadwell chips.

8. No one said it is an issue for AMD. It is an issue for the reviewer that chose the only known incompatible X299 motherboard for his review.

9. Guru3d or HU/Techspot have used engineering samples of Intel chips in their reviews and comparisons of Intel vs AMD.

10. Guru3D, Techspot, Arstechnica do compare OC AMD vs stock Intel.

11. No one mentioned SPEC in this point. My claim was about AMD using a custom workload for Blender. Using one of the standard Blender workloads, the picture is different: Ryzen loses instead of winning.

12. Virtually every review site has retested Zen with the latest AGESA/BIOS. Some have published special articles comparing new AGESA/BIOS versions for Ryzen, and a few have published special articles detailing the changes made in new BIOS/AGESA releases. On the other hand, I only know of a couple of sites that retested the i9 with the final BIOS. Everyone else only published the launch beta-BIOS results, and some didn't even mention the use of a beta BIOS.
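The raw-bandwidth side of the quad-channel debate (point 1) is simple arithmetic. A quick illustrative calculation, using DDR4-2666 as an example rate (not a rate taken from any of the tests discussed here):

```python
# Theoretical peak DRAM bandwidth: channels x 8 bytes/transfer x MT/s.
# Doubling the channel count doubles peak bandwidth, but real workloads
# only benefit when they are actually bandwidth-bound, which is why the
# gaming delta can be <1% while a compression benchmark shows a gain.

def peak_bandwidth_gb_s(channels, mt_per_s, bytes_per_transfer=8):
    """Each DDR4 channel is a 64-bit (8-byte) bus, one transfer per MT."""
    return channels * bytes_per_transfer * mt_per_s / 1000

dual = peak_bandwidth_gb_s(2, 2666)   # ~42.7 GB/s theoretical peak
quad = peak_bandwidth_gb_s(4, 2666)   # ~85.3 GB/s theoretical peak
print(f"dual {dual:.1f} GB/s, quad {quad:.1f} GB/s")
```

The 2x peak only translates into real performance for workloads that saturate the memory bus.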
 
We have talked about this multiple times. Of course AMD wants us to run their CPUs at higher resolutions so they can hide their weak single-core performance, because at low resolutions the Intel advantage is significant, and it only trails off as you run games at higher resolutions and detail settings (because the graphics card becomes the performance bottleneck), duh. The Intel Core i7-7700K outpaced the 1700 by around 40fps in the CPU-intensive Ashes of the Singularity test.
The 7700K's good performance continued in single-threaded tests: its 472-point result in POVRay was easily ahead of the Ryzen chip's 315-point result, and it is nearly 60 points better in Cinebench. The Core i7 is a better overclocker than AMD's chip too, and its power consumption isn't much higher than the Ryzen 7 1700's. The only area where the Core i7-7700K falls behind is multi-threaded benchmarks. Don't let fake reviews fool you.
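The bottleneck mechanism both sides keep arguing over can be sketched with a toy model (all numbers below are illustrative, not measurements): the observed frame rate is roughly the minimum of what the CPU and the GPU can each deliver, so as resolution rises, the GPU ceiling drops and masks any CPU difference.

```python
# Toy frame-rate model: observed FPS ~= min(CPU ceiling, GPU ceiling).
# The CPU ceiling is roughly resolution-independent; the GPU ceiling
# falls as resolution rises. All numbers are hypothetical.

def observed_fps(cpu_fps, gpu_fps):
    """Whichever pipeline is slower limits the frame rate."""
    return min(cpu_fps, gpu_fps)

cpu_a = 160  # hypothetical CPU with stronger single-thread performance
cpu_b = 120  # hypothetical weaker CPU

for res, gpu_ceiling in [("720p", 300), ("1080p", 140), ("4K", 60)]:
    a = observed_fps(cpu_a, gpu_ceiling)
    b = observed_fps(cpu_b, gpu_ceiling)
    print(f"{res}: CPU A {a} fps vs CPU B {b} fps (gap {a - b})")
```

In this sketch the 40fps gap at 720p shrinks to 20fps at 1080p and vanishes at 4K, which is exactly why low-resolution runs are used to isolate CPU performance and high-resolution runs reflect the whole system.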
 



Vega_14.png

Prey.png

Ashes.png

Dirt.png


 


Do you really want me to believe that, when we ran a test, my i7-990X past 4GHz (released Q1'11) gets equal and sometimes 20+ more FPS than my friend's new Ryzen 7? Each of us was using one of my two GTX 980 Ti graphics cards while playing online together: same games, same levels, walking side by side while talking through a headset. I just can't.
 


I can... There are a plethora of things that could be happening in the background that you don't know about, from the usual "did he apply the thermal paste correctly?" to "could he have malware running?" and everything in between.

To compare correctly, you need lab conditions; what you're talking about is, like it or not, anecdotal.

Cheers!
 


Did he apply the thermal paste correctly? LOL. We are both running an XSPC EX360 High Performance Radiator just for the processor, and we have been building our own computers for as long as we can remember. Like I said, my single-core performance is 5+% better than his Ryzen 7 in CPU-Z. Mine was @4.5GHz when testing; I keep it @4.2 for normal use (no need for more). His is @3.9 because we can't get it to run stable past that, not even with the XSPC EX360 radiator :lol:
 


And that is fine and all. What about the rest of the variables?
 


As the graph shows, results can vary depending on the title, and there were compatibility issues with RAM and optimization issues with games. Try comparing frame rates now: the AGESA 1.0.0.6 update has made things much better for RAM compatibility and tightened Ryzen's internal latency problem a bit. You do know that going from 2133MHz to 3200MHz RAM offers Ryzen a 15% FPS increase versus Intel using the same RAM? I'm sure your testing conditions were likely less than ideal.
 
Well, Intel just officially denied the rumor of delays

https://twitter.com/witeken/status/911219352745148416

So the roadmap continues being that posted here

http://www.tomshardware.co.uk/forum/id-1581001/intel-future-chips-news-rumours-reviews/page-49.html#20194068
http://www.tomshardware.co.uk/forum/id-1581001/intel-future-chips-news-rumours-reviews/page-50.html#20199323

Not only is there no foundry convergence next year (despite the hype about GloFo 7nm), it is just the contrary... the foundry gap increases.

14nm Zen will be fighting 14nm++ Coffee Lake and 10nm Cannon Lake this year.

14nm+ Zen ('12nm') will be fighting 10nm+ Ice Lake in 2018.
 
ROG Strix X299-XE Gaming motherboard is rather groovy
by Zak Killian — 4:00 PM on September 21, 2017

When the X299 platform and its associated motherboards debuted, extreme overclockers like der8auer remarked that the new boards' mostly-decorative VRM heatsinks were actually interfering with cooling the hot hardware underneath. A couple of weeks later, the aforementioned Deutschlander showed that you could resolve that problem just by grinding a few grooves into the heatsinks. Asus has apparently taken that advice to heart, as the new ROG Strix X299-XE Gaming is identical to its "X"-deprived forebear save for the grooved VRM heatsink.
x299xer.jpg

The new board carries forward the same bounty of functionality as its predecessor. Asus says the X299-XE Gaming will handle four channels of DDR4 memory at up to 4133 MT/s. The mobo has three PCIe x16 slots that can run in an x16/x16/x8 configuration with a 44-lane CPU installed, and the pair of M.2 sockets can simultaneously run in PCIe 3.0 x4 mode.

There's on-board 802.11ac Wi-Fi and Bluetooth 4.2 connectivity, a trio of on-board USB 3.1 Gen 2 ports (in both Type-A and Type-C flavors), and Asus' top-tier SupremeFX audio setup with a Realtek S1220A codec and Japanese capacitors. The board naturally has RGB LED lighting, and Asus includes a 12" (30 cm) light strip in the box.

The ROG Strix X299-XE Gaming board has only just appeared on Asus' website, and the only listing we found for it is from a third-party Amazon seller, going for $430. As more stores get the board in stock, we expect its price to fall more closely in line with the $345 of the existing ROG Strix X299-E Gaming.
 


Lol I'm still using triple channel DDR3 and he is using DDR4

 
Intel GPU-integrated Cannon Lake may not be ready until year-end 2018, say sources
Aaron Lee, Taipei; Joseph Tsai, DIGITIMES [Wednesday 20 September 2017]

Intel has reportedly rescheduled the releases for some of its next-generation Cannon Lake-based processors, mostly ones with an integrated GPU, to the end of 2018, which has already affected notebook brand vendors' new projects and their suppliers, according to sources from the upstream supply chain.

Some vendors are even considering skipping Cannon Lake to wait for the release of its successor, the Ice Lake CPUs, which, according to Intel's roadmap, should be available shortly after the rescheduled launch of those processors, the sources said.

In response, Intel said that the company will be shipping its first 10nm products near the end of 2017 beginning with a lower volume SKU followed by a volume ramp in the first half of 2018.

After experiencing five consecutive years of shipment declines, demand for notebooks has grown stable in 2017. Industry players hope that Intel's new 10nm Cannon Lake CPUs - which are expected to see up to 25% performance improvement and 45% less power consumption compared to existing 14nm Kaby Lake processors - can rejuvenate the notebook market, the sources pointed out.

Most notebook vendors have already begun their request for quotation (RFQ) processes for 2018 notebook orders, but they may now have to revise their notebook plans, the sources said.


Like I said, within a day to a week Intel should respond if the rumors aren't true.
 


Your processor doesn't suffer internal latency problems. Faster RAM reduces the internal latency associated with Infinity Fabric. Also, as mentioned before, the AGESA 1.0.0.6 update has made things much better for RAM compatibility and tightened Ryzen's internal latency problem a bit. Like I also said, I'm sure your testing conditions were likely less than ideal.
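The Infinity Fabric point is worth making concrete: on first-generation Ryzen, the fabric clock runs at the memory clock, i.e. half the DDR4 transfer rate, so faster RAM also speeds up the CCX-to-CCX interconnect. A small illustrative calculation (arithmetic only, not a benchmark):

```python
# First-gen Ryzen ties the Infinity Fabric clock to the memory clock,
# which is half the DDR4 transfer rate (DDR = two transfers per clock).
# So DDR4-2133 -> ~1066 MHz fabric, DDR4-3200 -> 1600 MHz fabric.

def fabric_clock_mhz(ddr_rate_mt_s):
    """MEMCLK (and thus the fabric clock) is half the DDR transfer rate."""
    return ddr_rate_mt_s / 2

slow = fabric_clock_mhz(2133)   # ~1066 MHz interconnect
fast = fabric_clock_mhz(3200)   # 1600 MHz interconnect
print(f"fabric uplift: {fast / slow - 1:.0%}")
```

A roughly 50% faster interconnect is why memory speed matters more to Ryzen (especially for cross-CCX traffic in games) than a 15% FPS figure alone would suggest.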
 
...overall gains in CPU performance on the Intel side of things haven't been improving in leaps and bounds since 2012.

Compare the 3770K (2012) to the 7700K (2017): it's roughly a 30% performance increase... in 5 years.
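Taking that ~30% over five years at face value (it's the poster's estimate, not a measured benchmark), the annualized rate works out small; a quick back-of-envelope check:

```python
# Annualize the quoted ~30% cumulative gain over 5 years (3770K -> 7700K).
# The 30% figure is the estimate from the post above, not measured data.
total_gain = 0.30
years = 5
annual = (1 + total_gain) ** (1 / years) - 1
print(f"~{annual:.1%} per year")  # ~5.4% per year
```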

whacky do.

The graphics card side of things is much more impressive.

In this regard, as was mentioned many years ago with the move to dual and then more cores, multicore would pave the way for the future of CPUs (and therefore programming as well) because of GHz limitations. Finally, in 2017 we have a CPU that can achieve 5GHz. I remember talking about this in 2008; 5GHz seemed beyond reach then, when a good Intel CPU was around 4.0GHz.

Anyway, my main point is that CPU gains on the single-core front have been limited since they started to hit GHz/process limitations. An economy-of-scale way to increase processing power is obviously more cores.
 


My i7-3770K@4.6GHz measures up against the stock i7-7700K fairly well in some tests: 171/194; 848/984. Mind you, my testing environment was less than ideal, with programs running while I was testing and far from a fresh installation of Windows 10. Of course, that's why we get K processors: longevity. I also agree that more cores increase throughput!
rdlBywx.jpg

Production.003-1440x1080.png

Production.002-1440x1080.png


Edit: 171/196 against a similarly overclocked 7900X@4.6GHz
 


Regardless of what you have said, that's still a SIX year old chip matching or beating a chip released this year.

Looking at benchmarks it comes within 10% of Ryzen's single core score with both chips overclocked. Sure, it gets pretty much demolished in MC, but that's a pretty unimpressive result.

I doubt there are viruses or malware on a brand new computer, and in gaming Ryzen's many more cores than needed for that task ensure that background tasks likely aren't getting in the way.

Stop using the slower RAM as an excuse, as it's entirely possible that the 990x was indeed faster at processing the assets required for that game. And even if that isn't the case, 3200MHz ram is considerably more expensive than 2133MHz RAM.
 