Intel's Future Chips: News, Rumours & Reviews

Status
Not open for further replies.


That is an SoC-based Xeon; it looks like a replacement for the Atom-based server chips. Meant for low-power operation, probably blade servers and the like.

Interesting to me is that it has built-in 10GbE.
 


Consider that most server boards support dual Xeons at up to 145 W each:

https://www.supermicro.com/products/motherboard/Xeon/C600/X10DRD-i.cfm

That is low power. A high-end server using normal Xeons would probably be in the 200-290 W range on CPUs alone, more than triple that of this board. In the server world this counts as low power, and it is mini-ITX, meaning it is meant for small clients and low-power use, unlike normal servers, which are built for much higher-power applications.
 
Intel is planning to build spin-based graphics chips. What are the advantages and disadvantages of this?
 

My i5s are still doing well with dual monitors.

I have a 1920x1200 and a 1680x1050 monitor. I suspect that if I went to 1440p or 4K I would have a problem, but I'd also have to upgrade graphics cards. That's for the next build; I'm going to hold off for a year or so more to let the 4K thing shake out a little better.

 
My problem is the types of games I play. WoW is quite CPU-dependent, and GW2 is even worse. My CPU will peg near 100% with either, but my GPU sits at around 75% for WoW and 50% for GW2. Hyperthreading should help alleviate my issues by handling the tasks on my second monitor.
 


Several reasons:

1: Measurement error: Task Manager has a long cycle time of one second per update, so it's not the best tool for looking at CPU load. All it takes is ONE CPU core getting overloaded for a few milliseconds to produce a 1 FPS performance delta, and in Task Manager that brief spike disappears into the averages.
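To illustrate that averaging effect with a toy Python sketch (the load numbers are invented, purely for demonstration, not real profiler data):

```python
# Toy illustration: a 1-second averaging sampler (like Task Manager)
# hides a brief per-core spike that is long enough to stall a frame.

samples_per_second = 1000  # pretend we could sample load every 1 ms

# One second of per-core load: mostly idle (10%), with a 20 ms burst at 100%
load = [0.10] * samples_per_second
for i in range(500, 520):  # 20 ms spike, enough to delay a frame or two
    load[i] = 1.00

one_second_average = sum(load) / len(load)
peak = max(load)

print(f"average over 1 s: {one_second_average:.1%}")  # looks harmless
print(f"true peak:        {peak:.1%}")                # the stall the average hides
```

The one-second average comes out near 12%, while the core actually hit 100% for 20 ms; that's the spike a slow sampler never shows you.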

2: Noise: The OS is going to schedule threads slightly differently every time you run an app, because of what background tasks are doing, memory allocation differences, and the like. That by itself will result in a 1-2 FPS delta between essentially equal CPUs.

3: CPU extensions and other low-level architecture optimizations: If the CPU is a little faster, the GPU driver may run a little faster, resulting in a very small speedup.

4: Hyperthreading: Depending on resource allocation, hyperthreading will have a non-zero performance impact. Usually positive, sometimes negative.

So yeah, there's enough noise that you are going to see a good 1-5 FPS delta on a system between benchmark runs. That's why good benchmarking repeats the run several times and records the averages, rather than taking one run as the absolute. Windows is a multitasking OS, which introduces noise into any performance metric. Simple as that.
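A quick Python sketch of that benchmarking practice (the FPS figures are made up to show the kind of 1-5 FPS run-to-run jitter being described):

```python
# Toy sketch of why benchmarks repeat runs: single FPS numbers are noisy.
from statistics import mean

runs_fps = [118.2, 121.5, 119.7, 122.0, 118.9]  # five runs of the same benchmark

print(f"single run:        {runs_fps[0]:.1f} FPS")  # could mislead by several FPS
print(f"mean of 5 runs:    {mean(runs_fps):.1f} FPS")
print(f"run-to-run spread: {max(runs_fps) - min(runs_fps):.1f} FPS")
```

Any one run here could be reported as "118 FPS" or "122 FPS"; the mean over several runs is the number worth quoting.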
 


And may or may not ever be a "now" thing. Intel is always working on new ideas; most of what we see starts five or more years before being deployed, mainly to work out the logistics and optimize it. Short of the 22nm node, Intel hasn't really had any better steppings, because they also tend to spin up their new nodes well before a CPU launches on them. Even then, Devil's Canyon was not due to a better stepping but a better TIM between the CPU die and the IHS.
 



It's all about spintronics. It's a new computing technology: the spin of an electron can represent 1s and 0s.
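As a toy Python sketch of that idea (purely illustrative encoding, not how real spintronic devices work):

```python
# Toy mapping: spintronics stores bits in electron spin rather than charge.
# "up"/"down" here just label the two spin states.

SPIN_UP, SPIN_DOWN = "up", "down"

def encode(bits):
    """Represent each bit as a spin state: 1 -> spin-up, 0 -> spin-down."""
    return [SPIN_UP if b else SPIN_DOWN for b in bits]

def decode(spins):
    """Read spin states back out as bits."""
    return [1 if s == SPIN_UP else 0 for s in spins]

message = [1, 0, 1, 1]
assert decode(encode(message)) == message  # round-trips cleanly
```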
 


So if I'm understanding this right, instead of the traditional positive/negative charge as binary, we're taking each electron's spin as a 1 or 0? And I'm assuming this pertains to quantum computing and storing data using 1-9?
 


Somewhere I think I saw a rumor that we won't see it until Computex in June.

Also, take this with a grain of salt, considering the source.

http://wccftech.com/intel-kaby-lake-q3-2016-cannonlake-2017/
 