Intel's Future Chips: News, Rumours & Reviews

I haven't gamed much on Linux, since most of my experience is with virtual machines on an E5300 with 4 gigs of RAM, but I only know of one time where the stock Linux/Intel graphics drivers didn't work, and that was when Linux Mint fell back to software rendering...
 


Sorry that happened to you. Mint is actually my favorite Linux OS of the ones I've used, that and Fedora (both being based on Unix).

 
moar on skylake-powered zotac zbox sn970
http://techreport.com/news/27927/a-peek-inside-zotac-console-killer-steam-machine
3 slots for solid state storage, maxwell dgfx, possibly up to 16GB system memory.
its dimensions really stand out. it looks like a real console instead of a relabelled pc-in-a-mini-itx-case.
intel could stick in one of those 8-core atoms, bdw-de cpus, or low-voltage desktop core i7 cpus.
except the $999 price will drive people away. 🙁
 


I saw that mini gaming PC; it looks like it's the same size as a Wii U but with an i7 and a 970M. I could easily hide that thing. Sadly it's probably a lot of money, but I expect it to play everything at ultra settings at 1080p around 30fps, and probably 45-60fps on medium.
 


The problem is that the OGL driver layer, by design, is HUGE. You also have the fact that the API and the hardware evolved in two different directions, which makes it a LOT harder to keep things working over time. A more sane API should help.

Problem is, you ARE making a new API from scratch, so there will be many, many bugs in the first version or two.
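To make the contrast concrete, here's a toy sketch in plain C (hypothetical types and functions, not the real GL or Vulkan entry points) of why the implicit model bloats the driver: a GL-style driver has to re-validate its whole hidden state machine on every draw, while a Vulkan-style API moves validation into explicit, up-front object creation and keeps command recording thin.

#include <stdio.h>

/* Toy model only -- hypothetical types, not the real GL/Vulkan APIs. */

/* OpenGL-style: one big implicit state block owned by the driver. */
struct gl_state { int texture, shader, blend_mode; };
static struct gl_state g_state;   /* hidden global state machine */

static void gl_bind_texture(int t) { g_state.texture = t; }

static void gl_draw(int vertex_count)
{
    /* The driver can't know what changed, so EVERY draw re-validates
       (and may recompile/patch shaders) against the full state. */
    printf("validate {tex=%d shader=%d blend=%d}, draw %d verts\n",
           g_state.texture, g_state.shader, g_state.blend_mode, vertex_count);
}

/* Vulkan-style: state is baked into an immutable pipeline object once;
   draw calls just record thin commands that the app submits itself. */
struct pipeline { int texture, shader, blend_mode; }; /* validated at creation */

static void cmd_draw(const struct pipeline *p, int vertex_count)
{
    printf("record: pipeline {tex=%d}, draw %d verts, no re-validation\n",
           p->texture, vertex_count);
}

int main(void)
{
    gl_bind_texture(7);
    gl_draw(3);                       /* full validation on every call */

    struct pipeline p = {7, 1, 0};    /* all the checking happens here */
    for (int i = 0; i < 3; i++)
        cmd_draw(&p, 3);              /* cheap, repeatable recording   */
    return 0;
}

The point of the explicit model is that the expensive checking happens once, at pipeline creation, instead of being smeared across every draw call inside a huge driver.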
 


My point is a different one: Vulkan will not change anything that is said in the blog. Vendor A will continue being the "Graphics Mafia", vendor B will continue developing a "hodgepodge, inconsistent performance, very buggy, inconsistent regression testing, dysfunctional driver threading that is completely outside of the dev's official control", and vendor C "don't really want to do graphics, it's really just a distraction from their historically core business".

A remark on the blog: it is not at all true that Intel doesn't really want to do graphics; what happens is that Intel is not interested in current GPU-based graphics. Their long-term plan is to do graphics on the CPU.
 
I have to say I tried Dying Light on Ubuntu and I was upset. Juan, I'm sure you looked into the PS3's Cell CPU and its SPE units, and how they originally weren't even going to include a GPU in the console. What are your thoughts on that?
 
A remark on the blog: it is not at all true that Intel doesn't really want to do graphics; what happens is that Intel is not interested in current GPU-based graphics. Their long-term plan is to do graphics on the CPU.

They can try, but they will fail. Graphics processing is too big to do on serial cores, no matter how many of them you wire together.
 


I know little about Cell, except that it was a disaster. Check my reply to gamerk below for details on my point.



The flaw in your argument is that the plan is not to use serial cores, but a kind of hybrid core (nothing like MorphCore) that provides the best of both latency and throughput designs.

The original idea came from a developer who has designed a CPU-based rendering engine. He communicated it to me during a set of discussions on the RWT forums. My first reaction was similar to yours: I believed then that it was not possible to use CPUs efficiently for GPU work. But he gave me details, and later I showed in the forums that a hybrid-core CPU could effectively provide the same performance and efficiency as the best known GPU design.

In fact, a very well-known game developer likes this kind of idea and expects future games to be rendered on CPUs, not GPUs. I agree with him.
 
Intel Fires Off At ARM With Xeon D SoC
http://www.tomsitpro.com/articles/intel-xeon-processor-soc-server,1-2490.html
http://techreport.com/review/27928/intel-xeon-d-brings-broadwell-to-cloud-web-services
8-core broadwell-de is here.
this thing is an 8C/16T processor with 12MB of L3 cache (more like 1.5MB per core, configured like the -E cpus), 24 lanes of PCIe Gen3, and dual 10GigE links.
We don't yet have full pricing and specs on the various Xeon D models Intel will offer. We do know that the Xeon D-1540 will have eight cores with a base clock of 2.0GHz, an all-core Turbo peak of 2.5GHz, and a single-core Turbo peak of 2.6GHz. Meanwhile, the Xeon D-1520 will feature a 2.2GHz base frequency, 2.5GHz all-core Turbo, and a 2.6GHz single-core peak. Both chips should be available this month.
i'd like to see an 8-core, non-HT version in that zotac zbox sn970 pc... 😗
 


A more accurate description would read: Intel micro-servers on 22nm FinFET are 2x-3x as power efficient as the APM X-Gene 64-bit on ancient 40nm planar.

The real fight will be X-Gene 3 on 16FF vs Broadwell D/EP on 14FF.
 


But ARM is so efficient, shouldn't that make up for it? JK

The architecture fared worse than I thought, to be honest, particularly the memory controller throughput numbers and the cache latency. It's not so much the cores killing it as the uncore.

Intel has put a lot of money into their ring bus controller, system agent and caches. That's going to be hard to beat no matter what.

Also, X-Gene 3 is a 2016 part, so you mean Skylake Xeon D or whatever the equivalent will be a year from now. Broadwell Xeon D is shipping now.
 
I know little about Cell, except that it was a disaster. Check my reply to gamerk below for details on my point.

The Cell CPU is still faster than any normal CPU out there, or did we suddenly become able to push >200 GFLOPS with just a CPU while I wasn't looking?

Granted, it's a beast to code for and expensive to manufacture, but in terms of raw performance, nothing else currently comes close.
 


Gamer, I know nothing about this stuff, but my CPU pushes 500+ GFLOPS according to Aida64. Let me show a picture; I'm probably looking at it wrong.
http://s16.postimg.org/89tt7t4zp/251.png

This is with the 4790K
 
I wonder why Aida64 always pulls such insane numbers; SiSoft gets around 100 GFLOPS, and Whetstone AVX around 120, and that's with AVX extensions enabled.

Point being, the Cell CPU is about twice as fast as a 4790k. It's still a beast of a CPU.
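For reference, the divergent numbers roughly line up with the theoretical peaks. A back-of-the-envelope sketch (assuming one full-width FMA issue every cycle on every core, which no real benchmark sustains): Aida64's 500+ figure sits near the 4790K's single-precision FMA peak, while Whetstone is double precision and presumably the SiSoft test exercises a narrower path, so they land far lower.

#include <stdio.h>

/* theoretical peak = cores x clock (GHz) x FLOPs per cycle per core */
static double peak_gflops(int cores, double ghz, int flops_per_cycle)
{
    return cores * ghz * flops_per_cycle;
}

int main(void)
{
    /* Haswell 4790K: 2x 256-bit FMA units per core
       = 32 SP / 16 DP FLOPs per cycle per core */
    printf("4790K SP peak: %.1f GFLOPS\n", peak_gflops(4, 4.0, 32)); /* 512.0 */
    printf("4790K DP peak: %.1f GFLOPS\n", peak_gflops(4, 4.0, 16)); /* 256.0 */

    /* Cell: 8 SPEs at 3.2 GHz, one 4-wide SP FMA per cycle each
       = 8 FLOPs per cycle per SPE (the PS3 exposes only 6 of the 8) */
    printf("Cell SP peak:  %.1f GFLOPS\n", peak_gflops(8, 3.2, 8));  /* 204.8 */
    return 0;
}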
 


What is JK? Jim Keller?

ARM is more efficient, but it is not a magical ISA and cannot compensate for a huge process node gap: 40nm PL vs 22nm FF.

About performance, we have known for a year that the X-Gene was supposed to score about 110 on SPECint_2006, which is Avoton level. And Anandtech finds X-Gene to provide 93% of the single-thread performance of Avoton.

X-Gene 3 is not aimed at microservers. I did mean Broadwell EP/EX.
 


JK as in Just Kidding.

Now you see the uphill battle they face. Avoton came out in 2013, and that part is twice as efficient as X-Gene. They need to double efficiency just to catch Avoton, and Intel has already moved on to Xeon D.
 


Let me take Anandtech's calculations with a grain of salt. I agree that Avoton on 22FF is more efficient than X-Gene on 40PL, but I doubt the gap is 2x. The gap is closer to 60%, according to data I have (SPECint/W).

According to Intel, Xeon D improves efficiency by 1.7x over Avoton, despite Broadwell being less efficient than Silvermont, because 14FF brings >2x the efficiency of 22FF.

ARM's efficiency can compensate for the gap between 16FF (X-Gene 3) and 14FF (Broadwell), but it cannot compensate for the huge difference between 40PL (X-Gene) and 22FF (Avoton), nor the huge difference between 28PL (X-Gene 2) and 14FF (Broadwell).
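A quick sanity check on those multipliers, treating perf/W gains as multiplicative (a simplification) and using the rough figures quoted in this thread rather than measured data:

#include <stdio.h>

int main(void)
{
    /* Figures quoted above -- rough claims, not measurements. */
    double total_gain = 1.7;  /* Intel's claimed Xeon D perf/W gain over Avoton */
    double node_gain  = 2.0;  /* lower bound of the ">2x" 14FF-over-22FF claim  */

    /* If total = node x core, the implied core factor shows Broadwell
       giving part of the process gain back versus Silvermont: */
    printf("implied core factor: %.2f\n", total_gain / node_gain); /* 0.85 */
    return 0;
}

A factor of about 0.85 would mean the Broadwell cores cost roughly 15% in perf/W versus Silvermont at the same process, which is consistent with the claim above.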
 


The cores may scale, but who knows about the uncore. ARM Holdings only spends about $400 million a year on R&D.
 

amd doesn't have a modem afaik. that's one of the possible reasons they can't compete against the likes of qualcomm, nvidia and intel. intel acquired infineon's wireless division in 2011 and turned it into intel mobile communications.
http://semiaccurate.com/2010/08/09/intel-wants-piece-infineon/

edit2:
Want 32GB of RAM in your laptop or NUC? You can finally do it
http://www.pcworld.com/article/2894509/want-32gb-of-ram-in-your-laptop-or-nuc-you-can-finally-do-it.html
finally ditch SSD altogether. <3

edit3:
in reality, until ram kits ship with some data-retention tech, we might still need an ssd to quickly save data in case of a problem. i'm still for ditching the ssd; it's an intermediate technology.

Intel Lowers First-Quarter Revenue Outlook
http://www.techpowerup.com/210618/intel-lowers-first-quarter-revenue-outlook.html
consumer broadwell has hit.
 