Intel's Future Chips: News, Rumours & Reviews

Status
Not open for further replies.
Most companies do this while also putting older stock on sale. Happens all the time, like in the car business. The dealerships will order only a few of the new model years and put the old model years on big discounts.

I personally think it is more of a supply issue than Intel holding out. They never have before; normally they have the inventory built up and shipping so chips are readily available at launch.

That'd be understandable if a company had limited stock of an item, first come, first served. But they literally had nothing, which throws up a ton of red flags. I wouldn't put it past them to resort to such a tactic, but what you said makes sense if they've never had something like this happen before.

Considering every Intel CPU I have ever bought has been packaged in Costa Rica, I would wonder why they would be using their Taiwan packaging facility for the US instead of for the markets over there.

That is what would make me say the blog is probably wrong. The smart move for Intel is to use Costa Rica to package for our markets and the facilities near the other markets for them; that saves on shipping costs in the long run. It is much like how, for the most part, the CPUs the US receives are made at the fabs here in the States.

Who knows, maybe these CPUs are being fabricated in Taiwan. It could be cheaper in the long run given production costs, and it would definitely be more cost-effective than making them in the United States. Though I quoted two sources saying the same thing, the blog could've just quoted WCCFtech, so who knows. I just hope it comes out before this month ends.
 


That part is kind of confusing. But maybe they saw it as a cost-cutting measure, since they know most Skylake users won't be depending on the IGP. It is still an improvement over Haswell's from what I've read.
 


Maybe it's because they wanted to have Broadwell specifically cater to laptops? That makes sense; integrated graphics are pretty important there. Though I think there are quite a few Broadwell chips made for desktops too.
 


Hillsboro, OR: D1X, D1D and D1C, 300mm wafers, 10, 14 and 22nm
Chandler, AZ: Fabs 12, 32 and 42, 300mm wafers (450mm for Fab 42), 14, 22 and 65nm
Rio Rancho, NM: Fab 11, 300mm wafers, 32/45nm
Hudson, MA: Fab 17, 300mm wafers, 130nm
Leixlip, Ireland: Fab 24, 300mm wafers, 14nm
Kiryat Gat, Israel: Fab 28, 300mm wafers, 22nm
Dalian, Liaoning, China: Fab 68, 300mm wafers, 65nm

Those are all the fabs Intel has currently, with the majority in the US and most of those in either Oregon or Arizona (the location in Arizona gets 350 days of sunshine a year). The majority of CPUs are made here, and the ones sold here are normally made here. Of course the process node each fab works on might have changed, but that is the most up-to-date information I can find.

They have assembly sites in Costa Rica; Chandler, Arizona; China; Malaysia; Vietnam; and Israel. None in Taiwan. So it wouldn't make sense to fab a CPU in Oregon or Arizona, then send it across the world to be packaged and tested, then ship it all the way back.



No CPUs are fabricated in Taiwan. Most are fabricated in the US with three other Fabs outside of the US, most likely to cut production and shipping costs to other markets outside of the US.

Just look at my list above: most of the active fabs are in the US. Sure, it would be cheaper elsewhere, but Intel probably likes to keep fabrication as local as possible to maintain better control. Is it easier to control in the US or in China?



I think the reason with these specific CPUs is that the ones we are getting right now are targeted at enthusiasts. They are also supposed to be releasing a GT4e version of Skylake later on, which will be targeted at HTPCs and such.

I understand it. Personally I would like no IGP but since I don't feel like spending 50% more for the same setup on LGA 2011 I am willing to sacrifice it.
 
Intel's Skylake Processors Allegedly Rocking 'Inverse Hyper Threading' - VISC Like Architecture with Massive Single Threaded Performance
http://wccftech.com/intel-inverse-hyper-threading-skylake/
Intel Skylake Gen9 Graphics Architecture Explained - GT2 With 24 EUs, GT3 With 48 EUs and GT4e With 72 EUs
http://wccftech.com/intel-skylake-gen9-graphics-architecture-explained-gt2-24-eus-gt3-48-eus-gt4e-72-eus/
Early Intel Skylake Linux Users May Run Into A Silly Issue
http://www.phoronix.com/scan.php?page=news_item&px=Intel-SKL-Prelim-Support

@logain: drivers may be an issue.
 
^That is very interesting. I wonder how manageable that is. Say you have two threads going and you can combine two cores for each thread...

Does anyone else find this interesting in an ironic sense? I remember back when K10 was supposed to come out, people were claiming that AMD was going to implement reverse HT (what they called it back then)...
 
There is always a trade off, Jimmy. Always.

There must be something Intel is not telling us about their implementation of VISC. Also, the iGPU improvement could be related to this first version of VISC as well.

There are a lot of things that make me think Intel is experimenting with this CPU more than it's letting us know.

Cheers!
 
Yuka, pretty much every new uArch from Intel (as opposed to die shrinks like Yorkfield, Ivy Bridge or Broadwell) has been Intel experimenting with new ideas and ways to improve performance and power efficiency.

Skylake is just doing it in a bigger way.
 


Where do you see Intel going backwards on IGP?

Perhaps there is some confusion on the part numbers as the larger IGP parts have not been officially announced or shipping yet.

The desktop Skylake so far only have 1 slice which is the replacement for GT2 graphics. The GT3 (2 slice) and GT4 (3 slice) will be much faster.
 


As interesting as that is I suspect it has more to do with cache advancements and that particular benchmark.

Things like Cinebench didn't see anywhere near a doubling in single thread performance. Just the typical ~5% or so.

Edit: Unless it needs a BIOS/OS switch or something to enable it.
 


Yes, you're right. What I mean though is that Intel is not letting all of the details out yet. I know they put weird stuff from time to time to see how it works, but VISC is, like you say, big. I think we have to wait a bit to see what is really new.



I've been reading that, for software to make full use of VISC-like uArchs, programs need their code changed a bit. I think it's a tad more than a simple recompile, or at least that's what I've read.

In any case, the idea is neat. I wonder how much is the overhead though.

Cheers!
 
Well, it could be doable in CPU HW. The idea is you take a thread, analyze the instructions within, and have multiple cores work on the instruction stream to limit the time instructions spend waiting for HW resources (ALUs, etc.) to free up. I really don't see any other way to do it; it's certainly not something you want done by the compiler/developer, and I'd be very hesitant to let the OS scheduler deal with this fine a level of control.
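For a rough intuition of the speculate/commit/squash cycle described above, here is a toy sketch in Python. Nothing here resembles real hardware; the worker pool, the `depends_on_previous` predicate and the redo step are all illustrative: iterations of one logical loop run speculatively in parallel, results commit in program order, and any iteration that actually depended on an earlier result is squashed and re-executed with the real input.

```python
from concurrent.futures import ThreadPoolExecutor

def run_speculative(work_items, depends_on_previous, compute):
    """Run compute(x) for each item, speculating that iterations are
    independent; commit in program order, and squash-and-redo any
    iteration that actually depends on the previous result.
    (Toy model only; the predicate must never flag item 0.)"""
    results = [None] * len(work_items)
    with ThreadPoolExecutor(max_workers=4) as pool:
        # Speculate: launch every iteration as if it were independent.
        futures = [pool.submit(compute, x) for x in work_items]
        for i, fut in enumerate(futures):
            if depends_on_previous(i):
                # Mis-speculation: discard the speculative result and
                # re-execute sequentially using the committed value.
                results[i] = compute(work_items[i] + results[i - 1])
            else:
                results[i] = fut.result()  # commit in program order
    return results

# Example: square each value; pretend item 3 depends on item 2's result.
out = run_speculative([1, 2, 3, 4],
                      depends_on_previous=lambda i: i == 3,
                      compute=lambda x: x * x)
print(out)  # [1, 4, 9, 169]  (item 3 redone as (4 + 9)**2)
```

The cost of a squash here is obvious: the speculative work for item 3 is thrown away and done again, which is exactly why the win depends on dependences being rare.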

Also remember that the OS literally has a couple hundred threads going at any one point in time, which have a much higher priority than any user-space program, so actually splitting up instructions on the CPU is itself a delicate dance. This is the type of thing that, honestly, would benefit RTOSes more than general-purpose PCs, simply because you have less going on in the background. I really don't see VISC being that big a thing for heavy OSes because it's so rarely going to get invoked.

In other words: "Best Case" again.
 


Broadwell's IGP was faster than Skylake's: http://www.tomshardware.com/reviews/skylake-intel-core-i7-6700k-core-i5-6600k,4252-9.html
 

I want to say that Broadwell's IGP was 40% faster than Haswell's, and Skylake's was 20%. You can see a 20 FPS difference in the link.
 

BDW's GT3e is faster without any doubt. Almost any HD Graphics iGPU without the help of the eDRAM cache and extra EUs will be slower than the ones with eDRAM.
 


Yes, but that is a GT2 part, not the GT4e part, which will be the replacement for Broadwell's:

http://www.guru3d.com/news-story/intel-skylake-gt4e-gpu-50-percent-faster-compared-to-broadwell-gt3e.html

According to Intel, Skylake's GT4e should be 50% faster in gaming than Broadwell's GT3e. You just need to give them time to release it.
 


I don't think it will take too long. From what I can find, Intel already has 5 fabs ready to produce 14nm, and Fab 42 can be spun up at any time, which would also use 450mm wafers for more chips per wafer. I think they just need to fully ramp up production and then we will be up to our eyeballs in Skylake CPUs.
 

i said that earlier in jest... but now i really think it'll take a while for intel. GT4e is the highest igpu part, has EDRAM, and is usually bundled with the highest-binned mobile cpus and socs. i distinctly remember how rare the hsw iris pro and -R processors were, and they never really became widespread. bdw's -c series came later and it's virtually vaporware despite its clearly superior performance. skylake is the latest and the DT parts are having trouble staying available. i am keeping my expectations lowered for GT4e (the supply, not the performance).

edit:
not only that, the iris pro parts usually have the highest igpu clocks as well as low TDP - making them particularly expensive. i wish intel would sell a core i3/xeon e3 v5 sku with iris pro at a sub-$190 price point.
 


Another nonsensical article from Usman. It is full of mistakes and, more importantly, there is none of that in Skylake.

Skylake's arch is just Haswell with more/bigger stuff: more execution units, a bigger OoO window, more integer registers...



The correct term for this technique is SpMT (Speculative Multithreading) or TLS (Thread-Level Speculation). It wasn't invented by Intel or AMD. In principle it scales up to an arbitrary number of cores using speculation; I have read studies made with 64 cores running a single thread. The technique is still at the research stage, and I doubt it will ever reach commercial status.

Contrary to Usman's claims this technique is completely unrelated to morphcore or VISC.

About the irony that you mention, the story is even funnier. The original K10 prototype that was going to implement SpMT was designed by the same engineer who previously worked on CMT at Intel. And he got the CMT concept from trying to copy to CPUs some aspects of Nvidia's SIMT model for GPUs. :lol:
 