Nvidia GeForce GTX 1000 Series (Pascal) MegaThread: FAQ and Resources

@U6b36ef
Unfortunately, the real world is not as simple as you think. With each die shrink, the angle you have to be able to resolve gets smaller in the optical parts of the lithographic process. That is fine on a small chip, because you can make the lens effectively narrower, but if you want to make a bigger chip, the lens has to have very good resolution (it has to be of much better quality) at the outer edges, and that technology is part of the reason die shrinks take time. In addition, the new FinFETs suffer more from quantum effects such as current leakage, which increases the chances of producing faulty parts because the transistors are more sensitive to the thinness of the transistor walls.

All silicon production has a certain percentage of failing transistors, and a new shrink is always more error prone than the previous one. If that weren't the case, there would be no such thing as binning chips into 1080s and 1070s, and no silicon lottery...
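To put a rough number on that, the classic Poisson yield model ties the chance of a defect-free die to die area times defect density. A tiny sketch (the defect densities here are invented placeholders, not real foundry figures):

```python
import math

def poisson_yield(die_area_mm2, defects_per_mm2):
    """Poisson yield model: probability a die has zero killer defects."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

# Defect densities below are made-up placeholders, not real foundry data.
mature_node = 0.001   # defects per mm^2 on a well-understood process
fresh_shrink = 0.003  # a new shrink tends to be more error prone

for area in (200, 314, 471):  # roughly small-die, GP104-class, GP102-class sizes in mm^2
    print(f"{area} mm^2 die: mature node {poisson_yield(area, mature_node):.0%}, "
          f"fresh shrink {poisson_yield(area, fresh_shrink):.0%}")
```

A die that fails for the full configuration can often still be sold with parts fused off, which is exactly the 1080/1070-style binning mentioned above.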
 
EVGA 1080 Hybrid finally out http://www.evga.com/Products/Product.aspx?pn=08G-P4-6288-KR

$729 for an FTW hybrid card.

[Image: EVGA GTX 1080 FTW Hybrid, 08G-P4-6288-KR]
 


The 900 series (and the 750 Ti) is not simply Maxwell redesigned on 28nm. GM200, for example, was supposed to replace GK110 in both SP and DP performance. When Nvidia came out with GK210, some people already predicted that Maxwell probably did not go the way Nvidia first envisioned it. And since Nvidia also has a few engineers dedicated to TSMC, they were probably fully aware early on that 20nm was a no-go for GPUs. Personally I think Nvidia already had a backup plan for Maxwell from early on; if they hadn't, we would not have seen Maxwell in 2014, the year Nvidia's roadmaps from 2009/2010 had promised it.
 
Interesting article/video I found on the Pascal/Maxwell architecture, with insight into how it might achieve its efficiency.

http://www.realworldtech.com/tile-based-rasterization-nvidia-gpus/

(Might be handy if someone over at Radeon read this!)
 

I'm not sure what they are talking about. My 1070 STRIX never goes above 65C in my case with the fan speed at 40% (virtually silent) during extended gaming sessions (3+ hours) of The Witcher 3 / Mad Max, both of which utilize about 80% of the GPU. My case is the Fractal Design R5, which is not known for its airflow either.
 


Tegra probably did not go the way they hoped in mobile, but Nvidia's involvement with mobile made them rethink how they design their GPUs for power efficiency. As for AMD, I think they could be more power efficient than they currently are (or even be ahead of Nvidia), but ultimately they chose not to drastically change how GCN works. This is my speculation, but to be more power efficient AMD would need to make changes large enough that the new architecture probably couldn't be called GCN anymore. I think AMD wants their hardware in the consoles to have an impact on their PC performance. Why is AMD pushing low-level APIs such as Mantle on PC? Because with a low-level API they can push more optimization responsibility onto developers, and to make developers' jobs easier they need the consoles to share the same architecture as their PC hardware. This is probably the most sensible route for AMD when they don't have resources like Nvidia does, but the trade-off is that they can't change their architecture too much or they lose the benefit of having their hardware in the consoles.
 
Nice article for people who want to know a bit more about how nVidia gets very good perf/watt: http://www.realworldtech.com/tile-based-rasterization-nvidia-gpus/

Related reading: https://imgtec.com/blog/a-look-at-the-powervr-graphics-architecture-tile-based-rendering/
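The short version of those articles, for anyone who doesn't want to click through: instead of rasterizing each triangle across the whole framebuffer immediately, a tile-based rasterizer bins primitives into small screen-space tiles and then shades one tile at a time out of on-chip memory, which saves a lot of DRAM bandwidth. A toy sketch of just the binning step (illustrative Python, not how the actual hardware is wired; the 16-pixel tile size is an assumption):

```python
from collections import defaultdict

TILE = 16  # tile size in pixels; real hardware tile sizes vary and aren't public

def bin_triangles(triangles, width, height):
    """Toy binning pass: record, per screen tile, the triangles whose bounding box overlaps it."""
    bins = defaultdict(list)
    for tri_id, verts in enumerate(triangles):      # verts = [(x, y), (x, y), (x, y)]
        xs = [x for x, _ in verts]
        ys = [y for _, y in verts]
        x0, x1 = max(0, int(min(xs))), min(width - 1, int(max(xs)))
        y0, y1 = max(0, int(min(ys))), min(height - 1, int(max(ys)))
        for ty in range(y0 // TILE, y1 // TILE + 1):
            for tx in range(x0 // TILE, x1 // TILE + 1):
                bins[(tx, ty)].append(tri_id)
    return bins

# Each tile's list can then be rasterized and shaded against a small on-chip
# tile buffer instead of streaming the whole framebuffer through DRAM.
tris = [[(5, 5), (40, 8), (20, 30)], [(1000, 200), (1100, 260), (990, 300)]]
print(bin_triangles(tris, 1920, 1080))
```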

Context: AMD got rid of a lot of their "low power" optimizations when they sold ATI's mobile graphics assets to Qualcomm. I wonder how much of that portfolio is still relevant today.

Cheers!
 
Well, there's more to this than just "better perf/watt".

It is nice to see how nVidia broke the mold (so to speak) in the world of traditional GPUs, using a technique common in mobile GPUs to rework the rendering model. The interesting part is whether AMD might look into that same territory (if it isn't already).

Cheers!
 
Hmm, just got an email from Newegg about the "980" I ordered from them being involved in a lawsuit. (It was a 980 Ti.)

Dear James E Mason,
We have been informed about a class action lawsuit settlement with NVIDIA regarding the GTX 980 graphics card. Newegg is currently awaiting details about the settlement claims process (instructions, website). Once we have this important information, we will send you a follow-up email with the specifics on how you can submit your claim.

If you have any questions regarding the information provided in this email, please don't hesitate to contact Newegg Customer Service through one of the convenient contact methods provided here.

Sincerely,
Your Newegg Customer Support Team
 

What the actual f... what?
 


It's the stupidity/laziness of the court in California where the lawsuit was filed. They just lumped all 9xx series cards together in the lawsuit description overview instead of singling out the 970, the only model in the actual lawsuit itself. According to Newegg, it's not unusual for class action lawsuit notices to fail to break down details like this. FYI, Newegg has since removed the forum post that referenced all 9xx cards.


 


Wonder if my 750 Ti or 960M will get included then >_> (the 750 Ti is kinda technically a 900-series card, architecture-wise)
 


No, because anything other than the GTX 970 is not part of the lawsuit. This was a clerical issue that referenced all 9xx cards. The actual wording of the lawsuit itself:

"The instant case is a consolidated set of 15 consumer class action lawsuits filed nationwide beginning in February 2015, which were either filed in or transferred to the U.S. District Court for the Northern District of California, as well as an additional lawsuit pending in San Diego County Superior Court. All of these related actions arise out of allegedly false and misleading representations on the packaging and advertising of computer graphics cards incorporating the NVIDIA GeForce GTX 970 graphics processing units (hereinafter, the “GTX 970” devices)."

 


For us end users it probably does not really matter, but I think it is an important thing for AMD and Nvidia. Power efficiency matters especially if AMD wants more design wins in the laptop market. During the 5000 and 6000 series, the laptop market was dominated by AMD because of their power efficiency; back then, laptop discrete graphics market share was almost exactly the opposite of what we see in desktop GPU market share, with AMD commanding around 60-65 percent in laptops.

In the desktop market we will see power efficiency become more important as we near the ~300W limit for a single card. Right now Nvidia has the advantage of being faster while consuming much less power. Imagine if both the AMD and Nvidia cards were rated at 250W: as things stand, which company would have the faster card if we limited power consumption to 250-300W?
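That question is basically just perf-per-watt times the power cap. A back-of-the-envelope sketch (the relative efficiency figures are invented purely to show the arithmetic):

```python
def perf_at_cap(perf_per_watt, power_cap_w):
    """At a fixed board power limit, achievable performance is efficiency times the cap."""
    return perf_per_watt * power_cap_w

# Relative efficiency figures below are made up just to illustrate the point.
vendor_a = 1.0   # baseline perf/W
vendor_b = 1.4   # hypothetically 40% better perf/W

for cap in (250, 300):
    print(f"{cap} W cap -> A: {perf_at_cap(vendor_a, cap):.0f}, "
          f"B: {perf_at_cap(vendor_b, cap):.0f} (relative units)")
```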
 
The reviewers already have their samples; Gordon from PCWorld was showing his Nvidia Titan X config and promised benchmarks today.

Those are impressive scores, slightly below my friend's Maxwell Titan X SLI scores...
 
That aside, does Nvidia intend to sell many of these? It seems not even board partners will get their hands on this new Titan. Also, I believe most people are more interested in a 1080 Ti. Some people have already theorized that Nvidia probably won't bring out a 1080 Ti until AMD can come up with a direct competitor to the 1080. Does AMD really intend to leave the Fury series as their high-end GPUs this year?
 
I think the CEO made these cards for deep learning; however, he knows the cards will be popular, and those that want the best will spend the money. I will probably wait for a fully enabled chip with HBM2 before I upgrade. The 1080 is fantastic; I finally got some time to enjoy it...
 
They stated this card was not aimed at gaming but rather at deep-learning type uses. That won't stop people from buying the stupidly expensive card for a gaming rig, but it's still not the stated purpose of the card.

And I am pretty sure this was going to be an Nvidia exclusive with no partner cards; at least I'm pretty sure that's what the PR said. It's for sale now on the Titan X product page and in stock for $1,200, limit 2 per person.

Looking at these numbers http://videocardz.com/62766/first-gaming-benchmarks-of-nvidia-titan-x-are-here it is nowhere near worth the price. A $500 premium for an extra 9-10 fps (at 4K) is a price strictly for suckers.
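Putting that in plain numbers, using the rough figures from the post above (approximate prices, not exact street pricing):

```python
# Rough figures from the post above; street prices varied at the time.
titan_x_price = 1200      # USD, Titan X (Pascal)
gtx_1080_price = 700      # USD, roughly what a GTX 1080 went for
extra_fps_at_4k = 9.5     # midpoint of the quoted 9-10 fps gap

premium = titan_x_price - gtx_1080_price
print(f"${premium} premium / {extra_fps_at_4k} fps = "
      f"${premium / extra_fps_at_4k:.0f} per extra frame per second")
```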
 
I was expecting the new Titan to be around that performance level relative to the 1080. Also, lots of people are complaining that the reviews did not have nVidia-specific features turned on.

In any case, if the performance gap is so narrow, is there any space left for a 1080 Ti? I mean, an OCed 1080 is *very* close to the Titan X (Pascal), so cutting down the Titan would probably put it at the same performance level as a mid-OC 1080.

Cheers!
 
Nvidia could give the 1080 Ti the same spec as the new Titan X except for the VRAM amount; the 780 Ti also had the same spec as the Titan Black except for VRAM. Actually, the Titan X is already using a cut-down version of GP102; so far only the Quadro uses the fully enabled GP102. So to be honest, I'm thinking we probably won't see a further cut-down chip for a 1080 Ti. Also, the 1080 Ti is just one possibility: if AMD really doesn't come out with anything faster than the Fury X, we might see GP102 used in Nvidia's next series instead, like what happened with GK110.