Does anyone know what Nvidia's GPU release schedule will be for the next 5 years?

SeriousGaming101

Distinguished
Mar 17, 2016
1. Nvidia is releasing the RTX series GPUs (2080 Ti, 2080, 2070, etc.), but how long until the next series of GPUs or the next architecture arrives? Like an RTX 2180 or whatever name Nvidia comes up with in its schedule over the next 5 years?

2. For AMD it's Polaris >>> Vega >>> Navi >>> ?????

3. What/when is Nvidia's next GPU? Maxwell >>> Pascal >>> Turing >>> ????? >>> ?????
Will Nvidia release another flagship GPU next year, in 2019? In 2 years? Anyone know?
 
Yes, Nvidia knows, maybe. Otherwise, no. Nobody has access to that information, and as far as I know they aren't sharing the intended roadmap. Half the time we don't even have accurate information on what architecture is coming out next, as with this release, where they skipped right over the expected architecture and went straight to Turing. If you look at practically any site's "roadmap", it shows Maxwell, Pascal and then Volta. Where is Volta? Lost? No, skipped.

Your best bet, if you want to "feel" like you know what's up and coming, is to go do some reading at Ars Technica:

https://arstechnica.com/

Just don't expect any of the supposedly "insider" reports to be accurate. They occasionally get it right, but often it's just a lot of guesswork and outright fiction. Nobody really knows anything until they volunteer it, and they don't usually do that until a short while before they plan to publicly launch at one event or another. Anything other than straight from the horse's mouth, and you might as well just make something up and believe it.

 
Solution

Karadjgne

Titan
Ambassador
If AMD knew what Nvidia was actually going to do in the next 5 years, they'd scrap most current plans and be hard at work trying to beat Nvidia to the punch. Imagine if AMD had released a GPU with even close to the power of a 2080 earlier this year, when mining was still big.

No. You won't get anything even close to concrete until Nvidia decides AMD needs another kick in the gut.
 
It's actually kind of hilarious to imagine that Nvidia really did have the next 5 years of releases set, that the information got out, and that anyone could get it simply by asking random people on the internet.

The best you can do, in real life, is to read up on the last several generations of Nvidia cards. For instance, start with the GTX 5xx series. When was it announced? When was it actually released? When was the GTX 6xx series announced? When was it released? Do the same until you reach today's cards. You should see some patterns in how long passes between generations. Now use that information to project out into the future.
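For illustration only, here is a minimal Python sketch of that back-of-the-envelope exercise. The launch dates are approximate and from memory, and the projection is nothing more than an average of past gaps, not anything official:

```python
from datetime import date, timedelta

# Approximate launch dates of past x80-class cards (rough, from memory;
# this is only meant to illustrate the pattern-spotting exercise).
launches = {
    "GTX 580": date(2010, 11, 1),
    "GTX 680": date(2012, 3, 1),
    "GTX 780": date(2013, 5, 1),
    "GTX 980": date(2014, 9, 1),
    "GTX 1080": date(2016, 5, 1),
    "RTX 2080": date(2018, 9, 1),
}

dates = sorted(launches.values())
gaps = [(later - earlier).days for earlier, later in zip(dates, dates[1:])]
avg_gap_days = sum(gaps) / len(gaps)

print(f"Average gap between launches: {avg_gap_days / 365:.1f} years")

# Naive projection: assume the next flagship lands one average gap
# after the most recent launch. Treat the result as a guess, nothing more.
projected = dates[-1] + timedelta(days=round(avg_gap_days))
print(f"Projected next flagship launch: ~{projected.isoformat()[:7]}")
```

With those dates the average gap works out to roughly a year and a half, so the projection lands somewhere in 2020, which is exactly the kind of rough guess this exercise produces.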
 
Except that, as we've increasingly seen from Intel, that doesn't always pan out too well if they run into problems in the fab process or shortages of materials. That can seriously delay an architecture, or cause it to be skipped completely.

In fact, Nvidia is a good example. We were supposed to be seeing Volta cards right now, not Turing. So until a company actually says "This is what we are releasing", it is STILL nothing but guesswork. Might as well write a fictional novel. It has just as much chance of being the truth as any patterns we might see in the clouds.
 

Karadjgne

Titan
Ambassador
Wasn't it just last week that people were still debating whether they'd be called GTX 1180s or GTX 2080s, or whether Nvidia would even stick to that scheme? I don't think many expected RTX, not with AMD already using RX.

5 years ago, Nvidia had just released the GTX 780. I seriously doubt that Turing, and a GPU such as the RTX 2080, was even the conception of an idea at that time.
 


In the past, Nvidia used to name its future architectures two generations ahead, along with some of their features. For example, we knew about Maxwell, the successor to Kepler, back in 2009/2010 when Nvidia came out with Fermi. When Nvidia launched Kepler, we knew Volta would be Maxwell's successor (back then Pascal didn't even exist on Nvidia's roadmap). But in the end, talking about all this stuff in advance is useless, because a lot of things and problems can happen that end up changing the roadmap or even the architecture itself. Take Maxwell, for example. Nvidia's initial picture of Maxwell was another compute architecture that would succeed Kepler in both FP32 and FP64, and with Maxwell, Nvidia was supposed to have some kind of ARM CPU integrated directly into the GPU die itself (still a GPU, not an APU). In the end, none of that happened.
 


From my understanding, Turing is indeed a modified Volta. Turing just lacks some of the compute features needed by HPC and data centers (for example, extensive FP64 support), but everything else points toward Turing using Volta's base design. The SM configuration, for example, is the same between the two; both can run INT and FP operations at the same time. Turing's main improvement is the addition of dedicated RT cores.
 

mjbn1977

Distinguished
I think it gets harder and harder to predict because we are getting close to the physical limits of manufacturing processes. We hear a lot about how semiconductor manufacturers struggle with the new, smaller processes (look at Intel, which is badly delayed with its 10nm process). Pascal was 16nm and 14nm (some chips), Turing is 12nm. A lot of people are hoping and waiting for 7nm. I read that Nvidia decided on 12nm this time around due to the much higher yields that can be achieved at 12nm compared to 10nm or smaller. So manufacturing process improvements get harder and harder to accomplish, and therefore harder to predict.

Furthermore, we have reached a point where more graphics power is not really useful or necessary, at least at 1080p and 1440p, and at least for the time being. Some RTX 2080 Ti reviews talk about how, even at 1440p, the 2080 Ti is bottlenecked by an overclocked 8700K. So what is the point of more rasterization power unless you mainly play at 4K? I think future improvements in gaming graphics quality will mainly come from ray tracing, and that is where a lot of future chip innovation has to go.
 


The point is, they didn't follow the roadmap the way they reported it and led us to expect. Certainly we've seen generations before that were only partial redesigns, or were based on the previous generation, but that still is not THAT generation; it's a derivative of it. So if you expect a thing and never see it, it's pointless to plan around that thing.
 

mjbn1977

Distinguished


Well, there is no law that a company has to stick to its previously made and announced plans. Plans are there to be changed, adjusted, tweaked, and sometimes to be followed. I think this whole discussion about what Nvidia is doing is stupid. They can do whatever they think is best for them. They can price the cards however they want. If nobody buys the new cards, they will lower the price. If the cards sell out and go like hot cakes, it will prove them right and show their high-price approach was justified. Never forget that the RTX 2080 is considered the new high-end and the RTX 2080 Ti the new enthusiast high-end. WITHOUT COMPETITION!! There are plenty of cheaper cards people can buy. Those cards are not meant to be bought by the masses of players; that's what the mid-range cards are for.
 

mjbn1977

Distinguished


You are right and I agree, Darkbreeze. I got carried away. Sorry! ;)

 


That's what I said in my first reply to this thread. Turing doesn't differ that much from Volta; it just has extra bits that are more relevant to rendering performance and ray tracing, so maybe you can say Turing is Volta v2. But Maxwell? It is definitely a different creature than what Nvidia first told us about, haha.