[SOLVED] NVIDIA is working on a GTX 1650 Ti GPU, for an October launch !! *new info*

Hello,

One leak has just been published on an Asian website.

Rumour has it that Nvidia has plans to add a new entry into its Turing graphics card lineup, this time adding to the company's lower-end GTX range of GPU hardware.

Nvidia's next chip is rumoured to be called the GTX 1650 Ti, a graphics card which will sit between the GTX 1650 and GTX 1660, acting as a mid-way point between the two. The chip is rumoured to feature a CUDA core count of between 896 and 1408, enabling performance levels which could match or exceed Nvidia's last-generation GTX 1060 graphics card.

At this time, little is known about Nvidia's GTX 1650 Ti, though it is rumoured to be released sometime in late September or early October.


This graphics card will fill a major gap in Nvidia's low-end GPU lineup, a space which is being exploited by low-cost RX 570 and RX 580 graphics cards from AMD.
To sit between Nvidia's GTX 1650 and GTX 1660, Nvidia's GTX 1650 Ti would need to feature between 1024 and 1280 CUDA cores. Nvidia graphics cards tend to offer CUDA cores in collections of 128, making 1024, 1152 and 1280 CUDA core counts possible. Ultimately, it depends on the kind of silicon Nvidia plans to use to create its GTX 1650 Ti.
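To make the possibilities concrete, here is a quick sketch (just illustrative arithmetic in Python, using the core counts cited above):

```python
# Candidate CUDA-core counts for a chip slotted between the GTX 1650 (896 cores)
# and the GTX 1660 (1408 cores), assuming Nvidia sticks to steps of 128 as suggested above.
gtx_1650_cores, gtx_1660_cores, step = 896, 1408, 128

candidates = list(range(gtx_1650_cores + step, gtx_1660_cores, step))
print(candidates)  # [1024, 1152, 1280]
```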

AMD is rumoured to be working on a low-end Navi graphics card to compete with Nvidia's GTX Turing offerings, making the creation of a GTX 1650 Ti a solid move for the green team. The GTX 1650 Ti could act as a counter to Radeon's low-end Navi graphics cards, or at a minimum solidify Nvidia's position in the low-end GPU market.

http://www.fashaoyou.net/Article/1543/94385.html

Thanks for reading. You'll need to translate the above website.
 
Solution
I think people started buying two cheap GPUs instead of one high-end one, so AMD and Nvidia disabled multi-GPU support on lower-end GPUs.

Personally, I think that might not be the reason. If users end up buying two GPUs, Nvidia and AMD still end up selling more anyway, and those who buy low-end or mid-range GPUs probably never dreamed of buying a high-end GPU in the first place; at the very least, I think that will be the case for the majority of people. The real problem lies here: will spending the money on two GPUs be worth the performance they bring, despite the drawbacks of such a system? Ideally, we want two low-end GPUs to outperform a mid-range GPU while costing less to buy, and the same rule applies to mid-range GPUs. For the...
As per one recent article posted online (since taken down), in terms of specifications, the GTX 1660 SUPER has been confirmed to utilize the TU116-300 die, the same as the existing GeForce GTX 1660. It would feature the same core config of 1408 CUDA cores, 80 TMUs, and 48 ROPs. The clock speeds are not mentioned, but those could get a slight bump.

The major change to the graphics card would be the memory design.

While the GeForce GTX 1660 features 6 GB of GDDR5 memory running at 8 Gbps, the GeForce GTX 1660 SUPER would feature 6 GB of GDDR6 memory running at 14 Gbps. This memory is faster than the GeForce GTX 1660 Ti which features 12 Gbps dies. The memory would be featured along a 192-bit bus interface and deliver a total bandwidth of 336 GB/s.
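As a quick sanity check of that 336 GB/s figure, peak bandwidth is just the bus width times the per-pin data rate; a minimal Python sketch using the rumoured numbers above:

```python
# Peak memory bandwidth (GB/s) = bus width in bits / 8 * per-pin data rate in Gbps.
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gbs(192, 8))   # GTX 1660       (GDDR5 @  8 Gbps) -> 192.0 GB/s
print(bandwidth_gbs(192, 12))  # GTX 1660 Ti    (GDDR6 @ 12 Gbps) -> 288.0 GB/s
print(bandwidth_gbs(192, 14))  # GTX 1660 SUPER (GDDR6 @ 14 Gbps) -> 336.0 GB/s
```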

So yeah, there is surely a performance uplift...
 
The 1660 SUPER will most likely be similar to when the 1060 and 1080 were both upgraded with 9 Gbps memory. To be honest, I don't think we are going to see much of a performance uplift, unless they also change other things like adding more CUDA cores or raising the GPU clock.

BTW, it's already confirmed that Nvidia is already done with Turing's successor; it is just a matter of when to release it. Right now the GPU team is already working on the next-gen GPU that will replace Turing's successor.
 
The AMPERE GPU architecture is next on the roadmap, as far as I know. What other successor are you aware of?

Even Ampere, to my knowledge, is still not the official name for the architecture after Turing, and the early speculation was that Ampere would succeed Volta. I don't remember exactly when, but Nvidia stopped talking about their future architectures around the Maxwell or Pascal generation. Normally, when Nvidia launches a new architecture, they also reveal the code names of the next two architectures after it: we learned about Maxwell when Nvidia launched Fermi in 2010, and when they launched Kepler, Volta was announced to succeed Maxwell. But in the end, talking about this stuff years in advance turned out to be pointless, because when the architecture actually launched it ended up being different from what they had said three to four years prior. Maxwell, for example, was supposed to be another compute monster succeeding Kepler, and was supposed to be Nvidia's first GPU to integrate an ARM CPU. Then 20nm happened. To increase gaming performance over Kepler while staying on the same node, Nvidia ended up turning Maxwell into a pure gaming/rendering architecture and kept using Kepler as their compute solution for another two years until the Pascal GP100 arrived. The ARM CPU inside their discrete GPUs never happened either.


Why so? The faster memory will surely be an asset.

Unless the 1660 is being heavily starved by the GDDR5 it uses now, we might not see much improvement. Historically, Nvidia GPUs are not really that starved for memory bandwidth, unlike AMD GPUs.
 
No. I'm 100% sure the next-gen GPU architecture will be named AMPERE, unless Nvidia has a last-minute change of plans. Besides, Ampere is listed on NVIDIA's roadmap.
 
Well, in the interview Dally was asked what the code name for the new GPU would be, and he neither answered nor confirmed that Ampere is next. For me it could be like this: Ampere might be Volta's successor, but it might not be the architecture that succeeds Turing. Nvidia's architecture naming has been a bit all over the place since the Pascal generation. We know GP100 was designed differently from the rest of the Pascal chips, yet they were all still called Pascal anyway; in a way, the chips other than GP100 would probably be more accurately called Maxwell version 3. And then, despite Volta and Turing being very similar in design, Nvidia decided to call its compute architecture Volta and its gaming architecture Turing. If Nvidia follows its current naming scheme, its compute and gaming solutions will bear different architecture names even if the designs are almost identical.

Right now we just don't know when Nvidia will launch Turing's successor. Some people speculate Q3 2020, because Turing was launched in Q3 2018, but my gut tells me we might see it in Q2 instead of Q3.
 
I'm hoping next year Nvidia launches the RTX 3000 series of GPUs, based on the same TURING architecture. Though, Nvidia doesn't have to worry that much about any competition from AMD, since they still have the fastest high-end GPUs to date.

The RTX 2080 Ti, to be more precise. AMD is still behind in the flagship bracket.
 
Right now we just don't know when Nvidia will launch Turing's successor. Some people speculate Q3 2020, because Turing was launched in Q3 2018, but my gut tells me we might see it in Q2 instead of Q3.

BTW, I speculate Q3 2020 at the earliest. It might also release in Q2, but I don't think Nvidia is in any hurry to release high-end flagship GPUs sooner. I was reading one article which stated that the next-gen GPUs would have MAJOR architectural changes as well.
 
I'm hoping next year Nvidia launches the RTX 3000 series of GPUs, based on the same TURING architecture. Though, Nvidia doesn't have to worry that much about any competition from AMD, since they still have the fastest high-end GPUs to date.

The RTX 2080 Ti, to be more precise. AMD is still behind in the flagship bracket.

For one, Nvidia can improve Turing the same way they improved Pascal over Maxwell: no big architectural changes, but push things like GPU clocks further and shrink the die further on top of the node shrink. Die-size wise, AMD usually ends up much smaller than Nvidia, but during the Pascal generation Nvidia ended up winning on both performance and die size versus AMD. Look at GP106, for example: it is almost as fast as the GTX 980 despite having only half the CUDA cores, because its higher clocks compensate for the missing cores, and fewer CUDA cores also meant Nvidia could reduce die size further on top of the 16nm node shrink at the time. Right now people say AMD's Navi will still have a die-size advantage over Nvidia even if Nvidia shrinks Turing to a 7nm process, but as I see it they are just making assumptions based on a direct shrink of Turing, without taking into account the improvements Nvidia can make to the architecture itself alongside the die shrink.

Also, even if AMD does not compete with Nvidia's fastest cards, Nvidia still needs to release faster GPUs to the market. Nvidia did not wait for AMD to compete with the GTX 1080 before releasing the GTX 1080 Ti, and they did not wait for AMD to compete with the GTX 1080 Ti before releasing the RTX 2080 Ti.

BTW, I speculate Q3 2020 at the earliest. It might also release in Q2, but I don't think Nvidia is in any hurry to release high-end flagship GPUs sooner. I was reading one article which stated that the next-gen GPUs would have MAJOR architectural changes as well.

I speculate it might be early because Nvidia sometimes releases their stuff at times people don't expect; Pascal and Volta were like that. Also, the competition is going to heat up next year with Intel joining the fray. Rather than waiting for the competition to catch up, Nvidia might be the one to take the initiative first. Just look at the recent support for integer scaling: Intel was the first to announce support for the feature, and got some praise from the public for doing it, when both AMD and Nvidia had chosen to ignore it for years. Then, bam, Nvidia suddenly released a driver supporting integer scaling even before Intel released theirs to the public.
 
As for the 1660 SUPER, I wouldn't expect that much from it. A 1660 Ti is only around 15% faster than a 1660, while featuring about 9% more graphics cores and 50% faster GDDR6 VRAM. The cores should account for about a 9% performance difference, while the faster VRAM not much more than 6% or so. Even if the VRAM is updated to be around 16% faster than what's in the 1660 Ti, that should only amount to roughly a 7% performance uplift, which should place its performance almost directly in between the 1660 and 1660 Ti, assuming clock rates don't change much. So, not exactly all that impressive.

Of course, its value will come down to pricing. The 1660 has an official MSRP of $219, and the 1660 Ti is $279, though some models can be found for less. If the 1660 SUPER replaces the 1660 at its existing price point, that could be a reasonable update, and would likely cause the 1660 and 1660 Ti to drop in price a bit. If, on the other hand, they slap a $249 price on it, then nothing has really changed in terms of value.
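For what it's worth, here is a rough Python sketch of that back-of-the-envelope estimate; the percentages come from the reasoning above, not from any benchmark:

```python
# Rough scaling estimate for the rumoured GTX 1660 SUPER, using the figures above.
ti_uplift   = 0.15                      # 1660 Ti is ~15% faster than the 1660
core_share  = 0.09                      # ~9% of that attributed to its ~9% extra CUDA cores
vram_share  = ti_uplift - core_share    # leaves ~6% for its 50% faster GDDR6

# The rumoured SUPER keeps the 1660's core count but gets 14 Gbps GDDR6,
# about 16% faster than the Ti's 12 Gbps memory, so the VRAM contribution
# scales up modestly from ~6% to ~7%, assuming clocks stay roughly the same.
super_vram_gain  = 14 / 12 - 1          # ~0.17
estimated_uplift = vram_share * (1 + super_vram_gain)
print(f"~{estimated_uplift:.0%} over the GTX 1660")  # ~7%, between the 1660 and the 1660 Ti (+15%)
```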

I'm hoping next year Nvidia launches the RTX 3000 series of GPUs, based on the same TURING architecture.
I suspect they will probably release what amounts to a Turing refresh on a smaller process node, with significantly more RT cores on the higher-end parts to help make the performance of raytraced lighting effects somewhat reasonable, and RT support brought to the mid-range as well.

For as much as Nvidia pushed raytracing at Turing's launch, performance with the feature enabled is quite poor, and is definitely the lineup's weakness from a performance standpoint. The 20-series cards are generally good for 1440p to 4K in most games, but raytracing effectively turns them all into 1080p cards, with the more moderately-priced models struggling to even manage that. A doubling of RT cores could halve the performance hit from raytracing and make the effects a lot more viable. I believe the RT cores don't take up more than 10% of the current chips, so a process-shrink could make doubling those components a viable option. If raytracing becomes a big feature of the next-generation consoles, and competing cards support it as well, you can expect it to become a standard feature among "ultra" graphics settings, so RT performance may become a big selling point in the next year or two.

Aside from that, the performance improvement to rasterized graphics might not be all that large for a given price point, and they might just rely on the process-shrink to increase performance rather than increasing core counts.
 
Also, the competition is going to heat up next year with Intel joining the fray. Rather than waiting for the competition to catch up, Nvidia might be the one to take the initiative first. Just look at the recent support for integer scaling: Intel was the first to announce support for the feature, and got some praise from the public for doing it, when both AMD and Nvidia had chosen to ignore it for years. Then, bam, Nvidia suddenly released a driver supporting integer scaling even before Intel released theirs to the public.

INTEL is only going to release mainstream mid-range budget GPUs next year, based on the XE architecture. It was pointed out before by RAJA that they don't have any plans to target the high-end GPU market.

So we might mostly get to see GPUs priced below $200-250, IMO.

As for the 1660 SUPER, I wouldn't expect that much from it. A 1660 Ti is only around 15% faster than a 1660, while featuring about 9% more graphics cores and 50% faster GDDR6 VRAM. The cores should account for about a 9% performance difference, while the faster VRAM not much more than 6% or so. Even if the VRAM is updated to be around 16% faster than what's in the 1660 Ti, that should only amount to roughly a 7% performance uplift, which should place its performance almost directly in between the 1660 and 1660 Ti, assuming clock rates don't change much. So, not exactly all that impressive.

Of course, its value will come down to pricing. The 1660 has an official MSRP of $219, and the 1660 Ti is $279, though some models can be found for less. If the 1660 SUPER replaces the 1660 at its existing price point, that could be a reasonable update, and would likely cause the 1660 and 1660 Ti to drop in price a bit. If, on the other hand, they slap a $249 price on it, then nothing has really changed in terms of value.

The GTX 1660 SUPER is a very confusing and strange entry. Makes little sense, imo. Despite being a SUPER variant, I think it will still lack RT and Tensor cores as well?

So I don't see any point in releasing this card. Why give it the SUPER nomenclature in the first place? I totally agree with your other points, though.
 
INTEL is only going to release mainstream mid-range budget GPUs next year, based on the XE architecture. It was pointed out before by RAJA that they don't have any plans to target the high-end GPU market.

So we might mostly get to see GPUs priced below $200-250, IMO.

Except we don't know what kind of performance that "mainstream" Intel is referring to. Imagine if their $250 mid-range cards were in the realm of the RTX 2070 or RX 5700 XT; that could shake the market. Some people might think Intel doesn't have the brand power or the expertise of Nvidia and AMD in graphics cards (especially on the software side), but Intel can brute-force users onto their GPUs by bundling OEM machines with their GPU for a much cheaper price than ones using AMD or Nvidia discrete GPUs. This kind of strategy was very successful in the compute-accelerator market, where Intel was able to erode Nvidia's dominance and pretty much kill AMD in that particular market.
 