Nvidia Volta Megathread

But at what clock level does this occur?

Maybe somewhere above the 2.1GHz mark? That's probably why Nvidia decided to "lock" Pascal at roughly a 2.0-2.1GHz limit right now. They could probably engineer a workaround to make it clock higher, but would the effort be worth it? Just look at AMD's Vega: AMD spent 3.9 billion transistors just so GCN could be clocked much higher. They finally managed it, but their power consumption also went crazy.
 


Maxwell's IPC is considerably higher than Pascal's: a 980 Ti @ 1600MHz matches a 1070 @ 2200MHz, and Kingpin's 2000MHz+ 980 Ti world record at the time was something like 27000ish in Fire Strike GPU score, which is close to a stock 1080 Ti. None of that really means much, though; we just want 20-40% real-world gaming performance increases at the same power draw. I don't really care what the clock rate or memory config is, just give me performance.
 


A 980 Ti has 2816 cores. A 1070 has 1920 cores. I agree that performance/watt is what we are looking for. The comparison you want is the 2048-core GTX 980 vs. the 1920-core GTX 1070. Still not exactly equivalent, but life is unfair.
 
Most people only look at core clock and overlook core count. At the architecture level, Maxwell's and Pascal's structures are almost identical, so IPC-wise they should be nearly the same (per AnandTech's assumption). Just look at the GTX 980 vs. the GTX 1060: the 1060 needs a much higher clock to reach 980 performance, but it also has far fewer CUDA cores than the 980 (1280 vs. 2048).
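
As a rough sanity check: theoretical FP32 throughput is about 2 FLOPs (one fused multiply-add) per CUDA core per clock, so cores x clock tells most of the story when the architectures are this similar. A quick sketch (the boost clocks below are ballpark spec-sheet figures, not guaranteed numbers):

    # Rough FP32 throughput: 2 FLOPs (FMA) per CUDA core per clock.
    gpus = {
        "GTX 980":    {"cores": 2048, "clock_ghz": 1.22},
        "GTX 1060":   {"cores": 1280, "clock_ghz": 1.71},
        "GTX 980 Ti": {"cores": 2816, "clock_ghz": 1.08},
        "GTX 1070":   {"cores": 1920, "clock_ghz": 1.68},
    }
    for name, g in gpus.items():
        tflops = 2 * g["cores"] * g["clock_ghz"] / 1000
        print(f"{name:10s} ~{tflops:.1f} TFLOPS FP32")

The 980 and 1060 land close to each other (~5.0 vs. ~4.4 TFLOPS), which matches the observation that the 1060 trades cores for clock.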
 
I have one problem with looking at individual "execution units" (or shader arrangements) and taking them as part of the "efficiency" equation.

That is similar to taking the ALUs, AGUs, FPUs, and other internal execution units of a CPU and using them as an effective measure of "efficiency" in a CPU design. I know you *can* do it, but it's hard to justify in context. When you count them, you're leaving out other parts of the GPU that should be included, such as memory controllers and... I can't think of another, haha.

It's simpler to just analyze GPUs as a monolithic entity, much like we view a CPU core. GPUs don't have that kind of internal divide (yet; they might actually go "MCM" shortly).

Cheers!
 


Another way to look at it is that the 980 Ti has 8 billion transistors, while the 1070 has 10% fewer, at 7.2 billion.
 
Nvidia is very aggressive about maintaining the lead they have in AI. Previously they relied on GK110 for four years as their top compute card. Now GP100 has only been on the market for a year and they're already replacing it with GV100 (though Nvidia is still selling even GK110-based accelerators).
 
There's one caveat, though. From what I'm gathering, nVidia is pushing to lead the conversation on AI technology. All the IP they're showing and discussing is proprietary, with zero hints of FOSS. This smells like a rehash of the CUDA strategy. I'm not saying it's bad or good, but that's what it smells like to my nose, heh.

Cheers!
 
Because it has worked very well for them for over a decade. Open source might be the best option in an ideal world, but the thing with open source is that everyone has a say in shaping the API/software. That's good because the software will work on all hardware, but sometimes it can hold things back from going forward. I think that's why AMD was very reluctant to give IHVs direct access to their Mantle API: they would give the base spec to the Khronos Group, but they didn't want Nvidia or Intel to have any hand in developing Mantle itself.
 
Fair point.

That is the good side of having a proprietary API in use: tailored performance. Open source APIs have the "fit all" and "backwards compatibility" problems, and neither is easy or simple to solve.

Hence why I don't consider it a bad thing inherently. It's just... I get itchy when I have to deal with proprietary stuff, haha.

Cheers!
 
I think Nvidia quite openly said they didn't like having to do all the work while others got a free ride. That's why they went the more proprietary route and, where possible, charge money for it. Consumers might not like this, but it's probably why Nvidia is in a much better financial position than AMD.

Personally, I would like Nvidia to adopt more open standards (like adaptive sync) instead of treating them as competitors to their own solutions. Though if Nvidia did support adaptive sync, it might end up hurting AMD more. To me, despite the complaints many people have about Nvidia not supporting adaptive sync, it gives AMD some breathing room: right now, adaptive sync support is becoming one of AMD's advantages. Just look at the recent Vega pricing. It would probably have been much less of a mess for AMD if Nvidia had decided to capitalize on their lead by keeping the 1080's original price and charging more for the 1080 Ti.
 
It's just an approach they have that I (and a lot more people) don't really share nor like.

FOSS, or free-to-use software, does not mean people are stealing from others when they contribute. The whole point is that "everybody wins". The take nVidia uses justifies their own angle, but at least they're honest about it: they don't like sharing because they feel like they lose instead. That is fine; they're not hiding it either. They're a company, after all. This model brings them the most profit, otherwise they'd use another business model.

And I can give you a lot of examples where the FOSS version is just as good as the private/closed one. And in terms of vulnerabilities, FOSS is better hands down.

BUT! This is a very dense and diverse topic that is too tangential to this thread, haha.

In regards to nVidia having success, that is thanks to their engineers first and marketing team second, haha. AMD derping is also helping them a lot.

Cheers!
 
When do you think I'll be able to get a notebook with Volta's replacement for the mobile 1070/1080? I want to know if it's worth the wait, really, considering I don't need a fast computer anytime soon and I'm already getting a half-decent 1080p machine with an RX 560 (a decent, low-power upgrade from a GTX 760) and an R5 1600 (from an A8-6600K, which is slower than a G4560). I still have loads of older games to play, and at medium settings I could play most newer games for the time being, especially since I'm finally upgrading to FreeSync. However, I can't always use the desktop, hence why I'm not going all-out, and I want to get a Clevo Coffee Lake notebook with a 1080 or whatever that will last me as long as possible for gaming anywhere in the house, so I don't have to wait for my sister to finish her homework.
 
Waiting can be worth it... but it can also go the other way, like for those who waited for Vega hoping it would compete head to head with Nvidia's 1080 Ti. A notebook with Nvidia's next-generation GPU probably won't be here anytime soon, though. The desktop cards will arrive first; the mobile versions might not come out until a few months later.
 


I read that, too. My first thought was that this is right on schedule: hold up a piece of silicon, say "we are introducing Ampere," give a list of features, and then say it's the next gen after Volta, available at some time in 2019. He's been doing this at GTC for years.
 


Reading the article, it sounds more like Ampere will directly replace "gaming" Pascal and will launch in Q2 2018. Nvidia probably decided to use separate architecture names for their gaming and compute GPUs. With Pascal we have gaming and compute variants; despite sharing the same architecture name, compute Pascal was built differently. With Kepler, even when GK110 was more compute-focused than its smaller siblings, they still shared a similar design: every SMX in Kepler had 192 CUDA cores. With Pascal, the "gaming" chips have 128 CUDA cores per SM, while in GP100 each SM has only 64. Anyway, if this is real, we should hear more early next year.
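
To make that SM split concrete, here is a quick sketch using the full-die configurations from Nvidia's public specs:

    # Cores per SM across generations (full-die configs).
    dies = {
        "GK110 (Kepler, gaming + compute)": (2880, 192),
        "GP102 (gaming Pascal)":            (3840, 128),
        "GP100 (compute Pascal)":           (3840, 64),
    }
    for die, (cores, per_sm) in dies.items():
        print(f"{die}: {cores} cores / {per_sm} per SM = {cores // per_sm} SMs")

GP102 and GP100 carry roughly the same core count, but GP100 splits it across twice as many, smaller SMs, which is exactly the structural difference being described.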
 
Nvidia Is Building Its Own AI Supercomputer
by Paul Alcorn November 13, 2017 at 3:00 PM

AI has begun to take over the data center and HPC applications, as a quick trip through the recent HotChips and ongoing Supercomputing 2017 trade shows quickly affirms.
Nvidia has been one of the companies at the forefront of AI development, and its heavy investments in hardware, and more importantly its decade-long development of CUDA, have paid off tremendously. In fact, the company announced that 34 new Nvidia GPU-accelerated supercomputers were added to the TOP500 list of supercomputers, bringing its total to 87. The company also powers 14 of the 20 most energy-efficient supercomputers on the Green500 list.
Jensen Huang, Nvidia CEO, took to the stage at Supercomputing 2017 to announce that the company is also developing its own AI supercomputer for its autonomous driving program. Nvidia projects its new supercomputer will land in the top 10 list of worldwide AI supercomputers, which we'll cover shortly.

New threats loom in the form of Intel's recent announcement that it's forming a new Core and Visual Computing business group, headed by Raja Koduri of AMD fame, to develop not only a discrete GPU but also a differentiated graphics stack that scales from the edge to the data center. AMD is also enjoying significant uptake of its EPYC platform paired with its competitive Radeon Instinct GPUs, a big theme here at the show, while a host of FPGA and ASIC vendors are also vying for a slice of the exploding AI segment.
The show floor is packed with Volta V100 demonstrations from multiple vendors, as we'll get to in the coming days, and Nvidia also outlined its progress on multiple AI fronts.
Nvidia's SaturnV AI Supercomputer
Nvidia also announced that it's upgrading its own supercomputer, the SaturnV, with 660 Nvidia DGX-1 nodes (more detail on the nodes here), spread over five rows. Each node houses eight Tesla V100s for a total of 5,280 GPUs. That powers up to 660 PetaFLOPS of FP16 performance (nearly an ExaFLOPS) and a peak of 40 PetaFLOPS of FP64. The company plans to use SaturnV for its own autonomous vehicle development programs.
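Those figures line up with V100's per-card peaks (roughly 125 TFLOPS tensor/FP16 and ~7.5 TFLOPS FP64); a quick back-of-envelope check:

    nodes = 660
    gpus_per_node = 8
    total_gpus = nodes * gpus_per_node      # 5,280 V100s
    tensor_tflops = 125.0                   # V100 peak tensor/FP16 throughput
    fp64_tflops = 7.5                       # V100 peak FP64, approx.
    print(total_gpus * tensor_tflops / 1e3, "PetaFLOPS FP16/tensor")  # ~660
    print(total_gpus * fp64_tflops / 1e3, "PetaFLOPS FP64")           # ~39.6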
Amazingly, Nvidia claimed that the SaturnV will easily land in the top 10 AI supercomputers worldwide, and might even enter the top five once completed.
That's impressive considering that state-run institutions drive many of the world's leading supercomputers. Nvidia and IBM are also developing the Summit supercomputer at Oak Ridge National Laboratory. Summit should unseat the current top supercomputer, which resides in China, with up to three ExaFLOPS of AI performance.
Expanding The Ecosystem
Nvidia announced that its massive V100 GPU, which boasts an 815mm2 die packed with 21 billion transistors and paired with 16GB of HBM2, has been adopted by every major system and cloud vendor. The blue-chip systems roster includes Dell EMC, HPE, Huawei, IBM, Lenovo, and a host of whitebox server vendors. AI development is expanding rapidly, and a number of startups, businesses, and academic institutions are rapidly developing new products and capabilities, but purchasing and managing the requisite infrastructure can hinder adoption.
Every major cloud now offers Volta-powered cloud instances, including AWS, Azure, Google Cloud, Alibaba, Tencent, and others. Microsoft recently announced its new cloud-based Volta services, which will be available in November. The company also now supports up to 500 applications with the CUDA framework.
Nvidia is also offering new software and tools on the Nvidia GPU Cloud (NGC), such as new HPC containers that simplify deployment. The containers work with Docker, with support for other container runtimes coming in the future. The applications run on any Pascal or newer GPU, as well as on HPC supercomputing clusters and Nvidia's DGX systems. These containers support all the main frameworks, such as Caffe, Torch, PyTorch, and many others. Nvidia has focused on simplifying the process down to a few minutes.
Japan's AIST has also incorporated 4,352 Tesla V100 GPUs into its ABCI supercomputer, which provides over 37 PetaFLOPS of FP64 performance, making it the fastest AI supercomputer in Japan and the first ExaFLOPS-class AI supercomputer in the world.
We'll be scouring the show floor for some of the latest Tesla V100-powered systems; stay tuned.
 
The compute accelerator market has been pretty much dominated by Nvidia since Fermi. But the most surprising thing is that in the past we thought this market was a duopoly between Nvidia and AMD. Intel has actually been more successful than AMD in compute accelerators with their Xeon Phi. Market-share-wise, AMD is almost non-existent.
 
DGX SaturnV Volta - NVIDIA DGX-1 Volta36, Xeon E5-2698v4 20C 2.2GHz, Infiniband EDR, NVIDIA Tesla V100 Specs:

https://www.nvidia.com/en-us/data-center/dgx-1/

Will the Volta GPU Help NVIDIA Conquer the Cloud? Looks like Nvidia has it made.
I do think that NVIDIA's latest chip (Volta) could help consolidate its lead in the fast-growing market for server GPU accelerators.

Harsh Chauhan pointed out why Nvidia will lead the way, Nov 13, 2017 at 9:39PM
NVIDIA's (NASDAQ:NVDA) data center business has been one of its biggest growth drivers in recent quarters thanks to the growing application of GPUs (graphics processing units) in cloud computing. The chipmaker now gets almost 19% of its revenue by selling GPUs for data centers, up from 10% just a year ago, as almost all the big cloud players line up for its chips to speed up their infrastructure and prepare for the era of artificial intelligence (AI).

In fact, NVIDIA's data center revenue shot up 175% year over year during the second quarter. More importantly, it is capable of maintaining this terrific pace of growth on the back of its recently launched Volta GPUs, which are expected to extend its lead in this space.

Amazon Web Services (AWS) recently launched its P3 cloud computing instance (an instance is a virtual server) powered by the latest Volta GPUs to target intensive applications such as autonomous vehicles, machine learning, and seismic analysis, among others. More specifically, AWS' P3 instance allows users to deploy up to eight Volta GPUs, boosting performance over the previous-generation P2 instance by as much as 14 times.
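
For context, requesting one of those eight-GPU instances is a single API call. A minimal boto3 sketch (the AMI ID and region below are placeholders; you'd use a Deep Learning AMI available in your region):

    import boto3

    # Launch a p3.16xlarge: 8x Tesla V100 with NVLink.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.run_instances(
        ImageId="ami-xxxxxxxx",      # placeholder AMI
        InstanceType="p3.16xlarge",  # 8 GPUs; p3.2xlarge gives 1
        MinCount=1,
        MaxCount=1,
    )
    print(resp["Instances"][0]["InstanceId"])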

This enhanced performance will help users train artificial intelligence models in just hours instead of days, which isn't surprising as NVIDIA claims that the new Volta chips are 12 times faster than the preceding Pascal chips. The graphics specialist has managed to eke out such terrific performance from Volta with the help of tensor cores that are meant to substantially accelerate the training and inferencing of artificial intelligence algorithms and applications.
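
In essence, a tensor core computes a fused D = A*B + C on small matrix tiles, with FP16 inputs and FP32 accumulation. A rough numpy emulation of those semantics (the numbers, not the hardware path):

    import numpy as np

    # Tensor-core-style tile op: FP16 input tiles, FP32 accumulator.
    a = np.random.rand(4, 4).astype(np.float16)   # FP16 input tile
    b = np.random.rand(4, 4).astype(np.float16)   # FP16 input tile
    c = np.random.rand(4, 4).astype(np.float32)   # FP32 accumulator tile

    # Products and accumulation carried out in FP32.
    d = a.astype(np.float32) @ b.astype(np.float32) + c
    print(d.dtype, d.shape)  # float32 (4, 4)

Keeping the accumulator in FP32 is what lets training converge while the inputs stay in half precision.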

There has been a lot of interest in these new GPUs given the performance leap they deliver over the previous-generation architecture. The likes of HPE, IBM, Dell EMC, Huawei, and Lenovo have already announced that they will soon start using the Volta GPUs. Additionally, Chinese large-scale cloud computing providers Tencent, Baidu, and Alibaba have announced their commitment to NVIDIA's new chip, stating that they will upgrade from Pascal to Volta GPUs across data centers and related cloud infrastructure.
 
Returns to shareholders:
NVIDIA has committed to returning nearly $1.3 billion to shareholders in fiscal 2018. It has already returned $924 million in fiscal 1H18. In fiscal 2H18, it's likely to spend over $160 million on dividend payments and $160 million on share buybacks.
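
The split adds up to the stated commitment; a quick check:

    returned_1h18 = 924     # $M already returned in fiscal 1H18
    dividends_2h18 = 160    # $M planned dividends, fiscal 2H18 ("over $160M")
    buybacks_2h18 = 160     # $M planned buybacks, fiscal 2H18
    print(returned_1h18 + dividends_2h18 + buybacks_2h18)  # 1244, ~$1.3 billion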

NVIDIA's capital return strategy bodes well for its investors, who are likely less interested in dividends than in returns through stock price appreciation.

Fiscal 2Q18 EPS
NVIDIA's non-GAAP (generally accepted accounting principles) EPS rose 91% YoY (year-over-year), or 19% sequentially, to $1.01 in fiscal 2Q18 because its operating profits rose faster than its revenues in dollar terms. Its operating profits rose because the quarter included sales of its very high-end GTX 1080 Ti and Titan XP, which generate higher margins.

Intel’s (INTC) and AMD’s EPS fell 12% and 50% sequentially, respectively, during the same quarter.

Fiscal 3Q18 EPS guidance
Analysts expect NVIDIA to report EPS of $0.94 in fiscal 3Q18—a 7% fall from fiscal 2Q18. Fiscal 3Q is a seasonally strong quarter for NVIDIA, and it usually reports strong double-digit EPS growth during the quarter.

The company has beaten analysts' EPS estimates by an average of 36% over the past eight quarters. If it maintains this momentum, it's likely to report EPS of $1.28, representing 27% sequential growth.
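
The arithmetic behind that $1.28 figure, as a quick check:

    consensus_3q18 = 0.94          # analyst EPS consensus for fiscal 3Q18
    avg_beat = 0.36                # average beat over the past 8 quarters
    q2_eps = 1.01                  # fiscal 2Q18 actual EPS

    projected = consensus_3q18 * (1 + avg_beat)
    print(round(projected, 2))                       # 1.28
    print(round((projected / q2_eps - 1) * 100))     # 27 (% sequential growth)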

Long-term EPS forecast
Analysts expect NVIDIA's EPS to rise 17% YoY to $3.60 in fiscal 2018. The company has already earned EPS of $1.86 in fiscal 1H18, and analysts may well revise their fiscal 2018 estimates upward if the company reports EPS of over $1.00 for fiscal 3Q18.

For the long term, RBC Capital Markets analyst Mitch Steves expects NVIDIA's EPS to grow to $6.00 in fiscal 2019 and $10.00 in fiscal 2021. Evercore ISI analyst C.J. Muse expects NVIDIA's EPS to reach $10 in the next three to five years. Both analysts are bullish on NVIDIA.
There is no stopping NVIDIA.
 
Take non-GAAP analysis with a huge grain of salt though. That's the "creative accounting" way to show good numbers.

Still, no one can deny that nVidia is in an amazing place currently. I hope Volta keeps the good trend.

Cheers!
 