Nvidia Lowers Q4 Revenue Guidance, Stock Drops


InvalidError

Titan
Moderator

Despite the Nvidia CEO's claims that the company wants to remain in the gaming-card arena, its investments in GPGPU scream that AI and HPC are a higher priority, since that's where the high-margin, fast-growth market is. Consumers are likely far more upset about retail prices than Nvidia is about lower sales of consumer GPUs.
 
So far you appear to be right. I find it funny that everyone is rooting for AMD and Intel GPUs these days, while Nvidia is cast as the greedy party.

 

bit_user

Splendid
Herald

I think there's room to improve energy efficiency (and performance) by making caches more cooperative.

If you look back into semi-recent history, many cutting-edge designs (like IBM's Cell processor) have eschewed big cache hierarchies, instead favoring software-managed local memory. The downside is that software is ultimately limited in what it can do with prefetching. So, the idea would be to strike a better balance by providing software with some hardware support (like directly accessible CAMs) to efficiently use local memory like cache, while avoiding the overhead of a cache when software wants to directly manage it and use it as a scratch pad.
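The Cell-style split described above can be sketched in plain C: a hypothetical `scale_array` stages each tile of a large array through a fixed local buffer instead of trusting the cache hierarchy. The buffer size, names, and the use of `memcpy` in place of real DMA transfers are all illustrative assumptions, not actual Cell code.

```c
/* Sketch of Cell-style software-managed local memory (sizes are
 * illustrative).  Instead of letting a hardware cache decide what is
 * resident, the program explicitly stages each tile of a large array into
 * a small "local store", works on it there, and writes it back; the
 * memcpy calls stand in for the DMA transfers a real SPE would issue. */
#include <stddef.h>
#include <string.h>

#define LOCAL_STORE_BYTES 4096                  /* assumed local-store size */
#define TILE (LOCAL_STORE_BYTES / sizeof(int))

static int local_store[TILE];                   /* software-managed scratch */

void scale_array(int *data, size_t n, int factor)
{
    for (size_t base = 0; base < n; base += TILE) {
        size_t cnt = (n - base < TILE) ? (n - base) : TILE;
        memcpy(local_store, data + base, cnt * sizeof(int));  /* "DMA in"  */
        for (size_t i = 0; i < cnt; i++)        /* compute from local only */
            local_store[i] *= factor;
        memcpy(data + base, local_store, cnt * sizeof(int));  /* "DMA out" */
    }
}
```

The point of the pattern is that software controls exactly when data moves, which is also exactly the prefetching burden mentioned above.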

It's not hugely revolutionary, but it might offer enough margin to edge out competing architectures. I'm not predicting this coming to x86, mind you. Maybe ARM will try something like this in a future incarnation of its ISA. Maybe it'll happen in RISC-V, or maybe it'll come out of the blue (VISC, anyone?).
 

bit_user

Splendid
Herald

Don't forget self-driving cars*.

Seriously, I'm sure Nvidia has used a big chunk of its cryptomining windfall to staff up its automotive division, and because that market is still in its infancy, they need to continue funding it with inflated RTX GPU prices.

I'm not saying that's the main factor behind the high prices of RTX products, but it surely isn't helping.

* Note: they are also pitching Xavier at the high-end drone and robotics market.
 

InvalidError

Titan
Moderator

Having local "scratch" memory is basically software-directed caching. Yes, it might be slightly more power-efficient, since cache-coherence concerns would get sorted out in software where applicable. But the cost is putting the onus on the programmer, compiler, libraries, OS, etc. to explicitly tell the CPU what to do with its cache/scratch memory before any benefit materializes, and the OS would also need to save that scratch area on every context switch. That's an awful lot of effort for likely very small performance gains compared to simply using L1/L2/L3 as memory-backed implicit scratch space. Not worth the trouble for mostly sequential, lightly threaded workloads.
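The "implicit scratch space" alternative favored above is ordinary cache blocking: size the working set so the hardware keeps it resident in L1 with no explicit management at all. A minimal sketch, with dimensions chosen purely for illustration:

```c
/* Blocked transpose: each BLK x BLK tile (8x8 ints = 256 bytes) easily
 * fits in any L1 cache, so the hardware cache acts as scratch space for
 * the tile with no explicit staging, no DMA, and nothing extra for the
 * OS to save on a context switch. */
#define N   64
#define BLK 8

void transpose(int src[N][N], int dst[N][N])
{
    for (int bi = 0; bi < N; bi += BLK)
        for (int bj = 0; bj < N; bj += BLK)
            for (int i = bi; i < bi + BLK; i++)
                for (int j = bj; j < bj + BLK; j++)
                    dst[j][i] = src[i][j];   /* tile stays cache-resident */
}
```

Same locality benefit as a scratch pad, but the programmer only shapes the loop order; the hardware does the rest.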

Where the design principle is alive and well is in embarrassingly parallel processors where software has fine-grained, exclusive control over the hardware, such as GPUs, where local scratch memory is used to pass data between shader threads. It works great there because, by the time wavefronts/warps reach the shader units, it is generally assumed that the software has done a good enough job of preventing buffer-addressing conflicts between threads, and that any remaining errors are non-critical. Putting a full-blown cache there would be a waste of silicon, especially when the outcome of an error is non-critical (ex.: an occasional triangle not getting textured under very specific conditions, a random polygon glitch, texture-layering glitches, etc.) and the choice of permissible error rate is left in the developer's hands. (To troubleshoot and fix, or not to?)
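That GPU pattern (threads exchanging data through a shared scratch buffer rather than a coherent cache) can be mimicked on the CPU for illustration. The sketch below runs one 32-wide "thread group" sequentially; the ordered phases stand in for the barriers a real kernel would use (CUDA's `__syncthreads()`), and the buffer stands in for the shader unit's shared/local memory:

```c
/* CPU-side sketch of GPU-style shared scratch memory: every "thread" in a
 * 32-wide group writes a value into a common buffer, then a tree reduction
 * combines them.  Software-enforced ordering (here, running the phases in
 * sequence) replaces hardware cache coherence; on a real GPU the buffer
 * would live in on-chip shared memory. */
#define GROUP 32

int group_reduce(const int *input)
{
    int scratch[GROUP];                       /* stands in for shared memory */
    for (int tid = 0; tid < GROUP; tid++)     /* phase 1: each thread writes */
        scratch[tid] = input[tid];
    for (int stride = GROUP / 2; stride > 0; stride /= 2)
        for (int tid = 0; tid < stride; tid++)    /* phase 2: tree reduce   */
            scratch[tid] += scratch[tid + stride];
    return scratch[0];
}
```

If a thread indexed `scratch` incorrectly, the hardware would not catch it; as noted above, the developer owns that correctness contract.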

CPUs don't have the benefit of a higher-order CPU and OS neatly pre-vetting and pre-arranging code and data into a "presumably good enough" state, so the architecture cannot afford the luxury of skipping checks or making assumptions about thread execution order. Having yet another type of RAM internal to CPU cores that exists outside the conventional memory address space would also open up a whole new can of worms security-wise.
 

bit_user

Splendid
Herald

Well, there are different levels at which you could place it. If you made it private to a core/thread, then there should be no security concern. IMO, cache is still a good solution for inter-thread communication - which tends to be far lower-bandwidth, as well.

As for additional context-switching overhead: the L1/L2 caches routinely get flushed anyway (and I think some Spectre/Meltdown mitigations now do that explicitly), so we're really not talking about a lot of additional memory traffic. Plus, consider the AVX-512 register file: 32 architectural registers of 64 bytes each is already 2 KB of per-thread state to save and restore. (The Cell's per-SPE local store was far larger, at 256 KB, but the point stands: register state is already a substantial part of a context switch.)

Anyway, we can debate the relative merits, but only time will tell, for sure. Really, there's not so much difference between this and simply having a large register file. Just add indexed access and you're basically there. Another way to think of it is like having your stack frame fetched into local memory, which I think is a little like what SPARC tried to do with register windows.
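The register-window idea alluded to above can be sketched as indexed access into one large register array. The 8-in/8-local/8-out layout follows SPARC's convention, but the file size and helper names here are invented for illustration:

```c
/* Sketch of SPARC-style register windows: a single large register file
 * with a sliding window pointer.  Each window sees 24 registers
 * (r0-r7 "in", r8-r15 "local", r16-r23 "out"); a call slides the window
 * by 16, so the caller's "out" registers become the callee's "in"
 * registers and arguments pass without touching memory. */
#define NREGS 128   /* illustrative file size, not a real SPARC value */

static int regfile[NREGS];
static int cwp;                         /* current window pointer */

int  *reg(int r)   { return &regfile[(cwp + r) % NREGS]; }
void do_call(void) { cwp = (cwp + 16) % NREGS; }
void do_ret(void)  { cwp = (cwp + NREGS - 16) % NREGS; }
```

Usage: a caller writes an argument to `*reg(16)` (its first "out" register), then after `do_call()` the callee reads the same storage as `*reg(0)`, which is essentially the "stack frame in local memory" effect described above.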
 

Logos

Distinguished
First off, I love Tomshardware.com, and I'm also a PC enthusiast. I run a small business and use Tom's a lot for my company's tech needs, but after the "just buy" article, I really questioned some of their sponsorship/advertiser conflicts of interest. I had actually considered purchasing the 2080 Ti......
 
You must have missed the part where it said "editorial" or "just kidding," like everyone else on the planet. I'm not sure Tom's realizes the loss of face that occurred from the "Just Buy It" article, or if it thinks it can just ignore the huge loss of credibility, faith, respect and status that one article cost it. Every time I read one of their reviews and get to the "Why trust us" part, I just have to shake my head. There is no more just trusting Tom's on anything they say; I don't accept it until at least one or two other sites confirm it now. It wasn't that way before, but it is that way now.

 
