News: At Nvidia's GTC event, Pat Gelsinger reiterated that Jensen got lucky with AI, Intel missed the boat with Larrabee

Bigger question: can Nvidia grow without alternative fabs from Intel or Samsung when TSMC has saturated its allocation?
Last year, they were limited by HBM availability, not logic wafers. I assume that's still true, since I think I read that HBM production capacity is sold out through the end of the year. I think advanced packaging has been another choke point.

But, I take your point that they're currently limited by production they don't ultimately control. I'm not entirely sure how bad it's been, because it's probably had the effect of bidding up their prices. So, the amount of profit they'd get on greater volume might not actually be that much more than they've been making.

Going forward, the challenge for them is to offer more perf/$. They can do that either by reworking their existing dies to further optimize performance, which is what it sounds like B300 is, or by moving to a smaller node, as in Rubin, which I believe will use a TSMC N3 node for logic and HBM4 to increase bandwidth. By offering more performance at the same or better perf/$, they can charge more and make more money even on relatively fixed production volume.
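To put the fixed-volume argument in concrete terms, here's a back-of-the-envelope sketch; every number in it is invented purely for illustration, not an actual wafer count, yield, or price.

```python
# Back-of-the-envelope sketch: with wafer volume roughly fixed, a faster die can
# carry a higher price while perf/$ for the buyer still improves.
# Every number below is invented purely for illustration.

WAFERS = 10_000          # hypothetical fixed wafer allocation
DIES_PER_WAFER = 60      # hypothetical good dies per wafer

# (relative performance per die, hypothetical selling price per die in $)
generations = {
    "current gen": (1.0, 30_000),
    "next gen":    (1.6, 40_000),   # faster die on a newer node, higher price
}

for name, (perf, price) in generations.items():
    revenue = WAFERS * DIES_PER_WAFER * price
    print(f"{name}: revenue ${revenue / 1e9:.1f}B, perf/$ = {perf / price:.2e}")

# Result: ~33% more revenue on the same wafer count, while perf/$ still
# improves for the customer (1.6 / 40_000 > 1.0 / 30_000).
```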
 
Definitely not luck. Smart people have a vision and stick to it or adapt when necessary. Jensen was always able to do that. Maybe not immediately, but he kept to his vision, and step by step everything came together to get NVIDIA to the point where they are now.
Just like Elon Musk. It doesn't really matter whether he invented anything initially, but he had a vision and has now taken several companies to great success.
Smart people make smart choices and adapt when necessary.
 
I had a project
This is not the first time he has talked like that, "I this" and "I that" all the time. I wonder how people at Intel put up with him for so long :) But now they are "freed", so I believe Intel is already back in business.
Regarding Microsoft or Intel phones, people should understand that that kind of monopoly is never going to work. Each one being better in its own domain is better for all of us - well, obviously the opposite isn't working anyway.
 
I think Pat made it clear that Nvidia lives up to the principle of "the harder one works, the luckier one gets".
 
Definitely not luck. Smart people have a vision and stick to it or adapt when necessary. Jensen was always able to do that. Maybe not immediately, but he kept to his vision, and step by step everything came together to get NVIDIA to the point where they are now.
Just like Elon Musk. It doesn't really matter whether he invented anything initially, but he had a vision and has now taken several companies to great success.
Smart people make smart choices and adapt when necessary.
Definitely not intelligence. Jensen may very well be a genius, but that's not why Nvidia has catapulted to where it is. The man is a master-class salesman. Vision is also key, but visions to improve humanity rarely work out as well as visions for power. Time will tell if the products were worth the attention.
 
Bigger question: can Nvidia grow without alternative fabs from Intel or Samsung when TSMC has saturated its allocation?
TSMC has saturated its allocation? What are you smoking? TSMC is NOT out of capacity. Nvidia simply isn't going to pay 3x the price for last-second allocation.
 
Nvidia is lucky, that's for sure, but Intel was led by greedy incompetent idiots.

Saying no to the iPhone project, ignoring the discrete/advanced GPU market until very recently,
I don't know about whatever "iPhone project" you're referring to, but Larrabee was originally all about trying to get back into the dGPU market, which Intel had first dabbled in, back in the late 1990's.

Post-Larrabee, Intel went heavy on iGPUs, arguably culminating in that Broadwell with eDRAM. I don't know if that might've originated as a bid for the console market, or exactly where they thought they were going with that. 10 years after Larrabee (so, about 20 years after the original i740), they hatched plans to have another go at the dGPU market.

So, I wouldn't say they ignored it, exactly.

ignoring the smartphone market in general,
Uh, they did try that, and lost $Billions, in the attempt.

not trying to do ARM SoCs to meet the high demand,
That wouldn't make sense to me. As long as their own SoCs are competitive and there's a big enough market for them, it would be a little silly to offer ARM. The time to do that would be after the market reaches a tipping point.

However, the CEO and corporate salaries are gigantic, whatever happens.
Technically, it's the stocks and options, where they make the lion's share of their compensation. Yes, they're quite well compensated.

I think Pat made it clear that Nvidia lives up to the principle of "the harder one works, the luckier one gets".
Yeah, I like the saying: "Chance favors the prepared mind."
 
Definitely not luck. Smart people have a vision and stick to it or adapt when necessary.
Oh sure there was! You just can't do it on luck, alone. In fact, there were several points where Nvidia was close to going under.

And the fact that Nvidia had the right solution, at the right time, was definitely a bit of luck. You could've never planned it that way. Back when they launched CUDA, I'm sure they didn't imagine it'd make them a multi-$Trillion company.

There are definitely smarter and harder working people than Jensen, who never hit it nearly as big. I'm not saying he's not good, but luck had a lot to do with the scale of his success.

Just like Elon Musk. It doesn't really matter whether he invented anything initially, but he had a vision and has now taken several companies to great success.
Where he's been successful is revolutionizing legacy industries with technology: Finance, Automotive, and Aerospace. Once he tried to take on an industry that was born in the internet age (i.e. social media), he reached his limit.
 
Post-Larrabee, Intel went heavy on iGPUs, arguably culminating in that Broadwell with eDRAM. I don't know if that might've originated as a bid for the console market, or exactly where they thought they were going with that. 10 years after Larrabee (so, about 20 years after the original i740), they hatched plans to have another go at the dGPU market.
Hey, that Broadwell part is better than it gets credit for. I assume the point of it was internally proving that decent iGPU performance was possible even with DDR3 bandwidth limitations, and somebody just liked it enough that it made it to production. ChipsandCheese did a nice write-up on it sometime in the last year.
 
Hey, that Broadwell part is better than it gets credit for. I assume the point of it was internally proving that decent iGPU performance was possible even with DDR3 bandwidth limitations, and somebody just liked it enough that it made it to production. ChipsandCheese did a nice write-up on it sometime in the last year.
There was no grand vision for adding an L4 eDRAM cache to Broadwell, which is demonstrated by never seeing it again. The reason Broadwell desktop got it was as a performance band-aid for the giant clock-speed loss vs. Haswell (4.2 GHz -> 3.7 GHz). Intel was struggling with 14nm, and that significantly delayed Broadwell. As a result, we only got two desktop models, one i7 and one i5, and they were replaced just two months later by Skylake.
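For anyone curious how a big L4 can paper over a clock-speed drop, here's a minimal sketch of the arithmetic; the latency and hit-rate figures are pure assumptions for illustration, not Broadwell measurements.

```python
# Minimal sketch of the "band-aid" idea: a large L4 trims the average L3-miss
# penalty enough to offset a clock deficit. All latency and hit-rate numbers
# here are assumptions for illustration, not Broadwell measurements.

HSW_CLOCK, BDW_CLOCK = 4.2, 3.7                 # GHz
clock_deficit = 1 - BDW_CLOCK / HSW_CLOCK
print(f"clock deficit: {clock_deficit:.1%}")    # ~11.9%

DRAM_NS = 80.0      # assumed main-memory latency
L4_NS = 40.0        # assumed eDRAM latency
L4_HIT_RATE = 0.6   # assumed fraction of L3 misses caught by the 128 MB L4

without_l4 = DRAM_NS
with_l4 = L4_HIT_RATE * L4_NS + (1 - L4_HIT_RATE) * DRAM_NS
print(f"avg L3-miss penalty: {without_l4:.0f} ns -> {with_l4:.0f} ns "
      f"({1 - with_l4 / without_l4:.0%} lower)")
```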
 
Hey, that Broadwell part is better than it gets credit for. I assume the point of it was internally proving that decent iGPU performance was possible even with DDR3 bandwidth limitations, and somebody just liked it enough that it made it to production. ChipsandCheese did a nice write-up on it sometime in the last year.
One reason I suspect it was a bid for consoles (I might've even heard a rumor about that) is that Microsoft/Xbox went through several iterations of using in-package memory. So much so that I was honestly surprised when they ditched that for a standard GDDR setup with the Series X. Coming years before that, Broadwell's eDRAM seemed right up their alley.

There was no grand vision for adding an L4 eDRAM cache to Broadwell,
Not even to keep Apple interested?

which is demonstrated by never seeing it again.
It was expensive. That's probably why they didn't keep doing it.
 
There was no grand vision for adding an L4 eDRAM cache to Broadwell, which is demonstrated by never seeing it again.
eDRAM usage debuted in HSW mobile as a way to increase memory bandwidth. It also came back after BDW, but with a different design structure, so the CPU couldn't dump the L3 into it. There were several SKL SKUs that had it (mobile only, if I'm remembering right), but it was effectively exclusive to the IGP, so it didn't have the benefit that BDW saw. I believe this was mostly a move to appease Apple by delivering higher IGP performance. I wouldn't be surprised if Intel's entire eDRAM effort was for this, but I don't recall if Apple was the driving factor.

Once DDR4 got really moving, the bandwidth was close to what the eDRAM provided (and higher once 3200 landed), so the cost-benefit definitely wouldn't be there anymore, even if the CPU side could have benefited due to the low latency.
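To put rough numbers on that comparison: dual-channel peak bandwidth is transfers/s x 8 bytes x 2 channels, and the Crystal Well eDRAM is commonly quoted at roughly 50 GB/s per direction, so treat that figure as approximate.

```python
# Peak-bandwidth arithmetic behind the comparison above. DIMM figures are the
# standard theoretical peaks; the ~50 GB/s per-direction eDRAM number is the
# commonly cited Crystal Well figure, so treat it as approximate.

def dual_channel_bw_gbs(mt_per_s: int, channels: int = 2, bus_bytes: int = 8) -> float:
    """Peak bandwidth in GB/s: transfers/s * 8 bytes per transfer * channels."""
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

EDRAM_BW_GBS = 50.0  # per direction, approximate

for label, mts in (("DDR3-1600", 1600), ("DDR4-2133", 2133), ("DDR4-3200", 3200)):
    print(f"{label}: {dual_channel_bw_gbs(mts):5.1f} GB/s vs eDRAM ~{EDRAM_BW_GBS} GB/s")

# DDR3-1600 ~25.6, DDR4-2133 ~34.1, DDR4-3200 ~51.2 GB/s: by DDR4-3200, main
# memory had caught up with the eDRAM's headline number.
```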
 
eDRAM usage debuted in HSW mobile as a way to increase memory bandwidth. It also came back after BDW, but with a different design structure, so the CPU couldn't dump the L3 into it. There were several SKL SKUs that had it (mobile only, if I'm remembering right), but it was effectively exclusive to the IGP, so it didn't have the benefit that BDW saw. I believe this was mostly a move to appease Apple by delivering higher IGP performance. I wouldn't be surprised if Intel's entire eDRAM effort was for this, but I don't recall if Apple was the driving factor.

Once DDR4 got really moving, the bandwidth was close to what the eDRAM provided (and higher once 3200 landed), so the cost-benefit definitely wouldn't be there anymore, even if the CPU side could have benefited due to the low latency.
None of this has anything to do with why the 2 crippled Broadwell desktop CPUs had it.

These CPUs ended up being a preview of current X3D CPUs. The BW chips were solid gaming CPUs well after release because of the extra cache, but to this day Intel has never bothered with it again for desktop CPUs.