(now, Samsung Galaxy costs as much as iPhone).
> Bigger question: can Nvidia grow without alternative fabs from Intel or Samsung when TSMC has saturated its allocation?
Last year, they were limited by HBM availability, not logic wafers. I assume that's still true, since I think I read that HBM production capacity is sold out through the end of the year. I think advanced packaging has been another choke point.
> Huawei has a tri-fold device that costs > $3,000
LOL, a good chunk of that $3k price tag should be for the included warranty coverage, given how much trouble I've heard foldable displays have had with the screens eventually cracking.
> I think advanced packaging has been another choke point.
On this point, they also messed up something with the initial Blackwell design that directly impacted GB100 yields, so I wonder how much that set them back on top of all the other capacity issues.
> I had a project
This is not the first time he has talked like that: "I this" and "I that" all the time. I wonder how people at Intel put up with him for so long.
> Definitely not luck. Smart people have a vision and stick to it or adapt when necessary. Jensen was always able to do that. Maybe not immediately, but he still kept to the vision, and step by step everything came together to get NVIDIA to the point where they are now.
Definitely not intelligence. Jensen may very well be a genius, but that's not why Nvidia has catapulted. The man is a master-class salesman. Vision is also key, but visions to improve humanity rarely work out as well as visions for power. Time will tell if the products were worth the attention.
Just like Elon Musk. It doesn't quite matter whether he invented anything initially, but he had a vision and has now taken several companies to great success.
Smart people make smart choices and adapt when necessary.
> Bigger question: can Nvidia grow without alternative fabs from Intel or Samsung when TSMC has saturated its allocation?
TSMC has saturated its allocation? What are you smoking? TSMC is NOT out of capacity. Nvidia simply isn't going to pay 3x the price for a last-second allocation.
> Nvidia is lucky, that's for sure, but Intel was led by greedy incompetent idiots. Saying no to the iPhone project, ignoring the discrete/advanced GPU market until very recently,
I don't know about whatever "iPhone project" you're referring to, but Larrabee was originally all about trying to get back into the dGPU market, which Intel had first dabbled in back in the late 1990s.
> ignoring smartphone market in general,
Uh, they did try that, and lost billions of dollars in the attempt.
> not trying to do ARM SoCs to comply with high demand,
That wouldn't make sense to me. As long as their own SoCs are competitive and there's a big enough market for them, it would be a little silly to offer ARM. The time to do that would be after the market reaches a tipping point.
> However, the CEO and corporate salaries are gigantic, whatever happens.
Technically, it's the stocks and options where they make the lion's share of their compensation. Yes, they're quite well compensated.
> I think Pat made it clear that Nvidia lives up to the principle of "the harder one works, the luckier one gets."
Yeah, I like the saying: "Chance favors the prepared mind."
> Definitely not luck. Smart people have a vision and stick to it or adapt when necessary.
Oh, sure there was! You just can't do it on luck alone. In fact, there were several points where Nvidia was close to going under.
> Just like Elon Musk. It doesn't quite matter whether he invented anything initially, but he had a vision and has now taken several companies to great success.
Where he's been successful is revolutionizing legacy industries with technology: finance, automotive, and aerospace. Once he tried to take on an industry that was born in the internet age (i.e., social media), that's where he reached his limit.
> I don't know about whatever "iPhone project" you're referring to, but Larrabee was originally all about trying to get back into the dGPU market, which Intel had first dabbled in back in the late 1990s.
> Post-Larrabee, Intel went heavy on iGPUs, arguably culminating in that Broadwell with eDRAM. I don't know if that might've originated as a bid for the console market, or exactly where they thought they were going with that. 10 years after Larrabee (so, about 20 years after the original i740), they hatched plans to have another go at the dGPU market.
> So, I wouldn't say they ignored it, exactly.
Hey, that Broadwell part is better than it gets credit for. I assume the point of it was internally proving that decent iGPU performance was possible even with DDR3 bandwidth limitations, and somebody just liked it enough that it made it to production. Chips and Cheese did a nice write-up on it sometime in the last year.
> Hey, that Broadwell part is better than it gets credit for. ...
There was no grand vision for adding an L4 eDRAM cache to Broadwell, which is demonstrated by never seeing it again. The reason Broadwell desktop got it was as a performance band-aid for the big clock-speed loss versus Haswell (4.2 GHz -> 3.7 GHz). Intel was struggling with 14nm, and it significantly delayed Broadwell. As a result, we only got two desktop models, one i7 and one i5, and they were replaced just two months later by Skylake.
> Hey, that Broadwell part is better than it gets credit for. ...
One reason I suspect it was a bid for consoles (I might've even heard a rumor about that) is because Microsoft/Xbox went through several iterations of using in-package memory. So much so that I was honestly surprised when they ditched that for a standard GDDR setup with the Series X. Coming years before that, Broadwell's eDRAM seemed right up their alley.
> There was no grand vision for adding an L4 eDRAM cache to Broadwell,
Not even to keep Apple interested?
> which is demonstrated by never seeing it again.
It was expensive. That's probably why they didn't keep doing it.
> There was no grand vision for adding an L4 eDRAM cache to Broadwell, which is demonstrated by never seeing it again.
The usage of eDRAM debuted in HSW mobile as a way to increase memory bandwidth. It also came back after BDW, but with a different design structure, so the CPU couldn't dump L3 evictions into it. There were several SKL SKUs that had it (mobile only, if I'm remembering right), but it was effectively exclusive to the IGP, so it didn't have the benefit that BDW saw. I believe this was mostly a move to appease Apple and get higher IGP performance. I wouldn't be surprised if Intel's entire eDRAM push was for this, but I don't recall if Apple was the driving factor.
> The usage of eDRAM debuted in HSW mobile as a way to increase memory bandwidth. ...
None of this has anything to do with why the two crippled Broadwell desktop CPUs had it.
Once DDR4 really got moving, its bandwidth was close to what the eDRAM provided (and higher once DDR4-3200 landed), so the cost-benefit definitely wouldn't have been there anymore, even if the CPU side could have benefited from the low latency.
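To put rough numbers on that, here is a quick back-of-the-envelope sketch; the dual-channel assumption and the ~50 GB/s per-direction figure for the Crystal Well eDRAM are the commonly cited ballpark values, not something stated above.

```python
# Rough peak DRAM bandwidth: transfer rate (MT/s) x 8 bytes per transfer x channels.
def dram_bandwidth_gbs(mt_per_s, channels=2):
    return mt_per_s * 8 * channels / 1000  # GB/s

print(dram_bandwidth_gbs(1600))  # DDR3-1600, dual channel: ~25.6 GB/s
print(dram_bandwidth_gbs(2133))  # DDR4-2133, dual channel: ~34.1 GB/s
print(dram_bandwidth_gbs(3200))  # DDR4-3200, dual channel: ~51.2 GB/s
# Crystal Well eDRAM is commonly quoted at roughly 50 GB/s in each direction,
# so dual-channel DDR4-3200 lands in the same ballpark as the eDRAM.
```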