Nvidia will adapt, as usual.
I agree, but there are no guarantees. I already said I wouldn't bet against Nvidia, but nobody can say that a dark horse like Cerebras won't turn out to have an advantage beyond Nvidia's ability to counter. Because of patents, it's not as if Nvidia can necessarily just copy whatever its competitors do.
10 years ago, Intel looked completely unassailable, and yet look how far they've fallen.
Something similar happened before, with tensor cores. Nvidia released GP100 in mid-2016, then Google revealed their TPU in late 2016. Nvidia probably didn't want to push into tensor cores until 2018, with GV100. So what Nvidia did was release GV100 in mid-2017 instead of 2018.
That's a stretch. Google doesn't sell TPUs on the open market. I don't even know when they started making those instances available to the public, but I remember it being quite a while after we first heard about them.
Furthermore, chip design takes a long time. V100 launched too soon to be a counter to the TPU. A more plausible explanation is that Nvidia simply took a more sensible approach to optimizing deep learning, by hard-wiring a few matrix multiply instructions (which is all their "tensor cores" actually are), after noticing how wasteful it was to use a series of dot-products for that purpose.
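To illustrate the point about wastefulness (my own sketch, not from the thread): a Volta tensor core effectively hard-wires a small fused matrix multiply-accumulate, D = A·B + C, over a 4x4x4 tile as a single operation, whereas the naive approach issues a separate dot product for each output element. The NumPy code below is only a functional model of that contrast; the tile size and fp16-in/fp32-accumulate convention match what Nvidia has described for Volta, but the real hardware instruction obviously isn't NumPy.

```python
import numpy as np

def matmul_via_dot_products(a, b, c):
    # The "wasteful" approach: one dot product per output element,
    # each issued as a separate operation.
    m, k = a.shape
    k2, n = b.shape
    assert k == k2
    d = c.astype(np.float32).copy()
    for i in range(m):
        for j in range(n):
            d[i, j] += np.dot(a[i, :].astype(np.float32),
                              b[:, j].astype(np.float32))
    return d

def tensor_core_style_mma(a, b, c):
    # Functional model of what a Volta tensor core hard-wires:
    # a fused 4x4x4 matrix multiply-accumulate, D = A @ B + C,
    # with fp16 inputs and fp32 accumulation, in one step.
    return a.astype(np.float32) @ b.astype(np.float32) + c

# Both compute the same result; the difference is how many
# separate operations the hardware has to issue.
a = np.random.rand(4, 4).astype(np.float16)
b = np.random.rand(4, 4).astype(np.float16)
c = np.zeros((4, 4), dtype=np.float32)
print(np.allclose(matmul_via_dot_products(a, b, c),
                  tensor_core_style_mma(a, b, c), atol=1e-3))
```

The per-element loop issues 16 separate dot products for one 4x4 tile; the fused version is one matrix operation, which is roughly the efficiency argument being made above.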
That release displaced GP100 as their top compute solution in just a year. And for those who had already ordered a GP100 but hadn't received it yet, Nvidia gave them a GV100 instead.
It was still perfectly fine for HPC, which relies mainly on fp64. P100s were available for much more than a year after V100's launch. I hadn't heard about Nvidia offering substitutions, but I'm certain they continued making P100s. For the kind of supercomputer and HPC applications the P100 primarily targeted, you couldn't necessarily just substitute in a V100.