If I am reading that right, they side-stepped adding support for their hardware in those frameworks
I had some similar thoughts. It's probably worth actually diving into the details, because some of their comments suggest otherwise.
Anyway, to the extent someone cares, they can investigate further. Since I have no stake in the matter, I'm done with this.
And the second option is C++ access to some "kernel" (supposedly well documented) in a roll-your-own fashion without the benefit of open-source collaboration?
You're referring to Metalium? I believe that's how you program the hardware directly, since it doesn't support CUDA.
"The figure below shows the software layers that can be built on top of the TT-Metalium platform. With TT-Metalium, developers can write host and kernel programs that can implement a specific math operation (e.g., matrix multiplication, image resizing etc.), which are then packaged into libraries. Using the libraries as building blocks, various frameworks provide the user with a flexible high-level environment in which they can develop a variety of HPC and ML applications."
Source: https://tenstorrent.com/software/tt-metalium/
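To make the quoted layering concrete, here's a minimal sketch of what "host program + kernel, wrapped into a library op" means structurally. Note this is not the actual TT-Metalium API; all names (matmul_kernel, MatmulOp) are hypothetical, and the device dispatch is stubbed out as a plain function call:

    // Hypothetical illustration only -- NOT the TT-Metalium API.
    // It just sketches the layering from the quote:
    //   kernel (low-level math) -> library op -> framework code.
    #include <cstddef>

    // "Kernel": the low-level math routine a developer writes once.
    void matmul_kernel(const float* a, const float* b, float* c,
                       std::size_t m, std::size_t k, std::size_t n) {
        for (std::size_t i = 0; i < m; ++i)
            for (std::size_t j = 0; j < n; ++j) {
                float acc = 0.0f;
                for (std::size_t p = 0; p < k; ++p)
                    acc += a[i * k + p] * b[p * n + j];
                c[i * n + j] = acc;
            }
    }

    // Library layer: packages the kernel as an operation a framework can call.
    struct MatmulOp {
        void run(const float* a, const float* b, float* c,
                 std::size_t m, std::size_t k, std::size_t n) const {
            // A real host program would allocate device buffers, copy data,
            // and enqueue the kernel on the accelerator; here we just call it.
            matmul_kernel(a, b, c, m, k, n);
        }
    };

    // "Framework" layer: application code built on the library op.
    int main() {
        const float a[4] = {1, 2, 3, 4};   // 2x2
        const float b[4] = {5, 6, 7, 8};   // 2x2
        float c[4] = {};
        MatmulOp{}.run(a, b, c, 2, 2, 2);  // framework -> library -> kernel
        return 0;
    }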
As for the "open-source" part, Jim had previously touted their intention to open source this stuff. I don't know if that's still the plan, and maybe time-to-market concerns just de-prioritized that aspect? Or, maybe they made a strategic decision to do otherwise. Would be interesting to know.
Provided I am not mistaken, that's:
1. Trying to create vendor lock-in for customers
2. Being open-source in name but not in spirit (leeching from other frameworks, not giving anything back)
Thanks for posting that; now I dislike him even more.
Okay, but what if you are mistaken? Why rush to a conclusion without lining up the facts?
Also, let's say it's no longer planned to be open-sourced. How would that be worse than CUDA?
Finally, let's not forget the context here. Jim is concerned with AI accelerators, not HPC or the other domains CUDA is intended to serve. CUDA is much more general than what you need for an AI accelerator, and that generality doesn't come for free (it has costs in performance, efficiency, and literal dollar cost). I think that's another way to read the "swamp" analogy: that Nvidia is bogging down its AI accelerators with the generality needed to support CUDA.