The problem is that many AI workloads need compute, memory bandwidth, and VRAM capacity all at once. There might be niche cases where 16GB paired with 88 teraflops of FP16 compute and a 128-bit memory interface is "good enough," but most companies dabbling in AI should just fork over the money for a more capable GPU.
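To put the VRAM side of that in rough numbers, here's a back-of-envelope sketch (the parameter counts and byte-per-parameter figures below are illustrative assumptions, not measurements from any specific card or model): at FP16, weights alone cost about two bytes per parameter, so a 16GB card runs out of room quickly.

```python
# Rough VRAM estimate for holding a model's weights at a given precision.
# Illustrative only -- ignores activations, KV cache, and framework overhead,
# all of which add on top of this floor.

def weights_vram_gb(num_params_billions: float, bytes_per_param: float = 2.0) -> float:
    """Approximate VRAM (GB) needed just for the weights.

    bytes_per_param: 2.0 for FP16/BF16, 1.0 for 8-bit, 0.5 for 4-bit quantization.
    """
    return num_params_billions * 1e9 * bytes_per_param / 1e9  # decimal GB

for params in (7, 13, 30, 70):
    print(f"{params}B params @ FP16: ~{weights_vram_gb(params):.0f} GB of weights alone")

# 7B fits in 16GB with headroom; 13B already needs ~26 GB before you count
# activations -- which is why capacity, bandwidth, and compute all matter together.
```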
Like, seriously: Who's going to hire an AI researcher at potentially $5,000+ per month and then saddle them with crappy hardware just to save $500 or even $1,000? If you want to do AI properly on consumer hardware, get the RTX 4090. If you're a hobbyist just poking around at AI workloads, fine, maybe a $500 RTX 4060 Ti 16GB makes some weird sort of sense. But for non-hobby use, it isn't going to suffice.