Plenty of storage applications still depend on HDDs, mostly offline/nearline storage and anywhere GB/$ is the dominant factor. Most of the data in the cloud sits at rest, which means it just needs to be accessible enough, and speed isn't otherwise an overriding concern. That rules out tape, but makes it amenable to HDD storage.
The other point I have to keep reminding people is that SSDs aren't suitable for cold storage. You can't put them in a drawer or on a shelf and just forget about them, the way you can with HDDs (to some extent), tape, or optical. If you're not periodically powering them on, they'll lose their contents. I've personally experienced this, and it's only going to become more common as NAND cell sizes continue to shrink and more bits keep getting packed into them (e.g. QLC, PLC).
I can say from personal use over the last year that the HDD market is far from dead. I upgraded my storage capacity to 120 TB ×2 using enterprise-grade HDDs. There is absolutely a market for those with large storage needs: enterprise/cloud, creators, the scientific community, etc. I get that most home users don't need the capacity my household does. And while users like me may be niche, we are not going away anytime soon for all the reasons you mentioned above. It's just the average home users who have mostly disappeared from the market.
In my case I currently use two separate machines (the wife's and mine) running StableBit DrivePool (JBOD) to ensure our data is safe, instead of the more typical RAID setup on a single machine. I have used RAID extensively over the years but honestly find this solution to be a safer and mostly more convenient option, though it is a bit more time-consuming. If a drive fails or a machine gets totally fried from a failed PSU, I have it all on the other machine. Sadly I have had exactly one PC completely die from a PSU going bad... it literally killed every powered component in my setup save my water pump...
With my pool setup, I'll only have to rewrite the failed drive's files. Though typically I tend to freshen up the entire data pool regardless of whether a failure happens. Currently a full rewrite is my gravest point of failure with this setup, along with the scenario where drives on both machines that contain the same data fail.
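For anyone running a similar two-machine mirror without RAID, a minimal sketch of how you might verify the two pools actually match before trusting them is below. This is just an illustration (the `compare_pools` function and the paths are hypothetical, not part of StableBit DrivePool): it walks one tree, hashes each file, and reports anything missing or mismatched on the mirror.

```python
import hashlib
from pathlib import Path

def hash_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def compare_pools(primary: Path, mirror: Path) -> dict:
    """Compare two directory trees; report files missing from the
    mirror and files whose contents differ between the two copies."""
    report = {"missing": [], "mismatched": []}
    for src in primary.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(primary)
        dst = mirror / rel
        if not dst.is_file():
            report["missing"].append(str(rel))
        elif hash_file(src) != hash_file(dst):
            report["mismatched"].append(str(rel))
    return report
```

Running something like `compare_pools(Path("D:/Pool"), Path("//OtherPC/Pool"))` on a schedule would catch silent drift between the two machines before a drive failure forces a full rewrite.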
Thus in the next year or so I am looking at getting a Thunderbolt- or USB4-attached 10-bay enclosure, roughly, to further minimize my chance of losing our data. I plan to have it at her folks' place most of the time, only bringing it around every month or so for backups (her folks have a slow connection or I would just build a remote node), then quickly returning it. That's another BIG thing a lot of people forget to do: keep a copy of all your data off site. It only takes one house fire/lightning strike/etc. to eliminate all your data in the blink of an eye if you don't have off-site storage.
At the end of the day you hit the nail on the head. Between the cost per GB and the inherent issues with NAND as cold storage, HDDs are sticking around for a long time. Until NAND can match the cost and data reliability in cold storage, HDDs and tape will continue to afford users/corporations vastly greater storage capacity at a fraction of the cost compared to NAND.
As demand for HDDs falls, they will become more expensive and less profitable due to fixed costs, and that will cause demand to collapse further.
While true to a degree, as things currently stand that is hardly a big issue at the moment. The HDD market only died in the consumer space. NAND will continue to be too cost-prohibitive and more than a little problematic for cold storage. I'd be amazed to see HDDs completely die in the next 15 years, for all the reasons bit_user and I both brought up. There are just too many markets where cost per GB is king and data reliability is paramount. Until NAND-based solutions can address those issues fully, things will not change.
I certainly have nothing against NVMe, except that it's much cheaper to pack a bunch of SATA ports into a mainstream board than it is a comparable number of PCIe lanes.
It doesn't help that Intel and AMD have cut the number of lanes they offer on mainstream CPUs/boards over the years either. Long gone are the days of a mainstream board/CPU having 40 lanes total with two full x16 slots, or those running multiplexers to give you four or more slots at 'full' speed.