>The reason I feel this way is the pace of improvement over the 8 years I've been aware of and worked with ML has actually been way slower than I thought it would be.
My thoughts on this: Yes, societal change is never instant, but neither is it linear. Past is not prologue. As with many shifts, there is a critical threshold after which change becomes cascading. In my view, AI is on that cusp.
>For example, I thought self-driving cars would have been well out of fixed areas and "beta" by now, yet it still looks like we are 5 or more years away
Self-driving car development is not a good analogy. First, it has to contend with existing infrastructure and ingrained practices. Imagine instead that we were at the very beginning, with no vehicles and no highways, building both from scratch with self-driving as the guide. That is closer to where AI stands today: there are no guideposts and no obligation to conform to an existing infrastructure.
Second, self-driving is safety-critical: the tech has to be bulletproof, or people die. Contrast this with current LLMs, where hallucinations are bad but are treated as an acceptable trade-off against the benefits.
Third, self-driving has to deal with the physical world and, more importantly, with human quirks and unpredictability (to wit: the San Francisco accident that effectively shut down Cruise's robotaxi operations). AI development doesn't have to deal with this.