Hallucination is what kills most LLM applications: there is no guarantee whatsoever that LLM-generated text or code is factually or otherwise correct. So LLM output will always need humans, or complementary and completely different software technologies, to check and correct it. The models are simply not intelligent enough. Not even close.
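To illustrate what such a complementary check can look like, here is a minimal Python sketch: a hypothetical piece of LLM-generated code (the `llm_generated_source` string and its `gcd` function are invented for this example) is never trusted as-is, but is instead validated by deterministic, non-LLM tooling, first a syntax check, then a unit test against known ground truth.

```python
# Minimal sketch: independent, deterministic verification of LLM output.
import ast
import unittest

# Hypothetical LLM output (assumption: the model was asked for a GCD function).
llm_generated_source = """
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a
"""

# Check 1: the text must at least be syntactically valid Python.
ast.parse(llm_generated_source)

# Check 2: execute it in an isolated namespace and verify its behavior
# against known correct answers -- the model's claim is not taken on faith.
namespace = {}
exec(llm_generated_source, namespace)

class TestGeneratedCode(unittest.TestCase):
    def test_known_values(self):
        self.assertEqual(namespace["gcd"](12, 18), 6)
        self.assertEqual(namespace["gcd"](7, 13), 1)

if __name__ == "__main__":
    unittest.main()
```

Note that the checker itself is ordinary, verifiable software: it can catch a wrong answer, but it cannot make the model any smarter.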
Because of this, as many believe, there is a huge AI bubble just waiting to burst. Wishful thinking that LLMs somehow represent a major step toward general AI is the root cause of that bubble.
Think back a few years: blockchains were supposed to revolutionize everything, and so were self-driving cars and autonomous drones. These and other technologies have found some real applications, but those applications pale in comparison to the huge, entirely imaginary revolutions they once promised.
Think about it.