Here we should define the meaning of "solve problems". A lot of software has been developed to "solve problems" without ever being viewed as intelligent: spreadsheets, RAD tools, programming languages, and so on.
A lot of the things done by LLMs are actually not so wonderful. Some examples: sorting things, summarising text, generating text, generating images, translating text, interpreting text, interpreting images, and so on. Each of these can be achieved with a specific neural network, and I think nobody in the world interprets such an ability as "intelligence".
Could blending all these "abilities" together through other neural networks be considered "intelligence"? For me, no: it is only a more complex neural network that can mimic human behaviour.
The ability to "solve problems" it was never exposed to may exist, but not the ability to "solve problems" it was never trained on. If the network was never trained on image recognition, it will never be able to interpret images; if the LLM is able to sort elements, it was trained on sorting, and so on.
If this is the definition of intelligence, I am pretty sure that current LLMs cannot solve problems, because LLMs solve problems only if they were trained on similar problems.
In practice an LLM finds a known solution to a similar problem, or a solution to a known problem, as you prefer; but it cannot find a new solution to a known problem, or a solution to an unknown problem.