They're pattern engines much more than probability engines, though. The whole way they work is to find patterns and structure in the training data. That said, I think you're right that they aim for plausible-sounding answers, and that has a lot to do with what @derekullo said about them being forced to answer, and with what I said about them not being trained to qualify their confidence in an answer. To the extent they do act as probability engines, the failure mode looks like this: if something resembles something else, then the answer that fits that "something else" should also fit it. Mathematically, that's sound; in practice, not really.
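To make that concrete, here's a toy sketch of the "pick the most plausible answer, no matter what" behavior. The vocabulary and scores are invented for illustration, not from any real model: the model emits whichever token scores highest, even when the runner-up is nearly as likely, and the user never sees that uncertainty.

```python
import math

def softmax(logits):
    # Convert raw scores into a probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate answers and scores for some question.
vocab = ["Paris", "Lyon", "banana"]
logits = [2.1, 1.9, -3.0]  # "Paris" only barely beats "Lyon"

probs = softmax(logits)
best = max(range(len(vocab)), key=lambda i: probs[i])

print(vocab[best])            # the answer the user sees
print(round(probs[best], 2))  # the confidence the user never sees (~0.55)
```

The point of the sketch: the answer is stated flatly even though the model's own distribution says it's close to a coin flip, which is exactly the missing "here's how sure I am" qualifier.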
Usually, when I speculate about something, I try to qualify it with some indication of how certain I am.
I do find it ironic just how many humans are "hallucinating" incorrect statements about AI. Despite having only a cursory understanding of it, plenty of posters in these threads make self-assured claims that aren't well founded in fact.