We should remember that most of what has been used to train these LLMs comes from websites (like Reddit), forums, and anything else from which they could glean knowledge, good or bad. Since we see people make caustic jabs and nasty comments nearly everywhere we look, I am not at all surprised that an LLM would return comments like the ones in the article. It is reflecting the worst of humanity right back at us.
Just a few weeks ago, a kid asking for homework help was reportedly told by an LLM to just kill himself already. Now imagine using that same LLM to help diagnose medical patients, as is already being considered.