News Elon Musk says Grok 3.5 will provide answers that aren't from internet sources

Status
Not open for further replies.
Cool. The answers will still be wrong, though. AI is only good at secretarial tasks: schedule an appointment, turn the lights on, reduce the volume on the home theater receiver. Those tasks are fine. The moment you ask for a sample Software Bill of Materials in XML format is when you get fictitious schemas, tags, and attributes. In other words, cue the Gordon Ramsay "WRONG" gif.
 
Does this have to do with switching to a reasoning model, or with pulling from sources that aren't typically accessible on the Internet, such as full-length books, obscure databases, etc.?

If a reasoning model can select books to read, this could be an interesting copyright dodge. Meta is being sued for torrenting books to train its AI. What if you had the books downloaded, but instead of using them as part of the training set, they just sat on an internal server and got accessed by an LLM as needed? You might be able to include a safety measure that ensures full paragraphs aren't (directly) plagiarized from the selected source (badgering the AI not to do this as part of reasoning steps invisible to the end user).
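The access-at-inference idea above is basically retrieval-augmented generation. A toy sketch of the data flow, with a crude word-overlap ranker standing in for a real search index and a stub where the LLM call would go (all names here are hypothetical, not from any actual product):

```python
# Toy retrieval-at-inference sketch: the "library" stays outside the
# model's training data and is only searched per query.

def tokenize(text):
    return set(text.lower().split())

def retrieve(query, library, k=1):
    """Rank stored passages by crude word overlap with the query."""
    q = tokenize(query)
    ranked = sorted(library, key=lambda p: len(q & tokenize(p)), reverse=True)
    return ranked[:k]

def answer(query, library):
    # A real system would hand the retrieved passages to an LLM as
    # context; here we just return them to show the data flow.
    return " ".join(retrieve(query, library))

library = [
    "Photosynthesis converts light energy into chemical energy.",
    "The mitochondria is the powerhouse of the cell.",
]
print(answer("what does photosynthesis convert", library))
```

Whether serving copyrighted text to a model at inference time is legally different from training on it is, of course, exactly the open question.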
 
You say this in response to an article by what looks like a little girl who thinks it's possible for generators to be illegal.

Like seriously?

There are entire branches of science dedicated to defining knowledge that have been refined for hundreds of years, the lessons from which are already integral to the functioning of neural nets. The Internet contains only a tiny fraction of knowable things.

This really boils down to two things: sources and facts.

Using and citing sources are taught because they're important.
They matter for intellectual credit, yes, but they're also critical for checking facts.

Without that, and without basing discussion in facts, you are left with an informational void.
This applies to any media platform or outlet, and to our everyday lives.

Unfortunately, it is the entire point of this move.
It's the same reason fact checks were dropped for community notes as well.

If you're being honest, you want your content validated; it only serves to benefit you.
If you're being dishonest, you don't, and the same applies.
 
I would just prefer that all those pseudo ais indicated what sources they used and why (lowercase on purpose, as they are not really that important apart from generating random plausible next words for a sentence in response to a prompt).
 
What's with all the schizos ranting in this thread? It's simple: it's either what another user said, with the AI citing book sources from a server or similar, OR a process like the new GPT model's, where it "thinks" to make an accurate answer.
More likely it's the latter, given this quote: "creating answers from scratch using a 'reasoning' model."
Meaning it generates each step in a process and uses those steps to build the answer, instead of generating what seems like an appropriate answer from the start.
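That stepwise control flow can be sketched as a loop where each generated step is appended to the context and the final answer is conditioned on all of them, not on the question alone. This is a hypothetical illustration only; `generate_step` is a stub, not any real model API:

```python
# Toy sketch of a reasoning loop: each step feeds the next, and the
# final answer is produced after all intermediate steps.

def generate_step(context):
    # Stub standing in for one model call; a real system would sample
    # the next reasoning step from an LLM given `context`.
    return f"step {context.count('step') + 1}: refine the partial answer"

def reasoned_answer(question, n_steps=3):
    context = question
    for _ in range(n_steps):
        context += "\n" + generate_step(context)
    return context + "\nfinal answer (conditioned on all steps above)"

print(reasoned_answer("What is 17 * 24?"))
```

Contrast with a one-shot model, which would emit the answer directly from the question with no intermediate state to check.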
 