News AI language model runs on a Windows 98 system with Pentium II and 128MB of RAM — Open-source AI flagbearers demonstrate Llama 2 LLM in extreme cond...

The article said:
Of course, a 260K LLM is on the small side
That's putting it mildly! If you look at the output it generated, it's like one step above complete gibberish!

To my mind, this utterly fails as a PR stunt, because the result is both useless and unimpressive. They should've gone for the largest model the machine could handle, unless even that would've barely moved the needle on output quality.

The article said:
Thus EXO hopes to "Build open infrastructure to train frontier models and enable any human to run them anywhere." In this way, ordinary folk can hope to train and run AI models on almost any device
Did he actually say the "any device" part? Because no: training takes a lot of data, a large cluster, and plenty of communication bandwidth. That isn't going to happen on "any device".
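
To put rough numbers on that, here's a back-of-envelope sketch in Python. It uses the common ~6 * parameters * tokens estimate of total training compute from the scaling-laws literature; the throughput figures for a Pentium II and a modern GPU are order-of-magnitude assumptions of mine, not anything from the article.

def train_flops(params: float, tokens: float) -> float:
    # Widely used heuristic: total training compute ~ 6 * N * D FLOPs,
    # where N is parameter count and D is training tokens.
    return 6 * params * tokens

SECONDS_PER_YEAR = 3.15e7

# Assumed sustained throughputs (rough guesses, not measurements):
pentium_ii_flops = 3e8    # ~300 MFLOP/s, generous for a late-90s Pentium II
modern_gpu_flops = 1e14   # ~100 TFLOP/s for a current datacenter GPU

for name, params, tokens in [
    ("260K toy model, 10M tokens", 2.6e5, 1e7),
    ("7B model, 2T tokens",        7e9,   2e12),
]:
    flops = train_flops(params, tokens)
    print(f"{name}: {flops:.1e} FLOPs, "
          f"{flops / pentium_ii_flops / SECONDS_PER_YEAR:.1e} years on the Pentium II, "
          f"{flops / modern_gpu_flops / 3600:.1e} hours on one modern GPU")

A 260K toy model works out to mere hours of training even on ancient hardware, but a 7B-class model lands around ten million Pentium II years. That gap is exactly why you need the cluster and the bandwidth.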

The way I read the "open infrastructure" comment is just that you'd have a training infrastructure you could more easily port from one kind of hardware to another, not that it would vastly lower compute requirements compared with existing training solutions.
 