News Phison's new software uses SSDs and DRAM to boost effective memory for AI training — demos a single workstation running a massive 70 billion parameter model

https://cdn.mos.cms.futurecdn.net/BbG9F5vqyJ45LkuymbE2j-970-80.jpg

Since the cost of each system is a fixed cost (besides electricity and maintenance ... and IT people, gotta pay them!), a few questions come up.

Namely, how would you convert time saved training an AI into a dollar amount?

Do the power savings of only having to run the aiDaptiv+ 4-node / 16-GPU setup offset whatever profit is gained, or time saved, by running 50% slower than the baseline 8-node / 30-GPU setup?

Or, in other words ... how soon, if ever, would you recoup the extra $108K spent on the baseline 8-node / 30-GPU setup over the aiDaptiv+ 4-node / 16-GPU one?
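
For what it's worth, here's a rough back-of-the-envelope sketch of that payback question in Python. The $108K gap and the ~50% slowdown are the only figures taken from the comparison above; every other number (power draw per node, electricity price, job length, the dollar value of an hour saved, runs per year) is a made-up placeholder you'd swap for your own.

```python
# Back-of-the-envelope payback sketch for the question above.
# Only EXTRA_CAPEX and the ~2x slowdown come from the comparison being
# discussed; every other constant is an assumption, not a figure from
# the article.
EXTRA_CAPEX = 108_000            # extra cost of the baseline 8-node / 30-GPU system ($)

WATTS_PER_NODE = 3_000           # assumed average draw per node (W)
PRICE_PER_KWH = 0.12             # assumed electricity price ($/kWh)
JOB_HOURS_BASELINE = 1_000       # assumed wall-clock hours per training run on the baseline
SLOWDOWN = 2.0                   # aiDaptiv+ runs ~50% slower -> roughly 2x the hours
VALUE_PER_HOUR_SAVED = 50        # assumed $ value of finishing one hour sooner
JOBS_PER_YEAR = 4                # assumed number of training runs per year

def job_energy_cost(nodes: int, hours: float) -> float:
    """Electricity cost of one training run, in dollars."""
    return nodes * WATTS_PER_NODE / 1000 * hours * PRICE_PER_KWH

baseline_energy = job_energy_cost(8, JOB_HOURS_BASELINE)
aidaptiv_energy = job_energy_cost(4, JOB_HOURS_BASELINE * SLOWDOWN)

# Per run: what the faster baseline earns you (time value) minus any
# extra electricity it burns relative to the aiDaptiv+ box.
hours_saved = JOB_HOURS_BASELINE * (SLOWDOWN - 1)
net_gain_per_job = hours_saved * VALUE_PER_HOUR_SAVED - (baseline_energy - aidaptiv_energy)
net_gain_per_year = net_gain_per_job * JOBS_PER_YEAR

print(f"Baseline energy per run:  ${baseline_energy:,.0f}")
print(f"aiDaptiv+ energy per run: ${aidaptiv_energy:,.0f}")
if net_gain_per_year > 0:
    print(f"Payback on the extra ${EXTRA_CAPEX:,}: {EXTRA_CAPEX / net_gain_per_year:.1f} years")
else:
    print("Under these assumptions the extra spend never pays back.")
```

One thing that falls out of it: with half the nodes running for roughly twice as long, the electricity per training run comes out about the same under these assumptions, so any payback for the pricier baseline has to come from the value of finishing sooner, not from the power bill.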
 
This is kind of assuming the purchaser would only want to train one thing at a time, and/or that there is sufficient supply to fill all of the needs.

Or that this opens things up a bit more for disruption, given that it makes machine learning cheaper, and thus more accessible, without renting time on someone else's nodes.

For those who really want to push the envelope, I assume it also makes things more scalable. It's pretty nice having terabytes of accessible, speedy-enough memory while only using gigabytes of RAM. Though I'm not even a dilettante, I am under the impression that addressable RAM is a limitation of the CPU/GPGPU.
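
To make that "terabytes through a gigabytes-of-RAM window" idea concrete, here's a tiny, hypothetical illustration (not Phison's actual aiDaptiv+ mechanism): memory-mapping a file larger than RAM so the OS pages regions in and out of physical memory on demand. The filename and sizes are arbitrary.

```python
# Not Phison's mechanism, just an illustration of the general idea:
# map a file that can be far larger than RAM and let the OS page the
# pieces you touch into a much smaller physical-memory window.
import numpy as np

N = 500_000_000  # ~2 GB of float32 on disk; scale up well past RAM size to taste
weights = np.memmap("weights.bin", dtype=np.float32, mode="w+", shape=(N,))

# Touching a slice only pulls that region off the SSD into RAM;
# the resident set stays far smaller than the backing file.
weights[:1_000_000] = 1.0
print(weights[:5], "- file can dwarf RAM, resident memory stays small")
weights.flush()
```

The aiDaptiv+ middleware presumably does something far more involved (tiering data between GPU memory, DRAM, and SSD, as the article describes), but the memory-mapping example shows why the CPU/GPU's directly addressable RAM doesn't have to be the hard ceiling on working-set size.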