News Intel Supercomputing 2023 Update: 1-Trillion Parameter AI Model Running on Aurora Supercomputer, Granite Rapids Benchmarks

weber462

Honorable
Support Petals AI. Support BOINC. Normal people should have access to AI. Normal people should have access to processing power.
 

bit_user

Titan
Ambassador
Support Petals AI. Support BOINC. Normal people should have access to AI. Normal people should have access to processing power.
As far as I can tell, Petals is only usable for inference. For training (and many HPC workloads), you need all that processing power to be very tightly integrated with high-bandwidth interconnects, and to have fast access to huge amounts of data.
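For anyone curious what the inference side actually looks like, here's a minimal client sketch based on the petals package's published examples. The model name is illustrative and the API details may have shifted between releases, so treat it as a sketch rather than a recipe:

```python
# Minimal sketch of client-side inference over the Petals swarm.
# Assumes `pip install petals` and that the model below is actually
# being served by volunteers; the model name is an assumption.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "bigscience/bloom-7b1-petals"  # illustrative, not verified
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Each forward pass is routed across volunteer GPUs that each hold a
# slice of the model's layers -- inference only, nothing like training.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A supercomputer is", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0]))
```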

As far as normal people having access to it, there are cloud providers (including Nvidia) where you can rent time.
 

jkflipflop98

Distinguished
Support Petals AI. Support BOINC. Normal people should have access to AI. Normal people should have access to processing power.

Normal people don't have the electrical capacity to even boot up the amount of silicon it takes to run something like this. You basically need your own electrical substation to deliver the amount of energy this kind of thing takes.

Sure, in 20 years any old laptop at Best Buy will be able to do what these systems are doing, but for right now your best bet is to do as bit_user says and rent some cloud time.
 

bit_user

Titan
Ambassador
Normal people don't have the electrical capacity to even boot up the amount of silicon it takes to run something like this.
I think you missed @weber462's point, which was to suggest that distributed computing could substitute for these kinds of supercomputers - not that an ordinary person would have even a single Aurora-class machine in their home.

As I pointed out, distributed computing is only applicable to a rather limited set of problems. It's great when it does apply.

Sure, in 20 years any old laptop at Best Buy will be able to do what these systems are doing, but for right now your best bet is to do as bit_user says and rent some cloud time.
I'm not saying you're wrong, but I think there's probably not (yet) a technology roadmap that would get us there. More difficult than merely scaling compute performance is going to be improving energy efficiency to match.
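A back-of-envelope calculation shows the scale of the gap. All the figures here are rough assumptions on my part (roughly 25 MW for an Aurora-class exascale machine per public Top500 reporting, a 50 W laptop, and perf-per-watt doubling every couple of years):

```python
import math

# Rough, assumed figures -- not official specs.
supercomputer_watts = 25e6  # ~25 MW for an Aurora-class system (approx.)
laptop_watts = 50.0         # typical laptop power budget

# To match the big machine's throughput inside a laptop's power budget,
# perf-per-watt must improve by this factor:
required_gain = supercomputer_watts / laptop_watts  # 500,000x

# If efficiency doubled every ~2 years (an optimistic, assumed pace),
# closing that gap would take:
years = 2 * math.log2(required_gain)
print(f"Need ~{required_gain:,.0f}x efficiency gain -> ~{years:.0f} years")
# -> roughly 38 years, i.e. well past a 20-year horizon.
```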

Anyone interested in the subject should take a close read through this (especially the slides):

 

bit_user

Titan
Ambassador
I threw up a little bit when I read the page about the project written in Fortran.
I don't see where it says that in the article, but I would point out that Fortran has continued evolving, like C, C++, and other mature languages:


I've never used Fortran, but I'd be surprised if they hadn't added enough quality-of-life improvements for it to be something I could live with.