News Nvidia CEO Jensen Huang and the King of Denmark plug in the country's first AI supercomputer — Gefion leverages 1,528 Nvidia H100 AI GPUs

Is a 1528 GPU computer a "super-computer" by today's standards?
They don't even mention AI as an application?
"quantum computing, medicine, green energy, and more."
Wait: "The Gefion AI supercomputer is owned and operated by the Danish Centre for AI Innovation A/S (DCAI)"
Looks like this has been getting a lot of hype in Denmark for the last year, or maybe that's from NVidia, LOL.
 
Is a 1528 GPU computer a "super-computer" by today's standards?
They don't even mention AI as an application?
"quantum computing, medicine, green energy, and more."
Wait: "The Gefion AI supercomputer is owned and operated by the Danish Centre for AI Innovation A/S (DCAI)"
Looks like this has been getting a lot of hype in Denmark for the last year, or maybe that's from NVidia, LOL.
I'd say it definitely qualifies as a "super-computer", especially since each of those 1,528 GPUs is a compute monster. While they don't specifically mention AI, the applications they do mention can certainly make use of AI (or machine learning) in a non-clickbait way. Biomedical and protein-related problems can be optimized with AI. Green energy and, more generally, smart electric-grid management can also be improved. So it makes sense that Denmark would launch this kind of supercomputer at a place named the "Danish Centre for AI Innovation A/S". Of course both Denmark and Nvidia want to play this up for PR, but I think this is a legitimately good and useful piece of compute.
 
I think what he means is that it's rather small to be exceptionally noteworthy. Elon Musk's supercomputer uses 100,000 of these same Nvidia H100 cards and is expanding to 200,000 within a couple of years. That's roughly 65 of these setups combined running today, and about 130 a couple of years from now. An accomplishment, yes. Big... no.
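For context, a rough sketch of the math behind those multiples, assuming the 1,528-GPU count from the article and the 100,000 / 200,000 H100 figures quoted above:

```python
# Back-of-the-envelope comparison (assumed figures: 1,528 GPUs for Gefion,
# 100,000 / 200,000 H100s for the larger cluster mentioned above).
gefion_gpus = 1_528

for cluster_gpus in (100_000, 200_000):
    ratio = cluster_gpus / gefion_gpus
    print(f"{cluster_gpus:,} H100s is about {ratio:.1f} Gefion-sized systems")
```

Which works out to roughly 65 Gefion-sized systems today and about 130 after the expansion.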
 
Just one more puzzle piece in the demise of the human race. When our computers are offing us one by one, because the AI that man developed sees us as a destructive, war-torn, inferior race that should be eradicated, only then will we see what a huge mistake it was. The reality is it only made a small few filthy rich before the end.
 
Just one more puzzle piece in the demise of the human race. When our computers are offing us one by one, because the AI that man developed sees us as a destructive, war-torn, inferior race that should be eradicated, only then will we see what a huge mistake it was. The reality is it only made a small few filthy rich before the end.
The AI itself is not the threat. The problem is the handlers of these AIs.

AI is advertised as a tool that will make life easier for the masses, but its real purpose is to control the masses. It's already being used by governments around the world to spy on your bank accounts and many other aspects of your life.
 
The AI itself is not the threat. The problem is the handlers of these AIs.
I agree that the near-term threat comes from the users, but you can't possibly rule out the potential of rogue AI.

AI is advertised as a tool that will make life easier for the masses, but its real purpose is to control the masses.
It's not that nefarious, in the sense that there's no grand plan. At its core, it's mostly a bunch of nerds and researchers just pushing the tech as far as they can. However, it's very expensive, which tends to restrict access to the rich and powerful. As with all technologies, humans naturally use it to further their aims. The end effect is that it's being used to spy on us, replace us with automation, and trick us with things like deepfakes and highly targeted, personalized advertising.
 
I guess you have to start with the hardware. But at the end of the day, it's the people who write the code. And the company that manages it.
 
I guess you have to start with the hardware. But at the end of the day, it's the people who write the code. And the company that manages it.
Not for long, though. AI has already proven itself to be better at coding than people. Not to bring politics into it, but we might see a reversal of telling miners "learn to code", where now we will be telling coders "learn to mine" to feed the AI monster with power and energy storage.
 
AI has already proven itself to be better at coding than people
Hmmm... I think this will happen, but do you really think it's there yet? Source?

I haven't tried it, but people have told me that it's okay for doing a first cut at something that basically amounts to boilerplate code or test cases. That intel is probably 6-8 months old, though.

now we will be telling coders “learn to mine” to feed the AI monster with power and energy storage.
Or, maybe just "plug yourselves into The Matrix!"
:D
 
Hmmm... I think this will happen, but do you really think it's there yet? Source?

I haven't tried it, but people have told me that it's okay for doing a first cut at something that basically amounts to boilerplate code or test cases. That intel is probably 6-8 months old, though.


Or, maybe just "plug yourselves into The Matrix!"
:D

Okay, so I found a few articles, but (caveat time) I am not a programmer. A year and a half into a computer science degree, I decided it was not for me after being tasked with making a program that called other programs. To my brain it was program-ception, lol.


https://www.itpro.com/technology/ar...de-generation-really-replace-human-developers
 
So, this is talking just about programming contests, where the challenge is to find the best algorithm for solving a given problem. That means it concerns a problem that's fairly limited in scope and has a simple specification. On top of that, it's dated 2.5 years ago, so it certainly doesn't reflect the current state of the art.

I just skimmed it, but this seems pretty full of the usual caveats I expected. It was published only about 6 months ago.

An interesting space to watch, for sure.
 
So, this is talking just about programming contests, where the challenge is to find the best algorithm for solving a given problem. That means it concerns a problem that's fairly limited in scope and has a simple specification. On top of that, it's dated 2.5 years ago, so it certainly doesn't reflect the current state of the art.


I just skimmed it, but this seems pretty full of the usual caveats I expected. It was published only about 6 months ago.

An interesting space to watch, for sure.
You are probably right that it's not as good right now, but with all the talk of self-improving algorithms and AIs one day being able to improve their own code, it seems like an inevitability.