News NASA's old supercomputers are causing mission delays — one has 18,000 CPUs but only 48 GPUs, highlighting need for updated compute

Status
Not open for further replies.

Co BIY

Splendid
The problem of a lack of talent in high-tech jobs is typical in all levels of government.

Even mediocre tech talent can get great salaries in the private sector, making a government job unappealing.

Add in typical .gov bureaucracy, Federal purchasing rules, a rapidly changing field, and likely political lobbying, and it's amazing anything gets accomplished.

I'd like an article on SpaceX's supercomputer infrastructure. It would make an interesting comparison.
 
  • Like
Reactions: shady28

bit_user

Polypheme
Ambassador
The problem of a lack of talent in high-tech jobs is typical in all levels of government.

Even mediocre tech talent can get great salaries in the private sector, making a government job unappealing.
Not only that, but surely the looming threat (and sometimes practical reality) of government shutdowns, every couple of years, can't help.

I'd like an article on SpaceX's supercomputer infrastructure. It would make an interesting comparison.
Yeah, but theirs is surely RoI-based, whereas NASA's is probably limited by what they have budget for, when the need arises. NASA is not RoI-based, nor do they have time-to-market constraints. They have missions which are much more limited in their funding than they are in time. So, if the computing portion takes longer because you have to run it on older infrastructure, that's easier to live with than scaling back mission objectives so you can buy newer computing hardware.

IMO, they should just negotiate big contracts with cloud computing providers and decommission anything that's significantly less cost-effective to operate than those contracts. They should also keep a transition plan in their back pocket, so they could revert to using their own infrastructure, if the cloud computing costs ever got out of hand.
 

bit_user

Polypheme
Ambassador
I'm not impressed with the quality of this article. FPGAs are better matrix math accelerators than GPUs.
For one thing, you have to keep in mind that people programming them are typically experts in some other domain. So, the programming model they're typically using is probably OpenMP or other standard HPC frameworks.

Also, the benefits of FPGAs are mainly around signal processing and limited-precision arithmetic, whereas most HPC code uses 64-bit floating point. Show me an FPGA that can even come close to competing on fp64 with an H100 or MI300.

NASA might build a better supercomputer with Intel CPUs and FPGAs.
Oops, didn't you hear? Intel is spinning off Altera.

But then that wouldn't make [member of Congress] any richer now would it?
Unless you have specific knowledge to the contrary, I believe Congress doesn't allocate NASA funding down to the level of choosing the suppliers for their compute infrastructure. Congress allocates budget for NASA, perhaps earmarking certain amounts for specific missions, but then I think the rest of the decisions are up to NASA.
 

jlake3

Distinguished
Jul 9, 2014
49
66
18,610
I'm not impressed with the quality of this article. FPGAs are better matrix math accelerators than GPUs. Why would a supercomputer benefit from GPUs versus FPGAs? Without knowing the actual architecture of the HPCs controlled by NASA, all this article is doing is regurgitating some moron at NASA who thinks adding A100s makes a better supercomputer.

NASA might build a better supercomputer with Intel CPUs and FPGAs. But then that wouldn't make Nancy [insult removed] Pelosi any richer now, would it?
I'm sorry, but what?

If you look at the TOP500 supercomputer rankings, none of the top systems are using FPGAs as their primary accelerators/coprocessors. They sound like they could be difficult to develop for, difficult to manage in terms of flashing and synchronizing, less flexible if you're sharing compute time between multiple projects, and still slower and less efficient than an ASIC if your workload is extremely predictable.
 
  • Like
Reactions: bit_user

Co BIY

Splendid
Not only that, but surely the looming threat (and sometimes practical reality) of government shutdowns, every couple of years, can't help.

Federal government shutdowns are a perk for the workers, not a detriment. They always get paid the missing days in a few weeks and it ends up being a bonus vacation. It's the taxpayers who lose in a shutdown, but they also lose if government shutdowns are considered unthinkable.

IMO, they should just negotiate big contracts with cloud computing providers and decommission anything that's significantly less cost-effective to operate than those contracts. They should also keep a transition plan in their back pocket, so they could revert to using their own infrastructure, if the cloud computing costs ever got out of hand.

You would need some pretty talented managers to do that well also. I'm not knowledgeable enough to know how difficult managing the security aspect would be.
 
Unless you have specific knowledge to the contrary, I believe Congress doesn't allocate NASA funding down to the level of choosing the suppliers for their compute infrastructure. Congress allocates budget for NASA, perhaps earmarking certain amounts for specific missions, but then I think the rest of the decisions are up to NASA.
NASA has been extremely screwed by congressional budgeting for years due to earmarking. Somewhere around half their budget is already spoken for due to mandatory programs. Back when Shelby was still a senator, he forced completion of the building being worked on for the Constellation program's engines even after the program was canceled, and NASA couldn't use it for the SLS engines. This sort of thing has dogged the agency for decades, which is partly how we ended up with no shuttle replacement even in serious planning before the program ended.

The ridiculous bureaucracy has undoubtedly caused a huge amount of problems getting/retaining talent. Who really wants to work on programs that are pointless, bad ideas, or have a high likelihood of being retasked? The problem with computing here really seems like a situation of knowing there's no money to fix it, so they keep moving along.
 
  • Like
Reactions: bit_user
Mar 6, 2024
11
0
10
For one thing, you have to keep in mind that people programming them are typically experts in some other domain. So, the programming model they're typically using is probably OpenMP or other standard HPC frameworks.

Also, the benefits of FPGAs are mainly around signal processing and limited-precision arithmetic, whereas most HPC code uses 64-bit floating point. Show me an FPGA that can even come close to competing on fp64 with an H100 or MI300.


Oops, didn't you hear? Intel is spinning off Altera.


Unless you have specific knowledge to the contrary, I believe Congress doesn't allocate NASA funding down to the level of choosing the suppliers for their compute infrastructure. Congress allocates budget for NASA, perhaps earmarking certain amounts for specific missions, but then I think the rest of the decisions are up to NASA.
All infrastructure is political. My "facetious comment" about FPGAs is meant to provoke thought. Without knowing the actual architecture of these HPCs, all we are given is some puff piece that goes out of its way to mention GPUs during a "GPU craze".

Nowhere is there any consideration of the design of the HPCs in question, which may be built using any number of accelerators other than GPUs. It could be ASICs.

It could be Intel processors and AMD (Xilinx) FPGA accelerators.

It could be ARM.

There is no reason to mention GPUs whatsoever in this news article.
 
Last edited by a moderator:

USAFRet

Titan
Moderator
Federal government shutdowns are a perk for the workers, not a detriment. They always get paid the missing days in a few weeks and it ends up being a bonus vacation.
As a person who was subject to one or more of those "shutdowns", you're wrong.

It is not a perk, or vacation. It is maybe 1 or 2 days a month.
And you can't really go anywhere, because if Congress comes to its senses and signs something at 11:58PM, you have to go to work the next day.

Additionally, the pay is retroactive. You are minus that pay for the term of the shutdown.
A lot of uncertainty.

And there is NOTHING in writing where they have to provide that backpay. Yes, they do. But it is NOT in writing.

But this has gone far off topic, so let's leave it.
 
  • Like
Reactions: jlake3 and bit_user
Mar 6, 2024
11
0
10
I'm sorry, but what?

If you look at the TOP500 supercomputer rankings, none of the top systems are using FPGAs as their primary accelerators/coprocessors. They sound like they could be difficult to develop for, difficult to manage in terms of flashing and synchronizing, less flexible if you're sharing compute time between multiple projects, and still slower and less efficient than an ASIC if your workload is extremely predictable.
You're comparing "generalized public supercomputers" designed to sell fractions of time to the highest bidders with in-house systems designed and built to exacting specifications and security requirements.

Which is also why it's ridiculous to imagine using public cloud computing capabilities to run top-secret, proliferation-controlled technology data on them.

All missile technology is considered controlled under nuclear non-proliferation laws and missile technology export controls, etc.

I also want to point out that the Pleiades supercomputer at NASA specifically was built using FPGAs, if I recall.

Which further makes my point. The article (not the person here quoting it) seems buffoonish and wrongly motivated to hop on the GPU bandwagon.
 
Last edited:
Mar 6, 2024
11
0
10
I will draw everyone's attention to the Pleiades supercomputer. The article attempts to denigrate the performance of mission-ready computations, blaming inadequate compute.

I would argue, given that info, they specifically mean the Pleiades supercomputer.

A supercomputer which has almost no GPUs. But the reason it has no GPUs is that for its tasks it uses FPGAs, which are significantly better than GPUs for the work needed.

I stand by the comment that this article sounds a lot like some of the C-level management I know.

They hear about how a new technology is accelerating things and don't understand its application to a piece of architecture and design, or why it's used.

They assume that without it we are "behind," and that it is the performance bottleneck that can be fixed by their great intellectual assessment, which all the plebe techs beneath them have no clue about.

They are Moses come down from the mountain, and here are their ten commandments to get NASA's supercomputers caught up to the 21st century.

Even though Pleiades was built in 2007 and upgraded in 2016.

Also there's the Aitken supercomputer, still under construction. I can't find specific documentation on what matrix math accelerators it uses. Perhaps someone else can comment on Aitken?
 
Last edited:

bit_user

Polypheme
Ambassador
Federal government shutdowns are a perk for the workers, not a detriment. They always get paid the missing days in a few weeks and it ends up being a bonus vacation.
That's not a given, and isn't true for contractors.

Since it also affects payment of suppliers, I expect NASA could probably negotiate slightly better deals if its funding didn't have a tendency to get tied up like that.
 
  • Like
Reactions: thestryker
Mar 6, 2024
11
0
10
So while I've argued that A) Pleiades uses few GPUs and uses FPGAs, and B) Aitken may or may not use GPUs (I can't find documentation), in this last branch of the thread I want to draw attention to WHY this is the case for NASA.

The reason, I think, having brushed up on my orbital mechanics, is that orbits are highly iterative, data-dependent calculations.

As such they are not easily parallelizable.

And not being "parallelizable" they don't get much advantage from GPUs.
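
To make that concrete, here's a minimal sketch of what I mean by iterative, data-dependent computation: a toy two-body propagator where no step can start until the previous one finishes. The constants and step counts are illustrative only, not anything NASA actually runs.

```python
import numpy as np

MU = 3.986e14  # Earth's gravitational parameter, m^3/s^2 (illustrative)

def accel(r):
    """Two-body gravitational acceleration at position vector r."""
    return -MU * r / np.linalg.norm(r) ** 3

def propagate(r0, v0, dt, steps):
    """Fixed-step leapfrog integration: every step depends on the previous one."""
    r, v = r0.copy(), v0.copy()
    for _ in range(steps):        # sequential: step k needs the result of step k-1
        v += 0.5 * dt * accel(r)  # half kick
        r += dt * v               # drift
        v += 0.5 * dt * accel(r)  # half kick
    return r, v

# Rough low-Earth-orbit initial conditions (made-up numbers)
r0 = np.array([7.0e6, 0.0, 0.0])   # m
v0 = np.array([0.0, 7.5e3, 0.0])   # m/s
print(propagate(r0, v0, dt=1.0, steps=5400))
```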

My intent isn't to say anything other than that NASA has built supercomputers useful to its needs, and that adding a bunch of A100s isn't going to improve mission times, etc.

I would go so far as to say the news article is entirely ignorant of the situation. Not out of malice, but we can't all be masters of the universe. The people charged with building supercomputers for NASA have a great understanding of the tasks needed and built something best suited for those tasks.

It's fun to rip on bureaucracy or inefficiencies, but at the end of the day I would say there is zero bottleneck due to compute power at NASA. Regardless of any other dumb stuff they may do.

And it certainly isn't solved with the addition of "modern supercomputers".


They have plenty of compute power.
 

bit_user

Polypheme
Ambassador
I also want to point out that the Pleiades supercomputer at NASA specifically was built using FPGAs, if I recall.
I'm reading that it was "Originally deployed in 2008", which means planning for it happened in the very earliest days of CUDA.
Back then, GPU compute was barely starting to penetrate into the HPC sector.

The article (not the person here quoting it) seems buffoonish and wrongly motivated to hop on the GPU bandwagon.
Take it easy, please. The article cites a report by NASA's Office of the Inspector General. The report, itself, cites the small number of GPUs as an example of NASA failing to keep "up with today’s rapid technological developments and specialized scientific and advanced research computing requirements". The author is just reporting on NASA's own report.

With that said, I tend to agree with the report, that lack of GPUs is a bad sign. Note how they're virtually ubiquitous among machines in the Top 500 list, with notable exceptions where nationalistic/supply concerns may have prevented their use.

The article cited examples where these supercomputers are shared resources, rather than being built for a dedicated purpose. I would therefore expect them to align roughly with what the typical machine looks like, on the Top 500 list. Furthermore, with NASA not being exactly a hotbed of HPC talent these days, they would likely prioritize the ability to use existing libraries and HPC frameworks. Mission managers probably don't want to undertake the costs, risks, and potential delays of doing lots of bespoke software development that's not fundamental to their mission, and therefore would likely prefer to use as many off-the-shelf and mature solutions as possible.

They hear about how a new technology is accelerating things and don't understand its application to a piece of architecture and design, or why it's used.
To me, this sounds a lot like your preoccupation with FPGAs. You have yet to show any evidence that FPGAs would be an asset, in a modern HPC context.

They assume that without it we are "behind," and that it is the performance bottleneck that can be fixed by their great intellectual assessment, which all the plebe techs beneath them have no clue about.
The machine you cited is about half Sandy Bridge/Ivy Bridge nodes and the other half Haswell/Broadwell, with a handful of Cascade Lake GPU nodes thrown in. Nobody in the HPC sector would consider such a machine anything but obsolete.

They are Moses come down from the mountain, and here are their ten commandments to get NASA's supercomputers caught up to the 21st century.
The report cited mission delays. That's the problem they are trying to solve. They don't appear to me to be complaining just for its own sake.

The reason, I think, having brushed up on my orbital mechanics, is that orbits are highly iterative, data-dependent calculations.

As such they are not easily parallelizable.
It seems quite bizarre that you think they need all this compute power for orbital mechanics! There are lots of problems concerning rocket design and simulation that run quite well on modern HPC machines. Furthermore, if we're talking about simulating robotics missions, you can run lots of different simulation scenarios in parallel. They also mentioned astrophysics and climate as other subjects of simulation.

For instance, when the Perseverance rover sent back seismic data from meteor impacts on Mars, they needed to run some compute jobs to try and derive what it can tell us about Mars' inner structure. That's exactly the sort of job I'd expect someone to run on a NASA supercomputer. Or maybe climate modelling on Venus, which we've recently learned probably wasn't always such an inhospitable place!
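
To be concrete about running scenarios in parallel: a sweep over independent parameter combinations is embarrassingly parallel. Here's a minimal sketch, with a purely hypothetical scenario function and parameters, not NASA code.

```python
from multiprocessing import Pool

def run_scenario(params):
    """Stand-in for one independent simulation run (hypothetical parameters)."""
    timestep, drag_coeff = params
    # ...a real mission/physics simulation would go here...
    return timestep * drag_coeff  # placeholder result

if __name__ == "__main__":
    # Each scenario needs no data from any other, so they can all run at once.
    sweep = [(dt, cd) for dt in (0.1, 0.5, 1.0) for cd in (1.8, 2.0, 2.2)]
    with Pool() as pool:
        results = pool.map(run_scenario, sweep)
    print(results)
```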

adding a bunch of A100s isn't going to improve mission times, etc.
This question is best answered by the mission managers, themselves. The Inspector General hopefully talked to them about their unmet needs and why they are either buying their own equipment or using cloud computing, in the cases where they're not using NASA's existing HPC resources.

They have plenty of compute power.
This is quite a statement, coming from an outsider.
 
Last edited:
Mar 6, 2024
11
0
10
I'm reading that it was "Originally deployed in 2008", which means planning for it happened in the very earliest days of CUDA.
Back then, GPU compute was barely starting to penetrate into the HPC sector.


Take it easy, please. The article cites a report by NASA's Office of the Inspector General. The report, itself, cites the small number of GPUs as an example of NASA failing to keep "up with today’s rapid technological developments and specialized scientific and advanced research computing requirements". The author is just reporting on NASA's own report.

With that said, I tend to agree with the report, that lack of GPUs is a bad sign. Note how they're virtually ubiquitous among machines in the Top 500 list, with notable exceptions where nationalistic/supply concerns may have prevented their use.

The article cited examples where these supercomputers are shared resources, rather than being built for a dedicated purpose. I would therefore expect them to align roughly with what the typical machine looks like, on the Top 500 list. Furthermore, with NASA not being exactly a hotbed of HPC talent these days, they would likely prioritize the ability to use existing libraries and HPC frameworks. Mission managers probably don't want to undertake the costs, risks, and potential delays of doing lots of bespoke software development that's not fundamental to their mission, and therefore would likely prefer to use as many off-the-shelf and mature solutions as possible.
Not meant as ad hominem, but most of your points rest on an error in logic. The error is this: that because industry trends have gone down Road A, going down Road B must be less effective.


Industry trends are toward GPUs because there is a HUGE demand for that specific type of compute. Language models especially, to the point where we virtually have a universal language translator available to anyone.

We needed image analysis, text analysis, and language models to drive the development of the human behavioral analytics that comprise something like 30% of US GDP.

As I espouse elsewhere: NASA isn't trying to model human behavior and market people the most "pertinent products".

They are trying to do compute tasks that are not very parallelizable. A whole different set of challenges.

Thus I would say that while Pleiades was built at a time before the rise of NVDA, were Pleiades built today it would incorporate no more GPUs into its architecture.

A great example would be Aitken, but they've been more reluctant to explain exactly which accelerators are in it.

If I come across it, I'll be sure to post here. But I suspect it won't be as many GPUs as we would expect based on industry trends, simply because they are not as useful for the math being performed.

As for the Inspector General, whoever that is: I stand by my statement. Those types of people are "Moses come down the mountain".

In a technological arms race a country can only suffer so many political stooges...

PS - I fully agree this news article is just news. But I stand by my statement that the "NASA report" is not to the benefit of NASA. It's either a case of "last to find out," where someone just heard about GPU abilities but doesn't understand them, or worse, a case of corporations using government to sell products.

Oh and...

PPS - I meant to comment: I get that NASA's supercomputers are "shared". But I stress that they are shared resources for a specific discipline of science. Behavioral analytics firms aren't buying time to crunch numbers on Pleiades or Aitken. They go to something more catered to that.

NASA is very astro-focused. So orbits, at large time scales, and remote sensing data.

I'm particularly interested in the remote sensing data from a "compressed sensing / fractal compression" perspective: can we compute on data that has been compressed using fractal compression?

That stuff doesn't really run on GPU type architectures.
 
Last edited by a moderator:

bit_user

Polypheme
Ambassador
Not meant as ad hominem, but most of your points rest on an error in logic.
I'm still waiting for you to provide any data supporting your advocacy of FPGAs. Until then, I have nothing more to say on the matter.

They are trying to do compute tasks that are not very parallelizable. A whole different set of challenges.
The 10k compute nodes in the machine you cited seem to undermine your claim.

I'd like to see data behind this claim, as well. How much do you actually know about their compute tasks? Can you tell us which programs they run on their machines, how many nodes they use, and what libraries & frameworks they use?

I'm particularly interested in the remote sensing data from a "compressed sensing / fractal compression" perspective: can we compute on data that has been compressed using fractal compression?

That stuff doesn't really run on GPU type architectures.
Data rates from deep space are highly constrained. That means you have to compress as much as possible. Once you get it back on Earth, I'm sure the first thing they do is decompress it, and then further processing parallelizes just fine, even if the decompression step doesn't.
 
Mar 6, 2024
11
0
10
@bit_user

So I wanted to branch further into the "parallel computing" aspect of this discussion. I continue to argue that the results of the study conducted by "whoever" are not in NASA's favor. I'm cynical enough to say it's just product placement by NVDA, for example.

But opinion aside, I would still argue NASA is highly focused on non-parallelizable compute.

I want to emphasize that remote sensing data and orbital mechanics are both very resistant to "parallelization".

GPUs are massively parallel and catered to whatever can be calculated in parallel most efficiently.

I am interested in seeing more supercomputers built with FPGAs because I believe there is an advantage there for breaking through the supercomputer memory-wall problem via compressed compute.

But to do compressed-sense compute, you would basically need hardware programmed to operate on compressed data.

Like "parallelization" is a term in compute of "what can be parallelized?" There is an equally important term in handling big data "what can be computed while compressed?"


That is the future of some branches of super computing.

NASA should chase what's best for the compute tasks it needs most. And then share resources with others needing those tasks.

We will have plenty of language models from tensor processors and A100s without NASA. I think NASA should invest in "compressed sense" computing, to be honest.

We need it to break through the memory wall.

And something no one seems to publicly talk about is that if quantum computing can solve non-polynomial problems quickly (that is, 2^n rather than n^2), then this means we need a classical computer system capable of loading that many states into its memory.

The memory-wall is WORSE for quantum computing.

GPU-focused supercomputing has its place. But it's a sideshow compared to what's needed to win the technology race to quantum supremacy.

Just like ASIC bitcoin miners do nothing to secure a network from a 150-algorithmic-qubit computer.
 
Mar 6, 2024
11
0
10
I'm still waiting for you to provide any data supporting your advocacy of FPGAs. Until then, I have nothing more to say on the matter.


The 10k compute nodes in the machine you cited seem to undermine your claim.

I'd like to see data behind this claim, as well. How much do you actually know about their compute tasks? Can you tell us which programs they run on their machines, how many nodes they use, and what libraries & frameworks they use?


Data rates from deep space are highly constrained. That means you have to compress as much as possible. Once you get it back on Earth, I'm sure the first thing they do is decompress it, and then further processing parallelizes just fine, even if the decompression step doesn't.
There is a difference between brute-forcing parallelization in the node architecture and trying to parallelize a program. A GPU is not capable of parallelizing a program the way a CPU is. FPGAs are tailored to precisely this problem. They accelerate the kind of math used in both orbital mechanics and remote sensing analysis, whereas the GPU accelerates a different kind of math.

So when I mention parallelization, do not think in terms of breaking up a program into 10 pieces and running each piece on a different core. That should be called hyperthreading.

Parallelization in this case is about how the algorithms can be computed: "Can a math problem be parallelized?"

This question has diminishing returns: the time it takes to complete a computation is ultimately bounded by the portion that is not parallelizable.
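
That diminishing-returns behavior is usually written down as Amdahl's law. A quick sketch with made-up fractions:

```python
def amdahl_speedup(parallel_fraction, workers):
    """Amdahl's law: the serial fraction caps the achievable speedup."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / workers)

# Even with 10,000 workers, a job that is only 90% parallelizable tops out
# near 10x, because the remaining serial 10% dominates the runtime.
for p in (0.50, 0.90, 0.99):
    print(f"{p:.0%} parallel -> {amdahl_speedup(p, 10_000):.1f}x")
```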

With regard to compressed data, I mean less about deep space data and its compression.

Right now we have virtually no compressed computation ability.

All compressed data has to be decompressed to be acted upon.

What I'm referring to here, tangentially, is that GPUs are not good for expanding ability into compressed computation. FPGAs or ASICs and specifically designed circuits are.

Ironically this is a field the Russians are WAY AHEAD in.

Regarding data-specific computations: I'm referring to how a math problem needs to act upon the data in memory before it can act upon another memory set.

In some problems this is specialized and not generalized and so it reduces parallelized computation.
 

George³

Prominent
Oct 1, 2022
228
124
760
Yes, the government has nothing to do with it. In my opinion, someone at NASA just got an offer from some company that makes GPU computing accelerators, or supercomputers with such accelerators, to arrange a deal with them, from which he would receive a commission. I don't see a lag issue caused by insufficient computing resources. Radiation-shielded on-board computers in space probes are less powerful than a chip in a smart watch (example 1, example 2), but they handle navigation, control of scientific instruments, and all communication with Earth. In fact, a few probes (Voyager) with even older on-board computers are still alive.
 

bit_user

Polypheme
Ambassador
There is a difference between brute-forcing parallelization in the node architecture and trying to parallelize a program. A GPU is not capable of parallelizing a program the way a CPU is. FPGAs are tailored to precisely this problem. They accelerate the kind of math used in both orbital mechanics and remote sensing analysis, whereas the GPU accelerates a different kind of math.
I know a fair bit about FPGAs and GPU programming. What I asked for was data. If you can't provide any recent data, then there's nothing more to discuss on the matter.

All compressed data has to be decompressed to be acted upon.

What I'm referring to here, tangentially, is that GPUs are not good for expanding ability into compressed computation.
Not exactly what you're talking about, but a related example involving GPU processing of compressed data is how they support compressed textures!

FPGAs or ASICs and specifically designed circuits are.
Depends on the size of the tables, if using table-based compression. There are some decompression benchmarks where AMD's X3D processors smoke anything else, just because they happen to be able to fit the whole tables in their L3 cache. FPGAs don't usually have that much on-die memory.

Regarding data-specific computations: I'm referring to how a math problem needs to act upon the data in memory before it can act upon another memory set.

In some problems this is specialized and not generalized and so it reduces parallelized computation.
You seem so sure of this, but you can't cite a single HPC paper or reference that indicates decompression is a bottleneck?
 

bit_user

Polypheme
Ambassador
someone at NASA just got an offer from some company that makes GPU computing accelerators, or supercomputers with such accelerators, to arrange a deal with them, from which he would receive a commission.
Source? Bribing a government official is a felony. You should have evidence of these allegations, or else don't make them.

I don't see a lag issue caused by insufficient computing resources. Radiation-shielded on-board computers in space probes are less powerful than a chip in a smart watch
That has nothing to do with how much computing resources they need for designing the crafts, running mission simulations, or post processing the data collected during the missions, which are the sorts of things the article is about.
 
Mar 6, 2024
11
0
10
I was thinking about whether there is a way to illustrate what is happening in the two architectures and why.

What I came up with was a spreadsheet, with a single column of unorganized numbers.

A GPU excels with data where, no matter how many columns you subdivide the data into, you can sort that data and stack the columns back onto each other as they were originally and come out with the correct answer.

This is to say that all the data of each set is independent of the other sets.

An FPGA excels at any data that cannot be handled that way, because a designer would examine the data they are working with and then craft a circuit to handle it as efficiently as we know how.

Graphics processors (GPUs) are excellent at process parallelization where the data are independent of each other. This should be called hyperthreading. When the data are fetched, computed, and stitched back together doesn't really matter, just like the inverse square root vector math that is famous in GPU engines.

FPGAs will be programmed to do something similar but with data that cannot be independent of each other but instead are iterative. The processes have to be manually turned into blocks that can then be parallelized.

The nodes of a system are general compute parallelization, as opposed to a GPU, which is catered to parallelized tasks.

The FPGA matters in this case because, as previously stated, data-dependent and iterative tasks like orbits are not parallelizable.
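
A toy contrast of the two patterns (not NASA's actual workloads): element-wise work with no dependencies maps cleanly onto thousands of GPU threads, while a running recurrence, where each step needs the previous result, does not:

```python
import numpy as np

data = np.random.rand(100_000)

# Data-independent work: every element can be computed by a separate thread,
# which is the pattern GPUs are built around.
scaled = data * 2.0 + 1.0

# Data-dependent work: each output needs the previous output first
# (a running recurrence), so the loop can't simply be split across threads.
smoothed = np.empty_like(data)
acc = 0.0
for i, x in enumerate(data):
    acc = 0.9 * acc + 0.1 * x   # step i depends on step i-1
    smoothed[i] = acc
```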
 

ngilbert

Prominent
Mar 30, 2023
6
7
515
I will draw everyone's attention to the Pleiades supercomputer. The article attempts to denigrate the performance of mission-ready computations, blaming inadequate compute.

I would argue, given that info, they specifically mean the Pleiades supercomputer.

A supercomputer which has almost no GPUs. But the reason it has no GPUs is that for its tasks it uses FPGAs, which are significantly better than GPUs for the work needed.

And yet, with all of the documentation about the Pleiades supercomputer, I find no mention of them using FPGAs; it is primarily just a large cluster of Intel Xeon processor nodes, with the newest nodes in the system also using NVIDIA GPUs.

https://www.nas.nasa.gov/hecc/resources/pleiades.html

(And NASA's documentation on Pleiades seems to be a bit more up to date than the article we are commenting on: the article says "For example, all NAS supercomputers use over 18,000 CPUs and only 48 GPUs," while Pleiades itself has 57 Tesla V100s and 64 Tesla K40s from NVIDIA.)
 
  • Like
Reactions: jlake3 and bit_user
Status
Not open for further replies.