News Vendor crams 128 CPU cores and 28,416 GPU cores into AIO workstation

jlake3

Distinguished
Jul 9, 2014
148
211
18,960
...but why?

I don't work in the medical field and I'm not up on all the requirements of medical imaging, but this seems like a mashup of hot tech buzzwords in search of a problem. AI! ARM! 4K!

Is it for development of medical software? If so, then what exactly is the benefit of making it an all-in-one?

Or is it for deploying into a clinic, where space might be limited? If that's the case, can anything actually use all that compute power? Does the current workflow run on ARM? Does it have the ports and software support to interface with existing imaging machines? My impression as a patient is that imaging machines come with their own specialized control and processing systems, and then a pretty average Windows workstation is used for review and sharing of results. Rather than running "many apps at the same time," that workstation tends to just be running the viewer app for the results at hand, a really dated-looking frontend to their patient record database, a calendar, and maybe an in-office messaging program.

And maybe you could have a fully-local AI that checks scans and flags areas for further review by a human operator or closer-up scanning, which reduces the issues of sending that data out to someone else's data center... but their ending disclaimer makes it sound like rather than this being the appliance to run something they've cooked up, they're just chucking hardware out there on the assumption that the software will be built.
 

ivan_vy

Respectable
Apr 22, 2022
201
211
1,960
jlake3 said:
...but why?
Looks like a solution looking for a problem. It has no place in the doctor's room, and having a full system instead of server-backed compute and storage is a burden and a risk.
Maybe USA hospitals have a lot of cash to burn to give one of these to every doctor.
 
Reactions: jlake3 and Order 66

Wimpers

Distinguished
Feb 26, 2016
30
7
18,535
jlake3 said:
I don't work in the medical field and I'm not up on all the requirements of medical imaging, but this seems like a mashup of hot tech buzzwords in search of a problem. AI! ARM! 4K!
I've worked in a hospital as an IT guy for 4 years, and as far as I know, medical images don't need a lot of processing power.
They even have their own standard, PACS, but it's only used for storage and distribution of the images.

Fujitsu Siemens made the imaging hardware, and HP Z-series workstations (kinda overkill, as they just ran one program) were used by the doctors in combination with EIZO 10-bit monochrome high-contrast greyscale monitors, like this one.

Interpretation of the images was always done by a human and normally double-checked by another specialist.

Maybe AI could be used for preliminary diagnostics or for pointing out interesting/alarming areas to the operators, but that's about it; there's no need for a supercomputer.
 
Reactions: ivan_vy