Ampere plans to launch 256-core AmpereOne processors at 3nm next year.
Ampere announces 256-core 3nm CPU, unveils partnership with Qualcomm
The article said:
"So, it teamed up with Qualcomm, and the two companies plan to build platforms for LLM inferencing based on Ampere's CPUs and Qualcomm's Cloud AI 100 Ultra accelerators."

Yeah, but you could just put those Qualcomm PCIe cards in a server with any other kind of CPU for similar AI performance and efficiency. When scaling AI inference, it's really the AI accelerators that primarily drive the efficiency. What CPU sits at the heart of the system doesn't matter as much.
An earlier post said:
"So far, what little performance data I've seen on AmpereOne suggests they don't perform markedly better than the ARM Neoverse V-series cores used by Amazon, Nvidia, and others."

They've seemingly been less open about these than their prior Neoverse-based SoCs; I wonder if that's why. Of course, they'd also said they were targeting specific workload performance increases over general IPC, so I'd love to see that in action.
An earlier post said:
"That's indeed a lot of cores, but I'd point out that AMD Bergamo already has 256 threads (using 128 Zen 4C cores)."

Not to mention GNR/Zen 5 will likely bring that core count to the high-performance segment this year, along with SRF/Zen 5c, so the competition in this space ought to be massive.
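To make the cores-versus-threads distinction concrete, here's a minimal sketch using the figures cited in this thread (128 two-way-SMT Zen 4C cores in Bergamo versus 256 single-threaded AmpereOne cores); the helper function is purely illustrative:

```python
# Hardware threads = physical cores x SMT ways.
# AmpereOne exposes 256 physical cores with no SMT;
# AMD Bergamo reaches 256 threads via 128 cores with 2-way SMT.
def hw_threads(cores: int, smt_ways: int) -> int:
    """Total hardware threads exposed to the OS."""
    return cores * smt_ways

ampere_one = hw_threads(256, 1)  # 256 cores, no SMT -> 256 threads
bergamo = hw_threads(128, 2)     # 128 Zen 4C cores, 2-way SMT -> 256 threads

print(ampere_one, bergamo)  # 256 256
```

Same thread count on paper, but a physical core generally delivers more throughput than a second SMT thread sharing a core's execution resources, so the two designs aren't equivalent.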
An earlier post said:
"That's indeed a lot of cores, but I'd point out that AMD Bergamo already has 256 threads (using 128 Zen 4C cores)."

Isn't AmpereOne just a Neoverse core like Ampere Altra was, just a newer version?
So far, what little performance data I've seen on AmpereOne suggests they don't perform markedly better than the ARM Neoverse V-series cores used by Amazon, Nvidia, and others.
An earlier post said:
"Isn't AmpereOne just a Neoverse core like Ampere Altra was, just a newer version?"

Nope, it's supposed to be semi/full custom, but they still haven't released much information about it.
An earlier post said:
"Isn't AmpereOne just a Neoverse core like Ampere Altra was, just a newer version?"

What @thestryker said, but it's really not surprising if we consider that Ampere Computing released CPUs with in-house designed cores before Altra. From what I heard, Altra was something of a stopgap measure, due to AmpereOne running behind schedule.