Fujitsu Claims World's Fastest CPU

[citation][nom]ravenware[/nom]So...does Fujitsu manufacture them and then sun just sells them in server equipment?[/citation]
I understand quite a few vendors sell them in the computation market. It's just that most people today are only aware of the bittybox processors used in standard business machines that do text processing.
 
@buzznut: Faster than the fastest x86 ever, but it's not quite twice as fast (or more), so it fails? That's like booing someone at the Olympics because they only won the 5k by half a lap.
 
It's amazing to me how many people on a "tech savvy" site still miss the point of RISC computing. Sure, it won't play Crysis. Sure, it's not going to encode DivX movies. Sure, it's not going to do Adobe After Effects rendering for you. But then again, it's not designed to. It's all about FPU calculations. Give this thing a lot of FPU-intensive work like weather pattern calculations or protein folding, and clock for clock it would stomp a mudhole in anything Intel and/or AMD has to offer. Why? Because it's designed to do those kinds of calculations. Saying otherwise is like saying the greatest soccer player in the world is a chump because he can't hit home runs in a baseball game. Makes no sense.
 
I think most kids here are missing a huge chunk of their education.
RISC CPUs are way faster at a lot of tasks and have a far better power/performance ratio.
In fact, this is comparing apples with apples, at least in part.
Apple's PPC computers used to outrun any given Intel box or clone back in the day, and yes, they did fail, but not because the RISC platform failed.

Also, even though this CPU is not designed to be an in-your-desk toaster, it is not impossible to run games on the architecture; in fact, most (if not all) game consoles run on RISC CPUs.
And all the major RISC players run AltiVec/Velocity Engine/VMX or SPARC's VIS, and there are loads of other instruction-set extensions capable of competing with SSE* or MMX (IBM even updated its VMX instruction set to VMX128 for the Xbox 360).

Here comes the kicker: the RISC platform's biggest selling point is also the reason it failed for home computing (or at least the reason certain fruit companies decided there was not enough money to be made on it).
RISC is short for Reduced Instruction Set Computing: lots of calculations take only a few instructions, or even a single one, on an x86 system, while on a RISC system the same calculation needs multiple instructions to reach the same end result.
Breaking the work down into smaller jobs is not a bad thing; in most cases the work will actually be done faster. But programming for such a system is extra work, especially at times when you need to get creative.
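To make that concrete, here's a minimal sketch; the assembly in the comments is illustrative, not actual compiler output from any specific toolchain:

[code]
/* One C statement, two very different instruction sequences. */
void bump(int *counter) {
    *counter += 1;
    /* x86 (CISC): one read-modify-write instruction does it all:
     *     add dword ptr [rdi], 1
     *
     * Classic RISC (load-store): three separate instructions:
     *     lw   t0, 0(a0)    ; load the value from memory
     *     addi t0, t0, 1    ; add one in a register
     *     sw   t0, 0(a0)    ; store the result back
     */
}
[/code]

Same end result either way; the RISC version just spells out each step explicitly.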

If programmed well, a RISC system will beat any x86 system; it's just that x86 systems are cheaper (time is money) to develop for, and IBM/Intel machines (and clones) are everywhere.

I for one wish Apple had stuck with RISC, and back in the day I hoped that by some magic the Pegasos would grab an unbelievable market share, if only because good RISC computing (for example POWER/PPC or Freescale) is, in my opinion, the last real opposition to x86 and a motivator for both AMD and Intel to bake something out of this world :)

PS: kudos for the "this is not exactly comparing apples to apples" remark somewhere up top, it really made me smile 😀.


 
[citation][nom]stradric[/nom]It's RISC or Reduced Instruction Set. Which basically means it sucks for multimedia applications and all that fun stuff that people do on their home PCs. The processing pipeline is much smaller and far less complicated than that of our x86/x64 processors. So, in other words, it would be quite a more difficult task to take on Intel and AMD with regard to consumer CPUs. An undertaking that would almost certainly cost Fujitsu more money than its worth.[/citation]

It's untrue to say that RISC processors are less powerful than CISC processors like Intel's. To tell you the truth, AMD processors are a hybrid between RISC and CISC: most of the AMD CPU is RISC, and only the part that ensures x86 compatibility is CISC. There are many high-performing RISC processors, and RISC designs are generally more efficient than comparable CISC ones. The fastest supercomputers are mainly built on RISC processors, from the IBM Roadrunner, with its Cell-derived PowerXCell chips, to the IBM Blue Gene.

For multimedia processing, Apple used RISC processors for over a decade, and they generally achieved higher computational throughput at a lower clock than the comparable Intel/AMD processors of the time, which was especially true for multimedia applications like movie editing and Photoshop.

The processors inside the Xbox 360, PS3, and Wii all use an IBM PowerPC core, which is itself a RISC architecture, and they are very powerful, especially in the PS3, where the core is coupled with the Cell SPEs.

hemelskonijn is right, and I agree that Apple should have stuck with IBM. I got really pissed that Apple decided to go backwards in technology by going Intel and forsaking the PowerPC architecture.

The Fujitsu SPARC64 is not meant for consumers, but it is still a great engineering feat, and I would love to see more information about this chip, since the SPARC architecture is quite interesting, much like the PowerPC architecture, which I've been starting to look into a lot.
 
[citation][nom]ravenware[/nom]So...does Fujitsu manufacture them and then sun just sells them in server equipment?[/citation]

No, SPARC is actually a licensable architecture, like IBM's POWER. Unlike x86, these companies are open about their architecture, letting other companies build chips based on the same design. It's much like how the ARM architecture became so ubiquitous in the mobile market: it's low power, and it's manufactured by dozens of companies worldwide, which keeps the chips cheap.
 
So this is 128 GFLOPS; how many GHz does that translate to?
If it's faster than a flagship Xeon, I guess it must run near 3-4 GHz per core, and from the size it might very well be a 6- to 9-core chip!
 
128 GFLOPS? Why not just get an Nvidia Tesla GPU (~1 TFLOP per standalone unit)? I'm sure it costs a lot less than this thing.
 
I bet I could get Solaris 10 working on this thing, since it can run UltraSPARC code, or maybe a flavor of Linux, and run virtual machines on it, and maybe do some CG... Or maybe one day Apple will turn to this again. Sadly, programming for it is more complicated, so only the scientific folks will use it. Or it could run an x86 emulator, but that would end up slower than native. I love my old SGI vector processors and my iBook G3 clamshell with its old PowerPC processor. Maybe one day we will have cheap multiprocessor superclusters of these things for cloud gaming, so everyone can run Crysis in high quality on a netbook with an OK display res while hooked up to an external monitor. Cloud gaming is coming, and after that we can start seeing people push superclusters and ultra-powerful scalable processing to the mainstream!
 
hemelskonijn said it right: RISC rules the roost, and those who think x86 is the best need to go hide in a dark closet and never come out. The raw power of RISC can easily outweigh the difference in code execution. Now that storage capacity is just plain huge, the added size of the code is no longer a problem either.

PowerPCs and MIPS were far better performers per MHz than any x86 chip.

Just look at the companies that used them, SGI, ILM, Apple, ok, maybe not that one :)
 
Lots of misinformed people here. First of all, GFLOPS ARE comparable across architectures. Apparently several people here are equating that metric to MIPS/GIPS, in which case x86 performance is indeed understated. However, unlike instructions, flops can be precisely defined, typically as 32- or 64-bit floating-point adds or multiplies since those are by far the most common operations in a wide range of workloads. All modern CPUs conform to the IEEE 754 FP format so the work done is identical. And for the record, modern RISC CPUs have SIMD just like x86 ones. The talk about RISC being bad at multimedia is ridiculous. From personal experience, my Mac Mini's PowerPC G4 with its dedicated 128-bit wide SIMD units (Altivec/VMX) handles H.264 encoding better than my old Athlon XP, despite being clocked slightly slower and having quite a bit less memory bandwidth. Altivec was a great SIMD instruction set, just constrained by the G4's anemic FSB. That was rectified with the G5 and indeed the G5 was a multimedia powerhouse in its time.
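To illustrate the SIMD point: here's a minimal sketch (assuming GCC-style intrinsics) of the same 4-wide single-precision add written for AltiVec/VMX and for SSE. Both operate on IEEE 754 floats, which is exactly what makes the FLOPS figures comparable across architectures.

[code]
/* The same 4-wide FP add on both SIMD instruction sets. */
#ifdef __ALTIVEC__
#include <altivec.h>
vector float add4(vector float a, vector float b) {
    return vec_add(a, b);     /* one instruction, four IEEE 754 adds */
}
#else
#include <xmmintrin.h>
__m128 add4(__m128 a, __m128 b) {
    return _mm_add_ps(a, b);  /* one instruction, four IEEE 754 adds */
}
#endif
[/code]

Either version performs four FLOPs per instruction; count them per second and you get a number you can compare directly across the two chips.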

Second, the CISC vs. RISC battle was fought back in the 80s and 90s; RISC won. Granted, x86 is still technically CISC (since we're talking about the visible instruction set), but x86 CPUs adopted RISC-like "execution cores" over a decade ago when they decoupled the front-end instruction decode from the back-end execution core. AMD has been arguably "RISC" since K5 and Intel since P6. For the vast majority of a modern x86 CPU's pipeline, the stages are working with RISC-like micro-instructions that x86 instructions decode into. The "pure" RISC CPUs might have ultimately succumbed to x86 economies of scale, but certainly the RISC philosophy is very much a part of any modern high-performance CPU.

Lastly, the old RISC giants (MIPS, Sparc, POWER/PowerPC, and Alpha) did have clear performance, clock, and power advantages back in the 80s and 90s. As transistor counts increased, the "x86 translation overhead" became less and less of a factor in Intel's and AMD's CPUs and their comparatively huge volume ensured they could push their architectures further and eventually beat traditional RISC CPUs at their own game. Before I heard this announcement, I thought that IBM's POWER/PowerPC was the sole remaining non-x86 performance-competitive architecture, but I guess there's a new contender. I'm not sure what SPARC's future is with Sun being bought by Oracle though...
 
[citation][nom]ProDigit80[/nom]So this 128Gflops, to how many Ghz does it translate to?If it's faster than a Xeon flagship, I guess it must run near to 3 - 4Ghz/core, and from the size it might very well be a 6 to a 9core chip![/citation]
The chip has 8 cores, but it can probably match a Xeon core while running at a lower frequency than the Intel processor (as RISC processors normally do).
I would point out, for what it is worth, that the Xeon 5500 has "only" 4 cores, so in a per-core comparison the GFLOPS gap is much narrower than the headline figure suggests.
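To put rough numbers on ProDigit80's question: peak GFLOPS is just cores × clock × FLOPs per core per cycle. A quick sketch, assuming the specs reported for this chip (8 cores at about 2 GHz, with enough FMA hardware for 8 double-precision FLOPs per core per cycle):

[code]
#include <stdio.h>

/* Peak GFLOPS = cores * clock (GHz) * FLOPs per core per cycle.
 * The SPARC64 VIIIfx values below are assumptions based on the
 * reported specs, not official figures. */
int main(void) {
    double cores = 8, clock_ghz = 2.0, flops_per_cycle = 8;
    printf("peak: %.0f GFLOPS\n", cores * clock_ghz * flops_per_cycle);
    return 0;  /* prints "peak: 128 GFLOPS" */
}
[/code]

So no 3-4 GHz needed: wide floating-point units on each core get you to 128 GFLOPS at a modest clock.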

[citation][nom]THUS[/nom]The original report also states that Fujitsu’s chip has 2.5 times the “high speed operation” and one-third the power consumption of an Intel CPU[/citation]
I think this comparison was between the SPARC64 VIIIfx and Fujitsu's older SPARC64 VII, not an Intel processor.
 
@hemelskonijn: RISC processors are only faster because they're built on the same concept as old Lotus sports cars: remove everything that isn't necessary to run, and improve what's left. The result is that they run fast. But as you already know, a Lotus Elan isn't desperately practical for shopping at IKEA, or as a company or family car.
That makes it very uninteresting from a business point of view for most people.
Same with RISC: they're good at a select few things, and the majority, even in the computer trade, don't really care, because they won't ever come face to face with such a processor. It's an educational lesson you can very well give in theory, since those few who need in-depth knowledge of it will acquire it through their profession.
 
I believe today's graphics cards have 1-3 teraflops of computing power.

So how is this faster than that?
 
[citation][nom]nimoraca[/nom]I belive todays graphics cards have 1-3 teraflops of computing power.So how is this faster than that?[/citation]
Nobody claimed that the fastest CPU is faster than the fastest GPU.
 
[citation][nom]DXrick[/nom]Good points. No "MMX, SSE, SSE2, SSE3, SSE4, EM64T". They would also need to get licenses from Intel and AMD for the x86 and other techs.[/citation]
Um... you really don't get it, do you? This chip doesn't need any of that stuff to work. SPARC has its own "MMX/SSE/etc." equivalent, called VIS. And EM64T? Um, dude, why would they step backwards? SPARC has been 64-bit for well over a decade now.

My hat's off to Fujitsu! As for those of you who say "why don't they make a desktop version": well, if you want one, you can roll it yourself; there are three SPARC CPUs available under the GPL.
 
[citation][nom]cruiseoveride[/nom]Whos gonna sell Sparc with SUN out of the game now?[/citation]

Whoever wants to. It's an open arch.
 
[citation][nom]starryman[/nom]It is interesting to see how the CPU is 2.5X larger. Must provide better cooling OR it require more cooling. It's cool to see another player in the CPU world. It gets dull hearing the only two contenders AMD and Intel. Same thing with AMD/ATI and Nvidia. We need more teams.[/citation]
This is not a desktop PC chip. It is for high-end servers and workstations. In that market, Intel/AMD are the underdogs, and SPARC, PowerPC, Blue Gene, and the like are the main players. This is the latest version of the SPARC architecture, which I believe originated at Sun Microsystems. The Intel/AMD servers and workstations are "low-end" and are only gaining market share because they're a LOT cheaper than these things are.
 
The only problem you run into with GPGPU tech is the near-complete inability of the architecture to branch-predict, and thus prefetch, accurately. Ever wonder why it's mostly video encoding/rendering work that gets written for GPGPU? It's because those loads can be made what is called massively parallel: you don't need data from early in the set to complete work later in the set. SPARC certainly does not suffer from this problem, and the same goes for POWER4/5/PC from IBM. Another issue with RISC vs. GPGPU is precision. I am unsure of the exact specs for SPARC, but the G5 (based on POWER4) could perform double-precision FLOPs on 128-bit data chunks in a single cycle; it takes any Intel chip other than the i7 4 cycles, and even the i7 takes 2-3, depending on the operand. These and other reasons are why the research/HPC community will be a solid market for RISC architectures for a good while yet.
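A quick sketch of what "massively parallel" means in practice; the first loop below is GPU-friendly, the second is not:

[code]
/* GPU-friendly: every iteration is independent, so the work can
 * be spread across thousands of threads with no branching woes. */
void scale(float *out, const float *in, float k, int n) {
    for (int i = 0; i < n; i++)
        out[i] = in[i] * k;       /* no iteration needs another's result */
}

/* GPU-hostile: each iteration depends on the previous one's result
 * (a loop-carried dependency), so it cannot be split up; a CPU with
 * good branch prediction and prefetching handles this far better. */
float chain(const float *in, int n) {
    float acc = 1.0f;
    for (int i = 0; i < n; i++)
        acc = acc * in[i] + 1.0f;
    return acc;
}
[/code]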

PS: I am typing this on a 1.33 GHz iBook G4, and it encodes video faster than my buddy's new dual-core 2.1 GHz AMD Athlon laptop. I like AMD and all, but those IBM chips were awesome to behold.
 
POWER isn't really a "reduced" architecture, it's a load-store one. It's not a simple instruction set by any means, and it's hardly reduced; it's quite large.

CISC was better when memory was very small: you could fit a lot more into 4K, or 16K, on a computer with a powerful instruction set. As memory exploded in capacity, this became less important, and processor implementation considerations became more salient. The term RISC is really quite silly; although it initially applied to load-store architectures, in many cases these processors have very extensive instruction sets.

These numbers from Fujitsu border on being meaningless. It would be relatively simple to achieve this figure if, say, I needed to run 128 threads to get the performance, and it would also be unimportant unless I could actually run that many. So, before everyone gets excited, we should know what this number is based on. Niagara cores were relatively poor on individual threads, but could run many threads per core and also had many cores per processor. Sun would shout about the maximum throughput, but it wasn't real to most people; in reality the single-threaded performance was quite poor and could be a problem.

I don't know what this processor is, but it might be another Niagara, where they create really simple processors that can run a lot of threads, and in very specific situations offer excellent performance, but not universally so.
 
@Jimmysmitty: Chill out on the Intel fanboyism. In case you missed it, the Terascale chip became Larrabee, which is basically 20 Atom CPUs on one die. That whole thing was a sham: the Terascale chip was running fraudulent private benchmarks that Intel wrote themselves, it could have done at most a tenth of that performance level in a real-world scenario, it had almost no on-chip cache, and it had nowhere near enough memory bandwidth to keep the cores fed. But hey, it fooled you, and that's all that matters.

AMD, on the other hand, has GPGPU technology that can reach 2 teraflops in real-world scenarios.
 