Asus Supercomputer Motherboard Revealed

I am not an Electrical/Computer Engineer, but if scaling seems to be the path of the uber-workstation, and SLI/Crossfire is a viable solution anywhere in the computing world, why not have scalable motherboards in a folding, double-sided, server-style BTX motherboard/chassis configuration? Alternate the PCI slots, with a motherboard-to-motherboard SLI connection. The rig would be relatively easy to service, and the airflow would be unidirectional.
 
[citation][nom]jacobdrj[/nom]I am not an Electrical/Computer Engineer, but if scaling seems to be the path of the uber-workstation, and SLI/Crossfire is a viable solution anywhere in the computing world, why not have scalable motherboards in a folding, double-sided, server-style BTX motherboard/chassis configuration? Alternate the PCI slots, with a motherboard-to-motherboard SLI connection. The rig would be relatively easy to service, and the airflow would be unidirectional.[/citation]
You know, I wish I'd thought of that. For a workstation in a suitable environment, that is a fantastic idea. Asus? MSI?
 
I'd like to see a block diagram of this beast.
Seven x16 PCIe slots are wonderful and all, but let's do the bandwidth math.
The best X58 chipset port breakdown scenario is officially 4 ports at x8 worth of lanes,
plus another port at x4 and two more at x1, for 7 total possible ports.
Which tells me that all they likely did was take the four x8 ports and add the nForce switches to them simply to allow SLI support (which the board already had since nVidia caved, but nVidia insists it is somehow "better" to have these otherwise useless switches than not, which electrically makes no sense),
and then pop x16 slots onto the x4 and x1 electrical links.
Big deal. Maybe there is some BIOS magic in there to make certain card combinations easier, but since PCIe is auto-negotiating, you could do the same thing at home with a Dremel tool.

You can add all the switches and ports you want, but you are still limited to 42 lanes of PCIe, in very specific configurations.
Even if all 7 slots are somehow electrically x16 through the use of those n200s, you are still shoving all that bandwidth through just two x16 links at the northbridge (think Skulltrail).
Not all that compelling.
This thing is a waste of silicon to begin with, and certain to be an overpriced one at that.
 
Also, the biggest reason for no dual socket is that this isn't a board for server CPUs; you would want, again, Skulltrail for that, or any server motherboard that suits your budget.
Multi-socket setups are for server processors only.
Even Skulltrail just used a rebranded server chipset and Xeon CPUs... that's why it used FB-DIMMs.
 
Three 4870X2s and a 4890 in a case with 8 expansion slots would be sick on this. Of course, you'd need a small fusion reactor to power it all, but that's like 8.6 teraflops. Now imagine seven single-slot, water-cooled 4870X2s. Then you're talking 16.8 teraflops; of course, at that point we are pretty much talking about government funding only.
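A quick sanity check on those numbers, as a minimal sketch. The per-card figures are the commonly quoted theoretical single-precision peaks (roughly 2.4 TFLOPS per HD 4870X2 and roughly 1.36 TFLOPS for an HD 4890); they are not from this thread, so treat them as assumptions:

[code]
# Rough check of the peak-FLOPS claims above (all figures are assumed
# theoretical single-precision peaks, not measured throughput).
TFLOPS_4870X2 = 2.4   # assumed: two RV770 GPUs at ~1.2 TFLOPS each
TFLOPS_4890   = 1.36  # assumed: single RV790 GPU

config_a = 3 * TFLOPS_4870X2 + TFLOPS_4890   # three 4870X2s plus a 4890
config_b = 7 * TFLOPS_4870X2                 # seven single-slot 4870X2s

print(f"3x 4870X2 + 4890: ~{config_a:.1f} TFLOPS")  # ~8.6 TFLOPS
print(f"7x 4870X2:        ~{config_b:.1f} TFLOPS")  # ~16.8 TFLOPS
[/code]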
 
[citation][nom]scook9[/nom] Now imagine seven single-slot, water-cooled 4870X2s. Then you're talking 16.8 teraflops; of course, at that point we are pretty much talking about government funding only.[/citation]
I doubt passive cooling would be enough. You'd want to at least go with phase cooling, if not a multilevel cascade system.
 
[citation][nom]Tindytim[/nom]I doubt passive cooling would be enough. You'd want to at least go with phase cooling, if not a multilevel cascade system.[/citation]
Water cooling is not passive. Maybe you meant standard cooling... but water cooling is hardly standard either.
 
[citation][nom]scook9[/nom]Three 4870X2s and a 4890 in a case with 8 expansion slots would be sick on this. Of course, you'd need a small fusion reactor to power it all, but that's like 8.6 teraflops. Now imagine seven single-slot, water-cooled 4870X2s. Then you're talking 16.8 teraflops; of course, at that point we are pretty much talking about government funding only.[/citation]

Keep in mind, again, the relative bandwidth going into, and out of, these cards.
The best-case scenario is four cards at x8 PCIe, one more at x4, and two more at x1.
Despite the 7 PCIe x16 slots, you are not getting 112 PCIe lanes back to the chipset/processor; you are still only getting 38 lanes (I mistakenly said 42 before; I have no idea where I got that number from).

Keep that in mind when doing your GPGPU calculations: the cards may be able to process really fast, but will you have the bandwidth to feed them at that performance level?
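To put rough numbers on that, here's a minimal sketch. It assumes roughly 500 MB/s per PCIe 2.0 lane per direction (a generic figure, not from the board's spec) and uses the lane breakdown described above:

[code]
# Rough PCIe 2.0 bandwidth sketch for the lane breakdown described above.
# Assumes ~500 MB/s per lane per direction (PCIe 2.0 after encoding overhead).
MB_PER_LANE = 500

widths = {8: 4, 4: 1, 1: 2}   # lane width -> slot count, per the claim above
wired_lanes = sum(width * count for width, count in widths.items())   # 38
implied_lanes = 7 * 16                                                 # 112

print(f"Electrically wired lanes: {wired_lanes} vs. {implied_lanes} implied by seven x16 slots")
print(f"Aggregate bandwidth: ~{wired_lanes * MB_PER_LANE / 1000:.0f} GB/s each way,")
print(f"vs. ~{implied_lanes * MB_PER_LANE / 1000:.0f} GB/s if every slot were a true x16.")
[/code]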
 
[citation][nom]ViPr[/nom]I thought that with new technologies like OpenCL and cloud computing, PCs would be replaced by server racks.[/citation]

I've never heard that CUDA-based cards (or these kinds of boards) can help with cloud computing; maybe they're good for animation rendering, but cloud computing?
 
[citation][nom]kittle[/nom]Bleah... where's the dual-socket Core i7 board? Supercomputer, maybe... but not everything runs on CUDA.[/citation]
Systems like this are built for a particular purpose. They're not built to run everything.
 
[citation][nom]Miribus[/nom]Keep in mind, again, the relative bandwidth going into, and out of, these cards. The best-case scenario is four cards at x8 PCIe, one more at x4, and two more at x1. Despite the 7 PCIe x16 slots, you are not getting 112 PCIe lanes back to the chipset/processor; you are still only getting 38 lanes (I mistakenly said 42 before; I have no idea where I got that number from). Keep that in mind when doing your GPGPU calculations: the cards may be able to process really fast, but will you have the bandwidth to feed them at that performance level?[/citation]


You are getting that all wrong. Asus themselves claim four true PCIe x16 slots: the four blue PCIe slots are wired for x16. You will have true x16 triple SLI with this board.

3 x PCIe 2.0 x16 (@ x16 or x8)
3 x PCIe 2.0 x16 (@ x8)
1 x PCIe 2.0 x16 (@ x16)
 
[citation][nom]Miribus[/nom]The best-case scenario is four cards at x8 PCIe, one more at x4, and two more at x1. Despite the 7 PCIe x16 slots, you are not getting 112 PCIe lanes back to the chipset/processor; you are still only getting 38 lanes (I mistakenly said 42 before; I have no idea where I got that number from).[/citation]
Where the hell are you getting your numbers? All of the information I've seen says the nForce 200 chipset has 62 PCIe lanes, 32 of which are PCIe 2.0. Now, this board has two nForce chips, giving it a total of 124 PCIe lanes, 64 of which are PCIe 2.0. Meaning you could stick four dual-slot Tesla cards on this mobo with each getting x16 2.0 bandwidth, and still have plenty of bandwidth left over.

[citation][nom]sandcomp[/nom]I've never heard that CUDA-based cards (or these kinds of boards) can help with cloud computing; maybe they're good for animation rendering, but cloud computing?[/citation]
Look at GPGPU technology:
http://en.wikipedia.org/wiki/GPGPU
 
All of this talk about cooling.

Why doesn't someone take one of those small beer refrigerators (like the ones we all had in college), or an old basement freezer, and port it through with HDMI, USB, and eSATA cables sealed in place with RTV?

You can attach all your Blu-ray drives, extra hard drives, etc. outside the fridge box.

Then take your PC case (sans optical drives and other extraneous airflow blockers, but leave in all the existing heat sinks and fans), install one of the new PCIe 2.0-based SSDs, strip off the doors and panels, hard-mount the whole rig in the middle of the fridge/freezer, and attach the cables. Turn the temperature control to Arctic and RTV the door shut. After the RTV cures, plug in the fridge.

Once it has gotten down to freezing, boot up, put your OS and stuff on the SSD, and away you go. I gotta think this would run way cooler for a lot less cash than most of the exotic and expensive cooling rigs folks have been trying to shoehorn into their cases.
 
[citation][nom]wayneepalmer[/nom]All of this talk about cooling. Why doesn't someone take one of those small beer refrigerators (like the ones we all had in college), or an old basement freezer, and port it through with HDMI, USB, and eSATA cables sealed in place with RTV?[/citation]
Wouldn't work; refrigerators don't have the cooling capacity to keep up with a constant heat load like that.
http://www.ocforums.com/showthread.php?t=373263
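For a rough sense of the mismatch, here's a minimal back-of-the-envelope sketch. Both wattages are assumptions for illustration (a small fridge compressor can typically move somewhere on the order of 100 W of heat continuously, while a loaded multi-GPU rig can dump well over 1,000 W), not measured figures:

[code]
# Back-of-the-envelope: heat produced by the rig vs. heat a mini fridge
# can remove continuously. Both wattages are assumptions for illustration.
RIG_HEAT_W = 1200       # assumed: CPU plus several high-end GPUs under load
FRIDGE_COOLING_W = 100  # assumed: continuous heat removal of a small fridge

deficit = RIG_HEAT_W - FRIDGE_COOLING_W
print(f"Heat the fridge cannot remove: ~{deficit} W")
print("Result: the sealed box keeps warming until something throttles or dies.")
[/code]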
 
Yo, people: it is a niche board. It is primarily intended for people who run GPGPU code for number crunching, not you amateurs.

"Now imagine 7 single slot watercooled 4870x2s. Then you talking 16.8 teraflops", etc etc etc... very nice, you can add. No, unless you know how to program the thing to extract 16.8TF from it, maybe be hush.
 
It's not a supercomputer if it is DOA. Asus sucks; I've had to RMA numerous Asus motherboards, and their RMA process stinks as well. They'll "repair" a board and send it back, still DOA. Never Asus again for me; their three-year warranty is rotten.
 
I wouldn't necessarily call this thing super; extreme-performance desktop, maybe. Unless it had dual sockets, then we're really talking.

Anyway, it's about time they got rid of all the old legacy devices and support, and I mean all of it (i.e. PATA, PCI, DVI, RCA, x86).
 
[citation][nom]bill gates is your daddy[/nom]You are getting that all wrong. Asus themselves claim four true PCIe x16 slots: the four blue PCIe slots are wired for x16. You will have true x16 triple SLI with this board. 3 x PCIe 2.0 x16 (@ x16 or x8) 3 x PCIe 2.0 x16 (@ x8) 1 x PCIe 2.0 x16 (@ x16)[/citation]

Let me clarify, as I misunderstood the way the n200 in particular works.
The n200 is a pretty heavily optimized PCIe switch, so it's a fair step up from the n100 I had confused it with. While it is true that there are X lanes going to those slots, they all branch at some point from the X58 chipset, which has limited bandwidth, so communication with the processor will still be bottlenecked. It's like having a 12-port GbE switch with a 1 GbE uplink to a server.
For CUDA, or whatever you are using the GPGPU for, it's still interesting if you keep all of the computing away from the CPU until it's done.
I was mistaken; these chips are quite a bit more than just a PCIe switch in GPGPU apps, because the link to the CPU matters less as long as what your cards hand back is small enough for whatever portion the CPU still needs to do.
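The switch-with-a-thin-uplink analogy can be made concrete with a minimal sketch; the lane counts below (32 downstream lanes behind a switch fed by a single x16 uplink) are assumptions for illustration, not figures from Asus:

[code]
# Oversubscription sketch: downstream lanes hanging off a PCIe switch
# vs. the uplink feeding it. Lane counts are assumed for illustration.
DOWNSTREAM_LANES = 32   # assumed: e.g. two x16 slots behind one switch
UPLINK_LANES = 16       # assumed: one x16 link back to the X58

ratio = DOWNSTREAM_LANES / UPLINK_LANES
print(f"Oversubscription: {ratio:.0f}:1")
print("Cards behind the switch can talk to each other at full speed,")
print("but traffic to the CPU/memory shares the single x16 uplink,")
print("just like a 12-port GbE switch with a 1 GbE uplink to the server.")
[/code]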

For graphics, though, it doesn't seem worthwhile over any other 3-way board.
As for running SLI and CrossFire simultaneously, which someone had asked about: that's a driver nightmare. Maybe with some really well-done abstraction layer, especially if one were limited to a virtual environment... I don't know, programming isn't my thing.

[citation][nom]Tindytim[/nom]Where the hell are you getting your numbers? All of the information I've seen says the nForce 200 chipset has 62 PCIe lanes, 32 of which are PCIe 2.0. Now, this board has two nForce chips, giving it a total of 124 PCIe lanes, 64 of which are PCIe 2.0. Meaning you could stick four dual-slot Tesla cards on this mobo with each getting x16 2.0 bandwidth, and still have plenty of bandwidth left over.[/citation]

I've never read 62 lanes for the n200 by itself, but I did read that figure for the 780i platform with the n200.

Did they ever optimize SETI@home for stream processors? I'd definitely fire up a few Quadros for the cause.
 
[citation][nom]wayneepalmer[/nom]Hoohoo, what about a full-size deep freezer or refrigerator?[/citation]
They are not made to constantly cool an active heat source; the things that go into a refrigerator do not generate heat. A phase-change or cascade cooling setup would make more sense and take less work.
 
I seriously think that Tom's needs to sweet-talk someone at Asus, get their little hands on one of these, load it up with four HD 4890s, and see what happens.

If they don't want to do it, then please give me the hardware and I will gladly review it... you won't ever get the hardware back, but that is a moot point.
 