Dell Precision T5600: Two Eight-Core CPUs In A Workstation

Status
Not open for further replies.

folem

Distinguished
I see that you like to test artistic applications. Could you expand your tests to include some engineering CAD applications like SolidWorks, CATIA, NX, Creo Parametric, Inventor, or even the arcane AutoCAD? (Don't actually test AutoCAD, as it is almost entirely CPU-bound and will run just as well on a GPU from the late '90s as on a K6000 or W9000.) These applications are of much more use to many of us than the artistic ones.
 

clinamen

Distinguished
Well, it's better than the T3600. I was involved in setting up a lab that included five T3600s. What a load of crap! First off, it took well over a minute to boot up! Second, it came equipped with a RAID card hooked up...to one SSD and one HDD! On a RAID!

So I first updated the BIOS to a current version. Next, I pulled the RAID card, reconfigured the drives, and switched the BIOS to legacy mode. After that, the unit booted up in the usual time for an SSD.

Another problem: The expansion bays would not hold any drive I cared to install!

These workstations were connected to a server for running dedicated tests, but it took a long time to figure out what to do. In the end, the Dell blog for the T3600 pointed the way.

It ran well after that, but frankly so did a current desktop unit from Dell we had in abundance.
 
Guest

That is why I have my i7-3820 OC'd to 4.2 GHz and am slowly inching it up; the new UEFI versions appear to be more stable on my ASUS X79 board.

FYI, I could not connect to the link you provided.
 

mapesdhs

Distinguished


That's the way to do it! 8)




Odd, maybe it was a site issue; it's working OK atm.

Ian.

 
Guest

Cool, got through now. Nice list. Did you run SPECviewperf 11 at 1920x1080 or some other resolution?
 

mapesdhs

Distinguished
radiovan writes:
> Cool, got through now. Nice list. ...

Thanks! Oodles more tests to do, just wish I had the time.


> ... Did you run SPECviewperf 11 at 1920x1080 or some other resolution?

Just the standard 1920x1080.

Ian.

 
Guest

Good to know, thanks - an apples-to-apples comparison.
 
Guest

Appreciate the offer. Mainly I wanted to know if the numbers I got on my Quadro 2000 were "apples to apples" comparable with yours and Tom's - assuming Tom's ran their test at 1920x1080. Thanks again.
 

mapesdhs

Distinguished


Anytime! 8)

I guess the main thing I've noticed doing these tests is the way main CPU power has such a drastic effect on the final results. OC'd consumer CPUs can give numbers that far exceed typical Xeon systems, i.e. some of the tests seem sensitive to absolute clock rate, especially Pro/E (look at the way I have an OC'd i3 550 @ 4.7 GHz with a Quadro 600 outperforming a stock Pentium G840 @ 2.8 GHz with a Quadro 4000).

Driver changes can skew the results though, e.g. the G840 shows a much higher number for Maya vs. the 2700K, which is bizarre, but the latter was on a much older driver (same motherboard, please note).

I may have to redo some of the tests to iron out these issues.

Ian.



 
G

Guest

Guest

No doubt. I noticed a difference when using SolidWorks with my i7-3820 at stock vs. a conservative OC of 4.0 GHz. I cannot wait for the day when the major CAD software companies optimize their base programs for multi-threading - not holding my breath, though.

Indeed about the drivers; the only problem, though, is that SolidWorks changes their kernel for each new version. Hence, the driver that works well now will likely not work well in the 2014 version, but here's hoping.
 

Draven35

Distinguished
Yes, the tests were run at 1920x1080.

We had something odd happen and the article is missing the page with the Chrome compile on it. We're working on restoring it.
 

mapesdhs

Distinguished
radiovan writes:
> ... I cannot wait for the day when the major CAD software
> companies optimize their base programs for multi-threading - not
> holding my breath, though.

When that happens we'll probably also have piglets doing Mach 2 at 30,000 feet. :D Quite incredible that sw vendors continue to charge enormous sums for their apps when in many cases the code can be as much as 20 years old.


> ... works well now will likely not work well in the 2014 version,
> but here's hoping.

That's a bit of a bummer.

Ian.



 

Draven35

Distinguished


Working on it... it's likely to be Inventor tests in the near future. We're talking to vendors.

 
Guest
It will be interesting to see how Inventor compares to Creo, as I know at one time Inventor used the engine that powered Pro/E - I don't know if it still does.
 

mapesdhs

Distinguished


The original Open Inventor was based on IRIS GL; I was using it on an Indy in 1994 for ocean-systems VR work.

Ian.

 

utomo

Distinguished
It is good; many people need faster computers, for gaming and also for 3D rendering. I hope more dual-processor systems will be offered soon. Meanwhile, we are waiting for 16-core and 32-core parts.
 

mapesdhs

Distinguished


Having lots of cores is fine, but a great many tasks are still only written single-threaded, e.g. numerous plugins in special-effects packages, the main GUI thread in lots of 3D apps, typical desktop apps, etc. It is these tasks for which there has been no useful improvement at all in the last few years as competition in the CPU market has stagnated. Intel is focusing on power consumption and extra cores, with barely a nod towards improved IPC.

Helping someone recently with an After Effects system, I observed a scene with heavy raytracing being rendered very nicely via the three GTX 580 cards I'd installed in their system, but when the scene was made to 'explode' using the Shatter plugin, the render speed ground to a crawl: the GPUs were not being used, and only one core was active on a 3930K @ 4.7 GHz. It turns out the code for Shatter was written in the 1990s. It's this sort of dated code that is holding a lot of productive work back.

I remember, from studying industrial workflow concepts many years ago, the oft-repeated saying that a series of dependent processes can never be any quicker than its slowest component. Doing a render in which half the frames are computed at a speed two orders of magnitude slower than the rest because of outdated code is a classic example. Naturally, the AE user is looking for a replacement plugin that's better written.
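That "slowest component" point is easy to put in numbers. A minimal Python sketch, with frame counts and timings invented purely for illustration (none of these figures come from the thread):

```python
# If half the frames render two orders of magnitude slower than the
# rest, the fast half barely matters to the total wall time.
# All numbers here are made up for the sketch.

def total_render_time(n_frames, fast_s, slow_s, slow_fraction):
    """Wall time when slow_fraction of the frames take slow_s each."""
    n_slow = int(n_frames * slow_fraction)
    n_fast = n_frames - n_slow
    return n_fast * fast_s + n_slow * slow_s

all_fast = total_render_time(200, 1.0, 1.0, 0.0)     # 200 s
mixed    = total_render_time(200, 1.0, 100.0, 0.5)   # 100 + 10,000 = 10,100 s
print(mixed / all_fast)  # ~50x slower overall, dominated by the slow half
```

Speeding up the fast half further changes almost nothing; only fixing the slow plugin moves the total.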

I'm sure others can think of equivalent examples in other fields. Pro/E being single-threaded is typical, I suppose. Hmm, it's a pity we can't somehow impose a pricing penalty on sw that hasn't been updated; that would be good. Maybe then sw vendors would get on with bringing their expensive apps up to date.

Ian.

 

Draven35

Distinguished


Actually, that is a very typical problem in AE compositing; even third-party AE plugins where they haven't rewritten the core tend to do that. Some of the cores were written when two cores was the best you could get, or mmmmmaaybbbee 4 cores, so if they *do* multithread, they only use two cores. The usual method to speed that up is to use the option in AE to render per-core at render time, which is what we do for our AE test. Each frame is rendered on one core with a certain amount of memory allocated to it.
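The frame-per-core idea can be sketched outside AE with ordinary process-level parallelism; `render_frame` below is a hypothetical stand-in for a single-threaded per-frame render, not anything from Adobe's code:

```python
from multiprocessing import Pool

def render_frame(frame_no):
    # Stand-in for a single-threaded per-frame render: do some CPU
    # work and return the frame number alongside its "result".
    return frame_no, sum(i * i for i in range(10_000))

if __name__ == "__main__":
    frames = list(range(8))
    # One worker process per core dedicated to rendering; each frame
    # runs on exactly one worker, as in AE's per-core render option.
    with Pool(processes=4) as pool:
        results = pool.map(render_frame, frames)
    print(len(results))  # 8
```

The plugin itself stays single-threaded; the parallelism comes entirely from rendering different frames in different processes.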

Other compositing solutions tend to multithread better, but most of them were originally written for multiprocessor machines (i.e. SGIs).

 
Guest
Has to be one of the neatest inside setups I have seen. Why can't Dell do better on some of their consumer desktop boxes? Maybe because they're not $8,000, I suppose.
 

mapesdhs

Distinguished
Draven35 writes:
> ... The usual method to speed that up is to use the option in AE to
> render per-core at rendertime, which is what we do for our AE test.

Would that adversely affect the way in which sections that can be
fully threaded are processed?


> ... Each frame is rendered on one core with a certain amount
> of memory allocated to it.

Presumably the tricky part is working out the right amount of RAM to allocate per core, and I suppose one ought to leave out at least one core for general system functions, likewise a degree of RAM? At least this is what I infer from Adobe's docs. Pity the sw can't analyse an ongoing render and decide all these settings on its own; it ought to be possible to do that these days.
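A back-of-the-envelope version of that allocation question: reserve a core and some RAM for the system, then split the rest evenly among render processes. The reserved amounts below are assumptions for the sketch, not Adobe's actual guidance:

```python
def per_process_budget(total_ram_gb, total_cores,
                       reserved_ram_gb=4, reserved_cores=1):
    """Split the machine between render processes and the OS.

    Holds back reserved_cores and reserved_ram_gb for general system
    functions, then divides the remaining RAM evenly per render core.
    """
    render_cores = max(1, total_cores - reserved_cores)
    ram_per_core = (total_ram_gb - reserved_ram_gb) / render_cores
    return render_cores, ram_per_core

# e.g. a 32 GB, 8-core box: 7 render processes with 4 GB each
cores, ram = per_process_budget(32, 8)
print(cores, ram)  # 7 4.0
```

A smarter version would, as suggested above, watch an ongoing render and adjust these numbers itself.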


> them were written originally for systems that were multicore
> (multiprocessor) machines (i.e. SGIs)

Yep, I ran Alias powerender on my 24-CPU Onyx, that was fun. :D

Ian.

 

Draven35

Distinguished


It really only means that they will be isolated to a single core, thus taking longer. Unfortunately, fully-multithreaded AE plugins are few and far between.

And the applications I was referring to are things like Shake (RIP), Nuke, etc. Even Composite (previously Toxik) is better at multithreading.
 

mapesdhs

Distinguished
It depends on the vendor, though; e.g. Discreet deliberately locked Inferno so that it could not use more than 8 CPUs (thus forcing one to use Burn for heavy renders), which was a bit ridiculous when studios often had 32+ CPU systems (I bought just part of a system from SPI which had 36 CPUs - the full system had 48).

I was told Nuke didn't thread very well. Has that changed or was I given false info?

Ian.

 