OpenCL And CUDA Are Go: GeForce GTX Titan, Tested In Pro Apps


kansur0

Distinguished
Mar 14, 2006
140
0
18,680
Two things are missing from this article. First: a WORKSTATION CARD vs. GAMING CARD side-by-side comparison. Second: video manipulation. Benchmarking with "Basemark"? All the other tests are done with 3D packages like Maya, LightWave, 3ds Max and Blender. If you haven't heard, After Effects is getting ready to ship with Cinema 4D Lite already included and integrated into the After Effects pipeline. What may be of even greater interest is an article comparing the Mac platform with Final Cut Pro and Premiere, since both of those applications also use GPU acceleration on the timeline, and AE can be tested directly on both platforms. You could also put together a system that specs out straight across, since Apple uses PC parts and charges three times what they are worth. (NVIDIA would be easier to do, since Apple finally released the GeForce 680 platform almost two years too late... talk about a bag of hurt.)

That's my request. Anyone interested in seeing something like this boost this post. Thanks.
 

FormatC

Distinguished
Apr 4, 2011
981
1
18,990
WORKSTATION CARD vs. GAMING CARD side by side comparison
Again: please read the introduction. You get all the info at the bottom of the first page.

The Mac vs. PC comparison is another project entirely, and one that would have to be done in North America. What is a Mac? Outside the U.S. a Mac is mostly a very, very exotic thing without a large user group, and I was also unable to get a Mac system or sample cards for Mac here unless I bought this extremely overpriced hardware out of my own pocket (with a long wait for it, too). Sorry, but this must be done by another guy; my wife would kill me. ;)
 

somebodyspecial

Honorable
Sep 20, 2012
1,459
0
11,310
Was OpenGL tested for the NV cards in AutoCAD? NV calls this out in their doc, so I'd be shocked if DirectX runs as fast as OpenGL. What is the point of testing cards with their worst implementation?

You should benchmark the fastest API for NV vs. the fastest for AMD, period. Anything less is a waste of time. I would AVOID any use of OpenCL at all costs on NV if I could. Nobody buys NV and then goes looking for OpenCL apps.
http://www.nvidia.com/docs/IO/123576/nv-applications-catalog-lowres.pdf
Ctrl-F "autocad": they aren't saying to run it in DirectX. :( Figures we'd get this instead.

I'm also wondering why Octane wasn't used for CUDA with Maya, LightWave, 3ds Max etc., which it works with, and tested against LuxRender etc. as the OpenCL option for AMD. Afraid to make the REAL comparison?
http://render.otoy.com/
The list of apps it can render for is long, right? :) I'm wondering which is faster for 3ds Max, iray or Octane? I'm hoping you'll change the plugin and find out. LuxRender basically has the same list of apps, but via OpenCL for AMD.

Unfortunately, with no PRO CLASS cards in the review, your statements like "As we saw in the Pro/ENGINEER benchmark, these numbers show why it's better to use professional-class hardware and drivers for workstation-oriented software." mean nothing to me. I need to see one included, smacking around the gaming cards, for this to mean something. Why was no Quadro/Tesla/FireGL included? It looks like that is part 2, coming soon, I guess.

For Unigine, why wasn't DirectX tested against OpenCL? Clearly NV has no interest in pushing OpenCL (they'd rather see you use CUDA whenever it's available), so it would have been interesting to see their DirectX vs. OpenCL (or even DirectX on AMD too, if it's faster; why would anyone use a slower mode?). The graph shows OpenGL, so is it OpenCL or OpenGL being used here?
https://www.youtube.com/watch?v=AgwhfdoyTns
DirectX vs. OpenGL on a GTX 660. Basically a tie, but I'm still wondering whether Titan makes one or the other better.

Can't Blender be used via OpenCL on AMD? If so, why was it left out?
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/GPU_Rendering#What_renders_faster.2C_NVidia_or_AMD.2C_CUDA_or_OpenCL.3F
OpenCL is still in the experimental stage, I guess, much like all OpenCL stuff vs. CUDA. Of course, they come right out and say NVIDIA/CUDA is faster, so no point, I guess. :) But you should have run Blender on AMD as well, with LuxRender. You guys seem to pretend that when you run CUDA, AMD has no option. WRONG.

I'm not quite sure how Titan can score the same as a GTX 680 in ANY CUDA test with 2688 CUDA cores vs. the GTX 680's 1536. Is the fluid test capped or something?

Again with the bitmining... LOL. Who does this besides botnets?
https://deepbit.net/
Shouldn't you try something other than OpenCL for the NV cards? Perhaps the CUDA-optimized puddinpop rpcminer? Again, OpenCL on NV will NEVER beat CUDA when it's available for the same job (well, DUH!). I'd be shocked if you can find ANY instance where both CUDA and OpenCL are available and OpenCL wins vs. CUDA on NV. Testing OpenCL on NV when CUDA is available is pointless for any NV owner (who would do this?).
https://bitcointalk.org/index.php?topic=2444.0
The guy has had it available since 2010. Why are you not running CUDA for bitmining? I'm not sure this is the only CUDA option either, but it took all of five seconds to search for it. Also, who is still using a GPU for this with ASICs from the likes of Avalon pushing 69 Ghash/sec vs. 2.2 or so for a 7970?
ASICMINER privately owns a 65 Thash/sec farm and is preparing to sell their upcoming 200 Thash/sec batch publicly. Do you see the T in front of the hash/sec? You aren't going to get your money back on your cards against that kind of competition. Regardless, you can get another 20% from NV cards using CUDA (though all GPUs are a waste of time in light of what I just said). There's no point in benchmarking this with Thash systems in the wild; your 2.2 Ghash/sec Radeon just got buried. I'm pretty sure NV doesn't give a rat's a$$ about mining either. :)
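Just to put numbers on that gap, here's the back-of-envelope math using the figures quoted above (these are the numbers people throw around, not anything measured in this article):

    /* Rough arithmetic with the rates quoted above (treated as Ghash/s);
     * they are claimed figures, not measured values. */
    #include <stdio.h>

    int main(void)
    {
        const double radeon_7970_ghs    = 2.2;      /* claimed GPU rate          */
        const double avalon_asic_ghs    = 69.0;     /* claimed single-ASIC rate  */
        const double asicminer_farm_ghs = 65000.0;  /* the 65 Thash/s farm       */
        const double cuda_uplift        = 1.20;     /* the claimed ~20% CUDA gain */

        printf("ASIC vs. 7970:      %.0fx faster\n",
               avalon_asic_ghs / radeon_7970_ghs);
        printf("Farm vs. 7970:      ~%.0f cards needed to match it\n",
               asicminer_farm_ghs / radeon_7970_ghs);
        printf("CUDA vs. OpenCL NV: about %.0f%% more on the same card\n",
               (cuda_uplift - 1.0) * 100.0);
        return 0;
    }

So even with the CUDA miner's uplift, a GPU is roughly 30x behind a single ASIC and tens of thousands of cards behind the farm.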
https://bitcointalk.org/index.php?topic=178275.0
Bid on an ASICMiner. :) There's no point in doing this on GPUs now, not that I would have wasted my time on it before... LOL. Butterfly Labs shouldn't be too far behind with their own offerings either.

Also, why all the OpenCL image/video stuff? You couldn't find a copy of the #1 suite and fire up CUDA to do the same? Adobe CS6 wasn't available? You couldn't download a trial version to test with? Again, this is showing the WORST case scenario for NV in stuff nobody uses. Only a fool buys NV and turns to OpenCL (which IMHO is currently broken until NV gives up on CUDA, which doesn't seem likely after 7 years and nearly every big-name app using it). It's just plain stupid when you can get an app that does the same thing with CUDA support, and has for YEARS (since CS4, I think), or at least shoot for OpenGL when CUDA is missing.

I mean, can I make money using Rightware? Can Luxmark make money? Why not benchmark what people actually USE for image/video editing? ADOBE... Duh. AMD cards only win BECAUSE you use OpenCL here. This whole review kind of stinks because it implies performance on Titan is poor; but only if you pretend you can't do ALL of the same stuff via CUDA or OpenGL first, and only if you pretend OpenCL is used as much as CUDA in the pro app world (it's NOT). The only point I see in Rightware/Basemark is to show how poorly NV runs OpenCL... LOL. The same story exists for Luxmark.

Unfortunately for AMD, OpenCL is of little use in work apps, as almost everything is ALREADY on CUDA, and GPGPU compute in HPC is OWNED by NV (~50 supercomputers in the top 500 vs., I think, 3 for AMD, which kind of makes my point, right?). I'm guessing that if you actually benchmarked an app we use (Adobe, for any photo/video contests, or heck, any app you actually pay a few hundred dollars or more for, meaning one meant for WORK and making MONEY), you'd see some real differences between these cards, as opposed to showing OpenCL worst-case scenarios that realistically don't exist if you own NV.

AMD will beat NV in anything OpenCL for quite a while. But then, nobody cares, because you can always do the same thing in CUDA or OpenGL, and NV is quite good at both. Testing OpenCL on NV is just the best way to show you have no idea what you bought your video card for and should do a Google search immediately. :) Just google CUDA + appname (insert app or job description) and end your OpenCL buffoonery on NV hardware. There is a reason NV dominates the workstation/HPC sectors: everything there runs CUDA or OpenGL, and only as a last resort is slowly adding OpenCL. Note Blender's comments on OpenCL. AMD has issues. :)

Instead of wasting time on something like Luxmark, why not use LuxRender in Blender for AMD, and Octane/iray or whatever in Blender for NV? Luxmark is purely an OpenCL benchmark, which again is a dumb reason to buy NV. You'd clearly fire up a CUDA render engine instead (like Octane or iray, right?). The only thing good about LuxRender is that it's free. But if you're making money from your machine, I'm guessing the cost of something like Octane means NOTHING ($300?). You could have easily compared the two rendering engines in many apps, since they both work in the same ones (Blender, 3ds Max, C4D, DAZ, Poser, etc.). Why bother running NV in Luxmark? It's a waste of time, and merely shows NV in a worst-case scenario that NOBODY in their right mind would choose when MANY other options using CUDA are available.

The entire point of buying NV for pro app use is CUDA, and a large point of Titan is the 6GB, which this whole article seems to have missed. A 3GB card can't load the same textures etc. that a 6GB card would laugh at (as others have noted). It seems you didn't push this at all (lots of no AA or 2xAA, everything stuck at 1080p, which surely isn't too taxing on 6GB of memory!). To separate the men from the boys you have to push these cards, correct?

What happens when you push things up to resolutions like this guy's water scene (which takes 4 hours to render on two GTX 680s, BTW)? The full resolution is 3840 x 5760 px:
http://andrewtrask3dartist.tumblr.com/
Is a single card even capable of doing this? Does Titan with 6GB make a difference vs. a 680 or 7970? These are the answers I think a person spending $1000 on a card would like to know.
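For a rough sense of what a frame that size costs in memory, here's the simple arithmetic. It only counts a few obvious buffers; real scenes add geometry, textures and acceleration structures on top, which is where 3GB vs. 6GB actually bites:

    /* Back-of-envelope framebuffer arithmetic for a 3840 x 5760 render.
     * Treat this as a lower bound: scene data is not counted. */
    #include <stdio.h>

    int main(void)
    {
        const long long w = 3840, h = 5760;
        const double mb = 1024.0 * 1024.0;

        double rgba32f  = w * h * 4 * 4 / mb;  /* 4 channels, 32-bit float */
        double rgba16f  = w * h * 4 * 2 / mb;  /* half-float variant       */
        double aov_x4   = rgba32f * 4;         /* e.g. four 32f AOV buffers */

        printf("RGBA 32f frame:       %.0f MB\n", rgba32f);
        printf("RGBA 16f frame:       %.0f MB\n", rgba16f);
        printf("Four 32f AOV buffers: %.0f MB\n", aov_x4);
        return 0;
    }

The frame buffers alone land in the hundreds of MB to over a GB, and everything else the renderer keeps resident comes on top of that.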

One more point: though it's a somewhat different kind of renderer, it seems Octane is slower than iray (not sure if you can directly compare these):
http://forum.nvidia-arc.com/showthread.php?12218-Keppler-performance-downgrade

So, thanks for wasting my time reading this so-called article. :) Lots of data, but most of it doesn't represent reality for any CUDA GPU owner, and it also completely misses many opportunities to show AMD vs. NV and what 6GB is for. When you ran CUDA, you acted as if there were no way to run the same thing on AMD. When you ran OpenCL, you wasted time running the same tests on NV when anyone with half a brain would replace them with CUDA or, at worst, OpenGL apps that get the same job done. What a waste.
 

FormatC

Distinguished
Apr 4, 2011
981
1
18,990
The next one who wasn't able to read the intro. Too bad ;)

BTW:
The main API for AutoCAD 2013 is DirectX/D3D, not OpenGL. And the real differences between CUDA and OpenCL are smaller than you think. OpenCL can use the CPU, the GPU, or both together; try that with CUDA. ;) But you will get a complete CUDA section with the NV cards of the last few years, incl. Kepler (Titan and GTX 680).
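For anyone curious what "CPU, GPU or both together" looks like in practice, here is a minimal sketch using the standard OpenCL 1.x C API (error handling trimmed; whether both device types show up depends on the installed runtimes):

    /* Minimal sketch: one OpenCL context holding both the CPU and the GPU
     * device of a platform, so work can be queued to either or both. */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id devices[2];

        clGetPlatformIDs(1, &platform, NULL);

        /* Ask the same platform for one CPU device and one GPU device. */
        cl_int err_cpu = clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU,
                                        1, &devices[0], NULL);
        cl_int err_gpu = clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU,
                                        1, &devices[1], NULL);

        if (err_cpu == CL_SUCCESS && err_gpu == CL_SUCCESS) {
            cl_context ctx = clCreateContext(NULL, 2, devices,
                                             NULL, NULL, NULL);
            printf("Context created with CPU + GPU from one platform.\n");
            clReleaseContext(ctx);
        } else {
            printf("This platform does not expose both device types.\n");
        }
        return 0;
    }

A CUDA context, by contrast, only ever targets an NVIDIA GPU, which is the point being made here.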


 

OpenCL_fun

Honorable
Apr 20, 2013
1
0
10,510
The Intel Core i7-3770K (Ivy Bridge) has support for OpenCL on both GPU and CPU, but your tests only see the OpenCL CPU device. You have old drivers.
I know that you are testing OpenCL GPU performance on the add-in cards, but the drivers should be as new as possible so they don't add noise to the final results.
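A quick way to check what the installed runtimes actually expose is to enumerate every OpenCL platform and device; if the Intel platform only lists a CPU device, the driver is the problem. A minimal sketch with the standard OpenCL C API:

    /* List every OpenCL platform/device so you can see whether the
     * Ivy Bridge GPU (Intel HD Graphics 4000) is exposed at all. */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platforms[8];
        cl_uint np = 0;
        clGetPlatformIDs(8, platforms, &np);

        for (cl_uint p = 0; p < np; ++p) {
            char pname[256];
            clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                              sizeof(pname), pname, NULL);

            cl_device_id devs[8];
            cl_uint nd = 0;
            clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devs, &nd);

            for (cl_uint d = 0; d < nd; ++d) {
                char dname[256];
                cl_device_type type;
                clGetDeviceInfo(devs[d], CL_DEVICE_NAME,
                                sizeof(dname), dname, NULL);
                clGetDeviceInfo(devs[d], CL_DEVICE_TYPE,
                                sizeof(type), &type, NULL);
                printf("%s | %s | %s\n", pname, dname,
                       (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "CPU/other");
            }
        }
        return 0;
    }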
 
As usual, Igor Wallossek has produced a thorough and methodical analysis of the Titan. It is certainly valuable to understand the Titan when considering workstation applications on gaming cards, but the analysis would be even more valuable had it also included comparisons to Quadros and FirePros.

There has been, over the years, a consistent claim that gaming cards are purposely hobbled from using workstation drivers to protect workstation cards, which some consider to be inflated in price for no particular reason other than greed. However, I believe the story is more complex. Without knowing the details, it is still understandable that NVIDIA wants to protect profits on Quadros, on which so much is spent on driver development and of which so many fewer cards are undoubtedly sold. The accusation that Quadros are also locked at comparatively lower clock speeds and lesser in other parameters is true, but experience with both GeForce and Quadro cards suggests that Quadros are carefully configured to ensure reliability. The CPU parallel is, of course, Xeon processors, of which I think few to none were made unlocked for overclocking; they typically have slightly lower clock speeds and run at lower power.

There has been a long lament on forums that NVIDIA ended the earlier ability (I think the G92s were the last) to soft-mod a GeForce into a Quadro, but this was also a combination of both commercial and consumer protection. In the end, money is likely the reason for this product line structure, but NVIDIA has consistently pushed GPU technology into new realms, and I believe it is close to making affordable personal supercomputers of serious capability possible. So, while I cringe at Quadro prices, I also applaud NVIDIA for reinvesting its profits so intelligently, and probably heavily, into new technology.

I have a Dell Precision T5400 that arrived with a Quadro FX 580 (512MB), which I eventually changed to a GTX 285, which in GPU, 512-bit bus and 240 CUDA cores is the 1GB cousin of the 4GB FX 5800, at $350 instead of $3,500. However, during its use, even ultra-reliable applications crashed, renderings (which are CPU-based) would run for 25 minutes and then crash, there were strange shadows in SketchUp, and Solidworks viewports would be balky or would not open. I never understood the exact reasons for these problems, but I also never had those problems with the low-end FX 580. The solution was to swap the GTX 285 for a Quadro FX 4800 (1.5GB), and all my problems disappeared. The Passmark system rating dropped from 1909 to 1866, but everything worked perfectly; I think that is a perfect summation of GeForce vs. Quadro.

So the GeForce vs. Quadro debate will continue, but when GPUs take a step in a new direction like the Titan, it would be more useful to the debate to also compare it to the cards intended for that use. I believe many long-time, serious workstation users, especially those who have had problems with consumer cards, would be as interested or more interested in whether buying a Titan saves $800 compared to a Quadro K5000 than in whether it wastes $500 compared to a GTX 680.

BB

 

RussK1

Splendid
You should benchmark the fastest api for NV vs. the fastest for AMD period.

I agree; I'd rather see CUDA vs. OpenCL (AMD) vs. Quick Sync. I know Quick Sync isn't supported by all apps, but in the apps that do support it, I find Quick Sync is the cat's meow and dominates the discrete options.
 

rubcarso

Honorable
Apr 19, 2013
2
0
10,510
OpenCL is like OpenGL (an open standard too). NVIDIA has cooperated with and supported OpenCL since it emerged. NVIDIA cards have to be tested using OpenCL (it's a good way to make coherent comparisons between AMD, NVIDIA and others). CUDA is amazing, but only on NVIDIA GPUs. Don't be a slave to NVIDIA's world... Open your mind!
 

mapesdhs

Distinguished
somebodyspecial writes:
> Unfortunately with no PRO CLASS cards in the review, your statements like
> "As we saw in the Pro/ENGINEER benchmark, these numbers show why it's better
> to use professional-class hardware and drivers for workstation-oriented
> software." mean nothing to me. I need to see one included smacking around
> the gaming cards for this to mean something. ...

If you're impatient for info, then just look at my results; they should
give you some idea:

http://www.sgidepot.co.uk/misc/viewperf.txt

A GTX 670 is a tad better than two GTX 460s, so you should be able to
scale up from that.

I'm currently hunting for a 580 3GB, no luck so far though.

EDIT Jan/2014: I was able to obtain lots of 3GB 580s eventually. My gaming
PC now has two of them (Palit 783MHz), my AE research 3930K setup has
four of them (MSI 832MHz Lightning Extremes, which can be oc'd to 1GHz+),
and I obtained three more for an AE system I built for a friend (his system
has a Quadro 4000 and three Palit 783MHz cards), i.e. quicker than two
Titans at a fraction of the cost. I obtained a fifth MSI L.E. just for
general benching and easier card swapping.


> ... Why was no quadro/tesla/firegl
> included? It looks like this is part 2 coming soon I guess.

This was covered in the intro, pro data coming later.


> Not quite sure how Titan can score the same as GTX680 in ANY cuda test
> with 2688cuda cores vs. gtx680's 1536. Fluid capped or something?

Because it's NOT about the number of cores; it's much more about the
available I/O bandwidth per core. Titan has a lot more cores, but it
doesn't have equivalently higher bandwidth to feed them all. It's the
same reason why a GTX 580 is quicker than a 680 for CUDA, and why a trio
or so of 460s leaves any of them in the dust, even though the number of
cores involved is much smaller. Have a look at the example on CreativeCow:

http://forums.creativecow.net/thread/2/1019120

EDIT Jan/2014: I also discovered that the shaders in the 500 series run
2X faster than those used in 600/700 series cards. The change was made
to make power & cooling issues easier to deal with, but of course it
means 2X more shaders are required to match the speed of a 580, though
in practice the bandwidth issue means even that's often not enough.
A 780 will leave a 580 in the dust for general 3D & gaming, but a 580
can beat a 780 for CUDA.
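To put rough numbers on that, here is the back-of-envelope bandwidth-per-core calculation, using approximate published reference specs rather than figures from the article:

    /* Approximate reference specs; the point is the ratio, not the decimals. */
    #include <stdio.h>

    struct card { const char *name; int cuda_cores; double mem_bw_gbs; };

    int main(void)
    {
        const struct card cards[] = {
            { "GTX 580",    512, 192.4 },  /* 384-bit GDDR5, hot-clocked shaders */
            { "GTX 680",   1536, 192.3 },  /* 256-bit GDDR5                      */
            { "GTX Titan", 2688, 288.4 },  /* 384-bit GDDR5                      */
        };

        for (int i = 0; i < 3; ++i)
            printf("%-9s  %4d cores  %.1f GB/s  ->  %.3f GB/s per core\n",
                   cards[i].name, cards[i].cuda_cores, cards[i].mem_bw_gbs,
                   cards[i].mem_bw_gbs / cards[i].cuda_cores);
        return 0;
    }

The 580 ends up with roughly three times the bandwidth per core of the Titan, which (together with its hot-clocked shaders) is why it can still win bandwidth-limited CUDA jobs.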


> What happens when you push stuff up to resolutions like this guy's water
> scene (which takes 4hrs to render on 2gtx680's BTW)- The full resolution is
> 3840 x 5760 px: http://andrewtrask3dartist.tumblr.com/ Is a single card even

If you really want to hammer a card, just experiment with any sat image
used for defense imaging; those can be 100K pixels across, with 60GB+
image files. Paging techniques have always been used to deal with them,
such as the Electronic Light Table software used in the 1990s and early
Noughties (I don't know what's used these days). For 3D, any of the more
significant industrial CAD models would bring such cards to their knees;
in 1995 a guy at BP told me one of their full oil rig models contained
more than three trillion triangles, so they have to work on down-sampled
versions constantly extrapolated from the full database. Those guys can
never have enough 3D power.
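The paging idea is simple enough to sketch. This is a toy layout (raw 8-bit greyscale, fixed-size tiles stored back-to-back), not the actual ELT format, but it shows why the image size stops mattering once only one tile is ever resident:

    /* Toy tile-paging sketch: compute a tile's byte offset and read just
     * that tile, so memory use is TILE*TILE bytes regardless of image size. */
    #include <stdio.h>
    #include <stdlib.h>

    #define TILE 1024  /* tile edge in pixels */

    static unsigned char *read_tile(FILE *f, long long img_width,
                                    long long tile_x, long long tile_y)
    {
        long long tiles_per_row = (img_width + TILE - 1) / TILE;
        long long tile_index    = tile_y * tiles_per_row + tile_x;
        long long byte_offset   = tile_index * (long long)TILE * TILE;

        unsigned char *buf = malloc((size_t)TILE * TILE);
        if (!buf)
            return NULL;
    #if defined(_WIN32)
        _fseeki64(f, byte_offset, SEEK_SET);
    #else
        fseeko(f, (off_t)byte_offset, SEEK_SET);
    #endif
        if (fread(buf, 1, (size_t)TILE * TILE, f) != (size_t)TILE * TILE) {
            free(buf);
            return NULL;
        }
        return buf;
    }

    int main(void)
    {
        /* e.g. a 100,000-pixel-wide scan: ~10 GB of 8-bit data */
        FILE *f = fopen("huge_image.raw", "rb");  /* hypothetical file */
        if (!f)
            return 1;
        unsigned char *tile = read_tile(f, 100000LL, 42, 17);
        if (tile) {
            printf("first pixel of tile (42,17): %d\n", tile[0]);
            free(tile);
        }
        fclose(f);
        return 0;
    }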


> So, thanks for wasting my time reading this so called article :) Lots of
> data, just most of it doesn't represent reality for any CUDA gpu owner and

I agree a CS6 AE CUDA test would have been useful, but may I say, I think it's
possible to make one's points without being quite so... harsh. :}

Ian.

 

alidan

Splendid
Aug 5, 2009
5,303
0
25,780
Any reviews with these cards doing HandBrake and Sony Vegas rendering with OpenCL?
I've just started doing video work and am wondering about the performance difference between my 5770 and a newer card.
 

falchard

Distinguished
Jun 13, 2008
2,360
0
19,790
AMD and NVidia have different design principles and business practices. Since 2006, AMD has been designing cards to meet Microsoft's standards. They have also been designing cards to meet OpenCL standards. This means all AMD has to concentrate on is making a card that works well with standards. NVidia decided it needed proprietary software: it needed CUDA, it needed PhysX, and it needed 3D. However, when you have less than 90% market share, no developer is stupid enough to use a proprietary solution.
Naturally, NVidia's solutions have been hot and expensive for doing about the same workload as an AMD card. The NVidia cards were not designed with the common workload in mind.

Now, if you were looking at NVidia's previous generation, it would have been good at compute-based work because it was based on the professional cards. With this generation they wisely decided to reduce this functionality so their cards were no longer hot and expensive. It's a step in the right direction, but it will take years for NVidia to catch up to AMD in all areas.
 

ElMoIsEviL

Distinguished


While I understand the yearning for CS6 benchmarks they're pretty much going to be irrelevant in the next few months:
Photoshop: http://www.thephoblographer.com/2013/02/02/weekend-humor-adobe-set-to-release-cs7-ceo-says-dont-even-try/
After Effects: http://www.motionvfx.com/mblog/adobe_after_effects_cs7,p2367.html
Premiere Pro CS7 Press Release: http://fireuser.com/blog/adobe_premiere_pro_next_amd_firepro_acceleration/

[image: adobe-firepro.gif]


Adobe has moved to OpenCL. I think they'll still keep a CUDA path for nVIDIA cards but now their software runs OpenCL. CUDA's influence is dwindling rapidly and CUDA is nearly on its last dying breath imo.
 

Duke Nucome

Honorable
Mar 6, 2013
238
0
10,690


Agreed. NVIDIA shot themselves in the foot going for what amounted to a short-term gain with proprietary software, PhysX and CUDA.
 

mapesdhs

Distinguished
[citation][nom]ElMoIsEviL[/nom]While I understand the yearning for CS6 benchmarks they're pretty much going to be irrelevant in the next few months: ...[/citation]

I hardly think a lame comparison to a K2000 is relevant, and the graph you've shown is for
Premiere, not AE. Two of the other links don't mention OpenCL at all, and yes of course
fireuser.com and fireprographics.com are definitely unbiased sources of info. :D

There are so many different factors that can affect a test, CPU, RAM I/O, effects used in the
scene or render, VRAM available, etc., that graph is a joke.

If you're seriously talking about FirePro cards with OpenCL being better, and relying on GPU
accelerated processing, then show me how that W5000 fares against a GTX 580, then I'll
take some notice. Heh, a K2000, yeah right; that has the same no. of cores as a GTX 560Ti,
so a GTX 580 would leave a W5000 far behind.


> ... CUDA is nearly on its last dying breath imo.

Nonsense, it's used in many more fields than just digital media, e.g. financial transaction processing.

Ian.

 

Duke Nucome

Honorable
Mar 6, 2013
238
0
10,690


LOL, because financial transactions help the GPU run games? LOL. Please also cite a link for your strange claims. LOL.
 

somebodyspecial

Honorable
Sep 20, 2012
1,459
0
11,310


Well, they shouldn't have titled the article "OpenCL And CUDA Are Go," because I came here expecting ONE vs. THE OTHER in lots of pro apps. I got NONE of what the title implied, and now have to come back just to see more workstation cards benchmarked, yet again in a way that makes the results useless to me (I need to see the best from one side vs. the best from the other, no matter which app it takes to show that, as long as the results are comparable, like rip quality etc.). People are overly sensitive today. They need to get over it and grow thicker skin. :) The last 20 years of the PC have turned our world into wimps who can't take any criticism. :( He did waste my time. I found what I already knew: NV is weak in OpenCL and expects you to SEEK OUT CUDA apps. The reviewer, I guess, doesn't know this. I was POLITE but critical. :)
 

somebodyspecial

Honorable
Sep 20, 2012
1,459
0
11,310


One more comment on your link: I'm thinking both the compute performance of the 580 and its 3GB of memory may have something to do with why it beats the 680 there, so I'm not sure it's all just about bandwidth; there are too many differences between the two tested cards to assume that, I think. But I'd definitely like to see CUDA vs. whatever AMD can run fastest in Adobe CS6 (Photoshop, Premiere and AE, I guess). Since so many use this at home and at work, it should be a pretty meaty section in any PROsumer review. Titan is all about a home user trying to do some of the things pro users do while also being a great gamer, and we've already seen the gaming side. Adobe is surely a large part of the prosumer market; I'm not sure anyone running something like Pro/E buys a license for home use without getting a FireGL/Quadro/Tesla etc.

My guess is that once OpenCL is in Adobe, Tom's will pretend the CUDA path isn't still there and benchmark OpenCL AMD vs. OpenCL NV, which will make me vomit... CUDA is there for a reason even in the next version, probably because NV will pay to keep it updated forever... LOL. I think bitmining is pointless with ASIC machines blowing GPUs away and being sold now. But they did it here, and then ignored the CUDA versions you can use to bitmine. Why? It adds 20% or so to NV's bitmining scores (maybe more; I only saw one tested, and there are multiple CUDA ways to bitmine).
 

somebodyspecial

Honorable
Sep 20, 2012
1,459
0
11,310


The discussion above is about PRO APPS, not games. LOL, because you don't even realize games were never mentioned by him. He's talking mostly Adobe here, and the other guy talking about AE was wondering why they even benched a gaming engine (me too... Titan gaming reviews came months ago; we needed nothing tested in games).

The title of the article is "Tested In Pro Apps," not "Tested in more games."
 

somebodyspecial

Honorable
Sep 20, 2012
1,459
0
11,310

AutoCAD may lean toward DirectX, but OpenGL doesn't work worse because of it. It's there to test, correct? The point is: which is faster on NV or AMD?
http://usa.autodesk.com/adsk/servlet/ps/dl/item?siteID=123112&id=5554010&linkID=9240617
That article is for AutoCAD 2013 (among other versions), and they're saying to turn on OpenGL as a solution for hardware acceleration problems. So you'll have to excuse me if I think you're incorrect, or at least misrepresenting the facts here. That is the company's own site saying to turn on OPENGL.

I read the intro, but how long am I supposed to remember the data in order to compare them? Comments comparing things I can't see yet should be left out until all the data is available. I read it the first time I read the article and immediately gritted my teeth. :)

If the differences between CUDA and OpenCL are smaller than I think, then prove it. I don't believe you until I see it. Blender has all but given up on OpenCL for a while, citing AMD problems. That's just one example, but you get the point. To date you can't even run OpenCL in most software (again, my point: a plugin may get some use out of OpenCL in your app, but the main apps are way behind their CUDA support). You'd have to run CUDA for NV vs. OpenGL (at times maybe DirectX) for AMD in most things, which should be done, BTW, as that is what owners actually have to deal with.

Instead, Tom's has to dip so low that you end up with bitmining, Luxmark and pointless Rightware benchmarks (instead of Adobe etc.? What the heck for?). Why wouldn't someone use Adobe for video/photo work? Because AMD doesn't have OpenCL in it yet?... LOL. My point exactly. They'll benchmark the new Adobe with OpenCL the second it comes out, though, and claim AMD is great (of course not testing CUDA on NV in the process)... LOL.
 

wiyosaya

Distinguished
Apr 12, 2006
915
1
18,990
[citation][nom]FormatC[/nom]One piece of information for the expert: You should know that SolidWorks 2013 does not work with consumer cards! I've tested all workstation cards with 2013 for the next article, but without certified drivers and hardware you can't use it.[/citation]
Thanks for the tip.

FWIW - this makes me wonder even more why SolidWorks was included in this review at all, or why there was no mention of this in the article, or which SolidWorks version was tested.

Not everyone is on a subscription. However, for people in the market for a new CAD package, or people on subscription with SolidWorks who update to 2013 or later, this review has no relevance, and thus, to me, it becomes a question of the integrity of the review.

Something as simple as "this was tested with SolidWorks XXXX, and anyone interested should note that SolidWorks versions 2013 and later will not work with any consumer cards" would have sufficed, even if this was discovered after the review was finished. I assume Tom's does allow edits to published material.

Since it sounds like you are testing 2013 with pro cards anyway, I really hope that, in cases where the software can take advantage of things beyond the rendering capabilities of the card, such as SolidWorks Simulation, you give some benchmarks that indicate what kind of performance can be expected when using those features. People interested in that level of functionality, if they happen to read the review, will be interested in seeing such performance measures as an indicator of the overall value of a pricey pro card for their application.
 

Duke Nucome

Honorable
Mar 6, 2013
238
0
10,690


Oh really? Then in that case I recommend the OP get a workstation card and not one that is designed to run games.
 

somebodyspecial

Honorable
Sep 20, 2012
1,459
0
11,310


http://www.overclock.net/t/1363440/nvidia-geforce-gtx-titan-owners-club/6040
How are these guys running Titans and GTX 670s in SolidWorks 2013, etc., then?
"Originally Posted by vhco1972 View Post

But I ran the benchmark on GTX Titan SLI for SPECviewperf® 11, SPECapcSM for SolidWorks 2013™, SPECapcSM for Maya® 2012 and SPECapcSM for 3ds Max™ 2011. All of their scores are lower than my GTX 670 SLI.mad.gif"

Sounds like they can do it, and the responses don't deny it. Sorry, I don't have time right now to read all 750 pages or so to find someone saying it's wrong; it was just a quick Google to find that.

But as usual things maybe can be hacked :)
http://hackaday.com/2013/03/18/hack-removes-firmware-crippling-from-nvidia-graphics-card/
I'm surprised this hasn't been done via a firmware or driver trick instead of soldering, but I haven't really searched. I haven't bought my next card, so there's no reason to yet... LOL. I'm not sure I'd have the guts to mod something so expensive anyway. Maybe there is a hack for the GTX 670 too... LOL. I could afford that ;)

http://www.eevblog.com/forum/projects/hacking-nvidia-cards-into-their-professional-counterparts/
GTX680 hacked also...670 next, yeah... :)

I think I'll be reading this thread/site for a while before buying my next card...ROFL
 
