The First DirectX 11 Game is BattleForge

Although it's nice to see a DX11 game come out before the OS & hardware technically support it, I'm still interested in seeing if the industry picks up on DX11 better than it did DX10.

Let's face it, DX10 was made out to be a huge deal, but was sort of a flop. Sure, it enabled some bits of eye candy, but I just don't think it became as big a deal as it should have.

ATI has my interest piqued, though, with their 5800 series. I sure hope nVidia is not too far off... at the least it'll make prices come down when they're both in the game.
 

ptroen

Distinguished
Apr 1, 2009
90
0
18,630
"Could you do some research before making a comment. 90% of the comments here is already AMD/ATI fanboyism; we really don't need you to spread false rumors.

Check out this article so you know Nvidia's approach to next gen GPU
http://www.anandtech.com/video/showdoc.aspx?i=3651

I know people here like underdogs (Intel vs. AMD, Nvidia vs. ATI) but some of the stuffs said are pure non-sense from a logical and technical point of view. I wish people would wait until actual review from Tom’s and Anand’s before making a claim."

The link is talking about the "Tesla", which is a corporate/scientific GPU, not a consumer one. You are talking several thousand dollars, easily. When Nvidia brings out a GPU on the market that can compete with AMD's latest offering, then I'm sure people here would gladly look at it. Until then, AMD has the crown.
 

jarnail24

Distinguished
Aug 14, 2008
70
0
18,640
For everybody talking about the next-gen Fermi GPU: that GPU is not even going to be targeted at consumers. It is being specifically designed for Tesla processing. I'm not saying whether it will be better or worse than the 5800, but remember that a Tesla GPU starts at a couple thousand dollars and is not designed for gaming. Now, it might kill the 5800 at gaming, but it's nothing that will compete with it in price. Nvidia could downsize their GPU and make it compete, but the latest AnandTech article is talking about Nvidia aiming it at agencies that spend millions on supercomputers, not home users. But hopefully they downsize and compete with AMD, because we all know that when there is no competition a company stops producing better products. Look at how Intel lost to AMD with the K8, then AMD lost to Intel with the Core 2 Duo. Don't forget that after the ATI 2000 series Nvidia mopped the floor with them, and now, well, ATI is in the lead. Competition is good and gives us better products.
 

jkflipflop98

Distinguished
[citation][nom]ptroen[/nom]"Could you do some research before making a comment. 90% of the comments here is already AMD/ATI fanboyism; we really don't need you to spread false rumors. Check out this article so you know Nvidia's approach to next gen GPU: http://www.anandtech.com/video/showdoc.aspx?i=3651 I know people here like underdogs (Intel vs. AMD, Nvidia vs. ATI) but some of the stuffs said are pure non-sense from a logical and technical point of view. I wish people would wait until actual review from Tom's and Anand's before making a claim." The link is talking about the "Tesla", which is a corporate/scientific GPU, not a consumer one. You are talking several thousand dollars, easily. When Nvidia brings out a GPU on the market that can compete with AMD's latest offering, then I'm sure people here would gladly look at it. Until then, AMD has the crown.[/citation]

AMD's had the crown for all of 5 days or so. Congratulations.
 

Guest

Guest
ptroen, you really should read the white papers on the architecture before you make a comment like that. Fermi is going to be used in BOTH the next generation of Quadro and the 300 series, just binned differently. And the 300 series is going to be coming out first, probably December if I had to guess; they are beginning production in November, and they usually don't start to release to market until a month later.

And as for judging a GPU before it's even out on the market, yes, that is a bit crazy, but if you look at the specs it is literally impossible for Fermi to come out anything less than 50% faster than a 5870. Any game that takes advantage of its MIMD architecture will get a hell of a lot more speedup than that. I am personally shocked at the lack of coverage of it on sites like these. MIMD is the biggest GPU architecture change we have ever had; why is no one talking about it?!
 

hannibal

Distinguished
It's 40% bigger than the 5870, so probably it will be 40% more expensive, and if the power scales with the size? Then this should be 40% faster...
A lot of "if"s in here.

What is sure is that it's 3.0 billion transistors, so it's a big chip (and therefore more expensive to produce). And Nvidia claims that it's faster than the 5870. And GPU makers never lie about the relative power of their products... right??? But yeah, it should be faster with those specs; how much faster remains to be seen.
 

randomizer

Champion
Moderator
[citation][nom]Sythix[/nom]Jellico, nv is cranking out the new fermi card soon, it's supposed to blow away the 5870.[/citation]
Soon? They haven't even got to the stage where they can show off a working board.
 

Ehsan w

Distinguished
Aug 23, 2009
463
0
18,790
[citation][nom]caskachan[/nom]>_> thats why you crank up the resolution up to 1920x1200 (and have a big monitor)[/citation]

yeah I mean seriously, who buys the ati 5870 or 5850 just to pair it with a 19" monitor :p
 

bk420

Distinguished
Apr 24, 2009
264
0
18,780
Can't wait until the GTX300 series for this! Go Nvidia give us something Fantastic, ATI is still lagging again, UGH!
 

omnimodis78

Distinguished
Oct 7, 2008
886
0
19,010
[citation][nom]bk420[/nom]Can't wait until the GTX300 series for this! Go Nvidia give us something Fantastic, ATI is still lagging again, UGH![/citation]
Why would you want ATI to lag? When competition is head-to-head, we win big time, because prices are great, the technology gets better, and driver support improves. I hope that both nVidia and ATI put out kickass cards, because then I will have more options for a cheaper price!
 

iocedmyself

Distinguished
Jul 28, 2006
83
0
18,630
[citation][nom]mcnuggetofdeath[/nom]Am I the only one underwhelmed by Tesselation in DX11. More polygons arent even going to be noticeable unless youve got really high resolutions and textures. The promise of more consistent framerates has me intrigued, but ive got a superclocked 4850 that i believe will hold out for me until Nvidia drops the GTX380, if not longer.[/citation]

No, I'm sure you aren't the only one that doesn't understand what tessellation does, though it's already been part of ATI graphics cards since the HD 2900, almost 3 years ago now. It was in the Xbox before that... and it was supposed to be the defining feature of DX10 before it got killed due to Intel's graphing-calculator-quality IGPs. I don't recall the specific numbers off the top of my head, but the tessellation demos done on the HD 2900 showed that tessellation could take a 100 MB poly mesh and drop it down to a couple of megs using a low-poly mesh and end up with the same visual quality. For anyone that actually pays attention to the software that makes most of these shiny games everyone whines about, you'd know that tessellation has been used for most of the last decade, as NURBS. It's just subdividing a low-poly mesh, though I can think of a better example.

Using a 3D rendering app like Cinema 4D or Maya, you can plot two vertex points on the x-axis, at equal distances on either side of the y/z axis, and make an insanely high-poly sphere just by grouping all existing vertex points, cloning them, and rotating them by half (50%) of the existing angle between two points. So:
Start with two points, rotate them 90 degrees, replot, and you have a square;
take all 4, rotate 45 degrees, and get an octagon;
take all 8, rotate 22.5 degrees;
then 11.25 degrees;
then 5.625;
then do the same along the opposite axis, and you've made a very high-quality poly sphere by replicating and rotating 2 little dots (a rough code sketch of this point-doubling idea is below). Which is a very, very simplistic example of what tessellation does.
Another example would be using grayscale texture maps to generate mountain terrain, like a topographical map.
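If it helps, here's a quick and dirty Python sketch of that same point-doubling idea (just the geometric intuition from above, not how a GPU tessellator actually works):

import math

# toy analogue of mesh subdivision: start with 2 points on a circle and
# repeatedly clone-and-rotate them by half the current angular spacing,
# doubling the point count each pass (2 -> 4 -> 8 -> 16 -> 32 ...)
def refine_circle(levels):
    angles = [0.0, math.pi]                     # two starting vertices
    for _ in range(levels):
        step = (2 * math.pi / len(angles)) / 2  # half the angle between points
        angles += [a + step for a in angles]    # clone and rotate
        angles.sort()
    return [(math.cos(a), math.sin(a)) for a in angles]

for level in range(5):
    print(level, len(refine_circle(level)), "points on the circle")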

[citation][nom]Pei-chen[/nom]Could you do some research before making a comment. 90% of the comments here is already AMD/ATI fanboyism; we really don't need you to spread false rumors.[/citation]

Well actually, it's not quite being a fanboy when you're backing the winner, especially when Nvidia has mostly been pounding on the G80 since 2006. A 40% larger die means a hell of a lot of power, and given that it's Nvidia, the 40% extra cost in the core will probably get marked up by 120% more. Then, sticking 6 gigs of GDDR5 on it might be cool, but I seriously doubt it will be worth the massive surcharge unless LCD resolutions double.

ATI/AMD isn't going for the enthusiast market because it makes up 5-10% of the consumer level. The 5870 does 2.8 teraflops; the 5870 X2 will be out in a couple of weeks at 5.2 teraflops on one card... 10.4 in CrossFire. Each GPU can have its own CPU core now too.

Nvidia is still 6 months away from having a glimmer of a hope of actually launching the 300 line of cards, given all the delays that have existed already: no demos, no actual details. I wouldn't be surprised by a die shrink of a G80 core with loads of memory.

But here's a fun fact: Nvidia never thought too highly of DX10 or DX10.1... and given that they never got around to DX10.1, they like DX11 even less. Which is why they are (or at least had been) trying to sell the idea of a DX11 software renderer... on their DX11 piece of HARDWARE. I used to love Nvidia and refused to get an ATI card... and then the X850 XT came along and I started to realize Nvidia wasn't quite so great.

I would rather spend $1000 building a system, overclocking the hell out of it, and maintaining a frame rate that never dips below 60 on my LCDs than spend nearly that on a single GPU, and again on another GPU, for frame rates that wouldn't give any noticeable or very worthwhile gains and theoretical capabilities that no software exists to take advantage of.

[citation][nom]bk420[/nom]Can't wait until the GTX300 series for this! Go Nvidia give us something Fantastic, ATI is still lagging again, UGH![/citation]
That was either a lame attempt at humor, or you're utterly clueless. Either way, elaborate or perhaps keep it to yourself.
 

scooterlibby

Distinguished
Mar 18, 2008
195
0
18,680
Um, back to this game, specifically. There are no actual eye candy improvements from the DX11 patch, just the promise of smoother FPS, and I saw a bench the other day (sorry, do not remember where) that put the lie to that claim. I'm excited about DX11 - but it's a stretch to call this title DX11.
 

Physicz

Distinguished
Jun 5, 2008
98
0
18,630
I wonder what would happen if Larrabee came randomly into play. It would make for an exciting 3-way grudge match, don't you think? If only Larrabee would be an actual contender... Has anyone else thought of something like that? Or am I the only one? :[
 

Guest

Guest
As much as people want to think that Nvidia taking the crown back is a sure thing, they may have the silicon incarnation of the Spruce Goose (the legendary airplane too big to fly).

Per various online sources, the "demo card" was an obvious fake, and supposedly their yields were only 2% on the initial run (i.e. nowhere near ready for production); there's no telling when, or even if, there will be a product here. You could make an entire wafer of silicon into one chip, but you'd get dismal yields and be forced to run it at 40 MHz. Nvidia may have really fucked up this time; ATI could have their next next-gen card out before Fermi, which would really mean trouble for Nvidia.
 

Guest

Guest
Physicz: Larrabee is not a contender with anything, not unless they ditch the x86 part of it altogether... How on earth do you expect 20 to 40 Atom-like cores that are not even capable of decent instructions-per-clock to compete against 1600 stream processors? Intel design-fail inside; it'd be lucky to compete against Nehalem.
 

Guest

Guest
Fermi could very well wind up being a fail, but it's still less of a fail than Larrabee. Here's an illustration of how Intel lies about the performance of Larrabee (and all of their other products, of course).



Comparison of Gflops per core

//Modern x86 CPU
Core i7 965 = (70gflops/4 cores = ~17gflops per core @ 3.33ghz)

//Note that supposedly Larrabee cores achieve 4x the gflops of Nehalem cores
Larrabee Core iFail = (2000gflops/32 cores = ~62gflops per core @ 2.0ghz)

//Intel Atom that is the basis of Larrabee, times 32 cores for Larrabee
//Note: This is barely better than Nehalem
Atom = 3gflops per core at 1.6ghz, 32 Atom cores x 3 = 96gflops

//extrapolating the difference in clockspeed
96gflops x 1.25 = 120gflops(before massive efficiency losses from latency)

//Radeon 5870, sounds much more realistic, right?
Radeon 5870 = (2700gflops / 1600cores = ~1.7gflops per core at 850mhz)
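Or, if you prefer it as a runnable Python snippet (same numbers as above, which are the claimed figures, not verified specs):

# GFLOPS-per-core arithmetic using the figures quoted above
# (numbers are claims from this post, not verified specifications)
chips = {
    "Core i7 965 (4 cores @ 3.33GHz)":       (70,   4),
    "Larrabee as claimed (32 cores @ 2GHz)": (2000, 32),
    "32x Atom (3 GFLOPS/core @ 1.6GHz)":     (96,   32),
    "Radeon 5870 (1600 SPs @ 850MHz)":       (2700, 1600),
}
for name, (gflops, cores) in chips.items():
    print(f"{name}: {gflops / cores:.1f} GFLOPS per core")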
 

scrumworks

Distinguished
May 22, 2009
361
0
18,780
[citation][nom]Sythix[/nom]Jellico, nv is cranking out the new fermi card soon, it's supposed to blow away the 5870.[/citation]

It will supposedly blow away all your savings and skyrocket your electric bill. That is, once Nvidia has put some of the required components inside their Fermi mock-up.
 

hakesterman

Distinguished
Oct 6, 2008
563
0
18,980
If you want serious fun playing games buy a PS3; it kicks any PC in the A_ss hands down. No driver crashes, no memory errors, no graphics drivers to configure, no game patches to worry about, no need to upgrade with a $250-$400 graphics card every two years, and no hard drive pauses. PS3 Rocks the planet.




 

nottheking

Distinguished
Jan 5, 2006
1,456
0
19,310
To be honest, I'd say that my concerns over nVidia are well-founded, ESPECIALLY as I read more and more articles. nVidia doesn't boast about ANYTHING involving gaming in ANY of them. It's all about "using GPGPUs to accelerate Flash, accelerate math apps, etc."

Obviously, Fermi's a big chip. And it's got a lot of potential math power behind it. But looking it over... I'm worried that it just may be what the specs hint it might be: simply two GT200s taped together. It's simply a move up to twice as many stream processors, twice the texturing units, etc. So it might be twice as potent.

However, my worry is that nVidia forgot the OTHER part of a "new generation": the actual IMPROVEMENTS to those components. Things like beefing up SPs to more efficiently and effectively handle intensive gaming engines, or optimizing the anti-aliasing methods. One might remember that a year and a half back, nVidia dominated in AA; the Radeon HD 4800 took that away. And now, when AA is enabled, it appears the 5870 generally beats the GTX 295, which is... two GT200s taped together (albeit slightly cut down).

This could be problematic for nVidia: there is no question that Fermi will cost more than the 5870. Being a 40% larger chip, and also requiring 50% more RAM chips on-board (the consequence of a 384-bit memory interface versus the 5870's 256-bit), means we're probably looking at $500US or so for the starting flagship of the GTX 300 series. If it hits and shows only quantitative improvements over the GTX 285, and nothing qualitative, then chances are it will only be COMPARABLE to the 5870, and costing nearly half again as much simply won't fly.
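For the RAM-chip point, assuming the usual 32-bit-wide GDDR devices (an assumption for illustration, not something from either company's documentation), the arithmetic works out like this:

# back-of-the-envelope chip count from bus width, assuming 32-bit-wide
# GDDR memory devices (an assumption for illustration only)
def min_chips(bus_width_bits, bits_per_chip=32):
    return bus_width_bits // bits_per_chip

fermi = min_chips(384)    # 384-bit bus -> 12 chips
radeon = min_chips(256)   # 5870's 256-bit bus -> 8 chips
print(fermi, "vs", radeon, "chips:", f"{fermi / radeon - 1:.0%} more")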

[citation][nom]hakesterman[/nom]If you want serious fun playing games buy a PS3; it kicks any PC in the A_ss hands down. No driver crashes, no memory errors, no graphics drivers to configure, no game patches to worry about, no need to upgrade with a $250-$400 graphics card every two years, and no hard drive pauses. PS3 Rocks the planet.[/citation]
Yes, with only 3 particularly popular third-party games... Of which only one is exclusive.

Oh, and let's not forget that the PS3 DOES require updating your firmware. And can crash. And has had all sorts of problems. In fact, it's complicated enough, apparently, that people shell out a whopping $130US to Best Buy to have people hook it up for them.
 
