Oblivion, the fourth game in the Elder Scrolls RPG series, is a 3D role-playing game that ups the ante for gaming graphics quality. It features HDR lighting effects, enormous view distances, weather and daylight changes, hyper-realistic landscapes, and detailed game characters. All of these make outrageous demands on PC hardware. Is it worth it? Our screenshots show you exactly what your investment will buy.
Well, I have mixed feelings about this article. The first thing I noted, on the second page, was the following line: "The American version of the game has been available in stores since March 23, 2006." Quite surprising to me; I picked up my copy (without a reserve, mind you) on the 21st, the day it was actually available in stores. (The game shipped, to ALL destinations in America and Europe, on the 20th.) I personally think that's a bit of an embarrassing error on the part of TwitchGuru.
I’m surprised that so little mention was given to using the .INI file to tweak the visuals. A lot of the complaints people have about the visuals can usually be resolved very easily there. Those settings appear to have been hidden away because, often enough, the Xbox 360 couldn’t handle them for one reason or another; so, to maintain the claim that the game “runs on 100% settings,” they were removed from the in-game options but left adjustable in the .INI. At least, that’s how it came across to me.
Also, when it comes to performance, I’m scratching my head over the testing method. Like many benchmarks, I suspect the testers may have largely been recording while the game was loading a new cell, which will effectively bring a system equipped with any video card to its knees; at those points, the video card isn’t the bottleneck, but rather the CPU, the RAM, and even the hard drive. I normally pause during loads and resume when they finish a few seconds later; as a bonus, when the game doesn’t have to process AI or physics, the loading goes about 3-4 times as fast.
At any rate, the performance numbers they gave seem a bit grimmer than actual performance does to me. On my X800XT I average 20-25 fps outside (25-30 fps in clearer mountainous areas, 15-20 fps in the deep parts of the forest), which is good enough, as I’m not playing Counter-Strike. I’m also running at 1024x768 with 6x AA, 8x AF, and all of the settings ranging from 100% to well beyond that, via the .INI file. First off, I fixed the water so that it reflects everything. Then I adjusted the scaling distances, and then how long blood decals stay. As for the AF, I note that raising it to 16x means the game almost always uses significantly more than 256MB of video RAM, which causes an additional, undue performance drain on my system. Granted, at 8x I can then see the mip-mapping, but the few extra FPS were well worth it.
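For anyone wanting to try the scaling-distance and blood-decal tweaks I mentioned, here is a rough sketch of the kinds of lines involved. The setting names and values are from memory, so treat them as assumptions: verify them against your own Oblivion.ini (and keep a backup copy before editing).

```ini
; Oblivion.ini excerpts -- names and values from memory, verify against your own file
[LOD]
fLODFadeOutMultObjects=15.0000  ; raise to push object fade-out distance further
fLODFadeOutMultItems=15.0000    ; same, for items
fLODFadeOutMultActors=15.0000   ; same, for characters and creatures

[Display]
fDecalLifetime=60.0000          ; how long blood decals linger, in seconds
```

Raising the fade-out multipliers keeps distant objects visible longer, which is why it costs video RAM and frame rate the same way higher AF does.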
Rob,
Great article. Being Canadian, I like to see ATI taking top honours in the frame rate and quality categories...
One thing I noticed, however. I was looking at the screen shots on Page 4, specifically the water effect (oblivion-geforce-vs-radeon-2big.jpg), and something didn't quite look right to me... Putting my nagging internal voice away, I continued on in the article. Then, on Page 5, when looking at (oblivion-wasserbig.jpg), it hit me... the bridge and dock are not reflected in the water. When looking at the water reflection just below the bridge, instead of seeing the reflected bridge in the water, you see the sloping hill... as if the bridge wasn't there at all. Listening to that inner voice, I went back to (oblivion-geforce-vs-radeon-2big.jpg) and noticed the same effect there: the columns and statue go into the water, but are not reflected back the same way that the spire on top of the hill is.
Is this an effect of the graphics cards' water rendering, or of a difference in the way the objects are defined in the game? I'm assuming that the properly reflected objects are non-interactive game elements, like movie backdrop paintings, and that the incorrectly reflected objects, like the bridge, can be used by the character. If that's the case, then the gamer can get hints during game play by examining the way things are rendered, to see whether they can, or must, interact with them (for example, a door in a hallway that can be opened might be rendered differently than a door that is just part of the backdrop).
Thoughts?
GM.
Simple solution to your long post’s question: those reflections are disabled manually. They can be re-enabled through the Oblivion.INI file, under the “water” heading, as a bunch of lines that look like “wReflectetc=0”. Simply go through them all, and switch the 0s to 1s, and they’ll reflect. By default, only the ground, and city walls/towers are reflected.
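In case it helps, here is roughly what that section of the file looks like. I’m writing the names from memory (the real lines begin with “bUseWaterReflections”), so verify them against your own Oblivion.ini before changing anything:

```ini
; Oblivion.ini, [Water] section -- names from memory, verify against your own file
[Water]
bUseWaterReflections=1          ; master reflection toggle; on by default
bUseWaterReflectionsStatics=1   ; bridges, docks, columns; 0 by default
bUseWaterReflectionsTrees=1     ; trees; 0 by default
bUseWaterReflectionsActors=1    ; characters and creatures; 0 by default
bUseWaterReflectionsMisc=1      ; clutter objects; 0 by default
```

Switching those 0s to 1s is what makes the bridge and dock show up in the water; expect a frame-rate cost, since each class of object has to be rendered a second time for the reflection.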
As for the performance of the game on ATi’s hardware, I personally chalk it up to nVidia’s attitude, which I grow a bit more displeased with each day. It goes along the lines of “oh, we don’t need more pixel shaders. It’s TEXTURING POWER that games need most.”
That attitude cost them the performance crown, BIG TIME, back in 2002, when the GeForce4 Ti was handed an astounding defeat by the Radeon 9700 Pro. This was covered on this site in the article “ATi Radeon 9700 PRO - Pretender to the Throne.” Yes, as hard as it is to believe, they were of the exact same generation of cards; the previous generation had been the GeForce3 and GeForce3 Ti vs. the Radeon 8500.
At any rate, perhaps nVidia might wake up again and push their architecture the way they did to reach the GeForce 6; I was pleased to see them FINALLY ditch the 4-pipe/8-TMU arrangement that had been in every flagship card of theirs since the GeForce 2 GTS. It’s a shame they haven’t moved much since then. I’m actually an unbiased sort when it comes to video cards, but I currently can’t say I have a bright outlook for the G80 against the R600, unless they really change course.
Why is the 6800 GT being compared to the X1900 XTX? That is the last generation nVidia, and the newest generation ATI.
The only notable difference between a 7900 GTX and an X1900 XTX is that the ATI card can use HDR and AA simultaneously in Oblivion, whereas the 7900 GTX has to pick between the two due to a resource conflict.
Regardless of whether it improves performance or not, self-shadowing looks horrible when turned on. Whatever effect it's supposed to produce doesn't work in half of the lighting conditions, regardless of your system.
Well, according to effectively every benchmark, the X1900XTX quite handily out-performs the 7900GTX. As I predicted at the X1900’s release, it likely has a lot to do with the fact that the X1900 has 48 pixel shaders and 16 texture units, as opposed to the 24 and 24, respectively, in the 7800/7900. Oblivion generally uses only one color texture per surface on models (though at times up to four on the ground), yet applies at least three, if not more, shader maps to each surface: every surface has at least a normal map for angular detail, a specular map for shininess, and a diffuse map to change how the material reacts to different lighting (some objects, for instance, can seem to change color in ways not directly related to the lighting’s color). Further shaders include parallax mapping to provide more detailed features, namely pock-marks in rock, as well as the active environment-mapping shader for the water. This stacks up to a shader load that dwarfs the texture usage.
As for the self-shadows, it appears to be a problem of shadow angle; for some ridiculous reason, Oblivion’s shadowing system is set up so that, no matter what, characters cast shadows DOWN. This means that standing next to a small campfire in a cave, you cast shadows only on the floor, and not on the ceiling like you should. This seems to cause problems for the self-shadowing, resulting in those “crackling” artifacts of shadow fragments covering people.
I give the graphics of Oblivion mixed reviews. Some of the effects are really spectacular, though it seems Bethesda has fallen prey to what everyone else is: Using mapping techniques instead of good ol' polygons.
In my humble opinion, the only time it's completely justified (as of now, this should change in the future) to use different types of mapping to generate the illusion of 3D depth on flat objects is with the non-edge portions of flat or concave items, and in cases where there won't be any loss of realism. Furthermore, in many cases the bumpmapping should only be used for depression and not raised surfaces.
The reasons for this are simple. While an intricate raised pattern on the handle of a sword, ridges on armor where two pieces were attached, or raised bolts on a metal device all look nice from many angles, Oblivion loses detail in cases like these when you look at the item in profile. The sword pattern, armor ridges, and raised bolts are all flat when viewed from extreme angles. Where things should be blocking your view, there is nothing. This first annoyed me when I was talking to someone wearing leather armor and saw the nice lighting on the raised bumps where two pieces of hardened leather were attached--however, the illusion was destroyed as my gaze followed it up over his shoulder. The top of his shoulder plate was flat, even though the texture and lighting tried to make it look like the ridge continued. To keep something like this from happening, model geometry "should" only be simplified in regions where you will never see a profile of the affected surface, such as a piece of glass in a window or boards/stones in a room with only concave corners. Also, the simplifications should be slight enough that the physics of other objects interacting won't give away the flat nature of the model.
Looking on the bright side, at least they didn't abuse bump mapping like Doom III did. I remember when I first saw a traffic cone which had the holes in it bumpmapped in. You couldn't see through the traffic cone.
Textures too... I have them turned up as high as they'll go, but they're still not so kind to the eyes (and I'm not talking about textures on a mountain a few thousand feet away, which are disgustingly simplified; Tom's own screenshots show that off). I understand that they have a huge number of objects that need textures, and those can't all be excessively detailed. Still, I remember playing Alien vs Predator 2, and even with my nose up against a wall, zoomed in several times, the texture was not pixelated... they used some sort of graphics trick to add more detail when you looked closer at something. And that was cool.
Overall the graphics for Oblivion are rather pleasing, it's just that the "Hey! Let's decrease model geometry and use bump/normal/parallax mapping instead!" approach is starting to bother me. They're *supplements* and not *replacements.*
Well, part of this has to do with the perspective of the user, when it comes to the use of detail-related shaders.
First, note that, as the article states, there is NO bump-mapping in use; bump-mapping technically isn’t even a shader in the modern sense, as it has been an option on graphics cards since DirectX 6 and 7, while programmable shaders weren’t available until DirectX 8.0. Instead, the game uses normal-mapping. While you are correct that bump-mapping is only good for shading depressed surfaces, normal-mapping improves on it by adjusting the “normal,” or light angle, of each particular texel, so that each appears lit according not just to whether it’s lit, but by how much, since surfaces angled away from a light are less lit than those facing it directly. This has the added advantage that, because of the reduced contrast between adjacent areas, you don’t get that “plastic wrap” appearance of bump-mapping.
As for the actual proper use of normal-mapping: it’s used because it isn’t feasible to push the polygon count further than it already is, which is quite high, almost always a 6-digit number for each outdoor scene in Oblivion. It has to do with the way graphics cards are built; the current generation of “flagship” cards has a lot of pixel shaders (even the 7900 cards have 24, while the X1900s have a whopping 48), yet each has only 8 vertex shaders, a great disparity that has largely persisted since the introduction of these shaders. Put in perspective, that’s only about 15 or so times the vertex processing power of the best video cards from around the time of the original Unreal Tournament! So in a sense, these shaders are a NECESSITY to reach “acceptable” visual quality.
Fortunately, a big change is coming, in the form of the “unified shader architecture.” Starting with ATi’s next-generation chip, and followed at an unknown date by nVidia (I’m crossing my fingers that it might actually be G80), we’ll finally see an end to the limited number of vertex shaders; while the top cards of today can only handle 8 vertices per clock cycle, the rumored Radeon X2800s are supposed to handle 128. I think then we’ll see games shift toward having a lot more polygons, and fewer features described by normal-maps. But for now, they’re here to stay; how else are you going to make, say, a suit of chain mail look realistic? The polygon count otherwise required for decent-looking rings would be obscene, easily totaling into the millions.
Lastly, on the subject of the LOD textures: those appear to be due largely to the memory limitations of most hardware, particularly the Xbox 360’s. Keep in mind that even without anisotropic filtering, Oblivion uses a whopping 200MB or so of video RAM, and AF pushes that further, as I described at the top of this post. It’s possible to adjust this through the .INI: open it up and, under the “general” section, look for the line “uGridsToLoad.” It starts at 5, indicating that the game loads a 5x5 grid of cells at full detail; the rest are “LOD cells,” which contain only trees (typically as 2-polygon “billboards”) and a scaled-down surface and texture, with a normal-map made to help offset each cell’s low polygon count. At any rate, you can set this number to 7 or 9, but be warned: if you have less than 1024MB of system RAM, this will likely crash the game, and even with enough RAM it will still likely hurt performance, as it increases video RAM usage, possibly well over the limit of a 256MB video card. Of course, users of 512MB video cards will have no problem.
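To make the tweak concrete, this is the relevant excerpt as I recall it; as with my other examples, the exact name is an assumption from memory, so verify it against your own Oblivion.ini, and mind the RAM warning above before raising the value:

```ini
; Oblivion.ini, [General] section -- name from memory, verify against your own file
[General]
uGridsToLoad=5   ; 5x5 grid of full-detail cells; try 7 or 9 only with 1GB+ of system RAM
```

Note the cost grows with the square of the number: a 7x7 grid loads almost twice as many full-detail cells as 5x5 (49 vs. 25), and 9x9 more than three times as many.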
I'm not trying to troll, but this article is losing me.
First off - those screenshots look absolutely horrible. I can't believe no one else is questioning them. What is up with that? Bleached out to the point there are no shadows? That pic of the horse labeled "1900XTX"? What happened there? Are you running brightness and gamma so high that you're seeing all those horrible rings? What did you do to Oblivion???
What exactly was this article about anyhow? It seemed like merely random observations of the graphics engine. You make a comment about the game shaders and how ATI's implementation is superior - but you have no benchmarks? Are you assuming we are cross referencing to the Firingsquad and Anandtech articles?
You have no conclusion to the article. You assume the graphics card is the "key" while the game is also severely CPU-bound. You mention God Mode for some unknown reason - with the astute observation that... it makes the game easier???
Is this Tomshardware???
I do agree, the article leaves much to be desired. The settings they’ve used are pretty questionable, as well. Thankfully, at least part of it seems to be due to the fact that Oblivion looks far better in-game than in screenshots.
As for the performance of the game, I’ve found that at different times, the game can be bound by different components; in many people’s experience, it can even be the speed of their hard drive as it loads more cells. In the great forest at a high resolution, the game will pretty much ALWAYS be GPU-bound, not CPU-bound. Of course, which component is the bottleneck will change if you, say, have one extremely weak component.
This is actually something that amazes me; many people, and even much of the gaming media, fail to grasp the mechanics of a game’s performance. Comments such as “a faster video card will be all you need to improve gaming speeds,” and the like are far more prevalent than they should be.
As for the benchmarks, it’s a real shame they didn’t provide any, (just like when THG opted not to make benchmarks from 3Dmark 2006) as most of the ones I’ve seen, even at major sites, often seem flawed to me.