Windows 7: Play Crysis Without a GPU

This can certainly be used to render Aero without 3D hardware, which is a big plus considering how many Vista problems were caused by incompatible Nvidia/ATI drivers...

So it is like an advanced 'safe mode': you get that by default, but can use hardware if you choose to, and can always fall back if the hardware path is not stable.
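
Roughly speaking, here is a minimal sketch (C++, Direct3D 11, with a hypothetical CreateDeviceWithFallback helper) of what that fallback looks like from an application's side: try the hardware driver first, and drop to the WARP software rasterizer if no usable GPU or driver is present.

// Sketch only: try a real GPU first, fall back to WARP (CPU rasterizer).
// Link against d3d11.lib; CreateDeviceWithFallback is an illustrative name.
#include <d3d11.h>

bool CreateDeviceWithFallback(ID3D11Device** device, ID3D11DeviceContext** context)
{
    const D3D_DRIVER_TYPE driverTypes[] = {
        D3D_DRIVER_TYPE_HARDWARE,   // vendor driver + physical GPU
        D3D_DRIVER_TYPE_WARP        // software rasterizer, no GPU required
    };

    for (D3D_DRIVER_TYPE type : driverTypes) {
        D3D_FEATURE_LEVEL obtained;
        HRESULT hr = D3D11CreateDevice(
            nullptr, type, nullptr, 0,
            nullptr, 0,               // accept the default feature levels
            D3D11_SDK_VERSION,
            device, &obtained, context);
        if (SUCCEEDED(hr))
            return true;              // first driver type that works wins
    }
    return false;                     // not even software rendering available
}

If the hardware device can't be created (dead card, missing or broken driver), the same rendering code keeps running on the CPU path, which is the "fall back" idea above.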
 
He probably lives in a different dimension where few people actually play on PC. OMG!

Seriously, if that scales well, then everyone should be happy. I hope it will work with the upcoming ATI driver 😀
 
People, don't get carried away; what the article is trying to say is that if the video card gets toasted, you can still use the computer.
And just what port on the PC do you plug the monitor into if the card dies?
If a motherboard has a video output, it probably comes from an integrated graphics solution.
This is the silliest feature I've seen in a long time. An integrated graphics solution would be cheaper, more power efficient, and a lot faster!
Also, Intel will produce Larrabee discrete graphics in the future.

Maybe this is an exercise so that a future OS can let the idle cores of an 8- or 12-core processor help with 3D tasks; even then it would be less efficient than an IGP, consuming more power and wasting electricity.


 
[citation][nom]noahjwhite[/nom]Ok... I have an 8800GTS (512mb) Slightly OC (not much) and I used Crysis to run multiple benchmarks when I built the system. Rest of the specs as follows:
dual core E8400 3ghz overclocked to 4ghz
4gb DDR Ram
Raptor HD
Vista Ultimate 64bit
DX 10
1,680 x 1,050 (Very High / no AA): 18-25 FPS
1,680 x 1,050 (High / no AA): 28-30 FPS
1,680 x 1,050 (Medium / no AA): 35-42 FPS
800x600 (Very High / no AA): 30-45 FPS (not sure why it varies so much)
800x600 (High / no AA): 40-55 FPS
800x600 (Medium / no AA): 45-60 FPS
800x600 (Low / no AA): 50-60 FPS
sooo... if the numbers MS is giving are true... I'm impressed.[/citation]


Ok!! So, what did I say earlier? It's kind of true then. When applicable, Windows 7 and WARP will be able to help the GPU render 3D applications (i.e., games) such as Crysis, so your FPS can go up. Of course, I'm sure there'll be lots of problems, blue screens and so on, but if it works, hey... PC gaming might get a lot more enjoyable and cheaper, no?
 
I meant that an IGP is a lot better than using a CPU. If you look at the charts, you need a Core i7 just to get a couple of FPS, while an IGP can easily do that for less power and cost.
It would have been better if Windows or some company utilized an idle IGP alongside a discrete card, e.g. using it for PhysX, CUDA or rendering calculations.
 
You're using a Core 2 Duo; naturally the Core i7 is faster, which explains why its numbers are higher than yours.

[citation][nom]noahjwhite[/nom]Ok... I have an 8800GTS (512mb) Slightly OC (not much) and I used Crysis to run multiple benchmarks when I built the system. Rest of the specs as follows:
dual core E8400 3ghz overclocked to 4ghz
4gb DDR Ram
Raptor HD
Vista Ultimate 64bit
DX 10
1,680 x 1,050 (Very High / no AA): 18-25 FPS
1,680 x 1,050 (High / no AA): 28-30 FPS
1,680 x 1,050 (Medium / no AA): 35-42 FPS
800x600 (Very High / no AA): 30-45 FPS (not sure why it varies so much)
800x600 (High / no AA): 40-55 FPS
800x600 (Medium / no AA): 45-60 FPS
800x600 (Low / no AA): 50-60 FPS
sooo... if the numbers MS is giving are true... I'm impressed.[/citation]
 
I thought this was a comeback for IBM's OS/2 Warp - from warped minds come Warp products!

Seriously, I think this has more to do with Larrabee and Advanced Vector Extensions (AVX). This effort will blur the lines between the GPU and CPU - mainly to make it easier to program for ONE system (CPU, GPU, future-PU), with the software automatically load-balancing between the cores. That means that to run DX 999+, all you need is the driver, not any specific hardware. Not like the "old" days when dedicated hardware was needed for specific DX/OpenGL features, such as needing a DX 10 card to get geometry shaders. It also means that when you buy new hardware, you are ONLY getting performance, NOT features or specific capabilities.

I know Larrabee and AVX don't exist yet, but it makes sense if WARP is an exercise in this area.
 
I don't see how this feature can be that useful for bigger enterprises. Most entry-level computers used for data entry and word processing use an integrated GPU to keep costs down. The "backup" feature of WARP would allow a system to boot from an integrated solution if a discrete card fried, but then again, what company buys systems with an integrated GPU AND a graphics card?

The only thing that might be interesting is "bypassing" the driver in case of a corruption or simply a bad driver. And we all know what ATI and Nvidia are capable of...
 
Anyway, if a graphics card is defective, there is no way to get a signal through its connectors - no RAMDAC, nothing.
So this will not work through a simple software update; I believe only certain motherboards will support the feature, since it needs hardware to get the image to the monitor.
The important matter here is BIOS support for booting via this feature, or for booting with no video card at all...
Sounds like marketing BS to me anyway...
 
I think MS is expecting a separation of the GPU from the display hardware. This is not unrealistic, since both Intel and AMD are working on integrating on-die GPUs into their processors. Presumably, then, some video RAM and display PHYs (DVI, HDMI, etc) would be placed on the motherboard. Transmitting pixels from a RAM buffer to a display would be trivial if not for HDCP, and even then is pretty simple and doesn't require a high end GPU. It is in generating those pixels that the GPU comes in. Discrete GPU cards won't go away, but they won't be absolutely necessary for many applications.
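
For what it's worth, the "transmitting pixels from a RAM buffer" part really is trivial; a rough sketch (plain Win32 GDI, with a hypothetical PresentFrame helper and an assumed 32-bit top-down BGRA buffer) would be:

// Sketch only: push a CPU-generated framebuffer to a window via GDI.
// 'pixels' is assumed to hold width*height 32-bit BGRA values.
#include <windows.h>

void PresentFrame(HWND hwnd, const void* pixels, int width, int height)
{
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;   // negative height = top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    HDC dc = GetDC(hwnd);
    StretchDIBits(dc,
                  0, 0, width, height,       // destination rectangle
                  0, 0, width, height,       // source rectangle
                  pixels, &bmi, DIB_RGB_COLORS, SRCCOPY);
    ReleaseDC(hwnd, dc);
}

All the hard work is in generating those pixels, which is exactly where the GPU (or WARP on the CPU) comes in.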

This will allow for cheaper enterprise hardware. MS is trying to sell Windows 7 to the enterprise customers. Makes sense. Enterprise customers did not, for the most part, buy into Vista, but they will be forced at some point to buy new hardware. If Windows 7 can run on cheaper hardware, then it will be more attractive to enterprise buyers.
 
I agree with miniwhale - Crysis benchmarks be damned, the real reason this will be such a boon for Microsoft is that it should help with sluggish Aero performance. Hell, we still don't recommend people use Vista if they are running an IGP.

What I am interested in is whether it helps gaming on netbooks and the like - the Intel Atom has SSE2 and more, so it should qualify. Certainly they are not made for gaming, but any boost in FPS when I do want a quick game would be very appreciated.
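
If I remember right, SSE2 is the baseline WARP is said to need, so a quick runtime check like the sketch below (plain Win32, nothing WARP-specific) would tell you whether a given netbook CPU even qualifies:

// Sketch only: check whether the CPU exposes SSE2, reportedly the
// minimum instruction set WARP's software rasterizer relies on.
#include <windows.h>
#include <cstdio>

int main()
{
    // PF_XMMI64_INSTRUCTIONS_AVAILABLE corresponds to SSE2 support.
    BOOL sse2 = IsProcessorFeaturePresent(PF_XMMI64_INSTRUCTIONS_AVAILABLE);
    std::printf("SSE2 is %s on this CPU\n", sse2 ? "available" : "NOT available");
    return sse2 ? 0 : 1;
}

Any Atom should pass this, so the interesting question is just how many frames per second the slower cores can actually deliver.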
 
Is this for real? Then why would M$'s Xbox & Xbox 360 have a graphics card in them?
 
Hmmm. 84FPS Average with an 8800 GTS, at lowest settings @ 800x600.

Think maybe tonight I'll benchmark my Q6600/8800GTS system with Crysis 1.21 @ 800x600 with lowest settings and see what I get to compare.

Seriously, although this could evolve into a nice feature, I think it's a joke that they're using 800x600 (which nobody uses anymore) @ lowest settings in Crysis to tout how great this feature is.

Nobody cares about Crysis performance at these settings anyhow. The true measure as to whether this feature is really anything for gamers to consider is whether it'll offer any substantial % increase in FPS at normal resolutions and High/Very High quality.

It's interesting how nVidia is pushing for GPUs to replace CPUs (Folding @Home, etc), and Microsoft is pushing for CPUs to supplement GPUs.
 
Indeed, very cheerful news for Intel Core i7 920 users like me, but hey, just one thing - the minimum requirements for Windows Vista are:

- 1 GB of memory (DDR2)
- 2 GHz CPU (Athlon 64 X2 3800+)
I don't know about the middle one, but you'd better have a good CPU and HDD.

Anyway, good to see this as an INTEL-ATI FAN.
 
Hello, I come to speak in Spanish. The truth is this is very interesting, but I think it would need a PC that can make up for the lack of a GPU.
 
Various companies are gambling that future CPUs with large numbers of cores may be able to rival GPUs (Intel is said to be working on designs with anywhere from 80 to 256 cores), though the key interest is more concerned with exploiting systems that have multiple CPUs, especially large-scale HPC, e.g. SGI's new VUE system:

http://www.sgi.com/vue/

and their concept HPC system that uses thousands of Atom cores:

http://www.sgi.com/company_info/newsroom/press_releases/2008/november/project_kelvin.html

VUE is derived from their older VizServer product, which allowed any remote device to exploit the processing power of a large supercomputer as if one was using it locally, by means of pixel data compression and other methods. I was in charge of the first test setup using VizServer in the UK (16-CPU 5-pipe 3-rack Onyx2 IR2E); it worked quite well, but from what I've read SGI has vastly improved how the idea works for VUE, with only minor amounts of code from VizServer carried forward.

Personally, I'm not convinced by the using-main-CPUs-for-gfx idea. Somehow I get the feeling there's a peculiar assumption that GPU designs are going to stand still.

What NVIDIA/AMD have not done so far is release a genuinely scalable gfx solution. I don't mean SLI here, rather a single board with multiple sockets, sold with (say) 1 or 2 sockets filled, to which one then adds further modules as one wishes to scale performance. Ditto for the RAM: multiple sockets, buy a card with 1 filled to give 2GB (assuming modern parts, etc.), 3 or 4 more slots empty for future expansion. Just example numbers of course.

SGI's gfx products worked along these lines, all the way from the old 1990 Elan system to the final InfiniteReality4, which had up to 10GB VRAM and 1GB TRAM. Strangely though, the GE board had space for 8 GE processing ASICs but they never released a version with more than 4. I was told this was because performance was not expected to scale all that well beyond 4 GEs (which never made sense to me given one could scale a system up to 16 x IR4 pipes in parallel anyway), but with hindsight (given what NVIDIA did with the technology for the original GF) this was wrong. NVIDIA proved that multiple processing units could easily be scaled, as modern GPUs now show all too well. SLI scales things further, but it's not a very direct way of spreading the processing load.

On the other hand, maybe the modern chips are just too complex and/or fragile to package in such a way as to make it feasible for one to buy a card with 1 or 2 fitted and then add more chips later. Still, it would be cool, and it would not be like SLI, i.e. no driver issues, and the performance increase would apply to all operations (unlike at the moment, where certain games behave badly with CF/SLI depending on driver issues, while various professional tasks don't scale either). Heh, maybe PCs just aren't big enough, but surely in the professional/high-end market there is scope for such a design.

Anyway, I just figure the whole idea of rendering with main CPUs comes from a viewpoint of assuming that somehow GPU design is going to reach a plateau - unlikely IMO. In recent years we've had steady improvements in speed and new general features with respect to shaders, but nothing revolutionary; yet who's to say there won't be (for example) a new idea on how to render genuinely volumetric phenomena that does not use polygons and is thus much faster, more accurate, etc.? (Actually there already is - it's called *Ray, as demo'd by SGI, but it was a software system that used IR4 in a special way and was never released as a commercial product.) New algorithms and breakthroughs in electronics (e.g. applying spintronics and memristors to GPU design, or the effects of shrinking down to 22nm) could all give a sudden huge leap in hardware gfx processing power. With the emergence of quantum technology, anything is possible.

A cynic would say that what MS and SGI are doing is more a way of boosting main CPU sales, which these days, let's face it, have kinda run up against a wall for the general home/office user, i.e. performance beyond a reasonable dual-core is blatantly unnecessary. SGI and Intel are apparently working closely together on these ideas; I expect MS is working alongside them as well.

Ian.

 
I smell "Larrabee" all over that WARP thingy.

M$ is setting things up so that Larrabee can actually be of use for gamers.

The folks at nVidia, ATI and others should not take this one lightly IMO.

Esop!
 
I didn't keep track of the thread, but my idea is that larger tasks and threads are handled by the CPU, while the GPU focuses on the small threads.
So you do actually need a video card - how else will you get output to the screen?

I guess it's an interworking between CPU and GPU, and for older graphics cards it'll relieve them of heavier tasks.
Should your graphics card fail, they probably mean that if no acceleration through a hardware driver is available, the CPU can take over.
Using the CPU for imaging actually goes back to the very first computers, and was phased out around the time of the 286 DX2 with their large black slotted cards (was it EISA ports?).

Anyway, a good interaction between the two can be a way to improve performance. Actually, integrated graphics should benefit the most from this!

Yesterday I was browsing on my PC, and I wondered when MS would make an OS without icons again!

That'd definitely speed up the system drastically. I don't need icons, and I often set them to the smallest size to improve performance and see more on my screen.
 