NV40 ~ GeForce 6800 specs

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video (More info?)

"joe smith" <rapu@ra73727uashduashfh.org> wrote in message
news:c5ime2$6pp$1@phys-news1.kolumbus.fi...
>
> Ah,
>
> http://frankenstein.evilgeniuslabs.com/~pszuch/nv40/news.html
>
> I see from the pictures (assuming they're not fakes 😉) that the card should fit
> reasonably into a "single" AGP (8x) slot, more or less.. that's nice, but the
> best part about this debacle is the two DVI ports. That is the part I like the
> most, as I'm currently using DVI + DB25 to drive two TFTs.
>

It says right in that same article that the new cards will take two slots.
But it is possible for vendors to come out with single slot cards.
I find it amazing that it says that Nvidia recommended that their testers
use at least a 480W PS. That's going to be a very expensive upgrade for a lot
of people. And a lot of guys who think they have a 480+ PS will find that
their cheap PS is not up to the task.
So the Ultra is gonna start at $499, plus say another $100 for a quality PS. Wow,
$599 just to play games that probably don't need a fraction of the power the
new card can deliver. Let's hope that Doom 3 runs great on this card.
Of course by the time the game finally comes out this card will probably
cost $150. JLC
 
Archived from groups: alt.comp.periphs.videocards.ati (More info?)

"Les" <a@aolnot.com> wrote in message news:bydfc.647$pL6.459@newsfe1-win...
>

> They're still releasing AGP 8x versions alongside PCI Express x16. I read somewhere
> nvidia is doing something with a bridging device while ATI is making totally
> separate cards, ie R420 is AGP 8x and R423 is a proper PCI Express x16 card. I
> cannot for the life of me remember where I read it though, sorry. It *could*
> have been anandtech
>
Right on ATI's site it says that they are the only company making a
"True PCI Express card". It's right on their front page.
JLC
 
Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video (More info?)

"JLC" <j.jc@nospam.com> wrote in news:fcjfc.142647$K91.357088@attbi_s02:
> It says right in that same article that the new cards will take two
> slots. But it is possible for vendors to come out with single slot
> cards. I find it amazing that it says that Nvidia recommended that
> their testers use at least a 480W PS. That's going to be a very
> expensive upgrade for a lot of people. And a lot of guys who think
> they have a 480+ PS will find that their cheap PS is not up to the
> task. So the Ultra is gonna start at $499, plus say another $100 for a
> quality PS. Wow, $599 just to play games that probably don't need a
> fraction of the power the new card can deliver. Let's hope that Doom 3
> runs great on this card. Of course by the time the game finally
> comes out this card will probably cost $150. JLC

I _really_ want a new system right now. I mean, I'm running dual P3-800
with Ti4200 video, and it just doesn't cut it for today's games. But the
game I know I want is Doom 3, and who knows when it will be out. When it
does come out, it's anyone's guess what will be the best video card for the
game. There will be the fastest, then there will be the best price /
performance cards, a little slower, a lot cheaper, etc. I'm just going to
have to wait until the game comes out if I don't want to spend too much
money and want to be really sure, making a decision based on real
benchmarks of production code and production hardware.

But I hate waiting! My current setup is killing me! I'm sure it's not
the Ti4200's fault, it's a great card, I'm just too CPU limited. But
again, it will be interesting to see which CPU / video card combo does Doom
3 the best. More waiting!

Argh!
 
Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video (More info?)

> use at least a 480W PS. That's going to be a very expensive upgrade for a
> lot of people.

It sure will.

> new card can deliver. Let's hope that Doom 3 runs great on this card.
> Of course by the time the game finally comes out this card will probably
> cost $150. JLC

That's a very good point, I was only speaking for myself. I don't play games
much at all; we do some Blackhawk Down and Warcraft III TFT multiplayer a
couple of times a week. For that, a pretty old card would suffice. It's the
work that I need the latest features for, and I won't even be paying for the
card myself anyway. :)
 
Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video (More info?)

> SCSI only operates at 320Mb/s.

320MB/s. But you need a lot of drives to saturate that. Any single
IDE drive could easily do 320Mb/s :) That's only 40MB/s.
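
For reference, the bits-versus-bytes arithmetic works out like this; a minimal Python sketch (the 320 figures are just the ones quoted above, and the drive count assumes each drive actually sustains that rate):

    # 8 bits per byte: Mb/s (megabits) vs. MB/s (megabytes)
    BITS_PER_BYTE = 8

    scsi_u320_MBps = 320                  # Ultra320 SCSI bus: 320 MB/s
    ide_drive_Mbps = 320                  # a drive pushing 320 Mb/s

    ide_drive_MBps = ide_drive_Mbps / BITS_PER_BYTE        # = 40 MB/s
    drives_to_saturate = scsi_u320_MBps / ide_drive_MBps   # = 8 drives

    print(f"320 Mb/s is only {ide_drive_MBps:.0f} MB/s")
    print(f"about {drives_to_saturate:.0f} such drives are needed to saturate an Ultra320 bus")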

Eric
 
Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video (More info?)

gaf1234567890@hotmail.com (G) wrote in message news:<b7eb1fbe.0404141025.24e572f4@posting.google.com>...
> K <kayjaybee@clara.net> wrote in message news:<pan.2004.04.14.01.14.28.698900@clara.net>...
> >
> > I have a gut feeling that PCI Express will do very little for performance,
> > just like AGP before it. Nothing can substitute lots of fast RAM on the
> > videocard to prevent shipping textures across to the much
> > slower system RAM. You could have the fastest interface imaginable for
> > your vid card; it would do little to make up for the bottleneck that
> > is your main memory.
> >
> >
>
>
> But what about for things that don't have textures at all?
>
> PCI Express is not only bi-directional, but full duplex as well. The
> NV40 might even use this to great effect, with its built-in hardware
> accelerated MPEG encoding/decoding plus "HDTV support" (which I assume
> means it natively supports 1920x1080 and 1280x720 without having to
> use Powerstrip). The lower cost version should be sweet for Shuttle
> sized Media PC's that will finally be able to "tivo" HDTV.
>
> I can also see the 16X slot being used in servers for other things
> besides graphics. Maybe in a server you'd want your $20k SCSI RAID
> Controller in it. Or in a cluster box a 10 gigabit NIC.

Why even mess with a 16X PCI-e slot? A 10Gbit NIC could be handled by
3-4x PCIe. All you need is a 1X slot for most of what is out there
today. I would like to see something that could handle 8GB/s
bandwidth :)
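
To put rough numbers on that, here is a Python sketch of first-generation PCI Express lane bandwidth (assuming the original 2.5 GT/s signalling with 8b/10b encoding, i.e. about 250 MB/s of payload per lane per direction; real usable throughput is a bit lower after protocol overhead):

    # First-generation PCI Express: 2.5 GT/s per lane, 8b/10b encoding,
    # so roughly 250 MB/s of payload bandwidth per lane, per direction.
    MB_PER_LANE_PER_DIRECTION = 250

    def pcie_bandwidth(lanes):
        """Return (per-direction, full-duplex aggregate) bandwidth in MB/s."""
        per_direction = lanes * MB_PER_LANE_PER_DIRECTION
        return per_direction, per_direction * 2

    for lanes in (1, 4, 8, 16):
        one_way, both_ways = pcie_bandwidth(lanes)
        print(f"x{lanes:<2}: {one_way / 1000:.2f} GB/s each way, "
              f"{both_ways / 1000:.1f} GB/s aggregate")

    # A 10 Gbit/s NIC needs ~1.25 GB/s each way, so an x8 link covers it
    # and x4 comes close; x16 is where the ~8 GB/s aggregate figure comes from.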

Eric
 
Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video (More info?)

DaveL wrote:

> I think Nvidia learned their lesson about that from the 5800U
> debacle. It was ATI that stayed with the old standard and took the
> lead in performance. Meanwhile, Nvidia was struggling with fab
> problems.
>
> DaveL
>
>
> "Ar Q" <ArthurQ283@hottmail.com> wrote in message
> news:nmefc.10025$A_4.6776@newsread1.news.pas.earthlink.net...
> >
> > Isn't it time for NVidia to use a 0.09 µm process? How could they put
> > so many features in if still using a 0.13 µm process?
> >
> >





Heat generation is still too much of a risk for moving to 90-nm.

If AMD moved to .09...... I'd have a new toaster. Say goodbye to
overclocking at that point.

The "features" can be expanded as much as they like.... right now
they aren't even using the entire wafer for such optimizations, only a
small section.

A lot of those optimizations are software based too... the GPU just has
to be able to support the relative ballpark of them.
 
Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video (More info?)

"teqguy" <teqguy@techie.com> wrote in message
news:6Lzfc.33399$hd3.22511@nwrddc03.gnilink.net...
> DaveL wrote:
>
> > I think Nvidia learned their lesson about that from the 5800U
> > debacle. It was ATI that stayed with the old standard and took the
> > lead in performance. Meanwhile, Nvidia was struggling with fab
> > problems.
> >
> > DaveL
> >
> >
> > "Ar Q" <ArthurQ283@hottmail.com> wrote in message
> > news:nmefc.10025$A_4.6776@newsread1.news.pas.earthlink.net...
> > >
> > > Isn't it time for NVidia to use a 0.09 µm process? How could they put
> > > so many features in if still using a 0.13 µm process?
> > >
>
> Heat generation is still too much of a risk for moving to 90-nm.
>
> If AMD moved to .09...... I'd have a new toaster. Say goodbye to
> overclocking at that point.
>
> The "features" can be expanded as much as they like.... right now
> they aren't even using the entire wafer for such optimizations, only a
> small section.
>

That's going to be some big honking chip when they use the whole
wafer.

Jim M

>
>
> A lot of those optimizations are software based too... the GPU just has
> to be able to support the relative ballpark of them.
 
Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video (More info?)

ewitte@hotmail.com (Eric Witte) wrote in message news:<3e738765.0404150825.6b7c0fa7@posting.google.com>...
> gaf1234567890@hotmail.com (G) wrote in message news:<b7eb1fbe.0404141025.24e572f4@posting.google.com>...
> > K <kayjaybee@clara.net> wrote in message news:<pan.2004.04.14.01.14.28.698900@clara.net>...
> > >
> > > I have a gut feeling that PCI Express will do very little for performance,
> > > just like AGP before it. Nothing can substitute lots of fast RAM on the
> > > videocard to prevent shipping textures across to the much
> > > slower system RAM. You could have the fastest interface imaginable for
> > > your vid card; it would do little to make up for the bottleneck that
> > > is your main memory.
> > >
> > >
> >
> >
> > But what about for things that don't have textures at all?
> >
> > PCI Express is not only bi-directional, but full duplex as well. The
> > NV40 might even use this to great effect, with its built-in hardware
> > accelerated MPEG encoding/decoding plus "HDTV support" (which I assume
> > means it natively supports 1920x1080 and 1280x720 without having to
> > use Powerstrip). The lower cost version should be sweet for Shuttle
> > sized Media PC's that will finally be able to "tivo" HDTV.
> >
> > I can also see the 16X slot being used in servers for other things
> > besides graphics. Maybe in a server you'd want your $20k SCSI RAID
> > Controller in it. Or in a cluster box a 10 gigabit NIC.
>
> Why even mess with a 16X PCI-e slot? A 10Gbit NIC could be handled by
> 3-4x PCIe. All you need is a 1X slot for most of what is out there
> today. I would like to see something that could handle 8GB/s
> bandwidth :)
>
> Eric


Absolutely. The 16X comment was just an example. It's way more likely
that a server would have 1@16x, 3@4x, and 4@1x (or something like
that). In fact I don't see the number and/or speed of expansion slots
being a big "server vs desktop" differentiator after PCIe catches on.
I've even heard that external bus expansion housings are possible.

Anyway, PCI Express looks like it has tons of flexibility. Not that
PCI-X couldn't have lasted for a while longer. But one bus to get rid
of the four we have now *AND* increase headroom for the future *AND*
add new features that AGP lacks *AND* reduce the wire/pin count at the
same time is a Good Thing.
 
Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video (More info?)

On Wed, 14 Apr 2004 19:13:30 GMT, "teqguy" <teqguy@techie.com> wrote:


>Most MPEG encoding is processor dependent... I wish developers would
>start making applications that let the graphics card do video encoding,
>instead of dumping the work on the processor.

I'm pretty sure I read somewhere that the (new & improved) Prescotty
processor has been given a special hard wired instruction set which is
dedicated to encoding video, so that should speed things up somewhat.

I remember reading an article over a year ago which had Intel giving a
demo of a future release CPU which apparently was running 3 full screen
HD videos simultaneously rotating in a 3d cube. The processor prototype
was not specified, but it may have been a Tejas as it was rated at 5GHz.

Ricardo Delazy
 
Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video (More info?)

On Thu, 15 Apr 2004 20:55:42 +1000, Ricardo Delazy <celcius@ozemail.com.au> wrote:

>On Wed, 14 Apr 2004 19:13:30 GMT, "teqguy" <teqguy@techie.com> wrote:
>
>
>>Most MPEG encoding is processor dependent... I wish developers would
>>start making applications that let the graphics card do video encoding,
>>instead of dumping the work on the processor.
>
>I'm pretty sure I read somewhere that the (new & improved) Prescotty
>processor has been given a special hard wired instruction set which is
>dedicated to encoding video, so that should speed things up somewhat.
>
>I remember reading an article over a year ago which had Intel giving a
>demo of a future release CPU which apparently was running 3 full screen
>HD videos simultaneously rotating in a 3d cube. The processor prototype
>was not specified, but it may have been a Tejas as it was rated at 5GHz.


SSE3 won't make Intel CPUs as fast as dedicated DSPs for video encoding. It can be an improvement
over SSE and SSE2, but it's still not fast enough.
They should have embedded a full DSP (or more than one) inside CPUs to achieve the same performance.
The SSE subsets are still too tightly tied to the general-purpose x86 architecture, and their
efficiency is poor compared to dedicated DSPs.
A $40-50 floating-point DSP can be 3x faster than any SSE3-capable CPU at MPEG2/MPEG4
encoding.

If it's true that Nvidia has designed the NV40 as a full DSP, then it's just a matter of time and SDK
availability before programmers can access the NV40 DSP through DirectX or other dedicated APIs and
well-known codecs such as DivX are able to take advantage of GPU power.
The only problem is that Nvidia needs a mainstream set of GPUs derived from this one, with MPEG
encoding/decoding, on the market ASAP to set a standard before ATI releases its own DSP GPUs with
MPEG encoding/decoding capability.

If the MPEG encoding/decoding in NV40 were fixed in hardware (hardwired), then it would be a pretty
low-quality implementation, and I really hope the claims that the GPU is a full DSP are true,
so that programmers with DSP experience could upload their own filter code onto the GPU DSP to
perform their own MPEG video encoding.
I also hope that the SDK to access the DSP features and reprogram MPEG video encoding will be free, so
that even non-commercial, freeware encoders could become available in the future to further exploit GPU
capabilities.
 
Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video (More info?)

On Wed, 14 Apr 2004 17:17:07 GMT, "Ar Q" <ArthurQ283@hottmail.com>
wrote:

>
>"NV55" <nvidianv55@mail.com> wrote in message
>news:1c4cde47.0404130951.5eccb20@posting.google.com...
>> the following is ALL quote:
>>
>>
>> http://frankenstein.evilgeniuslabs.com/~pszuch/nv40/news.html
>>
>>
>> Tuesday, April 13, 2004
>>
>> NVIDIA GeForce 6800 GPU family officially announced - Cormac @ 17:00
>> It's time to officially introduce the new GPU generation from NVIDIA
>> and shed the light on its architecture and features.
>>
>> So, the GeForce 6800 GPU family, codenamed NV40, today officially
>> entered the distribution stage. Initially it will include two chips,
>> GeForce 6800 Ultra and GeForce 6800, with the same architecture.
>>
>>
>> These are the key innovations introduced in NVIDIA's novelties:
>>
>> *16-pipeline superscalar architecture with 6 vertex modules, DDR3
>> support and *real 32-bit pipelines
>> *PCI Express x16, AGP 8x support
>> *222 million transistors
>> *400MHz core clock
>> *Chips made by IBM
>> *0.13µm process
>>
>
>Isn't it time for NVidia to use a 0.09 µm process? How could they put so
>many features in if still using a 0.13 µm process?
>

The NV40 die is .75 inches square and all the features are in there.
The part will have been stress-tested by a vector-test program to
completely exercise all of its functions before it is ever supplied to
a 3rd party for incorporation into the 6800 video card.

Future generations of this GPU will be on a smaller process. The
current NV40 chip is made by IBM. IBM is working on a 65 nm (0.065 µm)
process that AMD will use when it is sufficiently mature. No doubt
nVidia will also be one of the first users of that process. It will
shrink the existing die area by a factor of 4 and also drop the power
by about a factor of 6. It will probably take a couple of years to get
there... nVidia will not make the mistake of ever using an immature
process again.
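
The shrink arithmetic is easy to check with a minimal Python sketch (reading ".75 inches square" as 0.75 in per side, which is an assumption; the ~6x power figure is the poster's estimate and doesn't follow from the geometry alone):

    # Ideal scaling from a 0.13 um process down to 65 nm (0.065 um).
    old_node_um = 0.13
    new_node_um = 0.065

    linear_shrink = old_node_um / new_node_um   # 2x smaller in each dimension
    area_shrink = linear_shrink ** 2            # ~4x smaller die area

    old_die_mm2 = (0.75 * 25.4) ** 2            # 0.75 in per side ~= 363 mm^2
    new_die_mm2 = old_die_mm2 / area_shrink

    print(f"linear shrink {linear_shrink:.1f}x, area shrink {area_shrink:.1f}x")
    print(f"a ~{old_die_mm2:.0f} mm^2 die would shrink to ~{new_die_mm2:.0f} mm^2")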

John Lewis



>
 
Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video (More info?)

"joe smith" <rapu@ra73727uashduashfh.org> wrote in message
news:c5k3hr$hn1$1@phys-news1.kolumbus.fi...
> > pfft. You don't even know what the ATI offering is as yet, much less
> > are you able to buy a 6800 until well into next month.
>
> No, I do not. I wrote that the rumor is that ATI wouldn't have 3.0 level
> shaders.. I was commenting on a rumor; if that isn't true then the situation
> is naturally entirely different. The confidentiality / NDA ends the 19th this
> month, so soon after that we should begin to see cards dripping onto the
> shelves like always (just noticed a trend in the past 5-7 years, could be wrong,
> but I wouldn't die if I had to wait even 2 months.. or 7.. or 3 years.. the
> stuff will get here sooner or later.. unless the world explodes before that
> 😉=
>

[Snipped]

19th? Where did you get that date from?

--
Derek
 
Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video (More info?)

> 19th? Where did you get that date from?

"Confidential until April 19th 2004" stamped over slides, etc. material you
find from here and there.
 
Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video (More info?)

"joe smith" <rapu@ra73727uashduashfh.org> wrote in message
news:c5vvb4$99l$1@phys-news1.kolumbus.fi...
> > 19th? Where did you get that date from?
>
> "Confidential until April 19th 2004" stamped over slides, etc. material
you
> find from here and there.
>
>

Can't say I noticed much yesterday. :)

--
Derek
 
Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video (More info?)

> Can't say I noticed much yesterday. :)

Sorry, 14th.. which you obviously noticed..

http://mbnet.fi/elixir/NV40/
 
Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video (More info?)

"joe smith" <rapu@ra73727uashduashfh.org> wrote in message
news:c644rf$6nh$1@phys-news1.kolumbus.fi...
> > Can't say I noticed much yesterday. :)
>
> Sorry, 14th.. which you obviously noticed..
>
> http://mbnet.fi/elixir/NV40/
>
>

Actually I thought you were talking about the R420. :)

--
Derek