Upgrade Graphics Card/PSU

hd6670
480 shaders (each can perform 1 floating point multiply and 1 add per clock)
24 texture units (includes address units to fetch texture data for the pixels making
up the surface of polygons, and sample units for up to 16x anisotropic filtering/128 tap to
correctly render textures on objects not at 90 or 45 degree angles and at a distance)
8 rops (render back ends, mostly for calculating multisample anti-aliasing/jaggie
reduction and, i think, high dynamic range lighting, or is that shaders too?)
800mhz gpu clock equals a theoretical 768 gflops (billions of floating point operations/sec)
128 bit memory interface (at 8 bits per byte, that's 16 bytes per clock)
gddr3 usually clocked at 800mhz (1600 effective with double data rate), so about 25.6GB/s bandwidth.
gddr5 usually @1ghz (gddr5 is 4 transfers per clock, so 4 giga transfers/sec), about 64GB/s.
bandwidth is used mostly in calculating aa (sending and receiving samples from the frame buffer)
2GB gddr5 frame buffer (stores high res texture data, aa sample data, and possibly
several finished frames when buffering to keep rendering smooth when the number
of frames the graphics card is completing does not match the monitor refresh rate)
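
a quick sketch of where those throughput numbers come from (the function names are just for illustration, not from any real tool):

```python
# rough theoretical throughput numbers for the hd6670, from the specs above

def gflops(shaders, clock_ghz, ops_per_clock=2):
    """each shader does 1 multiply + 1 add per clock = 2 operations."""
    return shaders * ops_per_clock * clock_ghz

def bandwidth_gb_s(bus_bits, transfers_gt_s):
    """bus width in bytes times effective transfers per second."""
    return (bus_bits / 8) * transfers_gt_s

print(gflops(480, 0.8))           # 768.0 gflops
print(bandwidth_gb_s(128, 1.6))   # gddr3 @800mhz (double data rate): 25.6 GB/s
print(bandwidth_gb_s(128, 4.0))   # gddr5 @1ghz (4 transfers/clock): 64.0 GB/s
```

same arithmetic works for any of the cards below: shaders x 2 ops x clock for gflops, bus bytes x transfer rate for bandwidth.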

incidentally, it seems to me that 1GB of frame buffer is most appropriate for resolutions
of about 2 megapixels (like your monitor), 2GB for the higher res 27-30in panels
(about 4 megapixels) and 3GB for 3 panels at 1080p (about 6 megapixels). 2GB of
frame buffer on a 6670 gddr3 is total overkill, as the narrow bandwidth and small
number of rops at lower clock speeds will not be fast enough to fill it and still provide
playable framerates. (actually, a 512MB frame buffer might be more appropriate for
cards such as the hd6670 and 5670, though they may be able to use 1GB in isolated
instances. 1GB is pretty much a standard minimum for all cards nowadays, whether
they can use it or not.) on to the 6770 specs, and i'll throw the 6750 in too (it sits between them).
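
for reference, here is the megapixel math behind those frame buffer suggestions (resolutions are common examples, not necessarily anyone's exact monitor):

```python
# rough megapixel counts behind the frame buffer rule of thumb above

panels = {
    "1680x1050 (20-22in)":       1680 * 1050,
    "1920x1080 (1080p)":         1920 * 1080,
    "2560x1600 (30in panel)":    2560 * 1600,
    "3x 1920x1080 (triple 1080p)": 3 * 1920 * 1080,
}
for name, px in panels.items():
    print(f"{name}: {px / 1e6:.1f} megapixels")
```

that works out to roughly 1.8, 2.1, 4.1 and 6.2 megapixels, which lines up with the ~2/~4/~6 megapixel tiers above.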

hd6770(rebrand of 5770)
800 shaders
40 texture units
16 rops
850mhz gpu clock(1360gflops)
128 bit memory interface
gddr5 @1.2ghz(4.8 giga transfers/sec) x 128 bit interface is about 76.8GB/s.
1GB gddr5 frame buffer(quite appropriate for the card's rendering power and most
likely target resolutions)

hd6750(rebrand of 5750)
720 shaders
36 texture units
16 rops
700 mhz gpu clock(1008gflops)
128bit memory interface
gddr5 @1.15ghz(4.6 giga transfers/sec) x 128 bit interface is about 73.6GB/s.
512MB-1GB gddr5 frame buffer(the gpu can take advantage of this amount of
frame buffer with its level of performance)

would like to mention the restructuring of shaders in the last couple generations
of radeon GPUs. for a long time, amd used vliw5 (very long instruction word 5), where
4 out of 5 shaders in a group work on a pixel (each pixel up to 32 bit precision in the red,
green, blue and alpha transparency channels), with the fifth, fatter shader
performing the occasional transcendental function like sine, cosine, tangent, etc.

these groups of five were further grouped into 16 vliw5s, for a block of 80 shaders,
times 1(80), 2(160), 4(320), 5(400), 6(480), 9(720), 10(800), 12(960), 14(1120), 18
(1440) or 20(1600), depending on the gpu in question and whether units were disabled due
to binning. what amd found was that the last unit was rarely used, so
with the hd6900 cards, amd switched to vliw4, dropping the last fat unit
and applying its extra parts to the other 4 shaders in the group. this made them all
a little fatter, and you would lose a bit of performance when a group of 4 was used
to calculate a special function, but they were able to put more 16x4 bunches on the
silicon without much of a size increase at 40nm (hd6950 w/64x22 or 1408 shaders,
6970 w/64x24 or 1536 shaders). these cards also got a boost in clock speed compared
to the hd5850 (1440 shaders or 80x18) and hd5870 (1600 or 80x20). i predicted the change
would result in a 20-25% increase in performance per theoretical gflop, and i was right.
hd6950 @800mhz with 2253 gflops performs generally equal to or better than hd5870
@850mhz with 2720 gflops. they both have 32 rops and a 1-2GB frame buffer. the only
other difference is the # of texture units (always 4 for a block of 64-80 shaders), but
since even bottom level cards can do 16x af without issues, i see this as irrelevant.
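
the perf-per-gflop claim checks out; a minimal sketch of the arithmetic:

```python
# sanity check on the vliw4 vs vliw5 perf-per-gflop comparison above

def gflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz   # 2 ops per shader per clock

hd6950 = gflops(64 * 22, 0.80)   # vliw4, ~2253 gflops
hd5870 = gflops(80 * 20, 0.85)   # vliw5, 2720 gflops

# if the two cards deliver roughly equal game performance, vliw4 is
# getting about this much more done per theoretical gflop:
print(f"{hd5870 / hd6950 - 1:.1%}")   # ~20.7%
```

so "equal performance from ~17% fewer theoretical gflops" translates to roughly a 21% gain per gflop, right at the bottom of the predicted 20-25% range.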

with the low/mid-high end hd7000 cards (the lower cards are rebrands of older GPUs), they
enhanced performance further by making each shader block of 64 go from 16 groups of
4 shaders each to 4 groups of 16 shaders, making the architecture more similar to what
nvidia has been doing since the geforce 8 series. this is part of what allows a card such
as the hd7850 to perform as well at times as an hd6970 or gtx570 (the two older cards
used to slug it out in price/performance from US$315-350) despite an otherwise similar
architecture to the 6970, lower core and vram clocks and massively reduced theoretical
gflop performance. this from a card that started life at US$250 and can now be had for
a little under US$200. pretty amazing, and newer drivers have been making it even better.

find out more by clicking the links to pages for recent radeon families here:
http://en.wikipedia.org/wiki/Comparison_of_AMD_graphics_processing_units

might you possibly reconsider keeping your current psu and getting 1 of these hd7750s?

vtx3d 81.70 pounds with free delivery in the UK(i know nothing of this company)
http://www.amazon.co.uk/VTX3D-Graphics-128-Bit-PCI-Express-Architecture/dp/B0078XMRF8/ref=sr_1_2?ie=UTF8&qid=1346100314&sr=8-2

club3d 82.58 pounds with free delivery in the uk (decent company rep in the past)
http://www.amazon.co.uk/Club3D-Radeon-GDDR5-128-Bit-Graphics/dp/B007A3SQ7Y/ref=sr_1_7?ie=UTF8&qid=1346100314&sr=8-7

doesn't say whether these prices are before or after VAT (that's an issue, right?).
 
do they even make a single slot 6770? if not, you are out of luck on that front.

take a look at these pages

tom's latest gpu buyer's guide, august 2012(prices in US dollars)
http://www.tomshardware.com/reviews/gaming-graphics-card-review,3107.html

page 2 shows both 6670 and 7750(6670 for 1680x1050 in most games, 7750 for
1920x1200 in most games, perhaps with lowered detail)
http://www.tomshardware.com/reviews/gaming-graphics-card-review,3107-2.html

page 3 has 7770(same perf. as 6790/faster than 5770/6770 but 80w so needs pcie 6pin)
http://www.tomshardware.com/reviews/gaming-graphics-card-review,3107-3.html

some folks have taken it on themselves to overclock the 7750 a bit in cases like yours.
you won't exceed the 75w power limit coming off the peg slot, and if you undervolt, you
may get better performance AND lower power consumption. the 7750 already performs
close to the 5770/6770 in some cases. a small OC could push it over the top. hope this helps.
 
Let's put it this way. If you were me, what would YOU prefer?

I would like the best card, with or without a new PSU, for the cheapest combined price.
 
just noticed the voltage on your cpu. 1.648 volts? holy cow, that looks high!

according to this:

http://en.wikipedia.org/wiki/List_of_AMD_Phenom_microprocessors#.22Rana.22_.28C2.2FC3.2C_45_nm.2C_Tri-core.29

your core voltage should be between 0.85 and 1.425v. sure, some mobos make small
adjustments for stability's sake, but that is a pretty big gap when talking voltage. see if you
can get into an advanced settings page of your bios and lower that puppy a bit. you may
have to change one or more other settings to make it stick without the mobo resetting it
to what it wants. i am not the best person to guide you on that though, so perhaps seek help
with that issue from more experienced folks. i do know that at a given clock, an increase in
voltage has a roughly squared impact on power use. for example, your max volts at 2.7ghz
should be 1.425 with a 95w tdp (100%). say you increase voltage 10% to about 1.570 volts
(not sure that is a proper step, voltage may move in .025v increments): your clock speed
is the same, but the new voltage changes power use like so: 110% squared (1.1 squared)
is 1.21, or 121%. with the stated tdp in the screenshot of 96w, your actual power use
would be about 116w, and that is at a lower voltage than you are currently running
(1.570 vs. 1.648)! i think a lot of less extreme overclockers would suggest not exceeding
110% of standard voltage on a cpu, gpu, chipset or ram. you are exceeding that by a
bit now! ok, calm down, calm down...
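
the voltage math above can be sketched like this (a rough square-law estimate at fixed clock, ignoring leakage and temperature effects):

```python
# power scales roughly with the square of voltage at a fixed clock
# (a simplification, but good enough for a ballpark)

def scaled_power(base_watts, base_volts, new_volts):
    return base_watts * (new_volts / base_volts) ** 2

# rated max: 1.425v at the screenshot's stated 96w. a 10% voltage bump:
print(round(scaled_power(96, 1.425, 1.425 * 1.1), 1))   # ~116.2w
# and the observed 1.648v reading would imply roughly:
print(round(scaled_power(96, 1.425, 1.648), 1))         # ~128.4w
```

in other words, that 1.648v reading, if real, could mean ~30w of extra heat over spec without a single mhz of extra clock.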

back to graphics. all things considered, a normally clocked hd6770 with the usual 1GB
of gddr5 should perform more than 2x as fast as a 6670 w/gddr3. that would also make it faster
than two 6670s in crossfire, even if they ran at almost 100% scaling efficiency (occasionally
they may slightly exceed 200% of the performance of one 6670, but that is within the margin
of error for different benchmark runs. sometimes 2 cards run worse than one if support
in a game is poor and the cpu/driver is working extra hard to make it work). average
scaling ends up being around 50%, but it is highly dependent on the games you play and
their level of support for the technology.

maybe i am mixing this up with an original post on another thread, but fyi

don't remember if anyone explained nvidia sli or amd crossfire to you, but it is a method
of using 2-4 same model (nvidia) or similar model (amd) cards together to boost
performance. this can save money compared to one more powerful card (faster for
cheaper), but it does require a mobo with 2+ pcie graphics (peg) slots, a beefier psu
and better case cooling. it also may require a stronger cpu to feed the 2 video cards.

of course this path is not without risks. in addition to varying levels of support by
game developers, there is also the issue of micro stuttering. this is hinted at on several
sites that review video cards, but techreport gets to the heart of the matter. they do
short runs and record the amount of time in milliseconds it takes to render each frame.
say one card seems to show in fraps or the like a framerate of about 50 in a given
second, but does so by rendering about 50 10ms frames and one really long 500ms (1/2
second) frame. it's not usually this extreme, but much beyond 50ms per frame (20fps or
lower) you will likely notice. say another card shows 40fps in fraps but does so with 40
even 25ms frames in a given second. what if that first card (or 2 or more cards working
together) had multiple episodes of rapidly fluctuating frametimes in the course of a 60-90
second run in a game, while the other card (or cards) maintained relatively smooth
frametimes throughout the test?
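
a toy illustration of that frametime idea, with made up numbers (not real benchmark data):

```python
# toy frametime logs showing how average fps can hide a stutter

card_a = [10] * 50 + [500]   # 50 quick frames plus one half second hitch
card_b = [25] * 40           # 40 evenly paced frames

for name, frametimes_ms in [("card a", card_a), ("card b", card_b)]:
    seconds = sum(frametimes_ms) / 1000
    fps = len(frametimes_ms) / seconds
    print(f"{name}: {fps:.0f} fps average, worst frame {max(frametimes_ms)}ms")
```

card a "wins" on average fps (51 vs 40), but that one 500ms frame is a visible hitch, while card b feels perfectly smooth. this is why frametime plots beat fps averages.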

this effect is noticed more easily by some, less by others. not sure why.

which one do you think you would want? sli/crossfire seems to have more issues with
this, since the preferred method of splitting up the workload between 2 or 3 GPUs is to
assign each a frame to render in order (vs. the driver trying to intelligently split each frame
into similarly difficult to render top/middle/bottom portions). a 4 gpu
setup (either 4 expensive cards on a very expensive mobo, or 2 very costly dual gpu cards
on a somewhat less expensive mobo) uses a combination of the 2 (one gpu does the top and
bottom of even frames, the other does the same for odd frames, for 2 dual gpu cards or
similar for 4 separate cards). usually performance scaling is best with 2 GPUs, drops
significantly with 3 and gets almost no use out of a fourth (which is why it is often better to
go with 3 high end single gpu cards vs. 2 over the top dual gpu cards). nowadays it seems
multi gpu on the high end is a bit of a waste (unless you run 3 1440/1600p screens or
6 1080p screens, but then you will likely run out of vram with high levels of aa).

folks can save cash with 1 gpu, smaller case, psu and slightly less feature rich mobo.
(and have just one screen that can play 1080p video at native res, not having to
worry about bezels and the additional rendering power needed to work around 'em.)

a single overclocked hd7970 can handle 3 1080p screens quite well. so in general it is
a better idea to get one more powerful card and keep it a while than to get a lesser card
with hopes of adding another, or 2 lesser cards in the first place. i will say though that based
on techreport's testing methods (all the video cards they test are evaluated the same
way), nvidia gk104 based cards seem to run smoother than gcn (graphics core next) based
radeon hd7000 series cards, at least as far as multi gpu setups are concerned. this can
change for either company from one driver version to the next.

7750s started out as single slot cards. 7770s started as dual slot and i think it is those
more powerful cards that are hard to find single slot since they demand better cooling.
 
i guess when i said the 6770 would be more than 2x the speed of the 6670 gddr3,
i was thinking of the 5670, not the 6670. a normal 6770 would definitely beat 2 5670 gddr3s,
2 w/gddr5 might be about the same here or there, same for the 6670 gddr3 (more
shader power than the 5670 but hamstrung in spots by lack of vram bandwidth), and
it is entirely possible that 2 6670 gddr5s working together could outperform a
single regular 6770 in a number of instances. i would edit a previous post, but
when i tried to do that a little while ago the system wouldn't let me.

while i have yet to post this time, another thing came back to me. while your
current psu is likely a few years old and perhaps doesn't run quite as well as it
used to, i think you can be fairly confident, in adding such a video card, that its
power requirements do not exceed what your mobo's peg slot can provide.

oem computers may ship with power supplies that seem anemic compared to what
is out there (not even offering any pcie 6pin connectors), but if they hadn't intended
for you to be able to make full use of that peg slot, they might have opted to ship the
computer with a mobo that does not include a peg slot (somewhat common in the
past to have no pcie or agp, only ordinary pci). i doubt many buy an oem computer
with gaming in mind at first, but as time goes on and you want to play stuff that the
integrated graphics can't handle very well (if at all), the graphics slot is there for that
purpose (if the oem had the foresight to include one, which yours did).

again, lowering the voltage on your cpu and gpu (even your ram and mobo chipset) can
have a massively positive effect on your overall system power consumption. if you
drop the voltage so low that the pc becomes unstable, you can always bump it back
up (a little bit). and, unlike overstressing your psu, which can have potentially catastrophic
consequences, instability due to undervolting at worst costs you some somewhat
important data that you had in ram at the time. if you test first (again, i am
not the best source for methods, so seek those with more experience to guide you), then
significant data loss should be a non-issue as well.
 
thought i might post a bit about where i'm coming from. don't think i'm super strange
or anything, but in your current upgrade situation i feel somewhat connected to you
based on my own previous upgrade experiences and my current situation. this may
get a tad long, but i will try to keep it concise. here we go.

mid 1980s-1998
my first pc was a hand me down from grandpa with an 8086 (or 8088, not sure) proc,
512KB (yes, kilobytes) of system ram (later upgraded to 640k), a 3.5in 720KB floppy
drive (added a 5.25in floppy drive later), a monochrome monitor (went color later) and
a printer that ran the paper through using spiked spindles and holes at the perforated
edge of the paper. ran dos(before windows existed) and used mostly to play really
basic games(by modern standards) and write the odd paper for a school assignment.
also at least once sat down with a midi music program that came with it to try and enter
the music from a song we sang in choir to make it play back as close to as how we would
perform it as possible. went off to college for a while and used computers in labs there
(games, music as a major, a few papers), then left school for various reasons, one of
which was to help take care of a sick relative. by this time the old pc(basically amongst
the first of what were referred to as PCs)had been lost to a church auction.

mid 2000 thru early 2001
got a decent paying job that i liked ok at first but learned to loathe before long,
and got a new pc. sony vaio w/ pentium 3 866 on a 133mhz fsb (i used to think the 2
added together to make a 1ghz proc...live and learn), 256MB of sdram, 40GB hdd,
integrated sound and intel video. the sony speakers and flat 15in crt were pretty
cool, and of course it was a colossal upgrade compared to the old pc, but not well
optimized for gaming (windows ME). the relatively ancient (now) intel graphics let me
play the odd game, and ethernet (or was it usb?) along with the rise of high speed
internet opened me up to the pitfalls and possibilities of the world wide web.
(oh yeah, a cd burner and a dvd drive; whole pc at compusa for 1500-1600 total.)

fall 2001 to early 2002 (bought on sale at radio shack for 1317 after tax/shipping)
quit my old job, got a better one, then a better pc (relatively speaking; not great by a long shot):
athlon xp 1700+ (1.467ghz), 512MB ddr 266, 80GB hdd, faster opticals (same setup),
creative sound blaster pci, geforce 2mx, monsoon 2.1 speakers and a compaq (the oem) crt.
also an epson printer i'm not sure i even used once, but it was in the package for $1.
this system stayed with me for a while. in fact it is sitting next to me now (powered
down). unfortunately i let overlong gaming sessions interfere with my work, much to
my detriment (cautionary tale), so i lost my apartment and moved back in with the
folks. got another job and another apartment (same thing happened), lost it and
went back with the parents again. in and out and in church/choir during this timeframe.

fall 2004-early spring 2005 (tower bought at office max, other parts elsewhere)
got another job for a while and saved up some money, then instead of wisely using
it to live off of after losing another job, bought a new pc. more research went into
this one (perhaps not enough). bought a base compaq unit for 500-some with an a64 3200+
socket 754 (agp, 512MB single channel ddr400, sse2 only, but otherwise the same as the
popular a64 3500+ at 2.2ghz w/512KB L2), a cd burner/dvd player combo drive, upgraded in
store at purchase, prior to build and shipping, from an 80 to a 160GB hdd, and a 300w psu
(more on that later). if i had waited i might have gotten a socket 939 mobo that supported
the soon to arrive dual core CPUs and pci express graphics, but fools rush in/a fool
and his money are soon parted. bought a dynex (best buy) 19in 1600x1200 @75hz crt
(meant to get the flat model but wasn't paying close enough attention) for about 100+
tax, an ati radeon x800xl 256MB agp at compusa for 350+21 tax minus a 50 rebate (321),
a sound blaster live 24bit hd for 30, logitech 2.1 speakers for 30 (or 50?), win xp pre-
installed, another 512MB stick of ddr400, and a dynex media keyboard w/ optical mouse.
did fairly decently on component price/perf for the time, but more thought to the
future likely would have saved me grief. (considered a dell with a p4 3.0-3.2ghz and a
geforce 6800. maybe i should have gone that route.)

work since then has been spotty at best, so cash wanted/needed for maintenance
or little extras has been few and far between. after a few years, the fan on the x800xl gave out.
luckily, i caught the last couple weeks of blowout sales at the local compusa before they
closed their doors forever. got a dual slot fan that blows cold air from outside the case
directly up at the gpu heatsink (courtesy of two rear pci slots; sound in the bottom, cooler
in the middle and gpu on top in a micro tower). picked up a few other items too (more later).
the gpu would get very hot (monitored at around 100c during bf2142), so i would manually
adjust the fanspeed switch for the gpu fan at the back of the case. it's weird: it
would go through a transition from almost no corruption, to all the soldiers looking like
technicolor origami, to normal again once a certain temp threshold was reached, then i'd
turn up the fan, occasionally having to pause the game in single player to adjust the fan up
and down to keep playing. well, finally the x800xl gave up the ghost and would show nothing
but a garbled mess at post, so i was out a home computer for the better part of a year.
could still get online at the library, but gaming was sorely missed.

worked for a brief stint with a temp agency and got the funds to replace my gpu. not
ideal, since the replacement generally underperformed its predecessor a bit, but it was the
right combo of not too horrid a price for agp and system compatible (x1650pro 512MB).

this is where the story gets like yours a bit, original poster. the x800xl was a 110nm
card that had the same architecture as the best radeons of the time but slower clocks
and fairly low power requirements( under 75 watts, so no pcie connector needed).
also performed like the nvidia 6800gt of the time.

however, since i was on agp (50w), i needed a molex (50w); no biggie. i think maybe the
extreme temperatures on the x800xl at the end of its life may have had a detrimental
effect on my psu, or its failure may have been unrelated. good thing i bought a 400w
compusa psu dirt cheap (cheap because the oem 300w had more amps on the 12v rail than
the store bought 400w). the x1650pro is the 80nm shrink of the 90nm x1600xt (similar
to the geforce 6600gt) but with lower power requirements. this ran off agp and a floppy
connector and added support for dx9.0c shader model 3. much fewer rops, so little or no
aa, similar limits for af, but higher shader power than the old card. it served me fairly well
for a while til its fan gave out. had to use the store bought cooler again, but the fan was not
as peppy as before. then it got to the point where the monitor would blank after a few
minutes of being booted into the os. used the old compaq for a while until it rebooted and
got caught in a loop. removed the geforce 2mx from the OLD compaq and put it in the
"current" computer. if you look at the last page of tom's gpu buyer's guide, you will see a
list of amd/ati, nvidia, and intel video cards. the 7750 is a lot closer to the top, my dear
departed half decade old cards are toward the middle (bottom), and the geforce 2mx
is just above the most ancient of intel mobo graphics at the very bottom of the chart.

dx7. yay! also, the dynex crt died recently, went with the compaq crt til it died (sold the
whole vaio setup many years ago) and am currently using a dell crt i found on a curb
walking home one day. if i can scrape it together in the near future, i'd like to maybe get a
low end 96 shader dx11 geforce that runs on an old school pci slot. the slot only gives 25w
(vs the agp slot's 50w), but since i am on a 2.2ghz single core athlon 64, and pci bandwidth
is only 133MB/s and shared between all devices on the bus (vs agp 8x at 2133MB/s, pcie
1.0 x16 at 4GB/s, 2.0 at 8GB/s and 3.0 at 16GB/s), i would have to switch to mobo audio
(never used before), put the cooling fan in the bottom 2 slots and the new gpu in the slot
below agp (passively cooled; reviews say it runs hot) so as not to tax the pci bus more than
would be required. yeah, my proc and that bus will limit the data sent to the gpu, so it
will perform much slower than a pcie model, but maybe the same as or faster than the
better cards i used to have. or i can find stable work and get a whole new rig.
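
to put that bus comparison in perspective, a quick sketch (approximate peak numbers from the paragraph above):

```python
# approximate peak bandwidth of the graphics buses mentioned above, in MB/s

buses = {
    "pci (shared by all devices)": 133,
    "agp 8x":                      2133,
    "pcie 1.0 x16":                4000,
    "pcie 2.0 x16":                8000,
    "pcie 3.0 x16":                16000,
}
for name, mb_s in buses.items():
    print(f"{name}: {mb_s} MB/s ({mb_s / 133:.0f}x plain pci)")
```

plain pci has roughly 1/16th the bandwidth of agp 8x and 1/120th of pcie 3.0 x16, which is why a pci card is such a compromise.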

also, the heatsink popped off my mobo's chipset years ago, so i have been running the pc
without it. the hard drive hangs from time to time, and recently the store bought usb
keyboard and mouse go unresponsive at times. got out the flimsy keyboard and ball mouse
that came with the computer. already the mousewheel is broken and some of the keys on
the keyboard are drooping (i'm left handed and prefer the dexterity for keys on the left and
simpler mouse commands on the right, so the right side of left shift, used for running/walking
and single caps, and the left side of the spacebar, used for jumping, are the main victims of
my heavy handedness). these run off old ps2 ports (separate chip on the mobo) and i have
a usb to ps2 adapter for my optical mouse, so i may switch back to that and regain use of
the mouse wheel for scrolling and zooming web pages. not sure what to do if the keyboard
fails. funny how the keyboard that came with my 10yr old compaq has usb while the one
that came with the 2005 compaq (no brand loyalty) is ps2. the old mouse is ps2 also and
held up well for years, but the older pc is an all white color scheme while the "newer" one
is black and silver, including the salvaged dell crt. another reason to switch back to the
optical mouse is so i won't have to clean the rollers/ball underneath from time to time,
plus generally better responsiveness. sorry this got long. a little (or a lot) off topic and
probably more than you wanted or needed to know. shutting up now.

 
jtenorj!

Sorry for not getting back to you sooner, I've been super busy.

Thanks for all the info regarding GPUs and thanks for the history lesson. Always nice to know where someone is coming from.

Still haven't got a new GPU, just saving up a lil more cash...times are hard, man! :)

 