Larrabee vs GeForce 9800 GT vs Radeon 5770 vs GT 250

Page 3 - Tom's Hardware community

cheesesubs

Distinguished
Oct 8, 2009
459
0
18,790
As we know, Larrabee will be released in January 2010. For a mid-range graphics card, which do you think offers the best cost/performance? Will Larrabee win the mid-range GPU war?
 

jonpaul37

Distinguished
May 29, 2008
2,481
0
19,960



All your lies are belong to me!
 

ordos96

Distinguished
Nov 3, 2009
72
0
18,640
With no proof and no hard release dates, we really can't tell where Larrabee will fall in comparison to ATI and Nvidia. I am in favor of Intel joining the competition, since competition drives innovation toward the next best thing and also drives companies to put out the best product at the best price. For these reasons I am looking forward to seeing Larrabee released and matured into a good GPU.

But as far as the information and specs floating around out there go, we have no solid evidence of what it will be. It may surprise us all and outperform ATI and Nvidia (being a first attempt at the market, I don't see this happening), or it may fall flat on its face. This is an argument where we are just going to have to wait and see. I for one am at least excited to see a new face step into the ring for GPUs. I would love it if Nvidia or Motorola were able to step in (or back in) to the PC CPU market to drive some more innovation and competition there as well.

On a side note, I am no fanboy of any of these companies; I have used products from each and been very satisfied with what I bought.
 

cheesesubs



You are completely wrong about the caches.

Fermi has only a 16 KB L1 cache (8 KB instruction and 8 KB data) per core, a 48 KB L2 cache (16 KB instruction + 24 KB data + 8 KB decoder) per cluster, and a 768 KB shared L3 cache. By CPU standards it would perform worse as a central processing unit compared to the Core 2's 64 KB L1 and 4 MB L2 caches. By both Intel's and AMD's standards, Nvidia's caches are relatively small. Evergreen may suck, but it has a 128 KB L1 cache and a 512 KB L2 cache.

PS: My point is that Nvidia's caches are so small they are practically nothing... it kind of reminds me of NetBurst.
 

cheesesubs



How important is cache for RISC and CISC, and for other chips like GPUs and chipsets?

For example, RISC architectures rely heavily on cache to boost their performance and avoid latency (in 32-bit mode this is truly important due to the limited registers and instruction set; 64-bit solves some of the problem). For CISC, since modern x86/x64 is built around its instruction set, it has minimized cache latency and optimized performance: it enhances communication between the northbridge, the processor, data, and cache, and lets more data through with a smaller cache without losing much performance (x64 benefits CISC even more). Still, a 32 KB L1 cache is basically the minimum requirement for any computing, even for CISC with its instruction-set optimizations. Anything below 32 KB faces serious consequences (like NetBurst, for example).

A larger cache may not guarantee a performance boost, but it enhances the features the processor provides. For example, AMD's larger 128 KB L1 cache benefited its HyperTransport; Intel's unified cache enhanced its northbridge capability and dual-channel DDR2 connection; and a large, low-latency 64 KB L1 per core plus a 2-6 MB L2 put it at the top of any architecture ever seen on the planet. Cache is more important than pipeline length or clock rate, since AMD has proved that cache is the key to performance.

Cypress/Evergreen is like a hybrid of the Athlon Barton/Thoroughbred and the R700. It has a 128 KB L1 cache (64 KB instruction + 64 KB data) and a 512 KB L2 cache per GPU, with HyperTransport enabled. As much as I hate to say it, if Nvidia's Fermi really has only 16 KB L1 / 48 KB L2 / 768 KB L3, then they're simply doomed, no matter how many ROPs/TMUs or how wide a memory interface they put into Fermi. Evergreen's monstrous 128 KB L1 cache will just eat Fermi alive. But I still believe Larrabee will eventually beat Evergreen.
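Whether those cache sizes matter comes down to average memory access time (AMAT). A minimal sketch, with purely illustrative latencies and miss rates (none of these are measured figures for Fermi, Evergreen, or Larrabee):

```python
def amat(l1_hit, l1_miss_rate, l2_hit, l2_miss_rate, mem_latency):
    """Average memory access time (cycles) for a two-level cache."""
    l2_amat = l2_hit + l2_miss_rate * mem_latency
    return l1_hit + l1_miss_rate * l2_amat

# Illustrative numbers only: a bigger L1 tends to lower the miss rate,
# but often at the cost of a cycle or two of extra hit latency.
small_l1 = amat(l1_hit=2, l1_miss_rate=0.10,
                l2_hit=12, l2_miss_rate=0.30, mem_latency=200)
big_l1   = amat(l1_hit=4, l1_miss_rate=0.04,
                l2_hit=12, l2_miss_rate=0.30, mem_latency=200)
print(small_l1, big_l1)
```

The trade-off this makes visible: a larger L1 usually lowers the miss rate but can cost extra hit latency, so raw capacity alone does not decide the winner.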
 
Cache is but one player here. Remember, it all comes down to software. Cache brings an inherent latency with it, but avoids huge latencies as well; and there are other approaches, at least on GPUs.
It depends on the strings/workloads/stack/compiler/scheduler, etc.
To say it has more cache means nothing at this point. As I've said, until we have more architecture info (and we won't really have all of it before these parts are released), including how it affects workloads and where the weaknesses and strengths are, it's too early to make claims, though it can be interesting to speculate.
But you bashed instead.
Opinions are fine too, but in your attempt at justifying your opinion, others and I have pointed out that you simply can't do this at this time.
If you want a discussion on the possibilities of each architecture and its potential, that's fine; otherwise, we still need to wait.
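The "other approaches, at least on GPUs" point can be made concrete: GPUs classically hide memory latency by keeping many threads in flight rather than by large caches. A toy model (all thread counts and latencies are hypothetical):

```python
def stall_cycles(mem_latency, threads, work_per_thread):
    """Idle cycles per memory request when the core can switch to other
    runnable threads while one thread waits on memory."""
    covered = (threads - 1) * work_per_thread  # work available to overlap
    return max(0, mem_latency - covered)

# With enough threads in flight, a 400-cycle memory trip is fully hidden
# even with no cache at all.
print(stall_cycles(400, threads=2,  work_per_thread=20))   # mostly exposed
print(stall_cycles(400, threads=32, work_per_thread=20))   # fully hidden
```

This is why a GPU's per-unit cache can be tiny without the chip stalling: the scheduler, not the cache, absorbs the latency, provided the workload supplies enough parallel threads.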
 
Software major Infosys Technologies and Nvidia, world leader in visual technologies and inventor of the Graphics Processing Unit (GPU), on Wednesday announced a partnership to develop Nvidia CUDA (Compute Unified Device Architecture) technology-enabled software solutions.
http://economictimes.indiatimes.com/infotech/software/Infosys-partners-with-Nvidia-to-set-up-technology-centre/articleshow/5220127.cms
Things like this only put more pressure on LRB, which, from everything I've read, doesn't look like it'll be that prominent in the graphics arena, at least not right away, and Intel needs to get their GPGPU solution off the ground, regardless of x86, etc.
As long as Nvidia continues to do these things and have wins here and there, LRB is being attacked at its supposed greatest strength.
 

cheesesubs



As long as Intel brings Hyper-Threading, SSE, larger caches, and a smaller fabrication process to Larrabee, Nvidia will still find it a tough match. That is how Evergreen was able to beat the GTX 295 in some floating-point-heavy games like Far Cry and Lost Planet (Unreal Engine). Evergreen's cache is as large as the Athlon XP's and some Athlon 64 models'. Nvidia's GPGPU, however, shows none of these advantages if those specs are true (16 KB L1 and 48 KB L2 per GPU, with 768 KB of L3 on board? You're kidding me...). In Nvidia's current situation I can see them having little advantage facing an assault from both Intel and AMD (except PhysX, of course!), and I don't see CUDA having a chance against EM64T, SSE, AMD64, or 3DNow! either.

One of the few solutions Nvidia has is to enlarge their caches.
 

brockh

Distinguished
Oct 5, 2007
513
0
19,010


:na:

Is English your first language? Regardless, good luck with your speculation.
 

cheesesubs




No, I speak Martian, and I have never been on this planet before. I must mention that you may find it hard to understand my language (or at least try to), because you are only human.

PS: I don't recall this planet, so please tell me what this planet's name is.

Thank you.

--------------------------------

on topic:

That wasn't my speculation; I was suspecting that Fermi's info might be a hoax too, because I believe that no matter how shoddy Nvidia has been these past years, they wouldn't make a product that is weaker than their previous generation.


Larrabee has 64 KB of L1 and 1,024 KB of L2 cache per core, and Cypress has 128 KB L1 / 512 KB L2 per GPU. I don't think Fermi will have only 16 KB L1 / 48 KB L2 on die and 768 KB on board.
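For scale, the per-unit figures quoted in this thread (all rumored, none confirmed at the time) multiply out to very different totals once unit counts are included. The 32-core Larrabee and 16-cluster Fermi counts below are this sketch's assumptions, not spec:

```python
# Figures as quoted in the thread (rumored); unit counts are assumptions.
larrabee_kb = 32 * (64 + 1024)   # 32 cores x (64 KB L1 + 1 MB L2 per core)
cypress_kb  = 128 + 512          # 128 KB L1 + 512 KB L2 per GPU
fermi_kb    = 16 * 16 + 768      # 16 clusters x 16 KB L1 + 768 KB shared
print(larrabee_kb, cypress_kb, fermi_kb)  # prints 34816 640 1024
```

Even under these generous assumptions, the totals say nothing by themselves about performance; as noted earlier in the thread, latency and workload matter as much as capacity.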
 
Thing is, until we see working drivers, and how well LRB runs everything through its software approach, we won't have any idea about true latencies, and whether they offset any gains from its cache setup/structure; and the same goes for Fermi, with its fixed-function hardware, using CUDA for GPGPU vs. hardware for games.
Too many new approaches. Ask PT, he may know more.
 

cheesesubs



Do you think CUDA will enhance performance on a chip that has only 16 KB L1 / 48 KB L2 on die?



So you're telling me that even though Larrabee has SSE4a/SSSE3 and EM64T, plus dual-channel, 8-way set-associative 64 KB L1 / 1 MB L2 caches, it is still no match for Fermi's tiny, high-latency, low-speed cache?? If Evergreen can do it, so will Larrabee.
 
It depends. Using x86, C++, etc., do you think that helps LRB?
If the app is designed for it, definitely. And having had their feet in the water long before LRB, making a few inroads here and there, they gain more traction, and the longer LRB takes, the more wins Fermi gets.
As for HPC, it may be too late for LRB anyway, but again, it's too early to say.
 

cheesesubs



Evergreen is also an x86 (or x64) GPU... it is basically designed around C++ and SIMD (single instruction, multiple data). The design is far more traditional than Intel's hybrid solution. But if Evergreen has such a huge lead over the GT200s, and possibly over Fermi (again, with PhysX disabled), I don't think a more advanced Larrabee will be beaten, except perhaps for lacking the heavy spam of ROPs/TMUs that Fermi always has, or the massive shader spam of Evergreen or even Northland in mid-2010. That is probably what you were getting at: cache cannot make up for a ROP/TMU/shader disadvantage.
 
And we're seeing more and more need for RAM today, for the newer games.
Also, looking at the potential of the upcoming AMD CPUs, which will have, say, a bare minimum of 4650-70-class ability for the on-die IGP, it raises the bar for the devs, which is a boon for PC gaming, and may not play so nicely with LRB, depending.
Fixed-function hardware is always better than software, but heavy cache and lower latencies may offset it.
Who knows.