AMD/ATI Accelerating GPU Flash Player 10.1 Too

Guest
Flash videos suck, and are not comparable to MP4/AVI videos.
 

steiner666

Distinguished
Jul 30, 2008
C'MON INTEL! The GMA 950 sucks as a graphics adapter, but it would be great if Flash could use it SOMEWHAT so that my netbook's CPU doesn't get the shit kicked out of it when watching Hulu videos.
 

jeffkro

Distinguished
Aug 26, 2009
[citation][nom]zingam[/nom]Why not? They could just disable n number of CUDA cores and you have a cheaper GPU. I bet they'll do it. With that 3 billion GPU they'll have a lot harvesting to do. You will definitely see a whole bunch of crippled cards by NVIDIA next year.[/citation]

Yup, the only market I see is Atom/Ion HTPCs for watching Hulu and YouTube fullscreen. Still, that's a pretty good niche market.
 

razor512

Distinguished
Jun 16, 2007
If you have a system with a 5870, then you most likely have other high-end parts, which means you won't notice any improvement from hardware acceleration.

What's needed is support for low-end systems, e.g. a single-core 1.6 GHz CPU with a GeForce 7300 or something else low end. These systems will not handle full-screen Flash unless Adobe comes out with a Flash player that's fully GPU accelerated.

 

murphx

Distinguished
Oct 7, 2009
[citation][nom]dheadley[/nom] ... more and more complex games will be created that for all intense purposes require acceleration to be playable....[/citation]

Seriously, I just had to sign up to correct this! This is a real howler! The phrase is "all intents and purposes", as in "intentions and purposes"; it has nothing to do with intensity. I can't believe I'm having to write this...
 

alextheblue

Distinguished
[citation][nom]zingam[/nom]I don't get it! If you have 4870 then you should probably have powerful CPU to. Why do you need an acceleration then?[/citation]So you're saying that producing a 40nm wafer of 1.5 billion transistor chips will produce the same number of chips (working or otherwise) as a 40nm wafer of 3 billion transistor chips? Using the same wafer size and manufacturer? The answer is no. Yes, they will be binning the GT300 - all chips are binned, my friend. But the only thing binning helps with is poor yield %; it doesn't change the actual cost to make the damned things. It allows them to sell partially-broken or underperforming parts. But this will only scale down so far.
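Rough back-of-envelope version of that cost argument, if it helps (a minimal Python sketch; the die areas, wafer diameter, wafer cost, and defect density are made-up illustrative numbers, not real GT300 or RV870 figures):

[code]
import math

def cost_per_good_die(die_area_mm2, wafer_diameter_mm=300.0,
                      wafer_cost=5000.0, defects_per_cm2=0.4):
    """Back-of-envelope: a bigger die means fewer candidates per wafer AND lower yield."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    # crude gross-die estimate (ignores edge loss and scribe lines)
    gross_dies = int(wafer_area / die_area_mm2)
    # simple Poisson yield model: yield falls off exponentially with die area
    yield_frac = math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)
    good_dies = int(gross_dies * yield_frac)
    return wafer_cost / max(good_dies, 1), gross_dies, good_dies

for name, area in [("mid-size die (~330 mm^2)", 330.0), ("huge die (~530 mm^2)", 530.0)]:
    cost, gross, good = cost_per_good_die(area)
    print(f"{name}: {gross} candidates, {good} good, ~${cost:.0f} per good die")
[/code]

Even if every partially defective big die could be salvaged and binned into a cheaper SKU, the wafer still yields far fewer candidates in the first place, which is exactly the scaling-down problem.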

Look no further than the GT200 for an example. The chips can scale UP as yields improve, but scaling DOWN is impossible unless you like losing money. Did they ever sell a GT200 below the 260? No. They are too expensive to sell in the cheap sector, so they used a rebadged 9800 GTX+ (G92b) and named it GTS 250 rather than selling GT200 chips that were even more gimped than the GTX 260. G92 is smaller, and therefore a hell of a lot cheaper to produce on the same process. Everything GTS 250 and below is G92 currently.

They also have trouble getting heat and power consumption issues under control with large chips. Again, take the GT200 as an example. Now look at the laptop "GTX 260M" and "GTX 280M". Guess what? They're not GT200 either! That's right, the mobile "GTX260/280" chips are G92, again.

Conclusion: the huge GT200 die was just not appropriate for the mobile and cheap markets, at 55nm. At 40nm, the huge GT300 will have problems scaling for these same markets. So for Nvidia's sake, I hope they release a GT2xx (upgraded GT200, maybe with DX 10.1 and CS 4.1), to cover these important markets.
 

alextheblue

Distinguished
Sorry, I meant to quote Zingam's other comment (below). I was explaining why it's not always better to just sell your huge fancy new chip in all market sectors.

[citation][nom]zingam[/nom]Why not? They could just disable n number of CUDA cores and you have a cheaper GPU. I bet they'll do it. With that 3 billion GPU they'll have a lot harvesting to do. You will definitely see a whole bunch of crippled cards by NVIDIA next year.[/citation]
 

pender21

Distinguished
Nov 18, 2008
Why are so many people complaining? This is a good thing that Adobe is doing, especially for Flash video sites on next-gen netbooks or HTPCs.
 

geoffs

Distinguished
Oct 24, 2007
[citation][nom]mlcloud[/nom]What GPU uses three times more power than a processor does? Most GPUs will use 225 watts (PCIe = 75w, 2x6pin 75w), and pairing a GPU that will require extra power connectors with even a duo-core won't get you the magical "3x power consumption". Both parts already consume electricity during idle, might as well put them to use. I don't see what you're complaining about.[/citation]Intel's mainstream desktop C2Ds use a maximum of 65W. As you pointed out, GPUs can use upwards of 200W, which is at least 3x.
 

alextheblue

Distinguished
[citation][nom]geoffs[/nom]Intel's mainstream desktop C2Ds use a maximum of 65W. As you pointed out, GPUs can use upwards of 200W, which is at least 3x.[/citation]
That's at full tilt. A GPU with 3x the power consumption but many times the raw compute power won't need to run at full tilt. At full tilt a 5870 puts out over 2 TFlops. How about that C2D? So if the GPU is mostly idle while accelerating something easy, it may not eat as much power as you think, and more importantly, it's going to eat some of that power ANYWAY even if you don't use it.

Look at the latest integrated GPUs, which are capable of accelerating Blu-ray streams (including H.264). If you took away that acceleration and tried to decode it entirely in software (i.e., on the CPU), you'd need a fast CPU and it would eat a lot of CPU cycles. In the end, it would probably eat up more power than letting the GPU do most of the work.
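A crude way to see the energy argument (a minimal Python sketch; the wattages are assumed round numbers for illustration, not measurements of any specific CPU or GPU):

[code]
# Energy = power x time; what matters is total joules over the clip, not peak watts.
movie_hours = 2.0

# Assumed illustrative numbers (not measurements):
cpu_only_watts = 60.0      # CPU pegged doing pure software H.264 decode
cpu_assisted_watts = 15.0  # CPU mostly idle, just feeding the hardware decoder
gpu_extra_watts = 20.0     # GPU's added draw over idle while its decode block runs

software_path = cpu_only_watts * movie_hours
hardware_path = (cpu_assisted_watts + gpu_extra_watts) * movie_hours

print(f"Software decode: {software_path:.0f} Wh")
print(f"GPU-assisted decode: {hardware_path:.0f} Wh")
[/code]

The GPU's 200 W rating only matters if the workload actually pushes it there; a mostly fixed-function decode path barely wakes the rest of the chip.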
 
Guest
So Adobe would utilize M$'s compute API to decode the H.264 bitstream, just like CoreAVC does (using CUDA)?
 
Guest
The purpose of Flash is to have moving colors and graphics that the CPU builds, renders, and animates as vectors, using very little data served by the content-hosting website. It was the alternative to streaming video, which maxed out bandwidth at the time of Flash's invention, as a way to provide rich content for websites.

Since it caught on and works with most browsers via a plug-in, people have clicked ads and linked from one website to another, and because of its success, games, simulations, and even full programs have been written for it.

However, games like FarmVille or Cafe World on Facebook chew the hell out of all my PCs and kill battery life when I log on while mobile.

But to be fair to everyone who logs in to these websites, the sites use Flash, which is only accelerated to a point and maxes out a single CPU core, no matter the quality level, when rendering vector graphics.

The point of the technology is to have a large install base using a uniform plug-in acquired from one common source, and to exploit that avenue for maximum advertising and potential profit.

If they wanted to, they could program the games to run using whatever acceleration you have on your PC, but not everyone has the same chip or chipset, and the cost of true support for all of them would kill any potential profits.
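A minimal sketch of the kind of fallback logic being described, in Python with hypothetical names (has_gpu_decode() and the renderer classes are placeholders for illustration, not a real Flash or driver API):

[code]
def has_gpu_decode() -> bool:
    """Hypothetical probe: a real player would ask the OS/driver for a usable decode path."""
    return False  # assume the worst case: an old chipset with no usable acceleration

class SoftwareRenderer:
    def draw_frame(self, frame):
        print("rasterizing vectors on one CPU core:", frame)

class GpuRenderer:
    def draw_frame(self, frame):
        print("compositing/decoding on the GPU:", frame)

# Pick the best path available, fall back gracefully otherwise.
renderer = GpuRenderer() if has_gpu_decode() else SoftwareRenderer()
for frame in ("frame 1", "frame 2"):
    renderer.draw_frame(frame)
[/code]

The catch the poster is pointing at: every extra hardware backend is another chip/driver combination to test, and that support cost is what eats the profit.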

Remember when DVD first came to the PC: there were systems that could play it fine and others that would stutter, skip, and look terrible. The standard was the disc and MPEG-2 compression, but the systems that played that media type were all different. So the push was to standardize the requirements, and that's where Intel, ATI, and Nvidia all added acceleration for that particular data type for performance and power optimization. It was also driven by the media market, so that sales of DVDs would increase.
 