From what little is known of the Hollywood GPU, the closest ATi part (which still isn't very close at all) would be the RV530: also a 90nm chip, with roughly the same die area as Hollywood's "Napa" sub-die, and thus ostensibly a comparable budget of transistors and logic, though the 3MB of embedded memory may skew that comparison. A high-res photo of the chip's bare surface would do wonders for understanding it. It also appears that the "Vegas" sub-die, which carries 24MB of its own embedded memory, houses the ROP structure and possibly the memory controller as well.
What can already be inferred is that the chip has at least 4 ROPs, 4 TMUs, hardware T&L support, some form of shader support, and every other piece of architecture that gives it natural backwards compatibility with the GameCube, much like how a Core 2 can run anything made for a Pentium 4 in spite of being a different architecture. It can apparently handle at least normal mapping and specular mapping, both effects seen fairly extensively in some games (like Need for Speed: ProStreet and Super Mario Galaxy). I'd judge that it does have programmable pixel shader capacity (which every console save for the Xbox, Xbox 360, and PS3 lacked), given how some games have been ported over graphically, though perhaps not a terribly flexible one.
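To give a rough sense of what per-pixel normal and specular mapping involve computationally, here's a minimal Blinn-Phong-style sketch in Python. Every vector and constant in it is invented for illustration; it says nothing about how Hollywood's hardware actually produces these effects.

```python
# Minimal sketch of per-pixel normal + specular shading (Blinn-Phong
# style), purely to illustrate the kind of per-pixel math these
# effects imply. All vectors/constants are made up for the example;
# the Wii's actual pipeline works very differently.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Per-pixel inputs: a normal fetched from a normal map, plus light
# and view directions. These particular values are arbitrary.
normal = normalize((0.1, 0.2, 0.97))   # sampled from the normal map
light  = normalize((0.3, 0.5, 0.8))
view   = normalize((0.0, 0.0, 1.0))

diffuse = max(dot(normal, light), 0.0)

# Specular: half-vector raised to a shininess exponent
half_vec = normalize(tuple(l + v for l, v in zip(light, view)))
specular = max(dot(normal, half_vec), 0.0) ** 32  # 32 = example shininess

print(f"diffuse={diffuse:.3f}, specular={specular:.3f}")
```

The point is simply that both effects require some computation per pixel beyond plain texturing, which is why their presence in shipped games hints at at least limited programmable (or highly configurable) pixel-level hardware.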
Likewise, memory bandwidth and size are a potentially serious limitation. The system has an apparent hard cap on resolution of 720x480 @ 24bpp, as it seems to retain the flat 1MB maximum frame buffer size from the GameCube; a buffer at that resolution fills 1012.5KB of it. Beyond that, there's 2MB of high-speed cache on the GPU available to the TMUs, the 24MB of eDRAM within Vegas, and lastly the slowest pool: 64MB of GDDR3 attached to the GPU through what is apparently a 64-bit memory interface. Markings on the chips indicate 1.4ns GDDR3, which would suggest an effective data rate of 1400MHz (a 700MHz clock, double-pumped); it's the same memory found in the Xbox 360 and PS3, though on a narrower interface, which works out to 11.2GB/sec of bandwidth. Demand on that pool would be reduced by the dedicated frame buffer, as well as by good coding that makes heavy use of the cache. I suspect this is why the machine typically holds a flat 60fps in most of its games: it has the bandwidth to support higher resolutions, but its buffer size is hard-capped.
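The arithmetic behind those two figures, the 1012.5KB frame buffer and the 11.2GB/sec of bandwidth, is easy to check; here's a quick Python sketch using only the numbers assumed above (720x480 @ 24bpp, a 1MB buffer cap, and 1400MT/s across a 64-bit bus), none of which are confirmed specs.

```python
# Rough numbers from the post: a 720x480, 24bpp frame buffer against
# the GameCube-era 1MB embedded buffer cap, and peak bandwidth of the
# external GDDR3 pool. All inputs are the post's assumptions.

WIDTH, HEIGHT, BPP = 720, 480, 24      # max output resolution, bits/pixel

fb_bytes = WIDTH * HEIGHT * (BPP // 8)
print(f"Frame buffer: {fb_bytes / 1024:.1f} KB of a 1024 KB cap")
# -> Frame buffer: 1012.5 KB of a 1024 KB cap

# 1.4ns GDDR3 -> 1400 MT/s effective on a 64-bit (8-byte) bus
bus_bytes = 64 // 8
transfers_per_sec = 1.4e9              # 1400 MT/s
print(f"Peak bandwidth: {bus_bytes * transfers_per_sec / 1e9:.1f} GB/s")
# -> Peak bandwidth: 11.2 GB/s
```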
In the end, I'd say it lies somewhere between the Radeon X1300 Pro and the X1300XT/X1600 Pro. Compare the Xbox 360, which lies between the Radeon X1650XT and the HD 2600XT (though without full SM 3.0 support, let alone any SM 4.0 support), while the PS3 uses a watered-down version of the 7800GS, with the core clocked up to 550MHz but crippled by a 128-bit memory interface.
To put it another way, if I represented their graphical power as a number, this is how some of these consoles would stack up:
■PlayStation 3 - 120
■Xbox 360 - 100
■Nintendo Wii - 35
■Xbox - 25
■GameCube/PS2 - 15