Wii U Launch Developer Complains of Lackluster CPU

[citation][nom]alphi[/nom]xbox 360 has 3 cores ps3 has 8 SPUs...the same developer makes a game that runs smoother on a 5 Year old 8 core PS3 CELL CPU than a similar title on the ultra new 3 core power7 CPU.clearly threading is not the issue here.. the original WII was heavily criticized for its underwhelming graphics.it would appear that Nintendo has overcompensated by strapping an overpowered GPU to an underpowered CPU.they probably went with the lower spec CPU to keep costs down (since the WII U seems to be much cheaper at launch than the original Wii) having to feed such a powerful GPU with data and of course maintain the ability to stream it all to the controller has got to eat into CPU performance.[/citation]

The Wii was $249 at launch. The WiiU is $299.
 
It's funny how one guy tells us its CPU is slow. That's why they have GPGPU in it; if you know anything about specs you would know it can make up for the CPU. Nintendo picked that so it won't overheat, and they have said for a fact that the Wii U has much better graphics.
 
[citation][nom]Weapon-X[/nom]its funny how one guys tells us its cpu is slow thats why they have gpgpu in it if u know anything about specs u would know it can make up for cpu nintendo picked that so it wont overheat and they have said for a fact the wii u has much better graphics[/citation]

If you knew anything about programming or digital system design, you would know that stream processors aren't very useful for interdependent logic. GPGPU has a lot of limitations, and when functional interdependence is introduced the benefits decrease dramatically.
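A toy sketch of that distinction (hypothetical functions, purely to illustrate): work where every element is independent maps well onto a GPU's stream processors, while a loop-carried dependency forces serial execution no matter how many stream processors you have.

```python
# GPGPU-friendly: every pixel is independent, so thousands of stream
# processors could each handle one element with no coordination.
def brighten(pixels, amount):
    return [min(255, p + amount) for p in pixels]

# Interdependent logic: each step consumes the previous result, so the
# chain is inherently serial and stream processors barely help.
def compound(balance, rates):
    history = []
    for r in rates:
        balance = balance * (1 + r)   # loop-carried dependency
        history.append(balance)
    return history
```

The first function is a pure "map" over its input; the second is a running product, where step N cannot start until step N-1 finishes.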
 
I completely understand that 4 cores on a VM is not 4 cores/4 threads on the host. You are right, it doesn't work like that. The largest problem with multi-core setups is memory/register management. Who has access to what, and when, is always the limitation (and will be for as long as I can see going forward).

However, based on real-world results, putting 4 VMs (all Windows Server) on 1 core vs. 2 cores generates NO performance increase. How is that possible? Because the 8 cores on the host are managing the threads. In this instance, IMVIRT/libvirt creates the threads, sends them to the CPU, and the task is performed (obviously within milliseconds). Switching to 4 cores per machine makes no difference either. So tell me, how is this NOT the fault of the OS kernel on Windows? It sees more cores, yet you get the same results. Crappy kernel thread management is the answer.

Plus there are different types of hypervisors and virtualization/para-virtualization. Somewhere there is a thread (or 4, or 20) and it's getting sent down to the HOST CPU; on Linux it flies through it, on Windows it doesn't. If it's bad info, explain why real-world experience is trumping what you are saying.

(It's the 'net, so please don't take the reply as harsh; I am just debating the real-world outputs against the on-paper "facts".)
 
@theabsinthehare

I do understand what you are saying, and you are correct that my saying "3 cores is enough" was an invalid way to put it. That is my mistake, and it misrepresents what I am trying to get across.

Even with 10-year-old technology (XB360, PS3), developers (from what I have read) are still using 1 core for function X (like sound), 1 core for function Y (GUI), 1 core for blah blah blah, etc.

This, in my opinion, is a patched-together way of doing things, inherited from a style of programming that was always linear (task 1, go here, compute, on to task 2).

Now programmers have to "manage" so much code and write hundreds of thousands of lines just to even see if there is a difference/advantage in their technique. Frankly, it's like asking everyone to start reading books upside down. Instead, someone will just invent something to flip them right side up and all will be good.

Game developers are still doing this. While things like OpenCL and CUDA are trying to help with this dilemma, the pickup rate isn't nearly high enough. Game developers need low-level access to the hardware (or high-level, depending on how you look at it). C++ is still the language of choice, as assembly would not only take forever but would be kind of a waste, since the C++ compiler converts to assembly, down to hex, down to machine code at some point.

The problem I see isn't the hardware, it's the developers. I say this because it's a MASSIVE overhaul of the system (from programming in a linear state to a multi-headed, managed state), and I don't think the hardware is at fault at this point. The big 3 (Nintendo, Sony, Microsoft) are geared towards multi-million-dollar developers, and those publishers/developers are looking for a profit and a quick return on investment. As long as the big 3 are targeting non-indie developers, I think it will stay this way.
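The alternative to pinning one subsystem per core is task-based decomposition: split each frame into many small independent jobs and let a pool balance them across however many cores exist. A minimal sketch (function and field names are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def update_entity(entity):
    # Hypothetical per-entity work; each job is independent of the others.
    entity["x"] += entity["vx"]
    return entity

def run_frame(entities, workers=4):
    # The pool spreads jobs over whatever workers/cores are available,
    # instead of hard-wiring "sound on core 0, GUI on core 1, ...".
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(update_entity, entities))
```

The same code scales from 1 core to 8 without rewriting, which is the whole point of moving away from the linear, per-subsystem style.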

I think OUYA has a huge market and a chance to bump the "system" off the grid. I also think indie developers will be the ones who help bring the non-linear future of programming to the doorstep of "mass consumers".
 
[citation][nom]antilycus[/nom]I completely understand that 4 cores on a VM, is not core 4 threads on the host. You are right, it doesn't work like that. The largest problem with multi-core setups is memory management/register management. Who has access to what, when is always the limitation (and will be for as long as I can see going forward).However, based on real word results, putting 4 VM's (all windows server on 1 core vs 2 cores) generates NO PERFORMANCE increase. how is that possible? Because the 8 cores on the host are managing the threads. For this instance, IMVIRT/libvirt creates the threads, sends them to the CPU and the task is preformed (obviously within milliseconds). Switching to 4 cores per machine, makes no difference either. So, tell me how is this NOT the fault of the O/S kernel on Windows? It see's more cores yet you get the same results. Crappy kernel thread management is the answer.Plus there are different types of hypervisors and virtualization/para-virtualization. Somehwere there is a thread (or 4 or 20) and it's getting sent down to the HOST CPU and on Linux it flies through it, on Windows it doesnt. If it's bad info, explain why real world experience is trumping what you are saying?(its the 'net so please don't take the reply as harsh, I am just debating the real world outputs to the on paper "facts")[/citation]

The exact specification of virtual SMP and virtual SMT varies by VMM vendor. However, in general:

When you assign multiple logical CPUs to a VM each logical CPU exists as a single process and each one is co-scheduled. Thus, when the host scheduler wants to enter a VM as many host-logical processors must be available as there are guest-logical processors on that VM. All physical processors then enter guest mode and restore the guest state at the same time. This is critical as having processors change state in a fashion that isn't apparent to the guest can cause all sorts of concurrency issues. This is why it is a very bad idea to assign more than half the available logical processors to a single VM as all processors will have to pause host execution while they perform guest restore and guest save operations. You can overcommit logical processors all you want to a variety of VMs, just don't do a total commit to a single VM. Hyperthreading causes a bit of an optical illusion here as well, but no performance degradation.

Once the guest state is restored, the VMM loses control until a privileged instruction is called, at which point it traps and either translates that instruction or executes it within the guest-state context. This mechanism is what allows a guest OS to handle its own threads via hardware context switches, hardware interrupts, and hardware address translation, but not trigger a power cycle on the host. Guest threads do not get passed through to the host at all; they are completely decoupled by design. Were this not the case, it would not be possible to run a 64-bit guest (long mode) on a 32-bit VMM (protected mode) that is running on a CPU with 64-bit extensions and VT-x extensions.

It is the CPU that tells the host when a guest needs to have the host intervene, not the host kernel. The only thing that the host scheduler has to do is provide the VMM with the necessary CPU time and co-schedule the assigned virtual processors. It does not have to handle the guest threads at all. You can envision a virtual processor as a process on the host that is running multiple hardware and software isolated guest processes using its own internal scheduler (the guest scheduler).
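A toy model of the co-scheduling constraint (not any vendor's actual algorithm): a VM can only enter guest mode when all of its virtual CPUs fit onto free host logical CPUs at once, which is why fully committing every host CPU to a single VM forces the whole host to pause around it.

```python
def co_schedule(host_cpus, vm_vcpu_counts):
    """Admit VMs for one scheduling slice, first come first served."""
    free = host_cpus
    admitted = []
    for vcpus in vm_vcpu_counts:
        if vcpus <= free:          # every vCPU must be placed simultaneously
            admitted.append(vcpus)
            free -= vcpus
    return admitted, free

# Overcommitting across several VMs is fine: a VM whose vCPUs don't all
# fit this slice simply waits for the next one rather than fragmenting
# its guest state across partial entries.
```

With 8 host CPUs and VMs asking for 4, 2, and 4 vCPUs, the first two enter guest mode together and the third waits for the next slice.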
 
The first problem when it comes to programming is how students are educated at the college and university level, because publishers/developers are unwilling to retrain new hires, so things get stuck in the old ways and remain inefficient. I had to drop a class at my school because they were not teaching how to write logically efficient code; instead they taught the most common way, which results in bloated and buggy code. The same problem persists at other colleges. Last but not least, publishers do not give developers enough time for code to be checked over and tested for bugs. They always want tight deadlines that are very difficult to keep, and worse, those who do game development end up working much longer work weeks than normal programmers. Who in their right mind wants to work a 60+ hour work week when the pay isn't very good and the boss is breathing down your neck 16 or more hours a day?
 
So the Wii U has a custom Radeon 7 series in it? I thought it had a custom Radeon 4 series in it. Nevertheless, if it's a weak Radeon 7 series GPU, then they actually went backwards in performance compared to the Radeon 4870 series specs that it was rumored to have in it before.
 
[citation][nom]SteelCity1981[/nom]So the Wii U has a custom Radeon 7 series in it? I thought it had a custom Radeon 4 series in it. Never the less if it's a weak Radeon 7 series gpu then they actually went backwards in performance compared to the Radeon 4870 series specs that it was rumored to have in it before.[/citation]



Umm, no, it is not a weak chip at all; based on many other developers, the thing is a beast. Don't make crap up based on "oh, it is a 7 series GPU," because that means nothing more than "oh, it is a 7 series GPU."
 
[citation][nom]alphi[/nom]xbox 360 has 3 cores ps3 has 8 SPUs...the same developer makes a game that runs smoother on a 5 Year old 8 core PS3 CELL CPU than a similar title on the ultra new 3 core power7 CPU.clearly threading is not the issue here.. the original WII was heavily criticized for its underwhelming graphics.it would appear that Nintendo has overcompensated by strapping an overpowered GPU to an underpowered CPU.they probably went with the lower spec CPU to keep costs down (since the WII U seems to be much cheaper at launch than the original Wii) having to feed such a powerful GPU with data and of course maintain the ability to stream it all to the controller has got to eat into CPU performance.[/citation]



The Cell chip in the PS3 is not an 8-core chip. It is a single PPC-based core with 8 specialized SPUs, of which 6 are available to games and the rest are disabled or reserved for the OS and other things. Those SPEs share the main CPU's bandwidth and power; they are hardly full CPUs or cores in themselves.
 
Look, the Wii U is rumored (i.e., we don't know for sure) to have a chip with 3 independent cores (not modules like current Intel/AMD). The PS3 is one full core (with two hardware threads) plus 6 SPUs available to games. Multiprocess programming (not multithreaded, which usually doesn't run on multiple cores) is difficult to do. Unfortunately, few programmers take the time to learn how to do this properly; even after we've had multicore systems for a number of years, few apps and games take advantage, because they rarely need to and because people refuse to learn. This is the same argument as with the PS3: it's too damn difficult for the average programmer to take full advantage of. I've had one or two classes that touched on multi-process programming, but not enough for someone to really know what to do. Sadly, those are the people most game companies hire to code their games.

nforcemax has it right. If you really want to be good at programming/design you need to do it yourself; college doesn't get you ready, and neither does the average company.
 
[citation][nom]pinhedd[/nom]Virtualization doesn't work like that.Each virtual processor exposed to a virtual machine runs as a hardware isolated process on the host machine. Inside of that hardware isolated process the thread management is performed by the virtual machine's kernel and not the host kernel.This is not the first time you've posted this incorrect information and this is not the first time I've had to correct you on it.[/citation]


Could you cut and copy someone else's typed information any better?
 
[citation][nom]therabiddeer[/nom]Because 10 years of programming on triple core x360 and the many core PS3 clearly havent taught them anything about threading... right. It is totally the developers fault and not actually just a weaker cpu.[/citation]
It's one thing to distribute different systems to their own threads; it's different when you need to get one system to work across multiple threads. Still, I'll hold my judgement of the Wii U until I know more; it should provide interesting gameplay if nothing else.
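Getting one system to work across threads usually means chunking its data. A rough sketch, assuming a simple position/velocity update (the function and its layout are hypothetical):

```python
import threading

def integrate(positions, velocities, dt, threads=2):
    # Split the entity range into contiguous chunks and update each
    # chunk on its own thread; chunks never overlap, so no locks needed.
    n = len(positions)
    step = (n + threads - 1) // threads

    def worker(lo, hi):
        for i in range(lo, hi):
            positions[i] += velocities[i] * dt

    ts = [threading.Thread(target=worker, args=(lo, min(lo + step, n)))
          for lo in range(0, n, step)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return positions
```

This only works cleanly because each entity's update is independent; the moment entities interact (collisions, shared state), the partitioning gets much harder, which is the real difficulty being pointed out above.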
 
[citation][nom]Bloob[/nom]It's one thing to distribute different systems to their own threads, it's different when you need to get one system to work in multiple threads. Still, I'll hold my judgement of the WiiU until I know more, should provide interesting gameplay if nothing else.[/citation]



The only information you need to know is that the Wii U runs on a Power7 chip, the same chip used in the Watson supercomputer. It has been confirmed by IBM several times over the past 2 years since the Wii U announcement in 2011. So the only thing making it slower compared to the darn 360 or PS3 is that it doesn't run at 3.2 GHz, and we know clock speed is not everything anymore.

The 360 CPU was the first multi-core design by IBM to be produced, and it is based around the POWER4/5 generation of chips from 2003-2004.


 
[citation][nom]notuptome2004[/nom]the only information ya need to know is that the Wii U runs on a Power 7 chip the same chip used in watson supercomputer it has been confirmed by ibm sevral time over the past 2 years since Wii U anointment in 2011 so the only thing making the thing slower compared to the darn 360 or PS3 is that it dont run @ 3.2ghz and we know clock speed is not everything anymore the 360 CPU is first Multi-core design by IBM to be produced and it is based around the Power 4/5 generation series chips from 2003-2004[/citation]

I thought the Xbox CPU was based on POWER6, which is why it can run 6 simultaneous threads. Do you have any source for the IBM comments?
 
Ugh... the last 5 paragraphs said the same thing in different ways. It really could have been condensed into a single paragraph, roughly: "We have not mastered the Wii U CPU, as that will take years, so our performance is sub-par compared to the 360/PS3, but fine on the graphics end, so we'll get there eventually."
 
[citation][nom]iam2thecrowe[/nom]maybe they should think about using the machines GPU power instead of so much cpu power for fpu calculations.[/citation]



No, because the POWER6 chip was not produced until 2007, nearly 2 years after the Xbox 360. That is not to say they did not have prototypes in 2006 or so, but 2007 was when POWER6 came out. The biggest reason they would end up using the POWER7 chip in the Wii U, even a custom one with only 3 cores, would be power: POWER7 can do more work per clock at a lower clock speed and at a reasonable power level.

The POWER4/5 chips needed very high clock rates to obtain good performance.
 
Good GPU power is a good thing. When they learn to use that GPU power to help the CPU, this will beat the old 360 and PS3 really hard. Of course we expect more from the 720 and PS4, but at this moment this is the fastest console on the market, and when it is Nintendo, that is something we don't see too often!
And it can take some time before we see new versions from MS and Sony on the market. Most probably we will see a new MS console next, and Sony will have to delay its new version quite a bit longer, because they cannot sell their console at a steep loss at this moment, if the news about Sony's problems is true.
So the Wii U will be the strongest console for a while, and the second best for quite a bit longer. Who would have guessed that?
 
[citation][nom]antilycus[/nom]use a kernel that has the best thread management available (Linux). That will solve your CPU problems. I run clients windows server 2008 on a VM using Debian/Linux kernel and the performance differences is unbelievable. Thread management on WIndows kernels is a joke (xbox 360). PS3(though i cant stand it) had the right Idea by using Linux, it was their CELL that was problem (memory bottleneck and registers bottleneck due to 1 "core" managing all cores).I don't know teh O/S or Kernel used in Wii U, but if 3 processors can't get it done, your developers are stuck in 1999. They (like most programmers) need to understand writting threads, not objects.The CPU count is plenty high, the complete lack of understanding HOW to write threaded games, is the problem.[/citation]

Seems like an unlikely cause, as this is a developer who already knows how to take advantage of the multiple Cell cores. It sounds like the per-core CPU performance is just lower on the Wii U, which they can't get around even with the best threaded code.
 
[citation][nom]slabbo[/nom]looks like someone needs to learn about GPGPU.http://www.cinemablend.com/games/W [...] 47126.html[/citation]


GPGPU can't do everything well that a CPU does well. Plus, even the best of today's GPUs take a graphics performance hit while doing GPGPU calculations, let alone the fairly low-end one in the Wii U.
 
[citation][nom]SteelCity1981[/nom]So the Wii U has a custom Radeon 7 series in it? I thought it had a custom Radeon 4 series in it. Never the less if it's a weak Radeon 7 series gpu then they actually went backwards in performance compared to the Radeon 4870 series specs that it was rumored to have in it before.[/citation]


7 series = RV700 series = HD 4000 series. Not the 7000 series.
 
[citation][nom]notuptome2004[/nom]the only information ya need to know is that the Wii U runs on a Power 7 chip the same chip used in watson supercomputer it has been confirmed by ibm sevral time over the past 2 years since Wii U anointment in 2011 so the only thing making the thing slower compared to the darn 360 or PS3 is that it dont run @ 3.2ghz and we know clock speed is not everything anymore the 360 CPU is first Multi-core design by IBM to be produced and it is based around the Power 4/5 generation series chips from 2003-2004[/citation]

Nope, IBM's official Twitter account said any past references to it being Power7 were false, merely calling it "Power-based" instead.
 