Yes and no. A CPU will generally have 2 to 6 cores, which are doubled if hyper-threading is enabled. A GPU generally has hundreds of cores that work well together and have been optimised for exactly this kind of parallel math. So yes, the GPU could increase performance for complex math considerably.
The only problem is the distance between the CPU and the GPU, plus the bandwidth required to cross it. It’s like living in a house with a friend in a second house across the street. Every time you want some task done, you leave your house (the CPU), walk over to your friend (the GPU) so he can do the task, and then return home. While the GPU can do the task faster, there is the overhead of sending the task to the GPU and waiting for the response. For most tasks this is still worthwhile, as the communication time is compensated by the time saved by having the GPU do the processing. But if the task is too small or too short, the round trip can actually cost more time than it saves. For example, adding 1+1 is better done on the CPU. Calculating whether 5000 points lie inside or outside a circle is better handled by the GPU. (Do keep in mind that you have to send 5000 points to the GPU and receive 5000 booleans in return.)
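To make the shape of that 5000-point task concrete, here is a minimal CPU-side sketch in plain Python. The function name and the circle parameters are my own for illustration; the point is the data shape: 5000 coordinate pairs go in, 5000 booleans come out, and a GPU version would have to copy exactly that data across the bus in both directions.

```python
import random

def points_in_circle(points, cx, cy, r):
    """CPU version: one boolean per point (inside the circle or not).
    A GPU version would run the same test on all points in parallel,
    but the points must first be copied to GPU memory and the
    resulting booleans copied back afterwards."""
    r2 = r * r  # compare squared distances to avoid a sqrt per point
    return [(x - cx) ** 2 + (y - cy) ** 2 <= r2 for (x, y) in points]

# 5000 random points in the unit square, tested against a circle
# of radius 0.5 centred at (0.5, 0.5)
points = [(random.random(), random.random()) for _ in range(5000)]
flags = points_in_circle(points, 0.5, 0.5, 0.5)
print(len(flags))  # 5000 booleans come back, one per point
```

Each point’s test is independent of every other point’s, which is exactly why this kind of work parallelises so well across hundreds of GPU cores.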
Of course, the connection between CPU and GPU (typically the PCIe bus) offers extremely high bandwidth, so the time lost is minimal. But it is still lost time. Finding the right balance tends to depend on the quality of the GPU and how well it performs.
How do you determine when something should be executed on the GPU? Well, that is up to the developer, who adjusts the code so it uses the GPU for the math-heavy parts. For this he also needs to query the capabilities of the GPU, so the code can decide whether the GPU will outperform the CPU or not. That is a lot of complex code and, because of all the different GPUs out there, not always very reliable. So most developers try to avoid it. The speed gain is generally too small to notice anyway. Well, except for complex graphics…
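That CPU-or-GPU decision often boils down to a simple dispatch rule. The sketch below is a hypothetical illustration (the function name, the threshold value, and the `gpu_available` flag are all my own assumptions, not a real API): offload only when the batch is large enough that the expected speed-up outweighs the transfer cost, and in practice the threshold would be tuned per GPU or measured at startup.

```python
def choose_device(n_items, gpu_available, threshold=10_000):
    """Hypothetical dispatch rule: use the GPU only when it exists
    and the batch is large enough to pay for the round trip over
    the bus. The threshold is illustrative, not a real-world value."""
    if gpu_available and n_items >= threshold:
        return "gpu"
    return "cpu"

print(choose_device(2, True))        # tiny task (like 1+1): stay on the CPU
print(choose_device(50_000, True))   # large batch: worth the round trip
print(choose_device(50_000, False))  # no usable GPU: fall back to the CPU
```

Real frameworks hide this decision (and the capability queries behind it) from application developers, which is a large part of why hand-rolled GPU offloading is usually avoided.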