Hi there
"A graphics card is a computer's data-to-image translator. It takes the binary data from the central processing unit (CPU) and turns it into a screen image.
The CPU works with software to send data about the desired image to the graphics card.
The graphics card is responsible for working out the details of exactly how the pixels on the computer's monitor will create that image. After it does that, it sends the image through a cable to the computer's monitor."
I am building a PC and trying to understand whether I need to focus on the GPU or the CPU for running charting software.
The image shows the software. A few times a second, each value is updated: lines move, colors change, text changes. The main point is that far fewer pixels change, and much less often, than in any kind of gaming or even video playback.
For most of the items displayed, the PC needs to compute values from the incoming data, so it is not simply downloading and displaying raw numbers. Where does this calculation get done (GPU, CPU, or a combination)?
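To make the question concrete, this is the kind of per-update calculation I mean: each new data point updates a rolling value that the chart then redraws. This is only an illustrative sketch (a hypothetical simple moving average, not the actual software's code):

```python
from collections import deque

def make_sma(window_size):
    """Return an update function that maintains a simple moving average
    over the last `window_size` data points -- the kind of per-tick
    calculation charting software typically does on the CPU."""
    window = deque(maxlen=window_size)

    def update(value):
        window.append(value)
        return sum(window) / len(window)

    return update

# Each incoming quote produces a new averaged value for the chart to draw.
sma20 = make_sma(20)
for price in (101.2, 101.5, 100.9, 101.1):
    print(sma20(price))
```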
Since there is no aspect of 3D rendering to the image, how much work would a GPU do to create this moving image?
My current computer suffers lag when processing and displaying six of these charts, but I am wondering whether the bottleneck is the GPU or the CPU. (I can confirm that neither RAM nor the HDD is the limiting factor.)
Stats - Current system: Core 2 Duo 2.26 GHz, 4 MB cache, 2 GB RAM @ 1066 MHz, dedicated Mobility Radeon HD 3650 (256 MB), 7200 RPM HDD, Windows XP 32-bit.
When charting:
- Both CPU cores move between 20-80% usage. At slow times, the second core drops to around 0-40%.
- The software uses about 100 MB of RAM.
- The HDD has a disk queue length of 0 most of the time, occasionally spiking up to 100.
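If it helps, here is roughly how one could log per-core CPU load while the charts are running, to see whether a single core is being pegged (a minimal sketch assuming Python with the psutil package is installed; the sample count and interval are arbitrary):

```python
import psutil

SAMPLES = 60  # about one minute of data at 1-second intervals

# Print per-core utilisation once per second while the charting software runs.
for _ in range(SAMPLES):
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)  # blocks ~1 s
    print("  ".join("core%d=%5.1f%%" % (i, load) for i, load in enumerate(per_core)))
```

If one core sits near 100% while the others are mostly idle, the charting software is probably single-threaded and CPU-bound.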
I am wondering which would provide the greater benefit:
1) An i7, 4 cores (8 threads), 3.4 GHz, 8 MB L3, using the integrated HD 4000 graphics,
or
2) An i5, 4 cores, 3.4 GHz, 6 MB L3, paired with a GeForce GTS 450 (roughly 100-200% more powerful than the HD 4000).
It depends on whether to focus on the CPU or the GPU.
"A graphics card is a computer's data-to-image translator. It takes the binary data from the central processing unit (CPU) and turns it into a screen image.
The CPU works with software to send data about the desired image to the graphics card.
The graphics card is responsible for working out the details of exactly how the pixels on the computer's monitor will create that image. After it does that, it sends the image through a cable to the computer's monitor."
I am building a PC system, and trying to understand whether I need to focus on optimizing the GPU or the CPU, for running charting software.
This image is of the software. A few times a second, each value will be updated; lines will move, change color, the text will change. Main point - fewer pixels change, and much less often, than for any type of gaming, or even playing a video.
For most of the items displayed, the PC will need to compute and calculate values; so it is not simply downloading and displaying raw data. This calculating gets done somewhere? (GPU, CPU, combination?)

Since there is no aspect of 3D rendering to the image, how much work would a GPU do to create this moving image?
My current computer suffers lag when processing/displaying 6 of these charts.... but I am wondering if the bottleneck is the GPU, or the CPU. (I can confirm that RAM is not the limiting factor, or the HDD)
Stats - Current system is Core2Duo 2.26 4MB L3 , 2GB@1066 , dedicated Mobility radeon HD 3650 (256MB) , 7200rpm HDD. XP 32-bit.
When charting
-both CPU cores move between 20-80% of use. At slow time, the 2nd core drops to around 00-40%.
-Software uses about 100MB of ram.
-HDD has a 0 disk queue length most of the time, occasionally spikes up to 100.
I am wondering which would provide the most appropriate benefit:
1) i7 4-core(8thread) 3.4Ghz 8MB L3, using the integrated HD 4000
or
2) 1) i5 4-core 3.4Ghz 6MB L3, and a GeForce GTS 450 (about 100-200% more powerful than the HD 4000)
It depends on whether to focus on the CPU or the GPU.