If the system memory ran at a sixth, or better, of the speed of the video card's memory, then I'd call it close enough to a draw, but the superior GPU will still make a huge difference.
Just because the game loads 3.5GB of data to the video card doesn't mean that every frame rendered needs all of that data (if it did, the game would get around 62.6fps, assuming there was no other bottleneck or 'delay' in the rendering pipeline, which we can safely say is false anyway).
Since one sixth of 153.6 GB/sec is 25.6 GB/sec (in video card speak they divide by 1000, not 1024), and those video cards have GDDR5 VRAM in that ballpark (actually higher, but not substantially), I'd say the GTX770 would perform better, but only if every frame rendered required all of that 3.5GB of data, which is highly unlikely, but not impossible!
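To make the arithmetic explicit, here's a quick sketch of those figures, counting 1 GB as 10^9 bytes (video card speak). The ~224.4 GB/sec VRAM bandwidth is my own assumption, roughly GTX770-class GDDR5; it isn't stated above.

```python
# Worst-case fps ceiling if every frame had to pull all 3.5GB from VRAM.
vram_bandwidth = 224.4   # GB/s (assumed, roughly GTX770-class GDDR5)
data_per_frame = 3.5     # GB, worst case: every frame touches all of it
fps_ceiling = vram_bandwidth / data_per_frame
print(round(fps_ceiling, 2))    # ~64 fps if VRAM bandwidth were the only limit

# The "close enough to a draw" mark: one sixth of 153.6 GB/s.
draw_threshold = 153.6 / 6
print(round(draw_threshold, 1))  # 25.6 GB/s
```

The point of the sketch is just that the fps figures fall straight out of dividing bandwidth by per-frame data; swap in a different bandwidth assumption and the ceiling moves accordingly.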
- If the system memory were dual-channel at 2333 MHz (which may not be available), then it wouldn't matter.
- If every frame only uses a fraction of that 3.5GB, then it also wouldn't matter.
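For the first bullet, here's why 2333 MHz dual-channel would clear the bar. The 64-bit-per-channel width is my assumption (standard for DDR-style system memory); GB is again 10^9 bytes.

```python
# Rough dual-channel system-memory bandwidth at 2333 MHz effective.
transfers_per_sec = 2333e6  # 2333 MT/s effective transfer rate
bytes_per_transfer = 8      # one 64-bit channel moves 8 bytes per transfer
channels = 2                # dual channel
bandwidth_gbs = transfers_per_sec * bytes_per_transfer * channels / 1e9
print(round(bandwidth_gbs, 1))  # ~37.3 GB/s, comfortably above the 25.6 GB/s mark
```

That lands well above the one-sixth threshold worked out earlier, which is why the memory speed would stop mattering at that point.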
If the GTX770 with 4GB of VRAM can sustain 64.11fps with 3.5GB of texture (and other) data loaded to it, then it could actually be faster than a GTX780 with 4GB of VRAM, depending on the rest of the system specs.
It comes down to the individual frame complexity and what is in the scene being rendered.
If it's mostly shader or other GPU-intensive work then the GTX780 would easily win out.
If it's mostly data being pulled from Video RAM then the GTX770 with 4GB of VRAM would pull ahead.
I'd love to see the actual figures, as you could reverse-engineer a lot of useful data from them!