It depends on the software. There is both CPU and GPU encoding, and the right choice depends on the software you prefer and the settings you choose. Realistically, though, if you do a lot of video encoding it is a VERY good idea to have BOTH a capable CPU and a capable graphics card. Whether that card is a gaming card or a workstation card is totally up to you, but the fact is that a decent gaming card will almost always be less expensive than an equally capable workstation card, and in some cases MORE capable per dollar. So a good graphics card may well be very important in your case. Also, be aware of WHAT technologies you need. Nvidia cards support some things that AMD cards don't support well, and vice versa. Knowing whether your use case, or the software you use, favors one or the other is extremely important before deciding between AMD and Nvidia for graphics.
Certainly both can work in most cases, but Nvidia has generally held an edge due to its CUDA and hardware-accelerated encoding performance, which has practically always been a bit better. Sometimes "better" isn't "worth" the difference in price, though. If you don't encode professionally, or at least semi-professionally as a very serious hobby, then it probably doesn't matter AS much.
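To make the "it depends on the software" point concrete: most tools that wrap ffmpeg pick between a CPU encoder (libx264) and a hardware encoder (Nvidia's h264_nvenc, AMD's h264_amf) based on what your ffmpeg build reports. Here's a minimal sketch of that selection logic; the function names are my own, and the preference order is just one reasonable assumption, not how any particular program actually does it:

```python
def pick_encoder(encoder_listing: str) -> str:
    """Pick a hardware H.264 encoder if the ffmpeg build reports one,
    otherwise fall back to CPU encoding with libx264."""
    # Preference order is an assumption: Nvidia NVENC, then AMD AMF, then CPU.
    for candidate in ("h264_nvenc", "h264_amf"):
        if candidate in encoder_listing:
            return candidate
    return "libx264"

def build_command(src: str, dst: str, encoder_listing: str) -> list[str]:
    """Assemble a bare-bones ffmpeg command line using the chosen encoder."""
    return ["ffmpeg", "-i", src, "-c:v", pick_encoder(encoder_listing), dst]

# In practice you would feed it the real listing from your ffmpeg build:
#   import subprocess
#   listing = subprocess.run(["ffmpeg", "-hide_banner", "-encoders"],
#                            capture_output=True, text=True).stdout
```

The takeaway is that the GPU only helps if the software actually exposes and uses its encoder; otherwise the work lands on the CPU no matter what card you have.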
Right now, I still don't see the benefit of paying more for a DDR5 platform, because there is basically STILL no benefit from the faster DDR5 frequencies while its latency remains so high. Down the road, when the process is more mature and memory kits have much lower latencies at the same frequencies, that may change. And if you want something you WON'T have to buy different memory for again when you upgrade the next time, you might just bite the bullet and pay the extra for a DDR5 memory kit and board now so it's done and over with. Then again, commercially available DDR6 is loosely expected to be seen in 2025, so if there is little chance you'd upgrade again BETWEEN now and then, skipping DDR5 and sticking with much less expensive DDR4 might still make a lot more sense, especially if keeping the overall price of the hardware as low as possible is an important consideration.
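The frequency-versus-latency tradeoff is easy to put numbers on: first-word latency in nanoseconds is 2000 × CL / data rate (MT/s), since DDR transfers twice per clock. A quick sketch, using two illustrative kits (the specific CL ratings are assumptions, but representative of common kits):

```python
def first_word_latency_ns(cas_latency: int, data_rate_mts: int) -> float:
    """First-word latency in nanoseconds.

    DDR transfers data twice per I/O clock, so one clock period is
    2000 / data_rate ns, and the CAS delay is that period times CL.
    """
    return 2000 * cas_latency / data_rate_mts

# Illustrative kits (assumed, but typical CL ratings):
ddr4 = first_word_latency_ns(16, 3200)   # DDR4-3200 CL16 -> 10.0 ns
ddr5 = first_word_latency_ns(40, 6000)   # DDR5-6000 CL40 -> ~13.3 ns
```

So despite nearly double the transfer rate, a typical DDR5 kit can actually take longer to return the first word of a read, which is why the frequency advantage doesn't automatically show up in real-world results.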
And honestly, you are not going to see a heck of a lot of difference in performance between the 12900K and 7950X, especially in anything that relies primarily on single-core processing; they are very close in single-core performance. The 7950X has much better multicore performance, BUT ONLY if you are either running a WHOLE SLEW of programs and processes at the same time OR running an application that is well known to scale across as many cores as are available, and few programs can do that. Aside from server-type applications and maybe some purpose-built scientific ones, there aren't many consumer applications that can use more than ten cores/threads simultaneously. Since both of these CPUs have 16 cores, the difference in threads (24 for the 12900K versus 32 for the 7950X) probably isn't going to translate into real-world gains unless, as I said, you are seriously multitasking and running MANY simultaneous programs.
At some point, more cores and threads, which is really all the 7950X offers by comparison, hit diminishing returns. It does have SLIGHTLY better single-core performance, but I'd be pretty skeptical about paying 600 bucks for a five to ten percent gain in single-core performance.
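Just to put numbers on that diminishing-returns point, here's a quick Amdahl's law sketch. The 90% parallel fraction is an assumption for illustration (and is generous for most consumer software), but it shows why extra threads stop paying off:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: speedup over one core when only parallel_fraction
    of the workload can actually run in parallel."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Even with 90% of the work parallelizable, doubling from 16 to 32
# threads barely moves the needle:
s16 = amdahl_speedup(0.9, 16)  # 6.4x over one core
s32 = amdahl_speedup(0.9, 32)  # ~7.8x over one core
```

Going from 16 to 32 threads buys roughly a 20% improvement in this scenario, and the gap shrinks further as the parallel fraction drops, which is exactly where most desktop workloads sit.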