I use vim throughout the day (it sounds like you're running Linux or one of the *NIX clones). If you're building kernels, then more cores matter more than individual core speed. Any database (if actually being hit with lots of traffic...a dev workstation doesn't normally see that much traffic) needs lots of cores and memory, though individual core speed also matters (imagine creating a lot of indices...computing a hash sum is single-threaded and doesn't distribute across cores, but not all operations compute a hash sum). You're going to have a hard time combining high core count with good core speed and low cost. FYI, if you are not seeing serious traffic from multiple users (if it is just a workstation for you), then even four cores would be good (though it certainly wouldn't hurt to have 8).
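As an illustration of why kernel builds reward core count, make can fan compilation out across every core you have (a minimal sketch; the source path is just a placeholder for wherever your kernel tree lives):
cd ~/src/linux            # placeholder path to your kernel source tree
make -j"$(nproc)"         # one compile job per available core; build time scales almost linearly with cores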
As soon as you add in a web server the need for more cores goes up, but probably all of them will bottleneck on hard drive access (single-user testing is seldom an issue with four cores...hard drive speed is always an issue). Consider a separate (and slower/lower cost) ordinary hard drive for each of (a) web content, (b) SQL content, and (c) the primary O/S. Hard drive I/O is going to hurt far more in those cases than slow CPU core speed or low core count. You'll find hard drives limiting speed even on a single-user workstation. Having a separate drive each for the database and HTTP content will go a long way towards having a responsive system even if the drives are mediocre.
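If you want to confirm the disk really is the bottleneck, iostat (from the sysstat package) shows per-device utilization while your test runs; this is just one way to check:
sudo dnf install sysstat    # Fedora; on Debian/Ubuntu it's apt install sysstat
iostat -x 2                 # extended stats every 2 seconds; %util near 100 means that drive is saturated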
Quite often hard drive speed matters more for a database than individual core speed, but it depends on whether you are creating an index or just reading something already indexed. Having a lot of fast M.2 (NVMe) space for any database would be good, but if you plan to do this with the budget you mention, then it isn't going to happen (it's ok to have the M.2 slot for future use, but I'd expect not to have any fast drive at that total system price).
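For a rough idea of what a given drive can sustain on reads, hdparm has a simple timing test (the device name here is just an example...substitute your own):
sudo hdparm -t /dev/sda     # buffered read timing from the actual disk
sudo hdparm -T /dev/sda     # cached read timing, for comparison against RAM speed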
You really need to say more about what you might compile, and what kind of test case or stress testing you might do for a database or web server (e.g., 10 people hitting it, or just you).
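For the web server side, ApacheBench (the ab tool, from httpd-tools on Fedora or apache2-utils on Debian) can simulate that "10 people hitting it" case cheaply; the URL is a placeholder for whatever you're serving:
ab -n 1000 -c 10 http://localhost/    # 1000 requests total, 10 concurrent clients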
FYI, when Linux reads the file system, and when you have lots of RAM, it tends to use the spare RAM as a buffer/cache. Subsequent access to that same content gets faster until something else needs that memory. If you run "xosview" and enlarge the window, take a look at the "MEM" content and how it is subdivided into different uses. Now run something which will read (but not write) the disk just to see how the cache and/or buffer goes up:
sudo time find / 2>/dev/null 1>/dev/null
Here's one I did:
Fedora 27
root /# time find / 2>/dev/null 1>/dev/null
real 4m22.597s
user 0m4.958s
sys 0m19.778s
root /# time find / 2>/dev/null 1>/dev/null
real 0m8.061s
user 0m2.420s
sys 0m5.516s
If you have freshly booted and have not read this content before, then you can expect the time required for the second read to go down dramatically (provided nothing in between forced the kernel to give up that cache/buffer). So lots of RAM can also improve disk access performance even if you don't have applications specifically using all of that RAM. I suspect 16GB is a good amount for a lot of purposes.
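You can watch this happening without xosview too; free shows how much RAM is currently going to the page cache (the buff/cache column):
free -h    # human-readable sizes; watch buff/cache grow after the find run above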
Consider:
■ 6 cheaper AMD cores (best bang for the buck; 6 cores go a long way for developers).
■ 16GB RAM.
■ Three cheaper ordinary drives mounted on "/", "/var/www/", and wherever your SQL data lives (see the example fstab entries below).
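As a sketch of what that three-drive layout might look like in /etc/fstab (the device names and the MySQL data directory are assumptions...use your actual UUIDs and your database's real data path):
/dev/sdb1  /var/www        ext4  defaults  0 2    # web content on its own spindle
/dev/sdc1  /var/lib/mysql  ext4  defaults  0 2    # SQL data on a third drive ("/" stays on /dev/sda)
The point of the split is that the web server and the database aren't competing with the O/S (or each other) for seeks on a single drive.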
FYI, I use NVIDIA video cards exclusively, but you have to use the proprietary NVIDIA driver most of the time due to what I consider poor quality in the Nouveau driver. Other than that it is likely you wouldn't need any special drivers.
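If you do go NVIDIA on Fedora (matching the Fedora 27 system above), the usual route to the proprietary driver is the RPM Fusion repository; this assumes you've already enabled RPM Fusion's nonfree repo:
sudo dnf install akmod-nvidia    # builds and installs the proprietary NVIDIA kernel module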