Best Configuration for Large, Complex SQL Database Queries

Solution
1. For 300 users, this really, really needs to be on a server OS, for performance as well as security.
2. For Access -> SQL Server -> Postgres... you might want to take a deep look at the whole table structure and how it is optimized. I've seen many large DBs that started out as Access, then migrated to something larger, and still retain a lot of the original Access table structure and queries, and the limitations of Access.
This thing may need to be completely rewritten.

And if YOU are too busy with patient care (keep doing that!), I strongly urge you to hire a local DBA or consultant.
Just like a DBA or network person should not be setting a broken bone, a medico maybe shouldn't be designing and building a 300-user database.

There comes a time when it is too large for a non-pro to manage.


To begin with, more RAM and a good CPU.
Next, the proper OS.
Next, fast disk access. Possibly RAID of some sort.
Most importantly, good database construction. The best hardware in the world will choke if your SQL statements are crap.
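On that last point, a quick way to see whether a particular statement, rather than the hardware, is the bottleneck is to run it under EXPLAIN on a reasonably recent Postgres. This is only a sketch; the table and column names below are hypothetical, purely to show the mechanics of reading a plan:

    -- Hypothetical tables, purely to illustrate reading a query plan:
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT p.patient_id, count(*) AS visit_count
    FROM   visits v
    JOIN   patients p ON p.patient_id = v.patient_id
    WHERE  v.visit_date >= date '2016-01-01'
    GROUP  BY p.patient_id;

    -- Sequential scans on big tables and badly wrong row estimates are the usual
    -- red flags; a missing index often shows up here first, e.g.:
    -- CREATE INDEX CONCURRENTLY idx_visits_date ON visits (visit_date);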


Is this homework, or are you actually building something?
 

The latter. In this case, a custom electronic health record system that started small, moved from Access to SQL Server to PostgreSQL, and is starting to choke a bit. The tables are generally quite narrow, but they number about 400, with complex relational hierarchies to account for known mechanisms of disease. It is an ever-evolving thing and it really works, but I am sailing off the edge of the Earth and too busy with patient care to really dig in. Still, it's one of the most rewarding things I have ever done, and my patients and colleagues seem to agree. So that then is the background to the question.

I considered the simplicity of a corporate database server but I have never done that before and it somehow just seems to violate the spirit of the enterprise. Nothing is urgent yet except on very busy days when we have 200-300 users. THEN it gets very slow and people get upset.

Your answers are useful to me and confirm what I had vaguely suspected. I am running dual Xeon 2670s on a new Asus motherboard with 128 GB of ECC DDR4 and 1 Gbit local connections, all floating on Windows 10 Pro (not a server OS).

I may try to implement RAID 10 on the motherboard using an Asus proprietary card they call 'PIKE II'. Again, thanks for the intelligent and relevant response. Much appreciated!
 
Solution
Excellent discussion above. Just for grins, what is your current I/O configuration?

Consider also:
1. Encryption, if you are not already doing it. If the server is stolen in a robbery, you have some liability that is mitigated by encryption.
2. Backup strategy, both for hardware failure and to handle application errors and the need to restore data to an earlier point in time. Given internet speeds, offsite works wonders; just encrypt the data locally before sending.
3. If the bulk of the application is 'analytics'-type queries, consider one of the highly encoded in-memory databases.
4. If the data is sensitive (like people's medical records), also consider increasing your network security/VPN/firewall between the client PCs generating the load and the DB server. You can configure your server so it discards network traffic except from the known client PCs (a sketch of the PostgreSQL side of this follows below).
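On point 4, here is a minimal sketch of how that lockdown is often done in PostgreSQL itself, on top of whatever the OS firewall does. The addresses are purely illustrative assumptions (client PCs on 10.0.0.0/24, server NIC at 10.0.0.10); the pg_hba.conf lines are shown as comments because they live in that file rather than in SQL:

    -- Bind PostgreSQL to the clinic-facing interface only (needs a restart):
    ALTER SYSTEM SET listen_addresses = '10.0.0.10';

    -- In pg_hba.conf, allow only the known subnet and reject everything else, e.g.:
    --   host  all  all  10.0.0.0/24   scram-sha-256    (md5 on older versions)
    --   host  all  all  0.0.0.0/0     reject
    -- pg_hba.conf changes only need a reload:
    SELECT pg_reload_conf();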
 
Oh, dumb question. Are you comfortable with the tuning you've done to make Postgres use your hardware? For example: the default for shared_buffers is 128 MB, but with your 128 GB config that should be set to maybe 40 to 60 GB. If not, then having someone poke at that will work wonders.
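If it helps, a minimal sketch of how that tuning is usually applied. The figures are illustrative starting points for a 128 GB box, not a prescription for this workload, and ALTER SYSTEM assumes PostgreSQL 9.4 or newer:

    -- Check what is actually in effect right now:
    SHOW shared_buffers;

    -- Illustrative values; shared_buffers requires a server restart to take effect:
    ALTER SYSTEM SET shared_buffers = '48GB';
    ALTER SYSTEM SET effective_cache_size = '96GB';  -- a planner hint, not an allocation
    ALTER SYSTEM SET work_mem = '64MB';               -- per sort/hash operation, so keep it modest with 300 users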
 
Of course you are quite right. I am the type of person who would tinker something to death, and in fact that may be precisely what I am doing now. The fact is that I have a unique little bit of clinical management software that might actually be salable, but I'll never get from here to there by doing the same things I did to get from ten years ago to here. I live in Albuquerque, New Mexico, which severely limits my access to DBAs. This town is on a respirator and the power hasn't been paid in a LONG time (to use a nasty analogy).

I really need to think about a partner (I think). I really need to spend some money too. And of course, I am in the middle of yet another busy day with too many problems. I just lost my temper and yelled at my office manager over a database "fix" that requires him to design queries... Grrr... All my fault...

If it is OK, I'll stay in touch, keep posting here, or both. I am 54 and this is an opportunity that will never come again (if in fact it has come at all). I really just want to make a contribution to something a little askance of my training that still incorporates it, and this database is a giant set of interrelationships that all relate to obesity, metabolic disease, diabetes, hypertension, lifestyle, and about a hundred dietary factors that can be used to aim appropriately motivated patients and their providers towards scientifically validated solutions. It is WAY more than any other package out there. Of that I am sure.

But I digress. Your input has been fantastic and I really do appreciate it!
 

Almost there....
Thanks!

But I still cannot reliably code HTML on a board... Hmm.
 


I would imagine that you already know the answer... "Not exactly"... But I really do follow advice and yours is good.
 
