News: Desperate PC DIYer appeals for help after spending $20,000 on a build that doesn't work despite returning multiple parts – misfortune began with a...

A fatal mistake I made 20+ years ago, and never again since: never rely on a single WS/workhorse, never ever. Always have a full working backup, no matter what: a WS-grade notebook, a secondary ITX setup, or a prebuilt mini PC like a Mac mini Pro or a high-end NUC-like PC. When you can afford $20K for a WS, you can afford $3-7K for a WS-grade notebook or alternative.

For a person who can afford a $20K WS, being out of commission for 5 days is very expensive. For me, even a single day out of commission is currently expensive for both my reputation and my income.
 
A fatal mistake I made 20+ years ago, and never again since: never rely on a single WS/workhorse, never ever. Always have a full working backup, no matter what: a WS-grade notebook or a secondary ITX setup. When you can afford $20K for a WS, you can afford $3-7K for a WS-grade notebook.

For a person who can afford a $20K WS, being out of commission for 5 days is very expensive. For me, even a single day out of commission is currently expensive for both my reputation and my income.
If you made a fatal mistake over 20 years ago, how are you alive today to tell about it? Apparently it wasn't fatal.
Also, this was not an upgrade to an existing computer. It was a brand new computer being built as a standalone, so he still has his old system.
 
You need to remove everything (SSDs, video card, drives) and keep only minimal RAM, then try the flash, if it's needed, with the BIOS flasher. Use a thumb drive until the flash works, then try installing the NVMe drive. Some Asus boards have multiple NVMe slots, and you need to make sure you put it in the right one, usually the one closest to the CPU. Good luck.
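If the board supports a button-driven recovery flash (Asus calls it BIOS FlashBack), the only part you can script is preparing the USB stick. A minimal sketch, assuming a Linux machine; the device path and the renamed filename below are placeholders, and the exact target name is board-specific (check the manual or the renamer utility bundled with the BIOS download):

```bash
# Prepare a USB stick for a BIOS FlashBack-style recovery flash.
# /dev/sdX and the .CAP filenames below are placeholders, not real values.
lsblk                                        # identify the USB stick first
sudo mkfs.vfat -F 32 -n BIOSFLASH /dev/sdX1  # FlashBack expects a FAT32 volume
sudo mount /dev/sdX1 /mnt
sudo cp DOWNLOADED-BIOS.CAP /mnt/BOARDNAME.CAP   # rename per the board manual
sync && sudo umount /mnt
```

With the system powered off but the PSU switched on, the stick goes into the marked BIOS USB port and the FlashBack button is held until its LED starts blinking, per the manual.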

Oh my, I compared my 14th-gen i5 65 W CPU with an ASUS Z790M-Plus D4 motherboard and 128 GB of RAM to your machine, and the difference was just sad for me.
 
Same problem on multiple MBs. The power supply is inadequate.
Since several have mentioned the PSU, I thought I'd check the source link and see what he actually used.

The first was a Silverstone HELA 2050R Platinum, which E. Fylladitakis reviewed for Tom's, saying it "demonstrates impressive electrical performance, especially considering its extreme power output" and calling it "an impressive powerhouse of a PSU".

The replacement was an MSI MEG Ai1600T Titanium, which was also reviewed here and assessed in similarly positive terms.

It seems unlikely to me that both of these PSUs were bad. They also don't seem anywhere near incapable of at least booting the machine.
 
Pick 2, 4, or 8 sticks of RAM (whatever the minimum is) and verify that they work in another system. Then put them into bank 2 (or whatever the first bank is) of this system and see if you can get it to boot. Also make sure all your fan headers are still properly connected, especially CPU fan 0 and case fan 0, or the BIOS might not boot (it happened to me).

I find it helpful to make a copy of the motherboard diagram and outline exactly what connections must be made, with colors, in MS Paint, so I can visually check that my entire build matches the intent.

Especially the headers part. One time I spent an entire weekend tearing my PC apart trying to figure out why it had no signs of life, swapping parts here and there, only to find out the issue was that I hadn't plugged the 24-pin cable in properly. I didn't push it far enough into the header thingy...

Canceled my order for a new PSU immediately.
 
In all the stupid builds I've done, it's always the RAM... and sometimes gaming GPUs don't work out of the box on these server/workstation builds.

Sometimes you need an old 8400 GS, or the VGA port on the server board, just to change where your graphics output will show up.

A gaming computer is not a workstation machine; some people go nuts just waiting out the check time.
 
The article says a PSU replacement was tried.

AFAIK, most modern PSUs are now single-rail designs.

One of the very first things I'd have tried was removing RAM. Not necessarily because any of it is defective, but as others have noted, there could be something about which slots it's installed in. For basic setup and installation, you only need one stick. You can try swapping DIMMs and slots, in order to be sure that it's not the RAM. Easy thing to eliminate and a common source of errors.

BTW, on such a build, I hope they used ECC memory. Also, I'd tend to stick with the motherboard's QVL, if I were spending so much on memory. You don't want to spend a couple $G on memory, only to discover the motherboard doesn't like what you bought and now you're stuck trying to sell it and buy some other kind.
You're more up-to-date than I am on PSUs. Maybe because I don't have the money to splurge on new tech.
Anyway, I think you're right about starting small and eliminating points of failure. It's funny, though, that this happens with a vanilla board as well... I keep hearing that with the new tech, more and more CPUs get fried by just a BIOS update. Maybe that's another thing to try: a spare CPU with a vanilla mobo.
But! Why are we still talking about this in mid-2025? Hasn't that been fixed already? All else failing, he should have taken it to a specialist shop. They tend to do less guesswork than we do here...
 
That's what happens when someone not versed in DIY tries to DIY, starting at the high-cost end...

Even the initial test was not done properly, it seems: remove everything but one stick of RAM, leaving only the PSU, MB, and CPU/RAM, plus some $10-$50 used but proven 'text editor' GPU from a local parts store if necessary. Reflash the BIOS using the *recovery routine*. Try to boot. Reflash an older BIOS the same way. Try to boot. Swap the stick of RAM just to be sure; rinse, repeat. If nothing helps at this stage, it isolates the problem down to PSU/CPU/mobo, and then it's a parts swap - and not to the same model. Once the culprit is found, you f***ing return it (recording a video of the diagnosis process helps there).

Given the PCI Init stage fault and the PCIe SSD 'compatibility fix', I'd bet it either just needs an older BIOS flashed or the problematic SSD(s) removed.
 
He should be satisfied with all the money he saved by doing it himself.

Expensive lesson learned.

He saved ~$100; if his time is worth $10/hr and he fixes it himself, he breaks even.

He could © his own meme, but I once had a boss who took every opportunity to save a buck, whatever the cost, even if it meant spending the very buck he claimed to be saving.
 
A user over on the Level1Techs forum last year ran into the same exact issue, also with a (single) 4090. They were able to eventually resolve it by swapping the GPU out for an Arc A750 and populating only a single stick of RAM. This allowed it to POST (which on this platform can take ten minutes); after it successfully POSTed, they were able to swap the 4090 back in and boot successfully. They then repopulated the full eight DIMMs and let it train and boot a final time.

https://forum.level1techs.com/t/asus-wrx90-motherboard-came-in-and-is-doa/206042/77

It's worth noting that there are a lot of other reported issues with that specific Asus board in the thread, including multiple samples from Wendell's own testing.

As one last bit to toss out there, if there's a chance that the OP has somehow acquired older stock 4090s, they may need a firmware update. Initial cards could experience black screen/no POST scenarios on some UEFI versions:

https://nvidia.custhelp.com/app/answers/detail/a_id/5411/~/nvidia-gpu-uefi-firmware-update-tool

They'd need to update the 4090s from a secondary system.

Power supply is another possibility, but the 92 POST code suggests it's a device detection issue, so a GPU swap should be the first thing to try.
 
I had a similar problem wrestling for a month with an old system. Turns out my SSD needed a firmware upgrade. First time I ever upgraded the firmware for a storage device.
 
If you made a fatal mistake over 20 years ago, how are you alive today to tell about it? Apparently it wasn't fatal.
Also, this was not an upgrade to an existing computer. It was a brand new computer being built as a standalone, so he still has his old system.
You sound like one of those people who knows nothing about anything but thinks they know everything about everything, with an insatiable need to piss in every corner of the room they find themselves in. It's annoying, unwanted, and redundant.

"Fatal", derived from the word "fate", or Latin fatum [from fātus, 'to speak'], means a predestined path, a set road; "fatal" means a premature end to that "stroll", though it can also mean forking away from the original path.

By your sound logic, every operating system that hits a "fatal error" should completely cease to exist, yet it's not that way, is it? In 99.99% of cases the OS recovers, just like a person recovers from a fatal mistake. Lethal and life-ending is only one of several meanings.

Being a smartass gets old fast and ultimately renders a person's presence unwelcome.

https://www.merriam-webster.com/dictionary/fatal

https://en.wiktionary.org/wiki/fatum
 
I had a similar problem wrestling for a month with an old system. Turns out my SSD needed a firmware upgrade. First time I ever upgraded the firmware for a storage device.
FWIW, I make a point of upgrading to the latest firmware before I put an SSD in service. However, once it's got my data on it, the only time I'd touch its firmware is to address a specific and severe issue I'm concerned about. I'd never upgrade firmware on an SSD that's in service just because newer firmware exists. Same goes for hard disks, actually.

BTW, I've recently started noticing some of my SSDs are coming from the factory, formatted to use 512 byte sectors. Since my filesystems always have a 4k block size, I've also started reformatting them to use 4k sectors. I do wonder whether this affects the state of the drives' wear-leveling FTL.
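For both points above, checking the shipped firmware revision before deployment and seeing what sector format a drive arrived with, here's a minimal sketch using nvme-cli and smartmontools on Linux; the device path is an assumption, and the update itself is usually safest through the vendor's own tool:

```bash
# Check what firmware an NVMe drive shipped with before putting it into service.
# /dev/nvme0 is a placeholder device path.
sudo nvme list                 # model, serial, firmware revision and current format per drive
sudo nvme fw-log /dev/nvme0    # firmware slots and the currently active revision
sudo smartctl -i /dev/nvme0    # same identity info via smartmontools
# Compare the reported revision against the vendor's release notes; if an update
# is needed, the vendor tool (or nvme fw-download / fw-commit with a raw image)
# does the actual flashing.
```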
 
BTW, I've recently started noticing some of my SSDs are coming from the factory, formatted to use 512 byte sectors. Since my filesystems always have a 4k block size, I've also started reformatting them to use 4k sectors. I do wonder whether this affects the state of the drives' wear-leveling FTL.
I would never use the filesystem a storage device comes preformatted with – always repartition with proper alignment, then format with proper block size.

As for NVMe low-level "advanced format", things seem a bit muddy. Some drives only support 512-byte emulation mode even though the real block size is larger. Some drives seem to show a performance increase if you use the native block size, some might have better error correction with the native block size, etc.

I switch my NVMe drives to native block size if they support it, but it seems controller-specific whether you gain anything from doing so.
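To see whether a given drive actually supports switching, a minimal sketch with nvme-cli (device path assumed, output format approximate):

```bash
# List the LBA formats an NVMe namespace supports and which one is in use.
sudo nvme id-ns /dev/nvme0n1 --human-readable | grep -i 'lba format'
# Output looks roughly like:
#   LBA Format  0 : Metadata Size: 0 bytes - Data Size: 512 bytes  - Relative Performance: 0x2 Good (in use)
#   LBA Format  1 : Metadata Size: 0 bytes - Data Size: 4096 bytes - Relative Performance: 0x1 Better
# A drive that only lists a 512-byte entry is 512e-only; one that also lists a
# 4096-byte entry can be switched with nvme format.
```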
 
I would never use the filesystem a storage device comes preformatted with – always repartition with proper alignment, then format with proper block size.
No, I'm not talking about filesystems. I'm talking about how the device presents itself to the OS. Back in SCSI parlance, this would be considered the low-level format. The Linux command to modify it is nvme format.
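A minimal sketch of that step, assuming a drive whose 4096-byte entry is LBA format index 1 (check with id-ns first); this is destructive and wipes the namespace:

```bash
# Switch the namespace to the 4096-byte LBA format (index is drive-specific).
# WARNING: a low-level format destroys all data on the namespace.
sudo nvme format /dev/nvme0n1 --lbaf=1 --ses=0
# --ses=0 means no secure erase; only the reported sector size changes.
# Repartition and recreate filesystems afterwards.
```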

As for NVMe low-level "advanced format", things seem a bit muddy. Some drives only support 512-byte emulation mode even though the real block size is larger. Some drives seem to show a performance increase if you use the native block size, some might have better error correction with the native block size, etc.
If it's emulating 512 byte sectors, then there should be no benefit in terms of error-correction. By definition, emulating means that it's still some other size underneath and that will have whatever properties it has.

As for performance, doing reads or writes in increments of the native size should perform the same, on drives "emulating" smaller sectors. The performance difference would be on drives capable of truly switching their native sector size. The reason is quite simple: if you switch a drive from 4k sectors to 512 byte sectors, now every operation you do that's a multiple of 4k has to do potentially up to 8 times as much work, since each sector has its own error correction information and needs to be managed via the drive's FTL.
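If you want to check whether it actually matters on a particular controller, a rough sketch with fio is to run the same job before and after reformatting to the other sector size; paths and parameters are illustrative, and it should only be pointed at a scratch drive:

```bash
# 4k random-read job; rerun after switching the LBA format and compare IOPS/latency.
sudo fio --name=randread4k --filename=/dev/nvme0n1 --readonly \
         --rw=randread --bs=4k --direct=1 --ioengine=libaio \
         --iodepth=32 --runtime=30 --time_based
```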

I switch my NVMe drives to native block size if they support it, but it seems controller-specific whether you gain anything from doing so.
I switch them to match the filesystem block size (usually 4k, on Linux). The only examples I've seen are drives supporting either 512 byte or 4k sectors.
 
No, I'm not talking about filesystems. I'm talking about how the device presents itself to the OS. Back in SCSI parlance, this would be considered the low-level format. The Linux command to modify it is nvme format.
Thought that might be what you meant :) – it's still important to have aligned partitions; thankfully, most partitioning tools were updated ages ago to do proper alignment (it **hurt** performance when large-sector hard drives arrived and people had misaligned partitions 😂).
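Quick way to double-check alignment on an existing drive (device and partition number assumed):

```bash
# Verify that partition 1 is aligned to the device's optimal I/O boundaries.
sudo parted /dev/nvme0n1 align-check optimal 1   # prints "1 aligned" if OK
# Or eyeball the start sectors; starts at multiples of 2048 (1 MiB) are safe
# for both 512e and 4Kn drives.
sudo fdisk -l /dev/nvme0n1
```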

If it's emulating 512 byte sectors, then there should be no benefit in terms of error-correction. By definition, emulating means that it's still some other size underneath and that will have whatever properties it has.
IIRC, some controllers/firmware have error correction per logical sector rather than at the erase-block level – I think there was a Seagate paper about this, but it's been quite a while.

As for performance, doing reads or writes in increments of the native size should perform the same, on drives "emulating" smaller sectors. The performance difference would be on drives capable of truly switching their native sector size. The reason is quite simple: if you switch a drive from 4k sectors to 512 byte sectors, now every operation you do that's a multiple of 4k has to do potentially up to 8 times as much work, since each sector has its own error correction information and needs to be managed via the drive's FTL.
If your filesystem block size is 4k, obviously the OS won't be doing smaller read/write requests. I don't know how much overhead there is in the emulated 512-byte sectors, and it's going to be controller-specific. One of my NVMe drives (IIRC a Western Digital) only supports the 512-byte advanced format, even though that's obviously not the native size, so they must have decided the performance hit is acceptable.
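One way to see what the kernel thinks is going on (device name assumed); many 512e drives show a 512-byte logical size with a larger physical/minimum-IO size, though some report 512 for both:

```bash
# Sector sizes as reported by the kernel for an NVMe drive.
cat /sys/block/nvme0n1/queue/logical_block_size
cat /sys/block/nvme0n1/queue/physical_block_size
lsblk -o NAME,LOG-SEC,PHY-SEC,MIN-IO /dev/nvme0n1
```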