Question [Help] Tried to build a PC, think I messed everything up.

May 23, 2019
Hi, it's gonna be a long one...

I bought all the components for my new PC and tried to install everything myself (first time), following some tutorials, and I kinda messed up. The PC boots up fine and I can see the BIOS; I haven't gotten past the BIOS since I haven't installed an OS yet.
Components are:
ASRock B450M-HDV R4.0
be quiet! System Power 9 500W
Ryzen 2600
Patriot Viper Steel 2x8GB 3000MHz RAM
Seagate Barracuda 1TB SATA 3 HDD
Zalman T5 mATX case
Gigabyte Windforce RX 460 4GB GPU (I already had this one)

I think I did mainly three things wrong:

1- I installed the CPU on the motherboard with no issues (I got some thermal paste on my hand, but not much; most of it was still on the CPU cooler, so I just installed it). I also had some trouble installing the RAM: I couldn't figure out which way the sticks should face, and they didn't seem to fit since only one side of the slot had latches. I forced them a lot and it ended up fine, I think. Then, when it came to installing the motherboard in the case, I didn't know I needed standoffs, so I just installed it right away. Luckily the case had some pre-installed, but apparently not enough: I checked a review and it said the case only comes with enough standoffs pre-installed for a mini-ITX board, and mine is mATX. Apparently they give you the additional standoffs for mATX separately, and I didn't use them. As far as I can tell, the existing standoffs do keep the board raised, but I'm not sure since I haven't removed it from the case. But when I read online, everyone says this can short the motherboard (it sounds bad, even though I don't know what it means) and now I'm freaking out.

2- After I was done with the PSU installation, I installed the GPU in the PCIe slot. I think the GPU is seated fine in the slot, but its screw holes didn't line up with the case even though it was fully in the PCIe port. I tried everything, with no luck, so I made two holes in the case that line up with the GPU's screw holes and screwed it in that way. I DID IT WHILE ALL THE COMPONENTS WERE STILL IN THE CASE. From what I read, the metal dust that comes off when you open up holes is another thing that can short the motherboard.

3- I didn't know those two things were big problems yesterday, so I booted up the PC. Everything seemed fine: I enabled the XMP profile, and the CPU seems to be running a bit warm (47 degrees), I think, but not by much. Except for one thing: when I shut down the PC from the power button, it shuts down, you hear the fans stopping etc., and then about a second after shutdown I hear a sizzling, almost whining, kind of electric sound coming from the case for about a second, and then it stops. It has happened every time I've shut down. I thought it was the PSU, but I listened with the case open and it's coming from the middle of the case: the case fan, CPU fan, or motherboard. Definitely not from the HDD.

I'm really freaked out. I feel like I spent 500 dollars on this mess. What should I do now? I'm going to remove the motherboard and reinstall it with the other standoffs when I get back from work. What should I do about the GPU? Should I keep it the way it is, or try installing it again through the proper holes in the case? The GPU shifted out of position a bit when I screwed it into the holes I made, so I'm kind of nervous about that too, but it seems to be working, fans spinning etc. How can I tell if there is a short in the motherboard? Could a short in the motherboard hurt other components too?

Any help will be appreciated, thanks.

I'll attach some photos in an hour.

Edit:
These are pics of the case, the screw holes I opened for the GPU, and the BIOS.

https://ibb.co/kgd2Kjt

https://ibb.co/tsNsrQf

https://ibb.co/hVDJtLB

https://ibb.co/5YXzQ3y

https://ibb.co/XVyj4NQ
 
May 23, 2019
Some software reads the sensors wrong; it's unlikely to be that hot unless you're doing something extreme, so definitely try other software to read it. Maybe also try a BIOS update to the newest version if your motherboard's BIOS is an older one 🤔
Hey, in my neighborhood power outages are common and I don't wanna brick my mobo...
And I have another problem... When I plug the power cable into the power supply, even with the switch off, it makes an electric sound just before it seats, like the sound when you get a static shock from your friend's hand (I hope you understand what I mean...). There is no smell and the cable is not hot. When I unplug the cable's other end from the power outlet and do the same thing, this doesn't happen. Is this an issue? It wasn't happening with the old 30 dollar PSU in my old PC, and it happens with this much more expensive one. Other than that, the PC seems to be working fine so far (except for the sensors that say my mobo is on fire).
 

RayOfDark

Short answer: It's normal.

Long answer: It's the mains interference suppression capacitors, connected directly to the PSU socket on the inside, causing a sudden current flow as the mains supply hits them. They see the sudden appearance of the mains as a spike and do their best to suppress it. That's primarily what they're there for: to suppress electrical spikes on the mains. It sounds like a "crack!" and will vary in loudness depending upon where in the mains cycle (it's alternating current - AC) the connection is made.
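To put rough numbers on that, here's a quick sketch of why the loudness varies with where in the cycle you connect (the 0.47 µF X2 capacitor value and 230 V mains are illustrative assumptions, not your PSU's actual parts):

```python
import math

C = 0.47e-6                     # assumed X2 suppression capacitor (farads) - illustrative
V_RMS = 230.0                   # assumed mains voltage; use 120.0 for North America
V_PEAK = V_RMS * math.sqrt(2)   # ~325 V peak for 230 V RMS

# The instantaneous mains voltage at the moment of contact sets the size of the
# voltage step the capacitor sees, and so how loud the "crack" is.
for angle_deg in (0, 30, 60, 90):
    v_step = V_PEAK * math.sin(math.radians(angle_deg))
    charge = C * v_step         # Q = C * V, absorbed almost instantly
    print(f"contact at {angle_deg:2d} deg: {v_step:6.1f} V step, {charge * 1e6:6.2f} uC inrush")

# Plug in right at a zero crossing (0 deg): no step, no crack.
# Plug in at the peak (90 deg): the biggest step and the loudest crack.
```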

Very short answer: (y)
 
May 23, 2019
Thank you! It just freaked me out that the sound happens while the switch on the PSU is off. I wouldn't care if it happened when the switch was on.
 

RayOfDark

You're welcome.

The suppression capacitors are always connected directly to the IEC socket because this saves the PSU switch from having to handle the sudden, potentially huge current surges from the capacitor suppression circuit when the switch is operated.

Think of the huge Frankenstein laboratory throw switches to get an idea of the size of switch you'd need to do it reliably if the capacitors were the other side of the PSU switch. :sweatsmile:
 
May 23, 2019
Thanks again. I'm trying to make sure the PC runs OK, so I tried to benchmark the HDD with a defrag program, and it says the random read speed is 3.2 MB/s. That seems way too slow; even my five-year-old drive didn't do that badly in this benchmark. The HDD is a Seagate Barracuda 1TB 6Gb/s (even though it reads at 3 MB/s...).
Edit: I just tested with another program and these are the results. I don't know what any of them mean, but is this normal for a SATA 3 7200RPM HDD?
https://ibb.co/4SDJvhL
 

retroforlife

Because of how HDDs work, they get slower as they fill up, while SSDs don't. So if my HDD were emptier it would score something like:

190 / 180
2.5 / 1.0 etc.

HDDs have a harder time with random reads and writes since the arm has to move, while SSDs are solid state, aka no moving parts, so they have faster response times and much quicker speeds 🙂
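To give a rough idea of the arithmetic behind that (a minimal sketch; the seek time and block size are typical assumed values, not measurements from your drive):

```python
# Back-of-the-envelope model of random reads on a 7200 RPM HDD.
RPM = 7200
rotational_latency = (60 / RPM) / 2   # wait half a revolution on average: ~4.2 ms
seek_time = 0.0085                    # assumed ~8.5 ms average seek, typical desktop HDD
block_size = 4 * 1024                 # 4 KiB per request, a common benchmark size

time_per_io = seek_time + rotational_latency   # ~12.7 ms per random read
iops = 1 / time_per_io                         # ~79 reads per second
throughput_mb_s = iops * block_size / 1e6

print(f"~{iops:.0f} IOPS  ->  ~{throughput_mb_s:.2f} MB/s at 4 KiB random reads")
# Bigger or queued requests raise this into the low MB/s range, so a random-read
# result of a few MB/s is completely normal for a spinning drive, even one that
# manages triple-digit MB/s on sequential transfers.
```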
 

Karadjgne

SSDs get slower as they fill, and should never be loaded to within 10% of max for 250GB and under, or 5% for larger drives.

HDDs don't get slower as they fill, only once they get full and can no longer deal with the pagefile or Windows temp files etc. What does affect HDDs is usage: the more data that's added, deleted, moved, or changed, the more the bytes get separated into different addresses, the more the armature has to move, and the longer things take to pull together. Periodic defrags after cleaning Windows/temp files solve all that.

Never Defrag an SSD. Ever.
 

retroforlife

SSDs get slower as they fill, and should never be loaded to within 10% of max for 250GB and under, or 5% for larger drives.

My Samsung SSD does over-provisioning; it never gets completely full, since it has a portion with no partition 🤔

Plus, going from completely empty to having 70GB of space left (not counting the over-provisioning), the speed is exactly the same according to the records my Samsung app saves. Dunno, just something I noticed 😶
 

RayOfDark

SSDs get slower as they fill, and should never be loaded to within 10% of max for 250GB and under, or 5% for larger drives.
I've never had an SSD get noticeably slower until it gets to the point of having too little space to work with. They tend to maintain more or less the same performance until that point, then start to fall off a cliff fairly rapidly. Sensible over-provisioning cures this almost completely, and nearly all SSDs already have over-provisioning whether they list it in their specs or not, so you don't necessarily have to leave any free user space, although it will likely still help.

Enterprise SSDs tend to have more internal over-provisioning by default. That's why some of them have odd capacities like 400GB for example. Over-provision the same brand of 480GB non-enterprise SSD by a further 80GB to make it 400GB and performance can sometimes be broadly similar to the enterprise version.

On the other hand, an HDD does get progressively slower as it fills due to less storage per revolution on the inner tracks. It also can slow to a crawl if it becomes badly fragmented as the heads have to thrash around to read/write files and use the available free space. Fragmentation has very close to zero impact on an SSD.

Of course, you can over-provision an HDD too (known as short-stroking), but you need to throw away so much usable capacity to get to the point of maintaining near-constant performance that it's then not much cheaper per GB than buying a sensibly priced SSD, and the SSD will still run rings around it.

If you need the performance, SSD is king.
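As a sketch of that inner-track effect (a simplified constant-areal-density model; the radii and outer-track speed are assumed, merely plausible numbers, not measurements of any particular drive):

```python
import math

# Simplified model: constant areal density, so sequential MB/s scales with the
# radius of the track in use. LBAs run from the outer edge inward, so how full
# the drive is maps onto which radius it is currently working at.
R_OUT, R_IN = 4.6, 2.0       # assumed usable platter radii in cm - illustrative
OUTER_SPEED = 190.0          # assumed sequential MB/s on the outermost track

def radius_at_fill(f):
    """Radius of the track in use when the drive is fraction f full (by area)."""
    return math.sqrt(R_OUT**2 - f * (R_OUT**2 - R_IN**2))

for fill in (0.0, 0.25, 0.5, 0.75, 1.0):
    speed = OUTER_SPEED * radius_at_fill(fill) / R_OUT
    print(f"{fill:4.0%} full: ~{speed:5.1f} MB/s sequential")

# Short-stroking just refuses to use the slow inner region at all, which is why
# it throws away so much capacity for the performance it preserves.
```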
 

retroforlife


I think I used to do short stroking by having my OS and games on the first partition (about 250GB) with the rest for storage, on my super old PC years and years back, trying to keep the main first partition as free of junk as I could 🤣
 

RayOfDark

Me too. It also meant that the small OS partition was quicker to clone to a small backup drive.

I still do the same now with SSDs. The OS lives in a 56GB partition that automatically clones to a second 128GB SSD once a week with Macrium Reflect. The rest of the 128GB SSD is used with Intel SRT to cache a RAID 1 data disk array where the Documents, Downloads, Pictures, Music and Videos folders are relocated to.

If/when the main OS SSD fails, a quick change of boot order will get me straight back in.
 

Karadjgne

Lesson to be learned here: don't buy super small SSDs, they're kinda pointless unless you like constantly cleaning junk files to keep space free.
Ppl aren't really buying super small SSDs much at all. What ppl like myself are doing is still running the second biggest SSD available 7 years ago (128GB), without the budget to buy something bigger today.

In practice, an SSD’s performance begins to decline after it reaches about 50% full. This is why some manufacturers reduce the amount of capacity available to the user and set it aside as additional over-provisioning. For example, a manufacturer might reserve 28 out of 128GB and market the resulting configuration as a 100GB SSD with 28% over-provisioning. In actuality, this 28% is in addition to the built-in 7.37%, so it’s good to be aware of how vendors toss these terms around. Users should also consider that an SSD in service is rarely completely full. SSDs take advantage of this unused capacity, dynamically using it as additional over-provisioning.
If you want to tell Seagate it's FOS, that SSDs don't slow down, be my guest.
If your device includes a solid-state drive (SSD), you probably noticed that as it fills up the performance slows down dramatically. You’ll see this when opening apps and copying files will take longer than usual, and sometimes this will also cause freezing issues making your device harder to use.
Or these guys...
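For what it's worth, the arithmetic behind the figures in that Seagate quote is easy to reconstruct (a quick sketch using the 128GB example from the quote):

```python
# The "built-in 7.37%" comes from the GiB-vs-GB gap: NAND is manufactured in
# binary sizes, but drive capacity is advertised in decimal units.
GiB, GB = 2**30, 10**9

physical = 128 * GiB             # 128 GiB of raw NAND on the drive
advertised = 128 * GB            # sold as "128 GB"
built_in = (physical - advertised) / advertised
print(f"built-in over-provisioning: {built_in:.2%}")   # 7.37%

# Reserving a further 28 GB and marketing the drive as 100 GB:
user = 100 * GB
marketed_op = (advertised - user) / user
true_total_op = (physical - user) / user
print(f"marketed OP: {marketed_op:.0%}, true total OP: {true_total_op:.1%}")   # 28%, 37.4%
```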
 

RayOfDark

Why would I want to tell them they're full of shit?

They're right in terms of large files being written to an almost-full DRAM-less, SLC-bufferless SSD, which is what they're describing, but that doesn't apply to most modern SSDs with DRAM and SLC buffering unless the file is absolutely enormous.
 

Karadjgne

You missed the point entirely. The vast majority of SSD users do not have the recently released, top-line SSDs that contain DRAM and SLC buffering. Those didn't come out until the release of the Samsung 860 QVO, Crucial BX500 etc. at the end of last year.

So anyone using even a slightly older model is going to start losing performance somewhere around 50% full.
 

RayOfDark

Here's my old 2010 Intel 40GB X25-V SSD still outperforming the rated read and write specs by some margin at 91% full...

https://drive.google.com/open?id=12ahu9dQBAyZSUC8YfCK6jEvUOibEiQHh

It had a 32MB DRAM buffer even back then, although that alone can't account for zero loss of performance when transferring 1GB files (over 30 times larger than the buffer) to/from ancient 34nm Intel MLC NAND Flash.


Here's a pair of budget DRAM-less SLC-cached Silicon Power 1TB SATA SSDs in RAID 0 holding on to 77% of their read and 92% of their raw write performance at 98% full. I'm not prepared to break the array just to prove a point, but the results would be the same in percentage terms per single drive.

https://drive.google.com/open?id=1dBiM-GcmwAKAaOPUp8ECkPpFwYMs-6CW


I have a folder full of similar screenshots from budget SSDs of various ages and mostly low capacities dating back to 2010 all showing basically the same thing. The only time I ever experienced a slowdown severe enough to notice in the real world was when running SSDs for extended periods on OSes that didn't support TRIM.

I'm not sure who's made entirely cacheless SSDs in recent years, but anyone looking for performance and longevity would have avoided them like the plague if there was a sensibly priced alternative. Even the likes of uber-budget KingDian and Tcsunbow have them, not that I'd recommend either of those brands to anyone.
 
May 23, 2019
Hey, I know this is gonna be a little off topic, but I still can't find a reliable way to see my mobo and CPU temperatures. When I use HWMonitor, the TMPIN6 and TMPIN8 temperatures are way too high to be true (115 and 107), and they're constant whether the PC is idle or gaming etc.; they only change 1-2 degrees sometimes. There are a lot of people who experience the same thing with the TMPIN6 and TMPIN8 temps.

I tried Speccy, and it shows my CPU at 110, 100, 90... It can't be true, since there are no bottlenecks or performance issues on the PC.

When I check the CPU temp in AIDA64 it says about 40 degrees. I seriously don't know what to believe... In the BIOS, the temp is between 40 and 50.
 

retroforlife

I use Core Temp for the CPU; it lets you put an icon on the taskbar and you can set it to show the hottest core. You shouldn't have any problems with temps anyhow, but like the other posts said, try to get a front case fan to help the airflow :)
 
May 23, 2019
So, you mean I should just ignore the TMPIN6 and TMPIN8 temps (105-117) in HWMonitor, and the Speccy temps?
 

Karadjgne

HWMonitor was written quite some time ago, when mobos were a bit different. Everything has an address, and that software looks at certain addresses for certain information. Those addresses are now vastly different and in many cases no longer map to viable sensors, but to something else entirely. HWMonitor reads TMPIN4 on my MSI Z77 as -125°C and TMPIN6 as 255°C, both of which are physically impossible for a working mobo. It also reports my 12V rail as 8.12V.

The worst part is that nobody, not even the author, has any clue as to exactly what is what anymore, since things have changed so much. The southbridge chipset is gone, the northbridge no longer has anything to do with RAM, it's all PCIe, and you have additional SATA ports and PCIe x4 ports in different places than just the block below the main power bus, etc.

So you'll have to take it with a large grain of salt; it's software, and software can be unreliable.

The best method to get any sort of assumed reliability is to run 5 or 6 different programs and note the similarities: AIDA64, HWiNFO64, Speccy, RealTemp, Core Temp etc. (don't run them simultaneously). Look for specific values, not necessarily specific names. This double- or triple-checks one against the others, and if you get a wild temp from one, you'll know that program is bunk for the job.
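Here's that cross-checking idea as a sketch (the readings are made-up examples standing in for what the different tools might report, and the 15-degree tolerance is an arbitrary choice):

```python
from statistics import median

# Hypothetical readings of the same "CPU temp" from several tools (deg C).
readings = {
    "AIDA64": 41,
    "Core Temp": 43,
    "HWiNFO64": 42,
    "Speccy": 103,
    "HWMonitor TMPIN6": 115,
}

consensus = median(readings.values())
for tool, temp in readings.items():
    # A value wildly far from the consensus is almost certainly a misread sensor.
    verdict = "plausible" if abs(temp - consensus) <= 15 else "probably a misread sensor"
    print(f"{tool:17s} {temp:4d} C -> {verdict}")
```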
 
May 23, 2019
Thanks for the help. It does seem to be the software, because I just played Metro Exodus for 2 hours and the only difference in those values between 2 hours of gaming and idle is 2-3 degrees: it was 113 before gaming and 115 after. They stay in an interval, like TMPIN6 between 110-117 and TMPIN8 between 103-105. If it were really that hot, I think the PC would shut down.

The CPU temperature is only high in Speccy (about 100); AIDA64, Core Temp, and MSI Afterburner all give similar results (62 max while gaming, 40-45 idle).

So I think it's OK, according to what you say.