Question: [Help] Tried to build a PC, think I messed everything up.

Page 3 - Seeking answers? Join the Tom's Hardware community: where nearly two million members share solutions and discuss the latest tech.
May 23, 2019
Some software reads the sensors wrong; it's unlikely to be that hot unless you're doing something extreme, so definitely try other software to read it. Maybe also try a BIOS update to the newest version, if your motherboard's BIOS is an older one 🤔
Hey, in my neighborhood power outages are common and I don't want to brick my mobo...
And I have another problem. When I plug the power cable into the power supply, even when the switch is off, it makes an electric sound just as it plugs in, like the sound when you get a static shock from a friend's hand (I hope you understand what I mean). There's no smell, and the cable isn't hot. When I unplug the other end of the cable from the wall outlet and do the same thing, it doesn't happen. Is this an issue? It didn't happen with the $30 PSU in my old PC, but it happens with this much more expensive one. Other than that, the PC seems to be working fine so far (except for the sensors that say my mobo is on fire).
 
May 16, 2019
Short answer: It's normal.

Long answer: It's the mains interference-suppression capacitors, connected directly to the PSU socket on the inside, causing a sudden current flow as the mains supply hits them. They see the sudden appearance of the mains as a spike and do their best to suppress it, which is primarily what they're there for: suppressing electrical spikes on the mains. It sounds like a "crack!" and will vary in loudness depending on where in the mains cycle (it's alternating current - AC) the connection is made.

Very short answer: (y)
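The phase-dependence in the explanation above can be put in numbers. Here's a minimal Python sketch (assuming 230 V RMS mains; the figures are illustrative, not from this thread) of how the voltage suddenly applied to the suppression capacitors depends on where in the AC cycle contact happens:

```python
import math

# Illustrative values: for 230 V RMS mains, the peak is ~325 V.
V_RMS = 230.0
V_PEAK = V_RMS * math.sqrt(2)

def contact_voltage(phase_deg):
    """Instantaneous mains voltage if contact is made at this phase angle."""
    return V_PEAK * math.sin(math.radians(phase_deg))

for phase in (0, 30, 90):  # zero-crossing, partway up, worst case (peak)
    v = contact_voltage(phase)
    print(f"contact at {phase:3d} deg -> {v:6.1f} V across the suppression capacitors")
```

At a zero crossing there's almost nothing to suppress, so the plug connects nearly silently; at the ~325 V peak the capacitors see the full voltage step at once, hence the loud crack.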
 
May 23, 2019
Do you mean when the outlet switch is off, or PSU switch is off?
When I plug the cable into the PSU while the switch on the PSU is off, and the other end of the cable is plugged into the outlet, it makes the sound. When I unplug everything and plug the cable into the PSU first, no sound.
 
May 23, 2019
Thank you! It just freaked me out that the sound happened while the switch on the PSU was off. I wouldn't care if it happened when the switch was on.
 
May 16, 2019
You're welcome.

The suppression capacitors are always connected directly to the IEC socket, as this saves the PSU switch from having to handle the sudden, potentially huge current surges through the suppression circuit when the switch is operated.

Think of the huge Frankenstein-laboratory throw switches to get an idea of the size of switch you'd need to do it reliably if the capacitors were on the other side of the PSU switch. :sweatsmile:
 
May 23, 2019
Thanks again. I'm trying to make sure the PC runs OK, so I benchmarked the HDD with a defrag program, and it says the random read speed is 3.2 MB/s, which seems way too slow. Even my five-year-old drive wasn't doing that badly in this benchmark. The HDD is a Seagate Barracuda 1TB 6Gb/s (even though it reads at 3 MB/s...).
Edit: I just tested with another program and these are the results. I don't know what any of them means, but is this normal for a SATA 3 7200rpm HDD?
https://ibb.co/4SDJvhL
 
Last edited:

retroforlife

Because of how HDDs work, they get slower as they fill up, while SSDs don't, so if my HDD were emptier it would score something like:

190 / 180
2.5 / 1.0 etc.

HDDs have a harder time with random reads and writes, since the arm has to move, while an SSD is solid state, aka no moving parts, so it has faster response times and much quicker speeds 🙂
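The mechanical argument above can be turned into a rough estimate. A back-of-envelope Python sketch (the 8.5 ms average seek is an assumed typical figure for a 7200 rpm desktop drive, not a measured value from this thread):

```python
# Why random reads are slow on an HDD: every random 4 KiB read pays a
# full head seek plus, on average, half a platter revolution.
AVG_SEEK_MS = 8.5                    # assumed typical average seek time
ROT_LATENCY_MS = 60_000 / 7200 / 2   # half a revolution at 7200 rpm ≈ 4.17 ms

ms_per_io = AVG_SEEK_MS + ROT_LATENCY_MS
iops = 1000 / ms_per_io              # random 4 KiB operations per second
throughput_mb_s = iops * 4096 / 1e6

print(f"~{iops:.0f} IOPS -> ~{throughput_mb_s:.2f} MB/s at queue depth 1")
```

Real benchmarks often show a few MB/s rather than ~0.3 because NCQ lets the drive reorder queued requests, but the order of magnitude explains why random reads sit so far below the drive's 6 Gb/s interface speed.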
 

Karadjgne

Titan
Herald
SSDs get slower as they fill and should never be filled to within 10% of max capacity for 250GB drives and under, or 5% for larger drives.

HDDs don't get slower as they fill, only once they get full and can no longer deal with the pagefile or Windows temp files etc. What does affect HDDs is usage: the more that's added, deleted, moved, or changed, the more the bytes get separated into different addresses, the more the armature has to move, and the longer things take to pull together. Periodic defrags after cleaning Windows/temp files solve all that.

Never Defrag an SSD. Ever.
 
Last edited:

retroforlife

SSDs get slower as they fill and should never be filled to within 10% of max capacity for 250GB drives and under, or 5% for larger drives.
My Samsung SSD did over-provisioning; it never gets completely full, since it has a part with no partition 🤔

Plus, going from completely empty to having 70GB of space left (not counting the over-provisioning), the speed is exactly the same according to the records my Samsung app saves. Dunno, just something I noticed 😶
 
May 16, 2019
SSDs get slower as they fill and should never be filled to within 10% of max capacity for 250GB drives and under, or 5% for larger drives.
I've never had an SSD get noticeably slower until it gets to the point of having too little space to work with. They tend to remain more or less at the same performance until that point, then start to fall off a cliff fairly rapidly. Sensible over-provisioning cures this almost completely, and nearly all SSDs already have over-provisioning whether they list it in their specs or not, so you don't necessarily have to leave any free user space, although it will likely still help.

Enterprise SSDs tend to have more internal over-provisioning by default. That's why some of them have odd capacities like 400GB for example. Over-provision the same brand of 480GB non-enterprise SSD by a further 80GB to make it 400GB and performance can sometimes be broadly similar to the enterprise version.

On the other hand, an HDD does get progressively slower as it fills due to less storage per revolution on the inner tracks. It also can slow to a crawl if it becomes badly fragmented as the heads have to thrash around to read/write files and use the available free space. Fragmentation has very close to zero impact on an SSD.

Of course, you can over-provision an HDD too (known as short-stroking), but you need to throw away so much usable capacity to get to the point of maintaining near-constant performance that it's then not much cheaper per GB than buying a sensibly priced SSD, and the SSD will still run rings around it.

If you need the performance, SSD is king.
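The 480GB-to-400GB example above is just a capacity ratio. A tiny Python sketch of the arithmetic (the function name is mine, for illustration):

```python
# Extra over-provisioning expressed the way vendors usually quote it:
# reserved capacity as a percentage of the user-visible capacity.
def overprovision_pct(raw_gb, usable_gb):
    """Capacity set aside, as a percentage of what the user can see."""
    return (raw_gb - usable_gb) / usable_gb * 100

# 480 GB drive over-provisioned down to 400 GB usable: 80 GB set aside.
print(f"{overprovision_pct(480, 400):.0f}% extra over-provisioning")  # prints 20%
```

By the same formula, Seagate's 128GB-marketed-as-100GB example works out to 28%.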
 
Last edited:

retroforlife

I think I used to do short stroking by having my OS and games on the first partition (about 250GB), with the rest for storage, on my super old PC years and years back, trying to keep the main first partition as free of junk as I could 🤣
 
May 16, 2019
I think I used to do short stroking by having my OS and games on the first partition (about 250GB), with the rest for storage, on my super old PC years and years back, trying to keep the main first partition as free of junk as I could 🤣
Me too. It also meant that the small OS partition was quicker to clone to a small backup drive.

I still do the same now with SSDs. The OS lives in a 56GB partition that automatically clones to a second 128GB SSD once a week with Macrium Reflect. The rest of the 128GB SSD is used with Intel SRT to cache a RAID 1 data disk array where the Documents, Downloads, Pictures, Music and Videos folders are relocated to.

If/when the main OS SSD fails, a quick change of boot order will get me straight back in.
 

Karadjgne

Lesson to be learned here: don't buy super small SSDs, they're kinda pointless unless you like constantly cleaning junk files to keep space free.
Ppl aren't really buying super small SSDs much at all; what ppl like myself are doing is running the second-biggest SSD available 7 years ago (128GB) without the budget to buy something bigger today.

In practice, an SSD’s performance begins to decline after it reaches about 50% full. This is why some manufacturers reduce the amount of capacity available to the user and set it aside as additional over-provisioning. For example, a manufacturer might reserve 28 out of 128GB and market the resulting configuration as a 100GB SSD with 28% over-provisioning. In actuality, this 28% is in addition to the built-in 7.37%, so it’s good to be aware of how vendors toss these terms around. Users should also consider that an SSD in service is rarely completely full. SSDs take advantage of this unused capacity, dynamically using it as additional over-provisioning.
If you want to tell Seagate it's FOS, that SSDs don't slow down, be my guest.
If your device includes a solid-state drive (SSD), you probably noticed that as it fills up the performance slows down dramatically. You’ll see this when opening apps and copying files will take longer than usual, and sometimes this will also cause freezing issues making your device harder to use.
Or these guys...
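For what it's worth, the "built-in 7.37%" figure in the quoted Seagate text falls out of the binary-vs-decimal capacity gap. A quick Python check (my own sketch of the arithmetic, not from the thread):

```python
# NAND is manufactured in powers of two (GiB), but drives are marketed
# in decimal GB, so a "128 GB" SSD physically carries 128 GiB of flash.
# The difference is the "built-in" over-provisioning.
GIB = 2**30   # binary gigabyte: 1,073,741,824 bytes
GB = 10**9    # decimal gigabyte: 1,000,000,000 bytes

builtin_pct = (GIB - GB) / GB * 100
print(f"built-in over-provisioning: {builtin_pct:.2f}%")  # prints 7.37%
```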
 
Last edited:
May 16, 2019
Why would I want to tell them they're full of shit?

They're right in terms of large files being written to an almost full DRAM-less and SLC-bufferless SSD which is what they're describing, but that doesn't apply to most modern SSDs with DRAM and SLC buffering unless the file is absolutely enormous.
 

Karadjgne

You missed the point entirely. The vast majority of SSD users do not have the recently released, top-line SSDs that contain DRAM and SLC buffering. That didn't come out until the release of the Samsung 860 QVO, Crucial BX500, etc. at the end of last year.

So anyone using even a slightly older model is going to start losing performance somewhere around 50% full.
 
May 16, 2019
Here's my old 2010 Intel 40GB X25-V SSD still outperforming the rated read and write specs by some margin at 91% full...

https://drive.google.com/open?id=12ahu9dQBAyZSUC8YfCK6jEvUOibEiQHh

It had a 32MB DRAM buffer even back then, although that alone can't account for zero loss of performance when transferring 1GB files (over 30 times larger than the buffer) to/from ancient 34nm Intel MLC NAND Flash.


Here's a pair of budget DRAM-less SLC-cached Silicon Power 1TB SATA SSDs in RAID 0 holding on to 77% of their read and 92% of their raw write performance at 98% full. I'm not prepared to break the array just to prove a point, but the results would be the same in percentage terms per single drive.

https://drive.google.com/open?id=1dBiM-GcmwAKAaOPUp8ECkPpFwYMs-6CW


I have a folder full of similar screenshots from budget SSDs of various ages and mostly low capacities dating back to 2010 all showing basically the same thing. The only time I ever experienced a slowdown severe enough to notice in the real world was when running SSDs for extended periods on OSes that didn't support TRIM.

I'm not sure who's made entirely cacheless SSDs in recent years, but anyone looking for performance and longevity would have avoided them like the plague if there was a sensibly priced alternative. Even the likes of uber-budget KingDian and Tcsunbow have them, not that I'd recommend either of those brands to anyone.
 
Last edited:
May 23, 2019
Hey, I think it's going to be a little off topic, but I still can't find a reliable way to see my mobo and CPU temperatures. When I use HWMonitor, the TMPIN6 and TMPIN8 temperatures are way too high to be true (115 and 107), and they're constant whether the PC is idle or gaming etc.; they only change 1-2 degrees sometimes. There are a lot of people who experience the same thing with the TMPIN6 and TMPIN8 temps.

I tried Speccy, and it shows my CPU at 110, 100, 90... It can't be true, since there are no bottlenecks or performance issues on the PC.

When I check the CPU temp in AIDA64, it says about 40 degrees. I seriously don't know what to believe... In the BIOS, the temp is between 40 and 50.
 

retroforlife

I use Core Temp for the CPU; it lets you have an icon on the taskbar, and you can set it to show the hottest core. You shouldn't have any problems with temps anyhow, but like the other posts said, try to get a front case fan to help the airflow :)
 
May 23, 2019
I use Core Temp for the CPU; it lets you have an icon on the taskbar, and you can set it to show the hottest core. You shouldn't have any problems with temps anyhow, but like the other posts said, try to get a front case fan to help the airflow :)
So, you mean I should just ignore the TMPIN6 and TMPIN8 temps (105-117) in HWMonitor, and the Speccy temps?
 

Karadjgne

HWMonitor was written quite some time ago, when mobos were slightly different. Everything has an address, and that software looks at certain addresses for certain information. Those addresses are now vastly different and in many cases are no longer viable sensors, but something else entirely. HWMonitor reads TMPIN4 on my MSI Z77 as -125°C and TMPIN6 as 255°C, both of which are physically impossible if the mobo is to actually work. It also reports my 12V rail as 8.12V.

The worst part is that nobody, even the author, has any clue as to exactly what is what anymore, since things have changed so much. The southbridge chipset is gone, the northbridge chipset no longer has anything to do with RAM, it's all PCIe, and you have additional SATA ports and PCIe x4 ports in different places than just the block below the main power bus, etc.

So you'll have to take it with a large grain of salt; it is software, and can therefore be unreliable.

The best method to get any sort of reliability is to run 5 or 6 different programs and note the similarities: AIDA64, HWiNFO64, Speccy, RealTemp, Core Temp etc. Look for specific values, not necessarily specific names (don't run them simultaneously). This double- or triple-checks one against the other, and if you get a wild temp from one, you'll know that program is bunk for the job.
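That cross-checking routine can be sketched in a few lines of Python. The readings below are invented for illustration (typed in by hand from whatever each tool reports), and the "plausible" cutoff is an assumption, not a spec:

```python
# Cross-check CPU temperature readings from several tools: discard
# physically implausible values, then see which tools roughly agree.
readings = {
    "HWMonitor TMPIN6": 115,   # stuck at the same value idle or gaming
    "Speccy":           110,
    "AIDA64":            41,
    "Core Temp":         43,
    "BIOS":              45,
}

def plausible(temp_c, low=0, high=105):
    """Readings outside a sane range for an idle/lightly loaded CPU are suspect."""
    return low <= temp_c <= high

agreed = {tool: t for tool, t in readings.items() if plausible(t)}
print("tools that roughly agree:", agreed)
```

With these sample numbers, HWMonitor's TMPIN6 and Speccy get thrown out, while AIDA64, Core Temp, and the BIOS cluster around the same believable value.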
 
May 23, 2019
Thanks for the help. It seems it's the software, because I just played Metro Exodus for 2 hours and the only change in those temp values between 2 hours of gaming and idle is 2-3 degrees. It was 113 before gaming and 115 after. They vary within a narrow interval, like TMPIN6 between 110-117 and TMPIN8 between 103-105. If it were really that hot, I think the PC would shut down.

The CPU temperature is only high in Speccy (about 100). AIDA64, Core Temp, and MSI Afterburner all give similar results (62 max while gaming, 40-45 idle).

So I think it's OK, according to what you say.
 
