Intel i7 7700T for my Low Power Desktop


valeman2012

Planning to get an Intel i7 7700T for my low-power desktop. Will this be enough for a 7th-gen chip?



I'm going to put this CPU on an H270 motherboard with 64GB of DDR4 memory.

It mostly won't be used for gaming, but I need it.

Is it good?
 
Solution

Wow, there is so much misinformation in this thread.... But the worst of it is in this post, so I'll begin here.

1st - RAID 0 is NOT mirrored, it's striped. RAID 0 is best for performance but worst for data integrity. RAID 1 is mirrored; it's better than RAID 5 for read speeds but worse for write speeds, and it's the best protection because all data is duplicated. However, if you're using 4 drives (SSD or HDD), you would be best off with RAID 10, as it offers striping for performance AND mirroring for integrity. RAID 5 and the other parity-type RAIDs are crap because they create more CPU overhead by calculating parity for all data written, they offer less redundancy than RAID 1 or 10, and they offer less performance than RAID 0 or 10.

Just remember, depending on your RAID controller, you can set it up multiple ways. Some controllers only offer RAID 10 however they implement it, but others allow you to create 2 RAID 0 or RAID 1 arrays and then tie those arrays together. Meaning, you can set up 2 RAID 0 arrays and then RAID 1 the 2 arrays together, or create 2 RAID 1 arrays and RAID 0 them together. There are some distinct differences between the methods and advantages to each.
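The trade-offs above can be sketched numerically. This is a hypothetical helper (not from any real RAID tool), assuming N identical drives, computing usable capacity and the number of failures each level is guaranteed to survive:

```python
# Hypothetical sketch of the RAID trade-offs discussed above.
def raid_properties(level: str, num_drives: int, drive_size_gb: int):
    """Return (usable_gb, guaranteed_failures_survivable) for a RAID level."""
    if level == "0":     # striped: full capacity, zero redundancy
        return num_drives * drive_size_gb, 0
    if level == "1":     # mirrored: one drive's capacity, survives N-1 failures
        return drive_size_gb, num_drives - 1
    if level == "5":     # striped + parity: loses one drive's worth to parity
        return (num_drives - 1) * drive_size_gb, 1
    if level == "10":    # striped mirrors: half capacity; only 1 failure is
        # guaranteed survivable (losing both drives of one mirror kills it)
        return (num_drives // 2) * drive_size_gb, 1
    raise ValueError(f"unknown RAID level: {level}")

for lvl in ("0", "1", "5", "10"):
    usable, survives = raid_properties(lvl, 4, 1000)
    print(f"RAID {lvl:>2}: {usable} GB usable, survives {survives} failure(s)")
```

With 4x 1TB drives, RAID 0 gives 4TB with no protection, RAID 1 gives 1TB with the most protection, and RAID 10 splits the difference.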

2nd - SSD vs HDD. Dude, SSD's are INCREDIBLY more reliable than HDD's now. If you're still questioning that, your information is about 7-10 years out of date. However, as with anything, you get what you pay for. Don't think more money directly equates to a more reliable drive, but don't buy some unknown-name drive either. That said, almost all SSD's have higher reliability rates than HDD's and, as said, will generally last for decades before write operations wear out the drive. You will be about 99% guaranteed to need to replace the entire PC for other reasons before the SSD fails.

To find a good SSD, though, do some research on SSD benchmarking and endurance. Some SSD's have very high burst read/write speeds but very low sustained speeds (still generally MANY times higher than even the best HDD's, though). I bought a pair of SK Hynix 250GB SSD's because they got a decent review (not the best, not the worst) and I got them for a price I couldn't pass up. I stuck 1 in my old PC, used mostly as a file server but also for some gaming. It maxed out the SATA2 bus no problem. The SSD in my laptop is on SATA3 and can achieve about double the bandwidth in tests that were limited by the SATA2 bus.

And compared to the RAID 0 array, it smoked it in all cases. Even on the limited SATA2 bus, in tests where the RAID 0 array achieved 160MB/s, the SSD pulled about 270-280MB/s (maxing out the SATA2). In tests where the RAID 0 array achieved a pathetic 0.6-3.5MB/s, the SSD achieved anywhere from 24MB/s to 270MB/s (the huge difference depends on the specific test). So RAID 0 across HDD's can't even begin to compare to a single SSD. After you get all set up, to test this yourself, download CrystalDiskMark.

Another thing to consider: HDD's not only have higher failure rates due to their moving parts, they also have limited write cycles, the same as SSD's. You simply don't hear about it often because the moving parts tend to wear out long before the magnetic media loses its ability to be written to. SSD's have no moving parts, so that cause of failure goes out the window, leaving the other cause: how much data can be written.

In that regard, not all SSD's are created equal. Most SSD's use overprovisioning to give you a fallback when cells begin to fail. As stated before, this will probably take decades before even beginning to be noticeable. However, not all SSD's overprovision in the same manner. Some will sell you a 250GB drive but actually give you, say, a 300GB drive, where only 250GB is accessible to you and the other 50GB is used for overprovisioning. Other drives sell you a 250GB drive which really is 250GB, but use unpartitioned space for overprovisioning, meaning if you format and use the entire drive for your partition, you end up with no overprovisioning at all.

This is also why some SSD's are more expensive than others, but it isn't the only reason, so don't go simply by price. The other biggest factor in price is the type of flash used (TLC, MLC, or 3D NAND), but you'll need to research the differences yourself because I won't get into that here.
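The overprovisioning math above can be sketched like this. The drive sizes are the illustrative figures from the post, not any specific product:

```python
# Hypothetical sketch: how much spare flash is left for wear leveling,
# depending on whether the vendor hides extra flash or you must leave
# space unpartitioned yourself.
def overprovision_pct(raw_flash_gb: float, partitioned_gb: float) -> float:
    """Percent of raw flash left as spare area for wear leveling."""
    spare = raw_flash_gb - partitioned_gb
    return 100.0 * spare / raw_flash_gb

# Vendor-style OP: 300GB of flash sold as a 250GB drive, fully partitioned.
print(overprovision_pct(300, 250))   # roughly 17% spare area

# No hidden flash: a true 250GB drive, fully partitioned = no spare area.
print(overprovision_pct(250, 250))   # 0% spare area
```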

3rd - The belief SSD's wear out faster because they write faster. This is utter nonsense. All it means is they write faster and stop writing sooner. Ecky's stomach/food/eating analogy for this was actually perfect; you simply don't seem to understand it, so I'll try to explain it better. Whether it is an SSD or HDD, the same amount of data is being written to it based on whatever you are doing with it. The speed at which it is written is irrelevant to how fast the drive will wear out. Whether an SSD writes at 250MB/s or 550MB/s or 2500MB/s (the last only being possible with a PCIe SSD), if it will last around 150TB of data written before it wears out, it will last around 150TB of data written before it wears out.

However, there's another difference between causes of failure to be aware of: SSD's are basically only active while being written to or read from. Meaning, as soon as they finish a read/write operation (which they do MUCH faster), they go into a sort of standby state until the next operation comes through. In contrast, HDD's must spin up to do anything (which takes several seconds), and therefore tend to be kept spinning for long periods.

In your case, a surveillance system, this means the HDD's will always be on. The SSD's *could* always be on too; it depends on how your software uses the drive. If it writes constantly, the SSD's will also always be on, but that doesn't cause extra wear on SSD's the way it does on HDD's. However, if the software doesn't write constantly (say it builds up a specific amount of data or length of recording in a buffer before writing it to the drive), then the SSD can go into standby between read/write operations, but the HDD will ALWAYS be active, because HDD's generally wait around 10-30 minutes before they go into standby and stop spinning needlessly.
HDD's also wear out more during changes between active/standby than when running, but running 24/7 will also wear them out faster.
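To make the endurance point concrete, here's a rough sketch. The 150TBW rating and the 50GB/day workload are illustrative figures, not from a specific drive's datasheet; note that write *speed* never appears in the math:

```python
# Hedged sketch: SSD lifetime depends on total data written (the TBW
# endurance rating) and daily write volume, not on how fast writes happen.
def years_until_worn(endurance_tbw: float, writes_gb_per_day: float) -> float:
    total_gb = endurance_tbw * 1000          # TB -> GB (decimal units)
    return total_gb / writes_gb_per_day / 365

# A 150TBW drive written at 50GB/day lasts the same number of years
# whether it writes at 250MB/s or 2500MB/s.
print(round(years_until_worn(150, 50), 1))   # about 8 years
```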

4th - Power usage. You must not know the difference between SSD's & HDD's in terms of power usage. SSD's typically consume only about 2W during use and somewhere around 0.02-0.2W in standby, whereas an HDD typically consumes around 10-12W during use and 1-2W in standby. Meaning there will be a HUGE difference in power consumption between SSD's and HDD's, both because of the direct difference in draw AND because HDD's needlessly spin for long periods before going into standby, whereas SSD's consume very little power and can go into standby frequently without it causing any extra wear.
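A back-of-the-envelope comparison using the rough wattage figures above (illustrative numbers, not measurements of any particular drive):

```python
# Sketch: annual energy use of an always-on drive, given active/idle draw
# and the fraction of time it spends active.
def annual_kwh(active_watts: float, idle_watts: float,
               active_fraction: float) -> float:
    avg_watts = (active_watts * active_fraction
                 + idle_watts * (1 - active_fraction))
    return avg_watts * 24 * 365 / 1000   # watts -> kWh over a year

ssd = annual_kwh(2, 0.1, 0.25)   # SSD drops to standby between bursts
hdd = annual_kwh(10, 10, 1.0)    # HDD kept spinning 24/7, never idles down
print(f"SSD ~{ssd:.1f} kWh/yr vs HDD ~{hdd:.1f} kWh/yr")
```

The gap comes from both the lower active draw and the SSD's ability to idle almost immediately.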

5th - RAM. Yes, you will use more energy by having 64GB of RAM installed than 16GB of RAM. For your use case, 16GB would almost certainly be perfectly fine. RAM these days is becoming much more efficient than older RAM, though, both by using less power and by doing more work for the same power.

6th - Your original question: is the 7700T a good enough CPU? Yup. For what you're doing, you don't need anything better. However, another piece of incorrect info is that the 7700T & 7700K would use the same amount of power at the same clock speed. The best example I can give is cars. Compare 2 engines, say both from the same car but tuned differently: 1 engine could hit its peak power much lower in the RPM range than the other. They would also have vastly different peak efficiency points. For instance, 1 engine could be tuned for low-end torque, the other for high-end horsepower. Along those lines, Intel has stated this is basically how their CPU's work: some are designed to run at higher max clock speeds, but they are less efficient at lower speeds. An easy thing to Google for examples of this is the Nvidia Tegra line of mobile processors. Look for their reasoning on the early 4+1 core SoC's (4 high-powered cores + 1 low-powered core). They did it for this exact reason.

So if what you want is raw processing power, go for a higher-clocked i7. If you want enough power to do what you plan, and then more efficiency on top, go with the 7700T. If it were me building a PC for what you're planning, or any sort of always-on server/system, I would go with the 7700T. The claim that a fully utilized 7700T would only run at the 3.0GHz-ish mark is nonsense. While it won't run at its highest 3.8GHz speed when maxing out all cores, it would be around 3.5-3.6GHz. In other words, roughly the same drop as the 7700K. Intel basically drops the clock speed 0.1GHz for each 1-2 cores the CPU maxes out, usually amounting to a 0.2-0.3GHz drop when the CPU is fully utilized.
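The per-core clock-drop behavior described above can be sketched as a simple stepping rule. The step size and clocks here follow the post's rough description, not official Intel 7700T turbo-bin tables:

```python
# Illustrative sketch: max turbo clock drops one small bin per extra
# active core, but never below the base clock.
def turbo_ghz(base_ghz: float, max_turbo_ghz: float, active_cores: int,
              drop_per_core_ghz: float = 0.1) -> float:
    clock = max_turbo_ghz - drop_per_core_ghz * (active_cores - 1)
    return max(clock, base_ghz)

# A 3.8GHz-max-turbo part with all 4 cores loaded lands around 3.5GHz,
# not down at 3.0GHz as claimed in the thread.
print(turbo_ghz(2.9, 3.8, 4))
```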

7th - Back to the SSD/HDD comparison. As somebody already stated (didn't pay attention to who), which is better for your use in terms of performance could also depend on your specific workload. For instance, there is a HUGE difference in stream size between a 480p 30Hz video stream and a 1080p 30Hz stream, let alone a 1080p 60Hz stream or a 4k/2160p 30Hz or 4k 60Hz stream. And it matters whether your software writes the raw video stream or uses compression. I once tried a screen-capture program which wrote uncompressed video to my RAID 0 array. Yeah, it nearly maxed it out, something like 100-120MB/s constantly. An SSD could handle this MUCH more easily than any HDD, whether in RAID 0 or not.
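The stream-size point is easy to see with a back-of-the-envelope calculation. This assumes uncompressed 24-bit video; real surveillance software almost always compresses, so treat these as upper bounds:

```python
# Sketch: uncompressed video bandwidth scales with resolution, frame
# rate, and bytes per pixel (3 bytes = 24-bit color assumed here).
def raw_mb_per_sec(width: int, height: int, fps: int,
                   bytes_per_pixel: int = 3) -> float:
    return width * height * bytes_per_pixel * fps / 1_000_000

print(raw_mb_per_sec(854, 480, 30))     # 480p30:  ~37 MB/s
print(raw_mb_per_sec(1920, 1080, 30))   # 1080p30: ~187 MB/s
print(raw_mb_per_sec(3840, 2160, 60))   # 4K60:    ~1493 MB/s
```

Even one uncompressed 1080p30 stream would saturate the HDD RAID 0 arrays benchmarked below.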

8th - SSD/HDD again. Price. IF #7 isn't a problem because your software compresses the video stream, so you aren't constantly writing huge amounts of data that virtually require an SSD and could use HDD's if you want, then price becomes another factor. Before we get to that: how much history do you want? Do you only need the last 1-3 days, or do you need 3 months? Because back to the price point, HDD's are still around 8-12x larger for the same price. In other words, a 500GB SSD is around $150, but you can find 4TB HDD's for around $130-150. Another possibility is to use 1 or 2 SSD's for performance, then a single 4TB HDD for backup/archiving.
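A quick sketch of that price-per-GB comparison, using the post's rough 2017-era prices (illustrative, not current market data):

```python
# Sketch: cost per gigabyte for the example drives quoted above.
def dollars_per_gb(price_usd: float, capacity_gb: float) -> float:
    return price_usd / capacity_gb

ssd = dollars_per_gb(150, 500)    # 500GB SSD around $150
hdd = dollars_per_gb(140, 4000)   # 4TB HDD around $140
print(f"SSD ${ssd:.3f}/GB vs HDD ${hdd:.3f}/GB, about {ssd / hdd:.1f}x more")
```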

That last method is what I tend to use in all my systems. It works fine for me, but it doesn't provide as good protection from loss as RAID 1 would. For instance, even if you back up every day, or every hour, if your main drive goes down you lose all data since the last backup, whereas a RAID 1 would still have all your data unless both drives fail before you replace the failed one. However, there is also plenty of testing which shows your chance of HDD failure goes up insanely for each drive which fails before being replaced in any form of redundant RAID (RAID 1/5/10/etc). In fact, replacing the drive is what causes the higher failure rate, because the array must rebuild the lost data onto the new drive, which makes all the other drives work overtime during the rebuild. This doesn't affect SSD's as much, because writing is what wears them, not reading.

I didn't intend to write such a wall when I started, but hopefully it helps.

Edit: I forgot what I intended to be #2, because it was a very important piece of misinformation in the post I quoted: his statements about power supplies. They are NOT most efficient when highly loaded. They are actually most efficient around 50% load. However, there is not much of a difference in their efficiency overall. For instance, a PSU may be 80% efficient at 10% load, 85-90% efficient at 50% load, and 80% efficient at 90% load. It's an arcing curve which peaks around 50% load.

Basically, any PSU which is at least Bronze rated will be very efficient, though of course each level is more so. However, Silver is barely any more efficient, and Platinum usually costs MUCH more up front, to the point you probably won't save enough to make up the difference before you replace it, so Gold is best in my opinion. Well, unless you were building a massive server which ran around 70-90% load 24/7. Platinum would probably be worth it then, but that's a full-blown server, which isn't what we're discussing.

In the end, I think you would be better off getting at least a 450W Bronze PSU, because you wouldn't be losing much (if any) efficiency and it would offer more headroom if you upgrade. Just be sure to get a good-quality PSU, such as Antec or Corsair. Don't get low-quality PSU's such as EVGA or Cooler Master or Thermaltake. They typically fail (or begin to fail) within about 1-2 years. I have 2 decades of experience in the IT field to back that up.
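The efficiency curve described above can be modeled roughly. The parabola below is a hypothetical illustration of the shape (peak near 50% load, falling toward both extremes), not a measured 80 Plus curve:

```python
# Illustrative sketch: PSU efficiency as a simple parabola peaking at
# 50% load, matching the rough 80% / 85-90% / 80% shape quoted above.
def psu_efficiency(load_fraction: float, peak_eff: float = 0.90) -> float:
    return peak_eff - 0.4 * (load_fraction - 0.5) ** 2

for load in (0.1, 0.5, 0.9):
    print(f"{int(load * 100):>2}% load: "
          f"{psu_efficiency(load) * 100:.1f}% efficient")
```

This is also why sizing a PSU so your typical draw sits near half its rating is a reasonable rule of thumb.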
 
Oh, and for a little comparison about performance....

PC = Core 2 Quad 9550, 8GB 1066MHz DDR2 RAM, SATA2 bus, 1x 250GB SSD, 2x 320GB 7200RPM HDD's in RAID 0, 2x 1TB 7200RPM HDD's in RAID 0
LT (LapTop) = Core i7 4710HQ, 24GB 1600MHz DDR3 RAM, SATA3 bus, 1x 250GB SSD, 1x 1TB 5400 RPM HDD

PC's SSD (remember, SATA2 limited):
Sequential Read (Q= 64,T= 4) : 283.243 MB/s
Sequential Write (Q= 64,T= 4) : 268.415 MB/s
Random Read 4KiB (Q= 64,T= 4) : 203.193 MB/s [49607.7 IOPS]
Random Write 4KiB (Q= 64,T= 4) : 189.629 MB/s [46296.1 IOPS]
Sequential Read (T= 1) : 273.324 MB/s
Sequential Write (T= 1) : 258.424 MB/s
Random Read 4KiB (Q= 1,T= 1) : 23.890 MB/s [5832.5 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 42.690 MB/s [10422.4 IOPS]

PC's RAID 0 OS array (64KB stripes):
Sequential Read (Q= 64,T= 4) : 159.846 MB/s
Sequential Write (Q= 64,T= 4) : 251.527 MB/s
Random Read 4KiB (Q= 64,T= 4) : 2.894 MB/s [706.5 IOPS]
Random Write 4KiB (Q= 64,T= 4) : 3.688 MB/s [900.4 IOPS]
Sequential Read (T= 1) : 154.920 MB/s
Sequential Write (T= 1) : 156.272 MB/s
Random Read 4KiB (Q= 1,T= 1) : 0.629 MB/s [153.6 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 2.973 MB/s [725.8 IOPS]

PC's RAID 0 Data array (128KB stripes):
Sequential Read (Q= 64,T= 4) : 194.944 MB/s
Sequential Write (Q= 64,T= 4) : 74.270 MB/s
Random Read 4KiB (Q= 64,T= 4) : 2.000 MB/s [ 488.3 IOPS]
Random Write 4KiB (Q= 64,T= 4) : 2.359 MB/s [ 575.9 IOPS]
Sequential Read (T= 1) : 99.991 MB/s
Sequential Write (T= 1) : 176.118 MB/s
Random Read 4KiB (Q= 1,T= 1) : 0.620 MB/s [ 151.4 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 2.503 MB/s [ 611.1 IOPS]

LT's SSD (SATA3):
Sequential Read (Q= 64,T= 4) : 554.816 MB/s
Sequential Write (Q= 64,T= 4) : 401.169 MB/s
Random Read 4KiB (Q= 64,T= 4) : 304.907 MB/s [74440.2 IOPS]
Random Write 4KiB (Q= 64,T= 4) : 215.366 MB/s [52579.6 IOPS]
Sequential Read (T= 1) : 377.659 MB/s
Sequential Write (T= 1) : 439.367 MB/s
Random Read 4KiB (Q= 1,T= 1) : 31.369 MB/s [7658.4 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 91.583 MB/s [22359.1 IOPS]


As you can see, not only do SSD's easily outperform a RAID 0, but RAID 0 arrays can have vastly different results depending on how you build them. Using the same number of drives with equivalent performance, something as simple as the stripe size you use to build the RAID 0 can have a big impact on performance. Whether it performs better or worse depends on whether you mostly write files smaller or larger than the stripe size. But many other factors also affect performance, so you're still better off with an SSD/HDD solution in my opinion.
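The stripe-size effect can be sketched simply: a request smaller than the stripe lands on one drive (no striping speedup), while a larger request spans drives. This assumes an idealized, aligned RAID 0 layout for illustration:

```python
import math

# Sketch: how many drives a single contiguous, stripe-aligned request
# touches in a RAID 0 array.
def drives_touched(request_kb: int, stripe_kb: int, num_drives: int) -> int:
    return min(math.ceil(request_kb / stripe_kb), num_drives)

print(drives_touched(4, 64, 2))     # 4KB random I/O: 1 drive, no speedup
print(drives_touched(256, 64, 2))   # 256KB sequential I/O: both drives
```

This is consistent with the benchmarks above: 4KiB random results see no benefit from RAID 0, while sequential transfers can.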
 

The CPU is LOW power, and I don't need to worry about heat since I'll be leaving it on. The low-power desktop is not going to be used for games, but for other stuff like viewing many camera feeds at once. I need a low-power, good-performance CPU (the Intel i7 7700T) and 64GB of memory to ensure smooth performance when running multiple tasks, making sure everything runs smoothly without a spike (or recovers quickly from a lag spike).
 
OP, you've had many perfectly correct answers here, but you seem to be working from a position of knowing best. So build it and tell us what the RAM usage is, or listen to the advice. And lose the attitude; in some of your answers you come across as rude. If you know enough to be rude, why are you asking the questions? If you don't, then help us help you get to the right answer. Your attitude will make people walk away.
 


I don't know why you string things up, but okay. I did have an answer, but someone else explained further.

A low-power desktop with an Intel i7 7700T and 64GB of DDR4 memory, kept on 24/7.