Build Advice: Advice on a new build, starting with the motherboard manufacturer, for touchscreens, floppy disks, and which Windows version?

Status: Not open for further replies.

Richard1234

Distinguished
Hi,

I have built a few PCs over the years. I built this one around 2010, and because it's becoming a bit obsolete, it is time to build a new one with a complete overhaul. In the past I would recycle components, but this time I think I need to upgrade most things; I will only recycle the loudspeakers, mouse and keyboard.

The motherboard is the starting point for building a PC, and the current one is by Gigabyte and is ATX.

I would firstly like advice on which motherboard manufacturer to go for, eg ones with more or better early-startup (BIOS) facilities, such as the ability to boot from USB drives. I also want ancient floppy disk support, because I have dabbled with writing software on floppy disks which boot directly without an operating system. I don't know if all ATX boards support the ancient floppy drives. I would also like a manufacturer whose motherboards are reliable.

This Gigabyte one has been good, but after 13 years of use the USB seems to sometimes malfunction for the wireless USB dongle; I think the USB support on the motherboard is somehow worn out. And, eg, hard drives attached to a USB hub attached to the machine will vanish.

I would also like legacy support for a PS/2 mouse and keyboard, ie PS/2 sockets at the back for the mouse and keyboard. Basically I want as much legacy hardware support as possible, as I have programmed the PS/2 mouse and keyboard directly for my floppy-booting software. I know USB can emulate the legacy devices, but I would like the legacy hardware directly.

Some months ago I bought a laptop with a touchscreen and Windows 11. I don't know if one can buy touchscreen monitors for PCs? Also, can Windows 10 support such a monitor, or does it have to be Windows 11? I would need to buy Windows as well; I have some spare licensed copies of Windows 10 not yet installed, bought so they can still be used when Windows 10 is no longer supported.

Then advice on tower cases for the system: I would like a transparent one where I can see everything inside the machine without having to open it up. But I would also like one which uses old-school slotting; my existing one has nonstandard plastic fittings, which I find very confusing, and some have broken. I would like as many bays as possible, eg I have lots of SATA drives.

Any advice on specific such tower cases, and on specific UK vendors who sell them? I don't know if I am allowed to ask for such advice on this forum. I found PC World no use for tower cases, and had problems finding anything good on eBay.

With my existing machine I found the expansion slots a bit cramped, where the graphics card is too near the slot I use for the USB3 adapter. I don't know if this is a limit of the ATX specification, or a manufacturer design limitation; it would be nice if there were more space between the sockets.

This existing system has various bare-pin USB headers on the motherboard, which I found a bit confusing to connect up; I would prefer proper USB sockets on the motherboard accessible at the back. But maybe all boards have this problem?

I think I would like DVI and HDMI support. With my existing machine, its HDMI doesn't do audio; I don't know if newer machines have integrated audio into the HDMI output, and what to look for to get HDMI with audio. That way I could record a session on the machine for, say, YouTube. I don't know if this is a graphics card question rather than a motherboard question, and if it is a graphics card question, then advice on which graphics cards.


It is a lot of questions, and the central question is which motherboard manufacturer, given the constraints of both ancient hardware (eg floppies and PS/2) and modern hardware (eg touchscreens), which might constrain which version of Windows or the graphics card etc.

With my touchscreen laptop, I found the early startup controls a bit limited, and it doesn't allow enough time for the external USB Blu-ray drive to get ready to boot, say, Linux Mint; I had to configure the hubs a certain way before I could get that to boot, which is a limitation of the early startup.

anyway, many thanks for any advice
Richard
 

Aeacus

Titan
Ambassador
I also want ancient floppy disk support, because I have dabbled with writing software on floppy disks which boot directly without an operating system. I don't know if all ATX boards support the ancient floppy drives. I would also like a manufacturer whose motherboards are reliable.
This is a tall order and perhaps the most complex of your requirements to solve.

3.5" FDDs are all PATA (IDE), while at current moment, 5.25" ODD, 3.5" HDD and 2.5" SSD are all using SATA. To my knowledge, there is no 3.5" FDD that uses SATA.

Modern 3.5" FDDs use USB connection, e.g:
amazon: https://www.amazon.co.uk/External-Portable,Floppy-Computer-Accessory-Removable/dp/B099PVQVWQ

But with this, I don't think you can boot directly without first booting into an OS, since that FDD operates over the USB protocol.

A 2nd option is to use a PATA-to-SATA converter, e.g:
amazon: https://www.amazon.co.uk/Cablecc-Female-Converter-Adapter-Desktop/dp/B081YP2S5R/
alongside a SATA-power-to-FDD-power cable (since that adapter has a Molex connector for HDDs),
amazon: https://www.amazon.co.uk/DeLOCK-Cable-Power-15-Pin-Floppy/dp/B018NKPUIA/

I would also like legacy support for a PS/2 mouse and keyboard, ie PS/2 sockets at the back for the mouse and keyboard. Basically I want as much legacy hardware support as possible, as I have programmed the PS/2 mouse and keyboard directly for my floppy-booting software. I know USB can emulate the legacy devices, but I would like the legacy hardware directly.
This is another hurdle, since most modern MoBos don't include a PS/2 port anymore. Some still do, but even then it is usually only a single combo port, supporting both keyboard and mouse in the same port. E.g:

[image: back I/O of an ASRock Z790 Pro RS, showing the single PS/2 combo port]

So, with this combo port, you need to use a PS/2 Y-splitter, like this one,
amazon: https://www.amazon.co.uk/Kentek-Female-Extension-Splitter-Keyboard/dp/B07KVF6D4Y

But even that Y-splitter has issues. Some say it doesn't work for them, while others say that you need to connect the keyboard/mouse the opposite way round to what is shown on the cable itself.

[image: PS/2 Y-splitter cable]


I think I would like DVI and HDMI support
This is an issue, since modern MoBos don't include a DVI port on the back I/O anymore, only HDMI and/or DisplayPort. But not all is lost, since you could use a dedicated GPU that has a DVI port, or an HDMI/DP-to-DVI adapter.

I don't know if one can buy touchscreen monitors for PCs? Also, can Windows 10 support such a monitor, or does it have to be Windows 11?
Sure you can. And you don't need Win11 to operate touch screens; Win10 works fine.

Then advice on tower cases for the system: I would like a transparent one where I can see everything inside the machine without having to open it up. But I would also like one which uses old-school slotting; my existing one has nonstandard plastic fittings, which I find very confusing, and some have broken. I would like as many bays as possible, eg I have lots of SATA drives.
If you don't mind the PC case size, then I'd suggest a full-tower ATX case. The current trend is for the side panel to be transparent (either acrylic or tempered glass), so this isn't an issue.

Speaking of a complete build, here's something to guide you along. Since I don't know your budget, look at it as a guideline;

PCPartPicker Part List

CPU: Intel Core i5-13600K 3.5 GHz 14-Core Processor (£278.88 @ Amazon UK)
CPU Cooler: Thermalright Peerless Assassin 120 SE 66.17 CFM CPU Cooler (£43.48 @ Amazon UK)
Motherboard: MSI PRO Z790-P WIFI ATX LGA1700 Motherboard (£183.48 @ Amazon UK)
Memory: Corsair Vengeance 32 GB (2 x 16 GB) DDR5-6000 CL36 Memory (£101.99 @ Amazon UK)
Storage: Samsung 980 Pro w/Heatsink 2 TB M.2-2280 PCIe 4.0 X4 NVME Solid State Drive (£126.00 @ Amazon UK)
Video Card: MSI VENTUS 2X XS OC GeForce RTX 3050 8GB 8 GB Video Card (£223.88 @ Amazon UK)
Case: Phanteks Enthoo Pro Tempered Glass ATX Full Tower Case (£124.95 @ Amazon UK)
Power Supply: SeaSonic FOCUS GX-750 ATX 3.0 750 W 80+ Gold Certified Fully Modular ATX Power Supply (£143.16 @ Amazon UK)
Total: £1225.82

Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2023-12-20 07:05 GMT+0000


A few words;
The 13th gen Core i5 is a solid all-around CPU. Sure, the latest is 14th gen, but for that you may need to update the MoBo BIOS before being able to use the PC, and the i5-14600K also costs ~30 quid more. So, to save you from the possible headache, I put in the i5-13600K, which has 14 cores (6 P-cores, 8 E-cores) and 20 threads. The CPU also comes with an integrated GPU (UHD 770), if you'd like to use the PC without a dedicated GPU and/or use the display output ports on the MoBo. And since it is a K-series CPU, it has an unlocked multiplier, which you can use to overclock the CPU if you so desire.

CPU cooler is the best price-to-performance ratio air cooler out there. It even beat the long-time king of air coolers (Noctua NH-D15) by performance (better cooling), price (cheaper) and noise (quieter). So, Peerless Assassin 120 SE is the new king of air coolers.

MoBo isn't anything fancy but is feature rich due to being Z790 chipset. Still, didn't put in a MoBo that costs a fortune, but it has enough bells and whistles, including the rare PS/2 combo port you need.

The latest RAM is DDR5, so I put in 2x 16GB of 6000 MHz RAM. 32GB should last you for years to come, though at the current date 16GB total is considered the norm, unless you play the latest AAA games, where some may require 32GB of RAM.

For OS drive, put in M.2 NVMe SSD, which is 2TB in size, comes with heatsink and operates at PCI-E 4.0. On top of that, Samsung drives are known for their performance and reliability (almost all drives i have in my PCs, are also Samsung, both M.2 and 2.5" SATA).

The GPU is there to give you your DVI port. The RTX 3050 is one generation old (the latest is the RTX 40-series) and isn't much of a proper gaming GPU. Though proper gaming GPUs cost a lot and often don't include a DVI port either. So, for the time being, I put in the RTX 3050, which is still a good performer at 1920x1080 resolution.

PC case is full-tower ATX and comes from Phanteks. It has tempered glass side panel, so you can easily see into it but it also has "old school" design, by offering: 3x 5.25" external bays, 6x 3.5" HDD slots and 2x 2.5" SSD mounts,
specs: https://www.phanteks.com/Enthoo-Pro-TemperedGlass.html

Most modern cases do not offer any 5.25" external bays anymore, and support for 3.5" HDD mounts is also dropping, since many people nowadays use M.2 drives that are mounted directly on the MoBo. The Phanteks Enthoo Pro TG is one of the few cases that has a modern look but also retains old-school features. Other such rare cases are the Corsair 780T, Corsair 760T (have that), Corsair 750D AF Edition (have that too) and Thermaltake Core X71. Of course, finding any of them on sale at the current date is an ordeal.

Oh, if you don't like a tempered glass side panel (it makes the PC case quite heavy), then there is the original Phanteks Enthoo Pro as well, with an acrylic window on its side. The internals are the same between the two,
specs: https://www.phanteks.com/Enthoo-Pro.html
amazon: https://www.amazon.co.uk/dp/B00K6S1B3Q

And lastly, the PSU. It is of high quality and uses the latest ATX 3.0 PSU standard: fully modular cables, 80+ Gold efficiency and 10 years of warranty. Sure, wattage-wise it's too much, since the build above could live with a 550W unit, but 750W is actually cheaper than 550W units currently, and it also enables you to go with a better GPU if you so desire, up to an RTX 4070 Ti. (An RTX 4080 would need a 1kW PSU and an RTX 4090 would need a 1.6kW PSU.) Overall, the Seasonic Focus series is reliable and one of the better PSUs you can buy. Of course, with more money you can get even better quality PSUs, like the Seasonic Vertex or Seasonic PRIME, but the Focus is also good enough for most people.
Oh, all 3x of my PCs are also powered by Seasonic; I have two PRIME units and one Focus unit. (Full specs with pics in my sig.)


As far as your main desire for the new PC goes, namely using old hardware (FDD, PS/2) with the latest PC - well, you might get it working as I described above, or you may face issues. Personally, I'd use the modern PC for everyday tasks, and for the hobby have a 2nd, era-correct PC to access/use the old hardware/software. Actually, I've already done that: my old AMD build is running WinXP Pro SP2 and I keep it around for retro gaming (pre-2005 games).

Edit:
Right, touchscreen monitor too;
Further reading: https://www.digitalcameraworld.com/buying-guides/the-best-touch-screen-monitors
Or if you want something smaller (e.g next to your main monitor) then Asus ProArt PA148CTV is one option,
review: https://www.tomshardware.com/reviews/asus-proart-pa148ctv-portable-monitor
 
Modern 3.5" FDDs use USB connection, e.g:
amazon: https://www.amazon.co.uk/External-Portable,Floppy-Computer-Accessory-Removable/dp/B099PVQVWQ

But with this, I don't think you can boot directly without first booting into an OS, since that FDD operates over the USB protocol.
Booting from a USB floppy is a pretty common feature; I don't know if they all have it though.
The thing is that even then you would have to disable Secure Boot (and enable CSM/legacy boot), because old software is probably not going to work under UEFI.
 

Richard1234

Distinguished
This is a tall order and perhaps the most complex of your requirements to solve.

Internal 3.5" FDDs are all from the PATA (IDE) era and use their own legacy 34-pin floppy connector, while at the current moment 5.25" ODDs, 3.5" HDDs and 2.5" SSDs all use SATA. To my knowledge, there is no 3.5" FDD that uses SATA.

Modern 3.5" FDDs use USB connection, e.g:
amazon: https://www.amazon.co.uk/External-Portable,Floppy-Computer-Accessory-Removable/dp/B099PVQVWQ

But with this, I don't think you can boot directly without first booting into an OS, since that FDD operates over the USB protocol.

A 2nd option is to use a PATA-to-SATA converter, e.g:
amazon: https://www.amazon.co.uk/Cablecc-Female-Converter-Adapter-Desktop/dp/B081YP2S5R/
alongside a SATA-power-to-FDD-power cable (since that adapter has a Molex connector for HDDs),
amazon: https://www.amazon.co.uk/DeLOCK-Cable-Power-15-Pin-Floppy/dp/B018NKPUIA/
Many thanks for the extensive reply! I will need to process it gradually; I have just quoted the bit I am looking at first.

My Gigabyte motherboard from 2010 has the following socket labelled FDD on the motherboard:

http://www.directemails.info/tom/floppy.jpg

It was very tricky to photograph; by chance that photo came out right, further ones were totally blurry! The "FDD" label is just about visible, and has a blue arrow pointing at it. I don't know what you'd call that socket?

PC hardware is backwards compatible back to the start, approx 1980, so you can boot from a bootable floppy with the ancient system: in the boot priorities of the early startup, if you put the floppy first and the first sector is marked bootable, the BIOS loads it into memory and begins running the machine code of that first sector, which can then load further sectors via BIOS calls and do things via the BIOS with the CPU in a 16-bit mode, where the BIOS is like a very low-level operating system. With some machines at least, a USB floppy drive of mine by Sony emulates the non-USB floppy, where my floppy programs work as if it were an old-era floppy, but I don't know if this behaviour only works with some floppies or some motherboards. I don't know if the emulation is done by the BIOS or by the drive, but I don't need to program USB to fully control that floppy! Eg there are commands to recalibrate the read/write head as its position gradually gets out of sync.
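(For illustration, this is roughly how one could get such a first sector onto a drive from a Linux "try without installing" session; boot.asm / boot.img and /dev/sdb are only example names, not my actual setup, so check which device really is the target drive first:)

# assemble a 512-byte flat-binary boot sector (boot.asm being your own source)
nasm -f bin boot.asm -o boot.img

# the BIOS only treats the sector as bootable if bytes 510-511 are 55 AA, so check that
xxd -s 510 -l 2 boot.img

# write the image straight to the first sector of the drive, bypassing any filesystem
sudo dd if=boot.img of=/dev/sdb bs=512 count=1 conv=fsync

The same dd line can also write further sectors by adding a seek= offset.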

As regards things working or not working, I think it does depend on the motherboard manufacturer; some are more meticulously and more extensively implemented. The early startup isn't part of the PC specification, eg different ones have different keys to enter it. I don't know which manufacturers of motherboards you have personally used; this is why I was asking about motherboard manufacturers. Eg the one in my HP Spectre laptop has a very limited early startup, whereas the Gigabyte one has a very extensive one.

With the Gigabyte one from 2010: Gigabyte did at the time a huge range of different motherboards, some having extensive legacy support, and others only the most modern support of that time, eg no floppy and PS/2 support.

At some points in time I was programming the ancient hardware directly, which is great fun! You have to go via the floppy, otherwise it is a major effort trying to decipher how to read CDs or IDE drives, and SATA is then a different problem again.
 

Richard1234

Distinguished
I should add that when you switch on a PC, it is effectively a 16-bit computer from approx 1980. The OS, eg Windows or Linux, therefore initially starts in that 16-bit "1980" mode; it then has to upgrade the CPU to 32-bit via some supervisor-level gymnastics, ie this needs assembly language and not user-level languages such as C, and then upgrade again to 64-bit if the OS and CPU are 64-bit. Intel kept backwards compatibility for their CPUs, where anything from any past era should function; to maintain compatibility the machine powers on in the 16-bit mode.

With the 16-bit programming, there is a zone of memory which produces the graphics: if the program writes a byte there, a character appears on the screen corresponding to the ASCII value, where different byte locations correspond to different screen locations (the colour text screen lives at segment B800, 80x25 characters, two bytes per cell: character plus attribute). Eg I think the BIOS early-startup screen is done this way. It can't be done with pixel graphics, as there isn't enough memory for a 16-bit computer!

But an OS such as Windows or Linux then, later in the boot, switches to say 32-bit and uses a totally different system for the graphics, where the program reads or writes pixels and has to do its own fonts.

I haven't programmed it for more than 10 years, so I am a bit rusty, but I am thinking of doing this again. Anyway, it is possible that all boot drives pretend to be floppies for the boot process to occur. I haven't tested this out nor read it anywhere, but I just noticed that my floppy disk programs functioned correctly when I booted from a USB floppy drive on some different machines to hand, eg my ancient Fujitsu Siemens laptop. The reason for wanting legacy hardware is that it is the easiest to program. I have programmed CD drives and IDE hard drives from above an OS, also more than 10 years ago, and IDE drives are a bit of a nightmare. I contacted someone from the committee responsible for the IDE interface, who gave me some source code for reading and writing IDE hard drives. At the time SATA was emerging, but it wasn't in his jurisdiction. I never got as far as programming USB, but would like to eventually. The main problem is that each aspect of the programming is a quagmire, eg with the floppy programming one has to figure out how to get the code onto the floppy disk, as you are writing to the sectors directly and not via the MSDOS filesystem. To try things from a CD, one would have to figure out how to write to the CD sectors directly.
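(Again purely as an illustration of the CD route: the usual trick is El Torito floppy emulation, where a CD-mastering tool is handed a 1.44MB floppy image and the BIOS then boots the CD as if it were that floppy. The file and device names below are only examples:)

# wrap a 1.44MB floppy image into a bootable ISO using El Torito floppy emulation
# (boot.img must sit inside the cd_root/ directory that becomes the CD's contents)
genisoimage -o bootcd.iso -b boot.img cd_root/

# burn the ISO to a blank disc
wodim dev=/dev/sr0 bootcd.iso

That sidesteps having to write raw CD sectors at all.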

I think I got the program onto the floppy via programs written for the MSDOS environment on Windows, where Windows emulates the BIOS calls. I have to go through my notes; in totality the scenario is a bit of a jungle.
 

Aeacus

Titan
Ambassador
I don't know what you'd call that socket?

That's the legacy 34-pin floppy (FDD) header.
Further reading: https://www.cablestogo.com/learning/connector-guides/internal#34pin

PC hardware is backwards compatible back to the start, approx 1980
Not all hardware though. E.g. when you have a modern M.2 PCI-E SSD, I'd like to see how you could get it working with a system from the ~'90s that doesn't even have SATA, let alone M.2 PCI-E, only PATA.

As far as floppies go: while my days with PCs (some 30 years ago) also started out with floppies (5.25" at first, then moving on to 3.5"), the last time I needed a floppy was back in the early 2000s, since my ancient Pentium II build, running Win98 SE, has an FDD and also a recovery floppy, just in case the OS craps out and I need to get it working again. I haven't turned that build on for years and I don't think the PSU in it is sound anymore. I did think about getting a new PSU for it, but that would be a hassle since it has different power connectors than modern ATX PSUs have. Also, I figured that I can buy a PATA-to-SATA adapter instead, IF I want the data on it.

I don't know which manufacturers of motherboards you have personally used; this is why I was asking about motherboard manufacturers. Eg the one in my HP Spectre laptop has a very limited early startup, whereas the Gigabyte one has a very extensive one.
As for the MoBo manufacturers that I've used over the years, well, I don't know them all, since I've had loads of PCs. If memory serves me correctly, then from oldest:
* IBM XT
* IBM AT
* 286
* 386 (at least 3 different PCs)
* 486 (at least 2 of these)
* Pentium I 133 Mhz
* Pentium II 266 Mhz (still have it)
* AMD build (prior main build) - MoBo: ECS A750GM-M (V7.0)
* Haswell build (missus'es PC) - MoBo: MSI Z97 Gaming 5
* Skylake build (current main PC) - MoBo: MSI Z170A Gaming M5

Up to the Pentium II, I have no clue who the MoBo manufacturer was in those old PCs. Though, I have pics of my Pentium II internals and, if you're up to it, feel free to try to identify the MoBo manufacturer based on looks, if you like;

Nothing fancy here. 2x HDDs, C: is 4.6GB, D: is 6.2GB, runs Win98SE. Also has an FDD and ODD.

[images: full internals, add-in cards up close, CPU cooler, PSU label, rear end, front end]

Though, a laptop with a proprietary MoBo (HP) isn't quite comparable to a desktop PC, especially one that doesn't use a proprietary MoBo. Standard desktop MoBos, outside of proprietary ones (e.g. like a Dell prebuilt), have many options unlocked, especially when you have a MoBo meant for enthusiasts, like an Intel Z-series board.

Intel 700-series chipset comparison,
link: https://ark.intel.com/content/www/us/en/ark/compare.html?productIds=229719,229720,229721
 

Richard1234

Distinguished

OK, I didn't realise that was the 34-pin floppy header!

I am winging it to some extent, because the totality of things is a quagmire; my expertise is highly selective and geared to ancient peripheral hardware.

Not all hardware though. E.g. when you have a modern M.2 PCI-E SSD, I'd like to see how you could get it working with a system from the ~'90s that doesn't even have SATA, let alone M.2 PCI-E, only PATA.
Backwards to 1980, subject to various small print!

If you connect a USB floppy drive, then via emulation (by the BIOS and/or the drive) I think you may be able to run ancient floppy-based software.

For a given machine, you can clone the hard drive to a different hard drive technology and it might function correctly. Eg I cloned my HP Spectre laptop's internal drive to an external USB3 drive, and booted from that USB3 drive just fine into the identical Windows 11, where the internal drive is probably SATA.

And I have replaced the magnetic drives of laptops with SSDs by cloning the sectors, but where it is the same socket. It is best if the destination drive is bigger, as 250G by one manufacturer might be less than 250G by another, where the cloning might overrun; thus it is best to, say, clone a 250G drive to a 500G one, unless you determine the exact number of sectors. For the HP laptop, I cloned a 1T internal drive to a 2T USB3 one.

Whether cloning an IDE boot disk to a SATA one on the same machine will work, I don't know; the only way to tell is to try it out.

For cloning drives, I use a technical Linux shell command, eg something like:

sudo dd if=/path_to_source_drive of=/path_to_destination_drive

where the form of the path depends on which version of Linux you use, eg Ubuntu has a different scheme from Mint,

and where I do this from a "try without installing" Linux session. Be careful about installing Linux, as it might overwrite Windows; do a sector-by-sector backup of Windows before trying to install Linux.

If you run the Linux partition editor you can determine the paths. You need to do this each time you boot up, as it might assign a different path, depending on the order in which the drives got recognised.
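(A quick way to double-check the paths and the exact sizes before starting a clone, from the same Linux session; sdX and sdY below are placeholders for whatever the partition editor shows:)

# list all drives with size, model and bus type so the right if=/of= devices get picked
lsblk -o NAME,SIZE,MODEL,TRAN

# compare the exact sizes in bytes; the destination must not be smaller than the source
sudo blockdev --getsize64 /dev/sdX
sudo blockdev --getsize64 /dev/sdY

Adding bs=4M status=progress to the dd line should also make it noticeably faster and show how far it has got.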


On my 2010 machine the cloning could take 8 hours to complete, whereas with my modern HP laptop and USB3 it might take less than 2 hours (I'd have to check my notes).

The copy needs to be a sector clone, not a filesystem-level copy, and it will only work on that machine; it won't work on any other machine.


The CPU is backwards compatible to approx 1980, and the BIOS also. I haven't programmed this in a long time, but I think the first 640K of memory is where the 16-bit software loads, and above that you have memory-mapped hardware, eg the rudimentary text you see with the early startup. Hence the famous (probably apocryphal) Bill Gates quote that no computer could need more than 640K!

The Intel specification of x86 is that it begins in a 16-bit mode; the boot software can then, if it chooses, promote the CPU to 32-bit (if the CPU is 32-bit), and then, if it chooses, to 64-bit (if the x86 is 64-bit).

When you install 32-bit Windows, it promotes the CPU up to 32-bit, and 64-bit Windows on the same machine promotes it to 64-bit.

Basically, if you program the CPU directly without an OS, you programmatically start with a machine from approx 1980, and it is a major effort to upgrade the programming to later technology!

Eg for pixel graphics I used an interface I think called VESA, but I never got as far as programming graphics cards directly, and didn't program audio either.

As far as floppies go: while my days with PCs (some 30 years ago) also started out with floppies (5.25" at first, then moving on to 3.5"), the last time I needed a floppy was back in the early 2000s, since my ancient Pentium II build, running Win98 SE, has an FDD and also a recovery floppy, just in case the OS craps out and I need to get it working again. I haven't turned that build on for years and I don't think the PSU in it is sound anymore. I did think about getting a new PSU for it, but that would be a hassle since it has different power connectors than modern ATX PSUs have. Also, I figured that I can buy a PATA-to-SATA adapter instead, IF I want the data on it.
I only got my first PC in 2004, so I don't know the earlier technology, but I learnt to program it!

When I learnt to program the hardware, the standard was IDE drives, but I went back to the 3.5" floppy era, not realising 5.25" was even earlier!

The earliest PCs I used ran XP, but if your earlier version installed from floppies, and you still have the install floppies, you can probably install it on a modern PC via a USB floppy; you need to configure the early startup to boot from the floppy drive first, or select that drive from the early startup.

You'd then need to install any other software from scratch.

In general you can't use installed software on a different machine, but you could reinstall it on a different machine.

Ultimately you have to experiment a bit, as a lot of stuff isn't documented, or is really obscurely documented somewhere out there on the internet.

You need to do sector backups of system drives before experimenting, in case things go wrong.

With each version of Windows, it is worth buying a few copies before it eventually gets discontinued, because once it is discontinued you may be stranded.

The best time to buy spare copies is when the next version of Windows is first released, as that previous version is then fully matured.


As for the MoBo manufacturers that I've used over the years, well, I don't know them all, since I've had loads of PCs. If memory serves me correctly, then from oldest:
* IBM XT
* IBM AT
* 286
* 386 (at least 3 different PCs)
* 486 (at least 2 of these)
* Pentium I 133 Mhz
* Pentium II 266 Mhz (still have it)
* AMD build (prior main build) - MoBo: ECS A750GM-M (V7.0)
* Haswell build (missus'es PC) - MoBo: MSI Z97 Gaming 5
* Skylake build (current main PC) - MoBo: MSI Z170A Gaming M5

Up to the Pentium II, I have no clue who the MoBo manufacturer was in those old PCs. Though, I have pics of my Pentium II internals and, if you're up to it, feel free to try to identify the MoBo manufacturer based on looks, if you like;
It is if you build the PC from scratch that you have to go via a specific motherboard manufacturer,

ie where you buy separately: motherboard, CPU, memory, PSU, tower case, mouse, keyboard, monitor, loudspeakers, and then connect them all up. The CPU and memory need to be compatible with the motherboard; most won't be compatible, and it needs some research to get compatible ones.

I bought a ready-made motherboard + CPU + memory combo from Maplins, and it kept crashing outright when I tried to install XP, whenever it got to installing something to do with .NET. Asking online, I was told the memory was probably incompatible. I then found a Corsair memory module which was compatible, and then XP installed perfectly.

But with a ready-built PC it can be mystifying who it is really by; I have no idea who made the one in the photos.
Does the hardware device manager give any info?


Nothing fancy here. 2x HDDs, C: is 4.6GB, D: is 6.2GB, runs Win98SE. Also has an FDD and ODD.

[images: full internals, add-in cards up close, CPU cooler, PSU label, rear end, front end]

Though, a laptop with a proprietary MoBo (HP) isn't quite comparable to a desktop PC, especially one that doesn't use a proprietary MoBo. Standard desktop MoBos, outside of proprietary ones (e.g. like a Dell prebuilt), have many options unlocked, especially when you have a MoBo meant for enthusiasts, like an Intel Z-series board.

Intel 700-series chipset comparison,
link: https://ark.intel.com/content/www/us/en/ark/compare.html?productIds=229719,229720,229721

Forwards compatibility is much trickier, eg a 32-bit machine might not cope with really huge hard drives. Eg my XP cannot cope with 5T drives, but handles 2T without problem; it handles SOME 4T ones and not others. It depends on whether the system was designed with forwards compatibility in mind.
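(I believe the 2T boundary is the old MBR partition table rather than the drive itself: MBR stores a partition's start sector and length as 32-bit numbers, so with 512-byte sectors the ceiling works out as below. Anything bigger needs a GPT partition table, which 32-bit XP doesn't understand; the 4T drives that do work probably present 4K logical sectors, which gets them under the limit.)

# 32-bit sector numbers x 512-byte sectors = the MBR size ceiling
echo $(( 2**32 * 512 ))     # 2199023255552 bytes, ie 2 TiB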

I had a non-PC computer, a Commodore Amiga 1200, and bought 2 SCSI-2 interfaces: SquirrelSCSI via the PCMCIA slot, and I think a "Blizzard" SCSI one. Then when I got some SCSI-3 hardware, it only worked with one of the SCSI-2 interfaces, the SquirrelSCSI, not the other! Basically the one it worked with had forwards compatibility programmed in, where it didn't assume anything about the SCSI command sizes; some SCSI-3 commands use more bytes than SCSI-2 commands.
 

Richard1234

Distinguished
That's the legacy 34-pin floppy (FDD) header.
Further reading: https://www.cablestogo.com/learning/connector-guides/internal#34pin


Not all hardware though. E.g. when you have a modern M.2 PCI-E SSD, I'd like to see how you could get it working with a system from the ~'90s that doesn't even have SATA, let alone M.2 PCI-E, only PATA.
The really big limiting factor of a specific PC is what sockets it has, and these divide into different categories:

1. Very specific sockets, eg for the loudspeakers, CPU and memory; eg the CPU socket will limit you to a specific subset of either AMD or Intel.
2. Abstract sockets, where the same socket allows many different things, eg USB2, and the PCI kind of sockets.

The abstract sockets subdivide into different levels of generality, eg the PCI kind of socket is more general than USB, because you can get a USB card for the PCI kind of socket.

The PCI kind of socket is probably based on a motherboard bus, which is lower level, hardwired on the circuit board, which allows more impressive performance, whereas USB is higher level and won't be as impressive.

The lowest-level bus will be the one the CPU and memory are on. I am not sure how the other buses are fed in; probably there is an interface attached to the CPU's bus, where that interface then owns, say, the PCI bus.

A bus is literally some parallel wires, with multiple gizmos using the same wires to communicate with each other. A protocol is needed so that only one item writes to the bus at a time, but more than one can read it. A 16-bit PC typically has a 16-bit data bus, ie 16 wires, with some further metalevel wires for requesting and granting the bus.

But with USB, that bus has to go through an external socket which might be imprecisely inserted, and thus the timing can't be as precise.


A PS/2 socket, say, I think only allows a mouse, a keyboard and maybe a PS/2 splitter.

I think the old parallel and serial ports were put to innovative uses.

Each socket will also have a data rate limit, where any conversion via that socket will be the same speed or slower, eg USB2 has a very specific limit (480 Mbit/s).

Laptops are much more limited for sockets, as it is pretty much just the USB sockets that are available for general use.

I am not sure what kind of generality the SATA sockets have; so far I have only used them for SSDs.


The CPU, memory and BIOS are ALWAYS backwards compatible for all PCs back to approx 1980; thus if you can connect a floppy drive, and there is emulation of the old drives, then you can run systems from the floppy disk era. Forwards compatibility is never guaranteed: it depends on how well the PC side and the external hardware side are designed, and even then it may not be viable. I think the IDE drives had a very specific upper limit on size.
This is why it is best to get a motherboard with a floppy disk socket and all the legacy sockets on the motherboard, as then the machine is guaranteed to work with systems right back to the year dot,

which generally is only viable with ATX motherboards, as they are big enough to have everything.

And if an ancient system fails, you can reinstall it to a new machine, but only if you have the installation disks. I bought many dozen blank MSDOS 3.5" floppy disks whilst these were still available.

The only way to know for sure whether something works is to try it out on a specific machine, eg will an OS installed on an IDE drive function the same if you clone that drive to a SATA drive?

Some surprising things will work. Eg when I found that the more recent Norton versions won't work on XP, they told me I could install Windows 10 on my 2010 machine. I tried that and they were right, and I then could install Norton.

I also installed Windows 10 without problems on someone's Windows 7 laptop.

Because I installed XP myself, I made the 2010 machine multiboot, so I can still boot to my old XP system; eg my Canon camera from 2012 has software which only works properly on XP.

Also, Ubuntu 8.10 only recognises the drives properly if I boot to XP and then restart into Ubuntu; if I use Windows 10 and then reboot into Ubuntu, some drives aren't available!

Whereas Linux Mint handles them properly.

My Sony USB floppy drive booted my floppy-based software (which doesn't go via the OS) without problem on my Fujitsu Siemens laptop from maybe 2007, via the USB2 sockets.



But in general, an OS installed on one machine won't work on another machine; cloning only works for the specific machine. Even the same manufacturer and model might fail, because it might be a different production run with some components different or updated.

Eg I bought an HP Pavilion in 2004 with an Intel CPU; some months later I went back to the shop, and that product now had an AMD CPU.
 

Aeacus

Titan
Ambassador
I only got my first PC in 2004, so I don't know the earlier technology, but I learnt to program it!

When I learnt to program the hardware, the standard was IDE drives, but I went back to the 3.5" floppy era, not realising 5.25" was even earlier!

The earliest PCs I used ran XP, but if your earlier version installed from floppies, and you still have the install floppies, you can probably install it on a modern PC via a USB floppy; you need to configure the early startup to boot from the floppy drive first, or select that drive from the early startup.

For vintage hardware, we have dedicated topic in the forums, which would be interesting read for you (perhaps even learn something new) and/or you can even post your ideas/experience in there,
link: https://forums.tomshardware.com/threads/vintage-pc-technology-mega-discussion-thread.2817216/

My history with PCs is also written in that topic, post #19. :sol: My reply quote from that topic:
(Note: That reply was written in 2016, before i bought my current Skylake build.)

My 1st PC was an IBM XT back in 1992. It had a 2 MHz CPU and a 10MB HDD. OS MS-DOS, no mouse, no speakers, a black & white monitor and a 5¼-inch floppy drive. It had 1 or 2 games on it that I loved to play.

The next upgrade was an IBM AT, since it was faster than the XT.

The 3rd PC I owned was a 286 with a CGA monitor.

The big upgrade came with the 386. It had a turbo button for an instant CPU MHz boost, an EGA monitor, speakers, a 2-button mechanical mouse (no scroll wheel), a 3½-inch floppy drive and an OS in the form of Norton Commander. My gaming era began with the 386. One game that I still casually play while emulating DOS is Supaplex.

The next PC was a 486 with a VGA monitor and a faster CPU with more HDD space. OS Windows 3.1.

After that I went on to the Pentium series. A Pentium 1 133 MHz that got upgraded to a Pentium 1 166 MHz. OS Windows 95.

The next upgrade was a Pentium 2 266 MHz with Windows 98SE. I still have it in running condition with 2 PATA HDDs: 4.2GB for the system and 6.1GB for data.

Years passed until the Pentium 2 got too old, and I bought my current AMD build in 2011 (specs in this topic) that I'm constantly upgrading to keep up with the times.

Nowadays, PC building is very easy. Everything gets auto-detected and it doesn't matter which way you connect your SATA drive.
Back then there was no such luxury. You had to manually read from the HDD label the values of Size, Cyls, Head, Precomp, Landz and Sector and insert them into the BIOS in the hope that the PC detects the drive. Also the orientation of the PATA cable was important, not to mention the Master and Slave drives: you couldn't install two Master drives on a single PATA cable and hope that both of them would work.

So, yeah, I've lived with many OSes in my time, starting out with MS-DOS.

Oh, I've also used some GNU/Linux distros in my time. Namely with my laptop (Asus Eee PC 701). It 1st came with XandrosOS. That was a very complex OS to use, so I wiped it and installed Ubuntu Eee OS instead, which was optimized for my Eee series laptop. When Ubuntu Eee went EoL, I tried Debian on it. Didn't like it due to being way too barebones (also tried it based on a friend's suggestion). So, I wiped Debian and installed Lubuntu on it, which lives on my laptop to this day.

I also have Linux Mint on a USB thumb drive, as a bootable OS (it loads itself into RAM to boot from), just in case any of my current PCs' Win installations decide to crap out and I need to recover my data.

If it weren't for Steam and games, I'd be using Linux Mint problem-free. But since I like to game, I'm stuck with Win. And I know that I can emulate Win under a GNU/Linux distro, but not all games are happy with emulation and it's quite an ordeal to set it up. Easier to put up with Win and its flaws.

But with a ready-built PC it can be mystifying who it is really by; I have no idea who made the one in the photos.
Does the hardware device manager give any info?
I can't fire up the old Pentium II build, since I haven't turned it on for ~10 years or so. The capacitors in the PSU are for sure long gone, and I'd get a "boom" + magic smoke if I tried to power it on. On top of that, the PSU is original, from 1998, when the PC was bought brand new (that's a 25-year-old PSU in there). Caps on the MoBo might be gone as well.
So, the best I could do is take the PATA drives out of it and hook them up to my current PC, to get to the personal data I have on there.

I also installed Windows 10 without problems on someone's Windows 7 laptop.
Any PC that can run Win7 can also have Win10 on it, since hardware-requirements-wise there isn't much change (if any).

E.g. my Skylake and Haswell builds also started out with Win7, namely the Win7 Pro OEM version. And after some time I was able to upgrade both to Win10 Pro without any cost (my OEM key also activated the Win10 upgrade).

Because I installed XP myself, I made the 2010 machine multiboot, so I can still boot to my old XP system; eg my Canon camera from 2012 has software which only works properly on XP.
In my years, I've also used dual-boot. With my now old AMD build, for some time I had it dual-booting between WinXP Pro SP2 and Linux Mint. Didn't end up using Linux Mint that much though, since gaming was done with Win. :)

As far as programming in general goes, I get that it can be an exciting hobby. Though programming as such is too complex for my brain, and I like to keep away from it. My missus already knows how to program (she earns her living with programming), while I like to keep myself on the hardware side of things. And while I have the know-how for some software troubleshooting as well, I don't like dealing with software issues. I prefer hardware. :)

Back to your initial build advice;
I suggest that you 1st make up your mind whether you want an Intel or an AMD CPU, since that narrows down the MoBo selection a lot. Once the CPU is in place, you can look at the chipset series (people usually look at the latest/newest chipset), and then look at what features the MoBos offer.

If a DVI port on the MoBo is a must, then you should be looking at older MoBos rather than the current latest ones. For example, here's the back I/O of my Skylake build's MoBo:

[image: back I/O of the MSI Z170A Gaming M5]


It does have a DVI port, but it is only DVI-D, meaning it only carries a digital signal. So, if you have an analog monitor (e.g. with a VGA connector), you'd need either DVI-A (rare) or a DVI-I port (which carries both digital and analog signals). My MSI Z170A Gaming M5 MoBo is from 2016; if you are interested in what other features it has,
specs: https://www.msi.com/Motherboard/Z170A-GAMING-M5/Specification
(pick Detail tab)
 

Richard1234

Distinguished
For vintage hardware, we have dedicated topic in the forums, which would be interesting read for you (perhaps even learn something new) and/or you can even post your ideas/experience in there,
link: https://forums.tomshardware.com/threads/vintage-pc-technology-mega-discussion-thread.2817216/

My history with PCs is also written in that topic, post #19. :sol: My reply quote from that topic:
(Note: That reply was written in 2016, before i bought my current Skylake build.)
looks interesting, the start post says it even covers punched cards!


So, yeah, I've lived with many OSes in my time, starting out with MS-DOS.

Oh, I've also used some GNU/Linux distros in my time. Namely with my laptop (Asus Eee PC 701).
I think Asus manufacture motherboards; what is your impression of Asus, eg is the early startup more fully featured?

It 1st came with XandrosOS. That was a very complex OS to use, so I wiped it and installed Ubuntu Eee OS instead, which was optimized for my Eee series laptop. When Ubuntu Eee went EoL, I tried Debian on it. Didn't like it due to being way too barebones (also tried it based on a friend's suggestion). So, I wiped Debian and installed Lubuntu on it, which lives on my laptop to this day.

I also have Linux Mint on a USB thumb drive, as a bootable OS (it loads itself into RAM to boot from), just in case any of my current PCs' Win installations decide to crap out and I need to recover my data.

If it weren't for Steam and games, I'd be using Linux Mint problem-free. But since I like to game, I'm stuck with Win. And I know that I can emulate Win under a GNU/Linux distro, but not all games are happy with emulation and it's quite an ordeal to set it up. Easier to put up with Win and its flaws.
Currently I just use Linux from an optical disk, "try without installing"; that way the PC is unchanged afterwards. Mint seems very compatible and modern, but I like Ubuntu 8.10; the only thing is it doesn't work on all systems, and its "try without installing" has more of the stuff I need.

I can't fire up the old Pentium II build, since I haven't turned it on for ~10 years or so. The capacitors in the PSU are for sure long gone, and I'd get a "boom" + magic smoke if I tried to power it on. On top of that, the PSU is original, from 1998, when the PC was bought brand new (that's a 25-year-old PSU in there). Caps on the MoBo might be gone as well.
So, the best I could do is take the PATA drives out of it and hook them up to my current PC, to get to the personal data I have on there.

You'd also need to check the battery for the clock (usually a CR2032 coin cell, nominally 3 V); that eventually goes flat and can lead to problems. If you buy a voltmeter, you can check with that whether it is flat.

Any PC that can run Win7 can also have Win10 on it, since hardware-requirements-wise there isn't much change (if any).
The main problem with Win 10 on my 2010 machine is that it doesn't use the higher-resolution modes of my more recent monitor, but maybe that can be fixed with a new graphics card?


E.g. my Skylake and Haswell builds also started out with Win7, namely the Win7 Pro OEM version. And after some time I was able to upgrade both to Win10 Pro without any cost (my OEM key also activated the Win10 upgrade).


In my years, I've also used dual-boot. With my now old AMD build, for some time I had it dual-booting between WinXP Pro SP2 and Linux Mint. Didn't end up using Linux Mint that much though, since gaming was done with Win. :)

As far as programming in general goes, I get that it can be an exciting hobby. Though programming as such is too complex for my brain, and I like to keep away from it. My missus already knows how to program (she earns her living with programming), while I like to keep myself on the hardware side of things. And while I have the know-how for some software troubleshooting as well, I don't like dealing with software issues. I prefer hardware. :)
In the old days it was easier to get into programming, because there were fewer options, and all were good. Nowadays there is too much proliferation of too many options, and it's difficult to focus on anything.

I originally got into programming around 1992 as a hobby, then very actively from maybe 2004 to 2006,
with some up to 2010. After that I kept getting interrupted by real-world problems, eg recently trying to manage many investments, but I want to return to programming again.

The main problem is to try and keep really focussed and therefore also really selective; basically laser-beam focus, where you focus on one really narrowly defined topic.

There is too much stuff going on, and then also say smartphone programming also which I know nothing about.

If you do programming as a job, then the job kind of decides what you should focus on, and forces you to focus on that. If you go on a course, the course also forces you to focus. But if you program as a hobby, it is very difficult to focus nowadays, and not just on programming but on anything, because there are way too many interesting things going on, and eg I have got quagmired in investing topics.

I think it's a philosophical problem: if you could have anything at all that you want, but were only allowed one thing, what would you choose? Whatever I look at, something even more interesting comes up. Sometimes I focus on something for a few weeks, but then something else gets my attention or I have to deal with some real-world problem. Whereas in the old days there was nothing at all to do, and in a year you might find one interesting new thing to do, and you'd get absorbed in it.

Back to your initial build advice;
I suggest that you 1st make up your mind whether you want an Intel or an AMD CPU, since that narrows down the MoBo selection a lot. Once the CPU is in place, you can look at the chipset series (people usually look at the latest/newest chipset), and then look at what features the MoBos offer.
Probably I'll go for AMD. With Intel multicores they often use hyperthreading, where some of the cores are faked in software, whereas with AMD, multicore really is multicore, eg if it says 4-core it really is 4 separate CPUs, but an Intel 4-core might be 2 CPUs pretending to be 4.

I was thinking I might delay the graphics card till later; that way I can see if the motherboard has other compatibility issues, and then jump ship if necessary. Gigabyte is the devil I know, but I want to try some other manufacturer to get some perspective, and was considering Asus.

If a DVI port on the MoBo is a must, then you should be looking at older MoBos rather than the current latest ones. For example, here's the back I/O of my Skylake build's MoBo:

[image: back I/O of the MSI Z170A Gaming M5]


It does have a DVI port, but it is only DVI-D, meaning it only carries a digital signal. So, if you have an analog monitor (e.g. with a VGA connector), you'd need either DVI-A (rare) or a DVI-I port (which carries both digital and analog signals). My MSI Z170A Gaming M5 MoBo is from 2016; if you are interested in what other features it has,
specs: https://www.msi.com/Motherboard/Z170A-GAMING-M5/Specification
(pick Detail tab)
Possibly I might abandon the DVI constraint, as DVI is neither ancient nor modern!

Do any of the newer PCs or cards do HDMI with audio?

HDMI with TVs is both audio and picture, but with my existing PCs it seems to only do picture. I suppose you might need integration of the graphics card with the audio card for it to output both, but DVD players etc cope with that, so it ought to be viable; maybe a card which does both graphics and audio. I don't know if such cards exist?
 

Aeacus

Titan
Ambassador
I think Asus manufacture motherboards; what is your impression of Asus, eg is the early startup more fully featured?
Out of the 4 major MoBo manufacturers (MSI, Asus, Gigabyte and EVGA), my 1st choice is MSI and my 2nd would be Asus.

Asus is known for building reliable MoBos, without much fuss or shady dealings. Also, I've heard that their customer support is good (I haven't personally contacted Asus about anything).

MSI is my personal brand preference and I'm happy with MSI products. Thus far I have 2x MSI MoBos, 3x MSI GPUs and 1x MSI monitor; all work without issues. Sure, MSI has gotten some bad press lately, but who hasn't. I still consider MSI my #1 brand for MoBos and GPUs.

EVGA used to make plenty of high-end MoBos but has taken a step back to reflect on their business (e.g. EVGA stopped GPU production with the RTX 40-series, leaving the RTX 30-series as the last GPU series EVGA made). MoBo-wise, EVGA is like the "odd man out", focusing on a niche aspect of MoBo manufacture, namely high CPU overclocks. For example, here's the latest EVGA Z790 chipset MoBo;
specs: https://www.evga.com/products/product.aspx?pn=121-RL-E798-KR
It really looks unique. It's also E-ATX in size, and the two most apparent features, contrary to a "normal" MoBo layout, are the 90-degree-rotated CPU socket, which puts the RAM slots above the CPU rather than to the right of it as most conventional MoBos have them, and power connections that are rotated 90 degrees to lie parallel to the MoBo itself, rather than sticking straight up as on most MoBos. This makes the power cable management far cleaner and nicer to look at.
With the MoBos they make and the GPUs they did make, EVGA has always stood out as the "odd man out", making hardware contrary to conventional design. EVGA likes to push the limits and try something else with the MoBos/GPUs they make/made. And EVGA products are known to be reliable as well.

Gigabyte is something I don't suggest to anyone, mostly due to their shady dealings with MoBo revisions.
Article: https://web.archive.org/web/2015022...hing-it-a-motherboard-revision-too-far,3.html
Since the original article is gone, I had to use the WayBackMachine to load the webpage; luckily the images do show up, so you can compare the MoBo revisions Gigabyte did.
As of late, Gigabyte screwed up with their new PSU lineup as well. It took several videos from Steve Burke (GamersNexus) to bring the issue to light before Gigabyte finally succumbed and made a recall of their PSUs. I can post the PSU saga if you're interested.
Also, I used to have a Gigabyte PC case (Gigabyte GZ-G2 Plus) and it was a poor one. So, no, I won't be buying anything Gigabyte, nor do I suggest getting Gigabyte either.

ASRock is another MoBo manufacturer, mostly known for cheap, low-end stuff. But some ASRock MoBos are good; you just need to pick the right one.

NZXT is another niche MoBo manufacturer, with only a few MoBos to choose from within a chipset. Since NZXT is mainly a PC case manufacturer, their MoBos are best suited to being used with NZXT PC cases, for a unified clean look. E.g. the NZXT N7 Z790 MoBo;
specs: https://nzxt.com/product/n7-z790
As for the brand itself, they are good. While I don't personally like the PC cases they make (too bland/boring for me), their RGB solutions are nice. I even have two sets of NZXT HUE+ ARGB controllers + LED strips in my PCs for that ARGB eyecandy, plus 140mm ARGB fans as well, which are among the highest-CFM fans in the 140mm segment. Oh, their customer support is also good; I've had a few run-ins with them and all have been pleasant.

Biostar is another MoBo manufacturer, and an old one. Though Biostar mostly makes cheap/low-end MoBos and isn't something one would usually consider.

And lastly, there is Colorful. Colorful MoBos are fancy and mostly marketed to the Asian market; essentially a cheap Chinese brand. Fancy looking but of questionable reliability. The same goes for the GPUs they make.

Of course, there are more MoBo manufacturers out there, but the ones listed above are those that currently offer the latest consumer-grade chipset MoBos (e.g. Intel 600/700-series). Another good MoBo brand is Supermicro, especially when you're looking for a server MoBo, e.g. for an Intel Xeon CPU.

You'd also need to check the battery for the clock (usually a CR2032 coin cell, nominally 3 V); that eventually goes flat and can lead to problems.
A dead CMOS battery usually isn't much of an issue, and the system does work even without a CMOS battery. Sure, older PCs have issues keeping the time right and you can't save BIOS settings, since the CMOS battery is needed for that.

The main problem with Win 10 on my 2010 machine is that it doesn't use the higher-resolution modes of my more recent monitor, but maybe that can be fixed with a new graphics card?
Yeah, a better (newer) GPU most likely fixes the issue, since it's up to the GPU what kind of resolution it can support.

This reminds me of instances from the days of old, where I had issues with a monitor not being able to support a higher resolution. E.g. within the OS settings I could pick the 800x600 resolution, but once I hit "Apply" I got a bunch of coloured lines and a garbled mess, since the VGA monitor wasn't capable of showing such a "high" resolution. So I had to hook up an SVGA monitor that did support this resolution, just so that I could bring the resolution back down to a lower one (640x480), which the VGA monitor did support. :cheese:

I think it's a philosophical problem: if you could have anything at all that you want, but were only allowed one thing, what would you choose? Whatever I look at, something even more interesting comes up. Sometimes I focus on something for a few weeks, but then something else gets my attention or I have to deal with some real-world problem. Whereas in the old days there was nothing at all to do, and in a year you might find one interesting new thing to do, and you'd get absorbed in it.
Well, my take on this is that with better communication (namely the internet), one can open one's mind to far more possibilities and even end up with information overload, whereby it becomes very hard to focus on just one or a few things. It takes quite a lot of willpower to stand against all that temptation and force yourself to focus on what you like the most (that is, if you even know what you like).

Probably I'll go for AMD. With Intel multicores they often use hyperthreading, where some of the cores are faked in software, whereas with AMD, multicore really is multicore, eg if it says 4-core it really is 4 separate CPUs, but an Intel 4-core might be 2 CPUs pretending to be 4.
Hyperthreading isn't only an Intel thing; AMD does it as well (they call it SMT). Nowadays it is usually described in terms of "threads": every single CPU core can have two threads (aka hyperthreading).
E.g. the top-of-the-line AMD CPU, the Ryzen 9 7950X3D,
specs: https://www.amd.com/en/products/apu/amd-ryzen-9-7950x3d
has 16 physical cores, but with hyperthreading it has 32 threads.

The latest/fastest CPUs, on both the Intel side and the AMD side, that don't have hyperthreading (and thus none of the virtual cores that you for some reason despise), are:
Intel 9th gen: Core i7-9700K - 8 cores and 8 threads
AMD 3000-series: Ryzen 5 3500X - 6 cores and 6 threads

Anything newer/better than these two will give you virtual cores inside the CPU. This is the norm today (and actually has been for a decade).
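If you want to see the split for yourself, e.g. from a Linux Mint live session, this shows sockets, physical cores and threads at a glance (just a quick sketch; the exact wording of the output varies a little between lscpu versions):

# physical cores vs threads (virtual cores)
lscpu | grep -E 'Socket|Core|Thread'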

do any of the newer PCs or cards do HDMI with audio?

HDMI with tvs is both audio and picture, but with my existing PCs it seems to only do picture, I suppose you might need integration of the graphics card with the audio card for it to output both, but DVD players etc cope with that so it ought to be viable, maybe a card which does both graphics and audio. I dont know if such exist?
All HDMI versions, from the very first 1.0 up to the latest 2.1, are capable of carrying audio alongside the image.
Full version specs comparison here: https://en.wikipedia.org/wiki/HDMI#Main_specifications

So, if you can't get audio over HDMI, better look at what hardware/software issue you have, since HDMI itself is capable of carrying both, regardless the version. After all, HDMI was created to be a single-cable digital audio/video connector interface.
 

Richard1234

Distinguished
Aug 18, 2016
277
5
18,685
first, regarding your earlier posting, where I want both modern and ancient, I want the modern side to be as modern as possible, because I wish to maybe use the PC for maybe even 10 years, like my 2010 one has been used for 13 years. But in terms of say speeds and number of cores, a compromise with cost. I want the modern to be the most modern, and the ancient to be the most ancient: I want everything!

Out of the 4 major MoBo manufacturers: MSI, Asus, Gigabyte and EVGA, my 1st choice is MSI and 2nd one would be Asus.
I'll try MSI then,

Asus is known for building reliable MoBos, without much fuss or shady dealings. Also, i've heard that their customer support is good (haven't personally contacted Asus about anything).

MSI is my personal brand preference and i'm happy with MSI products. Thus far, i have: 2x MSI MoBos, 3x MSI GPUs and 1x MSI monitor. All work without issues. Sure, MSI has gotten some bad press lately but who hasn't. I still consider MSI as my #1 brand for MoBos and GPUs.
when you say GPU, I understand the general concept of a GPU, but are you talking of a GPU instead of a graphics card? integrated into the motherboard?

also will the GPU be a standardised interface, where different GPUs can run the same code, or is it something very proprietary, where you need software programmed for the specific GPU?

eg standardisation can be where the GPU implements at a hardware level some standardised programming interface, the way IDE drives are interchangeable.

but if the interface is a Windows driver, that is a problem if I try to program the interface directly without an intervening OS. I may try to program the hardware directly without an OS, where windows drivers are cold comfort. I dont want to have to write different software for each GPU.

EVGA used to make plenty of high-end MoBos but has taken a bit of time to reflect on their business (e.g EVGA stopped GPU production with the RTX 40-series, leaving the RTX 30-series as the last GPU series EVGA made). MoBo wise, EVGA is like the "odd man out", focusing on a niche aspect of MoBo manufacture, namely high CPU overclocks. For example, here's the latest EVGA Z790 chipset MoBo;
specs: https://www.evga.com/products/product.aspx?pn=121-RL-E798-KR
It really looks unique. It's also E-ATX in size, and the two most apparent features, contrary to the "normal" MoBo layout, are: a 90 degrees turned CPU socket, which puts the RAM slots above the CPU and not to the right of it, as most conventional MoBos have them; and power connections that are rotated 90 degrees to be parallel to the MoBo itself, rather than sticking straight up, like most MoBos have it. This makes the power cable management far cleaner and nicer to look at.
With MoBos they make and GPUs they did make, EVGA always stood out as "odd man out", by making hardware contrary to conventional design. EVGA likes to push the limits and try something else with MoBos/GPUs they make/made. And EVGA products are known to be reliable as well.
are theirs just for Intel? although you dont recommend them next, Gigabyte I think do motherboards for both AMD and Intel cpus. Do MSI do motherboards for both AMD and Intel, or only one of these 2?

Gigabyte is something i don't suggest to anyone, mostly due to their shady dealings with MoBo revisions.
Article: https://web.archive.org/web/2015022...hing-it-a-motherboard-revision-too-far,3.html
reading through that, it matches my experience with my Gigabyte motherboard: there were different revisions, and I had to look for stuff relating to my specific revision, eg when I contacted them, they asked which revision!

this sounds like "bait and switch", which is a sharp trading practice, where you are shown something impressive as "bait", but what you receive is something worse, the "switched" item.

where probably they are developing an expensive boosted version as bait, and then delivering something worse.
https://web.archive.org/web/2015022...hing-it-a-motherboard-revision-too-far,3.html
Since original article is gone, i had to use WayBackMachine to load the webpage, luckily the images do show up, so you can compare the MoBo revisions Gigabyte did.
As of late, Gigabyte screwed up with their new PSU lineup as well. It took several videos from Steve Burke (GamersNexus) to bring light to the issue, before Gigabyte finally succumbed and made a recall of their PSUs. I can post the PSU saga if interested.
Also, i used to have Gigabyte PC case (Gigabyte GZ-G2 Plus) and it was a poor one. So, no, i won't be buying anything Gigabyte nor suggest getting Gigabyte either.

AsRock is another MoBo manufacturer and mostly known for cheap, low-end stuff. But some AsRock MoBos are good. Just need to pick the right one.

NZXT is another niche MoBo manufacturer, only having a few MoBos to choose from within each chipset.

could you clarify what you mean by chipset?

Since NZXT is mainly PC case manufacturer, their MoBos are best suited to be used with NZXT PC cases, for unified clean looks. E.g NZXT N7 Z790 MoBo;
specs: https://nzxt.com/product/n7-z790
As of the brand itself, they are good. While i don't personally like the PC cases they make (too bland/boring for me), their RGB solutions are nice. Even i have two sets of NZXT HUE+ ARGB controllers + LED strips in my PCs, for that ARGB eyecandy, + 140mm ARGB fans as well, which are among the highest CFM fans in the 140mm fan segment. Oh, their customer support is also good. I've had a few run-ins with them and all have been pleasant.

Biostar is another MoBo manufacturer and old one. Though, Biostar mostly makes cheap/low-end MoBos and isn't something one would usually consider.

And lastly, there is Colorful. Colorful MoBos are fancy and mostly marketed to the Asian market. Essentially a cheap Chinese brand. Fancy looking but questionable reliability. Same goes for the GPUs they make.

Of course, there are more MoBo manufacturers out there, but the ones listed above are those that currently offer the latest consumer grade chipset MoBos (e.g Intel 600/700-series). Another good MoBo brand is Supermicro, especially when you're looking for a server MoBo, e.g for an Intel Xeon CPU.


A dead CMOS battery usually isn't much of an issue and the system does work even without one. Sure, older PCs have issues keeping the time right and you can't save BIOS settings, since the CMOS battery is needed for that.
main problem is firstly not saving BIOS settings, if you are using nonstandard ones, and also problems when using Windows, eg https certificate validity errors. It may depend on whether the version of Windows resyncs time from the internet like smartphones do, but even then it can go wrong. Also, if you use the computer without internet, the time will be wrong, and this causes problems with file timestamps. eg say I created a file and later forget what the file was called and which directory it is in. I can try to locate it by looking at the timestamps of the directories: say I created a file yesterday, but forget which directory and subdirectory, and I havent created any other files for some weeks. I can then just look for the top level directory with a december 2023 timestamp, then within that, list the directory by timestamp, check only the december 2023 entries, and thus navigate down to the lost file even though I dont remember what it was called! But if the clock is wrong, I may never locate the file again!

also I can easily filter out all the recent files I created via timestamps. Correct timestamps are highly important! eg with *nix, there are rebuild commands (the make command and makefiles) which keep track of file dependencies, eg
say file1 is dependent on file2, file3, file4: the rebuild command will only rebuild file1 if its timestamp is earlier than those of the existing file2, file3, file4.

this all goes kaput if timestamps are incoherent!

the principle is that complex software builds have lots of automatically generated files, and if you just alter one source file, you only want dependent files rebuilt, not the entire system!
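
the timestamp rule itself is simple, here is a minimal sketch in C of the check that make effectively does (the filenames are just placeholders, not my actual files):

Code:
/* Minimal sketch of make's timestamp rule: rebuild "file1" only if it is
   older than any of its dependencies. Filenames are placeholders. */
#include <stdio.h>
#include <sys/stat.h>

static int needs_rebuild(const char *target, const char *const deps[], int ndeps)
{
    struct stat t, d;
    if (stat(target, &t) != 0)
        return 1;                      /* target missing: must build */
    for (int i = 0; i < ndeps; i++) {
        if (stat(deps[i], &d) != 0)
            return 1;                  /* dependency missing or unreadable: rebuild */
        if (d.st_mtime > t.st_mtime)
            return 1;                  /* dependency is newer than the target */
    }
    return 0;
}

int main(void)
{
    const char *deps[] = { "file2", "file3", "file4" };
    if (needs_rebuild("file1", deps, 3))
        printf("file1 is out of date, rebuild it\n");
    else
        printf("file1 is up to date\n");
    return 0;
}

which is exactly why a wrong clock wrecks incremental builds: if a dependency's timestamp lands in the past, the target looks up to date when it isnt.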




Yeah, better (newer) GPU most likely fixes the issue since it's up to GPU what kind of reso it can support.
This reminds me of instances from days of old, where i had issues of a monitor not being able to support higher resolution. E.g within OS settings, i could pick the 800x600 reso, but once i hit "Apply", i got a bunch of colored lines and a garbled mess, since the VGA monitor wasn't capable of showing such "high" resolution. So, i had to hook up an SVGA monitor that did support this resolution, just so that i could bring the resolution back down to a lower one: 640x480, which the VGA monitor did support. :cheese:
yeah, its one of those scary circumstances where the video mode isnt compatible, and so you cant get out as that is via the video!

Well, my take on this is, that with better communication (internet namely), one can open their mind for far more possibilities and even end up with information overload, whereby it would be very hard to focus on single/few things. It takes quite a lot of willpower to stand against all that temptation and force yourself to focus on what you like the most (that is, if you even know what you like).
the human brain wasnt designed for today's information overload, I think humans and mammal + bird brains are designed for a fixed terrain, where the animal has a home, and then makes excursions from that in different directions, where most info is established and fixed or changes gradually eg with the seasons, where an animal will know intensely its geographical zone. birds migrate but I think will return to the exact same spot. if you drive a car, you get this with the roads of your district, eg if I see a jam ahead, I know alternative routes to avoid those, and I know how to evade left only left lanes, and right only right lanes. where if you are driving straight on, you can get forced off the road if in the wrong lane! Also with shops, these remain fixed for years or even decades, where you can head off to a specific shop say 10 miles away, because you already know that subset of the domain. Also stuff within a shop remains in a fixed location for years. when they do change locations of items, you get total chaos!

eg if I want to buy some WD40 or sandpaper, I head to B&Q, there are several, so I might combine that with a visit to some other shop, and head for the nearest B&Q to that, where I then head off say to a specific dual carriageway, and make perfect lane choices, etc.

if I drive in an unfamiliar town, I make many mistakes!

but today, we get an incessant torrent of totally different info, which also kind of pushes out established memories, and one kind of ends up knowing nothing, because whatever one's brain latches onto is soon eclipsed by a snowstorm of other stuff. "moss doesnt gather on a rolling stone"! memory is informational moss! if you are stuck in a river, you can progress to the sea by going downstream. but if you are stuck in an ocean, its not apparent which way to head!

in the old days, what I knew, I knew really well, because always raking over and navigating the same environment of things and facts. most of my ideas and opinions are from the early era of the internet or earlier.


Hyperthreading isn't only an Intel thing, AMD does it as well (AMD's name for it is SMT). Nowadays, spec sheets simply list it as "threads". So, every single CPU core can run two threads (aka hyperthreading).
E.g top-of-the-line AMD CPU: Ryzen 9 7950X3D,
specs: https://www.amd.com/en/products/apu/amd-ryzen-9-7950x3d
Has 16 physical cores but with hyperthreading, it has 32 threads.

The latest/fastest CPUs, both on the Intel side and the AMD side, that don't have hyperthreading (thus the virtual cores that you for some reason despise), are:
various reasons I dont like hyperthreading:

firstly, I think multitasking should be left to the OS, I'd rather have raw power unitasking, and the OS can multitask the 1 cpu to as many tasks as it wishes. where single threaded cores give me raw power for the OS to multitask.

the hyperthreading is just shunting multitasking to the microcode, and its by hardware people who usually are inept on the software side! hardware people's software is always jawdroppingly curmudgeonly. when hardware people design a software protocol, eg the one for IDE drives, it is a nightmare from hell!

but when software people design a software protocol, it is clean and efficient.

for the classic case of how hardware people are inept, look no further than USB2, where the hubs are always problematic. where some sockets just wont work unless in the right mood. direct sockets are much better, because the hardware people dont get too clever, eg sata sockets. but with USB, there are so many problems. and USB3 is even worse, with titchy cables. I have literally 7 hard drives, beyond the scope of USB3 to connect all those simultaneously.

its the same problem in my opinion as CISC versus RISC. With CISC eg Motorola and Intel, the CPU tried to do everything, and eg Motorola CPU even has binary coded decimal instructions. but with RISC, they left as much as possible to the compiler and software above the CPU, resulting in much more efficient chips, eg ARM chips, which were 4 times as fast as say Motorola and Intel for the same technology. the simplified hardware side, meant they could split instructions into 4 phases, and process 4 instructions simultaneously, leading to 4x the speed.

the original ARM for the Acorn Archimedes computer was really radical, where ARM = "Acorn RISC Machine", where Acorn produced the ancient Acorn Atom home computer, and the BBC computer. with that ARM chip, I think it didnt even have a multiply instruction, this was done in software! they jettisoned as much structure as they could, and it is one of the most successful chip series ever!
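
doing multiply in software just means shift-and-add, eg something like this little C sketch of the idea:

Code:
/* Shift-and-add multiply: how a CPU without a multiply instruction
   can do it in software. Unsigned 32-bit operands, wrapping result. */
#include <stdint.h>
#include <stdio.h>

static uint32_t soft_mul(uint32_t a, uint32_t b)
{
    uint32_t result = 0;
    while (b != 0) {
        if (b & 1u)          /* if the low bit of b is set ...   */
            result += a;     /* ... add the shifted multiplicand */
        a <<= 1;             /* move to the next bit position    */
        b >>= 1;
    }
    return result;
}

int main(void)
{
    printf("%u\n", soft_mul(1234u, 5678u));  /* prints 7006652 */
    return 0;
}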

even the instruction formats are standardised, where the bits of the instruction can be processed in parallel.

for ARM, they analyzed the code actual programs used, and found that a lot of the instruction codes of CPUs of that era were never used, basically a waste of space and cluttering up the electronics.

Intel eventually went for a RISC core, where the CPU is itself a software sham of hardware above some inner RISC chip, main advantage is they preserve backwards compatibility to 1980.

around 2006, I used to communicate a lot with an american guy with a degree in electrical engineering, and when I said Intel vs AMD, the guy said: AMD hands down the best because the multicores are true cores and not phoney software level cores. at that time true cores, evidently no more, the rot has set in!

your 16 cores with 32 threads presumably is each core multitasking in the microcode as 2 cores, but with OS level multitasking, one core can multitask any number of tasks, 2, 5, 50, 100, etc, AND the OS can choose the level of granularity, eg switch tasks more rapidly or less rapidly,

hyperthreading just seems inept duplication of effort, leave to the OS what the OS does best.

your hyperthreading is just limited to a fixed number of threads per core, that is very inept! it shows they are kludging the multitasking! quality multitasking has an unlimited number of threads!

it just sounds completely stupid and pointless for say 16 cores to multitask 32 threads. why not just have 16 true cores, and let the OS multitask 100 or 1000 threads?

I want my multicores to really be parallel computers, whereas hyperthreading isnt parallel as the core is either doing the one thread or the other. its basically a confused concept and a classic case of people of one specialism embarrassing themselves in another specialism.

what is really great about unthreaded multicores, is you have total asynchronicity, where all the cores are doing stuff AT THE IDENTICAL TIME.

if you program a system above hyperthreading, some subtle bugs will get through, which only manifest when you have true parallelism. because with hyperthreading, the core has to flit between the one pretence of a CPU and the other pretence of a CPU. and the multithreading is an illusion, under a microscope the hyperthreaded dual core is just one thing at a time. but the unthreaded dual core is CONTINUALLY 2 THINGS IN PARALLEL.

amongst other things it is going to bloat up the microcode, and in an inept way.

secondly there is a problem that x86 CPUs have a LOT of hardware configuration registers, and hyperthreaded ones share some of these registers, this complicates keeping track of the registers.

its the programming problem of remote side effects, or "action at a distance" eg

first instruction: x = 1 ;
second instruction: y = 2 ;

with the second instruction, I dont expect x to now become 2. I want different names to be different variables. but with hyperthreading, the one CPU setting a configuration register can cause another CPU's configuration register to change. action at a distance is a known cause of subtle bugs.
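
just to illustrate the principle (a toy C sketch, nothing Intel-specific): two threads each think "mode" is their own setting, but it is one shared variable, so one thread's write silently changes what the other reads:

Code:
/* Toy illustration of "action at a distance" through shared state.
   Both threads believe they own "mode", but it is a single shared
   variable, so one write overwrites the other. Build with -pthread. */
#include <pthread.h>
#include <stdio.h>

static int mode = 0;              /* shared "configuration register" */

static void *worker_a(void *arg)
{
    (void)arg;
    mode = 1;                     /* A sets "its" mode to 1 */
    return NULL;
}

static void *worker_b(void *arg)
{
    (void)arg;
    mode = 2;                     /* B sets "its" mode to 2 */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker_a, NULL);
    pthread_create(&b, NULL, worker_b, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("mode = %d\n", mode);  /* 1 or 2, depending on timing */
    return 0;
}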

perhaps if the hyperthreading didnt share variables, I would be less cross, and that is the ineptitude of Intel, like their ineptly asymmetrical USB sockets. where its tricky to get them the right way round, finally with USB3 they did symmetric sockets, a problem I knew right from the earliest USB!

adept sockets are symmetrical, you can put them either way, eg bayonet bulbs and headphone sockets, the most inept ones are probably the PS/2 ones, which are a total pain to insert correctly! its not rocket science, but the electrical engineering phds of Intel totally bodged this!


one of the radical things about the original C language, is the design got rid of action at a distance, whereas I think Fortran programs had really subtle bugs because of action at a distance problems.

C++ then trampled over many of the innovations of C, where C++ was an extension of C by a totally different person ignorant of the history that led up to C. I went on a course on the history of computer languages, where the guy explained all the major innovations over the decades, and then C++ trampled over the lessons of history!

I have dabbled with multicore programming, just for AMDs of my era, where there wasnt multithreading, to program the hyperthreaded Intel ones of that era, I will have to scour the Intel manuals and have to code around shared registers, its a nuisance as I have to keep track of which cpu registers are shared and which are individual, a total pain!
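
as a side note, you can at least ask the CPU whether it reports hyperthreading at all; a hedged sketch using GCC/Clang's <cpuid.h> (CPUID leaf 1, EDX bit 28 is the HTT flag, which strictly only means the chip may expose more logical processors than physical cores):

Code:
/* Sketch: query CPUID leaf 1 and test the HTT flag (EDX bit 28).
   Needs GCC or Clang on an x86/x86-64 machine for <cpuid.h>. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        printf("CPUID leaf 1 not supported\n");
        return 1;
    }
    if (edx & (1u << 28))
        printf("CPU reports HTT (may expose more logical than physical cores)\n");
    else
        printf("CPU does not report HTT\n");
    return 0;
}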

it really is the old CISC problem of CPU designers trying to do stuff which the software people dont need any help with!

software people will make one cpu be 100 tasks, but hardware people make 1 cpu be 2 tasks!

that is totally unprofessional! if it were good, the hyperthreads would be unlimited!

you call that multitasking, 1 cpu can be 2? absolute rubbish!

to do pre-emptive multitasking is quite complex, as it requires timer interrupts and asynchronous acrobatics, hardware level multitasking is a really dubious idea as that kind of complexity is best done in software, not at the inner level of the CPU.

and that is why their cores will say just have 2 threads or other constant number, as that is kludged multitasking!

multitasking is a software problem, multicores is a hardware problem. multitasking above multicores is a software problem. Intel have tried to do it in the hardware and ended up with pathetic multitasking of 1 core being 2 threads, which sounds like inept fixed table multitasking.

the hordes of engineering phds of Intel burnt the midnight oil for months and came up with 1 cpu multitasking 2 threads! tell it to the hand!

anyway, that's my rant!

its a bit like the inbuilt satnav of my car, lots of roundabouts dont exist on it, because they didnt exist at the time it was manufactured. much better just to use a standalone satnav eg Tomtom and Garmin. satnavs are a software problem, cars are a mechanical engineering problem, mechanical engineers should not try to dabble with software, as they'll make a dog's breakfast of it!

Intel 9th gen: Core i7-9700K - 8 cores and 8 threads
AMD 3000-series: Ryzen 5 3500X - 6 cores and 6 threads

Anything newer/better than these two will give you virtual cores inside the CPU. This is the norm today (and has been for a decade).
I forget the precise era I was programming such things, but I think between 2006 and 2010, which is 13 years or more ago! I'd have to look at the timestamps of my files to see which era I was doing the stuff!

in my era, AMD cores were true cores, today they are Intel junk.

All HDMI versions, from the very first 1.0 up to the latest 2.1, are capable of carrying audio alongside the image.
Full version specs comparison here: https://en.wikipedia.org/wiki/HDMI#Main_specifications

ah yes, HDMI has always carried both audio and video, but all graphics cards on my PC only carry graphics via the HDMI! if I connect them to my bluray recorder or TV, only picture emerges, no sound!

for sound, I have to connect the PC's headphone socket to some speakers, but my bluray recorder cant integrate graphics from HDMI with audio from another cable!

So, if you can't get audio over HDMI, better look at what hardware/software issue you have, since HDMI itself is capable of carrying both, regardless the version. After all, HDMI was created to be a single-cable digital audio/video connector interface.

looking on Google, it seems its a configuration problem! I think they have set things up by default so no sound, and then you have to reconfigure.

if you google for "pc hdmi no sound", you will see its a very frequent topic.

if you install windows, a lot of things are defaulted to dubious settings, eg the windows which "snap" on windows 10, a total nuisance, and Microsoft Edge defaulting to a webpage with news stories highly politically biased towards Labour. where you have to battle to stop that propaganda page showing.

the ombudsman should ban Microsoft defaulting to political interference in Britain.
 
Last edited:

Richard1234

Distinguished
Aug 18, 2016
277
5
18,685
I should add that what you said about Gigabyte, sounds like "Fraud by false representation", see this URL for the UK legal definition of this:

https://www.legislation.gov.uk/ukpga/2006/35/section/2

where the operative clauses here are (1)(b)(i) and (2)(a),

ie where it is true but misleading, and it is to trick people into buying something when they get something else.

its a civil offence, and you can go to jail for it, but you have to do a private lawsuit, and in actual court cases you have to show they deliberately misled, ie it wasnt just a mistake and the liability needs to be above £25000, In practice this kind of thing is settled out of court via refunds, and the defendants are free to reoffend.

as regards multitasking, the entire point of software multitasking is to make 1 piece of hardware to pretend to be several, and I think originated thus:

originally a university would have a computer which filled a room, and academics would book sessions to use it, eg astronomers feeding in data about stars and galaxies.

but this crowded out most people, where maybe 1 astronomer could hog the computer for several days analyzing the stars of the milky way! so they invented a way for everyone to use the computer at the same time, namely multitasking, where the computer switched between the people, via terminals.

multitasking above hardware in fact adds very little latency if done adeptly.

and there is absolutely no point in hyperthreading, its as stupid as other Intel ideas such as asymmetric USB sockets, where 50% of the time one tries the socket the wrong way. and the overcomplicated USB system, where the hubs never function properly, with some sockets just not functioning or being very slow inexplicably. where the manufacturers of hubs are either bodging the hubs or the control circuitry is bodged.

if you do the multitasking in software above a true core, its much more elastic than this ridiculous hyperthreading idea.

the entire idea of multitasking, is each task acts as if its another CPU, have 10 tasks, and its like having 10 cpus. Intel just reinvented the wheel and their one is a square wheel! ie worse.

the Commodore Amiga 500 computer had 1/2 megabyte of memory and had pre-emptive multitasking on a Motorola 68000 CPU, which was 7 or 8 MHz, maybe around 1988.

with the Commodore Amiga series, you could even inspect the multitasking, and each task has its own virtual CPU.

also I think it was the first personal computer with pre-emptive multitasking, the other desktop systems of that era were only cooperative multitasking, and Windows wasnt pre-emptive until 1995.

commodore amiga windows are asynchronous to the task: with Windows, when you open a file requester, you cannot access the program window, but with the Amiga 500 in 1988, you can always access the program window! really superb system, and eg Electronic Arts originally was developing just for the Amiga, eg Deluxe Paint and Deluxe Music, and eventually moved to PCs.

if you want 1 cpu to pretend to be 10 equal ones, you just set up 10 tasks and give each equal time on the real cpu. But if you want one of the 10 to have twice as much usage, you give it twice as much time on the cpu. it is vastly more mouldable than hyperthreading. if you want it to have 10% more usage, just give it 10% more time. its so much better than this hyperthreading junk.
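
just to spell the idea out, a toy weighted round-robin in C (purely a simulation of time slices, not a real scheduler, real ones are driven by timer interrupts):

Code:
/* Toy weighted round-robin: each task gets time slices in proportion
   to its weight, so "twice the weight" means "twice the CPU time".
   Purely a simulation of the idea, not a real scheduler. */
#include <stdio.h>

struct task {
    const char *name;
    int weight;              /* relative share of CPU time */
};

int main(void)
{
    struct task tasks[] = {
        { "task-a", 1 },
        { "task-b", 2 },     /* gets twice as much time as task-a */
        { "task-c", 1 },
    };
    int ntasks = 3;

    for (int round = 0; round < 2; round++)
        for (int i = 0; i < ntasks; i++)
            for (int slice = 0; slice < tasks[i].weight; slice++)
                printf("round %d: running %s for one time slice\n",
                       round, tasks[i].name);
    return 0;
}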

but to do multitasking properly, requires a moderate amount of code, as its kind of the core of an operating system, where to do that at the hardware level, you have the most difficult part of an operating system in the microcode, eg you need memory management, which is kind of ridiculous: more efficient to just externalise the microcode beyond the cpu,

which also allows alternative OS's to be done, eg Linux versus Windows,

hyperthreading is ultimately a sham, its to fool people that they have multiple cores, when in fact it is just shoddy multitasking, and you'd be better off having just 16 true cores and no hyperthreading than 16 cores and 32 threads.

AMD probably had to follow suit otherwise Intel would bamboozle people that their wares were twice as many cores for the same level of technology.

the fact Intel's hyperthreading is 2 threads from 1 core, strongly suggests it is table based multitasking, and this is very inept. Its the clunky kludged multitasking of a wannabe!

table based, because doing it properly would bloat up the microcode, and probably be beyond the scope of the hardware engineers, as this kind of thing is a very tricky software problem as it is truly asynchronous parallel programming.

Intel's multicore architecture if used for true cores is brilliant, but done hyperthreaded is a complete no no!

the electrical engineer I used to communicate with was really keyed in with the best ideas, eg when I mentioned the floating point instructions of Intel, he said absolutely no point as you can do those instructions FASTER in software! I was going to use the floating point instructions, and he guffawed at this idea!

just another example of the ineptitude of Intel. some of their ideas are great, but some are dog's breakfasts.

their CPUs are filled with heaps of junk that are a complete waste of transistors!

their idea of a RISC core is brilliant, as it marries RISC with backwards compatibility, and thus outdid Motorola who were unable to take their rival 68000 series forwards beyond I think the 68060.
 

Aeacus

Titan
Ambassador
when you say GPU, I understand the general concept of a GPU, but are you talking of a GPU instead of a graphics card? integrated into the motherboard?
There are 3 forms of GPUs out there;
* GPU - Graphics Processing Unit - usually refers to a dedicated GPU. E.g like my MSI GTX 1660 Ti Gaming X 6G that i have in my Skylake build.
* iGPU - Integrated Graphics Processing Unit - as the name says, this is the integrated GPU, namely the one that is inside the CPU. E.g for my i5-6600K, it would be Intel HD Graphics 530. No modern MoBo has an integrated GPU on the MoBo itself. Ancient MoBos did have the iGPU on the MoBo itself.
* eGPU - External Graphics Processing Unit - this refers to a dedicated desktop GPU that sits in a separate dock and is hooked up to a laptop, to considerably increase the laptop's graphics performance.

also will the GPU be a standardised interface, where different GPUs can run the same code, or is it something very proprietary, where you need software programmed for the specific GPU?
All PCI-E based GPUs have the standard interface. E.g my GTX 1660 Ti has PCI-E 3.0 interface, while latest GPUs offer PCI-E 4.0. Of course, PCI-E revisions are backwards compatible and you can easily run PCI-E 4.0 GPU inside PCI-E 1.0 slot (with some performance reduction, ~10%).

Once you 1st install GPU in the system, it uses default Windows drivers to show image. All modern GPUs are capable of that. Once you boot to OS, then you need to go to the manufacturer website (Nvidia, Radeon or Intel), to download GPU drivers for your GPU. This will essentially "unlock" full performance of your GPU, where you can utilize far greater resolutions and refresh rates, compared to what default Windows graphical drivers are capable of delivering.
E.g for Nvidia, driver download is from here: https://www.nvidia.com/download/index.aspx

As of programming of GPU goes, well, it should be same within the same architecture but most likely different if you switch architecture.
For example: Nvidia Turing architecture, it would either be, where the same program code works on all Turing architecture GPUs,
full list here: https://www.techpowerup.com/gpu-specs/?architecture=Turing&sort=generation
Or on those GPUs who's GPU chip is same. E.g TU116.
GPUs with TU116 chip: https://www.techpowerup.com/gpu-specs/nvidia-tu116.g902

Of course, i really don't know since i'm not a programmer (my missus is).

are theirs just for Intel? although you dont recommend them next, Gigabyte I think do motherboards for both AMD and Intel cpus. Do MSI do motherboards for both AMD and Intel, or only one of these 2?
EVGA yes, makes only Intel chipset MoBos.

For AMD, namely AM5 CPU socket (latest for AMD), you have brand choice between: AsRock, Asus, Biostar, Gigabyte, MSI and NZXT.
Full list here: https://uk.pcpartpicker.com/products/motherboard/#s=41&sort=name&page=1

could you clarify what you mean by chipset?
In a nutshell:
In a computer system, a chipset is a set of electronic components on one or more integrated circuits that manages the data flow between the processor, memory and peripherals. The chipset is usually found on the motherboard of computers. Chipsets are usually designed to work with a specific family of microprocessors. Because it controls communications between the processor and external devices, the chipset plays a crucial role in determining system performance.
Source: https://en.wikipedia.org/wiki/Chipset

Examples with latest CPUs/chipsets;
Intel 12th generation CPUs (Alder Lake architecture) were made alongside Intel 600-series chipset.
Intel 13th generation CPUs (Raptor Lake architecture) were made alongside Intel 700-series chipset.
Intel 14th generation CPUs (Raptor Lake-S Refresh architecture) don't have their own dedicated chipset.

If you have Intel 600-series chipset Mobo, Intel 12th gen CPUs work off the bat. But to use 13th gen or 14th gen CPU, you need to have latest BIOS.
In Intel 700-series MoBo, Intel 12th gen and 13th gen CPUs work off the bat. But to use 14th gen CPU, you need to have latest BIOS.

AMD Ryzen 7000-series CPUs (Raphael architecture) were made alongside AMD 600-series chipset.

Older AMD Ryzen CPU series, like 5000- or 3000-series will not work with AMD 600-series chipset since older AMD Ryzen CPUs use AM4 CPU socket, while latest, Ryzen 7000-series CPUs are using AM5 CPU socket.

(Perhaps a bit complex to understand, but if you need clarification, let me know.)

if you want 1 cpu to pretend to be 10 equal ones, you just set up 10 tasks and give each equal time on the real cpu. But if you want one of the 10 to have twice as much usage, you give it twice as much time on the cpu. it is vastly more mouldable than hyperthreading. if you want it to have 10% more usage, just give it 10% more time. its so much better than this hyperthreading junk.
I don't know that in-depth about CPUs and hyperthreading, especially coding the software for CPU to execute. I'm more of a: "jack of all trades but master of none", whereby i can explain the complex world of PC hardware to a person who has 0 idea about PC hardware. But i haven't focused in-depth into any of the minute workings of the said hardware itself (e.g CPUs and hyperthreading or GPUs and how they actually compute graphics). Since this kind of in-depth knowledge isn't something common person wants to know, when they are looking for what PC to buy or are struggling to make their PC to work right.

But as far as i've understood hyperthreading - it allows 1 CPU core to compute 2 tasks in parallel. Without hyperthreading 1 CPU core can only compute 1 task at a time.

E.g like the part i just quoted, where you can code CPU to do 10 tasks and when giving each task equal time on CPU core, you can code it so, that CPU will finish all 10 tasks at the same time. But CPU is still able to compute one task at a time.
But if you'd have hyperthreading, whereby 1 CPU core is able to compute 2 tasks in parallel (and i know it's far more complex to code all that), you could split the tasks between threads equally, whereby it takes half of the time to finish all the 10 tasks.

AMD probably had to follow suit otherwise Intel would bamboozle people that their wares were twice as many cores for the same level of technology.
The last time AMD didn't have more threads than CPU cores was back in 2012, with Piledriver architecture CPUs (e.g FX-6300). But AMD took a hiatus from the CPU market for 5 years, until coming out with the Ryzen 1000-series in 2017. From that point forwards, AMD tried to gain sales by producing high core count, high thread count CPUs, while Intel was focusing on 4 core CPUs but with high core frequency.

E.g Ryzen 5 1600 (mid tier CPU) had 6 cores and 12 threads. While Intel counterpart at that time, 7th gen Core i5-7600 had 4 cores and 4 threads, but higher core frequency.
For several years, AMD pushed more and more cores/threads into their CPUs while Intel still focused on mainly 4 core CPUs but high core frequency, while reserving hyperthreading only for high-end CPUs (Core i7). Intel was slow to catch up in terms of core amounts in their CPUs. 8th gen and 9th gen mid-tier CPUs were 6 cores 6 threads (i5-8600 and i5-9600). Starting with 10th gen CPUs, Intel caught up to AMD and offered 6 core, 12 thread CPUs as their mid-tier CPU (i5-10600).

So, it's more AMD's fault for pushing high core/thread count CPUs to the mainstream, rather than Intel's, who only had hyperthreading for high-end CPUs.
Note: Core i3/Ryzen 3 - low-end (office build), Core i5/Ryzen 5 - mid-tier (gaming), Core i7/Ryzen 7 - high-end (3D render), Core i9/Ryzen 9 - mythical level.

Mid tier CPU (Core i5/Ryzen 5) core/thread timeline since 2016 (also added in Core i7 to show that hyperthreading was for high-end);
Intel i5-6600 - 4c/4t
Intel i7-6700 - 4c/8t (hyperthreading)
AMD R5 1600 - 6c/12t (hyperthreading)
Intel i5-7600 - 4c/4t
Intel i7-7700 - 4c/8t (hyperthreading)
AMD R5 2600 - 6c/12t (hyperthreading)
Intel i5-8600 - 6c/6t
Intel i7-8700 - 6c/12t (hyperthreading)
AMD R5 3600 - 6c/12t (hyperthreading)
Intel i5-9600 - 6c/6t
Intel i7-9700 - 8c/8t
Intel i5-10600 - 6c/12t (hyperthreading)
Intel i7-10700 - 8c/16t (hyperthreading)
Intel i5-11600 - 6c/12t (hyperthreading)
Intel i7-11700 - 8c/16t (hyperthreading)
Intel i5-12600 - 6c/12t (hyperthreading)
Intel i7-12700 - 12c/20t (hyperthreading)
AMD R5 5600 - 6c/12t (hyperthreading)
Intel i5-13600 - 14c/20t (hyperthreading)
Intel i7-13700 - 16c/24t (hyperthreading)
AMD R5 7600 - 6c/12t (hyperthreading)
Intel i5-14600K - 14c/20t (hyperthreading)
Intel i7-14700K - 20c/28t (hyperthreading)

So, this is how things are now with CPUs and cores/threads.
In my opinion, a 4c/4t CPU is more than enough for mid-tier use and i don't get why we need that many cores/threads at the current date. Most tasks are single-core regardless, whereby only some games may utilize 2, 4 or up to 6 cores. And anything above that, e.g 8 cores, in my opinion, is only for heavy workloads like CPU render, where you need a high core amount.
 

Richard1234

Distinguished
Aug 18, 2016
277
5
18,685
many thanks for extensive reply, also that pcpartpicker URL looks like the place to buy, it pays to have a brand beginning with A eg Asus as you appear at the top of the list! I had to go to the 2nd page to find MSI,
There are 3 forms of GPUs out there;
* GPU - Graphics Processing Unit - usually refers to a dedicated GPU. E.g like my MSI GTX 1660 Ti Gaming X 6G that i have in my Skylake build.
* iGPU - Integrated Graphics Processing Unit - as the name says, this is the integrated GPU, namely the one that is inside the CPU. E.g for my i5-6600K, it would be Intel HD Graphics 530. No modern MoBo has an integrated GPU on the MoBo itself. Ancient MoBos did have the iGPU on the MoBo itself.
* eGPU - External Graphics Processing Unit - this refers to a dedicated desktop GPU that sits in a separate dock and is hooked up to a laptop, to considerably increase the laptop's graphics performance.
intuitively I expected external GPU to be a better idea, as a more ancient machine potentially could have its GPU replaced by a more modern one, where the implementation circuitry is much faster.

for me, a GPU just means a graphics processing unit, ie hardware which does graphics, even if it is relatively simple, eg with the Amiga, they had hardware sprites, where the sprite could be moved without moving data, only changing the coordinates, done by merely overlaying the sprite on the video beam, which also meant very fast collision detection, as the sprite feeds to the video beam would detect if 2 sprites were outputting to the same pixel, whereas in software it is slow. The amiga graphics was synchronised with the video beam, where you could have a new output image per screen scan, leading to ultra smooth graphics.

and they had the blitter for rectangular transforming of graphics memory, eg to shunt a rectangle, or to shunt and modify a rectangle. I would class both the sprite hardware and the blitter as GPUs. The GPU internally may well be software in the microcode or ROM of the circuitry, but using its own internal CPU. eg potentially with multicore, you could utilise one of the cores as a GPU, a roll your own improvised GPU.

when the Acorn Archimedes came out, people were surprised it didnt have custom chips for graphics, their argument was that the CPU could do the graphics faster than the Amiga's blitter.

a guy called David Braben did a 3D game called Virus, originally for the Archimedes to show how good it is, where it was real time 3D graphics, which for that era wasnt viable on home computers. eventually with the Archimedes failing commercially, he ported it to the Amiga, and it was real time 3D graphics, where he didnt use the blitter. where it didnt seem viable to do that real time 3D graphics on the Amiga 500. Today's PCs are so much faster that its not a big deal where they can do it by brute force. Interviewed he said that the time to set up the blitter was a false economy, and he got the 68000 to shunt the data directly. Probably the Atari ST used the CPU for its graphics, rival 68000 computer to the Amiga.

a GPU potentially can have hardware optimised for what it does, eg if it only shunts images, it can have a purpose built CPU which only has relevant instructions, which could have its own faster clock also.

All PCI-E based GPUs have the standard interface. E.g my GTX 1660 Ti has PCI-E 3.0 interface, while latest GPUs offer PCI-E 4.0. Of course, PCI-E revisions are backwards compatible and you can easily run PCI-E 4.0 GPU inside PCI-E 1.0 slot (with some performance reduction, ~10%).
PCI-E is specifically for graphics?

Once you 1st install GPU in the system, it uses default Windows drivers to show image. All modern GPUs are capable of that. Once you boot to OS, then you need to go to the manufacturer website (Nvidia, Radeon or Intel), to download GPU drivers for your GPU. This will essentially "unlock" full performance of your GPU, where you can utilize far greater resolutions and refresh rates, compared to what default Windows graphical drivers are capable of delivering.
E.g for Nvidia, driver download is from here: https://www.nvidia.com/download/index.aspx

As of programming of GPU goes, well, it should be same within the same architecture but most likely different if you switch architecture.
For example: Nvidia Turing architecture, it would either be, where the same program code works on all Turing architecture GPUs,
full list here: https://www.techpowerup.com/gpu-specs/?architecture=Turing&sort=generation
Or on those GPUs who's GPU chip is same. E.g TU116.
GPUs with TU116 chip: https://www.techpowerup.com/gpu-specs/nvidia-tu116.g902

Of course, i really don't know since i'm not a programmer (my missus is).


EVGA yes, makes only Intel chipset MoBos.

For AMD, namely AM5 CPU socket (latest for AMD), you have brand choice between: AsRock, Asus, Biostar, Gigabyte, MSI and NZXT.
Full list here: https://uk.pcpartpicker.com/products/motherboard/#s=41&sort=name&page=1
that URL looks very useful, I'll try to purchase from there.



I don't know that in-depth about CPUs and hyperthreading, especially coding the software for CPU to execute. I'm more of a: "jack of all trades but master of none", whereby i can explain the complex world of PC hardware to a person who has 0 idea about PC hardware. But i haven't focused in-depth into any of the minute workings of the said hardware itself (e.g CPUs and hyperthreading or GPUs and how they actually compute graphics). Since this kind of in-depth knowledge isn't something common person wants to know, when they are looking for what PC to buy or are struggling to make their PC to work right.

But as far as i've understood hyperthreading - it allows 1 CPU core to compute 2 tasks in parallel. Without hyperthreading 1 CPU core can only compute 1 task at a time.
but are those 2 tasks really in parallel?

I probably have to locate an expert on the hardware architecture to clarify,
but my understanding from the electrical engineer is that the hyperthreading is essentially software multitasking within the CPU rather than above the CPU, ie that core at any instant of time is only doing 1 thing at a time, but is flitting between the 2 virtual CPUs pretending to be 2 CPUs.

ie the 2 CPUs are an illusion, its like a magazine where there is just one author, but he writes different articles using different pen names, creating the illusion there are several authors. But at any one point in time, just 1 person composing 1 article. whereas if you had several authors, at a given point in time, 2 could be composing simultaneously.

multitasking is an illusion, it is the CPU doing 1 thing at a time, but so fast it seems to be happening in parallel.


E.g like the part i just quoted, where you can code CPU to do 10 tasks and when giving each task equal time on CPU core, you can code it so, that CPU will finish all 10 tasks at the same time. But CPU is still able to compute one task at a time.
But if you'd have hyperthreading, whereby 1 CPU core is able to compute 2 tasks in parallel (and i know it's far more complex to code all that), you could split the tasks between threads equally, whereby it takes half of the time to finish all the 10 tasks.


The last time AMD didn't have more threads than CPU cores was back in 2012, with Piledriver architecture CPUs (e.g FX-6300). But AMD took a hiatus from the CPU market for 5 years, until coming out with the Ryzen 1000-series in 2017. From that point forwards, AMD tried to gain sales by producing high core count, high thread count CPUs, while Intel was focusing on 4 core CPUs but with high core frequency.

the point of multiple cores, is at a specific technology, more cores equals more speed of the totality,
whereas with hyperthreading a core, you'd need a faster inner frequency, which means more advanced technology.

with modern x86, these have RISC cores, where the x86 cpu is just a software facade for an inner RISC CPU. so the hyperthreading is probably where that inner CPU just maintains 2 facades,

but my argument is it is better to maintain 1 facade at say twice the speed, and leave the multitasking to the OS above the core, because as you allude to elsewhere often one only needs 1 CPU, its only certain kinds of things that work better with multiple CPUS. and there is metalevel work distributing a work to multiple CPUs where that adds latency, if you distribute to too many it can become a false economy.

if the different CPUs share the same memory, as is the case with multicore x86, only 1 CPU can write to a given piece of shared memory at a time, enforced in hardware by the cache coherence logic, which invalidates the other cores' cached copies of that data, automagically: it doesnt need effort by the programmer.

but that shared memory is a bottleneck; if each core had its own memory, you'd have to shunt data around between them, which would slow things down. ultimately its a balancing act, the same way big firms become very bureaucratic, as that is the metalevel work of distributing the work, where workers have to get authorisation from a manager, who might also need authorisation from even higher up etc.
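
to illustrate the bottleneck, a small toy in C: several threads all bumping one shared counter have to queue up on that single memory location, the atomic keeps the count correct but every increment still contends for the same place (build with -pthread):

Code:
/* Toy: four threads all incrementing ONE shared counter. The C11 atomic
   keeps the total correct, but every increment contends for the same
   memory location, which is the shared-memory bottleneck in miniature. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NTHREADS 4
#define NITERS   1000000

static atomic_long counter = 0;

static void *bump(void *arg)
{
    (void)arg;
    for (int i = 0; i < NITERS; i++)
        atomic_fetch_add(&counter, 1);
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, bump, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    printf("counter = %ld (expected %d)\n",
           (long)atomic_load(&counter), NTHREADS * NITERS);
    return 0;
}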

Microsoft overtook IBM, because they found that for businesses, PCs connected by networks were more effective than a mainframe via terminals. originally businesses were using mainframes, but this was overkill for most things. IBM today is comparable in size to Nike and Pepsi, but in the old days it was the largest firm on earth, until Bill Gates decided to support the clones, and now Microsoft became the biggest firm on earth!

Many things people do arent speed critical, where a delay of 0.1 seconds wont be noticed. for graphics you want speed, but once the speed is faster than perception, eg 30 frames per second, that extra speed is wasted. eg there is no point having a 200mph car for a place with narrow winding roads, as you wont be able to react fast enough at that speed. the faster the speed, the bigger the radius of curvature, which is why motorways curve really gradually, as they are for 70mph.

E.g Ryzen 5 1600 (mid tier CPU) had 6 cores and 12 threads. While Intel counterpart at that time, 7th gen Core i5-7600 had 4 cores and 4 threads, but higher core frequency.
For several years, AMD pushed more and more cores/threads into their CPUs while Intel still focused on mainly 4 core CPUs but high core frequency, while reserving hyperthreading only for high-end CPUs (Core i7). Intel was slow to catch up in terms of core amounts in their CPUs. 8th gen and 9th gen mid-tier CPUs were 6 cores 6 threads (i5-8600 and i5-9600). Starting with 10th gen CPUs, Intel caught up to AMD and offered 6 core, 12 thread CPUs as their mid-tier CPU (i5-10600).

So, it's more AMD's fault for pushing high core/thread count CPUs to the mainstream, rather than Intel's, who only had hyperthreading for high-end CPUs.
I think what we have here is racing! Intel alone were doing hyperthreading, so AMD had to do it too to appear the same, but they then needed to outdo Intel at their own game, and they were expert at multicores, so add in hyperthreads and they are ahead of Intel, then Intel have to up the ante, etc.

Note: Core i3/Ryzen 3 - low-end (office build), Core i5/Ryzen 5 - mid-tier (gaming), Core i7/Ryzen 7 - high-end (3D render), Core i9/Ryzen 9 - mythical level.

Mid tier CPU (Core i5/Ryzen 5) core/thread timeline since 2016 (also added in Core i7 to show that hyperthreading was for high-end);
Intel i5-6600 - 4c/4t
Intel i7-6700 - 4c/8t (hyperthreading)
AMD R5 1600 - 6c/12t (hyperthreading)
Intel i5-7600 - 4c/4t
Intel i7-7700 - 4c/8t (hyperthreading)
AMD R5 2600 - 6c/12t (hyperthreading)
Intel i5-8600 - 6c/6t
Intel i7-8700 - 6c/12t (hyperthreading)
AMD R5 3600 - 6c/12t (hyperthreading)
Intel i5-9600 - 6c/6t
Intel i7-9700 - 8c/8t
Intel i5-10600 - 6c/12t (hyperthreading)
Intel i7-10700 - 8c/16t (hyperthreading)
Intel i5-11600 - 6c/12t (hyperthreading)
Intel i7-11700 - 8c/16t (hyperthreading)
Intel i5-12600 - 6c/12t (hyperthreading)
Intel i7-12700 - 12c/20t (hyperthreading)
AMD R5 5600 - 6c/12t (hyperthreading)
Intel i5-13600 - 14c/20t (hyperthreading)
Intel i7-13700 - 16c/24t (hyperthreading)
AMD R5 7600 - 6c/12t (hyperthreading)
Intel i5-14600K - 14c/20t (hyperthreading)
Intel i7-14700K - 20c/28t (hyperthreading)

So, this is how things are now with CPUs and cores/threads.
In my opinion, a 4c/4t CPU is more than enough for mid-tier use and i don't get why we need that many cores/threads at the current date. Most tasks are single-core regardless, whereby only some games may utilize 2, 4 or up to 6 cores. And anything above that, e.g 8 cores, in my opinion, is only for heavy workloads like CPU render, where you need a high core amount.
sounds right. for the OS also, multicores and threads are a confusion factor, because firstly multithreads or cores need the programmer to use them: a specific program will just run on one thread, and the program has to do some gymnastics to utilise further cores, which can become a brain twister, especially if the program doesnt know how many cores might be available. the one user might have a 16 core machine, and the other a 4 core, so the programmer has to make their code adapt to however many cores there are, and this can be too much hassle or not viable given the available time for the work, or the monetary gain or the worth of the work.

when I program, I often just need something basic in a hurry, and speed isnt the issue. eg I wrote some small programs to rejig cut and paste text into csv files. Where I have scripts calling the small programs for different scenarios. eg one program just removes all text up to some marker text. Another substitutes one arbitrary string by another. Another script removes all text after some marker.
These scripts arent very fast, but they might take 30 seconds or something. But to do it fast is too much hassle. The priority is simplicity and rejiggability, where I can easily rejig the scripts to new problems.
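
as a flavour of it, the "remove everything up to a marker" one is essentially this kind of filter (a sketch, not my actual script, and here the marker comes in as a command line argument):

Code:
/* Sketch of a "remove everything up to a marker" text filter: reads
   stdin, prints everything from the first occurrence of the marker
   (argv[1]) onwards; if the marker is absent, passes text through. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s MARKER < input > output\n", argv[0]);
        return 1;
    }

    /* read all of stdin into one growing buffer */
    size_t cap = 1 << 16, len = 0, n;
    char *buf = malloc(cap);
    if (!buf)
        return 1;
    while ((n = fread(buf + len, 1, cap - len - 1, stdin)) > 0) {
        len += n;
        if (len + 1 >= cap) {
            char *tmp = realloc(buf, cap *= 2);
            if (!tmp) { free(buf); return 1; }
            buf = tmp;
        }
    }
    buf[len] = '\0';

    const char *start = strstr(buf, argv[1]);   /* find the marker */
    fputs(start ? start : buf, stdout);
    free(buf);
    return 0;
}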


by default, only different programs will be on different cores. eg say you had Firefox browser, and Microsoft Word, Firefox might run on one core, and Word on the other. The OS cannot run Firefox on 2 cores simultaneously UNLESS Firefox arranges this via the OS. It could run 2 different instances of Firefox on 2 cores simultaneously, but the one instance would be on one core, unless Firefox sets up a further task or thread on a different core.

The OS cannot split one instance of some software on multiple cores, as in general this isnt viable, and the OS cannot determine whether or how it is viable, a programmer has to do that. it would be major work to determine whether or how a specific program might be splittable: its beyond the scope of an OS to determine.

2 instances, of course are just 2 different programs where its just a coincidence they are the same code.

eg you launch Wordpad twice, the one instance editing email.txt and the other instance editing disclaimer.txt. those can be on different cores. but the editing of email.txt would be just 1 task on one core, splitting that between cores could be done but would require work by the programmer and might be too complicated to make a noticeable gain.


with multicore, you can have "cpu affinities", where a specific program might keep to the same CPU, eg this might reuse caches better, but in general, the OS might flit a program between different cores, according to supply and demand. eg when it is that program's turn to run, it might get run on whichever core is free at that time. But if the program or user requests an affinity, it might wait for that specific core to be free, or at least time out waiting for that core, eg wait up to 1 microsecond for that core, otherwise take whichever core is available. This is an OS software facility requested by the user or program. The OS can also ignore such requests, and everything will run just fine, maybe faster or slower than the user or programmer expected!
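
on Linux for example, a program can request an affinity with sched_setaffinity, roughly like this minimal sketch (Windows has its own equivalent calls):

Code:
/* Minimal Linux sketch: restrict the calling process to CPU core 0.
   After this succeeds, the scheduler will only run it on that core. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                        /* ask for core 0 only */

    if (sched_setaffinity(0, sizeof(set), &set) != 0) {  /* 0 = this process */
        perror("sched_setaffinity");
        return 1;
    }
    printf("now restricted to core 0\n");
    return 0;
}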

I sometimes modified programs to make them faster, and in fact they became slower, because Intel anticipated how programmers do things, and optimised for such. so when I wrote the code unimaginatively it ran faster than when I tried to be clever, as Intel had anticipated the kludge!

as a general rule, it pays to not get too clever. just keep things simple, try clever ideas, but keep these simple. ie be clever but not too clever.

*nix (ie Unix and Linux) programs traditionally create further tasks via the fork() call, which is where the program splits in 2 at a point in the program, and the 2 clones then continue with their own copies of the data (it is threads within a process that share data). I find it very confusing as it is intrinsic multitasking. In contrast the Amiga went for external multitasking of programs, extrinsic multitasking. These OSes always multitask, but a program has to opt to multitask, where it has to manufacture the further tasks, and follow protocols for shared data, and with the Amiga it doesnt use fork(), but it sets up an external program, which usually doesnt share data, which I find a much cleaner concept.
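
a minimal fork() illustration of the split point, showing that parent and child each carry on with their own copy of a variable:

Code:
/* Minimal fork() sketch: the program splits in two at fork(); parent
   and child then run the same code with their OWN copy of the data. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int x = 1;
    pid_t pid = fork();                 /* the split point */

    if (pid == 0) {                     /* child */
        x = 2;                          /* changes only the child's copy */
        printf("child:  x = %d\n", x);
        return 0;
    }
    wait(NULL);                         /* parent waits for the child */
    printf("parent: x = %d\n", x);      /* still 1 in the parent */
    return 0;
}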

anyway, the typical use of this kind of multitasking, is where say the GUI of a program is a different task from the main program, and some options from the GUI can then launch even further tasks, where you can continue using the other tasks. Whereas a 1 task program, when you go for an option, the rest of the program is frozen.

you could have 1 task managing the graphics, another task managing a hard drive, where potentially these tasks could be shared by different programs, where they are a bit like servers.

but most programs are 1 task, because it is often more trouble than it is worth using multiple tasks.

where multiple tasks are only OS level, not program level.


with the Amiga, those different tasks can then communicate via the Amiga's inter-task communication mechanisms. where all this is say on the same 68030 CPU, where it is software level asynchronicity.

there are 2 forms of asynchronicity: software level, and hardware level.

software level asynchronicity, is where different pieces of software are running on the same machine, and each has no idea at all where the other has reached, and you need protocols to synchronise, eg the one task can wait till the other has arrived, or wait till it has left.
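
a typical protocol for "wait till the other task has arrived" is a flag guarded by a mutex plus a condition variable, roughly like this sketch (build with -pthread):

Code:
/* Sketch of "wait till the other task has arrived": the waiter sleeps
   on a condition variable until the other thread sets the flag and
   signals it. Neither side knows where the other has reached until
   this handshake happens. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  arrived_cv = PTHREAD_COND_INITIALIZER;
static int arrived = 0;

static void *other_task(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    arrived = 1;                         /* "I have arrived" */
    pthread_cond_signal(&arrived_cv);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, other_task, NULL);

    pthread_mutex_lock(&lock);
    while (!arrived)
        pthread_cond_wait(&arrived_cv, &lock);
    pthread_mutex_unlock(&lock);

    printf("the other task has arrived\n");
    pthread_join(t, NULL);
    return 0;
}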

but at a lower level, only 1 thing is done at a time, nothing happens in parallel, where parallelism is just an illusion.


hardware asynchronicity, is where the other software is on a different machine, and things can happen at the same instant of time. hardware asynchronicity is much more problematic. software asynchronicity is like 2 people trying to phone your landline. hardware asynchronicity is person A phoning your landline, and person B phoning your mobile. with a dual SIM smartphone, person A phoning the one SIM and person B phoning the other SIM would be hardware asynchronicity.

your landline can only process one caller at a time, but across time different callers can call, where you might be communicating with say 4 people via different calls. conference calls are another story. the landline I think only processes one byte at a time, so at the lowest level is not asynchronous. But the conference call probably interleaves or adds the callers into one bytestream. A TV antenna has different stations asynchronously via different frequencies, but a specific TV station is synchronous, with 1 byte at a time. things can get complicated.
 

Aeacus

Titan
Ambassador
intuitively I expected external GPU to be a better idea, as a more ancient machine potentially could have its GPU replaced by a more modern one, where the implementation circuitry is much faster.
eGPU isn't that great of an idea, especially for laptop, since end result would look quite an eyesore.

E.g like so:

egpu-hero-100726717-orig.jpg

Source: https://www.pcworld.com/article/423...owerhouse-with-an-external-graphics-card.html
That source is actually a guide for eGPU on laptops that you can read. It's a bit old but overall info in there is good, IF you want to go with eGPU route.

But a more apparent problem is the connection speed between the eGPU dock and the old PC/laptop. Now, with an old PC, one can't expect it to have a USB 3.0 or even Thunderbolt connector, to utilize the eGPU at its fullest (or close to it).
For example: USB 2.0 max speed is 60 MB/s, while PCI-E 1.0 x16 has it at 4 GB/s.

Then, there is of course the old hardware (CPU and MoBo) making sense of the new GPU and if it can even detect or use the much newer (and faster) GPU.

PCI-E is specifically for graphics?
No.
PCI-E is the successor of PCI.

Nowadays, mostly dedicated GPUs are connected into PCI-E slot, but PCI-E can utilize other add-in cards as well, like:
* sound cards (for audiophiles)
* network cards (for wi-fi connection or LAN connectivity)
* storage cards (e.g converting PCI-E slot to M.2, so one can install 2-3 M.2 drives; OR giving more SATA ports; OR dedicated storage card like Intel Optane)
* port expandability (converting PCI-E to USB ports)
And the list goes on.

Further reading: https://en.wikipedia.org/wiki/PCI_Express#Derivative_forms

With PCI-E, there are two different things to note;
* revision
* slot size

Revision is to do with bandwidth and is written in the form of: PCI-E 1.0, PCI-E 2.0, PCI-E 3.0, PCI-E 4.0 and the latest PCI-E 5.0.
PCI-E revisions are backwards compatible. So, you can use a PCI-E 3.0 or 4.0 GPU in your MoBo if your MoBo has PCI-E 1.0 or 2.0. But it would work with reduced bandwidth, since every PCI-E revision doubles the bandwidth.
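As a rough worked example for an x16 slot (approximate, one direction): PCI-E 1.0 is about 4 GB/s, 2.0 about 8 GB/s, 3.0 about 16 GB/s, 4.0 about 32 GB/s and 5.0 about 64 GB/s.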

Slot size is written as how many lanes the slot has. E.g PCI-E x1, PCI-E x4, PCI-E x8 or PCI-E x16.

This image shows nicely the PCI-E slot sizes;

PCI-Express-X16-vs.-X8-vs.-X4-vs.-X1-Slot-on-motherboard.jpg


All GPUs have a PCI-E x16 connector and only fit into an x16 slot. This is true even when the GPU only uses 8 lanes (PCI-E x8): it still has the physical x16 connector. One such example would be AMD Radeon RX 7600, which uses 8 PCI-E lanes but has the 16-lane connector,
specs: https://www.techpowerup.com/gpu-specs/radeon-rx-7600.c4153

RTX 3060,
specs: https://www.techpowerup.com/gpu-specs/geforce-rtx-3060-12-gb.c3682
is PCI-E 4.0 x16. Meaning it uses PCI-E revision 4.0 protocol and 16-lane slot.

Also, when typing out the PCI-E, don't mix it up by typing e.g: "PCIx16". Since that actually refers to the old PCI protocol, predecessor of PCI-E. On the image above, you can see two plain PCI slots as well. PCI is obsolete nowadays.

When it comes to PCI-E slots, other add-in cards, e.g a sound card, are usually PCI-E x1. But you can plug them into a longer slot as well and they work there too. You can't, however, physically plug a longer card (e.g a GPU with x16) into a smaller slot (e.g into x8).
The PCI-E x8 slot is very rare and hardly ever used on today's MoBos. Same goes for the PCI-E x4 slot as well. Nowadays, MoBos have PCI-E x16 and PCI-E x1 slots.

but are those 2 tasks really in parallel?
They can be. It depends on how it is coded software-wise and whether that particular task works better parallelized or on a single core with a high CPU frequency.

The type of instruction being processed also matters. Modern game engines often take advantage of parallelization and delegate high-intensity tasks to multiple cores.

Certain computing tasks run faster when more cores can process instructions in parallel. Other tasks are better optimized for processors with high clock speeds. All PC games rely on both types of tasks to deliver a seamless gameplay experience.
Source: https://www.intel.com/content/www/us/en/gaming/resources/5-reasons-to-overclock-your-next-pc.html

Also, this article from Lenovo goes in-depth about what parallelization is,
article: https://www.lenovo.com/us/en/glossary/parallelization/

These scripts arent very fast, but they might take 30 seconds or something. But to do it fast is too much hassle. The priority is simplicity and rejiggability, where I can easily rejig the scripts to new problems.
Writing small scripts or even programming as a whole, is way too complex for my brain and for the sake of my own sanity :pt1cable:, i keep myself away from all of it.

Years ago, one of my friends tried to teach me how to write simple script that automatically renames the file names to the ones i prefer. Since i had to manually type in the output name into the script for each file + the full source path of the file (so script knows which file it needs to rename into what), i found it way too tedious to do.
Mainly because the manual work i had to do, was far greater than my usual preferred way of: navigating into correct folder, select the correct file, press F2 to open up filename rename option and type new name as file name. All that individually for each file.
 

Richard1234

Distinguished
Aug 18, 2016
277
5
18,685
before the replies, would it be possible for you to upload some photos of the MSI early startup options? in particular relating to boot disk priorities, and also cpu cores. I forget if the cpu core options are only "soft" options, ie advice to the system; I'll have to run my multicore code to see if all cores activate when I disable some in the early startup. my code eventually activates all the available cores: the first problem is to get itself ready before it tries to activate the others, and I got each core to write text to the screen stating which core it is, where I think they are numbered in the order they activate. each core doesnt know what the other cores are, so the code has to figure this out. at power on, only 1 core (thread for hyperthreading) is active, and the boot code has to deliberately activate the other ones via a supervisor level protocol, and I think there is a protocol between the first core and the other ones for the waking up. its too complicated to remember, but if I study the source code I can find what the protocol was.
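from memory, and purely as a hedged sketch rather than my actual code (the offsets are the documented local APIC ones, and all the delays, retries and error handling are left out), the waking up of another core looks roughly like this:

/* Very rough sketch of the x86 "wake the other cores" protocol via the
   local APIC: the boot core sends an INIT IPI, waits, then sends a
   Startup IPI (SIPI) whose vector encodes the real-mode start address.
   Offsets 0x300/0x310 are the Interrupt Command Register (ICR) in the
   memory-mapped local APIC; this only makes sense in bare-metal code. */
#include <stdint.h>

#define LAPIC_BASE   0xFEE00000u             /* default local APIC address */
#define ICR_LOW      (LAPIC_BASE + 0x300)
#define ICR_HIGH     (LAPIC_BASE + 0x310)

static void mmio_write(uint32_t addr, uint32_t val)
{
    *(volatile uint32_t *)(uintptr_t)addr = val;
}

void wake_core(uint8_t apic_id, uint32_t start_addr)
{
    /* the target core's APIC ID goes in the high ICR word */
    mmio_write(ICR_HIGH, (uint32_t)apic_id << 24);
    mmio_write(ICR_LOW, 0x00004500);          /* INIT IPI */

    /* ...wait roughly 10 ms here... */

    mmio_write(ICR_HIGH, (uint32_t)apic_id << 24);
    /* SIPI: vector = start_addr >> 12, start_addr must be 4 KiB aligned */
    mmio_write(ICR_LOW, 0x00004600 | ((start_addr >> 12) & 0xFF));
}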

there are 3 scenarios: the documentation, the coding, and the comprehension. comprehending the documentation is the trickiest, once comprehended you can code, but that might fail because you miscomprehended, so there is some experimentation, eventually the code is correct, and you can forget the comprehension as its now all on autopilot! Comprehending is now based on the source code as this leaves nothing to doubt, whereas documentation is often confusing and can be wrong or ambiguous.

my code would wake up all the other cores, and then wait till all the others were ready and waiting for action before proceeding to other matters. Once all initialisation was complete, it could send the other cores stuff to do. I had to devise very complicated asynchronous protocols, because all are acting in parallel. you can get very complicated asynchronous bugs where the system freezes up, and I would rake through the code to be sure it was correct: comprehension is the last defence. if your set top box or PC freezes up, this is often because of an asynchronicity bug, eg A waiting for B, and B waiting for A, leading to the system freezing up. this problem can occur in a very subtle way, where they are waiting at different parts of a protocol, and comprehension is the only way to know this cant happen.
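that "A waiting for B, and B waiting for A" freeze is easy to reproduce deliberately. a minimal hedged sketch with pthreads (nothing to do with my actual boot code, just the textbook shape of the bug):

/* Classic deadlock, sketched with two locks taken in opposite orders:
   task A holds lock1 and wants lock2, task B holds lock2 and wants
   lock1, and neither can ever proceed, so the program hangs. */
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;

static void *task_a(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock1);
    usleep(1000);                  /* widen the timing window */
    pthread_mutex_lock(&lock2);    /* waits forever if B holds lock2 */
    pthread_mutex_unlock(&lock2);
    pthread_mutex_unlock(&lock1);
    return NULL;
}

static void *task_b(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock2);
    usleep(1000);
    pthread_mutex_lock(&lock1);    /* waits forever if A holds lock1 */
    pthread_mutex_unlock(&lock1);
    pthread_mutex_unlock(&lock2);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, task_a, NULL);
    pthread_create(&b, NULL, task_b, NULL);
    pthread_join(a, NULL);         /* never returns once deadlocked */
    pthread_join(b, NULL);
    return 0;
}

the usual cure is to make everything take the locks in the same agreed order.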

a similar problem is where one entity gets frozen out but not the others.

if a system is perfectly written, its impossible for it to freeze up except from hardware failure. this is a danger with driverless cars, that eventually some component wears out, and now you'll get mayhem, maybe in 6 years time when the first ones start to wear out! with my manual car from 2013, the back wiper comes on uninitiated, and I get it serviced and MOTed every year without this being detected. Sometimes the satnav panel emerges uninitiated, and sometimes the radio (I drive without radio or music), and the car has had many services and MOTs with this problem undetected! 3 different electronics subsystems doing stuff uninitiated, rather than merely not functioning! active malfunction rather than passive malfunction. maybe it is poltergeists!

managing the cores is the most difficult code I ever wrote! where a major problem is stuff sneaking through, a bit like someone opening their car with their remote, and the bionic man sprinting in and stealing something before they have the time to react! where the bionic man here is another core or even cores, where several other cores sneak in before this core has the chance to stop them. I had to rake over the documentation to figure out how to deal with the problems.

in films, people sometimes sneak into a building by immediately following someone with access!


I forget how it knew when all were woken up. I'll have to study the code for the MO.

I found out at the time whether early startup inactivation of cores shut them out from my code, but I forget what I found out! I think possibly the BIOS deactivated ones really do vanish, which would mean some very low level undocumented control of the cores by the BIOS, as I think I used that to test if the code worked with 1, 2, 3 or 4 AMD cores. But its so long ago that I forget.

eGPU isn't that great of an idea, especially for laptop, since end result would look quite an eyesore.

E.g like so:

egpu-hero-100726717-orig.jpg

Source: https://www.pcworld.com/article/423...owerhouse-with-an-external-graphics-card.html
That source is actually a guide for eGPU on laptops that you can read. It's a bit old but overall info in there is good, IF you want to go with eGPU route.
was thinking of tower builds, because I didnt even know you could use alternative graphics cards with laptops!

I have the HP Spectre x360 mentioned at that URL,

with the laptop, the person at technical support at PCWorld, didnt know what the non USB3 sockets on the laptop were! I also dont know! the machine is an HP Spectre x360 2-in-1 laptop, and has 3 sockets I dont recognise with no mention on the enclosing box. I think there is a danger of complexity barrier, where things become too complicated for people to cope with! this is a problem with bitcoin, that its too complicated for most people.

are eGPU's exclusively for laptops, or could I switch it between tower system and laptop for experimenting?

also will it work for both Intel and AMD, or do AMD's need a different eGPU?

But a more apparent problem is the connection speed between the eGPU dock and the old PC/laptop. Now, with an old PC, one can't expect it to have a USB 3.0 or even Thunderbolt connector, to utilize the eGPU at its fullest (or close to it).
For example: USB 2.0 max speed is 60 MB/s, while PCI-E 1.0 x16 has it at 4 GB/s.

Then, there is of course the old hardware (CPU and MoBo) making sense of the new GPU and if it can even detect or use the much newer (and faster) GPU.


No.
PCI-E is the successor of PCI.

Nowadays, mostly dedicated GPUs are connected into PCI-E slot, but PCI-E can utilize other add-in cards as well, like:
* sound cards (for audiophiles)
* network cards (for wi-fi connection or LAN connectivity)
* storage cards (e.g converting PCI-E slot to M.2, so one can install 2-3 M.2 drives; OR giving more SATA ports; OR dedicated storage card like Intel Optane)
* port expandability (converting PCI-E to USB ports)
And the list goes on.

Further reading: https://en.wikipedia.org/wiki/PCI_Express#Derivative_forms

With PCI-E, there are two different things to note;
* revision
* slot size

Revision is to do with bandwidth and is written in the form of: PCI-E 1.0, PCI-E 2.0, PCI-E 3.0, PCI-E 4.0 and the latest PCI-E 5.0.
PCI-E revisions are backwards compatible. So, you can use a PCI-E 3.0 or 4.0 GPU in your MoBo if your MoBo has PCI-E 1.0 or 2.0. But it would work with reduced bandwidth, since every PCI-E revision doubles the bandwidth.

Slot size is written as how many lanes the slot has. E.g PCI-E x1, PCI-E x4, PCI-E x8 or PCI-E x16.

I'll try to get the most advanced technology I can afford at the time of purchase, to maximise how long the machine will last, and where I can get max utilisation of say Windows 10 or 11. I'll probably arrange a triple boot with XP, Windows 10 and Windows 11.

so if I can afford it, I would go for PCI-E 5.0 and x16, but would also want ancient hardware support. hardware of eras between ancient and modern less important. maybe if I work on it, I can do away with ancient hardware support, but it will cost time and effort and stress.

This image shows nicely the PCI-E slot sizes;

PCI-Express-X16-vs.-X8-vs.-X4-vs.-X1-Slot-on-motherboard.jpg

is the spacing between these sockets always identical, or do some mobos have them further apart? because I found the sockets on my Gigabyte a bit too close together. if there are lots of sockets, I suppose I could skip consecutive ones.


All GPUs have a PCI-E x16 connector and only fit into an x16 slot. This is true even when the GPU only uses 8 lanes (PCI-E x8): it still has the physical x16 connector. One such example would be AMD Radeon RX 7600, which uses 8 PCI-E lanes but has the 16-lane connector,

specs: https://www.techpowerup.com/gpu-specs/radeon-rx-7600.c4153

RTX 3060,
specs: https://www.techpowerup.com/gpu-specs/geforce-rtx-3060-12-gb.c3682
is PCI-E 4.0 x16. Meaning it uses PCI-E revision 4.0 protocol and 16-lane slot.

Also, when typing out the PCI-E, don't mix it up by typing e.g: "PCIx16". Since that actually refers to the old PCI protocol, predecessor of PCI-E. On the image above, you can see two plain PCI slots as well. PCI is obsolete nowadays.

When it comes to PCI-E slots, other add-in cards, e.g a sound card, are usually PCI-E x1. But you can plug them into a longer slot as well and they work there too. You can't, however, physically plug a longer card (e.g a GPU with x16) into a smaller slot (e.g into x8).
The PCI-E x8 slot is very rare and hardly ever used on today's MoBos. Same goes for the PCI-E x4 slot as well. Nowadays, MoBos have PCI-E x16 and PCI-E x1 slots.


They can be. It depends on how it is coded software-wise and whether that particular task works better parallelized or on a single core with a high CPU frequency.


Source: https://www.intel.com/content/www/us/en/gaming/resources/5-reasons-to-overclock-your-next-pc.html

Also, this article from Lenovo goes in-depth about what parallelization is,
article: https://www.lenovo.com/us/en/glossary/parallelization/


Writing small scripts or even programming as a whole, is way too complex for my brain and for the sake of my own sanity :pt1cable:, i keep myself away from all of it.

Years ago, one of my friends tried to teach me how to write simple script that automatically renames the file names to the ones i prefer. Since i had to manually type in the output name into the script for each file + the full source path of the file (so script knows which file it needs to rename into what), i found it way too tedious to do.
Mainly because the manual work i had to do, was far greater than my usual preferred way of: navigating into correct folder, select the correct file, press F2 to open up filename rename option and type new name as file name. All that individually for each file.
with programming, discretion is the greater part of valour!

you need to be very selective what you opt in to, and opt out of most things,

eg just with scripts, there are different script systems, eg the Amiga has a very nice to use system, whereas *nix scripts are more powerful but are trickier to use, and MSDOS has its own scripts, namely the .bat files.

what I do is use Amiga scripts for some things via the Amiga Forever emulator above Windows, where it accesses the Windows' disks, and reboot to Linux for some scripts, and use Windows MSDOS scripts for other things.

I dont try to do everything via the same system! but it is horses for courses, and that is a problem of comprehension, where for a specific problem you have to comprehend which system is best and sometimes combine systems.

eg for creating my boot floppies, I use the MSDOS environment on Windows shells (command prompt window), as I can use the emulated BIOS to write the file directly to the floppy sectors.
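thats not the route I actually use (mine goes via the emulated BIOS), but for illustration, a hedged Win32 sketch of writing a 512 byte boot sector straight to the raw floppy device might look like the following, where the file name and drive letter are just placeholders, and newer Windows versions may also insist on locking the volume first:

/* Hedged sketch (Win32): write a 512-byte boot sector image to the first
   sector of the A: floppy. Needs administrator rights; error recovery
   and volume locking are omitted for brevity. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    unsigned char sector[512];
    FILE *f = fopen("bootsect.bin", "rb");      /* assumed 512-byte image */
    if (!f || fread(sector, 1, 512, f) != 512) {
        fprintf(stderr, "could not read bootsect.bin\n");
        return 1;
    }
    fclose(f);

    HANDLE h = CreateFileA("\\\\.\\A:", GENERIC_WRITE,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "could not open \\\\.\\A:\n");
        return 1;
    }

    DWORD written = 0;
    if (!WriteFile(h, sector, 512, &written, NULL) || written != 512)
        fprintf(stderr, "write failed\n");
    else
        printf("boot sector written\n");

    CloseHandle(h);
    return 0;
}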

as you point out, sometimes manual controls are best!

for converting cut and paste to a csv, you can just use WordPad, and then cut the earlier and later text, and then use text substitute repeatedly eg to remove commas from numbers, and then convert tabs to commas to get a comma separated list, etc. In fact originally I was doing it that way, and eventually automated it via a combination of Amiga scripts and C programs.

where with a webpage, I choose "select all", then "copy", then paste this to a file on Wordpad, then save this, then run a script on AmigaDOS such as

convert_to_csv somefile.txt

which 30 seconds later outputs a file somefile.csv, which I load on Excel, improve further, and then save as somefile.xlsm
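the real convert_to_csv is a mix of Amiga script and C programs, but the core idea is tiny: drop the commas that sit inside numbers, then turn tabs into commas. a rough C sketch of just that part (quoting rules etc omitted):

/* Rough sketch of the tab-separated-text to CSV idea: read a .txt file,
   drop commas that sit between two digits (thousands separators), turn
   tabs into commas, and write a .csv. */
#include <stdio.h>
#include <ctype.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: convert_to_csv in.txt out.csv\n");
        return 1;
    }
    FILE *in = fopen(argv[1], "r"), *out = fopen(argv[2], "w");
    if (!in || !out) { fprintf(stderr, "cannot open files\n"); return 1; }

    int prev = EOF, c;
    while ((c = fgetc(in)) != EOF) {
        if (c == ',') {
            int next = fgetc(in);
            if (next != EOF) ungetc(next, in);
            if (isdigit(prev) && next != EOF && isdigit(next))
                continue;            /* "1,234" -> "1234": drop the comma */
        }
        if (c == '\t')
            c = ',';                 /* tab-separated -> comma-separated */
        fputc(c, out);
        prev = c;
    }
    fclose(in);
    fclose(out);
    return 0;
}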


some things are best done by C programs, and others best done by scripts.

Linux in fact has several shells, eg the Bourne shell (sh), the C shell (csh), and the usual default on Linux, bash (the Bourne-again shell), etc,

with all these things, you have to battle for some weeks with total chaos before you arrive at something useful. as its a no man's land, and metalevel. ordinarily people work within a system, eg they might be driving a manual car in Britain, but the metalevel is where you are outside all systems, and you can choose the system, a bit like choosing which mobo to get!

once you buy one specific mobo, you are now trapped in that choice, and the limits and liberties of that choice!

droll comment: at uni, with the ethernet at lunchtime, everyone would access their emails and the system would jam up, it was quicker to walk across the room with a floppy disk than to transfer the data with the ethernet!

anyway, a plan is gradually forming of what PC to build. not there yet, I'll need to maybe shortlist a set of components and then scrutinise those before buying any. Once you start buying, you are stuck with those decisions.
 

Aeacus

Titan
Ambassador
would it be possible for you to upload some photos of the MSI early startup options?
I don't really get what you ask of me. :unsure:

Are you asking pictures of the POST + any follow up images on my monitor, prior to booting into OS?
If so, then there isn't much to show/see. When i start my PC, POST screen is black (if i smash Del key at that time, i can enter UEFI. Pressing F11 instead brings me to Boot Menu). After POST, there is MoBo splash screen in form of quite nice MSI wallpaper (i enabled it from UEFI). Then image turns black and next to show is Win10 log-in screen. - All that happens within 10 (or 20) seconds.

And if you want to see what the UEFI looks like, you can download my MoBo manual and look at it from there.

if a system is perfectly written, its impossible for it to freeze up except from hardware failure.
Yeah, but the thing is, no code is ever perfect since it would take insane amounts of time to get the code near perfection, but never at perfection. In any code (or any PC hardware in that matter), there is always some margin of error.

Speaking of it, this "no code is ever perfect" and "why", is nicely explained in GameRanx video that explains why players fall through in-game surfaces;
(Either watch the whole video or from 4:25.)

View: https://www.youtube.com/watch?v=mjQtLrQYIIg

with the laptop, the person at technical support at PCWorld, didnt know what the non USB3 sockets on the laptop were! I also dont know! the machine is an HP Spectre x360 2-in-1 laptop, and has 3 sockets I dont recognise with no mention on the enclosing box.
My research says these unknown ports are Thunderbolt 3.

Source:
(At 1:00 mark.)

View: https://www.youtube.com/watch?v=nsIcANeu1l4

I think there is a danger of complexity barrier, where things become too complicated for people to cope with!
This is an age old problem where it takes both sides (manufacturer and consumer) to work together. On manufacturer's side, idea is to create a product that does what it is supposed to do, with highest efficiency. And on consumer's side, it takes some learning on how to operate the said product. Hence why many products come with user manual, to teach the consumer how to use the said product.

Now, if the consumer isn't bothered to read and learn from the user manual, then it's the consumer's fault when the product doesn't work as it's supposed to. Not all products are simple enough to use that one can just use their intuition (or the lack of it) to make sense of it and use the product as it is intended.

are eGPU's exclusively for laptops, or could I switch it between tower system and laptop for experimenting?
eGPU is a desktop (tower) dedicated GPU that is jerry-rigged to work with a laptop (by using the PCI-E x16 slot with riser cable, that usually turns into Thunderbolt or USB-C connector, to be connected to a laptop + ATX PSU to power the GPU itself).
Due to that, you can easily move the GPU around. If it comes inside eGPU dock, just remove the GPU from the dock and plug it into desktop PC MoBo. And vice-versa.

also will it work for both Intel and AMD, or do AMD's need a different eGPU?
There is no brand preference when it comes to GPUs. All GPUs (be it Nvidia, AMD Radeon or Intel ARC), will work with Intel and AMD CPU.

It's like when you have a car and are asking: can i put Michelin tires under my car or i'm only able to use Goodyear tires? :LOL:

so if I can afford it, I would go for PCI-E 5.0 and x16
Only PCI-E 5.0 devices currently out, are M.2 NVMe SSDs. GPUs, even the latest ones, operate at PCI-E 4.0 protocol.

As of current moment, i don't know which PCI-E protocol the upcoming Nvidia RTX 50-series and AMD RDNA4 series are going to use. If release dates aren't pushed back, then AMD RDNA 4 is expected to launch in Q3 2024 and Nvidia RTX 50-series in Q4 2024.

is the spacing between these sockets always identical, or do some mobos have them further apart?
On micro-ATX, ATX and E-ATX MoBos, the spacing of the PCI-E slots is the same. But to combat the issue of many GPUs being dual-, triple- or even quad-slot, MoBo manufacturers have put PCI-E x1 slots between the PCI-E x16 slots, so that you actually have space to install another PCI-E x16 card (be it a GPU or something else) when your main GPU is multi-slot and overhangs the slot below it.

E.g here's an image of my MoBo:

1024.png


From top to bottom, i have:
* PCI-E x1
* PCI-E x16
* PCI-E x1
* PCI-E x1
* PCI-E x16
* PCI-E x1
* PCI-E x16

Usually, the top most PCI-E slot is x1, so that CPU cooler has more room to be in. Also, 1st M.2 slot is usually just right of the 1st PCI-E x1 slot.
2nd slot from the top is usually PCI-E x16 slot, which is the main slot where to plug in the GPU.
Now, my MoBo has 2x PCI-E x1 slots before there is 2nd PCI-E x16 slot. This means that i can put triple-slot GPU into my main slot and then install 2nd GPU into 5th slot from top (2nd PCI-E x16 slot) without issues.

anyway, a plan is gradually forming of what PC to build. not there yet, I'll need to maybe shortlist a set of components and then scrutinise those before buying any. Once you start buying, you are stuck with those decisions.
Well, if you want the latest, then CPU wise;
AMD Ryzen 7000 series or Intel 14th generation.

MoBo chipset wise;
When going with AMD CPU - A620, B650, B650E, X670 and X670E.
When going with Intel CPU - B760, H770, Z790 (also compatible are: H610, B660, H670, Z690).

A bit more about different chipsets;
On AMD side:
A-series is the bottom of the barrel and cheapest. I don't suggest A-series.
B-series is mid-tier and good enough for most people. But if you're going with high-end CPU, like Ryzen 7 or 9, i suggest getting X-series instead.
X-series is high-end and best what you could get. Also the most feature rich.
E-suffix essentially means "Extreme edition" and is upgrade over initial chipset.

On Intel side:
B-series is for business use and usually offers least amount of features (since there is no need for flashy RGB or better audio in an office PC).
H-series is for home use and is mid-tier. Good enough for most people. But if you go with high-end CPU, like Core i7 or i9, especially when CPU has K-suffix, i suggest getting Z-series instead. (K-suffix on Intel CPU means that you can overclock the CPU.)
Z-series is high-end and for enthusiasts. Also the most feature rich.

Diff between H610 and H670;
H610 is the bottom of the barrel and cheapest of them all. Essentially comparable to the AMD A620 chipset, due to lack of features. Also, reliability of H610 (alongside A620) is questionable. H670, on the other hand, is a solid mid-tier option and only a few notches down from the Z-series.

The two MoBos in my and missus'es build, are both Intel Z-series (Z97 and Z170 chipset).
 

Richard1234

Distinguished
Aug 18, 2016
277
5
18,685
I don't really get what you ask of me. :unsure:

Are you asking pictures of the POST + any follow up images on my monitor, prior to booting into OS?
If so, then there isn't much to show/see. When i start my PC, POST screen is black (if i smash Del key at that time, i can enter UEFI. Pressing F11 instead brings me to Boot Menu). After POST, there is MoBo splash screen in form of quite nice MSI wallpaper (i enabled it from UEFI). Then image turns black and next to show is Win10 log-in screen. - All that happens within 10 (or 20) seconds.
what I am requesting is the MSI version of these kinds of images for my ancient Gigabyte:

http://www.directemails.info/tom/cores_and_boot.jpg
http://www.directemails.info/tom/drives.jpg

where the first photo shows the enabling or disabling of cores, where for illustration I have disabled core "2", and also have put the floppy drive as the 3rd boot device: I would make that the first one when booting my floppies.

I'll have to check the documentation whether you can actually call a core "2". hyperthreading with shared variables does lead to the problem of how to tell if 2 threads are the same core, but I think I single thread the setting of those configuration registers to the same values for all threads, so maybe it doesnt matter if one is set repeatedly by different cores to the same value while the other ones are sleeping. I think I use a special machine code instruction to sleep the extra threads, probably till each is woken by an inter thread interrupt. ie wake up the extra cpu thread, self initialise it supervised by the initial cpu, then sleep it until later. There is a potential problem if the waking thread is the same core as the initial one, as both are active.
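on the question of how to tell if 2 threads are the same core, one hedged sketch is to ask CPUID for the APIC ID (GCC or Clang on x86; whether the lowest bit really is the sibling bit depends on the CPU's topology, so treat it as illustrative only):

/* Hedged sketch: each logical CPU can ask "who am I?" via CPUID leaf 1,
   which reports the initial APIC ID in EBX bits 31:24. With 2-way
   hyperthreading, the two threads of one physical core typically differ
   only in the lowest bit of that ID. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID leaf 1 not supported\n");
        return 1;
    }
    unsigned int apic_id = (ebx >> 24) & 0xFF;
    printf("initial APIC ID of this logical CPU: %u\n", apic_id);
    printf("likely physical core (2 threads/core assumed): %u\n",
           apic_id >> 1);
    return 0;
}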

the way I deal with the system is like a British Gas engineer, where they have a general understanding, but for a specific boiler they summon up the manual on their laptop. where I have a general idea what each thing does, but for the specifics I need to check either the code or the documentation.

that second photo is then the attached drives.


And if you want to see what the UEFI looks like, you can download my MoBo manual and look at it from there.
Yeah, but the thing is, no code is ever perfect since it would take insane amounts of time to get the code near perfection, but never at perfection. In any code (or any PC hardware in that matter), there is always some margin of error.
you can get some parts of a system perfect, eg say the calculating of square roots, which uses the Newton-Raphson algorithm:

to calculate squareroot(m) you do:
x(1) = 1 for the initial approximation,

and
x(t+1) = (x(t) + m/x(t)) / 2 for successive ones.

eg m = 2, x(1) = 1,
x(2) = 1.5
x(3)=1.41667
x(4)=1.41422
x(5)=1.41421
x(6)=1.41421

and decide by deriving some theorems on paper when to halt the algorithm, eg here at the 6th iteration.

you also need to filter out negative numbers. Because the scenario is a small bit of code, you can get it perfect by raking over the logic rigorously, though you do need to derive some theorems to guarantee accuracy.
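as a rough sketch in C, with a simple tolerance test standing in for the theorems that decide when to halt:

/* Small sketch of the square-root iteration above: x <- (x + m/x)/2,
   rejecting negative input and stopping once successive values agree
   to within a chosen tolerance. */
#include <stdio.h>
#include <math.h>

double newton_sqrt(double m)
{
    if (m < 0.0)
        return -1.0;              /* filter out negative numbers */
    if (m == 0.0)
        return 0.0;

    double x = 1.0;               /* x(1) = 1 */
    for (int i = 0; i < 100; i++) {
        double next = 0.5 * (x + m / x);
        if (fabs(next - x) < 1e-12)   /* successive values agree: halt */
            return next;
        x = next;
    }
    return x;
}

int main(void)
{
    printf("sqrt(2) ~= %.5f\n", newton_sqrt(2.0));   /* ~1.41421 */
    return 0;
}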

but when the quantity of code or data becomes large some subtle bugs will get through, eg say a printer driver or a filesystem. Also as hardware eg memory becomes big, it becomes more likely that some flip flops corrupt. I think the disks of internet servers become slightly corrupted from overuse, eg they'll regularly make backups of all files, where a script can be perfectly good, then one day an error emerges inexplicably.

eg once my boot IDE drive's first sector got corrupted, as that sector is usually the first one accessed in each session, and after a year could be accessed 1000x, eventually it can wear out!

and eg with newton raphson above, if that bit of the server disk corrupts, the code could become wrong!

Speaking of it, this "no code is ever perfect" and "why", is nicely explained in GameRanx video that explains why players fall through in-game surfaces;
(Either watch the whole video or from 4:25.)

View: https://www.youtube.com/watch?v=mjQtLrQYIIg
ok, I did watch the full video. with the Amiga 500, people created ray tracing programs for photorealism, but that took many hours to create one frame, which might be saved to disk for a later animation. I think nowadays they no longer use ray tracing as its too slow, but just use simplifying assumptions. With ray tracing, things like shadows and reflections would be accurate, eg 2 silver spheres, would reflect in each other, and you'd get reflections of the reflections etc. But today it would probably just be a silver sphere without reflections!

My research says these unknown ports are Thunderbolt 3.

Source:
(At 1:00 mark.)

View: https://www.youtube.com/watch?v=nsIcANeu1l4

I have uploaded photos of the 3 sockets, one looks like a headphone socket:

http://www.directemails.info/tom/socket1.jpg

one looks like a USB socket:

http://www.directemails.info/tom/socket2.jpg

and one doesnt look like anything I know of, marked by a question mark:

http://www.directemails.info/tom/socket3.jpg

This is an age old problem where it takes both sides (manufacturer and consumer) to work together. On manufacturer's side, idea is to create a product that does what it is supposed to do, with highest efficiency. And on consumer's side, it takes some learning on how to operate the said product. Hence why many products come with user manual, to teach the consumer how to use the said product.

this HP laptop doesnt come with any manual, not even a quickstart, there should be one online, but that is a catch-22 problem if its your first PC! that you need to know how to use it in order to install it to then get to the internet to then read how to install it!

its also bad design to not label newer sockets, laptops will usually label a headphone socket with a pictogram of a headphone, and maybe a microphone socket with a pictogram of a microphone.

if the technician at the PCWorld helpdesk didnt know the sockets, then they need to be labelled!

I blame HP here, for shabby communication!

the customer is always right!

Now, if the consumer isn't bothered to read and learn from the user manual, then it's the consumer's fault when the product doesn't work as it's supposed to. Not all products are simple enough to use that one can just use their intuition (or the lack of it) to make sense of it and use the product as it is intended.
I honestly think desktops are beyond the scope of most people, the internet only became mainstream via smartphones.

only more enlightened people have PCs. but virtually everyone today has a smartphone.

a lot of stuff today involves a lot of runaround for any usage, you have to boot up the set top box, boot up the catch up app, select browse categories, select category, go through page after page of programs, sometimes no description, and eg I watched a BBC catchup australian series called "black snow", where I had to fast forward through all earlier episodes to watch the next one. usually catchup goes directly to where you last reached.

and the BBC catchup rewind doesnt work properly, after a few seconds could rewind 20 minutes, and no progress indicator, whereas say C5 and C4's catchup rewind is reasonable.


the ITV catchup app had a huge amount of dramas, but all arranged alphabetically, where it took 10 years to reach things later in the alphabet. in the end I started using search instead and say putting z to find what was at the end of the list! because most stuff I watched was near the start of the alphabet eg the great Angela Black drama.

with online payments, sometimes they send you directly to your bank account, but you need to authenticate with say the card reader, enter numbers on the card reader, enter numbers on the website, a lot of people dont have that level of patience! they'll say "I cant be done with this",

you need both IQ and patience, eg to go through hurdle after hurdle after hurdle, with some involving long waits, to get interesting things done. and then the phone queues where they tell you all kinds of stuff before you get to the options, and then you wait exactly 30 minutes before someone takes the call!

this might be deliberate to put off most people! giving people the runaround, where eventually they quit!

with my HP pavilion, the DVD drive one day vanished from the desktop, and I phoned their technical support, and they said I had to reinstall windows, because they ongoingly change which one is supplied, and dont keep track, so I stupidly reinstalled windows XP. But in later times I realised all I had to do was buy a new drive and install that, no need to reinstall windows!


eGPU is a desktop (tower) dedicated GPU that is jerry-rigged
jerry-rigged = ?


to work with a laptop (by using the PCI-E x16 slot with riser cable, that usually turns into Thunderbolt or USB-C connector, to be connected to a laptop + ATX PSU to power the GPU itself). Due to that, you can easily move the GPU around. If it comes inside eGPU dock, just remove the GPU from the dock and plug it into desktop PC MoBo. And vice-versa.


There is no brand preference when it comes to GPUs. All GPUs (be it Nvidia, AMD Radeon or Intel ARC), will work with Intel and AMD CPU.

It's like when you have a car and are asking: can i put Michelin tires under my car or i'm only able to use Goodyear tires? :LOL:
yeah, I asked that because I dont assume anything!

eg memory chips are really specific to the cpu or motherboard, at least they were approx 2010. get the wrong memory chip and it fits perfectly in the slot, but the machine will crash.

I dont know if they have standardised these now.

with optical disks, my first DVD drive by Liteon only accepted some recordable DVDs, not all!

I know someone with a Windows 7 laptop, which doesnt accept I think Verbatim DVD-R's.

whereas more recent machines have not had such problems.


Only PCI-E 5.0 devices currently out, are M.2 NVMe SSDs. GPUs, even the latest ones, operate at PCI-E 4.0 protocol.

As of current moment, i don't know which PCI-E protocol the upcoming Nvidia RTX 50-series and AMD RDNA4 series are going to use. If release dates aren't pushed back, then AMD RDNA 4 is expected to launch in Q3 2024 and Nvidia RTX 50-series in Q4 2024.
I'll probably go for 4.0 then, same way I went for Windows 10 rather than 11, as 10 is matured, whereas 11 is probably still early days. because with 5.0, they'll eventually release 6.0.

On micro-ATX, ATX and E-ATX MoBos, the spacing of the PCI-E slots is the same. But to combat the issue of many GPUs being dual-, triple- or even quad-slot, MoBo manufacturers have put PCI-E x1 slots between the PCI-E x16 slots, so that you actually have space to install another PCI-E x16 card (be it a GPU or something else) when your main GPU is multi-slot and overhangs the slot below it.

E.g here's an image of my MoBo:

1024.png


From top to bottom, i have:
* PCI-E x1
* PCI-E x16
* PCI-E x1
* PCI-E x1
* PCI-E x16
* PCI-E x1
* PCI-E x16

Usually, the top most PCI-E slot is x1, so that CPU cooler has more room to be in. Also, 1st M.2 slot is usually just right of the 1st PCI-E x1 slot.
2nd slot from the top is usually PCI-E x16 slot, which is the main slot where to plug in the GPU.
Now, my MoBo has 2x PCI-E x1 slots before there is 2nd PCI-E x16 slot. This means that i can put triple-slot GPU into my main slot and then install 2nd GPU into 5th slot from top (2nd PCI-E x16 slot) without issues.
that spacing ought to mitigate the problem.


Well, if you want the latest, then CPU wise;
AMD Ryzen 7000 series or Intel 14th generation.
what I was thinking was to go for an AMD 4 core and 8 threads, that way I can test out my code for AMD multithreaded,

is there an MSI mobo with AMD 4 core 8 threads, and say PCI-E 4.0 and x16?

those 8 lanes, 16 lanes etc, are what I was saying about buses. if 16 wires are a million bytes per second, at that same technology, 32 wires will be 2 million bytes per second, 64 would be 4 million etc,

they tend to double the number each step, as that is aligned with the smaller widths,

they could just make say 128 lanes, for some blistering data rate, but they wont make as much money!
instead they'll do 8 then when that market dries up, do 16, then when that dries up do 32, etc, and thereby make much more money!

eg instead of a card, have a square socket with 16 x 16 lanes, with some ginormous leap in speeds. but they wont for cynical reasons, except maybe for the military!

even a slow technology could be made really fast for data shunting, just by having more parallel wires! ie a wider bus, and eg have memory spread across simms,

eg 8 simms of 1Gig, would shunt data 8x as fast as 1 simm of 8Gig.

code loading also 8x as fast, eg load 64 bits from each simm at a time, and you load 8 instructions at a time, where you could say keep all code in 8 x 64 bit aligned batches.



there is often a conflict of interest for giving consumers too much stuff, namely less money to be made.


MoBo chipset wise;
When going with AMD CPU - A620, B650, B650E, X670 and X670E.
When going with Intel CPU - B760, H770, Z790 (also compatible are: H610, B660, H670, Z690).

A bit more about different chipsets;
On AMD side:
A-series is the bottom of the barrel and cheapest. I don't suggest A-series.
B-series is mid-tier and good enough for most people. But if you're going with high-end CPU, like Ryzen 7 or 9, i suggest getting X-series instead.
X-series is high-end and best what you could get. Also the most feature rich.
E-suffix essentially means "Extreme edition" and is upgrade over initial chipset.

I'll definitely avoid the cheap end, and go for the highest I can afford, but also subject to other constraints.

high speed things can create more heat and use more electricity, eg with bitcoin mining, one guy on Youtube researched this, and found his electric bill went through the roof where it was a false economy!

people dont realise that data and computation create heat and cost money! it costs money and generates heat to shunt an electron, unless you have superconductivity!

as Samsung found out when their batteries started to explode from the excess heat of the torrents of data.

On Intel side:
B-series is for business use and usually offers least amount of features (since there is no need for flashy RGB or better audio in an office PC).
H-series is for home use and is mid-tier. Good enough for most people. But if you go with high-end CPU, like Core i7 or i9, especially when CPU has K-suffix, i suggest getting Z-series instead. (K-suffix on Intel CPU means that you can overclock the CPU.)
Z-series is high-end and for enthusiasts. Also the most feature rich.

Diff between H610 and H670;
H610 is the bottom of the barrel and cheapest of them all. Essentially comparable to the AMD A620 chipset, due to lack of features. Also, reliability of H610 (alongside A620) is questionable. H670, on the other hand, is a solid mid-tier option and only a few notches down from the Z-series.

The two MoBos in my and missus'es build, are both Intel Z-series (Z97 and Z170 chipset

where you said in the earlier posting:

Years ago, one of my friends tried to teach me how to write simple script that automatically renames the file names to the ones i prefer. Since i had to manually type in the output name into the script for each file + the full source path of the file (so script knows which file it needs to rename into what), i found it way too tedious to do.
Mainly because the manual work i had to do, was far greater than my usual preferred way of: navigating into correct folder, select the correct file, press F2 to open up filename rename option and type new name as file name. All that individually for each file.

this is a metalevel problem, you should only use a system if there is "gain", eg it should be less effort to use the system than not to, other forms of gain are eg savings of time, money, stress, space

eg shelving allows more stuff per sq metre of floor, the shelves cost money, but you save space.

or a car costs money, but now you can get somewhere in 10 minutes instead of 30.

the example your friend gave, could potentially be didactic, where its just to teach an idea rather than to be practical. eg with school maths, most of the problems are didactic, not practical, eg integrate 1/sin( x ), not very practical!

whether you should use scripts is a metalevel question, depends where you are, where you are trying to go, what you are doing, what you already know,

it is certainly empowering to learn some scripting, but it is also empowering to learn how to use Excel, and empowering how to use Photoshop, and empowering to learn some carpentry, and empowering to learn some basics of consumer law and contract law: you can evade and deal with a lot of disputes, ...

but ultimately you dont have the time to do all the empowering things you ought to,

but you ought to do SOME empowering things, people will generally enthuse about the empowering things they know, where often they know these by being at a certain place at a certain time,

eg I learnt C merely because I had an Amiga 500 computer and wanted to program it, and the examples in the manuals were all in C.

people are victims of their era and other circumstances, where they know the things which were pushed in their era or circumstance, eg in my era at school, everyone was taught BASIC programming, but not examinable.

people who have too much money become victims of their wealth, where they throw money at problems and become lazy, and often spend all their time managing their money, and can become overweight because they can afford to!

at uni, I didnt have the money or time to become overweight! but decades later I did, but over several months got the problem under control, losing more than 20kg!

anyway success is often from circumstances, which force you to be productive,

but to be really elite is to rise above circumstances and CHOOSE things rather than be forced into things, and this is tricky to get right, and requires metalevel skills.
 
Last edited:

Aeacus

Titan
Ambassador
what I am requesting is the MSI version of these kinds of images for my ancient Gigabyte:

Since i have Z-series MoBo and UEFI, it has sub-menus under sub-menus and it took me some time to find the correct spot, from where to disable CPU cores. So, there are plenty of images. :cheese:

ZWTEQVU.jpg


jsa0szq.jpg


urG0gMh.jpg


l8mybN8.jpg


ecBgMIP.jpg


N6q2Lx2.jpg


0hS2WFa.jpg


wsfLi8U.jpg


pKm6Vh4.jpg


FVKvKW6.jpg


LmmEE7E.jpg


uqib4SJ.jpg


08ZLTwq.jpg


aYNkfqS.jpg


vOTFPxx.jpg

with the Amiga 500, people created ray tracing programs for photorealism, but that took many hours to create one frame, which might be saved to disk for a later animation. I think nowadays they no longer use ray tracing as its too slow, but just use simplifying assumptions. With ray tracing, things like shadows and reflections would be accurate, eg 2 silver spheres, would reflect in each other, and you'd get reflections of the reflections etc. But today it would probably just be a silver sphere without reflections!
Ray Tracing is a thing again. Started out with RTX 20-series, continued with RTX 30-series and even latest RTX 40-series GPUs support Ray Tracing.
E.g if you were to read reviews of latest GPUs, there are two FPS counters, one is with Ray Tracing disabled, another is with Ray Tracing enabled (since Ray Tracing reduces FPS considerably),
RTX 4070 review: https://www.tomshardware.com/reviews/nvidia-geforce-rtx-4070-review/5
Those charts that have DXR as one of the configurations, have Ray Tracing enabled.

With Ray Tracing (DXR):
Nd3FKdCKE4BrTfgEv9kCGL-970-80.png.webp


Without Ray Tracing:
mry2o6cVcVdTCHwg6qXedL-970-80.png.webp


Ray Tracing as such is an Nvidia thing. On the AMD Radeon side, they have Radeon Rays, which is far inferior to the Ray Tracing Nvidia has.
Further reading if interested: https://www.tomshardware.com/features/amd-vs-nvidia-best-gpu-for-ray-tracing

I have uploaded photos of the 3 sockets, one looks like a headphone socket:

http://www.directemails.info/tom/socket1.jpg
:cheese:
That's 3.5mm mic/headphone combo jack.

:cheese:
USB Type-A port (the rectangular one), either USB 2.0 or USB 3.0.

and one doesnt look like anything I know of, marked by a question mark:

http://www.directemails.info/tom/socket3.jpg
That's SD/microSD card slot.

if the technician at the PCWorld helpdesk didnt know the sockets, then they need to be labelled!
Nowadays, i've seen quite a few people being hired into positions that require the know-how, but many don't have it. Makes me wonder who even hired them. :unsure:

E.g these 3 ports you have, those can be identified by what the ports themselves look like. It's not like you have some obscure, rarely used ports there, where one struggles to understand what they are.

the customer is always right!
Yeah.... No.
It takes an expert within that specific field to tell what is what, not the customer who may "think" they are right without any prior knowledge within that specific field.
I honestly think desktops are beyond the scope of most people, the internet only became mainstream via smartphones.

only more enlightened people have PCs. but virtually everyone today has a smartphone.
Internet was mainstream long before there were smart phones. Though, what sets desktop PC apart from smart phone, is the ease of use, or more precisely, what you can do with one. With smart phone, your options are limited. Phone calls, SMS/MMS, take picture, record video, surf the web (and if you install gaming apps, then game on it as well). Where as of desktop PC, what you can do with it - is limitless. There is no task desktop PC can't do.

As of having either desktop PC or smart phone. Well, most smart phones are cheaper than desktop PCs, so, more people have access to one. Also, smart phone is very portable and serves a main function of calling to others while on the go.

Smart phone that i have, i bought it 2 years ago. Before that, i had a mobile phone (various Nokia phones), since i didn't have a need for a smart phone. Only after i was gifted an activity monitor (Fitbit Inspire HR) did the need arise to get a smart phone, so i can use the Fitbit app and sync my activity monitor via Bluetooth. Though, i still know several people who don't have a smart phone since they don't have a need for one.

jerry-rigged = ?
Definition here: https://www.dictionary.com/e/jury-rigged-vs-jerry-rigged/

But in the context of eGPU, well, just look at the image i shared earlier. It doesn't look good if next to your laptop is desktop GPU in a PCI-E x16 slot and ATX PSU as well, to power the GPU. Whereby one essentially takes incompatible components from another PC (desktop PC in this case) and somehow manages to combine them so, that they somehow work with a laptop.

eg memory chips are really specific to the cpu or motherboard, at least they were approx 2010. get the wrong memory chip and it fits perfectly in the slot, but the machine will crash.

I dont know if they have standardised these now.
Nowadays, RAM is quite standard. Either DDR4 or DDR5. Though, since AMD CPUs are capricious when it comes to RAM, you need to look out for those DIMMs that have "AMD compatible" written on them. Else-ways, you may have issues running at frequencies faster than the JEDEC default frequencies.

Best option is to read specific MoBo memory QVL list, where MoBo manufacturer has tested several RAM sticks to see if they work with their MoBo and if they do, do they work in 1, 2 or all 4 slots and also at what frequency and voltage. This is the safest bet to buy compatible RAM for your PC.
E.g i did the same when buying RAM for my Skylake and Haswell builds, i picked the RAM listed in MoBo memory QVL list, to have a guarantee the RAM working as tested by MoBo manufacturer.

is there an MSI mobo with AMD 4 core 8 threads, and say PCI-E 4.0 and x16?
There are no AMD Ryzen 7000-series CPUs with 4c/8t. Lowest you can get, is 6c/12t,
full list here: https://www.amd.com/en/processors/ryzen
(Scroll down to Specifications.)

But if you want to have AMD CPU with 4c/8t, then best would be Ryzen 5 3400G. E.g like so:
(G-suffix with AMD CPU means that CPU has iGPU in it, Radeon Vega 11 in this case.)
PCPartPicker Part List

CPU: AMD Ryzen 5 3400G 3.7 GHz Quad-Core Processor (£171.76 @ Amazon UK)
Motherboard: MSI MPG X570S EDGE MAX WIFI ATX AM4 Motherboard (£302.18 @ Amazon UK)
Memory: Corsair Vengeance LPX 32 GB (2 x 16 GB) DDR4-3200 CL16 Memory (£75.93 @ Amazon UK)
Total: £549.87
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2023-12-24 13:36 GMT+0000


Ryzen 3000-series uses AM4 CPU socket, meaning that best chipset to use is X570 and also, only DDR4 RAM is supported. DDR5 RAM is supported only with AMD Ryzen 7000-series CPUs with AM5 CPU socket.
And there's another caveat. When using Ryzen 3000 G-series CPU, MoBo is able to run PCI-E x16 slot only in x8 mode.

So, a choice:
#1 4c/8t AMD CPU - DDR4 RAM - PCI-E 3.0 x8.
#2 6c/12t AMD CPU - DDR4 or DDR5 RAM - PCI-E 4.0 x16.

they could just make say 128 lanes, for some blistering data rate, but they wont make as much money!
Oh, CPU with 128 PCI-E lanes is very much a thing.

AMD Threadripper CPUs (4 of them) all have 128 PCI-E lanes. And the flagship CPU, Ryzen Threadripper Pro 3995WX, has 64 cores and 128 threads.
The AMD Threadripper CPU is the counterpart to the Intel Xeon CPU. Both are workstation/server class CPUs.
Article: https://www.pcworld.com/article/393...-pcie-lanes-and-8-channel-memory-support.html

In the server market, 128 PCI-E lanes makes sense since there's a need for that (else-ways, a 128 PCI-E lane CPU wouldn't be created). But in the consumer market, a 20 to 24 PCI-E lane CPU is the most one would need. With 24 PCI-E lanes in the CPU, one can easily connect a GPU (using 16 lanes) and 1-2 M.2 SSDs (each using 4 lanes of PCI-E).

even a slow technology could be made really fast for data shunting, just by having more parallel wires! ie a wider bus, and eg have memory spread across simms,

eg 8 simms of 1Gig, would shunt data 8x as fast as 1 simm of 8Gig.
Yes, that i know of.
This is also the reason why i don't have 1x 16GB DIMM or 2x 8GB DIMMs in my system, but instead i have 4x 4GB DIMMs, for total of 16GB of RAM. Sure, MoBo memory controller has more workload on it due to having more DIMMs, but i have high-end MoBo that can sustain it easily. Cheaper MoBos - not so much.
 

Richard1234

Distinguished
Aug 18, 2016
277
5
18,685
Since i have Z-series MoBo and UEFI, it has sub-menus under sub-menus and it took me some time to find the correct spot, from where to disable CPU cores. So, there are plenty of images. :cheese:

the images are too glitzy for the 1980 architecture, they must be rejigging from 32 bit or 64 bit, and then rebooting back from 1980 with new settings?

which is a valid pathway, but would be a larger and more complex ROM, eg that dragon image will probably devour more memory than was possible with the 1980 architecture!

ZWTEQVU.jpg


jsa0szq.jpg
I see USB Floppy as item 4 in that list, presumably you can rearrange the order? eg make USB floppy Boot option #1?


when it says processor cores, does that mean all threads on that core?

ie presumably each here being 2 virtual CPUs?

virtual CPU is probably a better word than thread and hyperthreading, because the word thread usually means a subtask. the virtual CPU no doubt is a thread of the RISC core, but its not a thread for the programmer! for the programmer it is a CPU. when they say core, I think they mean RISC core.

x86 CPUs are effectively a decode/translation layer above a RISC-like core.


Ray Tracing is a thing again. Started out with RTX 20-series, continued with RTX 30-series and even latest RTX 40-series GPUs support Ray Tracing.
E.g if you were to read reviews of latest GPUs, there are two FPS counters, one is with Ray Tracing disabled, another is with Ray Tracing enabled (since Ray Tracing reduces FPS considerably),
RTX 4070 review: https://www.tomshardware.com/reviews/nvidia-geforce-rtx-4070-review/5

I think for Hollywood films, they may use raytracing, as speed doesnt matter because the rendering is only done when making the film, not when showing the film.

there is a logical flaw with ray tracing: ray tracing starts with the eye, and traces rays from the eye through each pixel on the viewing rectangle, but that isnt how reality works! with reality, rays end up at the eye, they dont start at the eye. people used to think seeing was active, rather than reactive: they thought a force went out from your eye till it reached an object, like with ray tracing. in fact a force comes from the object TO your eye, not FROM your eye to the object.

with reality, rays start at light sources, and then bounce around like pinball, with some wafer thin percentage of them reaching your eye.

I am not convinced tracing rays from the eyes will form the same image as tracing rays from light sources.

eg say you have the sun behind some ginormous column of cloud cover, 99.999999% of the ambient light will originate from the sun, but to trace back from the eye seems a fool's errand, its at best a guesstimate. how are you going to ray trace through a cloud? each pixel of your eye originates in a nightmarishly complex way from the sun!

but ray traced images nonetheless are impressive, its the old dilemma of should an image be accurate or should it look good? with photoshop, you can say scan an image, and then make it better than the original!
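to make the eye-first scheme concrete, here is a rough hedged sketch (one made up sphere and light direction, ASCII output, no shadows or reflections), just to show that every pixel's ray is fired from the eye outwards:

/* Tiny sketch of eye-based ("backward") ray tracing: for every pixel,
   fire a ray FROM the eye through that pixel, intersect it with a single
   sphere, and shade by how squarely the surface faces a light direction.
   All scene numbers are arbitrary. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const int W = 60, H = 30;
    const double cx = 0, cy = 0, cz = -3, rad = 1;  /* sphere centre and radius */
    double lx = -0.5, ly = 0.7, lz = 0.5;           /* direction towards the light */
    double ll = sqrt(lx*lx + ly*ly + lz*lz);
    lx /= ll; ly /= ll; lz /= ll;

    const char *shades = ".:-=+*#%@";               /* dark to bright */

    for (int j = 0; j < H; j++) {
        for (int i = 0; i < W; i++) {
            /* ray from the eye at the origin through pixel (i, j) on a
               plane at z = -1 (y flipped so up is up) */
            double dx = (i - W / 2.0) / (W / 2.0);
            double dy = -(j - H / 2.0) / (H / 2.0);
            double dz = -1.0;
            double dl = sqrt(dx*dx + dy*dy + dz*dz);
            dx /= dl; dy /= dl; dz /= dl;

            /* intersect with the sphere: solve |eye + t*d - centre|^2 = rad^2 */
            double ox = -cx, oy = -cy, oz = -cz;    /* eye minus centre */
            double b = 2 * (ox*dx + oy*dy + oz*dz);
            double c = ox*ox + oy*oy + oz*oz - rad*rad;
            double disc = b*b - 4*c;
            if (disc < 0) { putchar(' '); continue; }  /* ray misses the sphere */

            double t = (-b - sqrt(disc)) / 2;       /* nearest hit */
            double nx = (ox + t*dx) / rad;          /* surface normal at the hit */
            double ny = (oy + t*dy) / rad;
            double nz = (oz + t*dz) / rad;
            double diff = nx*lx + ny*ly + nz*lz;    /* Lambert-style shading */
            if (diff < 0) diff = 0;
            putchar(shades[(int)(diff * 8.999)]);
        }
        putchar('\n');
    }
    return 0;
}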



https://www.tomshardware.com/reviews/nvidia-geforce-rtx-4070-review/5
Those charts that have DXR as one of the configurations, have Ray Tracing enabled.

With Ray Tracing (DXR):
Nd3FKdCKE4BrTfgEv9kCGL-970-80.png.webp


Without Ray Tracing:
mry2o6cVcVdTCHwg6qXedL-970-80.png.webp


Ray Tracing as such is an Nvidia thing. On the AMD Radeon side, they have Radeon Rays, which is far inferior to the Ray Tracing Nvidia has.
Further reading if interested: https://www.tomshardware.com/features/amd-vs-nvidia-best-gpu-for-ray-tracing


:cheese:
That's 3.5mm mic/headphone combo jack.

it doesnt have any label, so I am left to guess!

I have never heard of 1 socket doing both microphone and headphone, it needs a label.

:cheese:
USB Type-A port (the rectangular one), either USB 2.0 or USB 3.0.
the confusion is because I expect USB sockets to have plastic segments and not have lids, and often colour coded eg USB3 a blue segment, but this one is just metal. usually USB sockets have a specific hieroglyph which looks like a network. I expected any old format USB socket on this machine to be USB3 and to have a blue segment in it.

no label, no manual, no nothing, its not good enough. With CD technology, you have to put the relevant logos, otherwise it is infringement of copyright. All CD equipment has to be certified compatible by Sony or Philips, and the logo shows it has been certified.

I think USB is copyrighted by Intel, I think HP may have infringed the copyright by not putting the well known USB logo!

it is rogue behaviour to not put some form of USB hieroglyph or text.

I checked my Fujitsu Siemens of probably 2007 era, and all the sockets have hieroglyphs to one side, eg the 2 USB sockets on the one side have one USB hieroglyph to the left, the headphone and microphone sockets have hieroglyphs. on my PC, the headphone and microphone sockets are colour coded, I think the headphone socket is green. there is a green, blue and pink socket.


That's SD/microSD card slot.


Nowadays, i've seen quite a few people being hired into positions that require the know-how but many don't have it. Makes me wonder, who even hired them. :unsure:
the local PCWorld appear to hire MBA students, in order to get free or low paid skilled labour!

in the old days, Dixons, which is the same firm as PCWorld, was notorious for hiring people who didn't have the remotest clue, and one would always regret asking their advice!

maybe nothing has changed!

E.g these 3 ports you have, those can be identified by how the ports themselves look like.
that's not good enough; those ports don't look like any I am familiar with. I have never seen a USB socket with a lid and a metal segment, and all audio sockets I have seen have a circular inner surround, which this one doesn't.

there is a "communication" problem: even if I guess they are USB, how do I know whether they are USB2, USB3, USB4, etc?

I need some clue via colour coding once you go beyond USB2.

I have built 3 PCs and find it confusing, so that means it's a bad design!

its not user friendly to not label the sockets, its a bit "autistic". I could behave in a non user friendly way, but I wont make many friends!

in any case, I won't be buying anything at all by HP ever again; their inkjets are a load of junk. Epson EcoTank is vastly superior and roughly 10x cheaper: the ink doesn't dry up if unused for months, the images are much better, and one set of EcoTank bottles might be £40 and print 4000 pages (depending on which version of EcoTank), whereas with cartridge inkjets you'd easily need 20 cartridge changes for that. All-in-one cartridges are particularly stupid, as you have to junk the entire cartridge the moment one colour is used up. I don't understand how HP can design an all-in-one cartridge, it is such a bad idea!

https://www.epson.co.uk/en_GB/for-home/ecotank

I asked the PCWorld salesman why Epson continue manufacturing inkjets, when ecotank is so superior, he said: its because the customers demand inkjets, so they give customers what they want.

with ecotank, if one colour runs out, you just buy a new bottle of that ink, which typically costs about 10 quid, pour it in, do some config stuff, and you are ready to go.
It's not like you have some obscure, rarely used ports there, where one struggles to understand what they are.

Yeah.... No.
It takes an expert within that specific field to tell what is what, not the customer who may "think" they are right without any prior knowledge within that specific field.
what I mean is if the customer struggles to understand, perhaps it is the firm that is incompetent. I have built 3 PCs so you can hardly put the blame on me for being stupid!

its like some multi part dramas are a real challenge to follow, eg unrememberable names, similar looking characters, too many subplots etc, is that a problem of the viewer being stupid, or is it bad production? I say it is bad production, I shouldnt have to struggle to understand a drama.

in fact it is intelligent not to guess when using expensive equipment, explain how I am supposed to guess something I havent seen before?

my Fujitsu Siemens laptop had mostly well established sockets, BUT every single socket was labelled.

someone on the internet once gave me some very sound advice, he said: dont assume anything!

my Fujitsu Siemens has all sockets labelled, and my Gigabyte mobo has all relevant sockets labelled, eg SATA, FDD, etc. No guesswork.

with my Fujitsu Siemens I could also change the hard drive and memory without voiding the warranty: it had 2 zones for opening it up, the warranty seal was only on an inner level, and the access panel for the hard drive and memory had no seal. Brilliant design. This HP Spectre has proprietary screws for everything, so the user cannot change anything. Also, with my Fujitsu I could remove the battery, so if I used it at home or in a hotel I just ran it off the mains. But with the HP I cannot remove the battery, so when I use it from the mains it is continually topping up a full battery, which cannot be good and is wear and tear. How difficult is it to have a battery disconnect switch? the design is atrocious,

batteries dont last forever, which idiot decided to have sealed in unremovable batteries?

I suppose I am the bigger idiot for buying the junk, but I dont seem to have a choice.

Originally, I backed up the hard drive overnight on battery, and the battery ran out midway! So for long operations, eg sector backups, I have to use the mains, and the battery is then continually topped up. It is STUPID design!

plumbers long ago figured out how to make waterproof joins, you dont need a sealed in battery for waterproofing, you can make a waterproof lid.


my camera also has all sockets labelled with hieroglyphs

so it is definitely unacceptable.

the same way a cinema should have all fire exits labelled, with high viz signs visible from any point saying where the nearest one is, if they dont it is probably illegal because of health and safety legislation.

its just snooty and autistic to expect people to know what things are without any clue or label, and net effect is people will buy other products, eg I wont ever buy HP again,

there isnt any manual with the laptop, which is the catch 22 problem, its not good enough!

the machine itself has a font so tiny I need a lens to read the initial boot screen, and by the time the lens is in place, the microscopic text has vanished! The machine HASN'T BEEN BETATESTED; the firm is incompetent. It was a big and expensive mistake to buy this laptop. Someone afterwards gave me some good advice: never buy a laptop with less than a 16" screen. Also the touchscreen often registers a click when the mouse pointer is merely hovering above an item, which once caused an ebay bid to go through too early. With ebay, in aggressive markets you need to bid in the last few seconds, to prevent other bidders having enough time to react and outbid you. Luckily in that case I wasn't outbid, but this is monetary jeopardy.

with the touchscreen you can stretch out webpages like on a smartphone, but you can't stretch out system windows, and if you enlarge the fonts, things don't fit on the screen. Ultimately a mistake to buy a 13" screen; the tiny fonts are migraine inducing. As you grow older I think the eyes change and you can't read smaller fonts so well. The machine hasn't been betatested.

I have a very dim view of HP, especially when they told me to reinstall XP when the DVD drive vanished from the desktop of my HP Pavilion in 2004, when in fact vastly better idea to just buy a new DVD drive.

within a few months, the HP Spectre no longer booted, and a lot of stuff vanished from the hard drive, and vanished from disk salvage also, and not on One Drive also, this is unheard of for me!

with other machines, even after years of use, if a drive wore out, I could still salvage everything, but not with this HP Spectre.


Internet was mainstream long before there were smart phones.
when I said mainstream I meant used by most people, most people didnt use the internet,

a lot of people used the internet, but an even larger number DIDN'T use it; don't confuse "lots of people" with "most people"! Eg lots of people go to soccer matches, but most people never go to soccer matches: most women don't, and that is a large chunk of the population with no interest in soccer. Add in the men who don't go, and you have "most people" not going to soccer matches.

its just a matter of numbers, how many stadiums are there? a few per city, usually 1 per soccer team, how many people can fit in a stadium? Wembley is 90000.

basically the numbers only add up to a small percent of the population, conclusion: most people dont go to soccer matches. but lots of people do go to soccer matches, eg 90000 might go to a Wembley match.

its only with smartphones that there was a huge increase in internet usage,

and things like Facebook work by simulating the experience of texts.

most people did use mobiles, whereas only lots of people had PCs or used the internet,

even now, most people dont reply to emails, or just give 1 liner replies. so even emails now arent really used by most people.

majority of people dont give inline replies to emails, they'll just comment at the start of the email.

to the extent I have to comment only at the start, as most people cannot cope with inline replies, thankfully Tom's hardware does inline replies!

Though, what sets a desktop PC apart from a smart phone is the ease of use, or more precisely, what you can do with one. With a smart phone your options are limited: phone calls, SMS/MMS, taking pictures, recording video, surfing the web (and if you install gaming apps, then gaming as well). Whereas with a desktop PC, what you can do is practically limitless.
my one cant do QR codes and mobile apps!

its only viable for me to work at a tower system, laptop mice are unsatisfactory.

laptop keyboards are unsatisfactory, eg this HP Spectre doesnt have a numeric keypad, and I need a nonstandard escape key for the function keys, where the F1, F2 etc are in a tiny illegible font,

also tricky to find how to get to the early startup, with my Gigabyte mobo, it says plainly at the start to press Del to get to the startup options. with the Spectre it is hidden away elusively.

its touchscreen is great for the web version of Google Earth where I can spin around and stretch the planet with my fingers, but the screen gets really smudged up with fingerprints. The .exe version of Google Earth I cant do all that!


As for having either a desktop PC or a smart phone: well, most smart phones are cheaper than desktop PCs, so more people have access to one. Also, a smart phone is very portable and serves its main function of calling others while on the go.
most are cheaper maybe, but a top end Samsung is more expensive than my HP Spectre laptop, which PCWorld told me was the best laptop in the shop,

if by desktop you specifically mean tower systems, and dont want top end, you could build one cheaper than a top end Samsung or Apple smartphone.

my Samsung Galaxy Note 4 cost me 460 quid new on ebay, 600 in the shops in 2015, whereas my PCs generally cost me some 500 to build.

obviously if you build a server grade tower, then the sky is the limit for prices and it would eclipse the price of a top end Apple or Samsung smartphone.

The smart phone that i have, i bought 2 years ago. Before that i had regular mobile phones (various Nokia phones), since i didn't have a need for a smart phone. Only after i was gifted an activity monitor (Fitbit Inspire HR) did the need arise to get a smart phone, so i could use the Fitbit app and sync my activity monitor via Bluetooth. Though, i still know several people who don't have a smart phone since they don't have a need for one.

Definition here: https://www.dictionary.com/e/jury-rigged-vs-jerry-rigged/

But in the context of eGPU, well, just look at the image i shared earlier. It doesn't look good if next to your laptop is desktop GPU in a PCI-E x16 slot and ATX PSU as well, to power the GPU. Whereby one essentially takes incompatible components from another PC (desktop PC in this case) and somehow manages to combine them so, that they somehow work with a laptop.

and I thought you meant it was impressive, lucky I asked what jerry rigged meant!

its a dog's breakfast to have components strewn around the place, but at the same time that does happen eg with USB hubs and power cables to drives, where I have so many hard drives I run out of drive labels on Windows! I usually keep most of them unconnected.

Nowadays, RAM is quite standard: either DDR4 or DDR5. Though, since AMD CPUs are capricious when it comes to RAM, you need to look out for DIMMs that have "AMD compatible" written on them. Else-ways, you may have issues running frequencies faster than the JEDEC defaults.
frequencies are the big problem, its not good enough to guess, but the only proper option is what you say later about only buying those in the mobo manual's compatibility list.

at school in history we were taught that one of the big breakthroughs of the industrial revolution was standardisation, eg the design of screws, where they standardised the geometry and direction of the thread, and eg lightbulbs. You get division of labour, where different firms can make different parts and everything connects together without problem. But with memory they didn't properly standardise. Similarly car tyres aren't standardised, there are many, many different sizes: this is very inefficient, as Goodyear have to produce a huge range and each size has a smaller market, ie you get a division of the market.

the reason PCs are so successful, is so much of the architecture is standardised and open, but with memory sockets they failed to standardise properly, the CPU sockets also seem nonstandard.

its not rocket science to standardise such things! and its stupid to not standardise.

the simms should have some notches, so you cant insert an incompatible one.

I bought a mobo + cpu + memory combo from Maplins, and the machine kept crashing when installing the mobo drivers, the moment it reached installing .net, and eventually on advice from someone, I replaced the memory with one in the mobo manual, a Corsair, and everything then worked perfectly. I actioned a refund on the price of the Corsair from Maplins, but they had to call in the manager to approve the refund.

that combo is an example of what happens when people guess! Maplins had guessed the memory simm, and it worked for many minutes, then a total crash of the PC, every time on installing .net.

but with the correct simm by corsair, it sailed through the .net installation, and I would leave it running overnight sometimes, eg backing up large hard drives, and no problem the next day.
when I was learning to drive, I asked the instructor about when to overtake, because there might be an oncoming vehicle. He said: never guess, only overtake if you can see the entire scenario ahead, don't overtake blind. If it is impossible to see, eg a broken down vehicle ahead, you have to "peep and creep": move very slowly into the unknown, so an oncoming car sees you gradually emerge into view and has time to halt, if he is also driving properly with safe distances. Crashes only happen because people drive stupidly, eg rushing into the unknown and assuming no vehicles are there, or driving so close to the next vehicle that they don't have time to react to it braking.

sometimes you have to guess, but not with the sockets of some expensive technology!

LABEL ALL SOCKETS!

ITS NOT DIFFICULT, ITS NOT EXPENSIVE, ITS NOT ROCKET SCIENCE!

Best option is to read specific MoBo memory QVL list, where MoBo manufacturer has tested several RAM sticks to see if they work with their MoBo and if they do, do they work in 1, 2 or all 4 slots and also at what frequency and voltage. This is the safest bet to buy compatible RAM for your PC.
E.g i did the same when buying RAM for my Skylake and Haswell builds, i picked the RAM listed in MoBo memory QVL list, to have a guarantee the RAM working as tested by MoBo manufacturer.


There are no AMD Ryzen 7000-series CPUs with 4c/8t. The lowest you can get is 6c/12t,
full list here: https://www.amd.com/en/processors/ryzen
(Scroll down to Specifications.)
what I was told in the 2006 era, is that AMD might manufacture a 4 core, but if 1 of the cores fails quality control, they deactivate that core and market it as a 3 core,

if 2 fail, they deactivate those and market it as a 2 core,

if 3 fail, they deactivate those and market it as a 1 core,

if all 4 fail, they junk the cpu!

the 6 core is probably an 8 core, where 1 or 2 cores have failed,

I would imagine they always manufacture 1, 2, 4, 8, 16, etc cores. so any core count which isnt a power of 2, eg 6 and 12 are probably ones where some of the cores have failed quality control.

you probably wont use those extra cores anyway!

there is an anecdote of a manufacturer ordering a large quantity of chips from a japanese manufacturer insisting on a 1% defect rate. a large box arrived, and a small box arrived, with a note, that the small box was the 1% defective ones! the large box no defects. ie the japanese had total quality control.

I think today's AMD is probably like that, where they will test all manufactured CPUs, and some will have defects, eg one speck of dust might land on the circuits during manufacture, no matter how careful you are, some specks of dust might get in a clean room! it is unfalsifiable whether the clean room really is clean! will you trawl the room with a drone + microscope?

when I installed the screen protector on my Samsung smartphone, I think 5 specks of dust landed under the protector.


But if you want to have AMD CPU with 4c/8t, then best would be Ryzen 5 3400G. E.g like so:
(G-suffix with AMD CPU means that CPU has iGPU in it, Radeon Vega 11 in this case.)
PCPartPicker Part List

CPU: AMD Ryzen 5 3400G 3.7 GHz Quad-Core Processor (£171.76 @ Amazon UK)
Motherboard: MSI MPG X570S EDGE MAX WIFI ATX AM4 Motherboard (£302.18 @ Amazon UK)
Memory: Corsair Vengeance LPX 32 GB (2 x 16 GB) DDR4-3200 CL16 Memory (£75.93 @ Amazon UK)
Total: £549.87
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2023-12-24 13:36 GMT+0000


Ryzen 3000-series uses AM4 CPU socket, meaning that best chipset to use is X570 and also, only DDR4 RAM is supported. DDR5 RAM is supported only with AMD Ryzen 7000-series CPUs with AM5 CPU socket.
And there's another caveat. When using Ryzen 3000 G-series CPU, MoBo is able to run PCI-E x16 slot only in x8 mode.

So, a choice:
#1 4c/8t AMD CPU - DDR4 RAM - PCI-E 4.0 x8.
#2 6c/12t AMD CPU - DDR4 or DDR5 RAM - PCI-E 4.0 x16.
I'll have to go for the 6c/12t because of the x16!

Oh, CPU with 128 PCI-E lanes is very much a thing.

All four AMD Threadripper Pro CPUs in that line-up have 128 PCI-E lanes. And the flagship CPU, Ryzen Threadripper Pro 3995WX, has 64 cores and 128 threads.
AMD Threadripper is the counterpart to Intel Xeon. Both are workstation/server CPUs.
Article: https://www.pcworld.com/article/393...-pcie-lanes-and-8-channel-memory-support.html

In the server market, 128 PCI-E lanes make sense since there's a need for that (else-ways, a 128 PCI-E lane CPU wouldn't be created). But in the consumer market, a 20 to 24 PCI-E lane CPU is the most one would need. With 24 PCI-E lanes in the CPU, one can easily connect a GPU (using 16 lanes) and 1-2 M.2 SSDs (each using 4 PCI-E lanes).


Yes, that i know of.
This is also the reason why i don't have 1x 16GB DIMM or 2x 8GB DIMMs in my system, but instead i have 4x 4GB DIMMs, for total of 16GB of RAM. Sure, MoBo memory controller has more workload on it due to having more DIMMs, but i have high-end MoBo that can sustain it easily. Cheaper MoBos - not so much.
sounds like a good plan!

some of the super efficiency has to be done by the hardware architecture and the firmware,

with CPUs, big data + instruction caches also boost things,

eg I once dabbled with SCSI CD-RW on AmigaOS, writing directly to the sectors without a filesystem, and the Yamaha drive I bought surprised me with its huge cache, something like 4MB. I thought that was insane, because I was used to CPUs with say 256-byte caches! But I then noticed that the write LED would stay on, unflickering, for ages, whereas with hard drives of that time the LEDs would flicker away like crazy,

the Yamaha was caching big amounts before writing, and then long uninterrupted write, much better!

the bigger the CPU caches the better! I want ginormous ones!

it is known that instruction caches are one of the secrets to much faster programs, as accessing the cache is much faster than accessing the physical memory.

I forget the specifics of the AMD caching; there are general principles of caching, and then the specific MOs of a particular chip. The original Motorola 68000 had no on-chip cache at all (a small instruction cache only arrived with the 68020).

the AMD architecture automagically keeps the caches of the cores coherent: if one core writes out to memory, the hardware ensures the caches of the other cores don't serve stale data. I forget the specifics and would have to wade through the manuals, but I remember reading about some strange mechanism they use.

Your software doesn't need to keep track of which cache is accessing what, or refresh stale data; the hardware does that automagically.
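a minimal sketch of what that buys you in practice (plain C11 atomics plus pthreads, nothing AMD-specific, and certainly not their actual coherence protocol): two threads hammer the same counter, presumably from different cores, and the program never touches a cache line explicitly; the only thing it has to ask for is the atomic increment.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_long counter = 0;        /* shared by both threads, i.e. by both cores */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        atomic_fetch_add(&counter, 1); /* atomicity is requested; coherence is automatic */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* always 2000000: no manual cache flushing or refreshing anywhere */
    printf("%ld\n", (long)atomic_load(&counter));
    return 0;
}

compile with -pthread; the printed total is always 2000000, without any explicit cache management in the program.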

you have to be careful that memory-mapped hardware is not ordinary memory: the device intercepts the memory accesses as commands, so those address ranges must not be cached.


the CPU isn't really aware of external hardware; all it knows is interrupts, I/O ports and memory, some of which is external hardware pretending to be memory. Eg if you press a key on the keyboard, that raises an interrupt, and the interrupt handler then reads a device register which says what keyboard action happened. Things like IDE have nightmarish protocols to communicate with; they really could have designed far simpler protocols, it is incompetent design! As a programmer you are trapped inside the CPU.
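roughly what that looks like from the CPU's side, as a bare-metal sketch assuming x86 with the legacy 8042 keyboard controller; on the PC the keyboard sits behind I/O ports 0x60/0x64 rather than a memory address, and a genuinely memory-mapped register needs a volatile pointer so the compiler doesn't cache the access away:

#include <stdint.h>

/* read a byte from an x86 I/O port (GCC/Clang inline assembly) */
static inline uint8_t inb(uint16_t port)
{
    uint8_t v;
    __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

#define KBD_STATUS 0x64   /* 8042 keyboard controller: status port */
#define KBD_DATA   0x60   /* 8042 keyboard controller: the scancode appears here */

/* called from the IRQ1 handler: fetch the scancode the keyboard just sent */
uint8_t keyboard_read_scancode(void)
{
    while ((inb(KBD_STATUS) & 0x01) == 0)   /* bit 0 = "output buffer full" */
        ;                                   /* wait until a byte is ready */
    return inb(KBD_DATA);
}

/* a genuinely memory-mapped device register would instead be read through a
   volatile pointer (the address below is a made-up example, not a real device):
       volatile uint32_t *reg = (volatile uint32_t *)0xD0000000;
       uint32_t value = *reg;   // volatile stops the compiler caching or reordering it */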

at the moment, my programming of the low-level hardware is more concerned with getting things to function than with super efficiency. Eg the graphics I have programmed so far is all pixel-by-pixel graphics via the VESA interface; I haven't tried to program graphics cards directly. I designed my own font for the keyboard characters, with the minimum width and height that allows capitals and also eg %, M, m, w, W, Q, *, &, etc: a fixed-width, fixed-height font, with every character entered manually as program data, designed with pen and paper!

I actually don't like variable-width fonts. With fixed width you can align successive rows of text in clever ways, where the text is structured both horizontally and vertically. Variable-width fonts trash vertical alignment schemes and are no good for ASCII art. I find I can read fixed-width fonts much better than variable-width, as the regularity is quicker to read; eg old-era text done with a typewriter is much nicer to read than fancy variable-width fonts. Variable width is confusing and jarring for the brain, as the i is narrow, the W is wide, etc, and your brain has to fight to process the text. Program code written in a variable-width font is horrible to read! Variable width doesn't add anything to the experience, it only subtracts.

when I program, first thing I do is use a font where all characters are identical width and height.
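for the curious, the fixed-width scheme boils down to something like this sketch: each glyph is just rows of bits entered as program data, and drawing is a double loop into a 32-bit linear framebuffer. The framebuffer pointer and pitch are stand-ins here; in reality they come back from the VESA mode information call.

#include <stdint.h>

/* one hand-drawn 8x8 glyph: capital 'A', one byte per row, MSB = leftmost pixel */
static const uint8_t glyph_A[8] = {
    0x18,   /* ...##... */
    0x24,   /* ..#..#.. */
    0x42,   /* .#....#. */
    0x7E,   /* .######. */
    0x42,   /* .#....#. */
    0x42,   /* .#....#. */
    0x42,   /* .#....#. */
    0x00    /* ........ */
};

/* draw a fixed-width glyph at pixel (x,y) into a 32bpp linear framebuffer;
   fb and pitch stand in for the values returned by the VESA mode info call */
void draw_glyph(uint8_t *fb, uint32_t pitch,
                int x, int y, const uint8_t glyph[8], uint32_t colour)
{
    for (int row = 0; row < 8; row++) {
        uint32_t *line = (uint32_t *)(fb + (uint32_t)(y + row) * pitch) + x;
        for (int col = 0; col < 8; col++)
            if (glyph[row] & (0x80 >> col))   /* test the bits left to right */
                line[col] = colour;
    }
}

/* because every glyph is exactly 8 pixels wide, text column n always starts at
   pixel x = n * 8, so vertical alignment between rows of text comes for free:
       draw_glyph(fb, pitch, n * 8, row_y, glyph_A, 0x00FFFFFF);                 */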
 

Richard1234

Distinguished
Aug 18, 2016
277
5
18,685
to avoid confusion, I am putting this as a further posting rather than editing the existing post, as someone who has read the earlier post won't want to trawl through it looking for edits,

here are 2 photos of the sockets of my Fujitsu-Siemens laptop, and how all sockets are labelled:

http://www.directemails.info/tom/fujitsu_siemens.jpg

now admittedly, some of those labels are difficult to read because of the coarse, low-grade plastic, and that is points deducted! But with a lens I can make sense of them. Eg I guessed what looks like an orange PS/2 socket might be a dual purpose mouse or keyboard PS/2 socket, but with the lens the label is a rectangle with 2 antennae, so in fact it is a video socket! Don't guess! Scrutinising a PS/2 socket, the geometry is different, but someone guessing might try forcing a PS/2 cable in and damage the pins.

it is good design that the PC side doesn't have pins. Eg I once damaged the pins a bit on an Amiga socket, where I was trying to force a similar-looking cable, and had to carefully unbend them; pins should be on the cable side. This kind of thing is "design", bad design and good design. Old-era USB sockets: terrible design. USB itself is often troublesome, either because it is misimplemented (which means poor certification and an idea that is too complicated) or because the design itself is flawed, eg it might not supply enough power or data transfer rate when every socket is in use. A good design arranges things so you can't go wrong, and it is a subject in its own right, best understood by using both a superbly designed item and an atrociously designed one. Eg the Fujitsu Siemens has a very elegant and legible label on the lid but hard-to-read labels at the sockets, so they have the manufacturing technology to do legible labelling, but are more interested in people reading their logo than reading the sockets! It's a difficult thing to photograph, so I had to change the contrast to give an idea:

http://www.directemails.info/tom/fujitsu_siemens_logo.jpg

wasnt able to get photos in focus,

the communication problems I allude to are where someone assumes that because they understand or know something, everyone else does, when in fact everyone knows different things; part of growing up is realising other people don't know what you know, and thus the need to communicate. Eg in this era many people have only used smartphones, so it is unreasonable even to assume someone knows an old-era USB socket: it needs a label. With smartphones I struggle to know which sockets are which, and various are asymmetric, which is shoddy design of the standard, because you have to waste time figuring out which way round the socket points and then which way round the cable end is. the new USB3 small sockets, no time wasted as symmetric, but the cable lengths are dubiously short and too few sockets! how am I supposed to connect up my dozens of USB devices? the wifi dongle, 7 2T WD drives, flash drives, USB encased sata drives, the handshake mouse dongle, am I supposed to have 10 USB3 hubs? nice speeds, but a bad design.

anyway, mere betatesting with the general public would quickly identify that one needs labels.

an archaeologist 100 years in the future might try out this HP Spectre in a museum, in an age where they no longer use sockets because everything is wireless or via Musk chips embedded in people's brains, and they would need labels! Also, machines from the 1980s and 1990s can have sockets no longer used today, eg parallel and serial ports, PCMCIA, serial mouse, and some really strange sockets; without labels you'd have no idea. That floppy socket on the Gigabyte mobo: no idea except for the FDD label!

even with my own programming, something from years ago makes no sense without some comments in the file, or in an associated file, eg program.c might have program.txt explaining it. A good organisational system doesn't just have stuff; it assists navigating to stuff via labels or documents. Eg I put stickers on my tower saying which colour of socket is headphones and which is microphone, and stickers on my keyboard saying which key goes to the boot options.

the HP Spectre after repair by PCWorld no longer says at all which key goes to boot options, luckily I stuck a sticker saying F9 above the F9, because tricky even to read which key is F9.

in Germany they have TÜV independent grading of EVERYTHING, to zap this kind of bad stuff.

this HP laptop ought to get a low TÜV grading; it isn't production quality, maybe prototype-testing quality.
 

Aeacus

Titan
Ambassador
the images are too glitzy for the 1980 architecture, they must be rejigging from 32 bit or 64 bit, and then rebooting back from 1980 with new settings?
Well, what i have, is UEFI and not Legacy BIOS.

Most apparent difference is in GUI:

[Image: UEFI vs Legacy BIOS setup screens]


Image source + differences explained: https://www.cgdirector.com/uefi-vs-legacy-bios-boot-mode/

I see USB Floppy as item 4 in that list, presumably you can rearrange the order? eg make USB floppy Boot option #1?
Yes, i can select any of the option shown there and rearrange the list as i see fit.

when it says processor cores, does that mean all threads on that core?

ie presumably each here being 2 virtual CPUs?
I have a 4 core / 4 thread CPU (i5-6600K) and i don't have hyperthreading. But i do think that when you disable a core of a hyperthreaded CPU, you'll also lose two threads.

virtual CPU is probably a better word than thread and hyperthreading, because the word thread usually means a subtask. the virtual CPU no doubt is a thread of the RISC core, but its not a thread for the programmer! for the programmer it is a CPU. when they say core, I think they mean RISC core.
Not much point to contemplate over definitions of the words, since those change as time goes onward.

For example, for me;
Virtual = Something created out of thin air, usually by software emulation. E.g Virtual Machine, that runs Win10 as OS, but emulates WinXP. Giving almost the same experience as running WinXP off the bat. Or in terms of CPU, a software emulating CPU.
Thread = Process string of a physical CPU.
Hyperthreading = physical CPU being able to compute two process strings at once, in parallel.
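If you want to see the distinction on a live system, here is a tiny sketch (POSIX-style, so Linux/macOS; _SC_NPROCESSORS_ONLN is a common extension, and on Windows you'd query GetSystemInfo instead) that prints how many logical processors the OS exposes. On a hyperthreaded CPU that number is the thread count, typically twice the physical core count.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* number of logical processors ("threads") the OS exposes; on a 4c/8t
       hyperthreaded CPU this prints 8, on a 4c/4t CPU like the i5-6600K it prints 4 */
    long logical = sysconf(_SC_NPROCESSORS_ONLN);
    printf("logical processors online: %ld\n", logical);
    return 0;
}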

but ray traced images are nonetheless impressive. It's the old dilemma: should an image be accurate, or should it look good? With Photoshop you can scan an image and then make it "better" than the original!
I, personally, don't favor ray tracing that much. Since to me, the gains of image quality do not exceed the performance impact of creating said image.

It takes a lot of GPU compute power to create the ray traced image, to get only marginally better image. If the impact on performance would be considerably smaller and/or improvement in image considerably better, then it would be cost effective. But as it stands today - not worth it.

it doesnt have any label, so I am left to guess!
Or you can ask it from reputable expert who knows their stuff.

Even i don't know all and everything there is to know about PC hardware. I have 0 problem to ask for a help from more knowledgeable person. And even when i'm quite certain of something, i like to cross-validate it, just in case i could get it wrong (memory fades).

I have never heard of 1 socket doing both microphone and headphone, it needs a label.
3.5mm jack mic/audio combo port is actually quite old. Mostly used for these:

[image: example of a headset that uses the combo jack]


the confusion is because I expect USB sockets to have plastic segments and not have lids, and often colour coded eg USB3 a blue segment, but this one is just metal. usually USB sockets have a specific hieroglyph which looks like a network. I expected any old format USB socket on this machine to be USB3 and to have a blue segment in it.
Expectation and reality are often two different things.

The metal lid over the ports you have serves a purpose, and a good one. Besides making the laptop sides look cleaner (no port cavities), its main purpose is to shield the port itself from outside debris and foreign objects that could intrude into an exposed port and make it unusable.

These port covers are often found on hardware that is portable or are otherwise expected to be used in outside conditions.
Desktop PCs hardly move anywhere and aren't introduced into conditions that may get foreign objects crammed into the otherwise "bare/open" ports.

As far as color coding of USB type-A ports goes, it isn't mandatory; it's optional.
Further reading: https://en.wikipedia.org/wiki/USB_hardware#Colors

To test it, plug USB 3.0 thumb drive into it and make copy/paste to it. Note the peak speed of file transfer. USB 2.0 can do up to 480 Mbit/s while USB 3.0 can do up to either 5 Gbit/s or 10 Gbit/s (depending on which revision USB 3.0 it is).
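If you don't trust the file manager's progress bar, a rough way to measure it yourself is to write a large buffer to a file on the drive and time it. Plain C sketch below; the path is just a placeholder, point it at the thumb drive. USB 2.0's 480 Mbit/s works out to roughly 40 MB/s in practice, so a sustained result well above that means the port is running at USB 3.0 speed.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "E:/speedtest.bin"; /* placeholder path */
    const size_t chunk = 1 << 20;                                 /* 1 MiB per write */
    const int total = 256;                                        /* 256 MiB in all  */

    char *buf = calloc(1, chunk);
    FILE *f = fopen(path, "wb");
    if (!buf || !f) { perror(path); return 1; }

    struct timespec t0, t1;
    timespec_get(&t0, TIME_UTC);
    for (int i = 0; i < total; i++)
        fwrite(buf, 1, chunk, f);
    fflush(f);
    fclose(f);               /* note: the OS write-back cache can still flatter the result */
    timespec_get(&t1, TIME_UTC);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.1f MiB/s (%.2f s for %d MiB)\n", total / secs, secs, total);
    free(buf);
    return 0;
}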

no label, no manual, no nothing, its not good enough. With CD technology, you have to put the relevant logos, otherwise it is infringement of copyright. All CD equipment has to be certified compatible by Sony or Philips, and the logo shows it has been certified.

I think USB is copyrighted by Intel, I think HP may have infringed the copyright by not putting the well known USB logo!

it is rogue behaviour to not put some form of USB hieroglyph or text.

I checked my Fujitsu Siemens of probably 2007 era, and all the sockets have hieroglyphs to one side, eg the 2 USB sockets on the one side have one USB hieroglyph to the left, the headphone and microphone sockets have hieroglyphs. on my PC, the headphone and microphone sockets are colour coded, I think the headphone socket is green. there is a green, blue and pink socket.
USB is not copyrighted by Intel. USB is managed by the USB Implementers Forum (USB-IF), a non-profit organization created to promote and maintain USB.
Intel is just one of the 7 founding members of USB-IF.
The USB-IF was initiated in 1995 by the group of companies that was developing USB, which was made available first during 1996. The founding companies of USB-IF were: Compaq, Digital Equipment Corporation, IBM, Intel, Microsoft, NEC and Nortel.
Notable current members include: HP, NEC, Microsoft, Apple Inc., Intel, and Agere Systems.
Source: https://en.wikipedia.org/wiki/USB_Implementers_Forum

Also, it costs money to put the USB logo on the port, and not just the printing cost, but licensing fees!
A vendor identification is necessary for obtaining a certification of compliance from the USB-IF. The USB-IF is responsible for issuing USB vendor identification numbers to product manufacturers. The cost for issuing this number is US$6,000 per year. Additionally, the use of a trademarked USB logo to identify certified devices requires a license fee of US$3,500 for a 2-year term.
My math says that HP has to fork out $15,500 USD every 2 years (two years of the $6,000 vendor ID fee plus the $3,500 logo license) to put that USB logo on the USB ports of that specific laptop. Different laptop = another device and another $15.5K to fork out every 2 years.
If you sell 10x different laptops and put the USB logo on all of them, you're looking at paying $155K USD every two years, just to have that USB logo on your devices.

that's not good enough; those ports don't look like any I am familiar with. I have never seen a USB socket with a lid and a metal segment, and all audio sockets I have seen have a circular inner surround, which this one doesn't.

there is a "communication" problem: even if I guess they are USB, how do I know whether they are USB2, USB3, USB4, etc?

I need some clue via colour coding once you go beyond USB2.

I have built 3 PCs and find it confusing, so that means it's a bad design!
Oh, come on, don't tell me that you can't identify the rectangular USB port. It's the most well known port in the world and basically everyone knows it's USB when they see the rectangular port, regardless of what color it is inside (white, black or blue).

Rectangular USB port (type-A) can only be USB 1.0, 2.0 or 3.0. USB 4.0 uses type-C port (oval one). Type-C port has advances over type-A, namely it doesn't matter which way you plug the connector into the port, it works both ways. While with type-A port, the connector goes in only one way. E.g most modern smart phones use type-C port as charging port nowadays. Including my Samsung Galaxy A52S 5G.

its not user friendly to not label the sockets, its a bit "autistic". I could behave in a non user friendly way, but I wont make many friends!
No point labelling (and paying licensing fees) for ports that are common knowledge. E.g your smart phone: is its charging port labeled with what port it exactly is?

in any case, I won't be buying anything at all by HP ever again; their inkjets are a load of junk. Epson EcoTank is vastly superior and roughly 10x cheaper: the ink doesn't dry up if unused for months, the images are much better, and one set of EcoTank bottles might be £40 and print 4000 pages (depending on which version of EcoTank), whereas with cartridge inkjets you'd easily need 20 cartridge changes for that. All-in-one cartridges are particularly stupid, as you have to junk the entire cartridge the moment one colour is used up. I don't understand how HP can design an all-in-one cartridge, it is such a bad idea!

https://www.epson.co.uk/en_GB/for-home/ecotank

I asked the PCWorld salesman why Epson continue manufacturing inkjets, when ecotank is so superior, he said: its because the customers demand inkjets, so they give customers what they want.

with ecotank, if one colour runs out, you just buy a new bottle of that ink, which typically costs about 10 quid, pour it in, do some config stuff, and you are ready to go.
The printer "genuine" ink cassette war isn't only HP thing. Canon does it as well.
I know this 1st hand since i have Canon Pixma TS8352 scanner/copy machine/printer combo for myself. Though, i don't print that often, so, i can afford to buy the "genuine" Canon ink cassettes.

Not being able to use any other brand of ink isn't that much of an issue for me; the different ink cassette sizes are. Canon ink cassettes come in standard, XL and XXL. When i bought my printer, it came with the standard ink cassette size; i could maybe print 40 pages before the ink ran out. So now i'm buying XXL size ink cassettes. But one thing that i don't get is why Canon (or HP, or anyone else with proprietary ink cassettes) sell different sizes of ink cassettes, since people will buy the biggest cassettes for the longest life regardless, skipping the middle sizes. It's such a waste of material and ink to make the middle sizes that no-one is going to buy.

batteries dont last forever, which idiot decided to have sealed in unremovable batteries?
These days - essentially everyone.

Can you tell me any proper smart phone which has a user replaceable battery? Almost none: Apple, Samsung, OnePlus, Xiaomi, LG, Motorola, Honor etc. Maybe there are one or two very rare and obscure brands of smart phone where you can replace the battery. But 99% of smart phones have a built-in battery, and the device lasts only as long as the battery is sound.

That's why i prefer "dumb phones" like what Nokia makes. Before my Samsung A52S, i had Nokia 108i, which has user replaceable battery. Almost all Nokia "dumb phones" have user replaceable battery, including, but not limited to: Nokia 3310, 3210, 3110, 1610 that i've used over the years.

I suppose I am the bigger idiot for buying the junk, but I dont seem to have a choice.

Originally, I backed up the hard drive overnight on battery, and the battery ran out midway! So for long operations, eg sector backups, I have to use the mains, and the battery is then continually topped up. It is STUPID design!

plumbers long ago figured out how to make waterproof joins, you dont need a sealed in battery for waterproofing, you can make a waterproof lid.
Planned obsolescence is a thing today.
Further reading: https://en.wikipedia.org/wiki/Planned_obsolescence

It used to be that products were made to last as long as possible. Today, the idea is that you buy a new one when the old one dies. Even repairing said products (especially electronics) is often far more expensive than buying a brand new product.

when I said mainstream I meant used by most people, most people didnt use the internet,
Mainstream as such depends on location (country). E.g if you were to travel to parts of Africa, you'd be hard pressed to find wi-fi or any internet connection, even in 2023. But if you were to travel to Estonia (where i live), then 92% of all households have access to an internet connection.

my one cant do QR codes and mobile apps!
Both are doable with desktop PC.

For reading a QR code, you need a camera: take any web camera of your choosing. Then it is just a matter of running a piece of software that can read the QR code the web camera sees.
And mobile apps, namely Android apps, are supported in Win11.

its only viable for me to work at a tower system, laptop mice are unsatisfactory.

laptop keyboards are unsatisfactory, eg this HP Spectre doesnt have a numeric keypad, and I need a nonstandard escape key for the function keys, where the F1, F2 etc are in a tiny illegible font,

also tricky to find how to get to the early startup, with my Gigabyte mobo, it says plainly at the start to press Del to get to the startup options. with the Spectre it is hidden away elusively.

its touchscreen is great for the web version of Google Earth where I can spin around and stretch the planet with my fingers, but the screen gets really smudged up with fingerprints. The .exe version of Google Earth I cant do all that!
Laptop touchpads are poor, yes. Same with most laptop KBs. But when a laptop has a USB port, you can connect a desktop KB/mouse to it.

Web version of Google Earth also has the freely spinnable globe. Just need to hold the left mouse button down to spin it around. Zoom in/out is with mouse scroll wheel.
Link: https://earth.google.com/

most are cheaper maybe, but a top end Samsung is more expensive than my HP Spectre laptop, which PCWorld told me was the best laptop in the shop,

if by desktop you specifically mean tower systems, and dont want top end, you could build one cheaper than a top end Samsung or Apple smartphone.

my Samsung Galaxy Note 4 cost me 460 quid new on ebay, 600 in the shops in 2015, whereas my PCs generally cost me some 500 to build.

obviously if you build a server grade tower, then the sky is the limit for prices and it would eclipse the price of a top end Apple or Samsung smartphone.
And my Skylake build cost me far more than the highest-end smart phone, and then some. Same goes for my Haswell build (the missus's PC). So a desktop PC, when made with care and dedication, can cost a LOT.

the reason PCs are so successful, is so much of the architecture is standardised and open, but with memory sockets they failed to standardise properly, the CPU sockets also seem nonstandard.

its not rocket science to standardise such things! and its stupid to not standardise.

the simms should have some notches, so you cant insert an incompatible one.
With RAM DIMMs, each generation DOES have its notch at a different spot, whereby you can't physically insert e.g DDR3 into a non-compatible slot, e.g. DDR4 or DDR5.

[Image: DIMM notch positions across RAM generations]


As seen above, notch on the DIMM and also inside the slot is at different place between different versions of RAM.

Also, you call RAM sticks SIMMs, which in itself isn't wrong IF you are referring to the RAM sticks in older hardware, in use from the 1980s to the early 2000s.
Current, modern RAM sticks are called DIMMs, which came into use in the late 1990s. DDR and its revisions are all DIMMs.
Wiki SIMM: https://en.wikipedia.org/wiki/SIMM
Wiki DIMM: https://en.wikipedia.org/wiki/DIMM

what I was told in the 2006 era, is that AMD might manufacture a 4 core, but if 1 of the cores fails quality control, they deactivate that core and market it as a 3 core,

if 2 fail, they deactivate those and market it as a 2 core,

if 3 fail, they deactivate those and market it as a 1 core,

if all 4 fail, they junk the cpu!

the 6 core is probably an 8 core, where 1 or 2 cores have failed,

I would imagine they always manufacture 1, 2, 4, 8, 16, etc cores. so any core count which isnt a power of 2, eg 6 and 12 are probably ones where some of the cores have failed quality control.

you probably wont use those extra cores anyway!
This is in use even today, and not only by AMD but by Intel as well. It is a cost effective way to make CPUs: when 1 or 2 cores in an otherwise 8 core CPU fail, there is no point throwing away an entire CPU which otherwise works fine. Just disable the unstable cores and sell it as a 6 core CPU.

the bigger the CPU caches the better! I want ginormous ones!
But do you have ~550 quid to fork out just for CPU?

Since if you do, you can go for Ryzen 9 7950X3D,
Which has:
L1 cache: 1MB
L2 cache: 16 MB
L3 cache: 128 MB
specs: https://www.amd.com/en/products/apu/amd-ryzen-9-7950x3d
pcpp: https://uk.pcpartpicker.com/product...x3d-42-ghz-16-core-processor-100-100000908wof

Currently the best from Intel, i9-14900K, has L2 cache 32MB and total smart cache 36MB,
specs: https://ark.intel.com/content/www/u...rocessor-14900k-36m-cache-up-to-6-00-ghz.html

eg I guessed what looks like an orange PS/2 socket might be a dual purpose mouse or keyboard PS/2 socket
Gave a look at your image and that round port doesn't look like a PS/2 KB/mice port to me, since it has too few round pins. 4 pins isn't enough for it to be a PS/2 KB/mice socket; a PS/2 socket is a 6-pin mini-DIN. (Here, my knowledge of how the PS/2 port looks, including pin count, came into play.)

the new USB3 small sockets, no time wasted as symmetric, but the cable lengths are dubiously short and too few sockets! how am I supposed to connect up my dozens of USB devices? the wifi dongle, 7 2T WD drives, flash drives, USB encased sata drives, the handshake mouse dongle, am I supposed to have 10 USB3 hubs? nice speeds, but a bad design.
If one needs to connect more hardware to a smart phone/laptop/desktop than there are available ports, there are hubs out there that one can buy to extend the number of devices that can be connected to the primary device.

For example, my Skylake MoBo has two USB 2.0 internal headers, but since i have more than 2 devices that require an internal USB 2.0 header, i was forced to buy an NZXT internal USB 2.0 hub.

[Image: NZXT internal USB 2.0 hub]


It takes one USB 2.0 internal header and expands it into 3x USB 2.0 internal headers and 2x USB 2.0 type-A ports. Sure, one USB 2.0 internal header can carry only two USB 2.0 ports and if i were to hook up all 5, i'd have performance reduction. But i only needed to hook up two devices to that hub, retaining the performance.

Though, the bigger underlying question is whether people really need that many ports on their device, or can get by with just a few ports per device. Thus far the consensus is that if you need to connect more devices than there are ports on the primary device, you use a hub to extend the ports.
Since if a manufacturer were to add, let's say, 10x ports to a device, besides increasing the price (and maybe the size) of the device, how many people out of 1000 would utilise all 10 ports? Maybe 1-2 would, while the other 998 won't. So it wouldn't be cost effective to add that many ports to a single device.
 

Richard1234

Distinguished
Aug 18, 2016
277
5
18,685
Well, what i have, is UEFI and not Legacy BIOS.

Most apparent difference is in GUI:

[Image: UEFI vs Legacy BIOS setup screens]


Image source + differences explained: https://www.cgdirector.com/uefi-vs-legacy-bios-boot-mode/
it's all new to me! I had noticed the UEFI on the HP laptop, but there is so much jargon nowadays that I have lost curiosity!



Yes, i can select any of the option shown there and rearrange the list as i see fit.


I have a 4 core / 4 thread CPU (i5-6600K) and i don't have hyperthreading. But i do think that when you disable a core of a hyperthreaded CPU, you'll also lose two threads.

Not much point to contemplate over definitions of the words, since those change as time goes onward.
as someone who originally did maths, choice of words is very important, in maths people will spend a lot of time choosing just the right words, and the right choice of words can lead to much less work.

I want some words which wont need to be changed in the future. its annoying when each system uses different words for the same thing, eg Windows' "command prompt", which is the Amiga's CLI, I prefer to use the word "shell". I dont like the Windows terminology as it is 2 words, when 1 is adequate.

For example, for me;
Virtual = Something created out of thin air, usually by software emulation.

technically "virtual" relates to "virtually", but the word has assumed a new meaning with computing, to mean an apparent but not actual entity, where a virtual something behaves identically to a real something, but if you scrutinise it, it isnt that real something. eg a chatbot is a virtual person, it behaves like an annoying real person, but it isnt a real person!

eg "virtual memory" is probably the central original usage of this idea, where say the virtual memory is address $10000, but the actual memory is $48300, then the software will act as if the memory really is $10000.

basically in computing "virtual" is subtly different from everyday english usage. in every day usage it means almost but not quite. but in computing it means behaving identically to something at some level.
eg virtual memory behaves identically to physical memory at the level of programming.
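a toy illustration of that idea in C, nothing like a real MMU's page tables, just the principle: the program only ever quotes the virtual address, and a lookup table quietly supplies the physical one (the numbers are made up).

#include <stdint.h>
#include <stdio.h>

/* toy "page table": virtual page number -> physical page number.
   Page size here is 4 KiB, so the low 12 bits pass straight through. */
static const uint32_t page_table[] = { 0x48, 0x13, 0x07, 0x2A };

static uint32_t virt_to_phys(uint32_t vaddr)
{
    uint32_t vpage  = vaddr >> 12;       /* which virtual page            */
    uint32_t offset = vaddr & 0xFFF;     /* position inside that page     */
    return (page_table[vpage] << 12) | offset;
}

int main(void)
{
    uint32_t vaddr = 0x00000010;         /* the program only ever quotes this */
    printf("virtual  $%05X\n", vaddr);
    printf("physical $%05X\n", virt_to_phys(vaddr));  /* really lands in page $48 */
    return 0;
}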

eg most money today is virtual, where its just some numbers on a server, but it behaves just like real money, you can earn it, spend it, invest it, etc. so it is virtual money. but a banknote and coin is real money.

banknotes originally were virtual gold, where they gave out coupons for stored gold.


thus a virtual CPU is correct terminology, as it behaves identically to a CPU at the level of user mode programming.

language evolves with time, "virtual memory" borrowed from the everyday use, but then people borrowed the word "virtual" from the "virtual memory" usage, and it is a bit different from the original meaning of "virtual" which means almost but not, and is usually used in the form "virtually" eg "I have virtually finished this book" eg 3 pages left of 500.

with x86, the apparent CPU is virtual, as it behaves like an old school CPU such as a Motorola 68000, but in fact the x86 instructions are decoded into micro-ops that run on a RISC-like core underneath.

eg you can make a file pretend to be a disk drive on some systems, that is a virtual drive. because for programs and the user, its the same, it will have an icon, properties, and you can format it, and move files to it etc. if you use the Amiga emulator "Amiga Forever" on Windows, you can make windows files pretend to be AmigaOS hard drives, and Amiga programs will access the drive identically to accessing a genuine drive.

a physical drive will be some spinning disks. actually originally the drive was the machine you inserted the disks into, the disks are the things you insert. but with hard disks, the drive and the disk became indivisible. although there was iomega where I think you inserted hard disk cartridges, but I never used those so am not sure.

a hard drive itself may be virtual at some level which is a confusion factor, that inside the hard drive is a physical drive, but there is control software between the physical drive and the socket, where at the socket its a virtual drive. so you can generalise "virtual" to mean both apparent and actual, where "virtual" now just means "behaves as if" even if it really is.

where a floppy disk, and a file on a disk pretending to be a disk drive, can be considered to be virtual disks.

the terminology is to be able to discuss things without hassle. because if we have to call cpus threads, then there is a danger of going round in circles, as a thread really is something running on a cpu. if I say "virtual CPU", what I mean is "something behaving like an old school CPU".

that way "virtual XYZ" means something behaving like XYZ for an old school computer, eg Motorola 68000 system with floppy disk drive, mouse, monitor etc.

the word "abstracted" could be an alternative, abstracted memory, abstracted disks, an abstracted computer etc.


at uni, this guy said "hardware is software", but software runs on hardware, so we are getting into a circular problem, where nothing is what it is and the language loses its power.

"virtual" is about what something does, not what it is. something which does memory like things is virtual memory, eg really large virtual memory is implemented via hard disks, and the apparent memory in fact is a disk!

something which does disk like things is a virtual disk. eg with "flash drive", the word "drive" originally comes from "disk drive", and the word drive relates to the word "driven", ie a disk drive is driven by a motor. but the disk drive isnt just driven by a motor, but stores data. the word "drive" has then been rederived to mean the storage aspect of a "disk drive", and now we have "flash drive", even though there is no motor.

and to drive a car, originally was to regulate the motor of the car to propel it along.


E.g Virtual Machine, that runs Win10 as OS, but emulates WinXP. Giving almost the same experience as running WinXP off the bat. Or in terms of CPU, a software emulating CPU.
Thread = Process string of a physical CPU.
Hyperthreading = physical CPU being able to compute two process strings at once, in parallel.
thread had an earlier generally accepted meaning, where a program splits off some further servant programs which all participate in the same work as the original program.

its a special form of multitasking,

I would personally say hyperthreading isn't threading but multitasking, as the two hardware threads of a core won't always be participating in the same work; the word "thread" is overspecific. Furthermore, at one moment in time the so-called thread might be running a game, and at another point it could be running a text editor, so in my book that is multitasking, not a thread! It's an example of bad language usage.

also its not hyper- but is sub-, those virtual CPUs are subtasks or subthreads of the core, further bad use of language.

bad use of english is itself a concept in english, the misnomer, where people recognised that some usages are bad, eg a headache isnt an ache but is a pain, the term is a misnomer, it would be more accurate to say headpain, or even more accurately brainpain.

language isnt just something we are given and have to blindly salute!

if you do that, the language will steadily deteriorate. Languages like Russian and German are powerful because at some point they cleaned up the language; I think the original Russian was Old Russian, and the new German standard is Hochdeutsch, where they removed Latin words. English and Dutch are close relatives of Niederdeutsch (Low German).

we have the power to resist and change and rethink usage. in France, they have committees to decide what words are allowed!

when I did maths, the faculty arranged a standardised usage of language across all courses, where all lecturers used the same jargon, but if you look at books from other countries, there can be different usages. this way, if you did the one course, then did another course, the same word had the same meaning. eg I have a book on logic from another country, where the guy talks of denumerable sets. at our uni, no such language was used, it was always a "countable set". Over the 3 years, one might do 24 courses, and all would use the word countable, no course ever used the word denumerable! if there are 5 different words for the same thing, it creates confusion. the faculty very carefully arrived at a standardised vocabulary.

the very original meaning of thread means the string in woven fabric, and can also mean string, especially a finer string, versus say a rope which is made from many parallel strings. if a rope or carpet is worn out, it is "threadbare", ie where the threads are breaking.

this is why thread in programming means where the different threads are part of the same, like with fabric.

also in forums, a discussion starting from a particular message is a thread, and you can have subthreads also, with email hosts also you can have a thread, eg with the Mozilla Thunderbird email host.

a defining example of program threads is the pthread_create() call in Unix and Linux; fork(), by contrast, creates a whole new process rather than a thread. When the OS launches a new program, that is a task, not a thread. You could argue it's a thread of the OS, but that isn't the established usage of the language. Some of these ideas are relative: relative to the user it's a task, but relative to the OS it's a thread.
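to see the difference side by side, a small POSIX sketch: pthread_create() gives you a thread inside the same program, sharing the same memory, while fork() gives you a whole separate process with its own copy of everything.

#include <pthread.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static int shared = 0;                   /* threads see this directly; fork() copies it */

static void *thread_body(void *arg)
{
    (void)arg;
    shared = 42;                         /* same address space: the parent sees this */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, thread_body, NULL);
    pthread_join(t, NULL);
    printf("after thread: shared = %d\n", shared);   /* prints 42 */

    pid_t pid = fork();                  /* a new PROCESS, with its own copy of 'shared' */
    if (pid == 0) {
        shared = 99;                     /* changes only the child's copy */
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("after fork:   shared = %d\n", shared);   /* still 42 in the parent */
    return 0;
}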

these things are a matter of opinion, and argument, language is alive! and I am trying to resist the current by suggesting virtual CPU. I dont just look on Google how a word is used! the english language continually changes, eg it has changed significantly since say 1900. if you read stuff written in the early 1900s, or early films, the way they talk is different from now.

I, personally, don't favor ray tracing that much. Since to me, the gains of image quality do not exceed the performance impact of creating said image.

It takes a lot of GPU compute power to create the ray traced image, to get only marginally better image. If the impact on performance would be considerably smaller and/or improvement in image considerably better, then it would be cost effective. But as it stands today - not worth it.
its good for precomputed images, like for a Hollywood film, where they might spend a week ray tracing 30 seconds of phantasmagoric special effects.

a video game is different, in that the rendering has to be live, where you might not have the time to scrutinise the details as you are more busy trying to escape an angry demon with a sword!

but the graphics shown while no one is playing could be fixed, pre-rendered ray traced sequences.


Or you can ask it from reputable expert who knows their stuff.

Even i don't know all and everything there is to know about PC hardware. I have 0 problem to ask for a help from more knowledgeable person. And even when i'm quite certain of something, i like to cross-validate it, just in case i could get it wrong (memory fades).


The 3.5 mm mic/audio combo jack is actually quite old. It's mostly used for headsets like these:

[image: 66705-B-600x613.jpg - combo headset plug]
But can I use a loudspeaker, headphones or a microphone with that socket?

Or does it have to be a special dual-purpose plug like in the photo?

I don't have any microphones to hand, so I cannot scrutinise the plugs.


Expectation and reality are often two different things.

The metal lid over the ports you have serves a purpose, and a good one. Besides making the laptop sides look cleaner (without any port cavities), its main purpose is to shield the port itself from outside debris and foreign objects that could intrude into the exposed port and make it unusable.

These port covers are often found on hardware that is portable or is otherwise expected to be used outdoors.
Desktop PCs hardly move anywhere and aren't introduced into conditions where foreign objects may get crammed into the otherwise "bare/open" ports.

As far as color coding of USB type-A ports goes, it isn't mandatory; it's optional.
Further reading: https://en.wikipedia.org/wiki/USB_hardware#Colors

To test it, plug a USB 3.0 thumb drive into it and copy/paste a file to it. Note the peak speed of the file transfer. USB 2.0 can do up to 480 Mbit/s, while USB 3.0 can do up to either 5 Gbit/s or 10 Gbit/s (depending on which revision of USB 3.0 it is).
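If you'd rather measure it than eyeball the copy dialog, a minimal sketch along these lines works; the source and destination paths are placeholders you would point at a large local file and at the thumb drive.

```python
import os
import shutil
import time

# Placeholder paths: point SRC at a large local file (several GB is best,
# so OS write caching doesn't flatter the result) and DST at the USB drive.
SRC = "/path/to/large_test_file.bin"
DST = "/media/usbdrive/large_test_file.bin"

size_bytes = os.path.getsize(SRC)
start = time.perf_counter()
shutil.copyfile(SRC, DST)
elapsed = time.perf_counter() - start

mbytes = size_bytes / 1e6
print(f"copied {mbytes:.0f} MB in {elapsed:.1f} s: "
      f"{mbytes / elapsed:.0f} MB/s ({8 * mbytes / elapsed / 1000:.2f} Gbit/s)")
```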


The USB Implementers Forum (USB-IF) is not owned by Intel. USB-IF is a non-profit organization created to promote and maintain USB.
Intel is just one of the 7 founding members of USB-IF.

Source: https://en.wikipedia.org/wiki/USB_Implementers_Forum

Also, it costs money to put the USB logo on the port: not just the printing cost, but a licensing fee!

My math says that HP has to fork out $15,500 USD every 2 years so that they can put the USB logo on the USB port of that specific laptop. A different laptop = another device and another $15.5K to fork out every 2 years.
If you sell 10 different laptop models and put the USB logo on all of them, you're looking at paying $155K USD every two years, just to have that USB logo on your devices.
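Taking those figures at face value, the arithmetic is easy to check:

```python
# Worked check of the royalty figures quoted above (the $15,500 per
# device per 2-year term is taken from the post, not independently verified).
fee_per_device_per_term = 15_500   # USD, per device model, every 2 years
models = 10

per_term = fee_per_device_per_term * models
print(f"{models} models: ${per_term:,} every 2 years "
      f"(${per_term / 2:,.0f} per year)")
# 10 models: $155,000 every 2 years ($77,500 per year)
```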
But this is where again I have to oppose their idea! Just because things are done a certain way doesn't mean it is good. You can't just meekly salute whatever junk spews out of the top of society!

They made a bad decision monetising the logo, because then the ports go unlabelled and people don't learn to recognise the logo. It's like with many cars: you have to pay extra if you don't want the car to be silver, but then most people go for silver because it's cheaper, and the cars look less impressive. If they just allowed any colour for the same price, their cars would stand out more.

Sony got into trouble with Betamax, which some say was better than VHS. They learnt from this, and with CDs they and Philips made the format available to anyone, I think for free, but you had to get it certified as conforming to the standard by Sony or Philips. Any fee, I think, is probably just to get it tested by the Sony or Philips labs, which then quality-assures the standard.


Oh, come on, don't tell me that you can't identify the rectangular USB port. It's the most well-known port in the world; basically everyone knows it's USB when they see the rectangular port, regardless of what color it is inside (white, black or blue).
I don't recognise it here, because the laptop's USB socket has the central rectangle much thinner than normal. If A looks different from B, at some point one decides it is different from B.

I recognise normal USB sockets because they look similar; this one is a different geometry from any I have seen. I tried a USB device and it did fit.

It's not apparent in the photo, but if you see it in person, the internal rectangle is much thinner than I expect for USB.

A rectangular USB port (type-A) can only be USB 1.0, 2.0 or 3.0. USB4 uses the type-C port (the oval one). The type-C port has advantages over type-A, namely it doesn't matter which way you plug the connector in, it works both ways, while with a type-A port the connector goes in only one way. E.g. most modern smartphones use a type-C charging port nowadays, including my Samsung Galaxy A52S 5G.


There's no point labelling (and paying a royalty on) ports that are common knowledge. E.g. your smartphone: is its charging port labelled with exactly what port it is?
There is a point in saying it SHOULD be labelled. You are arguing about what it is; I am arguing about how it should be, and voicing objection to the decisions forced on me,

and that their system is bad! We must discuss and oppose stupid decisions. This is one, and if we blindly wave it through, then even worse decisions will emerge!

HP aren't the best of firms in the PC ecosystem; they aren't that much better than Dell, just connecting up standard components in a standard way and sticking their logo on it.

We must talk about how things should be, and not just politely stay in the shadow of how things are!

I am a believer in the conditional and future tenses!

That is, in thinking about what things should and could and shouldn't be, the magic of alternative futures. It's one of the ways humans outdo the animals: they can think about what they should have done, in order to make a better decision next time; they can think about what they could do right now, think about what will happen, and then maybe opt out, etc.


Smartphones typically don't label their ports, and this is also a problem, as I can't tell the difference between the various equally lousy systems and don't know what to ask for! E.g. I have an X1 smartphone and no idea what the socket is called, so I have to show it in shops to try to buy a new cable.

With some of these smartphone plugs, one wonders if one is damaging the smartphone when trying to insert the plug!

Also, the cables generally seem a bit flimsy; with my satnav and dashcam I have had to replace the cables more than once because they are so flimsy. Use some thicker plastic coating on the cables!

Basically, as I said earlier, most smartphone sockets are stupid designs, and it is stupid not to label them, stupid for the sockets not to be symmetric, stupid to keep making the same stupid decisions, and stupid not to criticise the stupidity!

Did you read the story of the emperor's new clothes? The emperor was naked, but the tailors said he was wearing fine clothes and only intelligent people could see them. So everyone said the clothes were fine, until eventually a young boy yelled out that the emperor was naked, and then everyone realised it!

As you point out, because of the fees nobody labels the ports, so they don't make any money from the fee either; the fee arrangement is financially stupid! It would be a much better decision to charge a small royalty, e.g. 10 cents per device; then they'd rake in money from the arrangement. It's like one of Aesop's fables!


The printer "genuine" ink cartridge war isn't only an HP thing. Canon does it as well.
I think you misunderstood; perhaps you don't know the Epson EcoTank idea?

With EcoTank you don't have ink cartridges; the printer just has 4 ink tanks, and you pour in the liquid ink from a bottle. It is about 10 times cheaper to print than with cartridge inkjets, and if you don't use the printer for 5 months it still prints fine, whereas a cartridge inkjet won't print if you leave it unused for too long, because the ink dries up and jams the cartridge.

I think Canon has their own version of liquid ink tanks.

Google for Epson EcoTank printers. I forget what the Canon version is called. I think some of the Canon versions have 5 or 6 inks, either an extra light blue/cyan and a light yellow, something like that, for more subtle colours.

I have an A3 Epson EcoTank; it prints really sublime A3 photos! It will also print a roll of A3, for a banner, but I haven't tried that as I'm not sure what I'd print!

I know this first hand, since I have a Canon Pixma TS8352 scanner/copier/printer combo myself. Though I don't print that often, so I can afford to buy the "genuine" Canon ink cartridges.

Not being able to use any other brand of ink isn't that much of an issue for me; the issue is the different ink cartridge sizes. Canon ink cartridges come in standard, XL and XXL sizes. When I bought my printer, it came with the standard size. I could maybe print 40 pages before the ink ran out. So now I buy the XXL size cartridges.
Don't bother with those cartridges! Get an Epson EcoTank printer, or the Canon version; junk your cartridge printer and the cartridges. You'll thank me when you start using an EcoTank.

But one thing that I don't get is why Canon (or HP or anyone else with proprietary ink cartridges) sell different cartridge sizes, since people will buy the biggest cartridge for the longest life regardless, skipping the middle sizes. It's such a waste of material and ink to make the middle sizes that no one is going to buy.


These days, essentially everyone.

Can you tell me any proper smartphone which has a user-replaceable battery? Almost none. Apple, Samsung, OnePlus, Xiaomi, LG, Motorola, Honor etc. Maybe there are one or two very rare and obscure brands of smartphone where you can replace the battery, but 99% of smartphones have a built-in battery, and the device lasts only as long as the battery is sound.

That's why I prefer "dumb phones" like what Nokia makes. Before my Samsung A52S, I had a Nokia 108i, which has a user-replaceable battery. Almost all Nokia "dumb phones" have user-replaceable batteries, including, but not limited to, the Nokia 3310, 3210, 3110 and 1610 that I've used over the years.

Which is my earlier point: they are all making the same stupid decisions!

This is where the EU can be useful, in that they can start clamping down on this waste.

Planned obsolescence is a thing today.
Further reading: https://en.wikipedia.org/wiki/Planned_obsolescence

It used to be that products were made to last as long as possible. Today the idea is that you buy a new one when the old one dies. Even repairing said products (especially electronics) is often far more expensive than buying a brand-new product.

But this is by design, to make more money, where you have to buy new rather than repair.

It's an example of a conflict of interest, where what is good for one person is bad for another, so the first person does things which are bad for the second person.

E.g. if they found a medicine which cured all diseases, doctors would become unemployed and the medicine manufacturers would go bankrupt. So instead they focus on medicines which only temporarily fix problems!

But the EU is now moving towards forcing electronics to be repairable, and more goods today are reconditioned and refurbished.

It would be very easy to fix these problems, e.g. if they standardised the geometry of laptops and smartphones you could just replace the screen and other parts. It is in fact very risky to have too many different formats, as you divide the market.

Mainstream as such depends on location (country). E.g. if you were to travel to Africa, you'd be hard pressed to find wi-fi or any internet connection, even in 2023. But if you were to travel to Estonia (where I live), then 92% of all households have access to an internet connection.
I thought you were from the north of England!

If you learnt English at school, that would explain why you like existing definitions of words, as your teachers would have been strict about the official meanings.

But if it's your first language, then you don't care so much about how other people use the language, and you'll argue about what a word should mean, not just what it does mean!

If you did maths at our uni in England, you'd know why you should fight the existing definitions; our department changed a lot of word usage for maths, with impressive results.

One of George Orwell's books has the idea that you can disempower a society by gradually changing the meanings of words, but I go for the opposite process: empowering myself by improving the usage of words. Remove bad usages, replace them with better usages, etc.

E.g. USB speeds are given in gigabits, which makes them seem larger than they really are; you need to think of USB speeds in gigabytes. E.g. the 10 gigabits you mention later sounds impressive, but it is only 1.25 gigabytes per second, which doesn't sound so impressive, though it is still good.
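The conversion is just a factor of 8 (and real-world throughput is a bit lower still, because of protocol overhead):

```python
# Convert quoted USB line rates from gigabits/s to gigabytes/s.
# Actual file-transfer speeds will be somewhat lower due to protocol overhead.
for gbits in (0.48, 5, 10, 20):
    print(f"{gbits:>5} Gbit/s = {gbits / 8:.3f} GB/s")
```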

I heard mobile phones became very important in Finland because of the larger distances between people, hence Nokia being an early pioneer of mobile phones.

In Britain anyone can have internet via their landline, but many people no longer use landlines and go via their mobile instead. My observation is that most people spend all their time talking on their mobiles rather than browsing the web.

The truth is most people aren't interested in ideas. The word philosophy literally means "love of wisdom"; some people just enjoy clever ideas, but a lot of people don't care, they just want pleasure, they aren't interested in interestingness.

Both are doable with a desktop PC.

For reading a QR code, you need a camera. Take any web camera of your choosing. Then it is just a matter of running the correct piece of software that can read the QR code the web camera sees.
And mobile apps, namely Android apps, are supported in Win11.
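As a concrete illustration of "the correct piece of software", here is a minimal sketch of my own, assuming the opencv-python package is installed and a webcam is available as device 0:

```python
import cv2  # pip install opencv-python

# Grab frames from the first webcam and try to decode a QR code in each one.
cap = cv2.VideoCapture(0)
detector = cv2.QRCodeDetector()

while True:
    ok, frame = cap.read()
    if not ok:
        break  # camera not available or stream ended
    data, points, _ = detector.detectAndDecode(frame)
    if data:
        print("QR code says:", data)
        break

cap.release()
```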


Laptop touchpads are poor, yes. Same with most laptop keyboards. But when a laptop has a USB port, you can connect a desktop keyboard/mouse to it.
My argument is: why not just make a laptop without the keyboard, mouse and screen?

I.e. just a much smaller box with some sockets; then you can attach a proper screen, mouse, keyboard etc. Why do the mouse, keyboard and monitor have to be welded together indivisibly?


The web version of Google Earth also has the freely spinnable globe. You just need to hold the left mouse button down to spin it around; zoom in/out is with the mouse scroll wheel.
Link: https://earth.google.com/


And my Skylake build cost me far more than the highest-end smartphone, plus then some. Same goes for my Haswell build (the missus's PC). So a desktop PC, when made with care and dedication, can cost a LOT.

With PCs it's a question of budget versus requirements.

Most of my usage isn't processor-demanding, e.g. writing emails, using a spreadsheet for monitoring my investments, enhancing photos, etc.

But because I am programming some of the hardware, I like to have the more advanced functionality, while the faster speeds are less important, just so I can test my software for compatibility with more advanced machines.

My existing 2010 PC is actually good enough for me; I am only upgrading because the USB seems to be worn out and I cannot use the more modern drives or higher-res monitors.

The faster drives are better for making backups. I don't like drives to be too huge, because if and when a drive malfunctions, a lot of stuff is at risk.

I would quite happily use this PC for another 10 years!

With RAM DIMMs, different generations DO have the notch at a different spot, whereby you can't physically insert e.g. DDR3 into any other, non-compatible slot, e.g. DDR4 or DDR5.

[image: main-qimg-c24762f86485c689a29d4480e6db9ca6-lq - notch positions on different DIMM generations]


As seen above, the notch on the DIMM, and also inside the slot, is at a different place for different generations of RAM.
I wasn't aware of that, but they need this kind of idea for timing compatibility too, i.e. for cases where a memory module physically fits but doesn't function properly.

In other words, the above isn't a full standard, as it doesn't standardise the timings.


Also, you call RAM sticks SIMMs, which in itself isn't wrong IF you are referring to the RAM sticks in older hardware, in use from the 1980s to the early 2000s.
Current, modern RAM sticks are called DIMMs, which came into use from the late 1990s. DDR and its revisions are all DIMMs.
Wiki SIMM: https://en.wikipedia.org/wiki/SIMM
Wiki DIMM: https://en.wikipedia.org/wiki/DIMM
It's an abuse of language! It's where I have redefined SIMM to mean "slot-in memory".

It's like some people calling a car a "motor", when it is a lot more than a motor!

The word SIMM has become mainstream because of mobile phone SIMs!

This is in use even today, and not only by AMD but by Intel as well. It is a cost-effective way to make CPUs: when 1 or 2 cores in an otherwise 8-core CPU fail, there is no point throwing away the entire CPU, which otherwise works fine. Just disable the unstable cores and sell it as a 6-core CPU.


But do you have ~550 quid to fork out just for the CPU?

If you do, you can go for the Ryzen 9 7950X3D, which has:
L1 cache: 1 MB
L2 cache: 16 MB
L3 cache: 128 MB
specs: https://www.amd.com/en/products/apu/amd-ryzen-9-7950x3d
pcpp: https://uk.pcpartpicker.com/product...x3d-42-ghz-16-core-processor-100-100000908wof
I have £550 if it's like that!

What I do is I mostly don't spend money; e.g. the main things I buy are food and fuel. I don't buy other stuff because I bought too much in the 2005 era and ran out of storage space! But if something is good I will spend a lot on it, but only if it's really good!

E.g. I will probably buy a higher-end Samsung smartphone, because I got talked into buying an X1 and it is dreadful compared to my earlier Samsung Galaxy Note 4.

What's the largest L1 cache size available?

What I want is for the L1 cache to be as big as possible; that L3 cache maybe is too big!

I would rather have a bigger L1 and a smaller L3, but maybe 1 MB of L1 is huge.

It might depend on what you are trying to do; I suppose that arrangement is a kind of cascade.

Ultimately one would have to benchmark to know which set of sizes is best, but bigger ought to always be better, unless two cores' caches clash repeatedly.
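If you did want to benchmark it, one rough way to see the cache hierarchy from software is to time random accesses into working sets of increasing size. A minimal sketch of my own (only indicative, since NumPy and interpreter overhead blur the steps):

```python
import time
import numpy as np

# Time random gathers from arrays of increasing size. Once the working set
# no longer fits in L1/L2/L3, the time per access should step up noticeably.
for size_kb in (16, 256, 4096, 65536, 262144):
    n = size_kb * 1024 // 8                        # number of 8-byte elements
    data = np.zeros(n, dtype=np.int64)
    idx = np.random.randint(0, n, size=2_000_000)  # random access pattern
    start = time.perf_counter()
    data[idx].sum()                                # gather + reduce
    elapsed = time.perf_counter() - start
    print(f"{size_kb:>8} KiB working set: {elapsed * 1e9 / len(idx):6.2f} ns per access")
```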

My main query with top-end CPUs, e.g. server-grade ones, is: do they consume too much electricity?
Do they generate too much heat?

Because that might be an argument against them. E.g. there was a YouTube video of a guy who tried bitcoin mining, and he said the big problem was too much heat and a big electricity bill; he had to open all the windows, it got so hot!

Do they give data on the heat generation and electricity costs of the different mobos and CPUs and graphics cards? I think graphics cards can get very hot!
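CPU and GPU makers do publish a TDP figure in watts, which is a rough guide to heat output, and from a wattage you can sketch a back-of-the-envelope running cost. The wattage, usage hours and tariff below are my own assumptions, not figures for any specific part:

```python
# Rough running-cost estimate for a power-hungry part (GPU or server CPU).
watts = 350            # assumed sustained draw; check the part's TDP
hours_per_day = 8
pence_per_kwh = 30     # assumed UK electricity tariff

kwh_per_day = watts / 1000 * hours_per_day
cost_per_day = kwh_per_day * pence_per_kwh
print(f"{kwh_per_day:.2f} kWh/day, about {cost_per_day:.0f} p/day "
      f"({cost_per_day * 365 / 100:.0f} GBP/year)")
```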

Maybe you could use the server for a central heating system and for cooking food?

I find USB connectors can also get to quite a high temperature, e.g. for flash drives and wireless dongles.

Why is this flash drive getting so hot when I haven't accessed it for 30 minutes?

Something wrong with the engineering?


Currently the best from Intel, the i9-14900K, has 32 MB of L2 cache and 36 MB of total smart cache.
specs: https://ark.intel.com/content/www/u...rocessor-14900k-36m-cache-up-to-6-00-ghz.html


I gave a look at your image, and that round port doesn't look like a PS/2 keyboard/mouse port to me, since it has too few round pins. 4 pins isn't enough for it to be a PS/2 keyboard/mouse socket; PS/2 uses a 6-pin mini-DIN connector. (Here, my knowledge of how the PS/2 port looks, including the pin count, came into play.)


If one needs more hardware connected to a smartphone/laptop/desktop than there are available ports, then there are hubs out there that one can buy to extend the number of devices that can be connected to the primary device.

The problem is that the USB 3 hubs PC World sold me have just 3 and 4 downstream sockets respectively; for 20 drives I would need hubs attached to hubs attached to hubs, and these ones draw power from the computer, so it's just not going to work out!

If they were powered hubs, I'd need an array of transformers!

I attached the wireless dongle directly to a USB 3 hub and found it was slower than when attached to a USB 2 hub attached to the same USB 3 hub, so something is not engineered properly!


For example, my Skylake MoBo has two USB 2.0 internal headers, but since I have more than 2 devices that require an internal USB 2.0 header, I was forced to buy an NZXT internal USB 2.0 hub.

[image: nzxt-usb-hub-internal.jpg]


It takes one USB 2.0 internal header and expands it into 3x USB 2.0 internal headers and 2x USB 2.0 type-A ports. Sure, one USB 2.0 internal header can only carry two USB 2.0 ports, and if I were to hook up all 5, I'd have a performance reduction. But I only needed to hook up two devices to that hub, retaining the performance.

The problem is we are back to USB 2, which is slower, and that defeats the point of a USB 3 drive: that it is much faster than USB 2.

Also, I found that with USB 2 hubs, e.g. one I have with 7 sockets, some sockets don't work and some are very slow, maybe USB 1?

If I put the wireless dongle in the wrong socket, webpages load really slowly; a page could take 3 minutes.

I have found the technology a bit disappointing. It's nice that they created a general socket that can do anything, but the extensibility and scalability are bad.

A properly scalable system can be expanded a lot, but with USB I think it would have been much better just to have a 2-level system: a general socket on the PC, which you then connect to a rack of sockets, e.g. 5 or 10 or 15 or 20.

Instead they have made the hub sockets the same technology as end-device sockets, and this is too clever, and very problematic. I haven't studied the architecture, nor have I programmed it yet; I want to, but that want is fighting with a crowd of other wants! So currently I cannot say where the problem is, but there is a major problem with USB.

Each socket has to be ready for a hub to be connected, and that must greatly complicate the engineering.

Probably some of the control software is badly written.


SCSI was a good system, where they limited it to, I think, 7 devices plus the controller, each with an ID encoded in 3 binary digits (2^3 = 8 IDs in total), and that more limited architecture meant they could get it right.

Keep it simple.

With SCSI, I could attach more devices satisfactorily.

Each item connected to only one upstream and one downstream neighbour, with the controller at one end and a terminator at the other. A nice simple system, easier to design properly.

Though the bigger underlying question is whether people really need that many ports on their device, or can get by with just a few ports per device. Thus far, the consensus is that if you need to connect more devices than there are ports on the primary device, you use a hub to extend the ports.
If a manufacturer were to add, let's say, 10 ports to a device, besides increasing the price (and maybe size) of the device, how many people out of 1000 would utilise all 10 ports? Maybe 1-2 would, while the rest, 998 people, won't. So it wouldn't be cost effective to add that many ports to a single device.

I definitely want 10 ports, or 20 ports!

AND for a tower system at home I don't care about the size. I am an ATX man; I don't want a tiddler mobo with limited socketing.


They should have modularised the problem: have a mobo USB bus, then junction sockets, and then end sockets which cannot be extended any further. E.g. where you said USB 3 can do 10 gigabits,

the problem is that 2 devices at 10 gigabits on a hub would get 5 gigabits each?

And 5 devices would get 2 gigabits each?

Instead, why not say have a bus with 10 gigabits, then say 2 junction sockets which are up to 10 gigabits each, but with some jumpers to spread the bits, e.g. 2 gigabits to this one and 8 to that one,

and then you attach a splitter to the junction, with say 10 sockets and jumpers to spread the bits,

e.g. 4 gigabits to the first socket, 3 to the next, 1 to the next.

E.g. if you want to clone a huge disk, you reconfigure so that the read disk has say 2 gigabits and the write disk has 8 gigabits, or whatever is most efficient.
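As a toy model of that kind of manual split (purely hypothetical; real USB hosts schedule bandwidth dynamically rather than via jumpers):

```python
# Toy model of a manually configured bandwidth split, as proposed above.
# Purely hypothetical: this is not how USB actually allocates bandwidth.
BUS_GBITS = 10.0

def split(budget_gbits, shares):
    """Divide a fixed bus budget between ports according to manual shares."""
    total = sum(shares.values())
    return {port: budget_gbits * s / total for port, s in shares.items()}

# E.g. cloning a disk: give the read drive 2 parts and the write drive 8 parts.
print(split(BUS_GBITS, {"read_drive": 2, "write_drive": 8}))
# {'read_drive': 2.0, 'write_drive': 8.0}
```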

It's like with cameras: some things can only be photographed properly with manual controls, including changes of lens.


In this era you could soft-configure it in the UEFI, without having to touch any hardware!


Whereas currently, presumably, all the USB devices and hubs are fighting for the data rates,

and maybe a wrongly positioned one in the arrangement gets too few bits?

E.g. a downstream one devours all the bandwidth, leaving the upstream one starved.


Their error is trying to make a totally general arrangement and leaving it to the system to decide things,

when in fact it's better for the user to configure the data access.

SCSI is better by being less general.

I would gladly pay an extra 200 quid to have 20 USB 3 sockets on a plate linked by a fat cable to the PC.

Put a dozen SATA sockets on it while you're at it, and even better!

Many of my drives are USB-encased SATA ones.

The amount of time and hassle dealing with malfunctioning USB costs a lot more than 200 quid.

But so far, never a problem from SATA! Much better system.
 