Yeah, my initial assumption was that they meant days, not hours. That'd be 6.8 years, which is remarkable but not unheard of for more modern hardware. Maybe 2500 days would be worth talking about.
[quote]What InvalidError said about printers: exactly true (the PC's character table contained invisible characters dedicated to basic printer control). About drivers: there were few, but still - mouse driver, CGA emulation on Hercules graphics, but also some sound cards (if you had one) or ATAPI drives needed a DOS driver. Stuff like modems or network cards, too.[/quote]
Sound card drivers weren't really a thing in the DOS days; you just set your ULTRASND/BLASTER environment variables to tell software what the IRQs, port ranges, DMAs, etc. were, and most software went bare-metal from there. The only "drivers" I remember were for sound card emulation (mainly AdLib and Roland), and I don't remember those working particularly well.
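To illustrate how driverless that model was, here's a minimal sketch of the kind of parsing DOS software did for itself. The code and output format are mine (and it runs on a modern system for demonstration), but the Axxx/In/Dn field convention is Creative's real BLASTER format:

```c
/* Sketch: DOS software parsed the BLASTER variable itself instead of
 * asking a driver. Fields: Axxx = I/O port in hex, In = IRQ,
 * Dn = 8-bit DMA channel (e.g. "A220 I5 D1 T4"). */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *blaster = getenv("BLASTER");
    unsigned port = 0, irq = 0, dma = 0;
    const char *p;

    if (blaster == NULL) {
        puts("BLASTER not set - no card configured.");
        return 1;
    }
    for (p = blaster; *p != '\0'; p++) {
        switch (*p) {
        case 'A': port = (unsigned)strtoul(p + 1, NULL, 16); break;
        case 'I': irq  = (unsigned)strtoul(p + 1, NULL, 10); break;
        case 'D': dma  = (unsigned)strtoul(p + 1, NULL, 10); break;
        }
    }
    printf("port=%Xh irq=%u dma=%u\n", port, irq, dma);
    return 0;
}
```

Run it with, e.g., `BLASTER="A220 I5 D1 T4" ./a.out` to see the decoded settings.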
[quote]Then you have the standard device drivers like ANSI.SYS. If you are using a parallel port storage device like a Zip drive, you have a driver there.[/quote]
ANSI.SYS was only necessary if you wanted to do things like colors from batch files, or to pipe ANSI-formatted serial data straight to the console to render a remote text interface without parsing it yourself. If you don't want your local text output to feel like a serial console from all of the encoding and decoding overhead that goes with ANSI, you write directly to the character buffer at B800:0000.
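For the curious, a minimal sketch of that direct-write approach in the style of a 16-bit DOS compiler such as Borland Turbo C. The put_cell helper is my name, not anything from the thread; MK_FP is the Borland macro for building a far pointer:

```c
/* Write one character + attribute cell straight into color text-mode
 * video memory at B800:0000, bypassing DOS and ANSI.SYS entirely.
 * Assumes a 16-bit DOS compiler (far pointers, MK_FP from <dos.h>). */
#include <dos.h>

static void put_cell(int row, int col, char ch, unsigned char attr)
{
    unsigned char far *vram = (unsigned char far *)MK_FP(0xB800, 0);
    unsigned offset = (row * 80 + col) * 2;  /* 80x25, 2 bytes per cell */

    vram[offset]     = ch;     /* character byte */
    vram[offset + 1] = attr;   /* attribute byte, e.g. 0x1F = white on blue */
}

int main(void)
{
    put_cell(0, 0, 'A', 0x1F);  /* appears instantly, no INT 21h involved */
    return 0;
}
```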
[quote]@mbbrutman, by any chance did you do any software development on other systems? If it were me, I'd develop and unit test as much of it as I could under Linux, where I could use tools like valgrind to check for memory errors.[/quote]
I don't think any modern *NIX could ever run on that hardware, apart from MINIX. And the simple act of porting would introduce these kinds of bugs in droves. OTOH, 16-bit dev meant you could almost track your pointers by eyeballing them...
What language, compiler, and runtime libraries did you use?
You mis-quoted my message. Please fix your post, even if you just add a bare
[quote]
tag at the beginning.

[quote]I don't think any modern *NIX could ever run on that hardware, apart from MINIX.[/quote]
I said "as much of it as I could", meaning all the parts of the TCP stack, etc. which didn't interact with DOS or the bare hardware.
[quote]And the simple act of porting would introduce these kinds of bugs in droves.[/quote]
Assuming it's C code, probably not. In general, getting code to work with another toolchain and even on another OS usually improves the quality, but that also depends on how much you have to touch just to make it portable. Hence, I'd focus on unit testing just the pure, generic C routines.
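To make that concrete, here's a sketch of the sort of pure routine that ports cleanly: an RFC 1071 Internet checksum with a self-test, compilable under gcc and checkable with valgrind or sanitizers before ever touching a 16-bit toolchain. The function name is mine, not from the actual project:

```c
/* RFC 1071 Internet checksum: no DOS calls, no hardware access, so it
 * can be unit-tested on Linux first. C99 headers used for the test build. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

uint16_t inet_checksum(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;

    while (len > 1) {                      /* sum 16-bit big-endian words */
        sum += (uint32_t)((data[0] << 8) | data[1]);
        data += 2;
        len  -= 2;
    }
    if (len)                               /* odd trailing byte */
        sum += (uint32_t)(data[0] << 8);

    while (sum >> 16)                      /* fold the carries back in */
        sum = (sum & 0xFFFFu) + (sum >> 16);

    return (uint16_t)~sum;
}

int main(void)
{
    /* Worked example from RFC 1071: the one's-complement sum of these
     * bytes is 0xDDF2, so the checksum is its complement, 0x220D. */
    const uint8_t pkt[] = { 0x00, 0x01, 0xF2, 0x03, 0xF4, 0xF5, 0xF6, 0xF7 };
    assert(inet_checksum(pkt, sizeof pkt) == 0x220D);
    return 0;
}
```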
[quote]93 years might sound like a long time between restarts, but the computer in the Voyager spacecraft has been running for over 48 years (and counting) without a reboot, for example.[/quote]
This factoid gets trotted out a lot, despite not being true. The Voyager probes have triplicate main processors and many subsystem-specific processors (e.g. comms subsystem, instrument processors, etc.), and resets are hardly uncommon, nor are software updates (which end with a reset to switch to the new code). The Voyagers also contain multiple CLRTs (Command Loss Reset Timers) for multiple subsystems, so components can reset themselves if they lose communication - part of the routine communication with the probes involves sending regular CLRT reset commands - in addition to commanded resets. Total system resets have also occurred, e.g. for Voyager 2 in 2010: https://voyager.jpl.nasa.gov/news/details.php?article_id=162
[quote]It all depends on how much data the cpu handles. AMD Ryzen 2900 and 2950x for instance, never made it further than a week to two weeks of uptime under 100% load. They're horrible for full load applications. Intel on the other hand, some of my modern celerons, pentiums, and core I5 units, ran quite literally for months before freezing.[/quote]
Erm... the Ryzen 2900 and 2950, you sure about that? Those names belong to Threadripper CPUs - 300 W beasts that require a LOT of cooling to work properly. Loading them at 100% for weeks at a time will surely tax a motherboard quite a lot - a LOT more than Celeron or Pentium processors, which are weak-@r$e entry-level processors that draw only a little power. An i5 is not much better.
I believe Intel runs more stable than AMD, but the CPU isn't the only one at fault. Sometimes RAM running XMP profiles can cause errors too.
For a DOS server, the guy might as well have run it on an Atom CPU with registered RAM. Uses the same power, but is quite literally 100-400x faster, resulting in lower lag.
[quote]IDK if I'm missing some aspect of this or what. I saw the posts where individual(s) are discussing what old hardware it is and whatnot... but >105 days uptime is supposed to be a big deal?[/quote]
The only thing that might be somewhat of a big deal is the stack of bodges surviving the traffic surge from such a story, which likely prompted some people to try crashing the thing by testing whether its IP stack and minimal HTTP server could handle weird packets.
[quote]3. it would still need to be on a 16-bit OS (or with a C compiler that allows cross-compilation towards a 16-bit OS), as the meaning of int, long, etc. may change from one to the other. There aren't many compilers that still support 16-bit platforms, so the port itself may hit bugs in the compilation toolchain.[/quote]
Good point about the types changing sizes. Not using the same sizes could mean failing to find overflow bugs in counters and index variables.
I'm not saying you're wrong, only that the port itself may hit more bugs than one would solve when compared with developing on period-correct software.
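A tiny illustration of the point about type sizes (illustrative code, not from the project): the same counter behaves differently depending on whether int is 16 or 32 bits, so a test suite run only on Linux never exercises the wrap.

```c
/* unsigned int is 16 bits under real-mode DOS compilers and 32 bits on
 * a typical Linux build, so the same loop gives different answers. */
#include <stdio.h>

int main(void)
{
    unsigned int bytes_seen = 0;
    long i;

    for (i = 0; i < 70000L; i++)   /* pretend we received 70,000 bytes */
        bytes_seen++;

    /* 16-bit build: wraps at 65536 and prints 4464.
     * 32-bit build: prints 70000. Only the 16-bit target shows the bug. */
    printf("bytes_seen = %u\n", bytes_seen);
    return 0;
}
```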
[quote]It all depends on how much data the cpu handles. AMD Ryzen 2900 and 2950x for instance, never made it further than a week to two weeks of uptime under 100% load. They're horrible for full load applications. Intel on the other hand, some of my modern celerons, pentiums, and core I5 units, ran quite literally for months before freezing.[/quote]
Source? If you're not using ECC memory, then I could see how you might get a kernel panic that way. Otherwise, I find that claim highly suspect.
[quote]I believe Intel runs more stable than AMD, but the CPU isn't the only one at fault. Sometimes RAM running XMP profiles can cause errors too.[/quote]
Intel Xeons are specified for 24/7 heavy compute loads. Most of their client processors aren't.
[quote]For a DOS server, the guy might as well have run it on an Atom CPU with registered RAM. Uses the same power, but is quite literally 100-400x faster, resulting in lower lag.[/quote]
Agreed that old hardware is inefficient. That clearly wasn't the point.
[quote]At least the AMD Zen platforms support ECC memory; that's one thing worth considering for a server - where Intel requires you to go Xeon for ECC.[/quote]
Most Intel i3 models support ECC memory, if you use them in a motherboard which supports it. In the case of Alder Lake and Raptor Lake, they opted to do the same for most of the upper product stack, rather than making a Xeon E-series version.
[quote]Most Intel i3 models support ECC memory, if you use them in a motherboard which supports it. In the case of Alder Lake and Raptor Lake, they opted to do the same for most of the upper product stack, rather than making a Xeon E-series version.[/quote]
Yeah, well, since DDR5 requires ECC, they had little choice but to enable it - or look stupid when asked why the same motherboard with the same CPU would support ECC on its DDR5 slots and not on the DDR4 ones.
Intel Enables ECC Memory on Consumer Alder Lake CPUs (www.tomshardware.com)

You need a W680 chipset, though.
[quote]Depends on what the software was doing. Because it wasn't multithreaded, if the software would go away and do something for a while, you could easily get ahead of it.[/quote]
TSR, interrupt 21h.
DOS had TSRs (Terminate and Stay Resident programs - sort of like a UNIX process running in the background), but those tended to be extremely simple, so you basically had just one program running at a time.
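A minimal sketch of the TSR mechanism in Borland Turbo C style (the handler is illustrative; a real TSR computes its resident size precisely and worries about DOS re-entrancy, which this toy ignores):

```c
/* Hook the BIOS timer tick (INT 08h), then exit but stay resident via
 * keep(), which wraps the DOS INT 21h function 31h "TSR" exit. */
#include <dos.h>

static void interrupt (*old_timer)(void);
static volatile unsigned long ticks;

static void interrupt new_timer(void)
{
    ticks++;        /* tiny bit of "background" work, ~18.2 times/second */
    old_timer();    /* chain to the previous handler */
}

int main(void)
{
    old_timer = getvect(0x08);   /* save the old timer vector */
    setvect(0x08, new_timer);    /* install ours */

    /* Stay resident: exit status 0, size in 16-byte paragraphs. 4096
     * paragraphs (64 KB) is deliberately lazy; real TSRs compute this. */
    keep(0, 4096);
    return 0;                    /* never reached */
}
```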
It's hard to believe the 386 launched in 1985 and it took another 10 years for Windows 95 to come along and finally put the first nail in DOS' coffin.
[quote]Yeah, well, since DDR5 requires ECC, they had little choice but to enable it - or look stupid when asked why the same motherboard with the same CPU would support ECC on its DDR5 slots and not on the DDR4 ones.[/quote]
There are a few points of confusion here.

Oh, and if you want to use ECC UDIMMs on an LGA-1700 CPU, you'd better have a motherboard with the W680 chipset.
[quote]You could enable ECC on a Ryzen 1600 on a B350 motherboard with DDR4 ECC RAM - ECC support was required at the motherboard level, but there were some - it wasn't a matter of chipset, more of wiring and BIOS/UEFI support.[/quote]
AMD played games with ECC support. First, there was the whole game around whether it was officially supported. Then, the question of whether you could get reporting of ECC errors. Finally, they took a page from Intel's book and disabled it on their non-Pro APUs.