News 1-bit CPU for ‘super low-performance computer’ launched – sells out promptly

When fully assembled and powered up, this device can do just three things: flash an LED, turn the LED on, and turn it off.

Nice, but why does it need four ICs to accomplish this?

Instead of hanging a processor on everything today, I'd be more impressed with clever circuitry and a minimum number of components.
 
Are you asking why a CPU made of discrete, common DIP packages needs 4 chips? Because it is made from off-the-shelf parts you used to be able to pick up at Radio Shack. Look at the video thumbnail: it's a breadboard.

Likely a very power-inefficient solution, but good luck getting a company to produce this circuit as an integrated chip. All the cost would be in the packaging, and you would still need a board for I/O.
 
  • Like
Reactions: bit_user
Nice, but why does it need four ICs to accomplish this?

Instead of hanging a processor on everything today, I'd be more impressed with clever circuitry and a minimum number of components.
They are VERY basic integrated circuits, so simple that each package contains multiple copies of the same small circuit. I had to buy some of these for an introductory class on digital circuits (a quick gate-level sketch of the idea follows below the list).
  • (74HC74) Dual D Flip-Flop with Set and Reset (8x 2-input NAND gates & 2x NOT gates)
  • (74HC153) Dual 4-input multiplexer (4x 3-input NAND gates & 2x NOT gates)
  • (74HC14) Hex Schmitt-Trigger Inverter (1x NOT gate & 2 resistors)
  • (74HC00) Quad 2-Input NAND Gate (1x 2-input NAND gate, 4 of them in a package)
The product names are even an industry standard: https://electronicsclub.info/74series.htm
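
For a sense of how little logic is actually inside each of those packages, here's a rough Python sketch that builds the other functions out of nothing but the 2-input NAND you get in a 74HC00. It's an illustration of the idea, not the internal schematic of any particular chip, and the latch is a simplified one-pass model of the real feedback:

```python
def nand(a, b):
    """The 2-input NAND primitive (a 74HC00 holds four of these)."""
    return 0 if (a and b) else 1

def inv(a):
    """Inverter, like one gate of a hex inverter such as the 74HC14."""
    return nand(a, a)

def mux2(sel, a, b):
    """2-to-1 multiplexer; the 74HC153 is a dual 4-to-1 version of the same trick."""
    return nand(nand(a, inv(sel)), nand(b, sel))

def d_latch(d, enable, q_prev):
    """Gated D latch from four NANDs; clocked flip-flops like the 74HC74 build on latches like this."""
    s = nand(d, enable)
    r = nand(inv(d), enable)
    q_bar = nand(r, q_prev)      # simplified single pass through the cross-coupled pair
    return nand(s, q_bar)        # new Q

# Quick checks: the mux selects input a when sel=0 and input b when sel=1,
# and the latch follows D while enabled but holds its value when disabled.
assert mux2(0, 1, 0) == 1 and mux2(1, 1, 0) == 0
assert d_latch(1, 1, 0) == 1 and d_latch(0, 0, 1) == 1
```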
 
We don't need a hardware solution when we have Windows.

We can even run Python on Windows and easily beat it at being slow.
 
Last edited:
Man, you guys in this forum have absolutely no imagination. The point of the kit is to be a fun educational novelty for anyone who enjoys computer architecture and extremely simple logic circuits. The chips are more complex and less efficient than strictly necessary because they are off-the-shelf components used to build the educational demo. A custom IC would drastically increase the cost at these low production runs, and an analog circuit that simply did the same job would no longer be educational about computer architecture, or novel.
 
  • Like
Reactions: palladin9479
I prefer the old learning labs/trainers; not sure if they still make them. RadioShack rebranded some Elenco kits in more recent times, but Elenco doesn't seem to be making them anymore either.

The 300-in-1 kit came with ICs, transistors, LEDs, etc., so you could experiment with breadboarding, logic, and that sort of thing. The trainer had a power supply, potentiometers, a speaker, and other things to hook up as well.
 
  • Like
Reactions: George³
A 1-bit CPU for a ‘super low-performance computer’ has been launched; it offers a very limited scope of use, but it may be a fun DIY project.

1-bit CPU for ‘super low-performance computer’ launched – sells out promptly: Read more
I don’t understand the point of this, other than it being a weird, extreme “lo-fi” novelty à la pixel art. Perhaps a basic teaching tool? I got the Raspberry Pi right away, even though I didn’t have a need for it. Aside from “because we can”, is there any other reason for this product to exist?
 
I think so. I believe add, jmp, and xor are sufficient for Turing completeness. You'd have to provide substantial external storage and a display, obviously.
From the first paragraph:

" ... only runs at approximately 1 Hz, has a bus width of 1 bit, an address space of 2 bits, and a ROM capacity of 4 bits."

So, the answer is a resounding "no". Computers in the 1940s were much faster and had far more capacity. Also, Turing completeness assumes you have effectively unlimited memory and time.

Even for an educational toy, this is a piece of junk. I had better learning kits back in 1989.
 
Last edited:
Can it run Quake if I REALLY turn down the resolution? I mean, ALL computers can run Quake, right?
Again, the Turing completeness argument only applies if you have enough RAM (and enough time), which this obviously doesn't.

Also, for Quake to be deemed "playable", I think you'd have to stipulate a minimum resolution & framerate of about 160x120 and 15 fps. That's probably achievable on a 486DX-33. Not sure whether a 486SX could manage it, since Quake used floating point in its pixel loop.

BTW, even if you allowed a resolution of just 1x1, don't forget that the game's AI and physics still needs some CPU cycles to run.
 
No, but it can play Crysis if you don't mind 1 frame every other month at 160x100 in monochrome 😛
I've already mentioned the lack of any real RAM, which basically prevents it from running any kind of software program, whatsoever.

Now, let's consider the issue of performance. The 1 Hz clock speed surely means it's not going to perform anywhere near as well as your estimate. If we assume Crysis' software renderer runs at 1 FPS on a 1 GHz CPU, it would take roughly 32 years to render a frame at a clock speed of 1 Hz, not even accounting for the IPC discrepancy. If the discrepancy is even 33x (which I think is quite a low estimate), that would push frame rendering times to about a millennium. I'd actually peg the IPC discrepancy at more like 330x, which means you'd get a frame every 10k years. Give or take.
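
If you want to sanity-check that, here's the back-of-envelope version in Python. The 1 FPS baseline and the IPC penalties are just my guesses, so treat the outputs as order-of-magnitude figures only:

```python
# Rough frame-time estimate for a 1 Hz CPU, scaled down from a 1 GHz baseline.
# All inputs are assumptions, not measurements.
SECONDS_PER_YEAR = 3.15e7

baseline_fps = 1.0      # assumed: Crysis' software renderer hits 1 FPS on a 1 GHz CPU
clock_ratio  = 1e9      # 1 GHz vs. 1 Hz

for ipc_penalty in (1, 33, 330):    # guessed per-clock (IPC) discrepancies
    seconds_per_frame = (1 / baseline_fps) * clock_ratio * ipc_penalty
    years = seconds_per_frame / SECONDS_PER_YEAR
    print(f"{ipc_penalty:>3}x slower per clock -> about {years:,.0f} years per frame")

# Prints roughly 32 years, ~1,000 years and ~10,500 years per frame, respectively.
```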
 
I've already mentioned the lack of any real RAM, which basically prevents it from running any kind of software program, whatsoever.

Now, let's consider the issue of performance. The 1 Hz clock speed surely means it's not going to perform anywhere near as well as your estimate. If we assume Crysis' software renderer runs at 1 FPS on a 1 GHz CPU, it would take roughly 32 years to render a frame at a clock speed of 1 Hz, not even accounting for the IPC discrepancy. If the discrepancy is even 33x (which I think is quite a low estimate), that would push frame rendering times to about a millennium. I'd actually peg the IPC discrepancy at more like 330x, which means you'd get a frame every 10k years. Give or take.
Still better FPS than most computers get in Minesweeper.
 
I've already mentioned the lack of any real RAM, which basically prevents it from running any kind of software program, whatsoever.

"Lighten up Francis". You know full well it was a joke. This device is nothing more than a teaching tool.
 
"Lighten up Francis". You know full well it was a joke. This device is nothing more than a teaching tool.
I do like jokes! I'm glad you mentioned teaching...

In my job, a significant problem I face is basic numeracy among people you'd think should know better (i.e., college-educated, technical folks). When working with computers, we deal with a range of numbers that might be uncommon even among engineering disciplines. It can be hard to keep things in perspective sometimes.

For instance, if you work out how many clock cycles tick by on a 64-core server CPU @ 3 GHz in an hour, you get a mind-boggling number on the order of 10^14. It can be truly difficult to get your head around such a number. Therefore, I find it's helpful to sometimes walk through and run the numbers, to show what we're actually dealing with.

Let's say someone wants to optimize a bit of code that takes a few microseconds and is called at most every few seconds. Not worth it! Now, if it's called thousands of times per second, maybe it's worth a look. However, if the optimization you're trying to do only shaves off a few dozen nanoseconds, because the code is making the same number of syscalls either way - and that's where the real time is spent - it's not worth it!
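
To make the "run the numbers" habit concrete, here's roughly what that back-of-envelope analysis looks like in Python. The call rates and savings below are made-up examples, not figures from any real profile:

```python
# 1) Clock cycles ticking by on a 64-core server CPU at 3 GHz over one hour.
cycles_per_hour = 64 * 3e9 * 3600
print(f"{cycles_per_hour:.2e} cycles per hour")   # ~6.91e+14, i.e. on the order of 10^14

# 2) Is a micro-optimization worth it? Estimate the time saved per second of runtime.
def fraction_saved(time_saved_per_call_s, calls_per_second):
    """Seconds saved per second of wall-clock time (the fraction of runtime recovered)."""
    return time_saved_per_call_s * calls_per_second

# A few microseconds saved on a call made every few seconds: ~1e-6, not worth it.
print(fraction_saved(3e-6, 1 / 3))
# The same saving on a call made thousands of times per second: ~1.5e-2, worth a look.
print(fraction_saved(3e-6, 5000))
# Shaving a few dozen nanoseconds when syscalls dominate anyway: ~2.5e-4, not worth it.
print(fraction_saved(50e-9, 5000))
```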

I'm not one to let a learning opportunity go unexploited. Show people how to do the analysis, and try to improve numeracy among the population. That's my thinking.

In a similar vein, here's a great stocking stuffer for anyone still looking for geek gifts. The author of the popular XKCD web comic is a trained physicist who indulges in just such explorations (often to much greater comedic effect):

You could think of it a bit like Mythbusters, for bookish types.
: )

You can also read some of them for free online (and there are even videos!):

Speaking of videos, here are some talks he's given.

I hope you don't mind the recommendation. I'm both a fan of XKCD and a believer in what he's doing to spread understanding of math & science by making it fun.
 
I think so. I believe add, jmp, and xor are sufficient for Turing completeness. You'd have to provide substantial external storage and a display, obviously.
Unfortunately, it is not Turing complete. A Turing-complete processor needs to be able to write to memory (the "tape" of a Turing machine), and this one can't.

It can also have the halting problem solved for it: the answer is no, it never halts. You could instead ask whether it reaches the end of memory (an implicit loop) or executes a jump (an explicit loop), which only requires looking at the two instructions in the program (and even then, only 3 of the bits): only "Add 0/1 Add 0/1" or "JMP 1 Add 0/1" ever reaches the end of memory. If the program contains a jump, it must be a forward jump, as the program space is too small for a "forward jump to backward jump to forward jump to the end of memory" sequence.
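
You can check that enumeration mechanically. Here's a quick Python sketch that walks every possible 4-bit ROM image; note that the opcode encoding (0 = Add, 1 = JMP) and the jump semantics (JMP n loads the 1-bit address n into the program counter) are my own assumptions for illustration, not the kit's documented instruction set:

```python
# Enumerate every 4-bit program for a 2-instruction ROM, where each instruction
# is 1 opcode bit + 1 operand bit, and see which programs ever reach the end.
# Assumed semantics: Add x adds the 1-bit operand and steps forward; JMP n sets
# the program counter to address n.

def reaches_end(program, max_steps=8):
    pc, acc = 0, 0
    for _ in range(max_steps):
        if pc >= len(program):          # ran past the last instruction
            return True
        opcode, operand = program[pc]
        if opcode == 0:                 # Add
            acc = (acc + operand) & 1
            pc += 1
        else:                           # JMP
            pc = operand
    return False                        # still stuck in a loop

for bits in range(16):                  # all 16 possible 4-bit ROM images
    program = [((bits >> 3) & 1, (bits >> 2) & 1),
               ((bits >> 1) & 1, bits & 1)]
    listing = "; ".join(f"{'Add' if op == 0 else 'JMP'} {arg}" for op, arg in program)
    print(f"{listing:<14} -> {'reaches the end' if reaches_end(program) else 'loops before the end'}")

# Only "Add x; Add y" and "JMP 1; Add y" ever reach the end, as argued above,
# and the final operand bit never matters - hence only 3 bits to inspect.
```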

An external program counter for extending its program memory wouldn't help either, as its jump instruction would mostly stop working, being unable to store to the external PC.
 
Last edited:
  • Like
Reactions: bit_user
Status
Not open for further replies.