News: Linus Torvalds says RISC-V will make the same mistakes as Arm and x86

The difference is noticeable only with very CPU-optimized code. But nowadays, who needs it?
99% of software development is done with high-level languages and frameworks with many abstraction layers.
Optimization is needed only in parts of kernel and driver development.


This thinking is part of the reason we need to buy new PCs every few years.

For example, Haiku OS runs well on an old server made in 2004. But Haiku is responsive because of its excellent base design, as well as low-level computer knowledge.
 
For example, Haiku OS runs well on an old server made in 2004. But Haiku is responsive because of its excellent base design, as well as low-level computer knowledge.
It's easier when you start with a clean slate and fewer requirements. I suspect that if Haiku had to satisfy all of the requirements placed on something like Windows 11, it would start to resemble it more closely, in terms of performance.
 
Producing & sustaining so many humans is (so far) even more resource-intensive than AI!
; )
Eh? Humans themselves are running on a fraction of the energy requirements; most humans barely spend 1KW/hr per year of energy, which is a lot less than the 50000KW/hr a small-scale training server needs.

What humans do spend a lot of energy on is the material things they want, like AI hardware, since the value of energy resources in our current economic system is only equal to the cost of extraction plus the extractor's profit margin, plus one-time costs for building power plants and engines.

The way raw materials and natural resources are valued in our economic system was intentionally set up centuries ago to favour those in power, and on a theoretical planet with infinite resources it would make a lot of sense. But that is the biggest delusion of all.

The main advantage of AI is that it can be owned, like slaves were owned before, and it can keep being trained while enslaved, unlike human slaves, whose learning was stunted by enslavement. To our mercantilist economic system this is the dream, until we kill the planet in a few more decades, but that is not contemplated by the current system.
 
Eh? Humans themselves are running on a fraction of the energy requirements,
You've heard the term "carbon footprint", no? We should consider the entire energy consumption of a human worker. If you want to compare lifecycle costs, then you have to account for the energy & resource consumption including all of the nonproductive years of a human.

Taking all of this into account, wet brains look remarkably inefficient!
 
You've heard the term "carbon footprint", no? We should consider the entire energy consumption of a human worker. If you want to compare lifecycle costs, then you have to account for the energy & resource consumption including all of the nonproductive years of a human.

Taking all of this into account, wet brains look remarkably inefficient!
A 2000 calories diet equals to 2.2w/hr per day, or 803w/hr per year, or 64kw/hr in a 80 years life time. This is the energy cost (25% variance) of keeping a human alive and thinking.

Our industrial economy scales this number up enormously, to an extent out of need, as we have to grow food artificially through industrial agriculture, keep people warm or cool in overexpanded habitats, and cover the healthcare, education, and transport costs required to keep all of the above going.

However, from the 5000-10000kw/hr per person needed to keep a modern lifestyle going to the, say, 78000kw/hr somebody in the United States uses, which now includes making AI handle things we were already handling ourselves, there is a huge difference.

Technology needs to improve further before it is prudent to use AI for tasks that humans were already doing, and most efforts should instead be focused on things our non-binary brains cannot handle well. The only reason it makes sense to use AI as a replacement for people is our very broken economic system.

And for much the same reason, high-level abstractions and API redundancy are taking over from optimization, too.
 
A 2000 calories diet equals to 2.2w/hr per day, or 803w/hr per year, or 64kw/hr in a 80 years life time. This is the energy cost (25% variance) of keeping a human alive and thinking.
No, it's not. Creating food with the requisite nutrition and energy + getting it to the worker + preparing it and accounting for spoilage and other waste puts the figure vastly higher. But, even after that, food is just a small part of an educated worker's total energy footprint.

from the 5000-10000kw/hr per person needed to keep a modern lifestyle going to the, say, 78000kw/hr somebody in the United States uses,
I doubt you're comparing the energy needed for education, care, and feeding of a knowledge worker to the US figure. A university education (or equivalent) surely uses a lot of energy. Those unproductive years should also be accounted for, in the total human lifecycle cost.

The whole thing is a big tangent, though.
 
A 2000 calories diet equals to 2.2w/hr per day, or 803w/hr per year, or 64kw/hr in a 80 years life time.
Rather than "W/hr" (or kW/hr), the unit you're looking for is Whr (potentially written as W⋅hr). It's watt-hour, not watt-per-hour.

Also, your numbers here are off by a factor of a thousand, likely due to the fact that "calorie" can refer to two different units which differ by a factor of 1000. In the physics/thermodynamic use, calorie is equal to 4.184 joules. When used in the context of food energy, one calorie is equal to 4184 J (4.184 kJ). Food calories are sometimes referred to as kilocalories (or written with a capital 'C') to differentiate. So humans consume roughly 2.2 kWhrs, per day, in food.
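
If it helps, here's a quick back-of-the-envelope check in C (my own sketch; the 2000 kcal/day, 365-day, and 80-year figures are simply the ones used above):

```c
/* Quick sanity check of the food-energy conversion above.
 * Assumes 1 food Calorie (kcal) = 4184 J and 1 kWh = 3.6e6 J. */
#include <stdio.h>

int main(void)
{
    const double kcal_per_day   = 2000.0;   /* typical adult diet       */
    const double joules_per_kcal = 4184.0;
    const double joules_per_kwh  = 3.6e6;

    double kwh_per_day  = kcal_per_day * joules_per_kcal / joules_per_kwh;
    double kwh_per_year = kwh_per_day * 365.0;
    double kwh_per_life = kwh_per_year * 80.0;

    printf("per day:  %.2f kWh\n", kwh_per_day);   /* ~2.3 kWh    */
    printf("per year: %.0f kWh\n", kwh_per_year);  /* ~850 kWh    */
    printf("80 years: %.0f kWh\n", kwh_per_life);  /* ~68000 kWh  */
    return 0;
}
```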
 
We need to rise above just processor architectures and think about system architecture, and the different styles of system from general multi-processing systems to single-purpose embedded systems.
Here is what I have written in response to an article about what Torvalds said:
https://medium.com/@ianjoyner/what-...s-beyond-processor-architectures-9d3450efb72b
A couple of things in your article suggest to me that you might find it interesting to read about old LISP machines, which worked at a higher level of abstraction than RISC or even CISC. However, I have it on good authority that when Symbolics compiled their OS for leading RISC machines of the day (I'm guessing probably SPARC, since this would've been the late 1980s), it ran faster than on their own native LISP hardware.

FWIW, I think you draw a somewhat artificial distinction between hardware and software, when you suggest that a meaningful difference exists between implementing security in hardware vs. at the language level. For instance, even if some CPU had support for strings as a first-class data type, such instructions would likely be decoded into the same conventional micro-ops used in current microarchitectures, because that's the level of abstraction that's best-suited to implement efficiently. Why do that translation in hardware, rather than at compile-time?
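
To make that concrete, here's a hypothetical sketch (not anything from the linked article) of the compile-time alternative: a string comparison written in plain C already reduces to the simple loads, compares, and branches that a dedicated hardware string instruction would have to be decoded into anyway.

```c
/* Hypothetical illustration: a "compare two strings" operation
 * expressed as the primitive loads, compares, and branches a
 * compiler can emit directly -- the same kind of simple ops a
 * hardware string instruction would be decoded into. */
#include <stdbool.h>

bool str_equal(const char *a, const char *b)
{
    while (*a != '\0' && *a == *b) {   /* load byte, compare, branch */
        a++;
        b++;
    }
    return *a == *b;                   /* true only if both strings ended at the same point */
}
```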

Anyway, I think this isn't the sort of stuff Linus was talking about. He wasn't talking about RISC-V because its ISA is all that different, but just because it will have to go through the same iterative refinement needed to achieve a robust and efficient system architecture, like other CPU families before it.

As for what system-level architecture is ultimately best, I think it's really best to start at the OS level and figure out what kind of OS architecture & constructs you want to streamline. Then, find the impedance mismatches vs. current hardware and think about ways to refactor it so it can more naturally and efficiently implement the target OS.
 
Yes, I've heard of LISP machines. The machines I'm talking about were designed for ALGOL, which is sufficiently general to execute most other languages efficiently, including what were the best Simula and APL compilers. The instruction set is:

https://en-academic.com/dic.nsf/enwiki/711647

"FWIW, I think you draw a somewhat artificial distinction between hardware and software, when you suggest that a meaningful difference exists between implementing security in hardware vs. at the language level."

I can't see what you are getting at there. Security should be at the system level, below the level at which programmers can program or languages can generate instructions. That is, the instruction set itself is secure. How much of that is in hardware or microcode is an open question.

Many operations are very common, like string operations. Instead of compilers generating the same sequence of RISC instructions over and over, put that sequence of instructions in an on-chip cache, although that is an implementation detail. But it is system organisation that has more to do with speed than the instruction set, even though the RISC instruction set was somehow magically supposed to be faster. That speed had more to do with getting everything on a single chip.

We rely far too much on compilers generating safe code. The fact is that if compilers can generate bad code, then hackers can get at that level. Security must be built in at the lowest levels or we are always trying to catch up, adding after-the-fact utilities to make up for the weakness.

For general computing, memory accesses must be tested to be within bounds. That check should be done in the microinstructions. The RISC processors (or any processors) provide the microinstruction level. All processors just execute a stream of instructions — they don't know about branches or loops or calls. Part of the system schedules those actual instructions for execution. Some native instructions can be used in the system instruction set.
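
For concreteness, here is a minimal sketch in C of the kind of check being argued for (my own illustration, not from the article); the point is that this test should live below the visible instruction set, in the microinstructions, rather than being something every compiler has to remember to emit.

```c
/* Minimal sketch of a bounds-checked load, written as ordinary C
 * for illustration. The argument above is that this check belongs
 * beneath the instruction set, in microcode, not in compiled code. */
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    uint8_t *base;    /* start of the object      */
    size_t   length;  /* number of valid bytes    */
} bounded_buf;

static uint8_t checked_load(const bounded_buf *buf, size_t index)
{
    if (index >= buf->length)
        abort();      /* out-of-bounds: trap instead of reading */
    return buf->base[index];
}
```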

There are many ways to design such systems; we are just fixated on one particular way from the most simplistic 'computer architecture' courses.

"Anyway, I think this isn't the sort of stuff Linus was talking about."

He was certainly making a point about the infrastructure behind supporting an architecture like x86. However, we are finding that big companies are losing their monopoly there and there are many chip fabricators who will build what we want. That means we can have different system architectures for different applications.

For general multi-purpose, multi-programming computing, which is the majority of user systems these days, we can do much better than x86, or raw RISC. We should be doing much better than C as a system language. We are stuck in this 50-year-old mindset. I suspect Linus himself is, somewhat.

However, he made the point about the semantic gap from hardware to software, and that the hardware people should design what is required in software. What is required in software these days, thinking at the system level, is security.

"As for what system-level architecture is ultimately best, I think it's really best to start at the OS level and figure out what kind of OS architecture & constructs you want to streamline. Then, find the impedance mismatches vs. current hardware and think about ways to refactor it so it can more naturally and efficiently implement the target OS."

Yes, but we need to stop this 'one-size-fits-all' mentality. Linus did mention IoT devices. Certainly these are single-purpose simple systems not needing what general systems need.

It should not be x86, RISC-V, ARM, Windows, or C everywhere. That is certain sectors trying to impose a monopoly. Computing has grown beyond the special purposes of the 1960s and 70s, when guys in hushed voices and white lab coats attended machines behind security walls, to be in common use where hardware is cheap.

Perhaps Linus did not realise it, but he has brought up an issue where we can start to think differently to solve the most pressing problems in modern computing. We need to be more imaginative in our approaches — and yes that imagination has existed since 1961.
 
We rely far too much on compilers generating safe code. The fact is that if compilers can generate bad code, then hackers can get at that level. Security must be built in at the lowest levels or we are always trying to catch up, adding after-the-fact utilities to make up for the weakness.
Firmware, microcode, and hardware all have bugs. The widely-publicized side-channel attack vulnerabilities we've seen in the past 7 years (starting with Spectre & Meltdown) have clearly shown that the lower level implementations you seem to be counting on are far from infallible. Furthermore, I'm sure most hardware and software folks would agree that bugs are easier to fix and fixes are easier to deploy at the software level.

For general computing, memory accesses must be tested to be within bounds.
Bounds-checking is a good idea, but the very concept of bounds is something that's supplied by the software itself. If I wanted to run some insecure legacy code on your super-secure platform, I could write an emulator that simply allocates one big arena and keeps all of the program's dynamic state within that array, managing it via its own userspace heap implementation. At that point, bounds-checks aren't going to do any more good than the page-fault violations already supported by modern operating systems and hardware.
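
To make that concrete, a toy version of that emulator trick might look like the following (my own sketch, with made-up names): the hardware only ever sees one big, perfectly in-bounds array, so any per-object bounds enforcement has to come from the software managing it.

```c
/* Toy sketch of the "one big arena" argument: every access the
 * hardware sees lands inside this single allocation, so hardware
 * bounds checks on the arena itself never fire, and per-object
 * bounds are purely the emulator's business. */
#include <stddef.h>
#include <stdint.h>

#define ARENA_SIZE (1u << 20)          /* a 1 MiB arena */

static uint8_t arena[ARENA_SIZE];      /* the only "object" the hardware sees */
static size_t  arena_top = 0;

/* Trivial bump allocator carving guest "objects" out of the arena. */
static void *guest_alloc(size_t size)
{
    if (size > ARENA_SIZE - arena_top)
        return NULL;                   /* arena exhausted */
    void *p = &arena[arena_top];
    arena_top += size;
    return p;
}
```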

My basic point is that it gives a false sense of security to relegate security to the hardware. For sure, there are things hardware needs to do to enable secure software, but you can't really build security into the hardware in a way that software can't work around. So, the hardware needs to take the approach of enabling security, rather than trying to enforce security on software. Plus, the hardware is already eye-wateringly complex, so it's best to put the minimum complexity into the hardware necessary to enable secure software than to try and build a complete security model into it.

There are many ways to design such systems; we are just fixated on one particular way from the most simplistic 'computer architecture' courses.
I agree with a certain amount of this. I think the conceit of computers implementing a Von Neumann architecture is beginning to outlive its usefulness. We all know it's a fiction and even though it does simplify life for the software folks, it's a very costly facade for the hardware to maintain.

We should be doing much better than C as a system language. We are stuck in this 50-year-old mindset. I suspect Linus himself is, somewhat.
Rust is making some real inroads in replacing C. Linus is obviously on board with it, to some degree, or it wouldn't be getting integrated into the Linux kernel.

Yes, but we need to stop this 'one-size-fits-all' mentality. Linus did mention IoT devices. Certainly these are single-purpose simple systems not needing what general systems need.
I think almost no one is under the impression that UNIX provides the right type of OS to run on machines with less than 1 MB of RAM, sip micro-Watts of power, and lack even the basic facilities needed to implement virtual memory. And yet, such a machine can run a network stack and even a little web server.

It should not be x86, RISC-V, ARM, Windows, or C everywhere. That is certain sectors trying to impose a monopoly.
Anyone with a different idea can use an FPGA to prove it out. They're not locked into the hardware as it currently exists.

Perhaps Linus did not realise it, but he has brought up an issue where we can start to think differently to solve the most pressing problems in modern computing. We need to be more imaginative in our approaches — and yes that imagination has existed since 1961.
I guarantee people have been talking about these things for several decades, at least. Even if you lack access to a good university library, you need only poke around on arXiv.org and Google Scholar to see that. It's a fruitful area to do more research.

Anyway, I still think it's getting off-topic for this particular forum thread. The moderators don't like when discussion threads stay active for too long (don't try to argue, it's just how they run things), so I'll probably stop replying here and this thread will probably get locked. Feel free to start your own thread in the following forums. However, I think you might find better engagement with your ideas on Reddit, Usenet, or a site that's more geared towards that sort of stuff.

Thanks for sharing your thoughts. Good luck with your further explorations!
 
Firmware, microcode, and hardware all have bugs. The widely-publicized side-channel attack vulnerabilities we've seen in the past 7 years (starting with Spectre & Meltdown) have clearly shown that the lower level implementations you seem to be counting on are far from infallible. Furthermore, I'm sure most hardware and software folks would agree that bugs are easier to fix and fixes are easier to deploy at the software level.
That really doesn't say anything beyond stating the obvious. My point is that insecure systems allow people to write insecure, compromising code. We need to design systems so that they cannot be compromised.

I can see you don't understand the point and just want to argue for argument's sake, and then at the end drop in threats about being off topic. Well, you responded to me in an off-topic way, and continue that in your latest response.

My point is simple: we need to think about system design, not just processor design. We should design systems in a way that closes that hardware-software gap. This is actually done by designing instruction sets to support software development, not by just designing processors and expecting software people to program them.

What is needed in this industry is more thinking about how things can be better and different.

So I will leave it to people with open minds to read my original, very short comment here, arising from what Linus Torvalds said, and the linked article that explains it.

If you don't want to think deeper, fine, but don't try to discourage others. It is not a case of "my further explorations"; it is about thinking about what the entire industry needs and addressing where it has gone very wrong.
 
That really doesn't say anything beyond stating the obvious. My point is that insecure systems allow people to write insecure, compromising code. We need to design systems so that they cannot be compromised.
Security is relative, not absolute. It exists in a context and software is what's responsible for defining that context.

I can see you don't understand the point and just want to argue for argument's sake
Nope. I think I understand what you're saying and I'm not convinced. I don't appreciate the accusation that I'm arguing in any sort of bad faith, especially having taken the time to read what you wrote and to form what I consider to be an intelligible and coherent reply. You're free to disagree with my statements, but attacking my motives probably reflects worse on you than on me.

If you don't want to think deeper, fine, but don't try to discourage others.
It's a fact that this site's moderators will close threads a couple of months after they were started, and this one was started in mid-July. I was simply warning you of that and suggesting other avenues for further discussion of the subject. I fail to see how you can cast that as discouraging, but that certainly wasn't my intention.