Linux Kernel Grows Past 15 Million Lines of Code

Page 3 - Seeking answers? Join the Tom's Hardware community: where nearly two million members share solutions and discuss the latest tech.
Status
Not open for further replies.

palladin9479

Distinguished
Moderator
Jul 26, 2008
3,242
0
20,860
45


I'll say it again: microkernels do not outperform monolithic kernels. They are more secure and more stable, but running the same workload on both will yield better performance on the monolithic kernel.
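The performance gap comes from the IPC boundary: in a monolithic kernel a driver call is a plain function call, while in a microkernel the same request is a message to a server process. A toy Python sketch of that difference (a queue round trip standing in for real IPC, which also pays context switches and copies, so this understates the cost; all names here are illustrative, not any real kernel API):

```python
import queue
import timeit

# Monolithic-style: the "driver" is just a function call in the same
# address space.
def driver_read(x):
    return x + 1

# Microkernel-style: the same request crosses an IPC boundary, modeled
# here by a request/reply queue round trip.
req_q, rep_q = queue.Queue(), queue.Queue()

def server_step():
    # The "driver server" handles one request.
    rep_q.put(req_q.get() + 1)

def ipc_read(x):
    req_q.put(x)
    server_step()          # in a real system this runs in another process
    return rep_q.get()

# Same result either way; the message-passing path simply does more
# work per call.
direct = timeit.timeit(lambda: driver_read(1), number=10_000)
via_ipc = timeit.timeit(lambda: ipc_read(1), number=10_000)
```

Both paths compute the same answer; the extra queue traffic is pure overhead, which is why like-for-like code favors the monolithic design.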

You might not know this, but I've been working with QNX since early beta. I'm incredibly familiar with its internals and its strengths and weaknesses. RTOSes are used not because of higher performance but because they are deterministic operating systems. A non-deterministic OS can only provide a guess at how many cycles a specific piece of code will take to run; because of how it time-shares threads and virtualizes the CPU stack, it cannot provide an exact execution time. An RTOS does not virtualize the CPU stack, and programmers can know ~EXACTLY~ how long a specific segment of code will take to execute before it runs.

For consumer desktop PCs this is largely useless, but for industrial and medical systems (along with the defense systems I happen to work on) it's incredibly important to know exactly how long code takes to execute and when results will be back. Their delicate timings are such that slight delays can result in large miscalculations. Cisco is not using QNX for performance-related reasons; they're using it because it provides better stability, security, and determinism.
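The determinism point can be made concrete with a toy periodic task runner (a sketch only; function names are hypothetical and Python on a desktop OS is exactly the non-deterministic case being described):

```python
import time

def run_periodic(task, period_s, iterations):
    """Run `task` once per period and count missed deadlines.
    On a general-purpose OS the miss count (and each activation's
    latency) varies from run to run; an RTOS bounds it by design."""
    misses = 0
    next_release = time.monotonic()
    for _ in range(iterations):
        task()
        next_release += period_s
        now = time.monotonic()
        if now > next_release:
            misses += 1            # overran the deadline
            next_release = now     # resynchronize to avoid cascading misses
        else:
            time.sleep(next_release - now)
    return misses

# 20 activations of a small workload at a 10 ms period.
misses = run_periodic(lambda: sum(range(1000)), 0.01, 20)
```

Run this repeatedly on a desktop and the miss count (and per-activation jitter) changes between runs; that variability is precisely what an RTOS is engineered to eliminate.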

Your original statements accused the world's programmers of being "lazy" because they didn't rewrite all their code overnight for some as-yet-uncreated OS. This is a childish way of looking at things, and when faced with that childishness you're backpedaling and fronting.

Now what you don't know is that the various OSes of the world are slowly moving to a microkernel architecture. They are converting each of their subsystems to behave as they would in a modular uKernel environment. To the uninformed user they would appear to be a single monolithic kernel, but internally the components are being converted and recoded. MS especially has been doing this since XP / 2003, breaking each large block of the OS into smaller blocks and altering dependencies. They're doing this all while maintaining backwards compatibility by providing various virtual APIs. A good example is MS's Windows On Windows architecture. I expect a full uKernel conversion to be complete in the next 10 or so years. The one that will take the longest is Linux, due to it being a largely undirected effort.
 

w3k3m

Distinguished
Aug 12, 2008
40
0
18,530
0
[citation][nom]palladin9479[/nom]...[/citation]

It is interesting how you interpret my comments; you must be an aspiring psychic :) I didn't mean at least half of the things you attributed to my comments.

Performance is a relative term. I'm talking about real-life performance, not some useless synthetic IPC benchmarks. A few percent difference, plus or minus, in specific cases is simply irrelevant. The fact is, as the Linux kernel gets more bloated and complex it also gets slower and slower by definition, and there's nothing one can do about it. Not to mention maintenance and higher susceptibility to bugs.
The hybrids you are talking about already exist as Linux implementations (e.g. RTLinux, L4Linux), but a hybrid is still monolithic. Because of its nature, switching to a pure microkernel requires radical changes. Good that we both agree that microkernels are the future and monoliths are the past. Tell that to Linus, who once famously said that microkernels are stupid :) Eventually, some day, he will realize that the earth is round :)

 

palladin9479

Distinguished
Moderator
Jul 26, 2008
3,242
0
20,860
45



Most of this is plain wrong.

More lines =/= more bloat. More lines =/= slower speed; the size of a codebase has nothing to do with its speed of execution. A program with 150 functions will execute at the same speed as a program with 500 functions, assuming the function being tested is the same in both. A specific function that has more machine code may or may not execute faster than another function with less machine code. Actual lines of code are irrelevant, as lines are not executed; machine code is, and that code is produced by a compiler.
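A quick way to see this: build two "programs", one with a handful of functions and one with 500, and compare the compiled form of the one function that actually runs. This sketch uses Python bytecode as a stand-in for machine code (helper names like `make_module` are invented for the illustration):

```python
def make_module(n_extra):
    """Create a namespace holding `n_extra` never-called functions plus
    one `work` function that is identical in every namespace."""
    ns = {}
    for i in range(n_extra):
        exec(f"def unused_{i}():\n    return {i}", ns)
    exec("def work(x):\n    return x * x + 1", ns)
    return ns

small = make_module(3)      # stand-in for the 150-function program
big = make_module(500)      # stand-in for the 500-function program

# The function that actually executes compiles to identical bytecode
# in both cases; the dead functions never run, so they cannot slow it
# down.
assert small["work"].__code__.co_code == big["work"].__code__.co_code
assert small["work"](7) == big["work"](7) == 50
```

The 497 extra function definitions cost memory for their code objects, but the executed path is byte-for-byte the same.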

Which brings us to your lack of knowledge of RISC vs CISC (non-RISC). One of the fundamental differences is the size of the machine language. A RISC architecture will require more machine operations to execute a specific function than a non-RISC architecture. RISC fundamentals say that each operation should take no more than one clock cycle to execute, and that there should be no complex memory-addressing schemes or compound instructions. This leads to a rather large number of operations to do anything, yet each is only one cycle long. Standard x86 has multiple addressing modes, and instructions can take longer than one cycle to execute, yet for any given function it requires fewer machine instructions than a RISC design. By your logic the non-RISC should be faster because there is less "bloat", yet actual performance varies, with RISC tending to perform better in general.
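A toy model of the same point: incrementing a memory location is one read-modify-write instruction in a CISC-style ISA, but a LOAD/ADD/STORE triple in a RISC-style one. The instruction and cycle counts below are illustrative, not any real ISA:

```python
def cisc_add(mem, addr, val):
    """CISC-style: one complex instruction that reads memory, adds,
    and writes back (taking multiple cycles)."""
    mem[addr] += val
    return 1                       # instructions executed

def risc_add(mem, addr, val):
    """RISC-style: three simple instructions, each one cycle."""
    reg = mem[addr]                # LOAD
    reg = reg + val                # ADD
    mem[addr] = reg                # STORE
    return 3                       # more instructions, each single-cycle

mem_a, mem_b = [10, 20], [10, 20]
n_cisc = cisc_add(mem_a, 0, 5)
n_risc = risc_add(mem_b, 0, 5)
```

Both variants produce the same memory state; the RISC path executes more instructions ("more code"), yet which one is faster depends on cycles per instruction, not instruction count.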

And your talk about hybrid kernels is missing the mark. I'm talking about MS, Linux, AIX and Solaris, the big OSes in the industry. The developers of these OSes are slowly transitioning them to a uKernel design, but to avoid breaking backwards compatibility they're doing it one module at a time over several revisions, while creating various environment-emulation technologies.

Most of your posts have degenerated into personal attacks on Mr. Torvalds. I couldn't care less about your personal desire to insult people; just don't expect to use disinformation and faulty logic on a board full of professionals without someone challenging you.
 

w3k3m

Distinguished
Aug 12, 2008
40
0
18,530
0
palladin9479:

Blah, blah, blah, your brain definitely needs an upgrade to microkernel :) If you are a professional then I am the pope. Relax, go get some fresh air, it helps against hallucinations.

 
G

Guest

Guest
Microkernels will start happening when tech has advanced to the point where there aren't new hardware versions every 6 months.
 

palladin9479

Distinguished
Moderator
Jul 26, 2008
3,242
0
20,860
45


And now you've gone and further proven my point by degenerating from attacking Mr. Torvalds to attacking me.

I rest me case.
 

w3k3m

Distinguished
Aug 12, 2008
40
0
18,530
0
[citation][nom]palladin9479[/nom]I rest me case.[/citation]

I rest your case too; it seems that the medications work :) This time you didn't write tons of completely irrelevant babble laced with arbitrary misinterpretations just to fit your fantasies of competence.
 
