Linux Kernel Grows Past 15 Million Lines of Code

[citation][nom]w3k3m[/nom]The reason why Linus likes monolithic kernels (in his own words) is because "it's easier to implement". That is true, but well-executed microkernels are technically vastly superior and worth the effort. That's a proven fact and there are numerous examples in practice. I suspect that back in the day, as a student, he didn't have enough programming capacity and knowledge to handle such a task. Today, of course, he would defend his baby that made him famous at all costs. You can praise Linux as a movement, but for its technical quality it is definitely not much better then windows.[/citation]
Almost no one uses microkernels. Most OSes use hybrid or monolithic kernels.
 
"Anyone remembers famous Tanenbaum-Torvalds Debate?"

I wasn't familiar with that, but reading it 20 years later, Tanenbaum sure does come off as a tool.
 
[citation][nom]Vladislaus[/nom]Almost no one uses microkernels. Most OSes use hybrid or monolithic kernels.[/citation]

There are many pure microkernel OSes, but of course they are not mainstream, the same way a Ferrari is not mainstream. The Linux kernel, however, is not even a hybrid, just an ancient monster of a monolithic architecture. But then again, it's OK for the price :)
 
[citation][nom]Camikazi[/nom]Give it time and it will be like Windows (which isn't really slow; my Windows 7 clean install boots in almost the same amount of time as my Ubuntu clean install). But that will be the fate of Linux, or any OS, if it wishes to be easier to use and more widely adopted: installs must get easier, and for that as many systems as possible must work on first install, and that means bloated code. Make fun of Windows all you want, but MS already knew that to make things simple the software had to get bigger.[/citation]

Then they also make stupid concessions, like in Explorer: I can open any file by clicking anywhere on a column, a feature I despise since moving files is now so much harder than it needs to be. Or taking out GIF support... that was really stupid. Or dropping the ability to easily tile windows in favor of the crappier substitute Aero Snap (I don't like it, but I use it), or not being able to easily see the size of folders. Those are just the hardest things for me to live without in Windows 7.
 
Microkernels have their own set of problems; namely, IPC is a big performance nightmare.

QNX is a good example of a modern microkernel OS. Its performance is rather solid, and it's used in various industries.

Microkernels require lots of standards to be written and maintained, otherwise it wouldn't be possible for the multiple parts to interact with each other. You can't do that efficiently "by committee", and none of the big players want to invest that kind of money.
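
For anyone curious what that interaction looks like in practice: QNX IPC is a synchronous send/receive/reply round trip between processes. Below is a rough, untested sketch of a single client/server exchange using QNX Neutrino's documented MsgSend/MsgReceive/MsgReply calls; it only builds on QNX itself, and error handling is trimmed for brevity:

[code]
/* Rough sketch of QNX Neutrino synchronous message passing.
   One round trip: client sends, blocks, server replies. */
#include <sys/neutrino.h>
#include <stdio.h>
#include <unistd.h>

struct request { int op; int arg; };
struct reply   { int status; int result; };

int main(void) {
    int chid = ChannelCreate(0);              /* server endpoint */

    if (fork() == 0) {                        /* child plays the client */
        int coid = ConnectAttach(0, getppid(), chid, _NTO_SIDE_CHANNEL, 0);
        struct request req = { 1, 21 };
        struct reply   rep;
        /* send + block until the server replies */
        MsgSend(coid, &req, sizeof req, &rep, sizeof rep);
        printf("client got %d\n", rep.result);
        return 0;
    }

    /* parent plays the server: block on the channel, handle, reply */
    struct request req;
    struct reply   rep;
    int rcvid = MsgReceive(chid, &req, sizeof req, NULL);
    rep.status = 0;
    rep.result = req.arg * 2;
    MsgReply(rcvid, 0, &rep, sizeof rep);     /* unblocks the client */
    return 0;
}
[/code]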
 
IPC performance was a problem only with badly implemented microkernels. Nowadays there is not a single argument in favor of monoliths; it's just an excuse for programming laziness. Even Linus admits that Linux is going to degenerate into a big mess, but what he won't admit is that the problem is of a fundamentally architectural nature. Good microkernels are initially hard to implement, but once done they can scale healthily indefinitely.
 
Yes, but the kernel has native support for a lot more hardware than the Windows kernel does.
It has built-in graphics drivers for different ATI, Intel, Nvidia, and VIA chipsets, for god's sake, and that's not taking VESA into account. Not only graphics drivers, but just about any hardware with open-source support is in there. Don't forget KVM.

Also, don't forget that Linux can be trimmed down easily. While it can have up to 15 million lines of code in it, when you compile it you have the option to get rid of most of the stuff you don't need on your desktop/laptop/server/printer/router (a typical trim-down looks like the sketch below). You must understand that the "mainline" kernel has "support" for a lot more hardware than the Windows kernel does, and that the Linux kernel is never used "as is": the final package from Linus is always trimmed, patched, trimmed again, and tested twenty times for stability, in even the most volatile and the most conservative distributions.
I wouldn't consider it bloatware just yet. Linux gets bloaty in the desktop environment department, where you have around 15 different environments and about 25 window managers.
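
Something like this, say against a 3.3 source tree (illustrative only; the exact options you enable depend entirely on your hardware):

[code]
# Start from an empty config and switch on only what this machine needs.
cd linux-3.3
make allnoconfig      # near-empty baseline: almost everything off
make menuconfig       # hand-pick the drivers/subsystems for your hardware
make -j4              # the resulting kernel builds only what you selected
[/code]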

Still, faster than windows on most of my machines.
 
In article ast@cs.vu.nl (Andy Tanenbaum) writes:
>My point is that writing a new operating system that is closely tied to any
>particular piece of hardware, especially a weird one like the Intel line,
>is basically wrong. An OS itself should be easily portable to new hardware
>platforms.

Who would have thought this guy backed the wrong horse that long ago...
 


IPC is still an issue with micro-kernels ~now~. It will always be an issue due to the security layers involved. Driver-to-driver communication is required to go through protocol and security checks, and those alone present a performance hit vs. a monolithic design where the drivers can communicate with each other directly. This involves everything in the system, and you'll always lose performance doing this.

The real question isn't whether a performance hit exists, but whether the hit is acceptable for the gained stability and security. For modern systems I would say that it is an acceptable trade-off: taking a relatively slight performance hit for a much greater improvement in security and platform stability. (The sketch below gives a feel for where that hit comes from.)
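
A toy benchmark sketch, purely illustrative: it uses a pipe round trip between two processes as a stand-in for microkernel message passing. Real microkernel IPC paths are much faster than pipes, but the copy-plus-context-switch cost shows up the same way. Compile without optimization, or the direct-call loop may get folded away:

[code]
/* Toy benchmark: direct function call vs. a pipe round trip between
   two processes, standing in for message-passing IPC overhead. */
#include <stdio.h>
#include <unistd.h>
#include <time.h>
#include <sys/wait.h>

static int work(int x) { return x + 1; }   /* the "driver" operation */

static double elapsed_us(struct timespec a, struct timespec b) {
    return (b.tv_sec - a.tv_sec) * 1e6 + (b.tv_nsec - a.tv_nsec) / 1e3;
}

int main(void) {
    enum { N = 100000 };
    int req[2], rsp[2], x = 0;
    struct timespec t0, t1;

    /* Monolithic style: a plain in-process function call. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++)
        x = work(x);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("direct calls:     %8.0f us (x=%d)\n", elapsed_us(t0, t1), x);

    /* Microkernel style: every call is a message to another process. */
    if (pipe(req) != 0 || pipe(rsp) != 0)
        return 1;
    if (fork() == 0) {                     /* "driver" server process */
        int v;
        close(req[1]);                     /* so read() sees EOF later */
        while (read(req[0], &v, sizeof v) == sizeof v) {
            v = work(v);
            write(rsp[1], &v, sizeof v);
        }
        _exit(0);
    }
    x = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++) {
        write(req[1], &x, sizeof x);       /* send request */
        read(rsp[0], &x, sizeof x);        /* block for the reply */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("pipe round trips: %8.0f us (x=%d)\n", elapsed_us(t0, t1), x);

    close(req[1]);                         /* lets the child exit */
    wait(NULL);
    return 0;
}
[/code]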

And you're wrong about the "lazy" bullsh!t. It's not common because the major desktop OSes already have written standards and implementations. To go with a "micro kernel" architecture they would need to rewrite not only their OS but their application programming interface (API) as well. This would then necessitate rewriting and recompiling every application and driver in the world that needs to run on the "new" micro-kernel architecture.

To say the above is difficult and borderline impossible would be a gross understatement.

You either don't know the fullness of what you're saying, or you're just trying to stir up the pot and rage against what you see as "the establishment".
 
[citation][nom]DSpider[/nom]Maybe they should drop support for older hardware starting with 3.3 or at least fork it somehow as a legacy kernel. Because both new AND old hardware are impacted (older hardware more, obviously).[/citation]

I agree; you see this done with Windows, Mac OS X, and even web browsers dropping support for old features, architectures, and operating systems. If they removed some of the old "junk" (yes, I know that many will say they use it because it is the only thing that runs/runs fast on their old machine), it would obviously reduce the number of lines of code and might allow performance increases, or new features that were not possible with the old legacy hardware support.
 
[citation][nom]Zingam[/nom]Well, what the world needs is actually completely re-engineered from the ground up: 1. hardware/platform, 2. a new language for system programming, 3. a new operating system. See, everything in the IT industry is a mess. Even HTML... how is it so hard to make an element centered in the browser and compatible with all browsers? Maybe that's the result of software written by 18-to-20-year-old, half-educated "geniuses" like Bill Gates and co.[/citation]

That is where I think backwards compatibility rears its head. If you didn't have to worry about the past, then you could move on. Look at how many people are still forced to design sites for IE 6.

You are also right that sloppy coding practices and languages do us no favours.
 
[citation][nom]w3k3m[/nom]...it is definitely not much better then windows.[/citation]
Not much better than what? Then windows what? It makes no sense. Were you trying to say Linux is not much better thAn windows?
 
So many bad comments. The Linux kernel line count is so high because it includes drivers for a lot of devices. These don't have to be compiled into the kernel image, only the ones for devices you have in your system.

The Windows kernel drops drivers for things like webcams and relies on the vendor to provide drivers.

The kernel, once compiled, is typically around 30 MB.
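
Easy to sanity-check on any running box (a sketch; exact paths vary by distro, and the last line assumes you have a source tree unpacked):

[code]
ls -lh /boot/vmlinuz-$(uname -r)   # the compressed kernel image: a few MB
lsmod | wc -l                      # driver modules you actually have loaded
du -sh linux-3.3/drivers           # vs. the directory holding most of those 15M lines
[/code]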
 
[citation][nom]Thunderfox[/nom]So they have about one user for every million lines of code, then?[/citation]
You forgot to add, "in my little ignorant world," at the end of your sentence.
 
[citation][nom]ahnilated[/nom]So here is a totally uneducated jab at the Linux community.[/citation]
If he had said 1 user for each line of code it would have been more accurate, but less funny.
...
That 0.86% market share is pretty piss-poor; you should really do something about that.
 
[citation][nom]g4114rd0[/nom]Complete CleanUp the comments and blank lines.[/citation]
When you compile the code it omits the comments and blank lines. Those are just for developer use.

I'd like to see Linux detect the hardware on install and only install the necessary drivers. Then, when a new device is detected, ask for the disc or just download it. (Most of the pieces for this exist already; see the sketch below.)
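
A sketch of what's already there today, assuming a running system with the needed modules loaded: lspci -k shows the driver bound to each device, and the kernel build system can collapse a config down to exactly the loaded set:

[code]
lspci -k               # each PCI device plus the kernel driver in use for it
lsmod                  # the modules currently loaded for your hardware
# Inside a kernel source tree, turn exactly that set into a build config:
make localmodconfig    # disables every module that isn't currently loaded
[/code]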
 
[citation][nom]lamorpa[/nom]Not much better than what? Then windows what? It makes no sense. Were you trying to say Linux is not much better thAn windows?[/citation]

I wanted to say that the linux kernel is simply ugly bloat that stands in the same class as windows. That happens inevitably if you start something with the wrong concepts in mind.

 
[citation][nom]w3k3m[/nom]I wanted to say that the linux kernel is simply ugly bloat that stands in the same class as windows. That happens inevitably if you start something with the wrong concepts in mind.[/citation]
You're just trying to stir up the pot, aren't you? A fine example of how trolling is a art.

9/10
 
-"I'll take care of it.
(then, to Tom)
Tom, I want you to find Santino. Tell him to come to the office"


Thanks for your prompt response, I would love to compile the kernel.
 
[citation][nom]palladin9479[/nom]IPC is still an issue with micro-kernels ~now~. [...] You either don't know the fullness of what you're saying, or you're just trying to stir up the pot and rage against what you see as "the establishment".[/citation]

Performance hits in microkernel IPC may occur because of message-passing copying overhead. This became largely irrelevant with modern and efficient microkernels (e.g. QNX, L4). Think RISC vs. CISC.
I didn't say that changing the Linux kernel architecture now would be attainable, either. I was just trying to say that monolithic architectures are doomed, and that this is one of the reasons why Linux faces these manageability problems. Another analogy would be OOP vs. procedural code management. It is just pathetic that Linus talks about these problems and won't say a word about his mistakes. Instead he just says "we must make things simpler". Good luck reaching 100 million lines of code.

 


Again you've shown you don't actually know what you're talking about. RISC vs. CISC is not a comparison; there is no such thing as "CISC", merely "not RISC". RISC isn't a processor design, it's a set of principles that are recommended for engineers to follow. I could write a book about this subject.

There is ~ALWAYS~ a performance penalty with micro-kernels; it's the natural result of heightened security and restricted communication between processes. That is the only difference between a macro and a micro kernel: the level of modularization that occurs. The only way around that is to give driver modules direct access to kernel memory space and the memory space of other modules, which invalidates the entire reason for having a micro-kernel to begin with.

Now, onto your complete lack of processor micro-architecture knowledge. Currently there are no "not RISC" CPUs being made anywhere in the world. Intel / AMD CPUs are RISC CPUs internally. Their various processor resources (MMU / ALU / AGU / FPU / register file) are all designed using RISC principles. What they do is put a front-end decoder / scheduler on top that takes in the binary x86/x87 machine code, converts it into their proprietary RISC-style internal operations, and forwards those to the various processor elements for processing. Once finished, the results are returned to the program. This was done to maintain backwards compatibility with previous code.

Backwards compatibility is what drives the entire world. The computing world is not an authoritarian dictatorship; no single entity nor group of entities can rewrite worldwide standards overnight. It takes years for standards to be adopted, and even longer for older standards to fade into obscurity.

If you think otherwise, if you think that you're some all-knowing paragon of programming skill, then write your own OS and make it successful. Anything else means you're just pissing in the wind.
 
So just use what is available. Something else will come along, and we will start to use it while bad-mouthing the Linux stuff. Of course we will continue to diss Microsoft, even though they are getting better with each release. (Unfortunately the cost keeps going up as well.)
 
[citation][nom]palladin9479[/nom]...[/citation]

Have you ever heard of a word called "analogy"? :) RISC vs. CISC was just an analogy to the original principle, nothing to do with current CPU architectures. A small and efficient microkernel OS would, despite the IPC overhead, match or beat a one-piece-monster-kernel OS as a whole anytime. Did you know that almost all real-time operating systems are microkernels? Cisco is using QNX in their high-end carrier-grade routers, and believe me, they care about performance very much. Microkernels are used in mission-critical applications where performance can be a matter of life and death. Enough said.

All I wanted to say is that, from a purely technical point of view, I find it hypocritical when Linux advocates criticize Microsoft Windows. Otherwise, I also use Linux, and as an idea and a free-source movement I find it great. Simple as that.

 