Unix vs Linux

On the whole, I thought Unix was developed mainly for mainframes, servers, and workstations. Unix is also a bit more of a closed system, in contrast to the community development and support behind Linux distros. If you're looking for a daily OS, there is really no benefit to or need for Unix.

Here is a link that gives a rough pro and con of each: http://www.diffen.com/difference/Linux_vs_Unix
 


Neither Unix nor Linux refers to a particular operating system.

Linux refers to a robust operating system kernel which, when combined with an appropriate boot chain and user software, forms an operating system.

Most "Linux distributions" combine the Linux kernel with the GNU project software and then tack on additional free software on top of that.

Unix refers to a family of related operating systems that can trace portions of their codebase back to Research Unix Version 7, released in 1979, and that comply with the POSIX standards. Software development practices at the time meant that companies would often purchase shared-source licences that also allowed them to modify and sell the product themselves. This was the case with Unix, which started out at AT&T and made its way to a large number of different vendors. The Department of Justice antitrust lawsuit against AT&T also helped it break out. Unix-like refers to similar operating systems, such as those built from GNU project software, which retain compatibility but usually share no common codebase.

Perhaps the most well-known Unix operating system is Mac OS X. OS X is based on Apple's open-source Darwin OS, which derives part of its codebase from 4.xBSD, itself a spinoff of Research Unix Version 7. FreeBSD, OpenBSD, and NetBSD all derive from 4.xBSD as well.

Enterprise Unix operating systems such as HP's HP-UX, IBM's AIX, and Oracle's (formerly Sun Microsystems') Solaris are based on Unix System V, which was first released in the mid 1980s.

Whereas there are many free operating systems based on the BSD codebase, there is only one free operating system based on the System V codebase: OpenSolaris and its descendants.

http://upload.wikimedia.org/wikipedia/commons/7/77/Unix_history-simple.svg
 
Unix was initially developed at AT&T on DEC PDP-7 and then PDP-11 mini-computers, not mainframes.

The problem with Unix, from my experience way back, is that each vendor had their own mix, or flavor. One company's mix was mostly AT&T System V with a little BSD thrown in, another was mostly BSD with a little System V in the mix, and so on. Since the name was trademarked, I think by AT&T, each vendor had their own proprietary name: Silicon Graphics (remember them?) called theirs IRIX, HP called theirs HP-UX, IBM's was AIX, etc. Very fragmented, and each supported a different set of shells (C shell, Bourne, etc.). Hey, but it was "Unix"!

Oh, and not to forget Solaris, from Sun.
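
To give a feel for that shell fragmentation, here is the same file test written for the Bourne shell and for the C shell (a minimal sketch; a script written for one would simply fail on the other):

    # Bourne shell (sh)
    if [ -f /etc/passwd ]; then
        echo "found it"
    fi

    # C shell (csh) -- different keywords, different test syntax
    if ( -f /etc/passwd ) then
        echo "found it"
    endif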
 
Does it really matter whether it was developed for a mainframe, a mini-computer, or a personal computer? It does the same job whatever it was written for.

In truth, the distinction between a mainframe and a mini-computer is very artificial. Both are computers intended to support multiple users at the same time (hence Unix) whereas personal computers - insofar as they existed then - were strictly single-user machines.

I agree - the whole business of exactly which computer Unix was written for is hair-splitting.
 


No matter the advertisement Google still does it faster and better.
 


What is Fedora for? I accidentally restarted my college computer while I was submitting a project on pollution (DLS LAB). At first I saw it was running Windows XP, but when I accidentally restarted it, Fedora came up instead. Why? 😀
 


Fedora is just a specific Linux distro, closely tied to Red Hat Enterprise Linux (it serves as RHEL's upstream). Sounds like your computers are set up with a dual boot for some reason.
 

That's my college computer.
 


Linux was designed largely to run in a networked environment with servers and workstations. It is commonly used as a desktop OS with a graphical user interface, and it is more of a consumer product, although it is also used in corporate environments. Unix, by contrast, is a corporate product for heavyweight database work; early GIS systems, for example, ran on Unix.
 
Solaris guy here, also Linux enthusiast at home. Each UNIX variant is a bit different but they all share a few things in common.

#1: Software on Unix tends to be incredibly stable. Because of its mission-critical, enterprise-focused nature, the software stack tends to be updated only after lots of focused testing to ensure system stability. Just because someone somewhere wrote a new version of some software isn't a good enough reason to throw it onto the system. Because of this mindset, updates happen at a much slower and more deliberate pace: you do large, infrequent updates rather than dailies or weeklies, and only when a specific issue needs to be addressed.
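
On Solaris 10, for instance, that deliberate cadence looks roughly like this (a sketch; the patch ID below is just a placeholder):

    # See which patches are already on the box
    showrev -p
    # After testing on a staging system, apply one specific patch
    patchadd 123456-01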

#2: Steep learning curve. In UNIX the GUI only exists for you to have more terminal windows open. GDM is pretty common, but the vast majority of Unix software you interact with will be command-line driven; many people end up disabling GDM altogether. The command structure and "way" of doing things tend to be different from Linux, and there are dozens of ways to do any one task. This makes the systems archaic as hell, and working on them requires experience and can generate lots of headaches. There is not much help text, so Unix web forums become your best friend.
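As a rough sketch of how even everyday tasks diverge, here is restarting SSH on Solaris 10 (which uses the Service Management Facility, SMF) next to an init-script-era Linux box:

    # Solaris 10+: SMF manages services
    svcadm restart svc:/network/ssh:default
    svcs -x ssh        # explain why a service is (or isn't) running

    # Typical older Linux box: init scripts
    /etc/init.d/sshd restart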

Now I can speak more deeply about Solaris. The Solaris 10 (and 11) kernel is fully modular: you can insert and remove kernel modules at run time, and there is very little need to mess with the kernel in general. Its network stack is robust, and implementing LACP, IPMP, VLAN tagging, and such is pretty easy, if a bit... "different". It has a very unique form of virtualization that is designed with security and stability in mind, versus the "emulated hardware" we see with other solutions. This restricts the "VMs", aka child zones, to running Solaris. Most implementations are sparse root, with the child zone really running inside a read-only partition of the global zone.

The other big difference is the naming and location of things: say no to /usr/local and yes to /opt. Block devices are /dev/dsk/c1t0d0s0, not /dev/sda1. Network devices are e1000g0, e1000g1, bge0, ige0, and such instead of eth0, eth1, wlan0. Oh, and the system won't prompt you when you type cp /dev/null /usr or rm -r /. Instead it will let you blindly nuke your system into oblivion, where only a full backup can save you. You do keep backups, right?
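
For the curious, setting up one of those sparse-root child zones on Solaris 10 looks roughly like this (a sketch; the zone name "web01" and its zonepath are made up for illustration):

    # Define the zone (plain "create" gives a sparse-root zone on Solaris 10)
    zonecfg -z web01 'create; set zonepath=/zones/web01'
    # Install and boot it from the global zone
    zoneadm -z web01 install
    zoneadm -z web01 boot
    # Attach to its console to finish first-boot configuration
    zlogin -C web01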
 