Best Linux distro in terms of ease of use and software ecosystem

Feb 1, 2014
I know nothing about Linux, but I'm looking to learn it a bit by installing a Linux distro as a dual boot alongside Windows 7 or 8 (not sure which). The size of the installation is not an issue. Which Linux distro is preferable based on the following factors?

1. Free of cost.
2. A large variety and diversity of software choices (close to, if not matching, the large collection of applications available for Windows).
3. Easy to learn.
4. Any other factors that will suit a Linux-ignorant person.

Also, URLs explaining how to set up a dual boot in layman's language would be appreciated.

Expert suggestions with supporting reasons are welcome. Thanks in advance.
 
Solution


#1 is easy - head to distrowatch.com. Most of the distributions of Linux are free (sans...
My vote is an Ubuntu distro. The free edition has quite a bit to it.

I will edit this post later with reasons why.

But here is my take on it: go download, say, three of them, put them on jump drives, and test them all. (To my knowledge, all Linux distros can be booted straight from the installation medium, so it's really easy to test them out. Only you know what you want; all I can do is guess. :) )

I don't expect you to give me the best answer, haha. Just my .02
 
Ksham, my only issue with your solution is that running a virtual environment is going to considerably reduce system performance. I am not saying it will be sluggish, but if you are running any software that is CPU intensive, the time it takes to complete the action in an emulated environment is going to be increased a good bit.
 


#1 is easy - head to distrowatch.com. Most of the distributions of Linux are free (sans RHEL).
#2 is based on the testing and regression testing of different applications. All in all, if there is a MS app that you are particular to, there is probably a Linux-based alternative. Example: the Microsoft Office alternative could be LibreOffice or OpenOffice ... or the new kid Kingsoft Office ... all of those have a Windows version as well.
#3 is subjective. What is easy to learn for you is difficult for someone else.
#4 the best way is to test the different flavors out there using a "Live CD/USB". What this does is basically boot from your USB stick in a live session where you can actually use the OS from the USB without disrupting your currently installed Windows.

Again, my subjective opinion on the easiest 5 - or - most popular: #1 Mint; #2 Ubuntu (it has an unusual interface, but many like the newness); #3 CentOS; #4 Fedora; #5 Zorin. Regardless of flavor, you might want to use UNetbootin to transfer the distribution image to your USB stick.
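If you'd rather skip UNetbootin, you can also write the image with `dd`. A minimal sketch, assuming you've already downloaded an ISO; the filenames and the checksum here are placeholders, and the `dd` step wipes the target stick, so double-check the device name with `lsblk` first:

```shell
# Sketch: verify a downloaded ISO, then write it raw to a USB stick.
# All arguments are placeholders. The dd step DESTROYS everything on $2.
write_iso() {
  iso=$1 dev=$2 sum=$3
  # Compare against the checksum published on the distro's download page.
  echo "$sum  $iso" | sha256sum -c - || return 1
  # Write the raw image to the stick.
  sudo dd if="$iso" of="$dev" bs=4M conv=fsync
}

# Example (do NOT run blindly; /dev/sdX is a placeholder):
#   write_iso linuxmint.iso /dev/sdX <expected-sha256>
```

UNetbootin does the equivalent behind a GUI, and can additionally set aside a persistence file on the stick.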

As for your dual boot, I followed this thread with excellent results: Ubuntu 13.10 installer (alongside windows 8) and gpt partition ... making the assumption your existing Windows install is UEFI (GPT-partitioned).

Hope this gets you closer... :bounce:
 
Solution
Make one of those three Linux Mint; it's based on the same Linux distribution (Debian) as Ubuntu, but uses a user interface that is a lot closer to the Microsoft Windows paradigm than Ubuntu's.
 
The complications of dual booting and setting up drivers are more of a pain for someone completely new who likely doesn't know what he/she is doing. Starting with a VM is a lot more user friendly.

Ubuntu has decent driver support, but it's not perfect, and unless someone knows their way around the system, booting into a completely foreign operating system is not very productive. And if you need to do something that you only know how to do on Windows, you have to restart your computer.

And if the boot loader ever fails to install properly on dual boot, most people have no idea how to fix it.
 
+1 for the VM suggestion. It makes it much easier to try out various distributions and is much safer. Performance of Linux in a VM is almost as good as a native install, except for programs that make heavy use of 3D graphics (i.e. games). Bear in mind that modern virtualization software doesn't emulate the CPU but uses it directly.
 


Virtualization is not emulation. The intricacies of virtualization are more complex than I want to get into on a forum post but I assure you that all modern microprocessors handle virtualization very, very well. I have two Linux VMs (one RHEL 7, and one Ubuntu 13.10) running in the background 24/7 for work related tasks while I play BF4 in the foreground. Linux in particular is extremely guest friendly and the upstream kernel contains a large number of well written drivers for various virtualization platforms.
 
Yes, virtualization works well if your processor supports the Intel VT-x or AMD-V virtualization features. Not all do, though.

Honestly, dual boot is not hard to set up either, and it's pretty rare to come across a device that doesn't have a driver in the kernel. At most you might need to grab a firmware blob for a WiFi card, because of non-free licensing... and Ubuntu generally does that for you anyway with its 'Additional Drivers' tool.

IMO, if you're looking to 'learn' Linux, a real installation will serve you better. Virtualization is for when you just want to use Linux tools occasionally.
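For reference, you can check whether your CPU exposes those virtualization flags from any Linux shell, including a live session; this is just a grep over /proc/cpuinfo:

```shell
# vmx = Intel VT-x, svm = AMD-V. No output means the CPU (or the
# firmware, which can disable it) isn't exposing hardware virtualization.
grep -Eo 'vmx|svm' /proc/cpuinfo | sort -u
```

If nothing prints, also check your BIOS/UEFI settings: many boards ship with VT-x disabled by default.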
 
Fine, but you will lose some performance. It takes performance to do said virtualization.
Can you explain (bearing in mind how VT technology in the processor works) why there should be an appreciable performance drop?

That said, I am biased because I have always disliked those environments.
Good that you appreciate that your views may be formed from prejudice rather than experience.
 

You worry a bit too much about performance. It will run fine. Can you name a task that will get a noticeable performance hit?

I also disagree that learning Linux on a dual boot is better. Setting up drivers for hardware has little to do with the actual usage of the system. Unless there's a specific case, I really fail to see the advantage. I would only recommend setting Linux up as a dual boot if someone plans to use it as a primary OS a good percentage of the time. A VM is a lot more forgiving of screw-ups: just create a snapshot and you're good to go. You have less of that luxury with a dual boot. Installs can fail, and most people have trouble fixing that short of a fresh install; even more have no idea how to fix boot loader issues. I've even seen people mistakenly delete their Windows partition while trying to install Linux.
 
IMO, if you're looking to 'learn' Linux, a real installation will serve you better. Virtualization is for when you just want to use Linux tools occasionally.
I use a Linux VM running on an OS X host to do OS and Android development. Is this what counts as occasional use of Linux tools? (And then I run a VM on that virtual machine to test the OS - but that's not a common situation! But even a VM running inside a VM suffers no real performance drop.)

I actually think that a VM is better for someone learning Linux as it allows them to easily experiment with different setups and/or distributions. Perhaps more important, it allows them to play with networked services such as SSH, Apache, Samba, mail servers, LDAP, and the like. Being able to run two or more machines, one accessing network services on the other, is a great learning tool. I can't really see any way that a native install enables one to learn more about Linux than a VM as they are in both cases just running Linux on a particular set of hardware. (On the other hand, it certainly does provide a variety of ways - as we see so often in these forums - to screw up your Windows install, lose all your data files, and so on.)
 
Why add another level of complexity by testing out different DEs in VBox? Installing/configuring VirtualBox, installing the DE... blah, blah. Live USB it and find the best feel/functionality. Then spend the time on VBox once you've found one you like. I still believe that, given a good tutorial for either legacy or UEFI/GPT, a dual boot is the best of both worlds without any compromise.

(Incidentally, since every now and then I do need an MS solution, I run W7P in VBox; it's just another working app on Kubuntu. Wine won't do some of the stuff I need to do.)
 

First of all, desktop environments are different from Linux distributions. And creating a Live USB that keeps persistent data saved between sessions is NOT the same as creating a basic read-only Live USB/installer. The details vary slightly across distributions, but most of the setup is the same. So your method is actually rather hard to set up and inconvenient to use.

Also, your method requires a restart into the Live USB. Should there be issues, it is really inconvenient to look up information unless the user has a second computer. This is a common annoyance with dual booting as well.
 


That was quite the asinine response.

Who the heck said I had zero experience? I work in Virtual environments all day long at my job. I code...

For personal use I shy away from it. In gaming environments, yes, you lose a lot of performance: about 20%, which is noticeable.

When running AutoCAD and other GPU-intensive software, you lose graphical performance...

 
Why would those be good examples to use in a Linux environment over a Windows one? Gaming is superior on Windows, and AutoCAD support on Linux is shoddy. GPU drivers on Linux are meh; Nvidia does provide installer scripts, but they don't always work.
 
That was quite the asinine response.
Perhaps - but I notice that you didn't answer my question. 😉

Maybe you missed it :). Here it is again, just in case -

Can you explain (bearing in mind how VT technology in the processor works) why there should be an appreciable performance drop?

When running AutoCAD and other GPU-intensive software, you lose graphical performance...
Ah - graphics intensive programs (as I said in my first post), not CPU intensive ones (as you claimed).

I agree with you that trying to run AutoCAD on Linux is not a good idea - for more than one reason.
 


In his defence, there is memory translation overhead associated with TLB misses. If the virtual page is cached in the TLB, the memory operation should execute at native speed, but if it's not, overhead is incurred. Processes within a guest machine need to have addresses translated from guest-virtual to guest-physical, from guest-physical to host-virtual, and then finally from host-virtual to host-physical. Pre-Westmere, the guest-virtual to guest-physical part of this translation was intercepted by the VMM and cached in a shadow page table which could be used by the x86 hardware page table walker on the host. Thus, every time the guest OS updated its own internal page table, the VMM incurred substantial software overhead, but walking the page table was not substantially more complex than running a process on the host.
With Westmere and beyond, the hardware page table walker can walk both the guest and host page table. This does take longer than walking just the host page table, but is significantly faster than having the software maintain a shadow page table in all but a few edge cases.

The question, then, is not whether or not there's overhead incurred with virtualization outside of hardware-assisted tasks such as rendering, but whether or not that overhead is appreciable. In MMU/TLB-heavy applications such as compiling programs, the difference can be quite large. VMware has a whitepaper on the topic, so I'll let you judge it for yourself:

http://www.vmware.com/pdf/Perf_ESX_Intel-EPT-eval.pdf
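To put a rough number on that walk overhead, here is the standard back-of-the-envelope count for 4-level x86-64 paging: a native TLB miss costs up to 4 memory references (one per table level), while a nested (EPT/NPT) miss costs up to g*(h+1) + h, because each of the g guest-table references is itself a guest-physical address needing an h-level host walk, plus one final host walk for the data address:

```shell
g=4  # guest page-table levels (x86-64)
h=4  # host (EPT) page-table levels

native=$g                       # native worst-case walk: 4 references
nested=$(( g * (h + 1) + h ))   # nested worst-case walk: 24 references

echo "native=$native nested=$nested"
```

Six times as many worst-case memory references is exactly why the large EPT TLBs and paging-structure caches on Westmere-class hardware matter so much.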
 


You can certainly make any Live USB persistent with a simple click of a button in UNetbootin. Your suggestion of creating multiple VMs is beyond boggling (has anyone asked how much disk space Anniyan has?). The user will not be able to see whether his particular hardware will stand up to anything beyond the VM environment. Use a Live session and, chances are, it will find (or not find) his hardware, and the testing will be fairly close to the real thing.

It's absolutely the #1 reason distros have embraced that option.

He's looking at different distros, not distros that work in a VM. Dual booting has been simplified in hundreds of tutorials covering both legacy and UEFI, so I don't understand that logic either. Anniyan, do yourself a favor: especially if you want to know whether your hardware will work with (pick your flavor) Linux, use a Live session of whichever you decide on. It will save you heartache later, compared with relying on the VM's ability to run.
 
Read the OP's original question: he is asking which distro is most user friendly and simple to learn. It's hands down Ubuntu. Debian-based distros are by far the easiest when it comes to updates and having a large library of driver support. Most of all, Ubuntu has the largest community, making it easy to find all the guides he needs. As far as dual booting goes, I agree that installing to a VM is the way to go. Ignore the comments about hogging resources; I'm guessing you're not going to be gaming and using CAD software at the same time. As for running a live version, it is a great option for a security professional who needs to boot into a Linux environment constantly from different stations, but for your use a VM is the way to go. Download VirtualBox and a copy of Ubuntu, and check Google for a guide if you need one. After all that, you will have spent a whopping $0.
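For completeness, the VirtualBox setup above can also be done from the command line with VBoxManage (the GUI wizard does the same thing). This is only a sketch: the VM name, memory/disk sizes, and ISO path are all placeholders to adjust:

```shell
# Sketch: create a basic Ubuntu VM from the CLI and attach the install ISO.
# Name, sizes, and ISO path are placeholders.
create_ubuntu_vm() {
  name=$1 iso=$2
  VBoxManage createvm --name "$name" --ostype Ubuntu_64 --register
  VBoxManage modifyvm "$name" --memory 2048 --cpus 2
  VBoxManage createhd --filename "$name.vdi" --size 20480   # 20 GB virtual disk
  VBoxManage storagectl "$name" --name SATA --add sata
  VBoxManage storageattach "$name" --storagectl SATA --port 0 --device 0 \
    --type hdd --medium "$name.vdi"
  VBoxManage storageattach "$name" --storagectl SATA --port 1 --device 0 \
    --type dvddrive --medium "$iso"
}

# Example: create_ubuntu_vm "Ubuntu" ubuntu-13.10-desktop-amd64.iso
```

After that, boot the VM and the Ubuntu installer runs exactly as it would on real hardware, minus any risk to your Windows install.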