Install WebVirtMgr+KVM on Fedora 27

Fedora 27 Final hasn’t dropped yet (as of today, Nov 3, 2017), but that won’t stop those technologists among us who insist on showing some poor, defenseless code who’s boss.

After surviving an upgrade from Fedora 25 to 27 Beta, I knew it was time to keep moving forward. As a long-time VMware and XenServer fan, I have always been a bit spoiled when it comes to managing virtual machines: each hypervisor provides a fairly intuitive management interface with very little hassle to set up.

KVM has an abundance of available management tools here:  http://www.linux-kvm.org/page/Management_Tools

My requirements for a hypervisor manager were pretty short:

  1. WebUI
  2. “Easy” install
  3. Simple configuration

I would likely have found success MUCH FASTER had I not insisted on using the latest Beta release of Fedora. For example, oVirt 4.2 is a beautiful solution that is the foundation of RHV. However, it only has repos for CentOS/RHEL/SUSE.

I REALLY wanted to use the on-prem, open-source version of mist.io, which uses a best-of-breed modern tech stack (MongoDB, Elasticsearch, RabbitMQ, Docker), but I couldn’t get it through the “create a user” stage and settled on our headline technology instead. I’m going to chalk this up to some minor changes in the way Docker Compose works on FC27 and will DEFINITELY try this again in a few months once FC27 has been GA for a while.

Installing WebVirtMgr

Apologies to my former co-worker Thomas Cameron, but I disabled SELinux and the firewall.  It’s a home server and I’m not going to enhance my career by mastering SELinux at this stage.

Disable the firewall (and SELinux)
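On Fedora 27 this boils down to a few commands (assuming the stock firewalld/SELinux setup):

    # stop the firewall now and keep it off after reboots
    systemctl stop firewalld
    systemctl disable firewalld

    # drop SELinux to permissive for this boot...
    setenforce 0
    # ...and disable it for good (takes effect on the next reboot)
    sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config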

https://github.com/retspen/webvirtmgr/wiki/Install-WebVirtMgr

The only change from the wiki is the minor semantic swap from yum to dnf.
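For reference, the wiki’s package step ends up looking roughly like this with dnf (package names are straight from the wiki; double-check against the current page):

    # same packages as the wiki's yum line, just installed with dnf
    dnf -y install git python-pip libvirt-python libxml2-python python-websockify supervisor nginx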


https://github.com/retspen/webvirtmgr/wiki/Setup-Local-connection

If you receive an error, you will need to fix the permissions:
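The usual culprit is permissions on the libvirt socket. One way to fix it is to open the socket up to a group in /etc/libvirt/libvirtd.conf (the group name here is an assumption; use whatever group your web stack runs under):

    # /etc/libvirt/libvirtd.conf -- let members of the libvirt group use the socket
    unix_sock_group = "libvirt"
    unix_sock_rw_perms = "0770"

    # restart libvirtd so the change takes effect
    systemctl restart libvirtd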


Test I/O performance with dd

Test I/O performance by timing the write of 100 MB to disk.

Write 200 blocks of 512k to a dummy file with dd, timing the result. This is useful as a quick test for comparing the performance of different filesystems.
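A minimal version of the test (the /tmp path is just an example; point it at the filesystem you want to measure):

    # write 200 x 512k blocks (100 MB) and time it
    time dd if=/dev/zero of=/tmp/ddtest bs=512k count=200
    # add conv=fdatasync to the dd line to force a flush to disk,
    # otherwise the page cache can flatter the numbers
    rm /tmp/ddtest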

Sample Output:
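(The time and MB/s below are placeholders, not measured results; the byte count is exact for 200 x 512k blocks.)

    200+0 records in
    200+0 records out
    104857600 bytes (105 MB) copied, 1.08 s, 97.1 MB/s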

Recovering from the fsck.ext3 “Unable to resolve LABEL” problem

I was a little more than hasty in physically removing an SSD from my CentOS NAS box. Yes, I at least powered the system off before yanking it out; I simply did it in the wrong order. How was I to know that the system would treat ANY drive in fstab as an important drive and force me into maintenance mode, even if the volume is empty? Either way, always remove the fstab entries first when planning drive maintenance like this.

CentOS (5.7) booted up until it found the missing SSD, then forced me into the ‘maintenance’ console. This puts the system into read-only mode, meaning I could not simply edit fstab and remove the entry for the now-missing drive.

I just needed to remount the root filesystem in read-write mode, edit fstab, and move on.
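From the maintenance shell that amounts to:

    # remount the root filesystem read-write
    mount -o remount,rw /

    # delete (or comment out) the missing drive's line, then reboot
    vi /etc/fstab
    reboot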

A Google search found this site, which would probably have worked for a similar scenario where the outage wasn’t planned:  http://centoshacker.com/admin/disks/surviving-fsckext3-unable-to-resolve-label-problem.html

Finding large files/folders on ESX(i) or Linux

While remediating patches on some ESX 4.0 systems, we ran out of space.  Upon investigation we found the root volume completely full.  Running the following command basically stalled out:
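Reconstructed from memory, it was a plain du across the root filesystem, roughly:

    du -h --max-depth=1 /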

Why?  Well, it didn’t really “stall”; I am impatient.  The second du hits the /vmfs volume, with over 60 datastores behind it, there’s a lot to think about.  Using an exclude, as shown below, may be obvious to those who admin *nix systems every day.
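Something like this keeps du out of the datastores entirely:

    # skip anything named vmfs so du never descends into the datastores
    du -h --max-depth=1 --exclude=vmfs /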

Why did we run out of space?  Running “Export System Logs” for the VMware support team all willy-nilly over the two years these systems have been in production leaves residual files behind: namely, over 2.5GB of vmsupport*.tgz files sitting on the root volume.
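A quick way to confirm what is eating the space before deleting anything (pruning /vmfs so find doesn’t wander the datastores):

    # list the leftover support bundles; review the output, then remove them
    find / -path /vmfs -prune -o -name 'vmsupport*.tgz' -print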

tail logfiles with color highlights

This looks for “dis” or “err” and highlights matches in red, so essentially your color is coming from grep.  I’ve seen much fancier solutions that call some slick Perl code, but this is quick-n-easy.
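A minimal take on the trick (the logfile path is just an example):

    # tail the log and let grep paint "dis"/"err" red; the trailing |$ matches
    # every line, so nothing gets filtered out, only the hits get colored
    tail -f /var/log/messages | grep --color=always -E 'dis|err|$'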