Install WebVirtMgr+KVM on Fedora 27

Fedora 27 Final hasn’t dropped yet (as of today, Nov 3, 2017), but that won’t stop those technologists among us who insist on showing some poor, defenseless code who’s boss.

After surviving an upgrade from Fedora 25 to 27 Beta, I knew it was time to keep moving forward.  As a long-time VMware and XenServer fan, I have always been a bit spoiled when it comes to managing virtual machines.  Each hypervisor provides a fairly intuitive management interface with very little setup hassle.

KVM has an abundance of management tools, listed here:  http://www.linux-kvm.org/page/Management_Tools

My requirements for a hypervisor manager were pretty short:

  1. WebUI
  2. “Easy” install
  3. Simple configuration

I would have likely found success MUCH FASTER had I not insisted on using the latest Beta release of Fedora.  For example, oVirt 4.2 is a beautiful solution that is the foundation of RHV.  However, it only has repos for CentOS/RHEL/SUSE.

I REALLY wanted to use the on-prem, open-source version of mist.io, which uses a best-of-breed modern tech stack (MongoDB, ElasticSearch, RabbitMQ, Docker), but I couldn’t get it past the “create a user” stage and chose to go with our headline technology instead.  I’m going to chalk this up to some minor changes in the way Docker Compose works on FC27 and will DEFINITELY try it again in a few months, once FC27 has been GA for a while.

Installing WebVirtMgr

Apologies to my former co-worker Thomas Cameron, but I disabled SELinux and the firewall.  It’s a home server, and I’m not going to enhance my career by mastering SELinux at this stage.

Disable firewall
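For reference, this is roughly what that amounts to on a stock Fedora install (a quick sketch, not gospel; skip it if you actually want to keep your firewall and SELinux):

    # Stop and disable firewalld (home lab only)
    sudo systemctl disable --now firewalld

    # Put SELinux in permissive mode for the current boot...
    sudo setenforce 0

    # ...and keep it disabled across reboots
    sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config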

https://github.com/retspen/webvirtmgr/wiki/Install-WebVirtMgr

Minor semantic change from yum to dnf
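For example, the package install step translates directly.  The package names below follow the wiki’s CentOS instructions, so double-check them against the current wiki and Fedora’s repos:

    # Same packages as the wiki, just with dnf instead of yum
    sudo dnf -y install git python-pip libvirt-python libxml2-python \
        python-websockify supervisor nginx
    sudo dnf -y install gcc python-devel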


https://github.com/retspen/webvirtmgr/wiki/Setup-Local-connection

If you receive an error, you will need to fix the permissions:
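The error is typically a “permission denied” on the libvirt socket.  Something along these lines should sort it out; the group and permission values are the commonly suggested ones, so adjust them to match your setup:

    # Edit /etc/libvirt/libvirtd.conf and make sure these are set:
    #   unix_sock_group = "libvirt"
    #   unix_sock_ro_perms = "0777"
    #   unix_sock_rw_perms = "0770"

    # Then restart libvirtd so the socket picks up the new permissions
    sudo systemctl restart libvirtd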


Home Lab Update 1

I have a machine booting ESXi 5.1 from an Intel 520 180GB SSD.  It runs FreeNAS-8.3.0-RELEASE-x64 (r12701M) in a VM, with the 4 local 1TB 7200 RPM SATA disks configured as RDMs.   http://blog.davidwarburton.net/2010/10/25/rdm-mapping-of-local-sata-storage-for-esxi/
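The linked write-up has the full procedure; the heart of it is one vmkfstools call per physical disk to create an RDM pointer file.  The device ID and datastore path below are placeholders, and the write-up covers whether you want -r (virtual) or -z (physical) compatibility mode:

    # Find the vml identifiers for the local SATA disks
    ls -l /vmfs/devices/disks/

    # Create an RDM pointer file for one of the 1TB disks (placeholder IDs/paths)
    vmkfstools -r /vmfs/devices/disks/vml.0100000000XXXXXXXXXX \
        /vmfs/volumes/datastore1/freenas/1tb-disk1-rdm.vmdk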

FreeNAS runs ZFS on the 4 x 1TB (RDM) drives.  I also added a 32GB VMDK on the SSD, which FreeNAS uses as a ZFS cache.
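From the FreeNAS shell that boils down to adding a cache vdev to the pool.  A sketch, assuming a pool named tank and the SSD-backed VMDK showing up as da5 (both placeholders):

    # Attach the SSD-backed disk as an L2ARC cache device
    zpool add tank cache da5

    # Verify the cache vdev shows up
    zpool status tank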

Mounting an NFS export requires name resolution of the client.  When a client requests an NFS share, the server checks its export list for the requested directory and for the client’s name in the access list for that share.  If the server fails to resolve the name of the initiator, it denies the mount request.  To get around this you either need a DNS server on the network, or you have to manually enter the client names and IP addresses in the server’s hosts file.
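In other words, if the export names a client by hostname, that hostname has to resolve on the server.  A generic Linux-style /etc/exports line for illustration (FreeNAS manages its exports through the web UI, and the path and name here are made up):

    # "esxi01" must resolve on the NFS server or the mount is refused
    /mnt/tank/vmstore   esxi01(rw,sync)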

That little bit of news was very helpful during this build, as I had not been able to connect from the ESXi host to the FreeNAS VM running on that same host despite much tinkering with permissions and network settings.  I opened up the console on the FreeNAS VM and edited /etc/hosts to give the ESXi server’s IP a hostname.  BAM, NFS mounted with no problem.
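The fix itself is a one-line hosts entry on the FreeNAS VM; the address and hostname here are examples, not my actual lab values:

    # /etc/hosts on the FreeNAS VM
    192.168.1.10    esxi01.lab.local esxi01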

Home Lab Hardware

vSphere 5 is almost GA and View 5 Beta is already in the hands of approved clients.  XenServer 6 beta is out.  It’s time for me to update the home lab.  Obligatory link to ESX Whitebox HCL!

My ASUS motherboards have onboard Realtek 8111E Gigabit adapters, which ESX 4.1 U1 does not support.  HOWEVER, much to my surprise, ESX5 DOES see my Realtek adapters!!!  That makes going back to 4.1 U1 kind of depressing.

I have the typical 3-box scenario: 2 x ESX hosts and 1 x NAS.

I boot from 2GB USB flash drives since, like most people, I have several lying around from every vendor and partner imaginable.  That lets me swap from ESX5 to 4.1U1, and back, very quickly.

I’ve repurposed as many existing hardware components as possible.  An HP xw4300 workstation serves as my NAS, with NFS exports for the VMs and ISOs.  I also had, as my primary workstation, a very swift AMD Athlon 7750 x2 with 8GB of DDR2-667 RAM, which holds its own against the new AMD Phenom II x6 system.

ESX01 Host (AMD 6-core, 16GB RAM, 3 x GigE NICs, 1 x 80GB SATA II)

ESX02 Host (repurposed AMD 2-core, 8GB RAM, 3 x GigE NICs, 1 x 500GB SATA II)

NAS Host (repurposed HP xw4300, 4GB RAM, 2 x GigE NICs, 2 x 500GB SATA II in software RAID1, 1 x 64GB SATA III SSD)

Switch (8-port, Gigabit, VLAN-capable)

1 x NETGEAR ProSafe GS108T-200NAS Gigabit Smart Switch 10/100/1000Mbps 8 x RJ45

How did I connect all this equipment together?  Did it actually work?  Is it even functional?  All these answers and more when I get time to post Home Lab Configuration and Testing.  Hint:  With much cursing, yes, and yes.