Finding large files/folders on ESX(i) or Linux

While remediating patches on our ESX 4.0 systems we ran out of space.  Upon investigation we found the root volume completely full.  Running the following command basically stalled out:

Why?  Well, it didn’t really “stall”; I am impatient.  The second du hits the /vmfs volume, with over 60 datastores behind it, there’s a lot to think about.  Using an exclude may be obvious to those who admin *nix systems every day.
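
For what it’s worth, keeping du on a single filesystem (or excluding /vmfs outright) brings the run time back down to seconds.  A sketch only; the --exclude flag is GNU du on the classic ESX service console, and the busybox du on ESXi may not have it:

    # Stay on the root filesystem so du never descends into the /vmfs mounts:
    du -shx /
    # Or get per-directory totals while skipping anything named vmfs (GNU du):
    du -h --max-depth=1 --exclude=vmfs /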

Why did we run out of space?  Thanks to running the “Export System Logs” option for the VMware support team all willy-nilly over the two years these systems have been in production, residual files get left behind, namely over 2.5GB of vmsupport*.tgz files in the following folder:
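
A quick way to hunt bundles like that down, wherever they have piled up, while still steering clear of /vmfs (the filename pattern and size threshold below are guesses on my part; adjust as needed):

    # Look for support bundles by name, pruning the /vmfs tree entirely:
    find / -path /vmfs -prune -o -name 'vmsupport*.tgz' -print
    # Or flag any file over ~100MB that deserves a second look:
    find / -path /vmfs -prune -o -type f -size +102400k -print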

PCoIP in View 5 uses less bandwidth!

I’m taking this with a grain of salt, but even if it’s not a full 75%, this is a great step forward!  Having demoed PCoIP and HDX for clients on the exact same infrastructure and OS, I can say they are tied from an end-user perspective.  Both offer a very rich desktop experience.  But over the WAN, PCoIP loses out to HDX once latency comes into the picture.

http://blogs.vmware.com/euc/2011/07/pcoip-enhancements-coming-to-vmware-view-75-bandwidth-improvement.html

Home Lab Hardware

vSphere 5 is almost GA and View 5 Beta is already in the hands of approved clients.  XenServer 6 beta is out.  It’s time for me to update the home lab.  Obligatory link to ESX Whitebox HCL!

My ASUS motherboards have onboard Realtek 8111E Gigabit adapters, which ESX 4.1 U1 does not support.  HOWEVER, much to my surprise, ESX 5 DOES see my Realtek adapters!  This makes going back to 4.1 U1 kind of depressing.

I have the typical three-box scenario: 2 x ESX hosts and 1 x NAS.

I boot from 2GB USB flash drives since, like most people, I have several lying around from every vendor and partner imaginable.  I’m able to swap from ESX 5 to 4.1 U1, and back, very quickly.

I’ve repurposed as many existing hardware components as possible.  An HP xw4300 workstation serves as my NAS, and I’m using NFS for the VMs and ISOs.  My former primary workstation, a very swift AMD Athlon 7750 X2 with 8GB of DDR2-667 RAM, also holds its own against the new AMD Phenom II X6 system.
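
For reference, pointing an ESX(i) 4.x host at an NFS export like this is a one-liner from the console or the vMA.  The hostname, export path, and datastore label below are made up:

    # Mount an NFS export as a datastore (placeholder host/share/label):
    esxcfg-nas -a -o nas01.lab.local -s /export/vmlab nfs_vmlab
    # Confirm what is mounted:
    esxcfg-nas -l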

ESX01 Host (AMD 6 core, 16GB RAM, 3 x GigE NICs, 1 x 80GB SATAII)

ESX02 Host (Repurposed AMD 2 core, 8GB RAM, 3 x GigE NICs, 1 x 500GB SATAII)

NAS Host (Repurposed HP xw4300, 4GB RAM, 2 x GigE NICs, 2 x 500GB SATAII Software RAID1, 1 x 64GB SATAIII SSD)

Switch (8-port, Gigabit, VLAN-capable)

1 x NETGEAR ProSafe GS108T-200NAS Gigabit Smart Switch 10/100/1000Mbps 8 x RJ45

How did I connect all this equipment together?  Did it actually work?  Is it even functional?  All these answers and more when I get time to post Home Lab Configuration and Testing.  Hint:  With much cursing, yes, and yes.

Keeping sharp in 2011

Currently working towards my VCAP4-DCD

And my AWESOME wife answered a long-forgotten “I want this…” email and presented me with the “Getting Started with Arduino Kit V2.0” for Christmas!  Expect some terribly useless Arduino-based projects to be posted here.  Although I am incredibly interested in an Arduino-powered automated home garden watering system!

I’ve been working in the vMA (vSphere Management Assistant) to collect syslog information, tunnel it with stunnel to a central syslog-ng server, and use SEC to report back to Nagios for notifications.
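
For the curious, here is a rough sketch of that plumbing.  Every hostname, port, and path is a placeholder, the matching stunnel server stanza is omitted, the Nagios helper script is made up, and each numbered step lives on a different box:

    # 1) On the collecting side, stunnel wraps the forwarded syslog stream
    #    and sends it to the central syslog-ng server.  (The local syslog
    #    daemon still has to be told to forward to 127.0.0.1:5140.)
    cat <<'EOF' > /etc/stunnel/syslog-fwd.conf
    client = yes
    [syslog]
    accept  = 127.0.0.1:5140
    connect = syslog01.example.com:5140
    EOF
    stunnel /etc/stunnel/syslog-fwd.conf

    # 2) On the central server, a minimal syslog-ng stanza splits the
    #    incoming messages out per host:
    cat <<'EOF' >> /etc/syslog-ng/syslog-ng.conf
    source s_esx { tcp(ip(127.0.0.1) port(5140)); };
    destination d_esx { file("/var/log/esx/$HOST/messages"); };
    log { source(s_esx); destination(d_esx); };
    EOF

    # 3) A single SEC rule calls a (hypothetical) Nagios passive-check
    #    helper whenever something ugly shows up in the stream:
    cat <<'EOF' > /etc/sec/esx.rules
    type=Single
    ptype=RegExp
    pattern=vmkernel.*(error|failed)
    desc=ESX vmkernel problem
    action=shellcmd /usr/local/bin/nagios_passive.sh "$0"
    EOF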

These projects and others should keep me sharp in 2011.

PuTTY to Cluster

Great for quickly firing up esxtop on all hosts in a cluster simultaneously.

I put together a package that launches and authenticates a PuTTY session to every host in a cluster.  It then starts PuTTYCS to allow simultaneous execution of commands across all of the open PuTTY sessions.

It assumes you have PowerShell and vCenter PowerCLI 4.0 or better installed.  The following files (PuTTY_to_Cluster.ps1 and PuTTY_to_Cluster.cmd) need to be in the same folder as PuTTY.exe and PuTTYCS.exe.

To execute, run PuTTY_to_Cluster.cmd

PuTTY_to_Cluster.ps1

PuTTY_to_Cluster.cmd
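
For anyone curious about the moving parts before downloading, the general shape of the approach is below.  This is a stripped-down sketch, not the actual PuTTY_to_Cluster.ps1; the vCenter name, cluster name, and credentials are placeholders, and passing -pw on the PuTTY command line is a lab-only convenience:

    # Sketch only -- not the real PuTTY_to_Cluster.ps1.
    Add-PSSnapin VMware.VimAutomation.Core

    $vc      = Read-Host "vCenter server"
    $cluster = Read-Host "Cluster name"
    $cred    = Get-Credential   # root (or other SSH) credentials for the hosts

    Connect-VIServer -Server $vc | Out-Null

    # One authenticated PuTTY window per host in the cluster...
    Get-Cluster -Name $cluster | Get-VMHost | ForEach-Object {
        Start-Process -FilePath ".\putty.exe" -ArgumentList @(
            "-ssh", "-l", $cred.UserName,
            "-pw", $cred.GetNetworkCredential().Password,
            $_.Name
        )
    }

    # ...then PuTTYCS on top to broadcast commands (esxtop, for example)
    # to every open session at once.
    Start-Process -FilePath ".\puttycs.exe"

    Disconnect-VIServer -Confirm:$false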

Reason for lack of news

If anyone reads this site, you may have noticed I have been short on information (useless or otherwise) since late ’09.  In the latter part of last year I was hired by a well-known VMware virtualization consulting firm.  It has been a roller-coaster ride of experiences, training, and travel.  The team of consultants is beyond phenomenal and has been a huge help in getting me up to speed on the various VMware products and 3rd-party virtualization tools that my previous client base had neither the need nor the funding for.  That being said, I can’t share too much detail on what I am doing out of concern that it may be seen as proprietary.  I can share that I have worked with the following products:

  1. VMware Lab Manager 4 – Gives you the ability to duplicate your production servers in a “fenced” network so they can keep the same names and IPs without interfering with the running production versions.  Great for app-dev.
  2. VMware View 4 – The cornerstone of the VDI (Virtual Desktop Infrastructure) world.
  3. Quest Virtual Access Suite – A competitor to VMware View in the VDI space with a very robust management/deployment interface.
  4. Liquidware Labs Stratusphere – Analyzes your physical or virtual machines to gauge their fitness for VDI.  Great for seeing whether certain application configurations are good candidates for virtualization, complete with pretty graphs.
  5. AppSense – I get giddy discussing this management suite.  GPOs, roaming profiles, and 5-10 minute Active Directory login times, BE-GONE!  AppSense is a great solution for virtual and physical environments whose scripted logons have grown to do more harm than good.  It is able to execute all scripted elements simultaneously (map drives, configure Outlook, install printers) instead of sequentially, drastically improving login times.  I could write a few pages on the 4 products that make up the AppSense suite, but will instead leave you a link to Brian Madden’s AppSense training: http://www.appsense.com/brianmaddentraining/