Sunday, February 5, 2012

VM on the cheap

Playing around with VMware on some old hardware.

The box, an old P4 collecting dust:
  • Dell Dimension 5400
  • P4 2.26GHz
  • 1.5GB RAM
  • 20GB Disk
10 years old?  My, how time flies.  Funny how clock speeds haven't gotten much faster since then, although density and core counts have increased, so more work gets done per cycle.

It's a 32-bit processor, so I'm limited to ESXi 3.5, the last release to support 32-bit CPUs.

VMware


Register, download the .iso, burn to CD, etc.  (Note to self: throw away, er, freecycle those 128MB and 512MB flash drives and just buy a few 4GB ones.)

Boot the installer.  First up, it doesn't see my hard drive.  Probably because 20GB is too large and it's refusing to believe it's actually a real disk.  Or maybe it just doesn't like IDE.  Great post here - http://www.vm-help.com/esx/esx3i/ESXi_install_to_IDE_drive/ESXi_install_to_IDE_drive.php.  I dropped to the install console, edited /usr/lib/vmware/installer/Core/TargetFilter.py so it would stop filtering out the IDE disk, and continued.  It installed just fine.
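If you're curious what that edit is doing, here's a toy, self-contained sketch of the kind of target filter the installer applies.  It is not the real VMware code (the function name, disk list, and fields are all made up for illustration), but the edit described in that post boils down to the same idea: stop excluding IDE disks from the install targets.

    # Toy illustration only, not the real installer code.  The disk list and the
    # filter are made up to show the kind of check that was hiding the IDE drive.
    def visible_targets(targets, allow_ide=False):
        """Return the disks the installer is willing to offer as install targets."""
        if allow_ide:
            return list(targets)                            # post-edit: show everything
        return [t for t in targets if t["iface"] != "ide"]  # stock: IDE disks vanish

    disks = [{"name": "vmhba0:0:0", "iface": "ide", "size_gb": 20}]  # my lone 20GB IDE drive

    print(visible_targets(disks))                  # []  -> installer says it sees no disk
    print(visible_targets(disks, allow_ide=True))  # the drive shows up, install proceeds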

Next up, rebooted.  ESXi came up but complained about "Failed to load lvmdriver" and the networking was set to 0.0.0.0.  Going into the networking settings on the console gave the option of resetting the driver, but no way to actually configure anything.  Turns out my white-box network card isn't supported out of the box.  The card uses a Realtek 8169 chip.  To find this out, I pressed Alt-F1 on the ESXi host console, typed "unsupported" (nothing echoes, just type it in), and got a login prompt, which lets you do things like lspci to see the devices.
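If you'd rather script the hardware check than squint at lspci output, a small sketch along these lines works.  It's meant for a regular Linux environment such as Knoppix on the same box (the stripped-down ESXi console may not have Python), and it assumes 10ec:8169 is the RTL8169's PCI vendor:device ID, which is the usual one.

    # Look for the Realtek chip in lspci output (run on a normal Linux box).
    import subprocess

    out = subprocess.check_output(["lspci", "-nn"]).decode("utf-8", "replace")
    for line in out.splitlines():
        if "10ec:8169" in line.lower() or "rtl8169" in line.lower():
            print("Found the unsupported NIC:", line.strip())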

Open source to the rescue: the good guys at http://vm-help.com/esx/esx3i/customize_oem_tgz.php have drivers.  I rebooted the ESXi host with Knoppix, mounted the system partition, downloaded mymods-0.1.gz from vm-help.com, and saved it there as 'oem.tgz'.  Rebooted back into ESXi, reset the network interface, and magic, DHCP started working and I'm on the net.
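For reference, the file shuffle from Knoppix is nothing fancy.  A minimal sketch, assuming the ESXi system partition is mounted at /mnt/esxi and the download landed in the Knoppix home directory (both paths are placeholders, adjust to taste):

    # Back up the stock oem.tgz and drop in the one carrying the r8169 driver.
    import os
    import shutil

    ESXI_MOUNT = "/mnt/esxi"                    # mounted ESXi system partition (assumed path)
    NEW_OEM    = "/home/knoppix/mymods-0.1.gz"  # file downloaded from vm-help.com (assumed path)

    target = os.path.join(ESXI_MOUNT, "oem.tgz")
    if os.path.exists(target):
        shutil.copy2(target, target + ".orig")  # keep the original in case it all goes sideways
    shutil.copy2(NEW_OEM, target)               # the replacement must be named oem.tgz
    print("Replaced", target)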

I went to the ESXi host via http, logged in, and downloaded the VMware Infrastructure Client.  Technically ancient management software, but it looks and behaves surprisingly like the vSphere/vCenter stuff I'm using at work.

Storage

OK, pretty good, but what about augmenting that 20GB of disk space?  I tried iSCSI.  I've set up a Linux iSCSI target before, but I was feeling lazy and didn't want to wrestle with my old Linux box.  So I googled around, and after discovering that Windows Server 2008 does iSCSI out of the box but Windows 7 doesn't (WTF?!), I settled for the free www.starwindsoftware.com option.

Registered (what, no public e-mail, only corporate?  Hello, 1998 calling.) and downloaded the free version.  It installed a bunch of drivers and services and other PC-slowing crap that I will probably regret later, but I also have an iSCSI target.  Started the console, activated the free license, created a test 10GB "device" backed by an image file, and exported it as an iSCSI target.

Back over on ESXi, under the Configuration tab, I enabled the iSCSI storage adapter and defined a new datastore using the LUN exported by the iSCSI target.  Schweet, that was pretty simple.

VMs

Next up, create a couple of VMs.  I decided to create one on the iSCSI partition, one on the local disk.


First error: on powering on, I get "Admission Check failed for Memory Resource".  Turns out that on systems with only 1-2GB of RAM, the default ESXi hypervisor memory reservation is too high; with 1024MB held back for the hypervisor out of my 1.5GB, there isn't enough left to admit the VM.  Good page at http://ittechnikt3.wordpress.com/2010/02/16/vmware-esxi-admission-check-failed-for-memory-resource/ on how to change the VIM System Resource Pool advanced setting so that the reservation is 192MB instead of 1024MB.  Did that and the VM powered on OK.  Installed Ubuntu.
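Roughly, the arithmetic behind the failure (simplified: the guest size below is just an assumed example, and ESXi's real admission accounting charges extra per-VM overhead on top of this):

    # Simplified admission-check arithmetic; treat as a rough picture only.
    total_mb        = 1536   # 1.5GB in the box
    default_resv_mb = 1024   # stock VIM System Resource Pool reservation
    tuned_resv_mb   = 192    # value suggested in the post linked above
    vm_mb           = 768    # hypothetical guest memory size (assumed example)

    for resv in (default_resv_mb, tuned_resv_mb):
        left = total_mb - resv
        verdict = "powers on" if left >= vm_mb else "Admission Check failed"
        print("reservation %4dMB -> %4dMB left for guests: %s" % (resv, left, verdict))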

Network


Running ttcp from a Windows 7 box to the VM over gigE, I was able to get at most 10MB/sec in bursts, and sustained throughput dropped off to about 2MB/sec.  Rebooting the ESXi host into Knoppix and running ttcp there, I get about 48MB/sec burst and 43MB/sec sustained.  Unfortunately I can't run ttcp directly on the ESXi host, so it's not apparent whether the poor performance is due to the network driver in ESXi or to virtualization overhead.  Since the guest VM's CPU spikes when I run the test, I suspect it's probably the latter: the hardware lacks the instructions/support to pass network traffic through to the guest directly, so everything goes through the hypervisor stack.  Old P4.
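If you don't have a ttcp binary lying around, a crude stand-in is easy to knock together.  This is just a sketch, not the tool I actually used; the port and transfer size are arbitrary.  Run receive() on one end, then send() pointed at it from the other.

    # Crude ttcp-ish throughput test: push a fixed number of bytes over TCP and time it.
    import socket
    import time

    CHUNK = 64 * 1024          # 64KB per write
    TOTAL = 256 * 1024 * 1024  # move 256MB per run

    def receive(port=5001):
        srv = socket.socket()
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen(1)
        conn, _ = srv.accept()
        got, start = 0, time.time()
        while True:
            data = conn.recv(CHUNK)
            if not data:
                break
            got += len(data)
        secs = time.time() - start
        print("received %.0f MB at %.1f MB/s" % (got / 1e6, got / 1e6 / secs))

    def send(host, port=5001):
        sock = socket.create_connection((host, port))
        buf, sent, start = b"x" * CHUNK, 0, time.time()
        while sent < TOTAL:
            sock.sendall(buf)
            sent += len(buf)
        sock.close()
        secs = time.time() - start
        print("sent %.0f MB at %.1f MB/s" % (sent / 1e6, sent / 1e6 / secs))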


IO


I installed iozone on both VMs and monitored with iostat.  The local-disk VM was able to write about 30MB/s, whereas the iSCSI VM wrote about 48MB/s.  The local disk is Ultra DMA ATA/100 (so rated for 100MB/s max on the bus), but the drive is XXXXXX, which can only do about XXX MB/s (probably something like 33MB/sec).  The iSCSI system is gigE-connected with a SATA 6Gb WDC1002FAEX that will do about 126MB/s maximum.  Coincidentally, iSCSI over gigE can do about 125MB/s maximum.
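For reference, the ceilings I'm comparing against; real-world numbers land below these once TCP/iSCSI and filesystem overhead are taken into account.

    # Theoretical ceilings for the iSCSI path, in MB/s (drive figure quoted above).
    gige  = 1000 / 8.0   # 1 Gbit/s wire -> 125 MB/s
    drive = 126          # WDC1002FAEX sustained rate

    print("gigE wire limit : %.0f MB/s" % gige)
    print("iSCSI path limit: %.0f MB/s (min of wire and drive)" % min(gige, drive))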

Ran another test with zcav, just reading.  Local disk got at most 46MB/s; iSCSI, about 2MB/s.  Yack.  The iozone test was much better (random I/O).  Maybe sequential reads like this have nasty overhead with iSCSI?

Bonnie++ had this to say:

Local Disk:  Read about 37MB/sec block input
iSCSI Disk: Read about 20MB/sec block input

Note that during the iSCSI disk test, I saw network usage peak at about 25MB/sec.

Friday, February 3, 2012

Time Calculator

Handy Time Calculator if you need to add and subtract hours, minutes, and seconds.

Time Calculator