
I was recently involved in defining a backup strategy for multiple SQL Servers running in a vSphere environment.  The latest version (1.5 U1) of VCB was being used, along with the latest release (6.5.5) of NetBackup.  Part of this process involved creating a matrix of Microsoft-supported Windows/SQL configurations and the data consistency levels of those supported configurations.  The chart below summarizes the current state of data consistency levels when using VCB for Windows/SQL Server backups.

 

http://communities.vmware.com/servlet/JiveServlet/downloadImage/38-5445-8344/matrix.jpg 

 

Not Supported:

Microsoft does not support this combination of Windows and SQL Server.

 

Crash Consistent:

This is the state in which a system would be found after a system failure or power outage.

 

Application Consistent:

This is the state in which all databases are in sync and represent the true status of the application.

 

 

The customer in this case was not willing to assume the risk of crash-consistent data for SQL Servers.  With this particular VCB and NetBackup setup, the reality is that any Windows/SQL pair that does not provide application-level consistency will either need to be backed up as a native SQL flat-file backup or use the NetBackup SQL Agents.  This decision would ultimately lead to additional spending, in the form of either disk space or the purchase of NetBackup SQL Agents, and would add operational complexity by requiring multiple backup strategies.  There is also hope that NetBackup 7 will allow us to revisit this issue very soon and find a single, unified approach for SQL Servers.

 

Note: This information is valid as of January 28, 2010, for VCB 1.5 U1 and vSphere 4.0 U1.

 

As always, thanks for reading.

Brian

I was recently involved in a situation where a development group at an SMB was tasked with consolidating two SQL servers into one.  These SQL servers were physical servers running old versions of Windows, with equally old versions of SQL Server, on some pretty old hardware.  The situation became interesting when the development group put in the specifications request for the new virtual machine. The request was for a 64-bit Windows 2008 server with 8 GB of RAM, 4 vCPUs and over half a TB of FC SAN disk for storage.

 

This seemed like a bit of a tall order, so the first thing I did was compare the specifications in the request with the specifications of the current physical servers. Server 1 had two Pentium III 1.2 GHz processors with 512 MB of RAM and 90 GB of used disk space.  Server 2 had two Pentium III 1.0 GHz processors with 2 GB of RAM and 16 GB of used disk space.  Even after ignoring the massive storage difference, the existing hardware came nowhere close to the 4 vCPUs and 8 GB of RAM specified in the request.  Next I went to the system baselines, thinking that the systems might be overburdened.  The baselines actually revealed that the systems weren't doing much work - 3% average CPU utilization, low disk IOPS, and very low network utilization.  Using the perfmon SQLServer:Memory Manager -> Total Server Memory counter did reveal that the SQL servers were actually using the memory they were allocated. The numbers in the request still didn't add up though, and now, with data in hand, it was time to go talk to the requestors.

 

Ultimately it was discovered that this request was submitted this way to "allow for future growth." Many years ago this may have been standard practice with physical hardware, but in today's virtual environments it no longer makes any sense.  Based on the baseline data, the requested virtual machine could be built with 1 vCPU, 3 GB of memory and less than 100 GB of SATA disk space.  If it turns out that the server actually needs more resources in the future, these resources can be added very quickly, with minimal or even no downtime.  Gone are the days of provisioning everything up front to allow room for future growth, hoping the server makes it to the next refresh cycle, and then repeating the same process over again. To get the most out of a virtual infrastructure, there must be an awareness of how this technology fundamentally changes the way systems are provisioned.

 

Thanks for reading!

The vSphere client for Linux is coming, but what should we do in the meantime?  Here is a set of instructions for one approach, based on a workaround that has been mentioned in the forums a few times. This workaround does require a Windows machine with RDP enabled and the vSphere client installed.

 

Step 1: Make sure to get the latest version of rdesktop for the Linux machine, by using the following command:

sudo apt-get install rdesktop

Step 2: Go to http://www.cendio.com/seamlessrdp/ and download the seamlessrdp.zip file.

 

Step 3: Extract the zip file to C:\seamlessrdp on the Windows machine - the one with RDP enabled and the vSphere client installed.

 

Step 4: On the Linux desktop, create a launcher or just run rdesktop with the following command:

rdesktop -A -s "c:\seamlessrdp\seamlessrdpshell.exe C:\PROGRA~1\VMware\INFRAS~1\VIRTUA~2\Launcher\vpxClient.exe" 10.0.0.10

Note 1: Change 10.0.0.10 to the IP address of the Windows machine. The path to vpxClient.exe may also differ if the defaults were not used during the install.

 

Note 2: Not recommended - If security is not a concern, or if the inconvenience of logging in to the RDP session is just too much, then the following command may be used to bypass the login prompt:

rdesktop -u admin -p pass -A -s "c:\seamlessrdp\seamlessrdpshell.exe C:\PROGRA~1\VMware\INFRAS~1\VIRTUA~2\Launcher\vpxClient.exe -passthroughAuth -s LOCALHOST" 10.0.0.10

After the login screen clears, the vSphere client will be running on the Linux desktop just like any native Linux application would be.  No WINE, Windows remote desktops or other hassles (other than having the Windows machine) are required.
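
For convenience, the whole command can be dropped into a small launcher script on the Linux machine. Below is a minimal sketch - the address and client path are the same placeholders used above and should be adjusted for the actual environment:

#!/bin/sh
# vpxclient-rdp.sh - launch the vSphere client on a remote Windows box via SeamlessRDP.
# RDP_HOST is a placeholder; point it at the Windows machine with RDP and the vSphere client.
RDP_HOST=10.0.0.10
VPX='C:\PROGRA~1\VMware\INFRAS~1\VIRTUA~2\Launcher\vpxClient.exe'

rdesktop -A -s "c:\seamlessrdp\seamlessrdpshell.exe $VPX" "$RDP_HOST"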

 

Update: Rich Brambley over at VM/ETC has a blog entry with an even more elegant solution.  It still requires a Windows box though!

 

Enjoy, and thanks for reading.

I was recently asked to patch an ESX 4 host for a customer.  This customer did not make use of VMware's Update Manager, and they also wanted a simple set of instructions for use in future patching. Below is the simplified, bullet-item version of the ESX 4 Patch Management Guide that I presented to the customer.

 

01:

On a Windows box, download the patch bundle directly from VMware. This will be a .zip file.

 

02:

On a Windows box with the vSphere client installed, use the vSphere client's datastore browser to upload the .zip file to a datastore on an ESX 4 host.

 

03:

Obtain local console access, or SSH access (PuTTY), to the ESX 4 host that the bundle file was uploaded to.

 

04:

Verify that the ESX 4 host disk free space is acceptable (2X the size of the bundle), using the command:

 

vdf -h

 

05:

Move the bundle file off of the datastore and into /var/updates, using the command:

 

mv /vmfs/volumes/datastore/ESX400-200909001.zip /var/updates

 

Note: The directory /var/updates is used in this document, but any directory on a partition with adequate free space could be substituted.

The patch bundle referenced in this document (ESX400-200909001.zip) was for the 09/24/2009 update release.  Adjust file names as required, for newer bundles.

 

06:

Verify whether the patches in the bundle are already installed (or whether they are required at all), using the command:

 

esxupdate query

 

07:

If applicable, use the vSphere client to put the ESX 4 host in maintenance mode.  Alternatively, use the command:

 

vimsh -n -e /hostsvc/maintenance_mode_enter

 

The following commands may also be used to list and then shut down virtual machines.  This is for environments without VMotion or for single hosts.

 

vmware-cmd -l

vmware-cmd <full path to .vmx file> stop soft
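
On a host with more than a few virtual machines, the listing and shutdown can be combined into a simple loop. A rough sketch, run from the service console - it assumes every registered virtual machine should be shut down and that VMware Tools is installed in each guest (required for a soft stop):

# Soft-shutdown every virtual machine registered on this host.
vmware-cmd -l | while read vmx; do
    echo "Stopping: $vmx"
    vmware-cmd "$vmx" stop soft
done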

 

08:

To determine which bulletins in the bundle are applicable to this ESX 4 host, use the command:

 

esxupdate --bundle file:///var/updates/ESX400-200909001.zip scan

 

09:

To check VIB signatures, dependencies, and bulletin order without doing any patching (a dry run), use the command:

 

esxupdate --bundle file:///var/updates/ESX400-200909001.zip stage

 

10:

If the stage (dry run) found no problems, then the bundle can be installed using the command:

 

esxupdate --bundle file:///var/updates/ESX400-200909001.zip update

 

11:

When (or IF) prompted to reboot, use the command:

 

reboot

 

Note: Not all patches will require an ESX host reboot.

 

12:

After the system boots, verify patch bundles were installed with the command:

 

esxupdate query

 

13:

If applicable, take the ESX host out of maintenance mode with the command:

 

vimsh -n -e /hostsvc/maintenance_mode_exit

 

14:

If applicable, restart virtual machines using the vSphere client or the following command:

 

vmware-cmd <full path to .vmx file> start

 

15:

Delete the bundle zip file from the /var/updates folder, using the command:

 

rm /var/updates/*.zip

 

16:

Verify that host disk free space is still acceptable, using the command:

 

vdf -h
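
For future patch cycles, steps 04 through 10 (plus the cleanup in step 15) can be strung together into a short script. This is only a sketch - the bundle name and datastore path are examples, and maintenance mode (step 07), the reboot and the post-reboot verification are still handled manually as described above:

#!/bin/sh
# Sketch of the esxupdate workflow, run from the ESX 4 service console.
BUNDLE=ESX400-200909001.zip                 # adjust for the bundle being applied
SRC=/vmfs/volumes/datastore/$BUNDLE         # where the bundle was uploaded
DEST=/var/updates

vdf -h                                      # step 04: check free space first
mkdir -p "$DEST"
mv "$SRC" "$DEST/" || exit 1                # step 05: move the bundle off the datastore
esxupdate query                             # step 06: what is already installed?
esxupdate --bundle file://$DEST/$BUNDLE scan     # step 08: which bulletins apply?
esxupdate --bundle file://$DEST/$BUNDLE stage \
  && esxupdate --bundle file://$DEST/$BUNDLE update   # steps 09-10: dry run, then install
# Reboot if prompted, verify with "esxupdate query", then clean up:
# rm $DEST/$BUNDLE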

 

As always, thanks for reading!

For a recent vSphere upgrade, I needed to find all virtual machines that had the VMware Tools Sync driver installed.  All this required was a few Excel exports and the Microsoft devcon utility.  I simply built batch files around the exported server lists with the following syntax:

 

devcon -m:\\<SERVERNAME> find * | find "Sync Driver"

 

With the lists created, it is now just a matter of following the instructions in either kb 1009073 to disable the Sync driver or kb 1009886 to remove the Sync driver and replace it with the VSS driver as part of the VMware Tools upgrade.

 

Thanks for reading!

Having done my fair share of baselining Windows systems, and having even blogged about it before, I finally took some time to dig a bit deeper into some of the data I had collected. One of the things I knew was that certain maintenance/operational procedures, like nightly backups, were elevating the average disk IOPS values. What I recently found at one site revealed just how much these averages could be elevated.

 

One particular physical server that was monitored with perfmon showed average disk IOPS of 5 and peak disk IOPS of 1208.  What's interesting is that once the nightly backup window was excluded from these averages, the peak IOPS value dropped from 1208 all the way down to 391 - roughly a third of the original value!  This server made use of direct-attached storage and was being backed up over the network using an agent from a popular third-party software company.  This server did not have regularly scheduled antivirus scans or defrag jobs running, but those activities would certainly have increased the disk IOPS averages as well.

 

When sizing physical servers for virtualization, one of the things to keep in mind is that certain operational processes and procedures will likely change along with the conversion. In most of the cases I have been involved with, the agent-based backups will be no more, while virus scans, defrags and other tasks will very likely remain. To appropriately size the required IOPS for this measured workload, I would exclude the data collected during the backup window.  Being aware of peak IOPS that may not exist after a P2V conversion becomes very important when sizing virtualized systems. Depending on the criteria used for converting systems, knowing what the true disk IOPS will be could make the difference between a system that gets virtualized and one that does not.

 

It is also important to note that disk IOPS are only one part of the sizing process. The same ideas apply equally to memory usage, CPU usage and network utilization.  Manually measuring with perfmon is a quick and easy way to size a server, but combining this data with complementary knowledge of what the physical systems are actually doing (and when) can lead to very accurate sizing estimates for P2V conversions.

 

Thanks for reading!

This blog entry continues to get a lot of hits, so I thought I would keep it updated and reformat it a bit. VMware's Fault Tolerance is a great feature that has generated a lot of interest, and it is also a new feature of vSphere that will only continue to improve. With that being said, the list below is the current state of requirements and limitations for enabling FT virtual machines in vSphere.  The majority of this information came from the vSphere Pre-requisites Checklist, the VMware Fault Tolerance Datasheet and the Availability Guide. Other items were picked up in the forums or in the VMware knowledge base. kb article 1010601, "Understanding VMware Fault Tolerance," is a great resource to start with if you are new to this feature.  kb 1022844 contains the changes to Fault Tolerance in vSphere 4.1.

 

Last updated: October 02, 2012

 

INFRASTRUCTURE:

 

VMware FT is available in the following versions of vSphere: Enterprise, Enterprise Plus
Note: vSphere Advanced Edition is no longer available in vSphere 5.

 

A host must be certified by the OEM as FT-capable. Refer to the current Hardware Compatibility List (HCL) for a list of FT-supported servers.

 

Ensure that HV (Hardware Virtualization) is enabled in the BIOS.

 

Ensure that FT protected virtual machines are on shared storage (FC, iSCSI or NFS).

 

Ensure that the primary and secondary ESX hosts and virtual machines are in an HA-enabled cluster.

 

Ensure that there is no requirement to use DRS for VMware FT protected virtual machines; in this release VMware FT cannot be used with VMware DRS (although manual VMotion is allowed). - In vSphere 4.1, FT is integrated with DRS, which means that DRS can now load balance both the primary and secondary Fault Tolerant virtual machines.

 

Ensure that the primary and secondary ESX/ESXi hosts are running the same build of VMware ESX/ESXi.  Note: kb 1013637, published September 25, 2009, states that "When creating a cluster that will have fault tolerant virtual machines, the cluster should consist of all ESX hosts or all ESXi hosts and not a mix of ESX and ESXi hosts." - In vSphere 4.1, FT has a version associated with it, which means that the primary and secondary hosts do not need to run the same build number/patch level.

 

When you upgrade hosts that contain fault tolerant virtual machines, ensure that the Primary and Secondary VMs continue to run on hosts with the same ESX/ESXi version and patch level. - In vSphere 4.1, FT has a version associated with it, which means that the primary and secondary hosts do not need to run the same build number/patch level.

 

Ensure that there will be no more than four VMware FT-enabled virtual machine primaries or secondaries on any single ESX/ESXi host (from the configuration maximums document).

 

Ensure that at least gigabit NICs are used. (10 Gbit NICs can also be used, and jumbo frames can be enabled for better performance.) Each host must have a VMotion and a Fault Tolerance logging NIC configured. The VMotion and FT logging NICs must be on different subnets.

 

Ensure that host certificate checking is enabled (enabled by default) before you add the ESX/ESXi host to vCenter Server.

 

Ensure that IPv4 is used for the Fault Tolerance logging network.  (HA does support IPv6 for management networks.)

 

Ensure that there is no user requirement to use NPT/EPT (Nested Page Tables/Extended Page Tables) since VMware FT disables NPT/EPT on the ESX host.

 

VMware Fault Tolerance requires a dedicated Gigabit Ethernet network between the physical servers; 10 Gigabit Ethernet should be considered if VMware FT is enabled for many virtual machines on the same host.

 

There are no limits on how many virtual machines in a VMware DRS or VMware HA cluster can be enabled for VMware FT, but every machine with VMware FT enabled takes up twice as much capacity; this should be built into the configuration.
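For example, ten FT-protected virtual machines with 4 GB of RAM each would consume roughly 80 GB of cluster memory capacity (40 GB for the primaries plus another 40 GB for the secondaries), before any overhead is counted.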

 

Overhead is dependent on the workload and can be as low as 5% or as much as 20%.

 

If firewalls or other controls exist between ESX hosts, ports 8100 and 8200 (outgoing TCP, incoming and outgoing UDP) must be open.
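On classic ESX hosts, if these ports need to be opened manually, it can be done from the service console with esxcfg-firewall; a rough sketch follows (the rule names are arbitrary placeholders):

esxcfg-firewall -o 8100,tcp,out,FT8100
esxcfg-firewall -o 8200,tcp,out,FT8200
esxcfg-firewall -o 8100,udp,in,FT8100in
esxcfg-firewall -o 8100,udp,out,FT8100out
esxcfg-firewall -o 8200,udp,in,FT8200in
esxcfg-firewall -o 8200,udp,out,FT8200out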

 

Ensure that a resource pool containing fault tolerant virtual machines has excess memory above the memory size of the virtual machines. Fault tolerant virtual machines use their full memory reservation. Without this excess in the resource pool, there might not be any memory available to use as overhead memory.

 

To ensure redundancy and maximum Fault Tolerance protection, VMware recommends that you have a minimum of three hosts in the cluster. In a failover situation, this provides a host that can accommodate the new Secondary VM that is created.

 

Too Much Activity on VMFS Volume Can Lead to Virtual Machine Failovers - reduce the number of file system operations or ensure that the fault tolerant virtual machine is on a VMFS volume that does not have an abundance of other virtual machines that are regularly being powered on, powered off, or migrated using VMotion.

 

When Fault Tolerance is turned on, vCenter Server unsets the virtual machine's memory limit and sets the memory reservation to the memory size of the virtual machine. While Fault Tolerance remains turned on, you cannot change the memory reservation, size, limit, or shares.

 

Disabling the virtual machine restart priority setting for a fault tolerant virtual machine causes the Turn Off Fault Tolerance operation to fail. In addition, fault tolerant virtual machines with the virtual machine restart priority setting disabled cannot be deleted.

 

FT requires that the hosts for the Primary and Secondary VMs use the same CPU model, family, and stepping.

 

Hosts running the Primary and Secondary VMs should operate at approximately the same processor frequencies, otherwise the Secondary VM might be restarted more frequently. Platform power management features which do not adjust based on workload (for example, power capping and enforced low frequency modes to save power) can cause processor frequencies to vary greatly.

 

You cannot back up an FT-enabled virtual machine using VCB, vStorage API for Data Protection, VMware Data Recovery or similar backup products that require the use of a virtual machine snapshot, as performed by ESX/ESXi. To back up a fault tolerant virtual machine in this manner, you must first disable FT, then re-enable FT after performing the backup. Storage array-based snapshots do not affect FT.

 

Apply the same instruction set extension configuration (enabled or disabled) to all hosts.

 

Ensure that the processors are supported. (Download VMware SiteSurvey.) For VMware FT to be supported, the servers that host the virtual machines must each use a supported processor from the same category as documented below.

Intel Xeon based on 45nm Core 2 Microarchitecture Category:

31xx Series

33xx Series

52xx Series (DP)

54xx Series

74xx Series

Intel Xeon based on Core i7 Microarchitecture Category

Nehalem Series Group (any processor series here can be used):

34xx Series (Lynnfield)

35xx Series

55xx Series

65xx Series

75xx Series

Westmere Series Groups (each processor series must be used separately):

34xx Series (Clarkdale)

i3/i5 (Clarkdale)

36xx Series

56xx Series

AMD 3rd Generation Opteron Category

13xx and 14xx Series

23xx and 24xx Series (DP)

41xx Series

61xx Series

83xx and 84xx Series (MP)

View full details about processor and other requirements in kb 1008027

 



 

 

VIRTUAL MACHINES:

 

Virtual machines must be running on one of the supported guest operating systems. See VMware kb 1008027 for more information.

 

Mac OS X Server 10.6 is not supported.

 

The combination of the virtual machine's guest operating system and processor must be supported by Fault Tolerance (for example, 32-bit Solaris on AMD-based processors is not currently supported).

 

VMware FT requires virtual machines to have eager-zeroed thick disks.  Thin or sparsely allocated disks will be converted to eager-zeroed thick when VMware FT is enabled, requiring additional storage space. The virtual machine must be in a powered-off state for this conversion to take place.
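One way to handle the conversion ahead of time, with the virtual machine powered off, is to clone the disk into the eager-zeroed thick format from the ESX service console with vmkfstools; a sketch, with placeholder paths:

vmkfstools -i /vmfs/volumes/datastore1/vm1/vm1.vmdk \
  -d eagerzeroedthick /vmfs/volumes/datastore1/vm1/vm1-ezt.vmdk

Alternatively, the conversion can simply be left to vCenter when FT is enabled.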

 

Ensure that the datastore is not using physical RDMs (Raw Device Mappings). Virtual RDMs are supported.

 

VMware recommends that you use a maximum of 16 virtual disks per fault tolerant virtual machine.

 

The virtual machine cannot have more than 64GB of RAM.

 

Ensure that there is no requirement to use Storage VMotion for VMware FT VMs, since Storage VMotion is not supported for VMware FT VMs.

 

Ensure that NPIV (N-Port ID Virtualization) is not used, since NPIV is not supported with VMware FT.

 

Ensure that the virtual machines are NOT using more than 1 vCPU. (SMP is not supported.)

 

Ensure that there is no user requirement to hot add or remove devices since hot plugging devices cannot be done with VMware FT.

 

Ensure that USB Passthrough is not used.

 

Ensure that there is no user requirement to use USB (USB must be disabled) and sound devices (must not be configured) since these are not supported for Record/Replay (and VMware FT.)

 

Ensure that there is no user requirement to have virtual machine snapshots since these are not supported for VMware FT. Delete snapshots from existing virtual machines before protecting with VMware FT. Note: Client agents may be required for backups.

 

Ensure that virtual machine hardware is upgraded to v7.

 

Ensure that the virtual machines do not use a paravirtualized guest OS. Note: On September 22, 2009 it was announced that support for guest OS paravirtualization using VMware VMI would be retired from new products in 2010-2011.

 

Fault Tolerance is not supported with Paravirtual SCSI adapters.

 

The vmxnet3 adapter is not supported with Fault Tolerance.  See kb 1013757. - In vSphere 4.1, you can use vmxnet3 vNICs in FT-enabled virtual machines.

 

Some legacy network drivers are not supported. vmxnet2 is supported, but you might need to install VMware Tools to get the vmxnet2 driver instead of vlance in certain guest operating systems.

 

Ensure that MSCS-clustered virtual machines have MSCS clustering removed prior to protecting them with VMware FT.

 

Virtual machines cannot have any non-replayable devices (USB, sound, physical CD-ROM, physical floppy).

 

The virtual machine must not be a template or linked clone.

 

The virtual machine must not have VMware HA disabled.

 

VMDirectPath is not available for FT virtual machines.

 

VMCI stream socket connections are dropped when a virtual machine is put into Fault Tolerance (FT) mode. No new VMCI stream socket connections can be established while in FT mode.

 

The hot plug device feature is automatically disabled for fault tolerant virtual machines. To hot plug devices, you must momentarily turn off Fault Tolerance, perform the hot plug, and then turn on Fault Tolerance.

 

Extended Page Tables (EPT)/Rapid Virtualization Indexing (RVI) is automatically disabled for virtual machines with Fault Tolerance turned on.

 

Software virtualization with FT is unsupported.

 

FT virtual machines cannot be replicated with the vSphere Replication feature in SRM 5.

 

Dynamic Disk Mirroring use in the guest OS is not supported.

 

OTHER:

 

In the situation where virtual machines are configured with Fault Tolerance, AppSpeed might not monitor these virtual machines fully in the current GA version. In some cases AppSpeed generates empty monitoring data caused by the passive virtual machine in the Fault Tolerance constellation. - kb 1013896

 

Fault Tolerant virtual machines that have a change tracking resource (CTK) listed in the virtual machine configuration will rapidly switch between ESX hosts when being powered on.  CTK must be disabled, or the CTK variables must be removed from the virtual machine configuration (.vmx) file. - kb 1013400

 

An Absolute Must Read: The Design and Evaluation of a Practical System for Fault-Tolerant Virtual Machines

 

If you know of any others, feel free to share.  As always, thanks for reading.

I have a customer that deployed NetApp NFS as the storage for their VI3 infrastructure. After the implementation, there was some general confusion about thin provisioning and how it works in a VMware VI3 environment. In researching these issues, here is what was found:

 

  1. Cloning thin provisioned disks will create thick disks.

  2. Move/Copy (SVMotion, cold migration w/move storage option) operations will convert thin disks to thick, including NFS volume to NFS volume operations.

  3. Disks created in vCenter and via VMware Converter are created "thin" by default.

  4. Running defrag utilities inside a Windows virtual machine will cause the associated thin disk(s) to grow to varying degrees.

  5. When a thin provisioned disk grows, a SCSI reservation takes place.

  6. If performance is the primary concern for a particular virtual machine, thin provisioned disks should not be used.

 

Bottom Line: Without additional work and/or operational procedures, cloning, storage VMotion and even cold migrations will convert thin disks to thick disks. vSphere addresses these issues by supporting thin provisioning, but in the meantime - check out Kent's blog for a great workaround for converting thick disks to thin.

 

Now that the operational limitations and realities of thin-provisioned disks were understood, there was also a need to determine true disk allocation and usage.

 

To discover what the totals are for all allocated VMDK files, run the following command from the /vmfs/volumes directory:

find . -name '.snapshot' -prune -o -name "*-flat.vmdk" -exec ls -lh {} \;

This command will exclude the hidden NetApp snapshot directory and only return the "flat" vmdk files in the listing.  The output will contain this block of information:

 

20G   ./01234a56-bc78d901/Win2003Std32SP1/Win2003Std32SP1-flat.vmdk

20G   ./0ab1cd23-45efg67h/Win2003Std32SP2/Win2003Std32SP2-flat.vmdk

 

Adding the sizes up will show that 40 GB of space has been allocated. 

 

The next step is to discover what the total disk used value actually is.  To do this, run the following command from the /vmfs/volumes directory:

find . -name '.snapshot' -prune -o -name "*-flat.vmdk" -exec du -sh {} \;

This command will exclude the hidden NetApp snapshot directory and only return the "flat" vmdk files in the listing.  The output will appear as:

 

3.5G   ./01234a56-bc78d901/Win2003Std32SP1/Win2003Std32SP1-flat.vmdk

3.7G   ./0ab1cd23-45efg67h/Win2003Std32SP2/Win2003Std32SP2-flat.vmdk

 

Adding the sizes up will show that 7.2 GB is actually being used on disk. These numbers can be verified by viewing the free space value of the datastores in the VMware Infrastructure Client, NetApp FilerView or the NetApp System Manager application.

 

Dividing the combined values returned from the "du" command by the combined values returned from the "ls" command will give the total percentage of disk space in use.  In the example above, this value works out to 18% or a savings of 82%.  The customer was actually seeing a savings of 51% in their production environment, and this is just by using thin provisioning.  A-SIS, or deduplication, will be implemented soon, and it will be interesting to see what the disk usage numbers change to then.
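
The two totals (and the resulting percentage) can also be gathered in a single pass with a small script. Below is a rough sketch, run from the service console - it assumes GNU find, du and awk are available and, like the commands above, it skips the hidden NetApp .snapshot directories:

#!/bin/sh
# Summarize allocated vs. actual usage for all flat vmdks under /vmfs/volumes.
cd /vmfs/volumes || exit 1

ALLOC=$(find . -name '.snapshot' -prune -o -name "*-flat.vmdk" -exec ls -l {} \; \
        | awk '{sum += $5} END {print sum}')            # bytes allocated
USED=$(find . -name '.snapshot' -prune -o -name "*-flat.vmdk" -exec du -sk {} \; \
       | awk '{sum += $1 * 1024} END {print sum}')      # bytes actually used

awk -v a="$ALLOC" -v u="$USED" 'BEGIN {
  printf "Allocated: %.1f GB   Used: %.1f GB   In use: %.0f%%\n",
         a / 1073741824, u / 1073741824, (a > 0 ? u / a * 100 : 0) }'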

 

Thanks for reading!

A question that comes up from time to time in the forums is about how to use the vmrun application with the runProgramInGuest command to install a Windows Installer (MSI) package in a virtual machine running Windows. In other words, someone needs to run something similar to the following command:

vmrun -T ws -gu USERNAME -gp PASSWORD runProgramInGuest "C:\VMs\VM1\VM1.vmx" "C:\Windows\system32\msiexec.exe /i C:\app.msi"

The runProgramInGuest command will be issued without error, but a look at the virtual machine's console will reveal a Windows Installer options screen displaying all of the switches that are available to the msiexec.exe application. Any approach or combination of single-quotes, double-quotes, slashes, backslashes, spaces or special characters thrown at vmrun and runProgramInGuest results in the same options screen or (even worse) the "Error: A file was not found" message.

 

After some quick research into this issue, I discovered a discussion on VMTN, where VMware employee mattrich states that "vmrun quotes each argument it passes to the guest." This is good to know. It is actually this behavior that creates the problem for the Windows Installer.

 

To verify that the behavior mattrich describes is correct, use the free Process Monitor utility from Sysinternals. On the virtual machine, launch Process Monitor and press Ctrl+L to bring up the filter menu. Working from the left, choose "Operation" from the first drop-down menu. Choose "is" from the second drop-down menu. Choose "Process Create" from the third drop-down menu and "Include" from the fourth drop-down menu. Now click the "Add" button. The new filter will now be listed in the bottom of the filter window. Click OK to continue. 

 

Now use the vmrun application with the runProgramInGuest command to send the Windows Installer commands to the virtual machine again. 

 

vmrun -T ws -gu USERNAME -gp PASSWORD runProgramInGuest "C:\VMs\VM1\VM1.vmx" "C:\Windows\system32\msiexec.exe /i C:\app.msi"

Viewing the detail column in Process Monitor, it becomes very apparent that each argument sent to the Windows guest via runProgramInGuest is encapsulated in quotes, just like mattrich said. Using Start->Run in Windows, re-typing this same command (exactly as it appears in Process Monitor) will produce the same Windows Installer options screen. So there is definitely a problem; but how about a workaround?

 

There is actually an easy workaround that can be used to solve this problem. On the host, create a batch file by issuing the following command:

 

echo C:\Windows\system32\msiexec.exe /i C:\app.msi > C:\sourcebatch.bat

Now run the following command on the same host where the batch file is located:

 

vmrun -T ws -gu USERNAME -gp PASSWORD copyFileFromHostToGuest "C:\VMs\VM1\VM1.vmx" "C:\sourcebatch.bat" "C:\destbatch.bat"

This will copy the batch file C:\sourcebatch.bat from the host to the guest as C:\destbatch.bat.  All that is left is to run the batch file on the virtual machine with:

 

vmrun -T ws -gu USERNAME -gp PASSWORD runProgramInGuest "C:\VMs\VM1\VM1.vmx" cmd.exe "/c C:\destbatch.bat"

After the install completes, one final command should be run (in the interest of cleaning up):

 

vmrun -T ws -gu USERNAME -gp PASSWORD deleteFileInGuest "C:\VMs\VM1\VM1.vmx" "C:\destbatch.bat"

This workaround certainly isn't as elegant as just running a single command would be, but it will get the task done.  This approach is also interesting in that it solves a possible limitation of an existing product by using additional functionality contained in the same product. I refer to a "possible limitation" because I am optimistic that someone will figure out the secret combination of characters to make vmrun runProgramInGuest work with msiexec.exe!
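
For repeated installs, the copy/run/delete sequence can be wrapped in a script. The sketch below assumes a Linux host running VMware Workstation (hence the shell syntax); on a Windows host the same three commands would simply go into a batch file instead. The .vmx path, credentials and MSI path are placeholders:

#!/bin/sh
# Push a batch file into a Windows guest, run it to install an MSI, then clean up.
VMX="$HOME/vmware/VM1/VM1.vmx"     # placeholder path to the guest's .vmx file
GUEST_USER=USERNAME
GUEST_PASS=PASSWORD
BAT=/tmp/sourcebatch.bat

# Build the batch file on the host (CRLF line ending for the .bat).
printf '%s\r\n' 'C:\Windows\system32\msiexec.exe /i C:\app.msi' > "$BAT"

vmrun -T ws -gu "$GUEST_USER" -gp "$GUEST_PASS" copyFileFromHostToGuest "$VMX" "$BAT" 'C:\destbatch.bat'
vmrun -T ws -gu "$GUEST_USER" -gp "$GUEST_PASS" runProgramInGuest "$VMX" cmd.exe '/c C:\destbatch.bat'
vmrun -T ws -gu "$GUEST_USER" -gp "$GUEST_PASS" deleteFileInGuest "$VMX" 'C:\destbatch.bat'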

P2V conversions are always interesting. I don't think that I have ever seen any two go exactly the same way, and each one always has something unique about it.  The most recent Windows 2003 Server P2V I ran went relatively smoothly, with one exception. On every Windows startup, I would receive the following error: 

 

Event Type:     Error

Event Source:     Service Control Manager

Event Category:     None

Event ID:     7000

Description:

The Parallel port driver service failed to start due to the following error:

The service cannot be started, either because it is disabled or because it has no enabled devices associated with it.

 

 

After entering the "set devmgr_show_nonpresent_devices=1" command, I could see a device listed as "Direct Parallel" under "Network Adapters" in Device Manager. This device could be disabled, but it could not be uninstalled.  At least not initially...

 

After a few quick Google searches for "Direct Parallel", I ran across a post at the WinDrivers forums that appeared to have a fix for a similar issue. The suggestion there led me to try a registry search for "Direct Parallel" under the HKLM\SYSTEM\CurrentControlSet\Control\Class key. Once the correct key was found, it was a simple matter of locating the REG_DWORD named "Characteristics" and changing the hexadecimal value from 29 to 09.  Apparently this change clears the "not user removable" flag from the "Direct Parallel" device. Returning to Device Manager, I was now able to remove the "Direct Parallel" device from under "Network Adapters" without issue.

 

A subsequent reboot for another issue revealed that even though the "Direct Parallel" device was now gone, there was still a problem.  The Event ID 7000 message was still present. The final fix was to disable the Parport service, which required one additional registry edit. Under the HKLM\SYSTEM\CurrentControlSet\Services\Parport key, there is a REG_DWORD named "Start"; changing this value from 1 to 4 disabled the service. Now the virtual machine boots right up and starts Windows 2003 with no errors.

 

I have quite a few P2Vs in my immediate future, so there will probably be more things to share very soon!

I have read several blogs and articles about the possibility of file-level restores for files contained within vmdks (and snapshots of vmdks) housed on NFS volumes. Most of these discussed, at a high level, how to make it work, and some even included commands. What I couldn't find was a decent guide to setting the whole environment up and making it work. So I built my own, and documented the steps. 

 

   

Note: If you already have a working Ubuntu virtual machine that is networked and has the VMware Tools installed, or if you simply don't need the step-by-step instructions for creating one, skip straight to Step 04 below for the configuration settings.

 

STEP 01: CREATE THE VM

01. Using the VMware Infrastructure Client, start the New Virtual Machine Wizard.

 

02. Choose Custom Configuration.

 

03. Choose the required inventory location and datastore for the new virtual machine.

 

04. Choose Linux and for the version, choose Ubuntu Linux (32-bit).

 

05. Choose 1 virtual processor, 512 MB Memory, and 1 NIC on the appropriate network.

 

06. Choose the LSI Logic SCSI Adapter.

 

07. Create a new virtual disk, sized at 5 GB, in the appropriate location for your environment.

 

08. Click Next to accept the default values for the Advanced Options.

 

09. Click Finish.

 

 

STEP 02: INSTALL THE OS

01. Right-click the newly created virtual machine and choose Edit Settings.

 

02. Verify the virtual machine settings.

 

03. Click CD/DVD Drive 1 and choose the Ubuntu 8.04.2 installation iso or media.

    Note: Remember to select the Connect at power on option in the device status area.

 

04. Click the OK button, and wait for the virtual machine to finish with the reconfiguration.

 

05. Right-click on the virtual machine and choose Open Console.

 

06. Press the green triangle button (or use the VM menu) to power on the virtual machine.

 

07. If all goes well, the Ubuntu live cd will boot.  Choose a language, and then choose the Install Ubuntu option.

    Note: It is always a good idea to choose the Check CD for defects before beginning.

 

08. At the Welcome screen, click the Forward button to continue.

 

09. Choose the correct time zone and click the Forward button.

 

10. Choose the correct keyboard layout and click the Forward button.

 

11. Take the default of Guided - use entire disk for the detected VMware Virtual disk and click the Forward button.

 

12. Fill out the name, username, password and computer name and click the Forward button.

 

13. At the Ready to Install screen, click the Install button. 

 

14. Wait for the Installation complete screen, and then click the Restart now button. 

 

15. Now press Ctrl+Alt to escape the console window, and go to the VM menu and choose Edit Settings. Select the CD/DVD Drive 1, and then un-check Connected and Connect at power on in the Device Status area.  Click the OK button.

 

16. Press ENTER as instructed on the Ubuntu setup screen.

 

 

STEP 03: CONFIGURE THE OS - Part I

01. Login to Ubuntu, using the username and password selected during the install.

 

02. Choose the VM menu and select Install/Upgrade VMware Tools.  Click OK at the Information Screen.

    Note: If a window doesn't automatically open, double-click the VMware Tools icon on the desktop. 

 

03. In the File Browser window, right-click the VMwareTools-X.X.X-123456.tar.gz file and choose Extract To...

At the top left, under Places, scroll down and select the directory that has the same username that you are currently logged in with - it should be the third option down.  Leave all other options at the defaults, and press the Extract button.

 

04. On the Ubuntu desktop, on the top toolbar, use the Places menu and select Home Folder.  When the File Browser window opens, verify that there is a vmware-tools-distrib directory.

 

05. Close all windows that are currently open in the Ubuntu virtual machine.

 

06. On the Ubuntu desktop, on the top toolbar, use the Applications -> Accessories menu and choose Terminal.

Enter the following commands in the terminal window:

 

  cd vmware-tools-distrib

  sudo ./vmware-install.pl

 

When prompted, enter your password and press Enter.

When prompted for which directory to install the binary files, press Enter.

When prompted for which directory contains the init directories, press Enter.

When prompted for which directory contains the init scripts, press Enter.

When prompted for which directory to install the daemon files, press Enter.

When prompted for which directory to install the library files, press Enter.

When informed about path creation, including parent directories, press Enter.

When prompted for which directory to install the documentation, press Enter.

When informed about path creation, including parent directories, press Enter.

 

The install should now be complete, but configuration still needs to happen.

 

When prompted about invoking the vmware-config-tools command, press Enter.

When prompted about pre-built vmmemctl modules and a C compiler, press Enter.

When prompted about the location of the C header files, press Enter.

Choose the appropriate display size by entering the corresponding number (probably 2 for 800x600) and pressing Enter

At this point, the screen will probably go black and flicker about, but the terminal window will re-appear eventually.

 

VMware Tools setup is now complete.  Enter the following command:

 

  exit

 

07. On the Ubuntu desktop, on the top toolbar, use the System menu and select Administration - Network.

On the Connections tab, select the Wired Connection and then click the Properties button.

Un-check the Enable Roaming Mode option for eth0 properties.

Under Configuration Settings - Configuration, choose the required settings for static or DHCP addressing and then click the OK button.

Select the General tab and verify the correct host name, and enter a domain name (if required).

Select the DNS tab and enter DNS servers and/or search domains (as/if required).

Click the Close button.

 

08. On the Ubuntu desktop, on the top toolbar, use the System menu and select Preferences - Network Proxy and configure as/if required.  If proxy server access is not required, omit this step.

 

09. Test network connectivity (web, ping, etc), before continuing to the next step.

 

 

STEP 04: CONFIGURE THE OS - Part II

01. On the Ubuntu desktop, on the top toolbar, use the Applications -> Accessories menu and choose Terminal.

Enter the following commands:

 

  sudo apt-get update

  sudo apt-get install portmap nfs-common

 

When prompted if you want to continue, type y for yes and press Enter.

 

02. In the same terminal window, enter the following command:

 

  gksudo gedit /etc/hosts.deny

 

Add the following line to the end of the file:

 

  portmap : ALL

 

Press the Save button and close gedit

 

Now, enter the following command:

 

  gksudo gedit /etc/hosts.allow

 

Add the following line to the end of the file:

 

  portmap : 91.189.94.8

 

Note: Replace 91.189.94.8 with the IP address of the correct NFS server.  DNS names will not work.

Press the Save button and close gedit.

 

03. In the same terminal window, enter the following commands:

 

  sudo mkdir /mnt/datastore01

  sudo mount 91.189.94.8:/vol/vol1 /mnt/datastore01

 

Note: Replace the ip address and export path (91.189.94.8:/vol/vol1) in the above example as required.

 

04. On the Ubuntu desktop, on the top toolbar, use the Places menu and select Computer.  Now open Filesystem in the right pane.

Open the mnt directory.  datastore01 should be visible here.  Open datastore01 to browse the nfs volume.  To view the snapshot directory, use the View menu in File Browser to select Show Hidden Files, or press Ctrl+H.  Once hidden files are visible, a .snapshot directory should be available.  Browsing this directory is one way to find the path to the desired snapshot.
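The same browsing can also be done from the terminal window; for example:

  ls -al /mnt/datastore01/.snapshot/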

 

05. In the same terminal window, enter the following commands:

 

  sudo mkdir /mnt/vmdk

  sudo mount /mnt/datastore01/.snapshot/hourly.1/test/test-flat.vmdk /mnt/vmdk -o ro,loop=/dev/loop0,offset=32768 -t ntfs

 

Note: Change the above path to reflect the actual path of the desired vmdk in the desired snapshot.  Also note the offset value of 32768, which may need to be changed. See the FULL DISCLOSURE, WARNINGS AND OTHER section below for more information on the offset.

 

06. On the Ubuntu desktop, on the top toolbar, use the Places menu and select Computer.  Now open Filesystem in the right pane.

Open the mnt directory.  The vmdk directory should be visible here.  Open it to view/browse the contents of the virtual disk.  This is one method to locate files contained in the snapshot. 

 

07. To restore files to a Windows server, on the Ubuntu desktop, on the top toolbar, use the Places menu and select Connect to Server. Use the following settings:

 

Service type: Windows share

Server: dns name of the Windows server

Share: share name on Windows server - c$, USERS$, DATA$, etc

User name: Active Directory or local server account with permission to the share

Domain name: Active Directory domain name or blank if local account will be used

 

Click Connect and then enter the correct password for the account provided previously.  Also, choose the "Remember password until you logout" option.  Ignore any errors, and then look on the desktop for the Ubuntu equivalent of a mapped drive to the Windows share.

 

08. From here, it's simple copy-and-paste operations from the /mnt/vmdk directory to the desired location on the Windows share to accomplish the file-level restores.

 

 

FULL DISCLOSURE, WARNINGS AND OTHER:

This document details how to create a working Ubuntu virtual machine that can recover files from NFS snapshots.  If the OS is built to the level detailed in this document, the Ubuntu install will not be patched, secured or otherwise hardened in any way. For these reasons, this exact implementation is not considered ready for production environments. I can make no guarantees about the use of this setup.  Use it "as-is" and at your own risk! You've been warned...

 

The techniques described here will not work for non-NTFS vmdk files.  This essentially means Windows virtual machines only. Other options are possible, but they are not covered in this document.  Check VMware on NFS: Backup Tricks for additional information.

 

On the NFS server, the NFS datastore should be exported Read-Only to the Ubuntu virtual machine. 

 

The NFS datastore should also be mounted Read-Only on the Ubuntu virtual machine. 

 

Any VMDK files contained in snapshots should be mounted Read-Only on the Ubuntu virtual machine. 

 

When mounting VMDK files that have NTFS partitions on them, you must specify the offset as part of the mount command.  If you do not happen to know the partition offset of the virtual machine's disk you are working with, there is an easy way to find it.  On the virtual machine in question, run msinfo32.  Navigate to Components -> Storage -> Disks and then find the Partition Starting offset value.  Use this value in the mount command offset option. 

 

There is a helpful script for mounting and unmounting iso images available at:

http://www.debianadmin.com/mount-and-unmount-iso-images-without-burning-them.html

It can be easily modified to mount the vmdk files from the snapshots, but it will most likely need further changes (or expansion into multiple versions) to deal with offset differences.  Maybe that will be a future blog entry...
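
As a starting point, here is a minimal sketch of such a helper for the vmdk case - the mount point and the default offset mirror the examples above and will need adjusting per environment:

#!/bin/sh
# mountvmdk.sh - mount the NTFS partition inside a flat vmdk, read-only.
# Usage: sudo ./mountvmdk.sh <path-to-flat.vmdk> [offset]
# The default offset matches the 32768 example above; get the real value from
# msinfo32 (Partition Starting Offset) on the source virtual machine.
VMDK="$1"
OFFSET="${2:-32768}"
MOUNTPOINT=/mnt/vmdk

[ -f "$VMDK" ] || { echo "Usage: $0 <path-to-flat.vmdk> [offset]"; exit 1; }
mkdir -p "$MOUNTPOINT"
mount "$VMDK" "$MOUNTPOINT" -o ro,loop,offset="$OFFSET" -t ntfs

When finished with the restore, unmount with sudo umount /mnt/vmdk.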

 

The NFS datastore can be added to fstab on the Ubuntu virtual machine, so that it is always available at boot.

To achieve this, add the following line to /etc/fstab on the Ubuntu virtual machine:

91.189.94.8:/vol/vol1 /mnt/datastore01 nfs ro,hard,intr 0 0

 

 

REFERENCES AND FURTHER READING:

VMware on NFS: Backup Tricks

http://storagefoo.blogspot.com/2007/09/vmware-on-nfs-backup-tricks.html

 

Guest Operating System Installation Guide

http://www.vmware.com/pdf/GuestOS_guide.pdf

 

ISO Mounting Scripts

http://www.debianadmin.com/mount-and-unmount-iso-images-without-burning-them.html


Read the FREE Manual

Posted by vmroyale Mar 6, 2009

I am often asked to recommend "good books" for learning about VMware.  I  usually reply by telling the requesting party to go to the VMware website and look at the documentation provided there.  And almost every single person that I make this suggestion to appears confused and/or insulted by my response.

 

There certainly are some great books on VMware products out there, and I'm not saying that these resources don't add value.  But the simple truth is that the VMware product documentation is very thorough and generally explains things in great detail.  For example, the main documentation for VMware ESX 3.5 Update 2 or 3 and vCenter Server 2.5 Update 2, 3 or 4 consists of Introduction to VMware Infrastructure, VMware Infrastructure 3 Primer, Configuration Maximums for VMware Infrastructure 3, Quick Start Guide, ESX Server 3 and VirtualCenter Installation Guide, Upgrade Guide, Basic System Administration, Virtual Infrastructure Web Access Administrator's Guide, ESX Server 3 Configuration Guide, Resource Management Guide, Fibre Channel SAN Configuration Guide, iSCSI SAN Configuration Guide and Virtual Machine Backup Guide.  Think of this as a free 13-chapter book! 

 

And if that's not enough reading, there is also the VMware vCenter Update Manager Administration Guide, VMware vCenter Converter Administration Guide (PDF), Setup for Microsoft Cluster Service, Remote Command-Line Interface Installation and Reference Guide, VMware Infrastructure Management Assistant Administrator's and Developer's Guide, SAN System Design and Deployment Guide and the Guest Operating System Installation Guide.  Vol II is another 7 free chapters!

 

And where the product documentation is lacking, there are numerous white papers, best practices guides, getting started guides, information guides, technical guides, performance studies, technical notes, collaborative documents written with business partners, and much much more to fill in the gaps.

 

Googling filetype:pdf site:vmware.com currently returns over 2,500 documents! Drop whatever you are interested in along with that search, and chances are very good that you will find something worthwhile to read. 

 

If you still can't find what you are looking for, then head over to VMTN and ask there!

One of the questions that comes up over and over again in the forums is related to sizing ESX servers.  These questions are usually in the form of "How many VMs can I run on this server?" or "How many ESX hosts do I need to virtualize 10 physical servers running Windows?" and are usually supported with statements like "none of the servers are really doing much work" or "these are really busy servers that get hit hard by lots of users".  The biggest problem with these posts is that the data presented is subjective at best and completely invalid at worst.  In other words, one person's "busy" or "large" is another person's "idle" or "small", and none of these descriptors provide any real value for the task at hand.

 

For proper sizing, it really comes down to knowing what your servers and/or workloads are actually doing.  Anything else is guesswork, and will likely result in you (or your customers) not realizing the full potential that virtualization offers.  The instructions below will detail how to create a very generic baseline for a Windows server, using nothing but the native perfmon application that comes included with Windows.

 

To create a generic baseline, we will use the following three Windows perfmon counters:

 

Memory: Available MBytes

PhysicalDisk: Disk Transfers/sec

Processor: % Processor Time

 

Step 1 - Implement the baseline:

 

01. Log on to the server that you wish to baseline

02. Start - Run -> perfmon

03. Expand Performance Logs and Alerts and then choose Counter Logs

04. Right-click in the right pane under System Overview and then choose "New Log Settings From..."

05. Browse to the GenericBaseline.htm (attached to this entry) file and choose Open

06. When prompted, give the baseline a meaningful name

07. Leave the default values on the General tab - this will sample data every 15 seconds

08. Select the Log Files tab and click the Configure button to change the location off of the C: drive - You've been warned!

09. Select the Schedule tab and configure both Start log and Stop log to Manually (using the shortcut menu)

    Note: After testing is complete, you can set the schedule accordingly (30 days is a recommended interval, but be sure to include your busiest times - like month-end)

10. Click Apply and OK

 

Step 2 - Test the baseline:

 

Now that the baseline is implemented, it would be a really good idea to let it run for a few hours or so and then verify that the data is indeed the data you want to see.  You don't want to run this baseline for 30 days only to then find out that something went wrong.  You can Start/Stop the counter log at any time, by right-clicking it and choosing Start/Stop.  Once you have verified that you are getting accurate data, schedule it and forget about it. 

 

Step 3 - View the baseline data:

 

01. Log on to the server that you wish to view the baseline data for

02. Start - Run -> perfmon

03. Select System Monitor

04. Right-click in the right pane and then press Ctrl+L

05. For the data source, choose Log files and then click the Add button

06. Browse to the counter log you created earlier and then choose Open

07. Click the Data tab and remove the default objects listed there

08. Now add the 3 counters used in the counter log (Memory: Available MBytes, PhysicalDisk: Disk Transfers/sec, Processor: % Processor Time)

09. Click OK and the results are now visible in the window

10. The best bet is to avoid looking at the graph and just focus on the raw numbers shown below it - here you will see the Minimum, Maximum and Average values

 

Step 4 - Interpreting the data:

 

Available MBytes - This is the amount of available memory.  Subtract this value from the actual memory in the server, and you will get a pretty good idea of what the actual memory requirements are for this workload. 
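For example, if a server with 4 GB of physical RAM consistently shows around 2,560 Available MBytes, the workload is really only touching about 1.5 GB, which is a much better starting point for sizing the virtual machine's memory than the installed 4 GB.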

 

Disk Transfers/sec - These are the disk IOPS.

 

% Processor Time - The percentage of time the processor is being used - maximums of 100 are to be expected. 

 

Step 5 - Using the data:

 

With just these three counters, you can gain good insight into what your servers are really doing.  With this limited data, you could even do some of the server sizing yourself.  At least now, you can take this basic data to the forums with your questions.  Having the supporting data is important, because it removes a large degree of the subjectivity and leaves what is basically a simple math problem.

 

Good Luck!