
VMware In SMB


A New Home! (well... kinda)

Posted by billhill Aug 24, 2010

Hey all you devoted readers!

 

I appreciate you spending your time with me on this blog. Due to my expanded role blogging about VMware products/technologies/whatever, I am finding that much of it is less about SMB specifically. So, I am forking myself (not so much like "eh... go fork yourself," but like a Linux distribution, or a fork in the road) to another blog.

 

So, continue checking out this blog for SMB related information and check out my new locale at: http://virtualbill.wordpress.com.

 

Hasta pasta, y'all!

 

~Bill

Hey everyone! T-minus 1 week before VMworld... and boy, am I getting excited! (I guess plain text does not really convey emotion very well).

 

As the time slowly approaches, I thought it would be a good time to grab a nice cool drink and bask in the glory that is the VMware Lab Flings! http://labs.vmware.com/flings

 

For those of you who are not hip to what a Fling is... no, it is not that quick relationship with that girl/boy freshman year of college. Instead, it is a pet project by VMware Labs employees. And, as should be no surprise, they have come up with some really spiffy things.

 

Now, without further ado, the top 3 Flings that look awesome!!!

1) ESXPlot Link

- With this gem, you can take the .csv output generated by ESXTOP in batch mode and plot it in a graphical environment against other performance-related data.

- Talk about useful! Now, you can compare performance data across various ESX machines!
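- If you have never captured ESXTOP batch output before, here is a minimal sketch (run from the ESX service console; the delay, iteration count, and output filename are placeholder values I picked):

esxtop -b -d 10 -n 360 > esxhost01_perf.csv

- That takes a snapshot every 10 seconds, 360 times (roughly an hour), and the resulting .csv can be fed straight into ESXPlot.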

2) VCMA Link

- Virtual Center Mobile Access

- Yes, you too can now access vCenter via your mobile phone!

- This is a very streamlined web interface to vCenter... control VM state, view/adjust configuration, search, etc... About the only thing you cannot do is remote control the VM.

   - Note: who wants to do that anyway?

3) VMware Auto-Deploy Link

- A little sad to see VMware almost immediately invalidate my awesome writeup on PXE deployments

- Conceptually, you simply deploy a pre-configured virtual appliance (OVF) that will PXE boot and install ESX/ESXi in your environment!!!

 

The nice thing about these Flings is that they do not care if you are using other Flings at the same time (a common problem in College).

 

While we're talking about ESXTOP, it is worth noting that Mr. Duncan Epping (author of Yellow Bricks) has developed a mighty fine ESXTOP tutorial. Please check it out at ESXTOP Page. If you see Duncan at VMworld, give him a high-five for his awesome blog work!

 

That's all from here for now. You can find me at this blog or on the Twitters at: @arctic_99.

 

Hasta pasta!

VMware rocks. No. Seriously. VMware rocks. Let me tell you why...

 

Forget the consolidation ratios, forget the resource usage, forget all the cool advanced functions you can bolt on (like SRM, View, etc...). Just focus on the portability for a moment.

 

We have a nagging issue going on with one of our servers. I believe the motherboard is going bad, but I am having an issue getting the support guys to see it... trying to prove "it's just a feeling I have" is very difficult. So, I am walking down their support path to get to that endpoint. We keep seeing errors on DIMM0... which, if you're not hip to the xSeries hardware, is not one of the actual DIMM slots on the server. Yes... I know someone out there is going to mention how it may be possible that IBM is counting from 0... but, that is not the case here...

 

Anyways... this weekend, the DIMM0 issue popped up again. This time, in the form of rebooting the server and shutting off all of the virtual machines. Wamp wamp wamp, right!?

 

We do not have any of the whiz bang FT or HA features set up in our environment. Heck, I have such a hard time keeping enough room on the ESX hosts to handle an ESX failure. Budget does not exist to keep too much extra room available. However, I don't need those features to truly appreciate VMware (and show the business another area that VMware addresses).

 

The VMs were in a 'Power Off' state. So, I simply powered them up on other machines in the cluster and moved on with life. The single-package-like design of a virtual machine is so convenient that when the host died, the servers running on it were not lost. I just brought them back to life somewhere else! Magical! Right?

 

I think I will use this weekend's event as an example for many years to come. Most people look to the feature set for the value virtualization brings to the company. However, one of the most basic principles of virtualization may be the most useful!

Hey All,

 

I have been spending some quality time looking through the content that the VMworld 2010 team has lined up! Man... this is going to be awesome! I am just a little concerned about the "first come, first served" policy... but, I guess we'll see how that works out!

 

I have selected a mere 18 sessions to attend (God willing)... the following are the ones I am looking forward to most:

- TA7743 - ESX iSCSI News, Configuration, and Best Practices

   - This sounds like it is going to be very useful. iSCSI storage (well... network storage, to be more general) is an awesome way to create and expand your virtual infrastructure without needing the costly Fibre Channel infrastructure. So, keeping your network and iSCSI environment running at (or close to) best practices is going to be massively useful. Plus, this is being presented by Jon Hall (Technical Certification Developer), so it should be very useful to us VCP types out there!

- EA7850 - Design, Deploy, and Optimize Microsoft SQL 2008

- EA7849 - Design, Deploy, and Optimize Microsoft Exchange Server 2010 on vSphere

- EA6705 - Best Practices for Virtualizing Active Directory

   - One of the common statements in virtualization is that the infrastructure is independent from the application. So, install the application like you think you should. However, if VMware would like to point out possible issues and/or better ways to do something, that is awesome. In our environment, like many, AD, Exchange, and SQL are very important... so, having a handle on this is going to matter a lot.

- TA812 - Tuning Linux For Virtual Machines

   - I had to miss a similar session at last year's VMworld. So, this is partially a makeup for me. However, seeing as about 1/2 of our environment is comprised of Debian, Ubuntu, and RHEL (3/4/5) servers, being hip to optimal performance is very useful!

- DV7500 - VDI Assessment and Migration

 

At VMworld 2009, I was a major lab rat. If I was not in a session, I was taking the labs. I covered almost all of the labs, with the exception of the SRM labs (as they were not really applicable). The following are the labs I am looking forward to this year:

- Lab03 - ThinApp 4.6

- Lab08 - AppSpeed

- Lab11 - SRM Installation and Config

- Lab12 - SRM Extended Config And Troubleshooting

- Lab24 - vSphere Performance and Tuning

- Lab25 - vSphere Troubleshooting

- Lab26 - vSphere PowerCLI

- Lab30 - Sandbox

 

So... 18 sessions and 8 labs = 26 things to do during VMworld. I look to be a pretty busy guy during the week. I am planning on writing up my thoughts on sessions, pictures, etc... Why not, right?!

 

Out of everything going on, I am most excited about keeping up to speed with vSphere performance, View/ThinApp (we have a fairly direct need for ThinApp), and SRM (fairly soon in the future, so it pays to know about gotchas ahead of time).

 

Look for me on the Twitters at: @arctic_99 or at the same bat URL, same bat time. Hey, if you're going to be there too, shoot me a message and we may be able to high-5 while I am between sessions!

The PDX VMUG performance lab was a massive success! Thanks to everyone who participated.

 

The turnout was very impressive... especially with the admittedly short notice. Ravi and Balaji were absolute Rock Stars (like the celebrities, not the drink) and knocked the socks off of everyone.

 

Our environment was set up with (because geeks care):

- 4 IBM xSeries 3550 -- Dual Nehalem, 52GB RAM, ~200G local storage, 6 10/100/1000 NICs

- 1 Dell/EqualLogic PS6000 iSCSI SAN array (about 6TB usable space).

- 1 Dell PowerConnect 5448 iSCSI optimized 10/100/1000 Switch

- ESX 4.0 U2 (not ESXi)

- vCenter 4.0 U2

- Personal laptops that the members used to access the vSphere environment

 

Ravi ended up building around 80 virtual machines for the lab, each with the necessary components and utilities installed.

 

This kind of event is exactly what the members are looking for from the user group. Quarterly meetings are great... but, let's be honest and call it like it is... there is not always much in the way of information exchange between people. This got the attendees thinking about their environment, bottlenecks that exist, and how to apply the theories and tools they learned about in their environment. Plus, many concepts were defined and clarified for the members.

 

Getting everything set up was a challenge in and of itself. Figures that many of my different worlds collided at the same time the Performance Lab environment was being set up. However, things always have a way of working themselves out, and work themselves out they did!

 

So... special thanks go to:

- Dell/EqualLogic - Donation of the use of one of their PS6000 SAN arrays

- IBM - Donation of the use of 4 x3550 servers

- Toyota Motor Sales - Portland Regional Office - Donation of the use of their awesome training facility

- VMware - Donation of the use of Ravi, Balaji, and Pablo

 

We will be getting the slide deck and a video of the presentation section posted on the PDX VMUG Communities site when they are made available (technically, the video is being shipped down at this moment, so hold tight... it will make it soon enough). Until then, though, please feel free to check out this posting from Pablo regarding the lab. Trip Report - PDX Performance Lab 2010

 

As sleep-deprived as I was, I am so glad the event went well! For anyone interested, the VMUG is working on the November all-day conference. Stay tuned!

 

Finally, Pablo Roesch is awesome and posts stuff online as 'heyitspablo'

Vimeo - heyitspablo

Twitter - heyitspablo

YouTube - heyitspablo

One of the features of the new ESXi environment is super cool... and super useful. Specifically, the ability to deploy an ESXi host using a kickstart file and PXE!

 

"But Bill... I do not have any of the infrastructure to support this! How would I set it up!?"

 

Great question, Anonymous user, I am glad you asked. Please check out the following instructions on how to set up the environment to support this!

 

NOTE! WARNING! DISCLAIMER! ALERT! CAUTION! CUIDADO! ANMERKUNG! NOTA! 笔记! 警告!
     I know nothing about your environment. You are following these instructions at your own risk. This setup will impact DHCP! Follow these instructions at your own risk and make the appropriate adjustments. Additionally, I do not know everything about everything. So, you are going to need to rely upon your sleuthing abilities to help resolve issues. Finally, this is not an official VMware document. I am posting this independently of VMware. So, again, you are on your own. All I can verify is that the procedure, as defined in this document, worked in my specific environment.

Phew... now that the unpleasantries are over with, please enjoy "ESXi Scripted Installation via PXE and Kickstart"

 

Overview

Deploying ESXi via PXE using a Kickstart file requires the following resources in your environment:

 

  • DHCP server

  • TFTP server

  • Web Server

  • SYSLINUX (for pxelinux.0)

  • Access to the ESXi installer .iso (mounted and available to allow copies of the files)

NOTE: VMware has only tested this setup using a Linux environment using SYSLINUX to provide the PXE boot functions. Using another solution may work, but is not officially supported.

 

The following environment, then, was set up to provide the PXE functions:

 

 

  • DHCP Server and TFTP server

  • Ubuntu 8.10 server (20G HD and 1G RAM)

  • IP Address: 192.168.163.48

  • Note: if you already have DHCP running on another server, you will need to make the appropriate adjustments to your existing DHCP server. Otherwise, you will run into issues with multiple DHCP servers on your network... not a good thing.
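  • For reference, if your existing DHCP server happens to be ISC dhcpd on another box, the adjustment is usually just a couple of lines in the relevant subnet block (a hedged sketch... the IP below is this lab's TFTP host, so adjust for yours):

filename "pxelinux.0";
next-server 192.168.163.48;

  • "next-server" tells the PXE client which TFTP server to pull pxelinux.0 from. Other DHCP servers have equivalent settings.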

 

The details of the Ubuntu installation are a little on the boring side, so I will assume you can get it installed on your own. The instructions will pick up after you have a properly functioning Ubuntu server on your network.

 

Installation

1 - Install required software

sudo apt-get update

sudo apt-get install inetutils-inetd syslinux tftpd-hpa dhcp3-server apache2

 

2 - Configure DHCP server

 

cd /etc/dhcp3

sudo mv dhcpd.conf dhcpd.conf.ORIG

sudo vi dhcpd.conf

 

Once inside of the file, enter the following configuration:

 

option domain-name-servers 192.168.11.130;

 

default-lease-time 86400;

max-lease-time 604800;

 

authoritative;

 

subnet 192.168.163.0 netmask 255.255.255.0 {

range 192.168.163.10 192.168.163.49;

filename "pxelinux.0";

option subnet-mask 255.255.255.0;

option broadcast-address 192.168.163.255;

option routers 192.168.163.2;

}

 

 

This will assign the machines on the network an IP address between 192.168.163.10 - 49. Additionally, the default gateway will be 192.168.163.2. The "pxelinux.0" line is very important because it instructs the PXE booting client on which file to download to begin the PXE boot process. Other PXE servers exist, so if you are using one of those, you should make the appropriate adjustments.

 

Once this configuration has been loaded, it is time to start the DHCP server:

 

sudo /etc/init.d/dhcp3-server restart
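If the daemon refuses to start, dhcpd.conf is usually to blame. A quick sanity check (assuming the stock Ubuntu syslog location):

grep dhcpd /var/log/syslog | tail

Any syntax errors will show up there, complete with the offending line number.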

 

 

3 - Configure the TFTP server environment

The TFTP server is what is going to be serving up the files for the client. So, the environment needs to be seeded properly.

The default location for the TFTP home is: /var/lib/tftpboot

The SYSLINUX package provides a number of utilities for creating bootable Linux environments. It was installed so we can harvest one of these utilities... specifically, pxelinux.0.

pxelinux.0 is expecting some files in specific locations in the TFTP home. So:

 

  • cd /var/lib/tftpboot

  • sudo mkdir pxelinux.cfg

  • cd pxelinux.cfg

 

 

Now, it is time to actually create the pxelinux configuration file. This configuration file is read by pxelinux.0 to display options to the end user. This is massively useful if you have multiple PXE boot options. In this scenario, though, we are only concerned with providing a single option: ESXi 4.1

 

  • sudo vi default

 

 

Inside of the file, add the following:

 

DISPLAY boot.txt

 

DEFAULT ESX_4.1_install

 

LABEL ESX_4.1_install

kernel vmware/esx4.1/mboot.c32

append vmware/esx4.1/vmkboot.gz ks=http://192.168.163.48/ks.cfg --- vmware/esx4.1/vmkernel.gz --- vmware/esx4.1/sys.vgz --- vmware/esx4.1/cim.vgz --- vmware/esx4.1/ienviron.vgz --- vmware/esx4.1/install.vgz

 

DISPLAY = a text file to display to the end user. In this example, boot.txt was never actually created, so nothing extra will be shown (a sample boot.txt is sketched after these notes).

DEFAULT = the default selection

LABEL ESX_4.1_install = This section defines which files to load to begin the installation process.

 

  • Note: the 'ks' value after 'vmkboot.gz' defines which Kickstart file to use. This will come in handy later on. You can remove this to perform the installation manually.
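Since these instructions never actually create boot.txt, here is a hedged sketch of one you could drop into /var/lib/tftpboot/boot.txt if you want the menu to display something (the wording is entirely up to you):

--- PXE Boot Menu ---
ESX_4.1_install - scripted ESXi 4.1 installation (default)

Whatever is in the file simply gets printed to the screen at the boot prompt.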

Now that you have defined the PXE configuration, it is time to put the pxelinux.0 file in place:

 

  • cd ..

  • sudo cp /usr/lib/syslinux/pxelinux.0 .

 

The final piece needed to setup the TFTP environment is the installation files. Recall the "LABEL ESX_4.1_install" section from above. Each file was specified using "vmware/esx4.1/vmkernel.gz" (for example). In reality, the files live on the OS at: /var/lib/tftpboot/vmware/esx4.1/vmkernel.gz. The structure needs to be created and the files put into place:

 

  • sudo mkdir -p vmware/esx4.1

 

The files are located on the ESXi installer CD/.iso. You need to get all of these files onto the server in order for them to be served up properly. If you are using a Windows desktop, you can load the ESXi installer into your CD drive and open it using Explorer. Using an SCP program (like WinSCP), copy the files to the server and then move them to the proper location (a Linux-only alternative is sketched after these steps):

 

  • mkdir /tmp/esxi4.1files

  • Copy the files to /tmp/esxi4.1files

  • cd /var/lib/tftpboot/vmware/esx4.1

  • sudo cp /tmp/esxi4.1files/* .
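If you would rather skip the Windows/WinSCP hop entirely, a hedged alternative is to copy the .iso to the Ubuntu server and loop-mount it (the .iso filename below is a placeholder... use whatever your installer is actually named):

sudo mkdir /mnt/esxi
sudo mount -o loop VMware-VMvisor-Installer-4.1.0.iso /mnt/esxi
sudo cp /mnt/esxi/* /var/lib/tftpboot/vmware/esx4.1/
sudo umount /mnt/esxi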

 

 

So, now we have the files in a location where a PXE booting client can find them!

The final step is enabling the TFTP daemon to run and turning it on!

 

  • sudo vi /etc/default/tftpd-hpa

  • Change "RUN_DAEMON=no" to "RUN_DAEMON=yes"

  • sudo /etc/init.d/tftpd-hpa restart
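Before moving on, it is worth confirming that TFTP is actually serving files. A quick spot check from any machine on the network with a tftp client installed (you may need to install one first):

tftp 192.168.163.48
tftp> get pxelinux.0
tftp> quit

If pxelinux.0 lands in your current directory, the TFTP side of the house is ready.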

 

 

4 - Configure the Kickstart file

The Kickstart file needs to be available to the PXE booting host in order to load the installation script. Common methods include NFS, FTP, and HTTP/HTTPS. In this environment, we are going to be using Apache to provide that. If you already have another web server available, the adjustment to load the script from somewhere else is minimal.

In an Ubuntu/Debian environment, the default web root is at: /var/www. So, we are going to place our Kickstart script there.

 

  • cd /var/www

 

The Kickstart script can contain a large number of configuration items. The file should be able to address any of the questions/options you may run into during the installation. The following is the Kickstart file in use for this example:

 

# Accept the VMware EULA
accepteula

# Set the 'root' password
rootpw supersecretpassword

# Select the disk to install ESXi on
autopart --firstdisk --overwritevmfs

# Identify the location of the installation files
install url http://192.168.163.48/vmware

# Configure the initial network settings
network --bootproto=static --ip=192.168.163.200 --gateway=192.168.163.2 --nameserver=192.168.11.130 --netmask=255.255.255.0 --hostname= --addvmportgroup=0

 

Copy the contents above into the ks.cfg file at /var/www/ks.cfg.

"rootpw" = specifies the password for the 'root' user on the host. Note that this is in plaintext on the web server. So, if you are concerned about this:

  1. Use a temporary password that will be changed later

  2. Place the ks.cfg file in place right before the installation and remove when complete

  3. Configure HTTPS to ensure the password is encrypted during transmission to the server

  4. Anything else you can think of

 

  • "autopart" - specifies which partition to install ESXi onto. NOTE!!! THIS WILL DESTROY ANY DATA ON THE PARTITION. Be very careful in selecting which partition you want to use. If you are unsure, do not use this option. In the example ks.cfg file, this will take the first disk and install to it.

  • "install" - defines where you would like to pull the installation files from. In the ks.cfg example, we will be pulling them from the webserver at 192.168.163.48 (the Ubuntu host).

  • "network" - defines the initial network settings for the host.

Additionally, pre-install, post-install, and first-boot scripts can be created and added to the ks.cfg script. These allow you to use Python or the BusyBox shell (a slimmed-down shell environment) to run commands on the host (ex: esxcfg-vswitch).
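As a hedged sketch of what that can look like (the --unsupported flag and interpreter option reflect my reading of the 4.1 scripted-install behavior, so verify against your build... and the vmnic/vSwitch names are placeholders):

%firstboot --unsupported --interpreter=busybox
# hypothetical example: add a second uplink to vSwitch0 on first boot
esxcfg-vswitch -L vmnic1 vSwitch0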

 

The final piece is ensuring that the ESXi files are available to download via HTTP (per the ks.cfg file). Recall that ks.cfg states:

install url http://192.168.163.48/vmware

In order for this to work, we need to have the ESXi install files in a folder called 'vmware' off of the root of the web server (aka: /var/www/vmware).

  • Yes, I know Apache can do some cool directory things. You can make this happen however you would like.

 

Earlier, when we were setting up the TFTP area, the contents of the ESXi installer were placed into the TFTP root. There is not too much reason to copy the same files over again. Instead, just create a symlink to the files (from within /var/www):

 

sudo ln -s /var/lib/tftpboot/vmware/esx4.1 vmware
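A quick hedged check that Apache is serving both pieces (run from any machine that can reach the server):

wget http://192.168.163.48/ks.cfg
wget http://192.168.163.48/vmware/mboot.c32

If both downloads succeed, the HTTP side is in order. (If the second one fails, check that Apache is allowed to follow symlinks out of /var/www.)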

 

 

5 - Time to test it out!

VMware Workstation is a great tool for testing this out! Otherwise, find another host with a PXE enabled NIC. Join whatever machine you are going to use onto the network and boot to the network. The following are a couple of screenshots of what the process looked like for me:

 

 

Notice the error message! I purposely left a value in ks.cfg undefined (the empty --hostname=). The installation process notified me of the error, but said that it had a resolution and would move on automagically.

 

 

 

Note: the utility I used to create the screenshots includes a timer. The installation took 4 minutes and 2 seconds to complete! Very useful utility!

 

Closing

Ultimately, this worked out GREAT! 4:02 to install ESXi is fantastic! With a little creativity, many of the pieces can be expanded on (PXE, ks.cfg, pre-/post-/first-boot scripts, etc...) and this can become a massively useful way to handle new installs in your environment... or to move from ESX (classic) to ESXi.

 

VMware has come up with a way, using fairly commodity and freely available utilities, to create a very efficient approach to installing ESXi.


vSphere 4.1 - Everything Else

Posted by billhill Jul 13, 2010

Welcome back... I am glad you could make it. Hopefully there was not too much traffic on your way over.

 

In this final post about the new 4.1 features, we are going to cover some pretty major topics. These can be a little heavy. So, put on your thinking cap and get comfortable. Don't forget to stretch your VM muscles.

 

vStorage APIs for Array Integration (or VAAI for short)

We all know that the shared storage environment is critical for proper vSphere operation. Those shared storage devices are pretty smart. Their controllers handle storage operations very well... much more efficiently than an ESX host. So, it only makes sense for an ESX environment to make use of the device functionality and offload work onto the device. This is where VAAI comes in.

 

VMware has been working closely with storage vendors to develop APIs and handshakes that the ESX server can utilize to identify VAAI enabled storage devices and interact with them.

 

Functionally, VAAI uses the following primitives:

 

  1. Full Copy - xcopy operations to offload copy work to the array

  2. Block Zeroing - speeds up zeroing out of blocks or writing repeated content

  3. Hardware Assisted Locking - an alternative to locking the entire LUN during certain functions (VM startup, for example).

 

VAAI helps improve performance with functions including:

 

  1. Storage vMotion

  2. Provisioning VMs from templates

  3. Improved thin disk provisioning

  4. VMFS shared storage pool scalability

 

Ultimately, this just requires a firmware upgrade from your storage vendor. Don't count on your device receiving this functionality, though. This is a decision that the storage vendor is going to make. So, if your equipment is too old, it may never receive the VAAI firmware. Don't worry, though, you can still use the storage device with your vSphere environment!

 

 

HA & FT Enhancements

The limits for FT and HA operations have been increased!!!

  • 32 hosts/cluster

  • 320 VMs/host

  • 3,000 VMs/cluster

 

Now, it is worth noting that these limits are still in effect post-failover. So, you really need to be sure that these limits will not be violated after a failover event. For example, if each host is running 320 VMs and one host dies, it is unlikely that HA and FT will work properly because you cannot find homes for the failed host's 320 VMs... all of the other ESX hosts are loaded to the brim. Oops.

 

HA Diagnostic and Reliability Improvements

 

  1. Health check status - ensures we do not shoot ourselves in the foot during setup.

  2. HA operational status - a new cluster status window displays more information about current HA status (found in the cluster summary window/tab)

  3. Improved HA + DRS interoperability during HA failover - DRS will use vMotion to free up contiguous resources so HA can place a VM that needs to find a home.

  4. HA App Awareness - APIs are now available to 3rd party application developers that expose HA state to the guest OS. This allows agents to more completely monitor guest status.

 

 

FT Enhancements

 

  1. Don't hold your breath... you are still limited to a single vCPU per VM when using FT! Sources say that this will be addressed in future releases. Oh so sorry!!!

  2. Finally, FT is fully integrated with DRS. This ensures proper balancing across the cluster after an FT event. However, this requires EVC to be enabled... which is not too bad, right?

  3. The requirement that each ESX host be running the same exact version is no longer necessary. The ESX hosts can be off by some patch levels and still function properly.

  4. Separate events for the primary and secondary VMs

 

DRS Enhancements

A new constraint is available. Specifically, you can limit a VM to a specific subset of ESX hosts. These constraints can be implemented in 2 different ways:

 

  1. Required - this rule cannot be violated, even manually. This would be used in the event that you have an application licensed to specific machines versus all ESX hosts. If those machines become unavailable, you do not want the application started elsewhere and the license violated.

 

  2. Preferential - DRS/HA will violate the rule if necessary for failover purposes. HA may not place the VM correctly, but DRS will move the VM to the appropriate server.

 

The issue is that the use of the constraints above creates a complex environment. It is plausible to envision a scenario where too many constraints actually restrict cluster performance and VM uptime! Only use them when you really, really need to!

 

Also, this is one case of "It is about time!!!": previously, we could only set up 1-to-1 relationships for anti-affinity rules. Now, we can specify multiple VMs in a single anti-affinity rule!

 

 

 

So, that concludes my updates as to what is going on with vSphere 4.1! The features made available stand to make all of our environments that much better and move VMware forward as the leader in the market. Drive home careful. Y'all come back soon, 'ya hear?!

Welcome back... I assume you would like your usual seat. Wendy will be right over with your drink. Please sit back, relax, and enjoy getting Down 'n Dirty with ESX!

 

ESX serves as the foundation piece of the vSphere environment. So, one would expect some attention to be directed towards hardening and enhancing the environment... which was definitely the case.

 

General

First off... this is the perfect time to address a growing concern/rumor in the VMware environment: ESX vs. ESXi.

 

VMware is moving towards the small footprint style ESXi server. So much so that this is the last release with both the ESX and ESXi options. In the future, you can fully expect that the original ESX environment will no longer be an option. So, it is in your best interest to look into migrating from ESX to ESXi. This is like a band-aid... just rip it off now and get it over with, right?! No? Well... spend some time planning the migration if you plan on moving beyond this revision.

 

"How should I approach the upgrade, then???!!!" First off, calm down, VMware's got your back. Check out the VMware Upgrade Center. VMware has created a website dedicated to moving to the ESXi environment.

 

Finally... we all know about the free version of ESX server. There has been some confusion regarding nomenclature. So, to remedy this issue, the free version will now be known as: vSphere Hypervisor.

 

Going forward, I will be using ESX and ESXi interchangeably.

 

Deployment Options

ESX now has 2 new ways to deploy! Woo hoo!

 

  1. Boot From SAN

  2. Scripted Installation

 

The boot from SAN option is a feature that many environments have been longing for. Previously, the ESXi installation required local storage or a local flash drive. Now, assuming you have supported hardware, you can actually boot your ESX server from a SAN. Say goodbye to servers with local hard drives!

 

The scripted installation is going to be an awesome addition for small and large scale deployments. I will be posting a more in-depth look at the scripted installation. Suffice it to say that with the addition of some external servers (DHCP and TFTP), an ESXi installation can take 4 minutes from power up to usable ESX environment!

 

 

Tech Support Mode

Prior to the 4.1 release, there were very few options for remote ESXi administration. Sure, you can use the rCLI (and other tools) and the vSphere Client. However, being able to log into the console remotely and perform more OS-style administration tasks required the not-so-secret, unsupported Tech Support Mode.

 

With this release, Tech Support Mode is fully supported and is highly configurable (both from the Console as well as vCenter). Now, you have options!

 

  • Local Tech Support - Access from the console of the server

 

  • Remote Tech Support - Opens up SSH access to the server for remote access.

 

VMware considers Tech Support Mode to be a security issue. So, once this has been enabled, the vSphere Client will display a "Configuration Error" graphic on the ESX host summary screen as well as show the "!" graphic on the host. This functionality can be enabled on demand, with little effort. So, it is reasonable that it be enabled and disabled as needed versus keeping it enabled.

 

Just because you have more traditional OS-style access to the console, it does not mean you should install agents, set up cron jobs, or anything else along those lines. The preferred methods for accessing the server are the rCLI, direct console, vCenter, and vSphere Client.

 

Active Directory Integration

Finally, the ESX host, via a little service, can communicate with and authenticate against Active Directory.

 

Conceptually, just point your ESX host to the Active Directory domain, create an "ESX Admins" group (ESX looks for this group's existence and assigns default rights to users in the group), and start authenticating. This can be used to authenticate: vSphere Client, APIs, Direct Console UI, and Tech Support Mode! Don't worry, though, the original local ESX user still exists in the event that the domain is unavailable.

 

To be a little more granular in security, it is possible to create additional Active Directory groups and assign rights on the server to those groups!

 

 

Total Lockdown!

This allows you to completely control all local access to ESXi from vCenter:

 

  1. DCUI

  2. Lockdown mode (disallow access except root on DCUI)

  3. Tech support mode

 

If all of the functions are locked, the only way to really impact the server is to pull the plugs!

 

 

This was really just a smattering of everything going on in the new ESX environment. Again, check out my post on scripted installations! They are awesome!

 

 

 

Wait...

 

You're still here?

 

You're not getting tired of my ranting and raving?

 

Awesome... call home and make sure you can stay for one more posting! This time, we are going to go over some of the additional features of 4.1. You're going to love it!

Hi All!

 

I am very excited about the new release of vSphere 4.1! The new features are awesome... so, grab a coffee/beer/whatever, sit back, and relax as you read about the cool new features of vSphere 4.1!

 

PREFACE:

It is worth noting that many of the new features are only available at the Enterprise Plus SKU. So, do your duty and check which features your current licensing includes... and start working on those proposals to upgrade!

 

Storage I/O Control

This is an epic addition to the ESX/vSphere environment! Now, you have the ability to throttle the I/O on specific volumes. Plus, this is accomplished across all ESX hosts... the throttling data is held in the first 5MB of the volume. As long as the ESX host can see the volume, it will be subject to the control!

 

Recall that you can allocate CPU shares and Memory shares for a virtual machine (wait... you did not know about that... you should check that out too!). Now, you can add one more share to the VM configurations: Storage I/O (yep... it is configured on a per VMDK basis!).

 

For the Storage I/O throttling functionality to be activated, there needs to be congestion on the datastore for some time. The default settings require a sustained ~30ms response time before the throttling kicks in.

 

Conceptually, this functionality is similar to using a carpool lane on a loaded freeway. The carpool lane is practically useless at 2:00am (unless you live in Los Angeles, Hong Kong, Moscow, or anywhere else with way too many drivers). However, you see benefits to the carpool lane once 5:00pm hits.

 

In a more real world example, it is completely plausible that a data warehouse environment starts thrashing the I/O on the storage environment... so much so that the Email environment (IMAP, Exchange, Lotus, whatever) takes a massive performance hit. This functionality would give the Email environment priority on storage usage over the data warehouse. A properly performing email environment will (more often than not) be more important than a data warehouse taking an extra minute to process some data.

 

Go go gadget Storage I/O Control!

 

 

Network I/O Control

I know... Storage I/O Control was a big pill to swallow. But, storage is not the only function that can have I/O control. Let me introduce the Network I/O Control function!

 

This function is all about scheduling, baby!

 

In the real world, we are all used to having multiple physical NICs assigned to multiple VM network types:

 

  1. VM

  2. Management

  3. NFS

  4. iSCSI

  5. vMotion

  6. FT Logging

 

Now that we are seeing more and more 10Gbps NICs, the available bandwidth combined with the cost of the hardware to support it means that it no longer makes sense to keep the same number of NICs. It becomes way too expensive for the number of NICs needed, and a 10Gbps link provides a ton of bandwidth anyway! So, these once separate networks now need to converge onto fewer physical interfaces.

 

Network I/O Control accomplishes 2 goals:

  1. Really flexible partitioning

     Guarantee service levels when different network flows are competing for the same bandwidth

  2. Isolation

     One specific flow cannot dominate another flow

 

This new feature is a function of the vDistributed Switch (aka - VDS). So, sadly, you need the Enterprise Plus SKU to take advantage of this sweet functionality.

 

Recall that this functionality is really coming out to address the emergence of the 10Gbps NICs. However, don't fret. It does not take too much imagination to see that this is useful for us non-10Gbps NIC users.

 

Memory Compression

At first glance, one may believe that this is a new function that allows for more overcommitment of memory on an ESX host. However, it is quite the opposite. This feature is a tool that an ESX host will use to combat memory overcommitment.

 

Now, an ESX host has 4 tools to deal with using too much memory on a host:

 

  1. Transparent Page Sharing

  2. Ballooning

  3. Memory Compression

  4. Swap To Disk

 

There are multiple advantages to this new function:

 

  1. This keeps more virtual memory in physical RAM for longer periods of time

  2. Allows for short periods of memory overcommit to have minimal impact on the performance of the overall environment.

 

Conceptually, the swapped memory is actually zipped/compressed and living in physical RAM. When the VM OS requests some of the swapped RAM, the portion is unzipped/uncompressed and is used. This functionality is still much faster than swapping to physical disk... even if the physical disk is a Solid State Disk (SSD).

 

 

vMotion Enhancements

vMotion is the functionality that became the holy grail for VMware... it really placed VMware at the forefront of the industry, which it continues to lead. I am happy to see that they have not stopped improving it!

 

Enhancements have been made to speed up the actual vMotion process. Additionally, the limits for concurrent vMotions have been increased:

 

  • 1Gbps NICs = 4 Concurrent vMotions

 

  • 10Gbps NICs = 8 Concurrent vMotions

 

  • Datastore (both VMFS and NFS) = 128 Concurrent vMotions

 

This is super useful for Maintenance Mode evacuations!

 

Last, but certainly not least, EVC Enhancements

This is one of the coolest features in ESX... but most people do not know about it or what it does.

 

Conceptually, this function is an addition to a cluster that allows for multiple CPU families to co-exist in the same cluster. For example: Intel 5100 and 5500 CPUs can live in the same cluster! Great for SMBs trying to expand their vSphere environment without a complete hardware overhaul.

 

This time around, adjustments have been made for newer AMD processors without the 3DNow! functionality. Additionally, the cluster now looks at the CPU features a VM is actually using when determining whether a host can join the cluster, versus Ye Olden Times when the CPU's full feature set was used to determine membership in the cluster.

 

 

So... that is it for the cool new 'whiz bang' new features in the ESX 4.1 release! Hopefully, this was enough to get you drooling over what you have to look forward to in your future. But, in the event that you are itching for more, stay tuned... we're going to dive into some more ESX specific enhancements...


Come And Get It!

Posted by billhill May 19, 2010

Here's another quick post... 'cause that's how I roll!

 

The PDX VMUG meeting starts in about 1 hour... which means your chance of winning an iPad takes place in about 5 hours. Come and get it!

 

~Bill

Hey all you loyal readers!

 

Super quick post for you.

 

The VMworld planners are currently taking attendee feedback on what they would like to see regarding VMworld content! Just jump on over to: Call For Papers Voting

 

There are a ton of different topics available to peruse and vote on. So, if you have a moment and would like to help decide what is going on, dive in!

 

~Bill

Who here uses VMware ESXi Free!? Had that question been asked a couple days ago, I would have raised my hand. However, one of the most prevalent drawbacks to the ESXi Free product slowly crept into our environment... and it just will not fly.

 

VMware touts the ESXi Free as a fully functional virtualization product. And, they are totally correct. However, with the loss of the Console OS (the management piece in ESX Classic), the installation of any kind of server hardware management agent becomes fairly difficult/impossible.

 

So... what does that mean to you? Well... when you log into the server using the vSphere Client, you will not see the "Hardware Status" tab! For our environment, we decided to purchase vSphere Standard licenses and connect the host to our primary vCenter Server! Magically, the "Hardware Status" tab appears!

 

Whoa there... this story is not over... time for the meat of the post!

 

Now that the hardware status is available, what is the best way for you to get that information?! How are you going to find out if the server has a failing fan? Perhaps a DIMM has gone bad! Unless you are checking on a regular basis, you may miss out on the issue.

 

VMware's got your back on this one!

 

The vCenter Server product includes a large number of alerts and triggers available to you. All you need to do is create the proper alert, associate the triggers, and you are good to go.

 

The following is an example of a possible alert that you may want to look into creating (although, you take full responsibility if something errant happens. We are only dealing with sending an email, so nothing bad should happen. Use your best judgement, though):

 

  • Open vSphere Client

 

  • Display your hosts and clusters (Home --> Inventory --> Hosts and Clusters)

 

  • Select your vCenter Server

 

  • Select the "Alarms" tab

 

  • Push the "Definitions" button

 

  • In the vast amount of white space that appears, right-click and select "New Alarm"

 

  • Name the alarm something creative like "VMHost Hardware Issue"

    

  • Give the Description field some love too. These underutilized fields actually serve a purpose... especially when they are your last best hope of figuring out what the alert is doing there in the first place.

 

  • Select Alarm Type - Monitor: Hosts

    

  • Select the "Monitor for specific events occurring on this object, for example, VM powered On" option

 

  • Click on the "Triggers" tab

 

  • Drop down the list to define the event and select "Hardware Health Changed"

 

  • Ensure the "Status" = "Alert"

 

  • Add a new Event of "Hardware Health Changed" and a "Status" of "Warning"

 

  • Click on the "Actions" tab

 

  • Select the "Action" as "Send a notification email"

 

  • In the "Configuration" column, enter your email address

 

  • In the final 4 columns (green --> yellow, yellow --> red, red --> yellow, yellow --> green), select "Once"

    

  • This configuration states that you will be alerted one (1) time when the state changes

 

  • Click OK

 

  • Find your server and unplug it! JUST KIDDING!!! DO NOT DO THAT!

 

The alarm was applied at the vCenter level, so any host under the vCenter shelter will be affected.

 

In our Test environment, I created the rule above and then removed one of the power cords from the server. The following is the email I received:

 

Target: testesx001.domain.local

Previous Status: Gray

New Status: Red

 

Alarm Definition:

(Event alarm expression: Hardware Health Changed; Status = Red)

 

Event details:

Health of Power changed from red to green.

 

 

Then, I plugged the power supply back in and received the following email:

 

Target: testesx001.domain.local

Previous Status: Gray

New Status: Red

 

Alarm Definition:

(Event alarm expression: Hardware Health Changed; Status = Red)

 

Event details:

Health of Power changed from green to red.

 

 

Awesome!

 

Further testing is needed before I feel like I am ready to fly with this. The biggest key is ensuring that the reporting time is good enough for us!

 

Oh yeah... since we all like seeing things like this... check out what happens when a drive is removed from the server to simulate a drive failure!

 

 

Essentially, the client is reporting:

 

  • Power Supply is having issues

 

  • Every single drive involved in the RAID array for the server is showing errors!

 

  • The RAID Array is showing a warning!

 

I would suggest playing with the alerting. CIM/SMASH provides the ESX host (as well as vCenter) a ton of useful information. Best not be missin' any of that, ya' hear?!

I just wanted to take a quick moment to thank everyone in attendance at the "Affordable Business Continuity Solutions for SMBs: VMware and Intel Webinar Series Part 2"!

 

I hope you found value in what you saw and heard.

 

If you have any questions about the content, VMware products, or virtualization in general, please feel free to ask! Plus, please feel free to look into your local VMware User Groups!


IBM's New Hotness - eX5

Posted by billhill Apr 28, 2010

I have a soft spot in my heart for the IBM xSeries servers. Since I began at this fine institution, we have been using the IBM xSeries and they have proven themselves to be amazingly reliable! Plus, the design and manufacturing process IBM follows creates some really stellar products!

 

We have been keeping tabs on the xSeries servers as we have seen some new models and would like to take advantage of them for some new projects and for refreshing the VMware environment. One day, my manager attended an update session put on by IBM... lo and behold, IBM went and threw down the gauntlet!

 

(thank you, Flickr entry) - I am not sure which one IBM is... probably the Luke Skywalker Lego guy. Maybe the other Lego guy is Dell Vader... Meh... I digress...

 

IBM has really stepped up the offerings in their xSeries servers. Get this:

  • The new 3690 servers have the ability to join up with another to create a monster computer (or 2 physical servers acting as one) - double sockets, double DIMMs, etc...

  • The Max5 tray is a 1U (I believe) expansion chassis that can be loaded with RAM... How does 2.4TB of RAM sound for your virtualization environment!? (without seeing pricing, I am going to guess that it will be cost prohibitive).

  • RAM in the servers is no longer tied to a processor. In many cases, if you want to max out the RAM in your server, you need to purchase another processor because of the chipset/chip's ability to properly address the RAM. That is no longer an issue.

  • EXFlash is a new storage environment... local storage. Just by looking at the name of the device, it should come as no surprise that this is flash based server storage (again, probably cost prohibitive at this time).

  • IBM is stating (per the .pdf on my desk from IBM called "Changing the economics of the x86 server environment") that you can run up to 82% more virtual machines on eX5 based systems for the same license fees.

 

The key to making this happen is a little expansion port/socket that Intel designed into their newer server chipsets. The expansion port (QPI) allows the servers to be daisy-chained together. Think SCSI for storage... now for servers. This allows the servers to connect together, connect to the Max5 RAM expansion, and connect to the EXFlash array.

 

So, to all of those other server manufacturers... It has been brought!

 

For more information, I would suggest going to the horse's mouth:

IBM eX5 System - Main

IBM White Paper (.pdf) - IBM, Intel, VMware vSphere


Option #3 - EVC

Posted by billhill Apr 27, 2010

As we are all aware, Virtualization is heavily dependent on the CPU architecture of the servers that host the environment. VMware is no different. Differences in CPU Type, Family, Model, Stepping, and Revision (as far as Intel Chips are concerned - I will let an AMD user speak to the differences between chips) as well as technology support (VT, Hyper-Threading, etc...) can cause a technology freeze in your virtualization environment.

 

Typically, when you need to expand your VMware ESX environment (yay!), you will be stuck with the following scenarios:

 

  1. Buy the same make and model of server. Ensure that the same CPU type is included with the server so that it can join your VMware environment "seamlessly".

    1. As your servers get older, though, finding the same make and model with a matching CPU type becomes more and more difficult. Remember, you are in an SMB environment. Budgets may not exist to keep overhauling.

  2. Bite the bullet and buy all new servers!

 

I am not too sure about you, but I think that Option #2 sounds great!... but not realistic. Most SMBs will go for Option #1.

 

But, what if I told you there was another option... Option #3?! Surely I jest, right?! With vSphere 4 and vCenter 4, the ability for Option #3 to exist is finally here: Enhanced vMotion Compatibility (EVC). EVC is one of those little gems that appears out of nowhere and can make a huge difference in keeping your VMware environment modern!

 

Conceptually, a DRS cluster is able to emulate different levels of CPU families.

 

 

So, the key is finding the common denominator between what all of your processors can support. Odds are your older Xeon processors cannot support the i7 chip instructions.

 

To do this:

 

  • Open vCenter

  • For every server in your environment

    • Select the Summary tab

    • Locate the VMware EVC Mode line of the General section

    • Click the pop-out balloon (this will display the EVC modes supported by the chip)

      • Server with Xeon 5100 processors

 

 

      • Server with Xeon 5500 processors

 

 

    • Make note of the different EVC Modes supported by each server

 

Given the processors we have in our environment (per the images above), we can create a new DRS cluster with the base EVC Mode being Intel Xeon Core 2!

 

Option #3 is an amazing way to ensure that we can stay with the times and purchase the best server available at the time (assuming EVC Mode compatibility)!

 

FYI: VMware suggests shutting down the VMs before adjusting any cluster properties.

 

Scott Lowe has some great commentary on the introduction of newer CPUs into the EVC cluster! Nehalem Processors In EVC: http://blog.scottlowe.org/2009/06/10/xeon-5500-cpus-and-evc-in-vsphere/ (Spoiler alert: very few CPU functions are lost!).