lamw
Community Manager

VI3.5 Update 2 Available Now - per Duncan

Looks like VMware did another release over the weekend: ESX 3.5 Update 2, ESXi 3.5 Update 2, VirtualCenter 2.5 Update 2, and VCB 1.5.

Has anyone taken the bold step of upgrading their "development" environment? Any gotchas yet? I'm looking forward to the new alarms in VC.

Per Yellow-bricks: http://www.yellow-bricks.com/2008/07/26/esx-35-update-2-available-now/

Am I the first one to notice this? VMware just released Update 2 for ESX(i) 3.5 and a whole bunch of new patches!

So what's new?

  • Windows Server 2008 support - Windows Server 2008 (Standard, Enterprise, and Datacenter editions) is supported as a guest operating system. With VMware's memory overcommit technology and the reliability of ESX, virtual machine density can be maximized with this new guest operating system to achieve the highest degree of ROI. Guest operating system customizations and Microsoft Cluster Server (MSCS) are not supported with Windows Server 2008.

  • Enhanced VMotion Compatibility - Enhanced VMotion compatibility (EVC) simplifies VMotion compatibility issues across CPU generations by automatically configuring server CPUs with Intel FlexMigration or AMD-V Extended Migration technologies to be compatible with older servers. Once EVC is enabled for a cluster in the VirtualCenter inventory, all hosts in that cluster are configured to ensure CPU compatibility for VMotion. VirtualCenter will not permit the addition of hosts which cannot be automatically configured to be compatible with those already in the EVC cluster.

  • Storage VMotion - Storage VMotion from a FC/iSCSI datastore to another FC/iSCSI datastore is supported. This support is extended on ESX/ESXi 3.5 Update 1 as well.

  • VSS quiescing support - When creating quiesced snapshot of Windows Server 2003 guests, both filesystem and application quiescing are supported. With Windows Server 2008 guests, only filesystem quiescing is supported. For more information, see the Virtual Machine Backup Guide and the VMware Consolidated Backup 1.5 Release Notes.

  • Hot Virtual Extend Support - The ability to extend a virtual disk while virtual machines are running is provided. Hot extend is supported for vmfs flat virtual disks without snapshots opened in persistent mode.

  • 192 vCPUs per host - VMware now supports increasing the maximum number of vCPUs per host to 192, given that the maximum number of virtual machines per host is 170 and that no more than 3 virtual floppy devices or virtual CD-ROM devices are configured on the host at any given time. This support is extended to ESX 3.5 Update 1 as well.

I really, really like the VSS support for snapshots; especially for VCB this is a great feature! And what about hot extending your hard disk? This makes a VMFS datastore as flexible as an RDM datastore!
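
Here's a minimal VI Toolkit (PowerShell) sketch of a hot extend, assuming the conditions above are met (flat VMFS disk, no snapshots); the server and VM names are placeholders, and the guest OS still has to grow its partition afterwards:

# Hedged sketch: hot-extend the first virtual disk of a running VM to 30 GB.
# "vc01" and "web01" are placeholder names.
Connect-VIServer -Server vc01

$vm   = Get-VM -Name "web01"
$disk = Get-HardDisk -VM $vm | Select-Object -First 1

# CapacityKB is expressed in kilobytes; 30GB / 1KB converts the target size.
Set-HardDisk -HardDisk $disk -CapacityKB (30GB / 1KB) -Confirm:$false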

For Hardware there are also a couple of really great additions:

  • 8Gb Fiber Channel HBAs - Support is available for 8Gb fiber channel HBAs. See the I/O Compatibility Guide for ESX Server 3.5 and ESX Server 3i for details.

  • SAS arrays - more configurations are supported. See the Storage/SAN Compatibility Guide for ESX Server 3.5 and ESX Server 3i for details.

  • 10 GbE iSCSI initiator - iSCSI over a 10GbE interface is supported. This support is extended on ESX Server 3.5 Update 1, ESX Server version 3.5 Update 1 Embedded and ESX Server version 3.5 Update 1 Installable as well.

  • 10 GbE NFS support - NFS over a 10GbE interface is supported.

  • IBM System x3950 M2 - x3950 M2 in a 4-chassis configuration is supported, complete with hardware management capabilities through multi-node Intelligent Platform Management Interface (IPMI) driver and provider. Systems with up to 32 cores are fully supported. Systems with more than 32 cores are supported experimentally.

  • IPMI OEM extension support - Execution of IPMI OEM extension commands is supported.

  • System health monitoring through CIM providers - More Common Information Model (CIM) providers are added for enhanced hardware monitoring, including storage management providers provided by QLogic and Emulex. LSI MegaRAID providers are also included and are supported experimentally.

  • CIM SMASH/Server Management API - The VMware CIM SMASH/Server Management API provides an interface for developers building CIM-compliant applications to monitor and manage the health of systems. CIM SMASH is now a fully supported interface on ESX Server 3.5 and VMware ESX Server 3i.

  • Display of system health information - More system health information is displayed in VI Client for both ESX Server 3.5 and VMware ESX Server 3i.

  • Remote CLI - Remote Command Line Interface (CLI) is now supported on ESX Server 3.5 as well as ESX Server 3i. See the Remote Command-Line Interface Installation and Reference Guide for more information.

One of the important things in my opinion is the full support for the CIM SMASH API! And iSCSI over a 10GbE interface, same goes for NFS! 8Gb Fibre Channel and SAS arrays are a great extension.

  • VMware High Availability - VirtualCenter 2.5 update 2 adds full support for monitoring individual virtual machine failures based on VMware tools heartbeats. This release also extends support for clusters containing mixed combinations of ESX and ESXi hosts, and minimizes previous configuration dependencies on DNS.

  • VirtualCenter Alarms - VirtualCenter 2.5 Update 2 extends support for alarms on the overall health of the server by considering the health of each of the individual system components such as memory and power supplies. Alarms can now be configured to trigger when host health degrades.

  • Guided Consolidation - now provides administrators with the ability to filter the list of discovered systems by computer name, IP address, domain name or analyzing status. Administrators can also choose to explicitly add physical hosts for analysis, without waiting for systems to be auto-discovered by the Consolidation wizard. Systems can be manually added for analysis by specifying either a hostname or IP address. Multiple hostnames or IP addresses, separated by comma or semi-colon delimiters, may also be specified for analysis. Systems can also be manually added for analysis by specifying an IP address range or by importing a file containing a list of hostnames or IP addresses that need to be analyzed for consolidation. Guided Consolidation also allows administrators to override the provided recommendations and manually invoke the conversion wizard.

  • Live Cloning - VirtualCenter 2.5 Update 2 provides the ability to create a clone of a powered-on virtual machine without any downtime to the running virtual machine. Therefore, administrators are no longer required to power off a virtual machine in order to create a clone of it.

  • Single Sign-On - You can now automatically authenticate to VirtualCenter using your current Windows domain login credentials on the local workstation, as long as the credentials are valid on the VirtualCenter server. This capability also supports logging in to Windows using Certificates and Smartcards. It can be used with the VI Client or the VI Remote CLI to ensure that scripts written using the VI Toolkits can take advantage of the Windows credentials of your current session to automatically connect to VirtualCenter.
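
As a side note, here is a hedged VI Toolkit sketch of that pass-through behavior (the VirtualCenter name "vc01" is a placeholder, and this is just an illustration of the SSO capability described above):

# With no -User/-Password given, Connect-VIServer can fall back to the current
# Windows session credentials, assuming they are valid on the VirtualCenter server.
Connect-VIServer -Server vc01
Get-VM | Select-Object Name, PowerState   # quick check that the session works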

One of the best new features described above, in my opinion, is the extension of alarms! It's awesome that VirtualCenter will report on hardware health! But what about that live cloning? That will definitely come in handy when troubleshooting a live production environment. Just copy the server, start it without the network attached, and try to solve the problem!
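
A rough VI Toolkit sketch of that troubleshooting workflow (all names here are placeholders): clone the running VM, keep its NICs disconnected, then power it on.

# Hedged sketch of the live-clone-for-troubleshooting idea; "prod-app01",
# "esx02", and "ds01" are placeholder names.
$src   = Get-VM -Name "prod-app01"                      # powered-on source VM
$clone = New-VM -Name "prod-app01-clone" -VM $src `
                -VMHost (Get-VMHost -Name "esx02") `
                -Datastore (Get-Datastore -Name "ds01")

# Make sure the copy comes up with its network disconnected.
Get-NetworkAdapter -VM $clone |
    Set-NetworkAdapter -StartConnected:$false -Confirm:$false

Start-VM -VM $clone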

DOWNLOAD it now:

ESX 3.5 Update 2

ESXi 3.5 installable Update 2

VirtualCenter 2.5 Update 2

VMware Consolidated Backup 1.5

Thanks for the update Duncan! Great blog

sheetsb
Enthusiast

I received the same error when I tried to change the EVC level on a test cluster with one running VM. I don't think you can just VMotion systems in without a power off. As I said before, this will make it very difficult to use EVC if I have to power down all the VMs, even if I can do it one at a time.

Bill S.

lamw
Community Manager

Agreed, I think VMs will need a power cycle for this EVC functionality to be enabled.

dconvery
Champion

I think they (VMware) are figuring a reboot is required anyway after the tools upgrade....

Dave Convery, VCDX-DCV #20 ** http://www.tech-tap.com ** http://twitter.com/dconvery ** "Careful. We don't want to learn from this." -Bill Watterson, "Calvin and Hobbes"
lamw
Community Manager

That's a good point ... but I don't know if everyone will take an outage to upgrade VMware Tools; it's not required, just recommended as a best practice. I'm sure you can do dev/test pretty easily, but for production systems it might take a little more effort, and you'd probably do it over a period of time. This is of course after doing sufficient testing in dev/test to make sure Update 2 doesn't do anything unexpected. I still think it's going to be hard to get most organizations to take an outage on their production systems to reboot the VMs for either VMware Tools or EVC.

jamieorth
Expert

As far as the SSO - create a shortcut with the following:

"C:\Program Files\VMware\Infrastructure\Virtual Infrastructure Client\Launcher\VpxClient.exe" -passthroughauth -s yourVirtualCenterServerNameHere

This assumes you installed the client in the default locations...

Regards...

Jamie

If you found this information useful, please consider awarding points for "Correct" or "Helpful".

Remember, if it's not one thing, it's your mother...

lamw
Community Manager

Yea, I think most of the folks out here were hoping this would be built into VIC/VC or have an option to toggle on/off. The command-line addition was discovered back in Feb; I also wanted to confirm with VMware Support to make sure there wasn't some hidden configuration you could enable. I've posted my results after getting off the phone with the SE.

http://communities.vmware.com/message/1010341

dconvery
Champion

The -passthroughAuth switch has been around for a while. Not new to U2...

Dave Convery, VCDX-DCV #20 ** http://www.tech-tap.com ** http://twitter.com/dconvery ** "Careful. We don't want to learn from this." -Bill Watterson, "Calvin and Hobbes"
stvkpln
Virtuoso

The power cycle isn't necessary if you VMotion, because you'll be enabling EVC in the newly built cluster beforehand. If you're moving to an already EVC-enabled cluster, you wouldn't need to touch the VM again... it would already carry the correct masks coming from its current host into the newly built, mask-friendly EVC cluster. I still think you should be able to enable EVC on the spot if all hosts are compatible, don't get me wrong... but that should work as expected. And this was from conversations with the product manager for EVC the last time I was in Palo Alto a few weeks ago :)

-Steve
lamw
Community Manager

I think the issue lies with existing clusters that do not have EVC enabled. To enable it you'll need to configure the cluster, but the VMs also need to be power cycled, since the masking is done for you within each VM's .vmx file. At least that's how I understood EVC was being implemented; it's just more of an auto-configure thing vs. you having to spend your time re-configuring each VM every time you have a different set of CPUs you need compatibility against.

If we follow the idea that you create a new cluster, take your Host A and VMotion all its VMs to Host B, then detach Host A from the current cluster and re-attach it to the new cluster which has EVC enabled, you still won't be able to get the VMs from the other host to the new cluster, since you can't VMotion between two clusters. So you're still stuck with VMs not having EVC mode enabled, which will require a power cycle. I also recently noticed a new flag in the .vmx file for VMs:

evcCompatibilityMode = "FALSE"

So unless I'm misunderstanding, I'm still not sure how you can enable EVC in your existing environment without having to power cycle your VMs for EVC to take effect?
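
For anyone else poking at this, here is a quick read-only VI Toolkit sketch I'd use to see which VMs already carry that flag (it just walks each VM's extra config through the API; nothing is changed):

# List each VM's evcCompatibilityMode entry, if present, via the VI API view.
Get-VM | ForEach-Object {
    $view  = Get-View -VIObject $_
    $entry = $view.Config.ExtraConfig | Where-Object { $_.Key -eq "evcCompatibilityMode" }
    if ($entry) {
        "{0}: evcCompatibilityMode = {1}" -f $_.Name, $entry.Value
    }
}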

stvkpln
Virtuoso

Right, you're missing the whole concept of "use vmotion to move the VM's into the new cluster"... heh. There is the caveat that the hardware needs to support the VT bits required, but we'll assume all hardware is ready to go for EVC. What you'd do is the following (in order):

1. Create new cluster and add new host and/or set one host from existing cluster into maintenance mode and move it into the new cluster

2. Enable EVC on the cluster at the right compatibility level for the existing cluster

3. vmotion existing VM's over, and as hosts clear out, move the hosts into the cluster

4. Voila!

The use of vmotion here is a perfect example. Since the new cluster's hardware will be in a state that is compatible with the existing one, as long as all the LUNs and port groups are configured, you can migrate your VMs to the new host with no fear. That's what EVC is for, right? Doing it this way avoids having to shut down all VMs in the cluster. It's 100% non-disruptive. As you said, disconnecting and moving in wouldn't do the trick for EVC; the VMs need to be migrated in via vmotion.
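
If you want to script most of it, here is a rough VI Toolkit outline of those steps (names are placeholders; the EVC toggle itself stays a VI Client step, since as far as I know the toolkit doesn't expose it):

# Hedged outline of the cluster-migration steps above; "DC1", "Cluster-OLD",
# "Cluster-NEW", and "esx01" are placeholder names.
$dc  = Get-Datacenter -Name "DC1"
$new = New-Cluster -Name "Cluster-NEW" -Location $dc -DrsEnabled -HAEnabled

# 1. Evacuate one host from the old cluster and move it into the new cluster.
Set-VMHost -VMHost (Get-VMHost -Name "esx01") -State Maintenance
Move-VMHost -VMHost (Get-VMHost -Name "esx01") -Destination $new
Set-VMHost -VMHost (Get-VMHost -Name "esx01") -State Connected

# 2. (Manual step) Enable EVC on Cluster-NEW in the VI Client at the baseline
#    that matches the existing hosts.

# 3. vMotion the running VMs across the cluster boundary onto the moved host,
#    then repeat: hosts get moved into Cluster-NEW as they clear out.
Get-VM -Location (Get-Cluster -Name "Cluster-OLD") |
    Move-VM -Destination (Get-VMHost -Name "esx01")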

-Steve
lamw
Community Manager

I'm still confused by your logic. You talk about VMotion, okay ... but you can't VMotion between clusters? Let me see if I follow; let me propose a scenario and see if I'm following you or if you're following me.

We have Cluster-OLD, which consists of 3 ESX Servers, let's say 10 VMs per host, and they've recently been upgraded to ESX 3.5 Update 2. All configuration changes were kept, so DRS/HA exist but no EVC. From what I understand, you cannot enable EVC on a host within a cluster if you have running virtual machines; EVC is a cluster configuration applied to the hosts within the cluster, so even if 2 of the 3 hosts had no VMs, you still could not enable EVC. With me so far? Okay, your suggestion is to create a separate cluster, say Cluster-NEW. To enable EVC you need a host within the cluster, so you suggested putting one of the 3 ESX Servers into maintenance mode, which will VMotion its VMs to the remaining two hosts, so let's say each now has 15 VMs. You now have an ESX 3.5 Update 2 host with no VMs that's in maintenance mode. You then suggested detaching it from Cluster-OLD and putting it into Cluster-NEW; okay, I agree with that. You then said to modify the cluster configuration to enable EVC; okay, that'll work since there are no VMs running on that host and it's the only host in the cluster.

Now the interesting part is your step #3, "vmotion existing VM's over, and as hosts clear out, move the hosts into the cluster." How do you propose to do that? You have 2 ESX Servers running in Cluster-OLD and 1 ESX Server running in Cluster-NEW, and last time I checked, VMotion does not cross between clusters. If you follow the logic of putting another server into maintenance mode, you eventually have 1 ESX Server running 30 VMs while the other 2 are idle, and eventually, if you want to complete this EVC setup, you need to power off the VMs, which is what we're trying to avoid.

If you can clarify, that would be awesome. If we can enable EVC on a new cluster without taking an outage on VMs, then EVC is valuable to all organizations; but if you need to take down some or all of your VMs just to get EVC into your current environment, then I'm reluctant to say that all organizations will be able to ask for this outage.

stvkpln
Virtuoso

A cluster is not a vmotion boundary; the only boundary that would exist would be for cluster-specific features, i.e. HA, DRS, and DPM. As long as the hardware is compatible for the vmotion operation (which is why you enable EVC at the appropriate profile in the new cluster), shared storage is visible to all hosts, and port groups are properly configured, there's absolutely no reason you can't do that. Like I said, I do it all of the time... In fact, I just vmotioned a VM out of a two-host cluster to a standalone box with no issue just now. If anything, I'd imagine the datacenter object would be the vmotion boundary, but if you've got multiple logical datacenters where storage is shared across them like that, I don't see why vmotion wouldn't work in that scenario either. I've never tried it outside the datacenter object, though.
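
As a sanity check, the same thing from the VI Toolkit is a one-liner (placeholder names; the usual vmotion requirements of shared storage, matching port groups, and compatible CPUs still apply):

# Hedged one-liner: migrate a running VM to a host outside its current cluster.
Move-VM -VM (Get-VM -Name "app01") -Destination (Get-VMHost -Name "standalone-esx01")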

-Steve
lamw
Community Manager

I'll give it a whirl tomorrow. I've always been under the impression that VMotion cannot cross DCs and clusters.

lamw
Community Manager

You were right! I think my notion of not being able to VMotion between DCs/clusters came from the fact that our clusters actually had different VMotion subnets, hence it would not have worked; but if you create a new cluster and use your suggested method, it works, although the majority of it has to be done manually, which isn't too bad. I'm currently starting up the first host. What I did notice, which I thought EVC would have taken care of, was the mix between CPU families (AMD / Intel): enabling EVC is for only one CPU vendor, so your cluster still needs to consist of either all AMD or all Intel hosts. It would have been nice if they could have gotten that to work, but it would probably require the CPU vendors to provide a layer of abstraction, which I believe both AMD and Intel are working towards, but probably not in the current chipsets.

sheetsb
Enthusiast

If you can actually VMotion between the clusters, that would solve my problem. I just need to move some hosts to a new cluster and start the migrations. Did your VMs have any CPU masks applied? Ours did, to support moving between earlier versions of the AMD CPUs. I'm just wondering if EVC will automatically fix the masks as the VMotion occurs, which would be my assumption.

Bill S.

lamw
Community Manager

The masks do get updated, I believe, and the very last line where it says evcCompatibilityMode = "FALSE" becomes TRUE after VMotioning to the EVC-enabled cluster. I guess technically you're just creating a temporary cluster out of the same set of hosts; as long as the hosts are all configured the same with the VMotion network, etc., you can cross clusters. I don't know if you can cross DCs ... I would have to test that one out, but you'll probably create it within the same DC. And you just choose EVC for Intel OR AMD; I was hoping it might be for both. It's working great, I'm on to my 2nd host right now.

Update: You do not need to disconnect/remove your old ESX Server to move it to the newly created cluster; just drag/drop from Cluster-OLD to Cluster-NEW, and this will preserve all statistical data!

stvkpln
Virtuoso

I think asking for compatibility between Intel and AMD may be asking too much of: 1) a v1 concept, and 2) technology stacks from Intel and AMD that, frankly, aren't compatible outside their own families. It shouldn't be surprising that neither Intel nor AMD has any desire to make things too easy here. Baby steps :)

-Steve