VMware Cloud Community
lamw
Community Manager

VI3.5 Update 2 Available Now - per Duncan

Looks like VMware did another release over the weekend: ESX 3.5 Update 2, ESXi 3.5 Update 2, VirtualCenter 2.5 Update 2, and VCB 1.5.

Has anyone taken the bold step of upgrading their "development" environment yet? Any gotchas? I'm looking forward to the new alarms in VC.

Per Yellow-bricks: http://www.yellow-bricks.com/2008/07/26/esx-35-update-2-available-now/

Am I the first one to notice this? VMware just released Update 2 for ESX(i) 3.5 and a whole bunch of new patches!

So what's new?

  • Windows Server 2008 support - Windows Server 2008 (Standard, Enterprise, and Datacenter editions) is supported as a guest operating system. With VMware's memory overcommit technology and the reliability of ESX, virtual machine density can be maximized with this new guest operating system to achieve the highest degree of ROI. Guest operating system customizations and Microsoft Cluster Server (MSCS) are not supported with Windows Server 2008.

  • Enhanced VMotion Compatibility - Enhanced VMotion compatibility (EVC) simplifies VMotion compatibility issues across CPU generations by automatically configuring server CPUs with Intel FlexMigration or AMD-V Extended Migration technologies to be compatible with older servers. Once EVC is enabled for a cluster in the VirtualCenter inventory, all hosts in that cluster are configured to ensure CPU compatibility for VMotion. VirtualCenter will not permit the addition of hosts which cannot be automatically configured to be compatible with those already in the EVC cluster.

  • Storage VMotion - Storage VMotion from a FC/iSCSI datastore to another FC/iSCSI datastore is supported. This support is extended to ESX/ESXi 3.5 Update 1 as well.

  • VSS quiescing support - When creating quiesced snapshots of Windows Server 2003 guests, both filesystem and application quiescing are supported. With Windows Server 2008 guests, only filesystem quiescing is supported. For more information, see the Virtual Machine Backup Guide and the VMware Consolidated Backup 1.5 Release Notes.

  • Hot Virtual Extend Support - The ability to extend a virtual disk while virtual machines are running is provided. Hot extend is supported for VMFS flat virtual disks without snapshots opened in persistent mode (a rough command-line sketch follows this list).

  • 192 vCPUs per host - VMware now supports increasing the maximum number of vCPUs per host to 192, given that the maximum number of virtual machines per host is 170 and that no more than 3 virtual floppy devices or virtual CD-ROM devices are configured on the host at any given time. This support is extended to ESX 3.5 Update 1 as well.
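
Since the hot-extend bullet above begs for a concrete example, here's a rough sketch of what it looks like from the ESX service console. The datastore path, folder name and target size are made-up placeholders, so treat this as an illustration of the idea rather than a verified procedure, and remember the guest OS still has to grow its own partition afterwards:

    # Grow the flat .vmdk of a running VM to 40 GB (placeholder path and size)
    vmkfstools -X 40G /vmfs/volumes/datastore1/myvm/myvm.vmdk

    # In a Windows Server 2003 guest you would then extend the partition,
    # e.g. with diskpart (select the volume, then run "extend").

Per the bullet above, this only applies to flat VMFS disks opened in persistent mode and with no snapshots.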

I really, really like the VSS support for snapshots; especially for VCB this is a great feature! And what about hot-extending your hard disk? This makes a VMFS datastore as flexible as an RDM!

For hardware there are also a couple of really great additions:

  • 8Gb Fibre Channel HBAs - Support is available for 8Gb Fibre Channel HBAs. See the I/O Compatibility Guide for ESX Server 3.5 and ESX Server 3i for details.

  • SAS arrays - More configurations are supported. See the Storage/SAN Compatibility Guide for ESX Server 3.5 and ESX Server 3i for details.

  • 10 GbE iSCSI initiator - iSCSI over a 10GbE interface is supported. This support is extended to ESX Server 3.5 Update 1, ESX Server version 3.5 Update 1 Embedded, and ESX Server version 3.5 Update 1 Installable as well.

  • 10 GbE NFS support - NFS over a 10GbE interface is supported.

  • IBM System x3950 M2 - x3950 M2 in a 4-chassis configuration is supported, complete with hardware management capabilities through multi-node Intelligent Platform Management Interface (IPMI) driver and provider. Systems with up to 32 cores are fully supported. Systems with more than 32 cores are supported experimentally.

  • IPMI OEM extension support - Execution of IPMI OEM extension commands is supported.

  • System health monitoring through CIM providers - More Common Information Model (CIM) providers are added for enhanced hardware monitoring, including storage management providers provided by QLogic and Emulex. LSI MegaRAID providers are also included and are supported experimentally.

  • CIM SMASH/Server Management API - The VMware CIM SMASH/Server Management API provides an interface for developers building CIM-compliant applications to monitor and manage the health of systems. CIM SMASH is now a fully supported interface on ESX Server 3.5 and VMware ESX Server 3i.

  • Display of system health information - More system health information is displayed in VI Client for both ESX Server 3.5 and VMware ESX Server 3i.

  • Remote CLI - Remote Command Line Interface (CLI) is now supported on ESX Server 3.5 as well as ESX Server 3i. See the Remote Command-Line Interface Installation and Reference Guide for more information (a couple of example invocations follow this list).
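
For anyone who hasn't tried the Remote CLI yet, the commands are essentially remote equivalents of the familiar service-console esxcfg-* tools and run from your workstation against an ESX or ESXi host. The server name and account below are placeholders (and on the Windows package the commands are installed as .pl scripts), so adjust for your environment:

    # List the physical NICs on a remote host (placeholder server name)
    vicfg-nics --server esx01.example.com --username root -l

    # List the virtual machines registered on that host
    vmware-cmd --server esx01.example.com --username root -l

If I remember right, the tools prompt for the password when you leave it off the command line, which is handy for quick one-off checks.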

One of the most important things in my opinion is the full support for the CIM SMASH API! And iSCSI over a 10GbE interface, and the same goes for NFS! 8Gb Fibre Channel and the additional SAS arrays are a great extension as well.

  • VMware High Availability - VirtualCenter 2.5 Update 2 adds full support for monitoring individual virtual machine failures based on VMware Tools heartbeats. This release also extends support for clusters containing mixed combinations of ESX and ESXi hosts, and minimizes previous configuration dependencies on DNS.

  • VirtualCenter Alarms - VirtualCenter 2.5 Update 2 extends support for alarms on the overall health of the server by considering the health of each of the individual system components such as memory and power supplies. Alarms can now be configured to trigger when host health degrades.

  • Guided Consolidation - now provides administrators with the ability to filter the list of discovered systems by computer name, IP address, domain name or analyzing status. Administrators can also choose to explicitly add physical hosts for analysis, without waiting for systems to be auto-discovered by the Consolidation wizard. Systems can be manually added for analysis by specifying either a hostname or IP address. Multiple hostnames or IP addresses, separated by comma or semi-colon delimiters, may also be specified for analysis. Systems can also be manually added for analysis by specifying an IP address range or by importing a file containing a list of hostnames or IP addresses that need to be analyzed for consolidation. Guided Consolidation also allows administrators to override the provided recommendations and manually invoke the conversion wizard.

  • Live Cloning - VirtualCenter 2.5 Update 2 provides the ability to create a clone of a powered-on virtual machine without any downtime to the running virtual machine. Therefore, administrators are no longer required to power off a virtual machine in order to create a clone of it.

  • Single Sign-On - You can now automatically authenticate to VirtualCenter using your current Windows domain login credentials on the local workstation, as long as the credentials are valid on the VirtualCenter server. This capability also supports logging in to Windows using Certificates and Smartcards. It can be used with the VI Client or the VI Remote CLI to ensure that scripts written using the VI Toolkits can take advantage of the Windows credentials of your current session to automatically connect to VirtualCenter.

One of the best new features described above, in my opinion, is the extension of alarms! It's awesome that VirtualCenter will report on hardware health! But what about that live cloning? That will definitely come in handy when troubleshooting a live production environment: just copy the server, start it without the network attached, and try to solve the problem!

DOWNLOAD it now:

ESX 3.5 Update 2

ESXi 3.5 installable Update 2

VirtualCenter 2.5 Update 2

VMware Consolidated Backup 1.5

Thanks for the update, Duncan! Great blog.


Accepted Solutions
stvkpln
Virtuoso

A cluster is not a vmotion boundary; the only boundary that would exist would be for cluster-specific features, i.e. HA, DRS, and DPM. As long as the hardware is compatible for the vmotion operation (which is why you enable EVC to the appropriate profile in the new cluster), shared storage is visible to all hosts, and port groups are properly configured, there's absolutely no reason you can't do that. Like I said, I do it all of the time... In fact, I just vmotioned a VM out of a two-host cluster to a standalone box with no issue just now. If anything, I'd imagine the datacenter object would be the vmotion boundary, but if you've got multiple logical datacenters where storage is shared across like that, I don't see why vmotion wouldn't work in that scenario either. I've never tried it outside the datacenter object, though.

-Steve

36 Replies
mikemcsw
Contributor

I tried it to see if it would cure my "pink screening" last night, but it didn't help... I still get a pink screen (see previous message).

Troy_Clavell
Immortal

This is great news. I like the option of now supporting 192 vCPUs per host!!

http://www.vmware.com/download/vi/

George_B
Enthusiast

I wonder if there is any chance that they have seen fit to enable jumbo frames on the iSCSI support yet. Some of us with QLogic HBAs have spent a long time waiting. :(

minerat
Enthusiast

Hmmm, VSS quiescing. Does this mean that VMware Tools will automatically quiesce the filesystem/(VSS-aware) apps in 2003 when you take a snapshot?

steve31783
Enthusiast

Anyone know how to enable the single sign on feature? I read the post on how to do it when we were using 2.5 U1, but I assumed there would be some type of setting in the client with U2 since this was an advertised supported feature...

RParker
Immortal

I was under the impression it was automatic. If you login from your computer and you are on the domain, the client should (take off those custom login -u -p settings from before) login automatically.

The instructions sound as if it's a client update more so than a VC-enabled feature. You could already log in, but now it's a seamless passthrough.

RParker
Immortal

Jumbo frames are supported. If you have an HBA, it's up to your HBA to support jumbo frames; the iSCSI SOFTWARE implementation is the part that doesn't support jumbo frames...

steve31783
Enthusiast

Weird... I just updated my test VC (which is on the domain). The installer updated the client, and I can see the new build #, but I had to log in manually... No SSO... I never had those settings configured, just knew of them.

abaum
Hot Shot

Pardon my iSCSI ignorance (don't have it in my shop), but is this thread mixing terminology a bit? I thought jumbo frames related to Ethernet, but some posts mention HBAs. I didn't think HBAs used jumbo frames since HBAs are typically Fibre Channel. I'm thinking the proper terminology to use for iSCSI is "initiator". Is this correct? Are you using HBA in the generic sense to reference a storage card of any sort?

adam

dominic7
Virtuoso

An 'HBA' is a "host bus adapter"; technically anything you plug into an expansion slot could be an HBA, but generally it's used to refer to a storage card of some sort. In the VMware world it's used to refer to dedicated iSCSI cards that have a TOE (TCP Offload Engine) and to Fibre Channel cards.

jsykora
Enthusiast

An HBA is a Host Bus Adapter and can be either a Fibre Channel or iSCSI hardware device used to connect to that particular type of storage fabric. With iSCSI there are also software initiators in addition to the hardware method of connection. Basically, I believe the poster wants software support for jumbo frames on the VMware iSCSI initiator built into ESX.

abaum
Hot Shot

Thanks for the clarification. In my years of working on SANs, I've never heard of HBA being used for anything but fibre channel. In my limited readings of iSCSI, I've never seen an iSCSI card called an HBA. I usually see them referred to as TOEs or hardware initiators.

adam

George_B
Enthusiast

"Jumbo frames are supported. If you have an HBA, it's up to your HBA to support jumbo frames; the iSCSI SOFTWARE implementation is the part that doesn't support jumbo frames..."

RParker, I am talking about jumbo frame support on HBAs. I have QLogic QLE4062C iSCSI HBAs, and enabling jumbo frames in their firmware firstly prevents an ESX host from booting for a good 5 minutes while it hangs at the driver load stage; secondly, there's a 50% chance of it actually being able to see the HBA after that; and finally it doesn't use jumbo frames anyway.

The HBA supports jumbo frames, the switch supports jumbo frames, and the SAN supports jumbo frames, but VMware support says they do not support jumbo frames, and that is in relation to a hardware HBA that fully supports them.

My support ticket question:

"With ESX 3.5 Update 1 and the QLogic QLE4062C iSCSI PCI-E HBA, the system will not boot correctly with jumbo frames enabled. With the MTU set to 1500 bytes the system operates normally, but when jumbo frames of 9000 MTU are enabled the system hangs for ~10 minutes when booting on the QLA4022.o driver. The system eventually continues, but when it gets to the login screen it gives an error: Initialization for qla4022 failed with -19.

I am using the latest BIOS firmware release from QLogic for ESX 3.5, which is 3.0.1.33."
Their response:

"Regarding to your query about the Jumbo frames for iSCSI is not currently implemented on the ESX. Please change the configuration to regular frames and check if you are still having issues.

I hope this information is enough to solve your query. If you need more information please don't hesitate in contacting us back via phone or email. If this information solved your problem, please let me know."
depping
Leadership

Weird to see your own name in a topic title. :)

Duncan

My virtualisation blog:

If you find this information useful, please award points for "correct" or "helpful".

sheetsb
Enthusiast

I'm extremely interested in the EVC function. We had to purchase some newer HP blades with AMD processors that aren't compatible with our existing ESX hosts for VMotion. This would be a great help, except you need to shut down all the VMs in a cluster to turn it on. I guess I understand why you need to power off all the VMs, but I don't see why you have to do it all at once. I have over 300 VMs, 150+ in each of my two clusters, and I can't just shut them all down, not even 150 at a time. This requirement makes what would be a desirable function nearly worthless to me. Unless I can migrate running VMs to a cluster with this turned on, it is a non-starter for me.

Bill S.

lamw
Community Manager

I agree; we've had to do special masking between the 585 G1/G2, and now we have a new G5 in the mix. EVC sounds/looks exciting, but the downtime required to get your cluster within EVC is nearly impossible for most organizations. I think, from their release notes, this is probably as close as you'll get, with it being auto-configured, which will require a reboot of all your VMs. Usually what we've done is just mask newly created VMs that will not VMotion due to the difference in CPU steppings, etc. It would be nice to not take an outage and be able to enable this feature.

stvkpln
Virtuoso

You don't necessarily need to take an outage to enable EVC. The easiest approach would be to create a new cluster and add the new 585s into that cluster with EVC set to the appropriate level to support backward compatibility with the Rev. E/F systems, configure networking/storage/etc., then start vmotioning VMs over from one cluster to the other, and as the old hosts empty out, move them over into the new cluster. Not the neatest of ways, but it will avoid the downtime problem.

-Steve
lamw
Community Manager

I thought EVC required the VMs to go through a reboot, since the CPUID masking is done for each VM as EVC auto-configures the cluster based on your hardware and reflects that for all existing VMs? I know for a new cluster setup it would be easy, but I think most of us have an existing production environment, and taking an outage for the VMs is probably out of the question. If this is only done on the ESX host, then that would be awesome!

Update: I actually looked at the cluster settings, and when I click on EVC it says "error, virtual machines are not powered off". I can try to VMotion them all off and see if that'll help, but I have a feeling you'll need to power cycle your VMs before the new changes take effect. Can anyone else confirm this assumption?
