VMware Cloud Community
melvintang
Contributor

Installing vSphere in a mixed hardware environment (IBM rack & blade servers)?

Hi all,


I've installed and configured six ESX hosts (vSphere 4.1) in my environment, all with vNetwork Standard Switches.

IBM x3650 M3, six NICs per host:

2 x SC

2 x VMotion

2 x VM network traffic


Now my team plans to install ESX on blade servers (a BladeCenter H with 7 HS22 blades) and join them to the existing ESX cluster.

Each blade has 6 NICs and 2 FC ports.
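For reference, a layout like the one above can also be scripted rather than clicked through. The sketch below is a rough, untested outline using pyVmomi; the vCenter address, host name, vmnic numbering, and the assumption of one two-uplink vSwitch per traffic type are all hypothetical placeholders, not the actual configuration here.

```python
# Rough sketch (untested): recreate a six-NIC standard vSwitch layout
# (2 x SC, 2 x vMotion, 2 x VM traffic) on one host via pyVmomi.
# All names and credentials below are hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="***", sslContext=ctx)
content = si.RetrieveContent()

# Look up the host by DNS name (hypothetical name).
host = content.searchIndex.FindByDnsName(dnsName="esx01.example.local",
                                         vmSearch=False)
netsys = host.configManager.networkSystem

# One standard vSwitch per traffic type, two uplinks each.
layout = {
    "vSwitch0": (["vmnic0", "vmnic1"], "Service Console"),
    "vSwitch1": (["vmnic2", "vmnic3"], "vMotion"),
    "vSwitch2": (["vmnic4", "vmnic5"], "VM Network"),
}

for vswitch_name, (uplinks, pg_name) in layout.items():
    vss_spec = vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=uplinks))
    netsys.AddVirtualSwitch(vswitchName=vswitch_name, spec=vss_spec)

    pg_spec = vim.host.PortGroup.Specification(
        name=pg_name, vlanId=0, vswitchName=vswitch_name,
        policy=vim.host.NetworkPolicy())
    netsys.AddPortGroup(portgrp=pg_spec)

# The SC / vMotion interfaces themselves would still need to be created on
# their port groups; that step is omitted here.

Disconnect(si)
```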


I've found a few online articles, but I'm not really sure whether mixing two different types of hardware in the same cluster is encouraged or not. I just don't feel entirely comfortable with it.

Does anyone here have experience with this?

Can it be done?


Really appreciate it 😄


Best regards,

Melvin



10 Replies
AndreTheGiant
Immortal

It is possible to have vMotion working if all CPUs are compatible with an EVC baseline.
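One way to sanity-check that before joining the blades is to compare each host's maximum supported EVC mode with the cluster's current EVC baseline. A minimal pyVmomi sketch, assuming an existing connection `si` (as in the earlier sketch) and a hypothetical cluster name:

```python
# Minimal sketch: report each host's maximum EVC mode against the cluster's
# current EVC baseline. Assumes an existing pyVmomi connection `si`; the
# cluster name is a hypothetical placeholder.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)

for cluster in view.view:
    if cluster.name != "Prod-Cluster-01":          # hypothetical cluster name
        continue
    print("Cluster EVC mode:", cluster.summary.currentEVCModeKey)
    for esx_host in cluster.host:
        print("  %-30s max EVC: %-20s CPU: %s" % (
            esx_host.name,
            esx_host.summary.maxEVCModeKey,        # highest baseline this CPU supports
            esx_host.summary.hardware.cpuModel))
```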

IMHO 6 + 7 becomes a big cluster. Two clusters could be an option.

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
chriswahl
Virtuoso

The goal is always to keep hardware as similar as possible to help ensure balance with HA/DRS, but it is not a requirement.

With blades my concern has always been more about HA primaries; ensure that your 5 HA primary nodes are not all on blades in the same chassis. If a chassis fails and it contains all 5 HA primary nodes, there will be no HA restart event.
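To illustrate the constraint in plain Python: if any one chassis holds five or more of a cluster's hosts, a single chassis failure could take out all five HA primaries. The host-to-chassis mapping below is a made-up example mirroring a six-rack-host plus seven-blade layout, not the real environment.

```python
# Plain-Python sketch: flag any blade chassis that contributes more than four
# hosts to one cluster, since vSphere 4.x HA elects five primary nodes and a
# chassis holding five or more hosts could end up with all of them.
# The mapping below is a hypothetical example.
MAX_HOSTS_PER_CHASSIS = 4

host_to_chassis = {
    "esx01": "rack", "esx02": "rack", "esx03": "rack",
    "esx04": "rack", "esx05": "rack", "esx06": "rack",
    "esx07": "bch-01", "esx08": "bch-01", "esx09": "bch-01",
    "esx10": "bch-01", "esx11": "bch-01", "esx12": "bch-01",
    "esx13": "bch-01",
}

def hosts_per_chassis(mapping):
    counts = {}
    for _host, chassis in mapping.items():
        counts[chassis] = counts.get(chassis, 0) + 1
    return counts

counts = hosts_per_chassis(host_to_chassis)
print("hosts per chassis:", counts)
for chassis, n in counts.items():
    if chassis != "rack" and n > MAX_HOSTS_PER_CHASSIS:
        print("WARNING: %s holds %d hosts; all 5 HA primaries could land there"
              % (chassis, n))
```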

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
melvintang
Contributor

Thanks for the feedback, Andre & Chris 🙂

This morning I found out that the BladeCenter H is not fully reserved for ESX. Some of the blades will run other applications as physical servers.

My concern here is the networking part. As far as I know, the blade NICs are mapped to the switch modules in Bays 1 and 2.

VMware *strongly* recommends that vMotion/FT have a dedicated NIC. Will it still work if my blades are configured with VLANs instead?

Any idea?

Thanks
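One common approach when uplinks are shared is a VLAN-tagged port group for vMotion on the standard vSwitch (whether that satisfies the "dedicated NIC" recommendation is a design call, as discussed later in the thread). A rough pyVmomi sketch; the `host` object is looked up as in the earlier sketch, and the vSwitch name and VLAN ID are hypothetical.

```python
# Rough sketch (untested): add a VLAN-tagged vMotion port group to an existing
# standard vSwitch, one way to separate vMotion traffic when the blade NICs
# are shared. `host` is a vim.HostSystem obtained as in the earlier sketch;
# the vSwitch name and VLAN ID are hypothetical.
from pyVmomi import vim

netsys = host.configManager.networkSystem

vmotion_pg = vim.host.PortGroup.Specification(
    name="vMotion",
    vlanId=42,                      # hypothetical vMotion VLAN
    vswitchName="vSwitch1",         # existing vSwitch carrying the shared uplinks
    policy=vim.host.NetworkPolicy())
netsys.AddPortGroup(portgrp=vmotion_pg)

# The vMotion VMkernel interface itself would then be created on this port
# group (HostNetworkSystem.AddVirtualNic) and enabled for vMotion; that step
# is omitted here.
```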

logiboy123
Expert
Accepted Solution

It is generally bad form to use different hardware in the same cluster. It is definitely possible if the EVC mode on the cluster is correct, but I wouldn't do it unless I absolutely had to.

Theoretical maximums aside, I usually find that clusters with 8-10 hosts are the most efficient as far as price, pathing issues, VM density, and maintenance go. Totally dependent on the size of your hosts and storage medium, of course.

If you are using blades in a cluster, then make sure the blades do not all come from the same chassis. There is a limit on the number of HA primary hosts in a cluster, so the maximum number of blade hosts I assign per chassis per cluster is 4. See page 3 of the Best Practices guide for more information on this.

For networking, see the following article:

http://www.kendrickcoleman.com/index.php?/Tech-Blog/vsphere-host-nic-design-6-nics.html

You say that you are using the SC, and I presume this means you are running ESX 4.1. Is there any reason why you cannot use ESXi 4.1 instead? vSphere 5 will not include the ESX service console, so if you are going to swap, now would be the better time.

Regards,

Paul

melvintang
Contributor

Sure, I will try to propose ESXi 4.1 to management.

Thanks, Paul 🙂

logiboy123
Expert

NP. Just point out to them that:

1) vSphere 5 is out in a few weeks, and ESXi is the only release.

2) Upgrading from ESX 4.1 to ESXi 5.0 is going to be a massive nightmare.

3) There is absolutely no feature or functionality loss in going to ESXi. In fact, it makes it easier to manage and install.

I live to serve. 😉

Kahonu84
Hot Shot

Aloha,

Where did you hear about 5.0???

Bill

AlbertWT
Virtuoso

Wow, that happened quickly 🙂

I just finished upgrading 4.0 to 4.1 yesterday, and now v5.0 is out already.

/* Please feel free to provide any comments or input you may have. */
Texiwill
Leadership

Hello,

Actually, 5 is not out at the moment; its GA date has not been stated. It has been announced, however.

As for mixing nodes, that is not a huge deal as long as the CPU families are the same and EVC works. The concern over networking, however, is very important. You really want to rethink having multi-use blade chassis, as that opens up some interesting security issues depending on how the networking and storage are configured. If they use pass-through blade switches, it's not a huge issue, but if they use I/O virtualization blade switches, then those blade switches become a fairly serious attack point.

If you trust VLANs, then this may not be an issue, but if you are using these switches for storage, then you may have issues related to performance as well as security. It really depends on how you can divide up the blade switch more than anything, and whether you can segregate the non-ESX, storage, vMotion, and FT traffic appropriately.
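As a starting point for that kind of audit, the read-only pyVmomi sketch below dumps every standard-vSwitch port group and its VLAN ID per host (it again assumes an existing connection `si` as in the earlier sketches):

```python
# Read-only sketch: list each host's standard-vSwitch port groups and VLAN IDs
# as a quick check on how storage, vMotion, FT, and VM traffic are segregated.
# Assumes an existing pyVmomi connection `si`.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

for esx_host in view.view:
    print(esx_host.name)
    for pg in esx_host.config.network.portgroup:
        print("  %-25s vSwitch=%-10s VLAN=%s" % (
            pg.spec.name, pg.spec.vswitchName, pg.spec.vlanId))
```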

Best regards,

Edward L. Haletky

Communities Moderator, VMware vExpert,

Author: VMware vSphere and Virtual Infrastructure Security,VMware ESX and ESXi in the Enterprise 2nd Edition

Podcast: The Virtualization Security Podcast Resources: The Virtualization Bookshelf

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
melvintang
Contributor

Hi Edward, thanks for your reply.

I will rethink it.

Cheers,

Melvin
