VMware Cloud Community
ruisantos
Contributor

vSphere Essentials Plus upgrade from 4.1 to 5.5

Hi all,

I'm beginning the process of upgrading from an Essentials Plus 4.1 deployment to 5.5, also Essentials Plus.

My question is not directly related to the upgrade itself; it's more a request for advice from more experienced users. I'll try to explain my problem.

Our current datacenter is composed of three physical servers each with, among other hardware:

- 2 physical CPUs

- 32 GB RAM

- 6 × 1 Gbit/s NICs

We also have two switches to provide network redundancy between hosts.

The current network layout, as far as the ESX hosts are concerned, does not use VLANs. This is the current network approach per host:

- The six NICs are paired, giving 3 groups of 2 NICs each

- Each NIC group is connected to two switches, for availability purposes

- There are basically 3 different physical networks

The problem with the current layout is that almost all communications are limited to 1 Gbit/s.

Another problem is the scalability issue of not having VLANs.

I'm taking this upgrade window, not only to upgrade vSphere to version 5.5, but also to try to improve network performance and scalability.

One of the main objectives is to get more than 1 Gbit/s for communications. Here is my thinking on this:

- I will aggregate the NICs into one group of 4 NICs and another group of 2 NICs. This will give me a total of 2 Gbit/s on the Management Network and 4 Gbit/s on the remaining networks. QoS could also be applied, giving different layouts, but I'm trying to keep it simple for you (the reader) :)

As far as I know there can be several approaches:

1) Use of the Cisco Nexus 1000v: AFAIK this is not an option, since it requires Enterprise Plus licensing, which we don't have ;(

2) Creation of a non-LACP link aggregation between the multiple NICs and the two switches: the switches are HP 2510s, which do not support inter-switch trunking, so this cannot be done.

So I'm left with what are, in my view, two bad approaches:

3) Create two standard vSwitches, splitting the group of four NICs into two groups of two NICs and connecting each vSwitch to a different physical switch. Availability would then have to be handled inside each virtual machine, which would need two vNICs, one connected to each vSwitch. This would get me a 2 Gbit/s rate.

4) Use only one vSwitch for all 4 NICs, but connect it to a single physical switch. The second switch would then be configured as a standby with the same configuration as the active one. This would require manual intervention if the active switch failed, which would be very unpleasant.

This is where I need help. Is there anything I'm missing? Is there anything else I can do to take better advantage of our current infrastructure?

If you've made it this far, thank you for your patience :)

Thanks to all,

Rui Santos

7 Replies
gabinun
Enthusiast

I think you are right.

The steps for performing the upgrade are:

- Upgrade vCenter Server

- Upgrade ESX hosts (because we are doing a hardware refresh we are just going to install 5.5 on our new hosts and add them to the cluster and then decommission our existing)

- Upgrade VMware Tools (a rough sketch of bulk-upgrading Tools follows this list)

- Upgrade Datastores
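For the VMware Tools step, here is a rough pyVmomi sketch of how out-of-date VMs could be found and upgraded in bulk. The vCenter address and credentials are placeholders, and this is only a sketch of the idea, not a tested procedure:

    # Hedged sketch: bulk-upgrade VMware Tools on powered-on VMs whose Tools
    # are reported as out of date. Connection details are placeholders.
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.local",
                      user="administrator@vsphere.local", pwd="secret")
    content = si.RetrieveContent()

    # Walk every VM in the inventory.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)

    for vm in view.view:
        # pyVmomi enum values compare equal to their string names.
        if (vm.runtime.powerState == "poweredOn"
                and vm.guest.toolsStatus == "toolsOld"):
            print("Upgrading VMware Tools on %s" % vm.name)
            vm.UpgradeTools_Task()  # asynchronous; watch the task in vCenter

    view.DestroyView()
    Disconnect(si)

The same loop could of course just report toolsStatus first, if you prefer to schedule the upgrades VM by VM.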

GN
ruisantos
Contributor

Hi gabinun,

Thank you for your input on this matter.

Regards,

Rui

rh5592
Hot Shot

From a design perspective, one key rule is to keep it as simple as possible. You do not want to introduce complexity into your environment. I am assuming this is only a small environment (judging from the license you have, the number of networks, and the fact that you are not using VLANs). Is the 1 Gbit/s network already being saturated? My approach would be to first identify where the bottleneck actually is: the majority of vSphere performance issues are in storage, not in the network. If you are really worried about network scalability, the easiest approach for me is to enable VLANs and allow trunking. You can then combine the 4 NICs in one vSwitch and let the default load balancing policy (route based on originating virtual port ID) do its thing.
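To make that concrete, below is a rough pyVmomi sketch of that layout: one standard vSwitch backed by the four uplinks plus a VLAN-tagged port group. The vmnic names, VLAN ID and port group name are placeholders for whatever your environment actually uses, and the teaming policy is deliberately left at the default:

    # Hedged sketch: build the 4-uplink vSwitch and a tagged port group on one
    # host. "host" is a vim.HostSystem already retrieved via pyVmomi; the NIC
    # names and VLAN ID are placeholders.
    from pyVmomi import vim

    def create_trunked_vswitch(host):
        net_sys = host.configManager.networkSystem

        # Standard vSwitch with four 1 Gbit/s uplinks. Not setting any teaming
        # policy keeps the default "route based on originating virtual port ID".
        vss_spec = vim.host.VirtualSwitch.Specification()
        vss_spec.numPorts = 128
        vss_spec.bridge = vim.host.VirtualSwitch.BondBridge(
            nicDevice=["vmnic2", "vmnic3", "vmnic4", "vmnic5"])
        net_sys.AddVirtualSwitch(vswitchName="vSwitch1", spec=vss_spec)

        # VLAN-tagged port group for VM traffic (VLAN 100 is only an example).
        pg_spec = vim.host.PortGroup.Specification()
        pg_spec.name = "VM Network - VLAN 100"
        pg_spec.vlanId = 100
        pg_spec.vswitchName = "vSwitch1"
        pg_spec.policy = vim.host.NetworkPolicy()
        net_sys.AddPortGroup(portgrp=pg_spec)

The physical side still has to match: the HP 2510 ports facing those uplinks would need the same VLANs tagged on every port, so that any uplink can carry any port group.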

Just my 2 cents....

Regards. "If found useful, kindly mark answers Correct or Helpful" http://rh5592.com
GFK
Enthusiast

I would also suggest a VLAN implementation; the HP 2510 does support VLANs. As rh5592 says, create one vSwitch with 4 NICs and another with two NICs for management, each with a different VLAN ID. For the rest of the upgrade, follow gabinun's post and you should be fine.

Gert Kjerslev | If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
ruisantos
Contributor

rh5592 and GFK,

This is a relatively small environment. VLANs are already in use for a few smaller networks, like voice and other connectivity. I also agree with the KISS principle :)

My hope was that I had misread some technical document and that there was a way to configure LACP, preferably across multiple switches, with the Essentials version. Unfortunately, that does not seem to be the case :(

I will go with the "route based on originating port ID" alternative.

Thanks to all of you that helped on this matter.

Rui

RayHahn
Contributor

Another thing to be aware of...

When bonding together multiple connections, you do get more bandwidth, but maybe not in the way you might think. If you have four 1 Gbit/s connections bonded together, then 4 Gbit/s of bandwidth is available, but it's split up into four 1 Gbit/s streams, and a single user can only use one stream. So if there's a single user, it will feel like 1 Gbit/s to him. But if there are four users, each of them can use a 1 Gbit/s connection (as long as the networking spreads the load out properly).

This becomes even more important to know when designing iSCSI storage. Ten 1 Gbit/s connections bonded together will not give you the same thing as a single 10 Gbit/s Ethernet connection: it's ten 1 Gbit/s streams vs. one 10 Gbit/s stream.
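A toy illustration of that point (plain Python, not vSphere code; the numbers and the simple modulo "hash" are made up just to show the behaviour):

    # Toy model: with per-port load balancing, each virtual port is pinned to
    # one uplink, so a single VM tops out at that uplink's speed no matter how
    # many uplinks are in the team.
    UPLINKS = 4
    LINK_SPEED_GBPS = 1.0

    def uplink_for_port(port_id):
        # Stand-in for the real hash: simple modulo over the uplink count.
        return port_id % UPLINKS

    def max_aggregate_gbps(vm_port_ids):
        # Aggregate throughput only grows when VMs land on different uplinks.
        used = {uplink_for_port(p) for p in vm_port_ids}
        return len(used) * LINK_SPEED_GBPS

    print(max_aggregate_gbps([7]))            # 1.0 -> one VM still sees 1 Gbit/s
    print(max_aggregate_gbps([7, 8, 9, 10]))  # 4.0 -> four VMs can fill the team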

Ray

ruisantos
Contributor

Hi RayHahn,

Thanks for your input on this.

I am aware of the limitations of EtherChannel / bonding and most of its policy modes.

And it is as you state: ten trunked 1 Gbit/s NICs are not the same as one 10 Gbit/s NIC.

I will use the default ESXi policy, monitor its network activity, and act accordingly. I'll post the results here as soon as I have them.

Once again, thanks for your input RayHahn,

Rui
