john_its
Enthusiast

storage vmotion between 2 VSAN datastores

Hello everyone,

We have 1 VSAN cluster

vcenter appliance 5.5U3

5 esxi hosts: 5.5 U3

VSAN version 5.5

We want to create a new cluster, on the same vcenter, with 5 new hosts that will run ESXi 6.0U1 and migrate the VMs there

Is the below scenario possible or should we follow a different procedure?

1. Upgrade the vcenter to 6.0U1

2. Create a new cluster with 5 new esxi 6.0U1 hosts, on the same vcenter

3. Migrate the VMs from the existing cluster to the new one

4. Upgrade the hardware version, VMware Tools, etc. on the VMs

5. Remove/delete the old cluster & hosts, update the hosts (ESXi, firmware, etc.) and add them to the new one.

These hosts are part of a Horizon View environment. The new hosts will also be added to that environment (automatically, I guess, since the vCenter is the same).

We have some performance issues on this cluster so we want to move the data to another cluster.

We want to avoid upgrading the VSAN version of the existing cluster to 6.1, because whenever a rebalancing operation is running we have a lot of issues with our Horizon View users.

Thanks

zdickinson
Expert

Good morning, I think that's right.  You can change host and datastore at the same time.  As long as the hosts between the clusters match or have matching EVC levels, you should be fine.  I would probably start with a small number (1) of virtual machines.  If you're having problems with a rebalance, a storage migration of a larger number of virtual machines may also cause problems.  Thank you, Zach.
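
Since rebalance activity is already causing problems, it may also be worth confirming from RVC that nothing is resyncing before each batch of migrations. A rough sketch from the VCSA shell (the datacenter and cluster names below are placeholders for your 5.5 cluster):

rvc administrator@vsphere.local@localhost
> vsan.check_state /localhost/DC1/computers/Cluster55         # any inaccessible or invalid objects?
> vsan.resync_dashboard /localhost/DC1/computers/Cluster55    # should show nothing left to resync
> vsan.disks_stats /localhost/DC1/computers/Cluster55         # per-disk usage/balance on the source cluster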

john_its
Enthusiast

Hi Zach,

Actually I was more concerned about whether the migration can happen at all, since the old cluster will be 5.5 and the new one will be 6.0 (both host and VSAN versions).

The migration will happen when we can have most of the VMs powered off to minimize the disk I/O, and we will migrate only a few at a time.

On a VSAN cluster, do the hosts need to be down while upgrading the VCSA?

One other question: since I will have to shut down the whole VSAN cluster to do some configuration on the switches, is there a "proper" way to shut down and power on a VSAN cluster?

The vcenter is located on a different cluster and storage so it will always be live.

Thank you

zdickinson
Expert

On a VSAN cluster, do the hosts need to be down while upgrading the VCSA?

No, vSAN will operate independently of your vCenter instance.


"proper" way to shutdown and power on a vsan cluster?

I believe it is to shut the hosts down one by one, and then bring them up in reverse order.  The last one shut down is the first one to be powered on.  Be patient and use RVC to check the resync; don't power another host on until all resync is done.  However, if you have network redundancy, can you not make the switch changes while the hosts are online?  Thank you, Zach.
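
For reference, a rough per-host sketch of that sequence, assuming the VMs are powered off first and that your 5.5 build has the --vsanmode option (worth double-checking before relying on it):

# one host at a time: enter maintenance mode without evacuating data, then power off
esxcli system maintenanceMode set --enable true --vsanmode noAction
esxcli system shutdown poweroff --reason "switch maintenance"

# after the switch work, power the hosts back on in reverse order and take each out of maintenance mode
esxcli system maintenanceMode set --enable false

# then from RVC, wait until this shows nothing left to resync before powering on the VMs
> vsan.resync_dashboard /localhost/DC1/computers/Cluster55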

john_its
Enthusiast

Hi Zach,

" However, if you have network redundancy, can you not make the switch changes while the hosts are online?  "

We have 2 switches in  stuck mode.

That makes a bit difficult to upgrade them, since during the firmware upgrade the whole stuck goes down. This is the No1 change.

The second change we want to do is to enable IGMP snooping in order to build the second cluster.

And now comes another question, which I did not know whether I should post in the VSAN or the Networking section.

At the moment the VSAN VMkernel is configured with no VLAN ID on the ESXi hosts. We have assigned 2 NICs for redundancy. The physical switch ports are not configured with any subnet for VLAN 1 (it is the native VLAN of the physical switches).

This is probably not good, since when the second cluster is ready my guess is that there might be multicast issues (I am not sure about that, though).

One option is to change the VLAN ID for the VSAN VMkernel, although I am not sure if that is supported; I am going through the VSAN documentation to check this.

Another option is to assign a subnet to the native VLAN, and enable IGMP snooping for the native VLAN as well as for the VLAN used by the second cluster.

I am still researching how we should proceed with this and what the safest scenario is.


I'd be glad to hear any suggestions on this too.


Thanks

zdickinson
Expert

Good morning, understood on the switching redundancy.  What model/brand of switches are you using?

As for the VLAN question, it's probably best practice to get the clusters on separate VLANs.  But I don't think you have to in this case, because I believe vSAN uses directed multicast.  I would confirm with VMware though.  Thank you, Zach.
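
If you do end up putting the vSAN VMkernel on a tagged VLAN, the ESXi side is just a VLAN ID on the vSAN port group. A minimal sketch, assuming a standard vSwitch and placeholder names (the physical switch ports would need to trunk that VLAN, with IGMP snooping sorted out, before you change it):

# tag the vSAN port group with VLAN 100 (placeholder ID), then confirm it took effect
esxcli network vswitch standard portgroup set --portgroup-name "vSAN" --vlan-id 100
esxcli network vswitch standard portgroup list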

john_its
Enthusiast

Our switches are Dell N4032F.

Paul_Sheard
Enthusiast

As a best practice, don't forget to change the multicast addresses on the 2nd VSAN cluster.

VMware KB: Changing the multicast address used for a VMware Virtual SAN Cluster

Virtual SAN Troubleshooting: Multicast - VMware vSphere Blog - VMware Blogs
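
A quick way to see what each cluster is currently using is below; the change itself is done on every host with "esxcli vsan network ipv4 set", and the exact agent/master address options should be taken from the KB above rather than from memory:

# shows the Agent and Master Group Multicast Addresses for this host's vSAN network
# (defaults are 224.2.3.4:23451 and 224.1.2.3:12345)
esxcli vsan network list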

Paul Sheard VMware Consultant (Contract) VCP6 DCV NV CMA DTM
john_its
Enthusiast

The second article mentions:

"As a suggestion for performance optimization, if two Virtual SAN clusters do exist on the same layer 2 network segment, modifying the multicast addresses for one of the clusters will reduce the amount of multicast traffic received for each Virtual SAN cluster and possibly resolve the “Network Status: Misconfiguration detected” message as well."

As we intend to have different subnets for the two VSAN clusters' VMkernel networks, is IGMP required?

Another question:

Is LACP the only way to achieve the maximum possible throughput from your VSAN NIC teaming? On a standard vSwitch it usually only uses the bandwidth of one of the two NICs.

Thanks

zdickinson
Expert

Good afternoon, I believe IGMP is required no matter how the networking is configured.
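
One way to sanity-check multicast after enabling IGMP snooping is to watch for the vSAN heartbeats on a host's vSAN VMkernel. The interface name and ports below are assumptions (vmk2 and the default multicast ports), so adjust to your setup:

# should show periodic multicast traffic from the other nodes in the cluster
tcpdump-uw -i vmk2 udp port 23451 or udp port 12345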

My understanding is that ESXi out of the box does not do link aggregation for any of its traffic: vMotion, management, VM traffic, etc.  The teaming is only for redundancy.  There are ways around this.  If you have two NICs for vMotion, make two vMotion port groups: NIC1 active in one, and NIC2 active in the other.  Then it uses both NICs.  I don't know if you can do this with other types of traffic.
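
A rough sketch of those overrides on a standard vSwitch, with placeholder port group and vmnic names (each port group also needs its own vMotion-enabled VMkernel):

# first vMotion port group prefers vmnic0, second prefers vmnic1
esxcli network vswitch standard portgroup policy failover set --portgroup-name "vMotion-1" --active-uplinks vmnic0 --standby-uplinks vmnic1
esxcli network vswitch standard portgroup policy failover set --portgroup-name "vMotion-2" --active-uplinks vmnic1 --standby-uplinks vmnic0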

Thank you, Zach.
