VMware Cloud Community
bilalfuuu
Enthusiast

need help in vSAN design

Hi,

I'm having some trouble and need expert help. I have deployed vSAN on 3 HP Gen10 servers; each server has 2 SSDs and 12 HDDs. I configured RAID 1 across the SSDs on each host, and each host has one disk group containing that RAID 1 SSD volume (as cache) and 7 HDDs, with the remaining HDDs used as local storage. VMware now recommends that I break the SSD RAID 1 and present all drives, SSDs and HDDs, in pass-through mode with no RAID at all, so that cache and capacity can be used efficiently.

I need to redesign all of this without losing any data, so I would appreciate a design from the experts. I also need to disable the controller cache on the P408i-p and use only the SSDs for caching. How can I achieve this?

We are currently on vSphere 6.5, and I also wish to upgrade to 6.7.

The hypervisor is installed on internal SD cards, which are also in RAID 1.

Thanks

5 Replies
IRIX201110141
Champion

Well, you have to offload all your running VMs to a spare host before you can destroy the existing vSAN and reconfigure all hosts.

vSAN wants to see the individual drives, not any kind of LUN/vDisk hiding behind a RAID controller, so even a RAID 0 per single drive is not an ideal solution. Your P408i-p supports both RAID and HBA mode; check the VMware HCL or ask VMware Support whether HBA is a supported mode for it.
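
If you want to see what the controller currently presents before checking the HCL, something like this from the ESXi shell should help (the driver name is just what I would expect on a Gen10, so verify against your own output):

# Show the storage adapters and the driver they load (the P408i-p will likely show the smartpqi driver)
esxcli storage core adapter list

# Show the disks as ESXi sees them - with the current RAID 1 you will see one logical SSD volume instead of two SSDs
esxcli storage core device list | grep -iE "Display Name|Is SSD"

# Ask vSAN which devices it considers eligible; RAID volumes and in-use disks are flagged as ineligible
vdq -q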

For your vSAN nodes:

- 2 disk groups per host

- Each disk group uses one SSD as the cache device and 6 HDDs as capacity devices (example commands below)
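
Once the drives are in pass-through mode, creating those two disk groups from the ESXi shell looks roughly like this - the naa.* names are placeholders for your real SSD/HDD device IDs, and the UI works just as well:

# Disk group 1: first SSD as cache, six HDDs as capacity
esxcli vsan storage add -s naa.SSD1 -d naa.HDD1 -d naa.HDD2 -d naa.HDD3 -d naa.HDD4 -d naa.HDD5 -d naa.HDD6

# Disk group 2: second SSD as cache, the remaining six HDDs as capacity
esxcli vsan storage add -s naa.SSD2 -d naa.HDD7 -d naa.HDD8 -d naa.HDD9 -d naa.HDD10 -d naa.HDD11 -d naa.HDD12

# Verify both disk groups exist
esxcli vsan storage list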

You need a vCenter that can temporarily manage 4 hosts. You also need vSphere Standard, Enterprise, or eval mode so that you can use Storage vMotion to move the VMs around.

Perhaps one of the pros can answer the following: if you set up a 2-node cluster with a virtual witness somewhere in your environment, is it possible to add a 3rd vSAN node later and then remove the virtual witness?

Regards,

Joerg

iopsGent
Enthusiast

Looks like you have some spare HDDs on each host; can you move the VMs to them? If they are big enough, move all the VMs over, destroy the vSAN, and set it up again.

Please consider marking this answer as "correct" or "helpful" if you think your questions have been answered.
GreatWhiteTec
VMware Employee

Hi bilalfuuu,

According to the VCG, this controller only supports pass-through mode. VMware Compatibility Guide - vsan

In order to reconfigure the nodes without data loss, you can do the following, assuming you are using the default policy (FTT=1) for all objects and there are no FTT=0 objects.

Put the host in maintenance mode and select no data evacuation, as you don't have enough hosts to copy that data elsewhere. This is a perfect example of why having N+1 is a much better design. At this stage you still have one copy of your components and your witness objects, but you will lose one of the replicas during the transition.
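
From the ESXi shell the equivalent of "No data migration" would be something like this (the --vsanmode value noAction matches that UI option):

# Enter maintenance mode without evacuating vSAN data ("No data migration")
esxcli system maintenanceMode set --enable true --vsanmode noAction

# Confirm the host is actually in maintenance mode
esxcli system maintenanceMode get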

Verify all data is still accessible.
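
With vSAN 6.6 (ESXi 6.5.0d) or later you can do a quick check from any host in the cluster, for example:

# Cluster membership sanity check
esxcli vsan cluster get

# Object health summary - nothing should be worse than "reduced availability with no rebuild" while one host is down
esxcli vsan debug object health summary get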

While in maintenance mode, you can delete the disk groups from that host (no data evacuation), reboot the node, disable any caching and advanced features, and make the change to pass-through mode.
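
If you prefer the shell over the UI for that step, deleting a disk group is done by removing its cache device (the device name below is a placeholder):

# Find the cache (SSD) device of the disk group you want to remove
esxcli vsan storage list

# Removing the cache device removes the whole disk group it belongs to; the data on it is NOT evacuated
esxcli vsan storage remove -s naa.CACHE_SSD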

Once the server comes back up, re-create the Disk Groups for that node.

At this point you will need to wait for the objects to come back into compliance (2 copies of data), so this process can take a while. If you do not wait and repeat the process right away, some objects will be stale and some will be absent, causing your VMs to be unavailable and possibly lost. So move slowly, one host at a time, making sure you have 2 copies before moving to the next host.
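
You can watch the rebuild from the shell before touching the next host, for example (again vSAN 6.6+ for the debug namespace):

# Bytes/objects still resyncing - wait for this to reach zero
esxcli vsan debug resync summary get

# All objects should report healthy again before the next host goes into maintenance mode
esxcli vsan debug object health summary get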

Make sure everything is healthy.

Repeat for the next host, and then one more time for the last host, only doing one host at a time.

I'm not sure how your "local storage" is configured, so if you make changes to the drives on your controller it may also affect that local storage. Ideally, move that data out temporarily just in case.

If you are using the same controller for the remaining HDDs, take a look at this: VMware Knowledge Base

Hope this gives you an idea of the process. You may also follow up with GSS to validate the steps above.

TheBobkin
Champion

Hello bilalfuuu

Never apply RAID (other than RAID 0 individual VDs per disk, where supported) to drives before presenting them to vSAN - this is just asking for data-consistency issues and offers no benefit.

There are multiple options for walking the data over to a supported configuration:

- Create VMFS datastores on the disks not currently consumed by vSAN and use these as swing storage for moving data off the vsanDatastore, then reconfigure the disks, recreate the disk groups, and move the data back.

- Use external SAN storage as swing storage, in the same way as above.

- (Take back-ups!) Decommission one node (Ensure Accessibility option), apply an FTT=0 storage policy to the data, and create a new vsanDatastore on the empty node (with the correct disk configuration). Migrate enough data over that you can evacuate another node, configure it correctly and add it to the new cluster, then migrate the data off the last node, configure it correctly and add it to the new cluster as well, and finally set the data back to FTT=1.

- If you don't have Storage vMotion capability (for whatever reason), then cloning, vmkfstools -i (note the -W vsan option where relevant), or back-up/restore to the destination are likely your main feasible options (quick example below).
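
For reference, a manual vmkfstools clone of a single VMDK to swing storage would look something like this - the paths are placeholders and the VM must be powered off; cloning back onto vSAN later is where the -W vsan option mentioned above may come into play:

# Make sure the target folder exists on the swing datastore first
mkdir -p /vmfs/volumes/SwingVMFS/MyVM

# Clone the VMDK from the vsanDatastore to the VMFS swing datastore, thin-provisioned
vmkfstools -i /vmfs/volumes/vsanDatastore/MyVM/MyVM.vmdk /vmfs/volumes/SwingVMFS/MyVM/MyVM.vmdk -d thin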

"also i need to disable controller cache P408i-p"

Ask your hardware vendor as this varies and they should know the specifics.
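
If the HPE tooling is installed (e.g. the ssacli component from the HPE custom ESXi image), it can usually show and change the controller settings - treat the commands below as a sketch only and confirm the exact syntax and slot number with HPE before running anything:

# HPE's ssacli typically lives here on their custom image; show the current controller configuration
/opt/smartstorageadmin/ssacli/bin/ssacli ctrl all show config detail

# Switch the controller to HBA mode (all logical drives must be deleted first; a reboot is required)
/opt/smartstorageadmin/ssacli/bin/ssacli ctrl slot=0 modify hbamode=on

# Disable the drive write cache if your drives have no power-loss protection
/opt/smartstorageadmin/ssacli/bin/ssacli ctrl slot=0 modify dwc=disable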

"currently vSphere 6.5 is using, i also wish to upgrade it to 6.7 Version ."

Make sure to get your whole cluster configuration sorted out before doing this.

"Hypervisor installed on internal SD card which is also RAID 1 ."

This is fine - vSAN doesn't really care where ESXi boots from; you can even reimage/wipe the ESXi install without touching the vSAN disk-group data.

Bob

bilalfuuu
Enthusiast

Hi, thanks to all of you friends for your suggestions. Kindly confirm the design I have extracted from your expert advice, simplified for my case:

1. Create a local datastore on one of the hosts.

2. Migrate the VMs to that local datastore.

3. Delete the disk groups and then disable vSAN on the cluster.

4. Break the RAID 1 on the SSDs and disable the cache on the storage controller.

5. Present all SSDs and HDDs in pass-through mode.

6. Upgrade all ESXi hosts, enable vSAN, and create two disk groups per host.

7. Create an FTT=1 policy and migrate all VMs back to the vSAN datastore (verification below).
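
After step 7 I plan to verify each host with something like this (just a rough checklist):

# vSAN cluster membership and health
esxcli vsan cluster get
esxcli vsan health cluster list

# Two disk groups with pass-through devices on every host
esxcli vsan storage list

# All objects healthy again
esxcli vsan debug object health summary get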

Thanks

Bilal Ahmed
