Enthusiast

Solution design ideas for vSAN 2-node model

Hi All

Just have a question regarding a 2-node setup with witness using vSAN 6.6.1. This will be hybrid. An opportunity has come up for us to purchase 2 servers to run data modelling, which requires some decent raw compute power; however, we have to do this on a budget of around 22k, and it cannot sit on our main shared storage. Unfortunately that rules out vSAN Ready Nodes, but we are working closely with one of our partners, who is putting together a server build that is more cost-effective but still supported with vSAN.

For the 2 hosts we were looking at:

 

- 1 x Intel® Xeon Gold 6130 - 2.1GHz, 16 cores, 32 threads (to keep costs low and also to max the number of cores before having to increase Windows Server licensing)

- 64GB RAM per host

- 1 x cache and 2 x capacity disks, based on our storage requirements and also supported on the HCL.

To keep costs low and within budget we plan to have only one CPU per host and utilise vSphere Standard and vSAN Standard. We also don't need to invest in 10GbE, which we currently don't have anyway. I am aware that this sort of design is mainly intended for smaller branch-office setups, but could we utilise it in our main data center?

We have just recently upgraded vCenter to 6.5 U1g and our existing production cluster sits on it; the existing hosts are still on 6.0 U3. These new servers will be built with vSphere 6.5 using vSAN 6.6.1 and will be in a separate cluster managed by the same vCenter. The witness host will sit in our main production cluster, but that is on ESXi 6.0 U3. Would that be OK as it's a VM?

Any suggestions on this design would be greatly appreciated.

Many Thanks

VMware Employee

What are your performance requirements for the data modeling use case?

Going single-CPU to save cost is a common approach; however, I would pay close attention to the amount of RAM in each host.

A single disk group and hybrid may or may not satisfy your performance needs. You may want to look into your workload and get an I/O profile of it.

Disk types and controllers/HBAs are also an important factor during the design phase. Any info on this?

You can certainly run a 2-node cluster with 10GbE direct connect (no switch) and use 1GbE for management, VM, and witness traffic, etc. Your witness appliance should be at the same build level as the data nodes.
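As a sketch of what the direct-connect layout looks like on each host: witness traffic separation (available since vSAN 6.5) lets the routable 1GbE management vmkernel port carry witness traffic while the back-to-back 10GbE link carries vSAN data. The vmk0/vmk1 names below are assumptions for illustration (vmk0 = management, vmk1 = vSAN port on the direct-connect NIC); adjust to your actual vmkernel adapters.

```shell
# Tag the management vmkernel port (routable to the witness site)
# to carry witness traffic only.
esxcli vsan network ip add -i vmk0 -T=witness

# Tag the vmkernel port on the 10GbE direct-connect link
# to carry vSAN data traffic.
esxcli vsan network ip add -i vmk1 -T=vsan

# Verify which vmkernel interfaces carry which vSAN traffic types.
esxcli vsan network list
```

Run the same tagging on both data nodes; without the witness tag on a routable interface, the hosts will try to reach the witness over the non-routable direct-connect link.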

A+, DCSE, MCP, MCSA, MCSE, MCTS, MCITP, MCDBA, NCDA, NCIE-SAN, NCIE-BR, VCP4, VCP5, VCP5-DT, VCAP5-DCA _____________________ If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful.
Enthusiast

Hi


Thanks for this information.

We have now purchased the 2 nodes, based on HPE ProLiant DL380 Gen10 servers. I have a question regarding the 10GbE direct connect, as we are not using a 10GbE switch.

Is there any documentation on how connectivity is established between the 2 nodes on direct connect? I was expecting the 10GbE adapters to be shown under physical adapters, but they are not showing.
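For what it's worth, when an adapter doesn't appear under physical adapters it is usually because ESXi has not loaded a driver for it rather than a cabling issue. A quick way to check from the host shell (the grep pattern is just an illustration):

```shell
# List every NIC ESXi has claimed a driver for; if the 10GbE ports
# are missing here, suspect a driver/firmware or HCL issue.
esxcli network nic list

# Confirm the card is at least detected at the PCI level,
# even if no driver has claimed it.
lspci | grep -i ethernet
```

If the card shows in the PCI list but not in the NIC list, the driver for that controller likely needs to be installed or updated.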

In terms of cabling the 2 nodes together using CAT6A, would it be node 1 port 1 to node 2 port 1, or vice versa?

One question I have on the witness: does the OVF deploy a VM, or do I need to create it as a virtual host?

Any advice on this would be most appreciated.

Thanks
