Success3
Enthusiast

vSAN, vMotion and VDS - Help

Looking for feedback regarding a four node vSAN setup.

We've got four nodes, each with:

2 × 1Gb NICs

2 × 10Gb NICs

What we've done so far:

Created a VDS with 3 uplinks.

vSAN port group - uplink 2 (active), uplink 3 (standby) - 10Gb-1

vMotion port group - uplink 3 (active), uplink 2 (standby) - 10Gb-2

Primary Servers port group - uplink 1 with a VLAN tag - 1Gb-1

Standby Servers port group - uplink 1 with a different VLAN tag - 1Gb-1
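
For reference, this is roughly what that port group layout looks like when scripted with pyVmomi. The vCenter address, VDS name, uplink labels, and port group name below are placeholders, not our real values:

```python
# Sketch: create the vSAN port group on an existing VDS with an explicit
# active/standby failover order, using pyVmomi. All names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab shortcut; use verified certs in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ctx)
content = si.RetrieveContent()

# Find the VDS by name
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "dvSwitch0")   # placeholder VDS name
view.Destroy()

# Teaming policy: uplink 2 active, uplink 3 standby (the vSAN layout above)
teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
    inherited=False,
    policy=vim.StringPolicy(inherited=False, value="failover_explicit"),
    uplinkPortOrder=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        inherited=False,
        activeUplinkPort=["Uplink 2"],
        standbyUplinkPort=["Uplink 3"]))

port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
    uplinkTeamingPolicy=teaming)

pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    name="vSAN-PG",          # placeholder port group name
    type="earlyBinding",
    numPorts=16,
    defaultPortConfig=port_cfg)

dvs.AddDVPortgroup_Task([pg_spec])   # returns a vCenter task; the vMotion PG is the mirror image
Disconnect(si)
```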

I wanted to create a management port group on the VDS and migrate the remaining 1Gb link and the management VMkernel over to it. The migration kept erroring out for some reason, so I decided to just keep them on their own VSS. Not sure if that was a good move or not.
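
In case it helps anyone spot what I'm doing wrong, the migration step for a single host boils down to something like this (pyVmomi sketch; the host and port group names are placeholders, and `si` is the connection from the snippet above):

```python
# Sketch: point an existing VMkernel adapter (vmk0) at a VDS port group for
# one host, which is what the wizard's migration step boils down to.
from pyVmomi import vim

content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.local")   # placeholder
view.Destroy()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
pg = next(p for p in view.view if p.name == "Mgmt-PG")                  # placeholder
view.Destroy()

# Re-point vmk0 at the distributed port group; the IP config is left untouched.
nic_spec = vim.host.VirtualNic.Specification(
    distributedVirtualPort=vim.dvs.PortConnection(
        switchUuid=pg.config.distributedVirtualSwitch.uuid,
        portgroupKey=pg.key))
host.configManager.networkSystem.UpdateVirtualNic("vmk0", nic_spec)
```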

The management subnet is isolated and must stay that way. It's a /28 I think. 

vSAN/vMotion are on the same subnet, a 172.x.x.x /24.

The primary and standby servers are on a different subnet. The standby servers are powered off until needed. I can't trunk management and the servers together.

So right now we have two single points of failure - the management link and the servers link. I was told not to worry about it because the physical switch is technically a single point of failure too: if it goes, everything goes.

What are our options?

2 Replies
unsichtbare
Expert

It seems like a really bad idea to combine 1Gb and 10Gb on the same vSwitch - if only for aesthetic reasons.

IMHO, use both 10Gb uplinks ACTIVE on a vDS for your vSAN/vMotion port groups.

Consider placing your Primary Servers port group (tagged with a VLAN) on the 10Gb vDS as well.

Place only your Management Network VMkernel (usually vmk0) on a VMware Standard vSwitch with your two 1Gb uplinks.
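
Roughly like this for the management side, if you want to script it with pyVmomi (the vmnic names, vSwitch name, and port group name are placeholders you'd swap for your own, and `si` is an existing connection):

```python
# Sketch: management-only standard vSwitch with both 1Gb uplinks, one host.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.local")   # placeholder
view.Destroy()

net_sys = host.configManager.networkSystem

# Standard vSwitch backed by the two 1Gb physical NICs
vss_spec = vim.host.VirtualSwitch.Specification(
    numPorts=128,
    bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic0", "vmnic1"]))
net_sys.AddVirtualSwitch(vswitchName="vSwitch-Mgmt", spec=vss_spec)

# Management port group on that vSwitch (vlanId=0 means untagged)
pg_spec = vim.host.PortGroup.Specification(
    name="Management Network",
    vlanId=0,
    vswitchName="vSwitch-Mgmt",
    policy=vim.host.NetworkPolicy())
net_sys.AddPortGroup(portgrp=pg_spec)
```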

The Invisible Admin - If you find me useful, follow my blog: http://johnborhek.com/
Success3
Enthusiast

I didn't know combining 1Gb and 10Gb links on a vSwitch was an issue, as long as the port groups are pointed at the specific uplinks you want them to use?

I could probably get away with using both 10Gb uplinks as active on vSAN/vMotion, since they are on the same /24 subnet anyway. I'd use NIOC and "Route based on physical NIC load" to balance it out. Right?
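
Something like this is what I'm picturing, as a rough pyVmomi sketch (the port group name and uplink labels are placeholders, and `dvs` is the switch object from my earlier snippet):

```python
# Sketch: flip an existing vSAN/vMotion distributed port group to
# "Route based on physical NIC load" with both 10Gb uplinks active,
# then enable NIOC on the VDS.
from pyVmomi import vim

pg = next(p for p in dvs.portgroup if p.name == "vSAN-PG")   # placeholder name

teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
    inherited=False,
    policy=vim.StringPolicy(inherited=False, value="loadbalance_loadbased"),
    uplinkPortOrder=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        inherited=False,
        activeUplinkPort=["Uplink 2", "Uplink 3"]))   # both 10Gb uplinks active

pg_cfg = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    configVersion=pg.config.configVersion,            # required for a reconfigure
    defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        uplinkTeamingPolicy=teaming))
pg.ReconfigureDVPortgroup_Task(pg_cfg)

# Turn on Network I/O Control on the switch; the per-traffic-type shares
# (vSAN vs. vMotion) can then be tuned in the vSphere client or via a
# follow-up ReconfigureDvs_Task.
dvs.EnableNetworkResourceManagement(enable=True)
```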

So all 4 hosts are connected to a 10Gb switch. That switch then has 1Gb links going to a separate 1Gb switch (192 subnet) for data (management and server traffic), and 10Gb links going to a separate 1Gb switch (172 subnet), which is needed for the time being anyway for migrating old data over.

I was thinking we could trunk the management and servers links together, but again they don't want to, because the physical 10Gb switch is already a single point of failure.

Not sure if this makes sense or not.
