VMware Cloud Community
ctcbod
Enthusiast

Novice vSAN connectivity question 1GB -> 10GB

Hi all,  I’m looking at moving from a traditional 3x host, 2x 1GB switch, 1x SAN ESXi environment to a vSAN HCI environment and am trying to get my head around connectivity needs.

I know my 1GB switch fabric is my potential bottleneck, but what’s the best way to overcome this?

Our core network switches will support 10GB soon, but they will carry other network traffic and will not be used exclusively by the HCI hosts.

Should I still have dedicated 10GB switches exclusively for the 3 hosts for iSCSI traffic, and then 10GB NICs on the hosts going back to the core network for VM traffic?

Or is it OK to VLAN the core network switches for iSCSI traffic and have all 10GB going back to the core switches, effectively negating the need for dedicated  10GB switches for the hosts? (budget is tight!)

It’s been a long time since I’ve been on a VMware course – I feel I need to revisit!

Thanks to the community in advance.

5 Replies
TheBobkin
Champion

Hello ctcbod,

A warm (like our caches should be :smileygrin:) welcome to vSAN/HCI.

"I know my 1GB switch fabric is my potential bottleneck, but what’s the best way to overcome this."

Whether this is sufficient to run this cluster depends on the workload and thus throughput, latency requirements of the VMs etc.

I am going to assume you are planning to implement a Hybrid solution here (HDD capacity drives), as All-Flash requires 10GB (shared is okay) networking.

"I know my 1GB switch fabric is my potential bottleneck, but what’s the best way to overcome this."

This is not necessarily going to be the (main) bottleneck but again this depends on the specs of the cache and capacity drives you are using; and of course their quantities and their usage.

"Should I still have dedicated 10GB switches exclusively for the 3 hosts for iSCSI traffic  then 10GB NICS on the hosts going back to the core network for VM traffic?"

This really depends on the requirements of the workload still running off the iSCSI SAN - are these currently operating fine on 1GB networking? My advice would be to get a better measure of current iSCSI and proposed vSAN traffic and work out whether they will impinge on one another on the same switches.
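
If it helps, here is a minimal sketch of the kind of summary I mean; the sample figures below are made up, and in practice the readings would come from your SAN’s performance stats or esxtop exports:

```python
import math

# Minimal, purely illustrative sketch of summarising sampled IOPS readings into
# a 95th-percentile figure plus the peak. The sample list is made up; in
# practice the readings would come from the SAN's performance stats or esxtop.
def nearest_rank_percentile(samples, pct):
    """Return the pct-th percentile of samples using the nearest-rank method."""
    ordered = sorted(samples)
    rank = min(len(ordered), math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

iops_samples = [450 + 10 * i for i in range(20)] + [2500]  # steady load + one backup spike
print("95th percentile IOPS:", nearest_rank_percentile(iops_samples, 95))
print("Peak IOPS:", max(iops_samples))
```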

"Or is it OK to VLAN the core network switches for iSCSI traffic and have all 10GB going back to the core switches, effectively negating the need for dedicated  10GB switches for the hosts? (budget is tight!)"

Not sure I understand this fully - ideally you should be segregating traffic with VLANs anyway. Whether your current core switches can handle this workload, or whether you require dedicated switches, depends entirely on the intended workload.

"It’s been a long time since I’ve been on a VMware course – I feel I need to revisit!"

If you want links to any particular aspect of vSAN/vSphere, PM me and I will try to accommodate. Reading back through topics you find interesting in this and other Communities sub-forums is also recommended; otherwise, for VMware/vSphere in general, looking at what vBrownBag are covering is always a good way of keeping current.

Bob

ctcbod
Enthusiast

Thanks very much Bob, very helpful and insightful.

Yes, we’ll look at a hybrid solution with the bulk of the storage being HDD.  In terms of workload, we’re currently looking at about 550 IOPs (95th percentile) peaking briefly at 2500 IOPs (during backups) so our current SAN isn’t exactly breaking into a sweat.  We will however be bringing in some web services that will increase this somewhat.

We’ve traditionally had a 1GB dedicated switch fabric between hosts and SAN. In our upcoming hardware refresh, we are looking at not buying 10GB switches dedicated to the hosts, but instead having both the iSCSI (2x 10GB per host) and VM networks (2x 10GB per host) running back to the same core LAN switches. These would be VLANed, so I’m just trying to figure out whether this configuration will be OK.

TheBobkin
Champion

Hello ctcbod,

"Yes, we’ll look at a hybrid solution with the bulk of the storage being HDD."

Just an FYI - in a Hybrid configuration the SSDs are not used for capacity, just as a cache tier split roughly 70/30 between read cache and write buffer.
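
As a rough illustration of that split (the drive sizes below are example numbers only, and the 10% sizing guideline is an assumption to verify against the vSAN design documentation):

```python
# Rough illustration of how a hybrid cache device gets carved up per the 70/30
# split mentioned above. The drive and capacity figures are example numbers
# only, not a recommendation.
cache_ssd_gb = 400                      # cache SSD in one disk group (example)
read_cache_gb = cache_ssd_gb * 0.70     # read cache portion
write_buffer_gb = cache_ssd_gb * 0.30   # write buffer portion
print(f"Read cache  : {read_cache_gb:.0f} GB")
print(f"Write buffer: {write_buffer_gb:.0f} GB")

# Commonly cited sizing guideline (treat as an assumption and check the vSAN
# design documentation): cache of roughly 10% of anticipated consumed capacity.
consumed_capacity_gb = 4000
print(f"Suggested cache for {consumed_capacity_gb} GB consumed: "
      f"{consumed_capacity_gb * 0.10:.0f} GB")
```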

"In terms of workload, we’re currently looking at about 550 IOPs (95th percentile) peaking briefly at 2500 IOPs (during backups) so our current SAN isn’t exactly breaking into a sweat.  "

While this obviously depends on the IO size, profile and pattern, even an average 3-4 node vSAN would likely eat these for breakfast.
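
To show why the IO size matters, a quick back-of-envelope conversion of those IOPS figures into bandwidth - the IO sizes used below are assumptions purely for illustration:

```python
# Back-of-envelope conversion of IOPS at an assumed IO size into bandwidth,
# compared against theoretical 1GbE and 10GbE line rate. The IO sizes are
# assumptions purely for illustration.
def mb_per_sec(iops, io_size_kb):
    return iops * io_size_kb / 1024

GBE_1 = 1 * 1000 / 8     # ~125 MB/s theoretical for 1GbE
GBE_10 = 10 * 1000 / 8   # ~1250 MB/s theoretical for 10GbE

for iops, io_kb in [(550, 8), (550, 32), (2500, 64)]:
    tput = mb_per_sec(iops, io_kb)
    print(f"{iops} IOPS @ {io_kb} KB = {tput:6.1f} MB/s "
          f"({tput / GBE_1:.0%} of 1GbE, {tput / GBE_10:.1%} of 10GbE)")
```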

"These would be VLANed so I’m just trying to figure out whether this configuration will be OK."

It is my understanding that segregating Storage/vSAN traffic onto its own switch/stack is generally advisable; VLANing this would logically segregate traffic, but it would still be drawing from a shared resource. GreatWhiteTec is more into the physical network side of things and perchance might be able to weigh in here.

Bob

GreatWhiteTec
VMware Employee

Hi Guys,

I don't know your network layout or network devices, but I can weigh in on recommended approaches. Ideally we would like to see the different network types segregated by VLANs on a TOR switch. We also highly recommend that you have a dedicated 10GB NIC for vSAN traffic, and one or more NICs for other traffic. You can make these active/standby to avoid single points of failure, which brings me to the next point.

Try to use a vDS (included with vSAN for FREE) with NIOC so that if all the traffic falls onto a single NIC due to other NIC failures, you can make sure vSAN takes priority. For NIOC, don't use limits or reservations, just set up shares (vSAN high, vMotion low, Other normal).
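
To make the shares suggestion concrete, here is a small sketch of how those levels would divide one surviving 10GB uplink under contention - the numeric share values are the usual high/normal/low defaults and should be adjusted to whatever your vDS actually uses:

```python
# Illustration of how the suggested NIOC share levels would split one surviving
# 10GB uplink if everything failed over onto it. The numeric share values are
# the usual high/normal/low defaults; adjust to whatever your vDS actually uses.
uplink_gbps = 10
shares = {
    "vSAN": 100,        # high
    "VM traffic": 50,   # normal
    "vMotion": 25,      # low
}

total_shares = sum(shares.values())
for traffic, share in shares.items():
    # Shares only matter under contention; idle traffic types release their slice.
    guaranteed = uplink_gbps * share / total_shares
    print(f"{traffic:<10}: ~{guaranteed:.1f} Gbps under contention")
```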

Also make sure that there is no oversubscription on your switch stack, and none going to the core.
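
A quick back-of-envelope way to sanity-check that, using the NIC counts mentioned in this thread and a purely hypothetical uplink figure:

```python
# Quick sanity check of the oversubscription point: compare the host-facing
# bandwidth with the uplink bandwidth towards the core. The uplink figure is a
# placeholder assumption; plug in your actual design.
hosts = 3
nics_per_host = 4             # 2x 10GB storage + 2x 10GB VM traffic, per the thread
nic_gbps = 10
uplink_to_core_gbps = 2 * 40  # e.g. two 40GB uplinks (purely illustrative)

host_facing_gbps = hosts * nics_per_host * nic_gbps
ratio = host_facing_gbps / uplink_to_core_gbps
print(f"Host-facing: {host_facing_gbps} Gbps, uplinks: {uplink_to_core_gbps} Gbps, "
      f"oversubscription {ratio:.1f}:1")
```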

Although having a dedicated switch may make things easier, it is not something we require, and I personally don't run into that setup often.

Hope this helps...

ctcbod
Enthusiast

Thanks both – again, very helpful. 

Our budget here is tight, so is it better to spend the money on dedicated 10GB switches for the iSCSI stack (which would mean running VM traffic back to the network core switches over 1GB), or on upgrading the core switches to 10GB and having both iSCSI and VM traffic running back to VLAN’d 10GB ports on the core switches?

In terms of the hardware we have, we’ve not bought the new hosts yet, but each host will have 2x dedicated 10GB NICs for iSCSI and possibly 2x 10GB for VM traffic (we could possibly use a few 1GB for VM traffic if need be). The network core switches will be a couple of Cisco Catalyst 3850 48-port switches (12x 10GB ports); the rest of the 1GB ports on the core switches will be used for ESXi management and other servers on our LAN. (vMotion can stay on its current dedicated 1GB switch fabric.)

If this physical config works, we’ll then look at the vSAN and vDS configs – thanks for the advice here too.
