sandroalvesbras
Enthusiast

Changing the vswitch vmnic

Hi,

I have a server with 8 vmnics:

- 4x 10 Gb NICs

- 4x 1 Gb NICs

The four 10 Gb NICs I will dedicate to iSCSI; the 1 Gb NICs will be used for the LAN:

Standard Switch

Option 1

- vSwitch1 - Active vmnic0, Standby vmnic1 (connected to different physical switches with stack/LAG)

PortGroup VLAN ID: 1 (the VLAN ID is not strictly necessary; I have dedicated cables)

- vSwitch2 - Active vmnic2, Standby vmnic3 (connected to different physical switches with stack/LAG)

PortGroup VLAN ID: 2 (the VLAN ID is not strictly necessary; I have dedicated cables)

or

Option 2

- vSwitch1 - Active vmnic0, Active vmnic1, Standby vmnic2 and Standby vmnic3

PortGroup VLAN ID: 1

PortGroup VLAN ID: 2

What is the best configuration recommendation for iSCSI communication?

What concerns me is that the traffic here would be only 10 Gb, instead of 20 Gb as in Option 1.
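For reference, Option 1 would be built something like this on the ESXi shell (a sketch only; the switch, port group, VLAN, and vmnic names are from my example above):

# Create the first iSCSI vSwitch with an active/standby uplink pair
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
# vmnic0 active, vmnic1 standby for the whole vSwitch
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --active-uplinks=vmnic0 --standby-uplinks=vmnic1
# Port group for the first iSCSI path (the VLAN tag is optional in my case)
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-A
esxcli network vswitch standard portgroup set --portgroup-name=iSCSI-A --vlan-id=1
# vSwitch2 with vmnic2/vmnic3 would be built the same way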

Thank you.

12 Replies
a_p_
Leadership

Hard to tell without knowing anything about your environment.

Assuming that you have a storage system which supports the Round-Robin path policy, keep things simple and let ESXi distribute traffic across all uplinks.

Channeling may make sense - if at all - only with Distributed Virtual Switches. Also keep in mind that if you are using LACP, all links have to be active.
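If the array's claim rule doesn't already default to it, Round Robin can be set per device from the ESXi shell, for example (the naa. ID is a placeholder, and check your vendor's guide before lowering the IOPS limit):

# Set Round Robin as the path policy for one device
esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR
# Optional: switch paths after every I/O, as some vendor guides recommend
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxxxxxxxxxxxxxx --type=iops --iops=1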

André

sandroalvesbras
Enthusiast

Hi,

I understand that with a standard virtual switch I don't increase network throughput by adding more NICs, so I should let the vSwitch distribute the traffic automatically.

What made me unsure is having four vmnics available.

With two vmnics, we usually create one port group (iSCSI, VLAN ID A) with one vmnic active and the other standby, and a second port group (iSCSI, VLAN ID B) applying the same logic but with the vmnic order inverted.
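On the ESXi shell, that per-port-group override looks roughly like this (the port group names are only examples):

# Port group A: vmnic0 active, vmnic1 standby
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-A --active-uplinks=vmnic0 --standby-uplinks=vmnic1
# Port group B: same logic with the vmnic order inverted
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-B --active-uplinks=vmnic1 --standby-uplinks=vmnic0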

Yes, the storage supports Round Robin; it is a Dell SC Series.

With four vmnics, though, I was curious how I could improve network throughput, but I now understand that this is not applicable.

That is, two of these 10 Gbps NICs end up with no use.

I believe that if I had an Enterprise license and could use a vDS with LACP, I could set all NICs as active and actually increase network throughput.

Does that make sense?

Thank you.

a_p_
Leadership

Have you already read Dell's https://downloads.dell.com/manuals/common/sc-series-vmware-vsphere-best-practices_en-us.pdf?

It contains best practices regarding networking, advanced settings, etc., which help you get the most out of your storage.


André

IRIX201110141
Champion

For iSCSI and an SC?

Two vSwitches with one vmnic each. Separate subnets/VLANs, assuming that you have configured 2 fault domains on your SC. That's it... and of course no VMK binding and NO LACP!

The 2 other 10G NICs for LAN/vMotion/FT. I don't see a need for the 1 GbE NICs.
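A rough sketch of one of those two vSwitches (the names, vmnic, and IP are examples only; repeat for the second fault domain):

# One vSwitch, one uplink, one VMK per fault domain
esxcli network vswitch standard add --vswitch-name=vSwitch-iSCSI-FD1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-iSCSI-FD1 --uplink-name=vmnic2
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-iSCSI-FD1 --portgroup-name=iSCSI-FD1
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-FD1
esxcli network ip interface ipv4 set --interface-name=vmk2 --type=static --ipv4=10.10.1.11 --netmask=255.255.255.0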

Regards,
Joerg

sandroalvesbras
Enthusiast

IRIX201110141

You suggested separating the NICs into different vSwitches, but with that I lose physical redundancy.

I plan on using a single iSCSI vSwitch with the two NICs, separating them into port groups by VLAN ID and defining one NIC as active and the other as standby per port group.

So I have physical and logical redundancy.

The other two NICs, yes, I will use on another vSwitch for LAN / vMotion.

I will use the 1G NICs to connect to a legacy Dell EqualLogic network.

But my doubt persists.

If I have 4x 10 Gb NICs and want to use 40 Gb, I need to use LACP with a vDS, right?

Thank you.

sandroalvesbras
Enthusiast

a_p_

Yes, I read it; I saw that there are several options, and one of them is the one Joerg commented on.

But my question is more conceptual.

I know there is no right or wrong, but there are better settings that we can use.

I've seen a lot of people asking this question about increasing transfer capacity. The answer is very simple: increase the capacity of the NIC and the switch.

So, of all the options that exist on a vSwitch, I see most people using default settings like:

a standard switch with the default originating virtual port ID failover policy and two or more active NICs, so each VM is pinned to one active NIC. As you said, let VMware do its job.

What always confuses me is whether there is any configuration that uses the maximum possible network capacity; would that be a LAG with LACP and a vDS?

Or should I use a different failover configuration than originating port ID?

Thank you.

IRIX201110141
Champion

Stop!!

You use Round Robin as the path policy for iSCSI, and this is how you get physical redundancy.

If you go for your kind of redundancy, then you must take care of the VLAN, because the 2nd NIC needs it as well. From an SC perspective this is not a dual fabric any more.

Now stop again and back to the start. You have now told us that you also have a Dell PS (EqualLogic).

EQL requires:

- A single subnet/VLAN which takes all ports

- pSwitches with an ISL

- A single vSwitch with >=2 VMKs, each with one active vmnic and the others set to unused

- VMK binding to the swISCSI initiator (see the sketch after the next paragraph)

Compellent (SC) requires:

- Multiple subnets/IPs, depending on the number of FDs

- Separated pSwitches with no ISL. If you use combined pSwitches, then you separate the FDs with VLANs

- >=2 vSwitches with one VMK/vmnic each

- NO VMK binding to the swISCSI

As you see, these are two different setups. You cannot configure an ESXi host that handles both ways at the same time, because of the VMK binding. ESXi tries to reach each iSCSI target port from every bound VMK, and this is not possible. It ends in incredibly long boot times, depending on the number of LUNs, and missing datastores after a host reboot.
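For reference, the VMK binding for the EQL-style setup is done like this (the vmhba name differs per host; a sketch only, and remember each bound VMK's port group must have exactly one active uplink with the rest unused):

# Bind each iSCSI VMK to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2
# Verify the bindings
esxcli iscsi networkportal list --adapter=vmhba64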

To make it possible, Dell released a guide on how to make an SC and a PS compatible: you configure the SC with a single subnet and one/two FDs. I have had such a setup in house for 4 years now, and both storages also use synchronous replication (SC Live Volume and PS SyncRep).

What kind of SC do you have? I ask because, on paper, only the entry-level SC is user-installable. The joke is that all upper models use the same tool for setup today, but you need to be a certified SC installer (which I am).

Regards,
Joerg

sandroalvesbras
Enthusiast

IRIX201110141

Thank you for your clarifications. Very good!

The SC was installed by Dell, and the configuration I mentioned above was implemented by them in several projects, so I understand that it is correct, because I learned it from them.

In all the projects I implemented with Dell, they always used two NICs, whether 1 Gb or 10 Gb. They usually separate them into two VLANs, 130 and 140. You probably know these are the VLANs traditionally suggested by them.

I had never really noticed this active/standby NIC configuration, one per iSCSI vSwitch. But it makes perfect sense. This way you guarantee that even if one of the NICs goes down, both paths will pass through the physical link that is still working. That is, both VLAN 130 and VLAN 140 will use the same physical link in case of physical failure of one of the NICs.

Honestly, until today I had never seen an implementation with a single physical port per vSwitch, despite it being a valid configuration and included in the Dell documentation. As there are different FDs, I don't see a problem with using this idea.

Regarding the EQL, I don't see the problem you mentioned. Why?

This EQL is in production on another VLAN ID. My goal is to connect it to VMware on another vSwitch with a VLAN-tagged port group, configure port binding, and that's it! The volumes will appear and we will migrate the data.

I think I understand what you mean regarding the VMware iSCSI initiator with the SC and the EQL.

I will have to use the same iSCSI initiator to discover both the EQL and the SC, and you're telling me that won't work, even if I configure the port groups for the SC and the EQL with different VLAN IDs, right?

I will configure only one NIC to access the EQL, see the datastores, and migrate/copy the data to my new datastores on the SC.

NOTE: Regarding certification, as far as I know the only Dell devices that need a certified installer are the SC Series 4x onwards and VxRail. SC Series 3x and EQL do not require it, although taking the internal certifications is important. I may be wrong ...

The model is an SC3020.

------>

But my question got mixed up with iSCSI. Excuse me! A thousand apologies!

My question was about the port group for VMs, not iSCSI. They are actually two different subjects, and it was good to learn about both from you.

What configuration should I use when I need VMs to use the full bandwidth of my four NICs, for example? Or of two?

I was reading just now, and I understood that for a VM to use the full bandwidth of a configuration with a LAG on the pSwitch, I need to configure load balancing as IP hash, not originating port ID.

When using a LAG on the pSwitch and a vDS with LACP, if I do not set all the ports active in the vDS and use IP-hash load balancing, I understand that I will not get the benefit of increased bandwidth for all my VMs.
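From what I read, on a standard vSwitch the closest equivalent would be IP hash combined with a static EtherChannel on the pSwitch, since LACP itself requires a vDS; something like this (the vSwitch name is an example):

# IP-hash load balancing; requires a matching static EtherChannel on the physical switch
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash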

Tks.

IRIX201110141
Champion

"I had never really noticed this active and standby card configuration one per ISCSI vSwitch. But it makes perfect sense."

No. That's not possible when using VMK binding for the swISCSI initiator. If you try to bind a VMK backed by more than one NIC, ESXi complains. The use of swISCSI with LACP is not supported.

Now, talking only about your VM traffic:

  1. PG with 2x 10G NICs and the port ID policy (see the sketch after this list)
  2. PG with 2x 10G NICs and, when using a vDS, you can try LBT (Load-Based Teaming)
  3. PG and, when using a vDS, you can try LACP
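For method 1, the port ID policy is already the default on a standard vSwitch; a quick sketch to verify or set it explicitly (the vSwitch name is an example):

# Show the current teaming policy ("portid" is the standard vSwitch default)
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
# Set it explicitly if it was changed
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=portid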

In around 100 installations, only 1% go for LACP. It's the most complicated setup, and in case of a disaster/problem, when you need to steal a vmnic on the ESXi command line or in the Host Client because the VCSA is down, you have a long way to go...

If you see network congestion and have Enterprise Plus licenses, then go for LBT.

Most customers are fine using the first method. There is no right or wrong, and all 3 methods work very well.

If you go for 2 or 3, please keep in mind to use a separate PG for your VCSA/vSphere-related VMs and select the right PG attributes.

About SC installation: only the SC2000/3000 may be installed by customers. The SC4000 and above can't be ordered otherwise.

Regards,
Joerg

sandroalvesbras
Enthusiast

IRIX201110141

https://downloads.dell.com/solutions/storage-solution-resources/BPforPS_SC_CoexistenceESXiHostAndiSC...

This is the documentation you mean, right?

It guides you to a configuration with two NICs, allocating one vmnic as active in an iSCSI port group with the other unused, just as we do when configuring a PS EQL.

The key detail, from what I understood, is that both the SC and the EQL need to be on the same network.

I think you misunderstood me. When I talk about setting one vmnic active and the other standby, it is per port group on a standard vSwitch, not a vDS. This has been working without errors in an environment we implemented a year ago.

Since this is a new implementation, I think I will leave the PS EQL configured with only one of the 1 Gb vmnics I have available.

When configuring the SC, we will follow the documentation's recommendation, and after the migration is finished, we can redefine the vmnic settings using separate VLAN IDs for the FDs.

However, we will have to reconfigure the SC iSCSI as well; it will take some work.

In practice, it was not clear to me whether we will lose anything by using this configuration. If there are no losses, we can keep the SC configured like this even after decommissioning the PS EQL.

Do you see any problems from your experience?

sandroalvesbras
Enthusiast

IRIX201110141

I read the documentation again ...

I will configure the PS EQL normally to have access to the LUNs.

When adding the SC, do I just need to add the SC's VIP? Nothing more?!

I figured I would need to configure port binding for the SC iSCSI as well, and add the SC's VIP to do the discovery.
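If it really is just the discovery address, I assume it comes down to a single command like this (the adapter name and IP are invented for the example):

# Add the SC's VIP as a dynamic discovery (send targets) address
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.130.10:3260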

I am confused now... I think it's lack of sleep... :-)

IRIX201110141
Champion

Yes, that's the guide I mean.

If your EQLs are leaving the house, I suggest that you reconfigure the SC and ESXi back to the standard setup to avoid later trouble when dealing with Dell Support. It makes life easier when your one and only storage is set up in its original way, based on vendor best practices.

You need to check the SC and PS advanced iSCSI settings (Delayed Ack, NoopOutTimeout, LoginTimeout) for the swISCSI initiator, because IIRC there is a small difference. Also read the LATEST vSphere SC best practices for all the other ESXi/cluster-related settings. André posted one above, but I haven't checked whether it's the latest.
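The advanced parameters can be checked and set on the ESXi shell, roughly like this (the adapter name is an example; take the actual values from the current Dell guides):

# List the current advanced parameters of the software iSCSI adapter
esxcli iscsi adapter param get --adapter=vmhba64
# Typical changes seen in vendor guides; verify the values against the current Dell docs
esxcli iscsi adapter param set --adapter=vmhba64 --key=DelayedAck --value=false
esxcli iscsi adapter param set --adapter=vmhba64 --key=LoginTimeout --value=60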

Regards,
Joerg
