VMware Cloud Community
aacjao
Contributor

vDS and iSCSI VMKernel

Is it better to use standard vSwitches or vDS switches for the iSCSI VMkernel ports?  Which is the better route to go?

Is there documentation outlining the process somewhere?

rickardnobel
Champion

Both should be fine, but there is no real advantage to having the iSCSI VMkernel interfaces on a distributed switch, since they are very static. (For VM portgroups with many changes and frequent additions you gain more from the distributed vSwitch.)

So if it feels more comfortable to keep the VMkernel NICs on an ordinary vSwitch, that should be no problem.

My VMware blog: www.rickardnobel.se
aacjao
Contributor

Might it make management easier to use vDS rather than configuring this on each host?  Is there documentation around setting this up on vDS?

rickardnobel
Champion

aacjao wrote:

Might it make management easier to use vDS rather than configuring this on each host?

For the iSCSI VMkernel interfaces it will not really make management easier, as you would still have to create "virtual interfaces" unique to each host, so it will be more or less the same amount of work.

aacjao wrote:

Is there documentation around setting this up on vDS?

The official iSCSI manual only documents how to set this up on standard vSwitches: http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-50-storage... - page 76.

For how to set up the VMkernel interfaces on a distributed switch, you could see this manual, page 32.

Most best-practice information on the web about iSCSI and VMware covers the ordinary vSwitches, which might make that setup easier to configure and troubleshoot.
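If you want to script the standard-vSwitch setup from the host shell rather than clicking through the client, a minimal esxcli sketch looks like this. The vSwitch name, vmnic numbers, and IP addresses here are illustrative placeholders, not values from this thread:

```shell
# Create a standard vSwitch and attach the two iSCSI uplinks
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3

# One port group and one VMkernel port per iSCSI path
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI2
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI1
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI2

# Static addressing on the iSCSI network (example addresses)
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.0.10.11 --netmask=255.255.255.0 --type=static
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.0.10.12 --netmask=255.255.255.0 --type=static
```

These commands must be run on the ESXi host itself (SSH or local shell), and the teaming override per port group still has to be applied afterwards, as discussed further down the thread.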

My VMware blog: www.rickardnobel.se
kcucadmin
Enthusiast

For me, where I'm having a problem understanding is when you step up to 10 GbE NICs.

I am migrating from dedicated 1 GbE NICs to using VLANs on a dual-port 10 GbE NIC.

So in the vDS I will have port groups for VM Network, iSCSI1, iSCSI2, vMotion, and Management, each on their own VLAN, using two 10 GbE uplinks to two Cisco Nexus 5548 switches with vPC.

I'm waiting on my second Nexus to show up before I can start piecing this puzzle together, but from my reading it sounds like I may want to leave my vCenter Server on legacy vSwitches. That's going to make this difficult, as all my storage will not be connected via the 10 GbE uplinks on the dvSwitch.

My understanding was that everything could ride over the 10 GbE uplinks. Should I move my vCenter Server outside the "10 GbE box" for DR, shutdowns, etc.?

I already know what a pain it can be to power everything back up from a complete power-off... and my guess is that if vCenter is down, the dvSwitch is down too? Or does it continue to function independently on each host, just with no config changes possible?

aacjao
Contributor

It continues to function; you're just not able to manage it.  Check out this article:

http://www.yellow-bricks.com/2011/04/21/distributed-vswitches-go-hybrid-or-go-distributed/

vGuy
Expert

kcucadmin wrote:

I already know what a pain it can be to power everything back up from a complete power-off... and my guess is that if vCenter is down, the dvSwitch is down too? Or does it continue to function independently on each host, just with no config changes possible?

You're absolutely correct! The hosts will continue to function as normal; you just will not be able to make any changes, such as adding additional ports, etc. The reason is that the data plane of the dvSwitch resides on the individual ESX/ESXi hosts and is not impacted by the unavailability of vCenter. vCenter holds only the control plane.
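You can see this split for yourself: even with vCenter down, the host-local side of the dvSwitch is still visible from each host's shell. A small sketch (run directly on an ESXi host, e.g. over SSH):

```shell
# The dvSwitch data plane lives on the host and keeps forwarding
# even when vCenter (the control plane) is unreachable.
esxcli network vswitch dvs vmware list   # dvSwitch name, uplinks, client ports

# VMkernel interfaces, including any that live on the dvSwitch
esxcli network ip interface list
```

What you cannot do while vCenter is down is change the dvSwitch configuration; that has to wait until the control plane is back.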

aacjao
Contributor

So, next question: should I configure a standard vSwitch for each iSCSI port, or just a single vSwitch for both ports?

rickardnobel
Champion

aacjao wrote:

So next question, should I configure a Standard Switch for each iSCSI port or just a single vSwitch for both ports?

You can do it both ways. If using a single vSwitch, you will have to enter the NIC teaming options and set one VMkernel NIC as active on one vmnic and unused on the other, and then the opposite for the other VMkernel NIC.
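The single-vSwitch teaming override can be scripted per port group. A sketch, assuming the port group and vmnic names from earlier in the thread (iSCSI1/iSCSI2 on vmnic2/vmnic3) and a software iSCSI adapter at vmhba33 — all of these names are placeholders for your own environment:

```shell
# Pin each iSCSI port group to a single uplink; the other uplink must
# end up "unused" (not standby) for iSCSI port binding to be valid.
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=iSCSI1 --active-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=iSCSI2 --active-uplinks=vmnic3

# Then bind both VMkernel ports to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
```

It is worth double-checking afterwards (in the vSphere Client NIC teaming dialog) that the unlisted uplink really landed in "unused" rather than "standby", since the binding is rejected otherwise.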

My VMware blog: www.rickardnobel.se
kcucadmin
Enthusiast

I've been doing some reading, and it seems there may be an issue with static/dynamic port assignment for VMs on a dvSwitch if vCenter is offline.

You cannot power on a VM if vCenter is offline and the VM is in a dvPortgroup that uses static/dynamic port assignment. It seems to work fine with ephemeral port binding.

Where this may become a problem is if your vCenter Server is a VM and you suffer a complete shutdown event.

You would need to power up your ESXi host, connect directly to that host, and then power on the vCenter VM. However, again, if it's a member of a dvPortgroup with static/dynamic port binding, it cannot power up.

So be careful where you put your vCenter VM.

rickardnobel
Champion

Robert Samples wrote:

I've been doing some reading, and it seems there may be an issue with static/dynamic port assignment for VMs on a dvSwitch if vCenter is offline.

I think that only applies to "dynamic" binding, but not "static". In static mode the configured port on the Distributed vSwitch is stored inside the VM's .vmx file, and the VM should be able to connect even with vCenter offline. Static is also the default.

If no VM could power on with vCenter offline, then the whole Distributed vSwitch would be extremely risky to use, and HA would have little chance of powering on VMs after a failure that also took out the vCenter VM.

My VMware blog: www.rickardnobel.se
vGuy
Expert

To add to Rickard's comment, static port binding is not affected by the unavailability of vCenter. Once a port is assigned to a VM in a port group with static binding, it is reserved and guaranteed until the VM is removed from the port group.

In my opinion static binding is the safest and best option to use. If there is a need for dynamic port assignment, an ephemeral port group could be used alongside it for management and critical VMs. A good read on the different port bindings: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=102231...

Also important to note that dynamic port binding has been deprecated in vSphere 5, and a new feature called Auto Expand (for static binding) has been added.

kcucadmin
Enthusiast

Good to hear. I'm planning a dvSwitch implementation now and was wondering how I was going to handle vCenter. Sounds like static port binding would be fine for me; I don't have so many VMs that I'd hit the ceiling...

vGuy
Expert

Yes, static binding is the recommended option.
