VMware Cloud Community
Rick88
Contributor

Differences between iSCSI connections to storage in vSphere 4.1 and 5.0.

I am running into trouble configuring the storage connections in a new vSphere 5.0 test environment. In our vSphere 4.1 environment it was possible to create a virtual switch, add two Broadcom NetXtreme II 5709 NICs, set both to active, and set IP hash as the load balancing policy (EtherChannel/LACP on the physical switches). This configuration used one VMkernel port and one IP address. We then added both our NetApp NFS storage and EqualLogic iSCSI storage. In vSphere 5.0 I can't create this same configuration. Do I need to create separate VMkernel ports with separate IP addresses? The idea was to have redundant connections to storage and the combined throughput of both NICs, possibly adding more NICs if needed in the future. Any help with this would be greatly appreciated.

How do I work around this:

"VMkernel network adapter must have exactly one active uplink and no standby uplinks to be eligible for binding to the iSCSI HBA."

6 Replies
vmnomad
Enthusiast

I can definitely say that the way you were configuring iSCSI multipathing is incorrect.

The main idea is to let the Native Multipathing Plugin decide which network path to use, not the vSwitch.

At the network layer you must have one VMkernel port per physical NIC.

This technique applies to both vSphere 4.1 and 5.0, but in 5.0 it is much easier and faster to configure.

Here is a brief sequence of the steps; a command-line sketch follows the list.

1. Create two VMkernel interfaces and assign one IP address to each.

2. Configure each VMkernel port to use only one physical NIC.

3. Go to the software iSCSI storage adapter, enable it, and bind both VMkernel ports to it.

4. Connect to your iSCSI storage.

5. Rescan devices and create your VMFS datastore.

6. Set the desired load balancing method per VMFS datastore, or configure the default policy before creating new VMFS datastores.
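
To make these steps concrete, here is a rough command-line sketch using esxcli on ESXi 5.0 (the same thing can be done in the vSphere Client). All names and addresses below (vSwitch1, iSCSI-1/iSCSI-2, vmk1/vmk2, vmnic2/vmnic3, vmhba33, the 10.10.10.x addresses) are examples only; substitute your own.

    # Steps 1-2: one port group and one VMkernel port per physical NIC
    esxcli network vswitch standard portgroup add --portgroup-name iSCSI-1 --vswitch-name vSwitch1
    esxcli network vswitch standard portgroup add --portgroup-name iSCSI-2 --vswitch-name vSwitch1
    esxcli network ip interface add --interface-name vmk1 --portgroup-name iSCSI-1
    esxcli network ip interface ipv4 set --interface-name vmk1 --ipv4 10.10.10.11 --netmask 255.255.255.0 --type static
    esxcli network ip interface add --interface-name vmk2 --portgroup-name iSCSI-2
    esxcli network ip interface ipv4 set --interface-name vmk2 --ipv4 10.10.10.12 --netmask 255.255.255.0 --type static

    # Override the failover order so each port group has exactly one active uplink and no standby uplinks
    esxcli network vswitch standard portgroup policy failover set --portgroup-name iSCSI-1 --active-uplinks vmnic2
    esxcli network vswitch standard portgroup policy failover set --portgroup-name iSCSI-2 --active-uplinks vmnic3

    # Step 3: enable the software iSCSI adapter and bind both VMkernel ports to it
    esxcli iscsi software set --enabled=true
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2

    # Steps 4-5: add the target, then rescan and create the VMFS datastore
    esxcli iscsi adapter discovery sendtarget add --adapter vmhba33 --address 10.10.10.100:3260
    esxcli storage core adapter rescan --adapter vmhba33

    # Step 6 (optional): set Round Robin on a device; replace naa.xxxx with your device identifier
    esxcli storage nmp device set --device naa.xxxx --psp VMW_PSP_RR

With each iSCSI port group limited to exactly one active uplink, the binding error quoted in the question no longer applies.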

It is also highly recommended to read this guide: http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-50-storage...

VCP-4/5, VCAP4-DCA ID-553, vExpert, CCNP, CCSP, MCSE http://vmnomad.blogspot.com
Rick88
Contributor

Thanks for the answer. I marked the question as answered. Your answer fits with what I read in the storage best practices guide; I just didn't understand why a previous employee configured the vSphere 4.1 environment differently. The way it was done seemed fine, but I understand it was not correct.

With that said, would you make any changes to the 5.0 configuration if I told you I also want to connect to NetApp NFS storage using the same NICs? I added NFS storage and both seem to operate fine together. However, when I did a little failover testing, the iSCSI storage reroutes if one of the adapters is unplugged, but NFS traffic does not seem to reroute. If I take down one adapter it throws "all paths down" events; if I take the other adapter down it throws path redundancy events but continues to operate. Not sure what happens in vSphere 4.1 if I do this, and I can't test there right now. Should I configure the NICs differently for NFS storage?

Josh26
Virtuoso

Rick88 wrote:

Thanks for the answer. I marked the question as answered. Your answer fits with what I read in the storage best practices guide; I just didn't understand why a previous employee configured the vSphere 4.1 environment differently.

Although "wrong", I've seen plenty of ESXi 4.x installations running that way. The usual reason is down to a technician that didn't know any better.

vmnomad
Enthusiast

Sure, these incorrectly configured ESXi hosts can run smoothly, and I even believe that load balancing can work quite well. However, I am convinced that NIC failure detection is not as reliable or as fast as storage path failure detection, so it all looks good until the first networking incident.
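
As a side note, a quick way to watch this during a failover test is to check the path state from the ESXi shell; both commands below are standard esxcli commands and assume nothing host-specific:

    esxcli storage core path list    # lists every path per device and whether it is active or dead
    esxcli storage nmp device list   # shows the SATP and path selection policy (e.g. Round Robin) per device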

VCP-4/5, VCAP4-DCA ID-553, vExpert, CCNP, CCSP, MCSE http://vmnomad.blogspot.com
vmnomad
Enthusiast

NFS doesn't support multipathing. For failover purposes you have to configure NIC Teaming.

Basically, this requirement means you will need at least a couple of teamed NICs for NFS and a separate couple of un-teamed NICs for iSCSI storage (see the sketch below).
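
For illustration, a rough esxcli sketch of that split, assuming a second standard vSwitch (vSwitch2) with uplinks vmnic4 and vmnic5 dedicated to NFS; all names and addresses are examples, and the same thing can be done in the vSphere Client:

    # Separate vSwitch for NFS with two teamed uplinks, both active
    esxcli network vswitch standard add --vswitch-name vSwitch2
    esxcli network vswitch standard uplink add --uplink-name vmnic4 --vswitch-name vSwitch2
    esxcli network vswitch standard uplink add --uplink-name vmnic5 --vswitch-name vSwitch2
    esxcli network vswitch standard policy failover set --vswitch-name vSwitch2 --active-uplinks vmnic4,vmnic5

    # One VMkernel port for NFS traffic on that vSwitch
    esxcli network vswitch standard portgroup add --portgroup-name NFS --vswitch-name vSwitch2
    esxcli network ip interface add --interface-name vmk3 --portgroup-name NFS
    esxcli network ip interface ipv4 set --interface-name vmk3 --ipv4 10.10.20.11 --netmask 255.255.255.0 --type static

The iSCSI VMkernel ports stay on their own port groups with a single active uplink each, exactly as in the earlier steps.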

VCP-4/5, VCAP4-DCA ID-553, vExpert, CCNP, CCSP, MCSE http://vmnomad.blogspot.com
Rick88
Contributor

Why did I know you were going to say that! I have a lot of spare NICs on the servers but not the spare switch ports. I have a total of 32 hosts that I will sooner or later be moving to this environment. That would require adding an additional 64 switch ports to my storage stack, and because of redundancy I would want those spread across at least two switches. Yikes! Maybe the previous employee figured all this out and decided to go the less expensive (cheap) way. Well, this in itself is a good reason to switch our NetApp filers to using iSCSI for host storage instead of their current NFS configuration. That way I don't need to support NFS at all.
