VMware Cloud Community
snakehead
Contributor

iSCSI Network Configuration. Trunk or not Trunk?

Hi guys,

First of all, pardon my lack of knowledge of VMware; I might be asking a silly question. Anyway, I would like to ask for your advice on how I should set up my iSCSI storage. Basically, I'm using a DL380 G7 server running ESX 4.1, and I have allocated 4 Gigabit ports for the iSCSI network. These 4 ports will be connected to a Dell EqualLogic storage array.

Now, apart from trunking at the switch level, what else do I need to do to ensure that the 4 ports are used as a single network pipe to the storage?

Kindly advise.

TQ.

9 Replies
vGuy
Expert

Welcome to the communities! You will need to create a separate VMkernel port (with a unique IP) for each physical NIC. Those VMkernel ports then need to be associated with the iSCSI initiator using port binding. This makes each NIC a separate path for the iSCSI storage traffic. Below are a couple of articles describing the steps for configuring iSCSI port binding on vSphere 4:

http://www.vmware.com/pdf/vsphere4/r40/vsp_40_iscsi_san_cfg.pdf

http://blogs.vmware.com/kbtv/2011/01/how-to-configure-iscsi-port-binding-on-vsphere-4.html
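For reference, here is a rough CLI sketch of those steps on ESX 4.1. The vSwitch/port group names, vmnic numbers, IP addresses and the vmhba33 adapter name are placeholders for illustration only; check your own host (and the docs above) for the real values.

    # Create a vSwitch for iSCSI and attach the four uplinks
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -L vmnic3 vSwitch1
    esxcfg-vswitch -L vmnic4 vSwitch1
    esxcfg-vswitch -L vmnic5 vSwitch1

    # One port group and one VMkernel port per uplink, each with its own IP
    esxcfg-vswitch -A iSCSI1 vSwitch1
    esxcfg-vmknic -a -i 10.10.10.11 -n 255.255.255.0 iSCSI1
    esxcfg-vswitch -A iSCSI2 vSwitch1
    esxcfg-vmknic -a -i 10.10.10.12 -n 255.255.255.0 iSCSI2
    # (repeat for iSCSI3/iSCSI4 on vmnic4/vmnic5)

    # In the vSphere Client, override NIC teaming on each iSCSI port group so
    # that only one vmnic is Active and the other three are Unused.

    # Bind each VMkernel port to the software iSCSI adapter (often vmhba33)
    esxcli swiscsi nic add -n vmk1 -d vmhba33
    esxcli swiscsi nic add -n vmk2 -d vmhba33
    esxcli swiscsi nic add -n vmk3 -d vmhba33
    esxcli swiscsi nic add -n vmk4 -d vmhba33

    # Verify the bindings
    esxcli swiscsi nic list -d vmhba33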

marchlam
Enthusiast

You can check Dell's recommendation:

http://i.dell.com/sites/content/business/solutions/engineering-docs/en/Documents/NetworkingGuide_vSp...

Networking Best Practices for VMware® vSphere 4 on Dell™ PowerEdge™ Blade Servers

Although this paper was written for blade servers, the configuration also applies to your environment.

I suggest you read pages 11 to 13.

Josh26
Virtuoso

snakehead wrote:

Now, apart from trunking at the switch level, what else do I need to do to ensure that the 4 ports are used as a single network pipe to the storage?

Kindly advise.

TQ.

Make sure you understand what you will achieve.

There is no configuration where four 1Gb ports will allow a single VM to achieve 4Gb throughput on a single connection.

Since you mentioned "storage" and iSCSI, using a team of adapters isn't actually supported. You should remove any trunk you have created on your switch and focus on implementing multipathing instead. There's a good article here:

http://virtualgeek.typepad.com/virtual_geek/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vs...

Of course it's for an old version - but it lays out the process very well.

snakehead
Contributor

Thanks guys for all the great articles.

I'm not sure if the switch trunk should be removed. What I want to achieve is to utilize all four ports to the storage for better throughput. Doesn't this have to be configured on both sides, meaning at the switch and the vSwitch level?

Kindly advise.

TQ

vGuy
Expert

snakehead wrote:

I'm not sure if the switch trunk should be removed. What I want to achieve is to utilize all four ports to the storage for better throughput. Doesn't this have to be configured on both sides, meaning at the switch and the vSwitch level?

Kindly advise.

TQ

As per my understanding, there is no special configuration required on the pSwitch ports. In my opinion, the configuration is rather simple if the pSwitch ports are configured in access mode.

In case the VMkernel NIC IPs are in different subnets, you can put the vmk ports on separate vSwitches.

snakehead
Contributor

Hi vGuy,

Thanks for reply.

Meaning to say, after I've bound all 4 ports together, I should be able to achieve a 4Gb connection to the storage? How about the path selection, round robin?

I'm not good at networking, but a network engineer told me that if the pSwitch is not configured for link aggregation, then traffic will only utilize 1 path instead of 4 and the rest will act as standby... Now I'm getting more confused... Anyway, what would be the best practice for the iSCSI config?

Kindly advise. Thx!

vGuy
Expert

Once iSCSI port binding is configured, multipathing is handled by the VMkernel storage stack. Therefore, there shouldn't be any link aggregation set up on the physical switch ports.

For the multipathing policy, Round Robin (RR) is the most widely used and is recommended for active/active arrays, although you may want to double-check the storage vendor's recommendation.
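If you prefer doing it from the CLI on 4.1, something along these lines should work; the device ID below is just a placeholder, so list your devices first and confirm Dell's recommendation before changing the policy.

    # List devices and their current path selection policy
    esxcli nmp device list

    # Set Round Robin on a specific EqualLogic volume (placeholder device ID)
    esxcli nmp device setpolicy --device naa.6090a02840xxxxxx --psp VMW_PSP_RR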

snakehead
Contributor

Let's say there is no port binding configured on the 4 vmknics; can I say the connection to the storage is only over a single uplink, and thus the connection is only 1 Gig?

Josh26
Virtuoso

snakehead wrote:

Hi vGuy,

Meaning to say, after I've bound all 4 ports together, I should be able to achieve a 4Gb connection to the storage? How about the path selection, round robin?

Kindly advise. Thx!

As per my earlier post, this simply isn't possible.

The "round robin" system, is exactly as the name implies "round robin". Although this means "using all your NICs" means "one and then the next one". You will not be able to access a single LUN on > 1GB throughput. Multiple LUNs may be accessed independantly at 1Gb each though.

Your network engineer is assuming you want to team the ports, but as we've said, VMware does not support this for iSCSI. Even if it did, it wouldn't make sense. A typical trunk uses "route based on IP hash". Since a single LUN means a single source and destination IP pair, all of that trunk traffic would run along a single NIC. Multipathing at least gives you better options.
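To illustrate with a simplified version of that hash (the IPs here are placeholders, and the real algorithm is in VMware's docs; the key point is that it is a deterministic function of the source/destination pair): if the uplink were chosen as roughly (source IP last octet XOR destination IP last octet) mod 4, then a vmk at 10.10.10.11 talking to the array's group IP at 10.10.10.100 gives (11 XOR 100) mod 4 = 111 mod 4 = 3, every single time. Every frame of that iSCSI session leaves on the same one of your four NICs. Port binding avoids this by giving the storage stack four separate source IPs, i.e. four independent paths it can schedule I/O across.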

You will get the best throughput using multipathing, as per the advice earlier in this thread.