vSphere vNetwork

  • 1.  vSwitch config for iSCSI with EMC AX4

    Posted Mar 13, 2010 08:09 AM

    Hi All,

    Yep - another vSwitch config question! Wondering if anyone might be able to provide some assistance on the correct vSphere config to support our AX4-5i SAN (iSCSI) - specifically maximising our iSCSI performance (through multipathing) and maintaining appropriate redundancy (vSwitches and port groups).

    We have:

    - 2 x DL380s with 8 x 1GbE NICs each (4 onboard, 4 expansion)

    - 2 x HP Procurve 2910al-48G with uplinks between them and to the rest of our production network.

    - EMC CLARiiON AX4-5i (dual SP) iSCSI (2 x 1GbE ports per SP)

    - Currently 4 VLANs (iSCSI-971, VMotion-972, ServiceConsole-978, VM-978).

    - The SC is currently on the production network, but we've set up a dedicated VLAN in case we move to a dedicated management VLAN in the future.

    - Enterprise Plus licensing

    I've come across quite a bit of (often conflicting) info on the web. There is an EMC whitepaper on the CLARiiONs, however it seems to concentrate on the CX series and varies in its vSwitch recommendations between models. The VMware iSCSI recommendations seem solid, however various blogs and articles note that configs seem to have changed with vSphere, and again in U1, with regard to multipathing.

    Currently I've got these distributed switches and port groups on the drawing board:

    dvSwitch0 (4 uplinks split between both physical switches, all with access to the iSCSI and VMotion VLANs)

    - PG_ISCSI (VLAN 971, 2 active NICs, 2 standby NICs)

    - PG_VMOTION (VLAN 972, 2 active NICs, 2 standby NICs)

    dvSwitch1 (4 uplinks split between both physical switches, all with access to the SC and PROD VLANs)

    - PG_SC (VLAN 978, 2 active NICs, 2 standby NICs)

    - PG_PROD (VLAN 978, 2 active NICs, 2 standby NICs)

    The active / standby NICs are reversed between port groups in each dvSwitch, and also split between cards (onboard / expansion) in each server, for redundancy to survive host, card, NIC or switch failure. (A rough CLI sketch of the iSCSI/VMotion side is below.)
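
    To make that layout concrete, here is roughly what the iSCSI/VMotion side would look like if built as a standard vSwitch from the service console (a dvSwitch is configured through the vSphere Client instead, so treat this as a sketch only - the vmnic names and IP addresses are placeholders, not our real ones):

    esxcfg-vswitch -a vSwitch1                                   # create the switch
    esxcfg-vswitch -L vmnic2 vSwitch1                            # uplink from the onboard card
    esxcfg-vswitch -L vmnic6 vSwitch1                            # uplink from the expansion card
    esxcfg-vswitch -A PG_ISCSI vSwitch1                          # iSCSI port group
    esxcfg-vswitch -v 971 -p PG_ISCSI vSwitch1                   # tag it with VLAN 971
    esxcfg-vswitch -A PG_VMOTION vSwitch1                        # VMotion port group
    esxcfg-vswitch -v 972 -p PG_VMOTION vSwitch1                 # tag it with VLAN 972
    esxcfg-vmknic -a -i 10.0.71.11 -n 255.255.255.0 PG_ISCSI     # vmkernel port for iSCSI (placeholder IP)
    esxcfg-vmknic -a -i 10.0.72.11 -n 255.255.255.0 PG_VMOTION   # vmkernel port for VMotion (placeholder IP)

    The per-port-group active/standby NIC order (and enabling VMotion on that vmkernel port) would then be set in the vSphere Client.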

    The above is one option, but I've got a number flying round in my head! Having two dedicated NICs just for VMotion seems like overkill, for example.

    After scouring the net, I have come up with these questions:

    - Do I require multiple vmkernel ports for iSCSI to ensure multipathing is configured correctly?

    - If so, do they need to be on different subnets?

    - Is Round Robin the appropriate model to use, or stick to MRU?

    - Should I be installing PowerPath on the hosts to handle multipathing?

    - If using multiple port groups on a vSwitch, should the adapters be configured as Active/Standby, or Active/Unused?

    - Which failover algorithm should be used (port-based / IP hash)?

    - While there are many models for vSwitch config, what would be an effective config given these requirements?

    - Can / should dvSwitches now be used in U1 for this type of iSCSI configuration?

    - Once configured, how can I most effectively test that throughput is maximised and redundancy is effective? (I assume just yanking a network cable is still valid!)

    Any help appreciated!

    Thanks,

    Andrew

    References:

    http://kensvirtualreality.wordpress.com/2009/05/13/the-great-vswitch-debate-part-8-final/

    http://www.kendrickcoleman.com/index.php?/Tech-Blog/vsphere-host-nic-configuration.html

    http://www.ntpro.nl/blog/archives/1283-vSphere-DvSwitch-caveats-and-best-practices!.html

    http://virtualgeek.typepad.com/virtual_geek/2009/08/important-note-for-all-emc-clariion-customers-using-iscsi-and-vsphere.html

    http://virtualgeek.typepad.com/virtual_geek/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vsphere.html

    http://goingvirtual.wordpress.com/2009/12/01/vsphere-4-0-update-1-with-software-iscsi-and-2-paths-on-dvswitch/

    http://goingvirtual.wordpress.com/2009/09/21/celerra-iscsi-targets-visable-to-all-initiators/

    http://www.emc.com/collateral/hardware/white-papers/h1416-emc-clariion-intgtn-vmware-wp.pdf



  • 2.  RE: vSwitch config for iSCSI with EMC AX4

    Posted Mar 13, 2010 08:57 AM

    Do I require multiple vmkernel ports for iSCSI to ensure multipathing is configured correctly?

    Yes.

    Check AX/CX best practice for iSCSI.

    You need at least two vSwitches, for two isolated iSCSI networks.

    Each vSwitch will have 1 vmkernel interface and 1 NIC.

    On the AX side you have to configure each controller with 1 IP on the first network and 1 IP on the second network.

    Finally you should see 4 paths.
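
    In case it helps, that layout looks roughly like this from the ESX 4.x service console - the vmnic/vmk/vmhba numbers and the 10.10.x addresses below are placeholders for your own values:

    # First isolated iSCSI network: one vSwitch, one NIC, one vmkernel port
    esxcfg-vswitch -a vSwitch2
    esxcfg-vswitch -L vmnic2 vSwitch2
    esxcfg-vswitch -A iSCSI-A vSwitch2
    esxcfg-vmknic -a -i 10.10.1.11 -n 255.255.255.0 iSCSI-A    # becomes e.g. vmk1

    # Second isolated iSCSI network: same again on a different NIC and subnet
    esxcfg-vswitch -a vSwitch3
    esxcfg-vswitch -L vmnic6 vSwitch3
    esxcfg-vswitch -A iSCSI-B vSwitch3
    esxcfg-vmknic -a -i 10.10.2.11 -n 255.255.255.0 iSCSI-B    # becomes e.g. vmk2

    # Bind both vmkernel ports to the software iSCSI adapter (the vmhba number varies per host)
    esxcli swiscsi nic add -n vmk1 -d vmhba33
    esxcli swiscsi nic add -n vmk2 -d vmhba33
    esxcli swiscsi nic list -d vmhba33    # verify both ports are bound

    With 2 bound vmkernel ports and each SP reachable on one of the two networks, every LUN should end up with 4 paths.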

    Is Round Robin the appropriate model to use, or stick to MRU?

    I suggest using MRU, because the AX controllers work active/passive on the SAME LUN.

    Different LUNs can be owned by different controllers (so the hint is to use at least 2 LUNs, one owned by each SP, to keep both controllers busy).
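
    A quick way to check (or set) this from the ESX 4.x console is the NMP namespace of esxcli - the naa identifier below is just a placeholder for one of your AX LUNs:

    # Show the path selection policy and the working paths for each device
    esxcli nmp device list

    # Set MRU explicitly on a device if needed (VMW_PSP_RR would be Round Robin)
    esxcli nmp device setpolicy --device naa.60060160xxxxxxxx --psp VMW_PSP_MRU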

    Should I be installing PowerPath on the hosts to handle multipathing?

    Not necessary.

    Note that PowerPath/VE (for ESX) is sold separately and requires the Enterprise Plus license.

    But ESX can also work without it :)

    Andre