VMware Cloud Community
TonyJK
Enthusiast

Switches for implementing VI 3 with iSCSI?

We are going to implement VI 3 with an iSCSI SAN.

From our understanding, the iSCSI SAN should be connected to 2 Gigabit switches for redundancy. Is it necessary for both of them to be the same model? Can we use 2 existing switches (some of whose ports are already used for other purposes)?

Your advice is sought.

9 Replies
mcowger
Immortal

They don't HAVE to be, but it makes it easier if they are (config management, etc.).

You 'should' have a separate network for iSCSI, but I don't believe it's actually required. Talk to your network engineers to make sure your switches can handle it.
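For what it's worth, the ESX side of a separate iSCSI network is only a handful of service console commands. A minimal sketch, assuming ESX 3.x, a spare uplink named vmnic2, and made-up addresses on a dedicated storage subnet (10.0.50.0/24 here; the array at 10.0.50.100 is hypothetical):

    # Dedicated vSwitch for storage, uplinked to its own physical NIC
    esxcfg-vswitch -a vSwitch2
    esxcfg-vswitch -L vmnic2 vSwitch2

    # Port group plus a VMkernel interface for iSCSI traffic
    esxcfg-vswitch -A iSCSI vSwitch2
    esxcfg-vmknic -a -i 10.0.50.11 -n 255.255.255.0 iSCSI

    # Sanity check: can the VMkernel reach the array?
    vmkping 10.0.50.100

Keep in mind the ESX 3 software initiator also needs Service Console connectivity to the iSCSI network, so plan a Service Console port on that subnet as well.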

--Matt VCDX #52 blog.cowger.us
oschistad
Enthusiast

The requirements for iSCSI are actually quite loose, but for the sake of high availability it is highly recommended both to use redundant equipment and to separate your storage infrastructure from your network infrastructure.

The actual models used do not really matter all that much - there are no interoperability issues worth noting these days - but they should be reliable. Since an ESX server is limited to one active path per LUN, you will typically never see more than 1 Gb of traffic per ESX host, so the backplane switching capacity of your physical switches is unlikely to be a problem. That said, a large number of ESX hosts with multiple LUNs, talking to a mid- to high-end iSCSI SAN (i.e. one that can aggregate multiple Gb ports), can still generate a big aggregate switching load - for example, ten hosts each driving a full 1 Gb path is 10 Gb through the switch - in which case you should consider the backplane speeds after all. YMMV.

As for my recommendation to separate storage from networking even though the same fundamental protocols are used: this is based on my experience that network outages typically happen because of reconfiguration, and that network edge switches are maintained a lot more often than storage infrastructure. By keeping these as completely separate networks, the likelihood of your ESX servers losing their storage because a network admin is having a bad day is much lower.

Lastly, I would definitely consider using hardware iSCSI initiators rather than the built-in software initiator in VMware ESX if this is for a production site. Although SW iSCSI works, it will eat a lot more CPU cycles doing the TCP/IP processing than a dedicated iSCSI HBA would. Of course, this is also a cost driver, so if performance is not a big issue you may be able to save a few hundred dollars by using the SW initiator. Again, YMMV :)
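If you do end up on the software initiator, enabling it is straightforward. A quick sketch on ESX 3.x (the vmhba number is an example; yours will differ):

    # Enable the software iSCSI initiator, then scan for targets
    esxcfg-swiscsi -e
    esxcfg-swiscsi -s
    esxcfg-rescan vmhba40

    # esxtop will show what the TCP/IP processing costs in CPU terms
    esxtop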

chucks0
Enthusiast

As others have said, the requirements for iSCSI aren't very stringent, but skimping on the switches could cause you a lot of issues down the road. Several features are important in an iSCSI network (flow control, jumbo frames, etc.), and not all switches can have these features enabled at the same time. In addition, many gigabit switches have a shared architecture in which groups of 4 or 8 ports are limited to a combined 1 Gb of throughput.

We have tested several switches and have found the Cisco 3750 switches to work very well.
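Whatever switch you end up with, it's worth verifying the fabric from the ESX side once the network team has enabled the relevant features. A quick check, assuming a hypothetical array address of 10.0.50.100:

    # Basic VMkernel reachability to the array
    vmkping 10.0.50.100

    # If jumbo frames are enabled end to end, a large payload should pass too
    # (the size here is illustrative)
    vmkping -s 8000 10.0.50.100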

TonyJK
Enthusiast

Many thanks for your advice.

We are going to use a Dell PE2950 as the new ESX host. According to the specification, it has an integrated dual Broadcom Gigabit network card with TOE enabled in hardware. Does that mean it already has a hardware iSCSI initiator? If so, does that mean we don't need to enable the iSCSI initiator on the ESX host?

Thanks

mcowger
Immortal

No, these NICs won't work as HW iSCSI cards. You will need to use the SW initiator or buy supported iSCSI cards from QLogic.

--Matt VCDX #52 blog.cowger.us
TonyJK
Enthusiast

Many thanks for your advice.

From the VMware documentation, it appears that the choices are limited to the QLogic QLA4050c / QLA4052c / QLA4060c and QLA4062c. Is a single-port card (like the QLA4050c) better than a dual-port one from the point of view of redundancy?

From my understanding, we need 4 NICs for the VMs / VC Server and 2 such hardware iSCSI initiator cards for connecting to the iSCSI SAN. Is that correct?

Thanks again

mcowger
Immortal

Well, a dual-port card gives you the possibility of a dual path, so yes, you get more redundancy. Personally, I buy single-port cards and buy 2 of them rather than a single dual-port card.

As for NICs - what you've designed should be fine as long as you are using VLAN trunking.
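For reference, tagging a port group for VLAN trunking is a one-liner per group from the service console. The VLAN IDs below are made up, and the switch ports facing the host must be configured as 802.1Q trunks carrying those VLANs:

    # Tag existing port groups with their VLAN IDs
    esxcfg-vswitch -v 10 -p "VM Network" vSwitch0
    esxcfg-vswitch -v 20 -p "Service Console" vSwitch0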

--Matt VCDX #52 blog.cowger.us
TonyJK
Enthusiast

Dear Matt,

Many thanks for your advice.

Sorry for asking a silly question - my understanding is that a dual-port card = 2 single-port cards. If the purpose is to increase redundancy, we should use 2 single-port cards. Can you elaborate on how a dual-port card can increase redundancy? Do you mean to use 2 dual-port cards?

Thanks

mcowger
Immortal

A dual-port card isn't the same as 2 single-port cards.

Say you have a box with 2 single-port cards, each on a different PCI bus. One bus fails - you lose that card but stay up on the other one. If you had a single dual-port card, you'd have lost both ports.

Granted, such a failure is very rare, but I've certainly seen them :)
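Either way, once both ports are cabled to both switches you can confirm ESX actually sees redundant paths. On ESX 3.x:

    # List LUNs and their paths; each LUN should show a path via each switch
    esxcfg-mpath -l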

--Matt VCDX #52 blog.cowger.us