VMware Cloud Community
prutter
Enthusiast

Networking Environment with VMware

Hello All,

I just had a few questions related to the networking side of our infrastructure. We are about to move to a new building and will get a mostly new data center. We are taking all three of our hosts, but we are getting all new switches. Some of the questions I have are related to our SAN and maximizing performance. We currently have an IBM N3600 (a rebranded NetApp) iSCSI SAN. There are two QLogic HBAs in each host and we are running vSphere on all hosts. Since we are getting all new switches, we were going to plan ahead and make everything at least 10Gb on the backbone. I'm curious what the throughput of those iSCSI HBAs is, since I think they are going to be our bottleneck, or will it be the hard drives? Is there anything we can do to the SAN and/or the hosts to maximize performance on our network? It seems pointless to invest all that money into 10Gb if our SAN can't even keep up. If anyone has suggestions about how to get the most out of this new install, I would really appreciate the input. Thanks


Accepted Solutions
TobiasKracht
Expert

Are your iSCSI HBAs 1Gb or 10Gb?

Either way, some general suggestions:

  • Make sure you have jumbo frames enabled on the filer, the switch and your vSphere hosts.

  • Make sure your virtual guests' partitions are aligned
    properly. Do a Google search on NetApp partition alignment and read up if you
    aren't already familiar with it.
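On the vSphere side, jumbo frames have to be set on both the vSwitch and the VMkernel port. A rough sketch for classic ESX 4.x, assuming a dedicated iSCSI vSwitch; the vSwitch name, port group name, and IP addresses below are placeholders:

```shell
# Assumed names: vSwitch1 carries iSCSI traffic on port group "iSCSI-1".
# Raise the vSwitch MTU to 9000 (jumbo frames).
esxcfg-vswitch -m 9000 vSwitch1

# A VMkernel NIC's MTU can't be changed in place on 4.x; delete and
# recreate it with -m 9000 (IP and netmask here are examples).
esxcfg-vmknic -d "iSCSI-1"
esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 -m 9000 "iSCSI-1"

# Verify the new MTU, then test end to end with a large, unfragmented
# ping to the filer (8972 = 9000 minus IP/ICMP headers).
esxcfg-vmknic -l
vmkping -d -s 8972 10.0.0.50
```

On a 7-mode NetApp filer the equivalent is `ifconfig e0a mtusize 9000` (interface name is an example), and the switch ports in the path need jumbo frames enabled as well, or the vmkping test above will fail.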

Another thing you'll want to know before the move is whether your hosts are even fully utilizing and pushing your 1Gb (I'm assuming) iSCSI network throughput. If they aren't, then 10Gb isn't really going to do much.

On the filer end of things, check to see how hard it's being pushed right now. Does your filer even have 10gig cards in it?
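One quick way to gauge that from a 7-mode Data ONTAP console is sysstat (a sketch; flags per 7-mode, run on the filer itself):

```shell
# One-second samples of CPU, network kB/s in/out, disk throughput
# and cache age. Compare the Net in/out columns against 1Gb/s line
# rate (~120 MB/s) to see how close the filer is to saturation.
sysstat -x 1
```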

....


StarWind Software R&D http://www.starwindsoftware.com

3 Replies
AnatolyVilchins

I'm a HUGE fan of using fewer 10Gbps ports rather than more 1Gbps ports for modern server installations, assuming you're happy to deal with firewalling the trunked VLANs this implies.

As for hitting a full 10Gbps, to be honest not many boxes can make full use of these links, but importantly many servers can use >2Gbps, especially when using TOE and iSCSI-accelerated NICs/HBAs/CNAs. But of course you need to deal with the full data path: there's no point 10gig'ing your servers and switches if your storage can't cope, and your N3600 only has 4 x 1Gbps NICs (iirc).

If you have >1Gbps of server-to-server or server-to-client traffic then I'd suggest going to 10Gig today, but I'm not sure that you'll get that much benefit on the iSCSI side unless you plan it well.


from http://serverfault.com/questions/102265/networking-environment-with-vmware

Starwind Software Developer

www.starwindsoftware.com

Kind Regards, Anatoly Vilchinsky
BenConrad
Expert

With a 3+ host environment I would skip 10Gb on the hosts; 10Gb is expensive (port and transceiver costs). QLogic has 10Gb CNAs, but those are for LAN and Fibre Channel.

With the QLA 1Gb/s iSCSI cards and vSphere you can get >1Gb/s using vSphere NMP Round Robin. That will get you closer to 2Gb/s (4Gb/s full duplex) minus some Ethernet overhead. At that point your SAN should start to become the bottleneck, and in most cases random I/O on the SAN is the limiting factor, driving latency upward to the point where the disks are saturated. 2Gb/s (256MB/s) is a lot of I/O for one host.
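Switching a LUN to Round Robin in vSphere 4.x looks roughly like this (a sketch; the naa device ID below is a placeholder, and the esxcli syntax changed in later ESXi releases):

```shell
# List storage devices and their current path selection policy (PSP).
esxcli nmp device list

# Set one LUN to Round Robin (VMW_PSP_RR); device ID is an example.
esxcli nmp device setpolicy --device naa.60a98000486e2f34 --psp VMW_PSP_RR

# Optionally lower the IOPS-per-path switching threshold from the
# default of 1000 so I/O spreads across both HBA paths sooner.
esxcli nmp roundrobin setconfig --device naa.60a98000486e2f34 --type iops --iops 1
```

Note that Round Robin only helps if the array presents the LUN as active on multiple paths, which the NetApp iSCSI target does.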

Also, it's important that your switches have a low oversubscription ratio, as close to 1:1 as possible. Switch port buffer size matters as well, to reduce dropped frames.