VMware Cloud Community
m_grewnow
Contributor

End-to-end FC settings

Hello,

Is it in keeping with best or recommended practice to keep your end-to-end FC link speeds the same?

I ask because...

Our servers each contain 2x QLogic 4Gb single-port HBAs, our fabric switches can run at 1/2/4Gb, and our NetApp array ports can run at 1/2/4Gb. Our disk shelves are rated at 1/2Gb, not 4Gb, which appears to be our choke point. Everything in the path from host initiator to target is auto-negotiating at 4Gb, but would I get a performance increase if I set all rated speeds in the path to 2Gb? Is it best to keep all devices operating at the same lowest-common-denominator link speed? All shared storage runs off the disk shelves rated at 1/2Gb. We are running ESXi 4.1 U2.

Thank you.

Matt

1 Reply
TheEsp
Enthusiast

Hi m_grewnow,

Keeping everything at 4Gb end to end is sometimes not possible due to the age and nature of the kit, but that in itself should not cause you any performance issues. At the end of the day you need to make sure your NetApp is keeping up with the workload you're pushing at it.

Do you have any tools with your NetApp to look at aggregate performance? They will give you an indication of how things are running on the filer; if you're still experiencing problems after that, work your way back towards the host.
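If your filer is running 7-Mode Data ONTAP, sysstat on the console is a quick way to watch it under load. A minimal sketch (the one-second interval is just an example; adjust the flags for your ONTAP version):

    filer> sysstat -x 1

Watch the disk utilisation and CP columns: if disk util sits near 100% while the host-side links are mostly idle, the shelves, not the 4Gb links, are your bottleneck.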

1. Check the stats on your aggregates with those tools (as sketched above).

2. Check your fibre switches for any errors (see the first sketch below).

3. Check the queue depth settings on your HBAs (see the second sketch below).
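On the switch side, I'm assuming Brocade here purely as an illustration; other fabric vendors have equivalent commands. Per-port error counters:

    switch:admin> porterrshow

Run it twice a few minutes apart; it's the counters that are still incrementing (crc err, enc out and the like, which usually point at a bad SFP or cable) that matter, not old totals.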
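For the queue depth on your QLogic HBAs under ESXi 4.1, a sketch assuming the stock qla2xxx driver (the value 64 below is only an example; VMware's KB article on changing the queue depth for QLogic HBAs has the full procedure):

    # show the current module options for the QLogic driver
    esxcfg-module -g qla2xxx
    # set the per-LUN queue depth, then reboot the host for it to take effect
    esxcfg-module -s ql2xmaxqdepth=64 qla2xxx

Don't raise it blindly, though; a deeper queue just moves the wait onto the array if the shelves are already saturated.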

Good luck

David
