I am looking for best practices for switch configuration in
our ESX environments.
From my research and experience, I have seen the best performance with flow control
enabled and 1000/full on the LAN-side switches, and with jumbo frames, flow
control, and 1000/full on the iSCSI switches. I also manually configure all ESX
hosts to 1000/full for consistency (though I understand this is not always necessary).
We connect to EqualLogic arrays, which by default take advantage of jumbo frames,
and we use Cisco switches in our environment.
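For reference, the settings described above might look something like this on a Cisco IOS access switch. This is only a sketch; the interface name, VLAN, and description are assumptions, and the jumbo-frame command varies by Catalyst model (on many fixed-configuration switches it is a global setting that requires a reload):

```
! Hypothetical iSCSI-facing port (interface name and VLAN are examples)
interface GigabitEthernet0/1
 description iSCSI uplink to EqualLogic
 switchport mode access
 switchport access vlan 100
 speed 1000
 duplex full
 flowcontrol receive on
 spanning-tree portfast
!
! On many Catalyst models jumbo frames are enabled globally, not per port:
system mtu jumbo 9000
```

Verify against the configuration guide for your exact switch model before applying.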
Can anyone attest to these configurations? Are there any documents
that lay out the suggested settings in a single location,
without having to skim through 200-page documents?
That sounds just like my environment. All servers use a dual-port Intel NIC to connect to two separate Cisco stacks. The two stacks are linked by an 8-port EtherChannel. Each EqualLogic SAN connects to each stack. Flow control, jumbo frames, 1 Gbps, Cisco on the SAN side; flow control, 1 Gbps, HP on the LAN side.
It has worked very well for us. We are starting to really push the limits of our configuration, though. Twenty EqualLogic arrays get pretty cumbersome to manage in our environment. Your mileage may vary.
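On the ESX side of a setup like this, jumbo frames also have to be enabled on the vSwitch and the VMkernel port, not just the physical switches. A sketch using the ESX 3.5/4.x service-console commands follows; the vSwitch name, port group name, and IP addressing are assumptions, and on older builds the VMkernel NIC must be created with the jumbo MTU rather than changed in place:

```
# Set MTU 9000 on the iSCSI vSwitch (vSwitch1 is an assumed name)
esxcfg-vswitch -m 9000 vSwitch1

# Create the VMkernel NIC with a jumbo MTU on an assumed "iSCSI" port group
esxcfg-vmknic -a -i 10.0.100.11 -n 255.255.255.0 -m 9000 "iSCSI"

# Verify the MTU took effect
esxcfg-vswitch -l
esxcfg-vmknic -l
```

Remember that the MTU must match end to end: ESX VMkernel ports, every switch in the iSCSI path, and the array interfaces.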
EqualLogic has some good documents on best-practice configurations like this. Log in to their support site to download them.
Charles Killmer, VCP4