We are in the process of investigating the possibility of virtualizing some of our FC SAN attached servers. These servers are rather data intensive, and each of the 10 physical servers has 2 paths to our EMC CX SAN. When it comes to virtualizing them, we will continue, for the near future, to use the same LUNs the original physical servers used, in raw mode. These LUNs are data LUNs only; the operating systems will be held in VMDK images on a VMFS LUN.
Since each of these servers has its own connection to our SAN fabric, consolidating 3 or 4 of them onto a single VMware server would free up many fibre connections on the switch. The question has been raised whether 4 servers that previously had their own direct SAN connections would be impacted by having to share a single path to the SAN. This made us wonder if anyone out there runs multiple HBAs in a single ESX host. Should we add 2 or maybe even 3 QLE2462 HBAs to our single ESX host box? Will ESX actually use these cards, or will it default to just one active connection with the rest seen as standby?
Thanks in advance.
ESX supports multiple paths and fibre cards. We run all of ours on 2 QLE2462s, and if you have your zoning right you can do n paths (where n < 32).
Whether you need that many isn't something we can answer with the data you've given. If each host was only running 100 IOPs and 100 Mbit, sharing a single card for what is now 5 VMs would be more than fine. However, if each physical host were pegging its card, then sharing would be a bad plan. What's your throughput on the physical hosts?
--Matt
VCP, vExpert, Unix Geek
Thank you for responding. Here are the IO/s figures for the top 5 of our 10 physical servers.
SERVERA HBA1 445 IO/s
SERVERA HBA2 414 IO/s
SERVERB HBA1 200 IO/s
SERVERB HBA2 51 IO/s
SERVERC HBA1 461 IO/s
SERVERC HBA2 228 IO/s
SERVERD HBA1 449 IO/s
SERVERD HBA2 234 IO/s
SERVERI HBA1 80 IO/s
SERVERI HBA2 110 IO/s
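For what it's worth, here is a quick back-of-the-envelope sum of those figures (a sketch in Python; it just totals the worst case where all five hosts land on one ESX box and both HBA paths are fully loaded at once):

```python
# Per-server IO/s, summing both PowerPath HBA figures from the table above.
io_per_sec = {
    "SERVERA": 445 + 414,
    "SERVERB": 200 + 51,
    "SERVERC": 461 + 228,
    "SERVERD": 449 + 234,
    "SERVERI": 80 + 110,
}

# Worst-case combined load if all five are consolidated onto one host.
total = sum(io_per_sec.values())
print(f"combined load: {total} IO/s")  # 2672 IO/s
```

A few thousand IO/s is well within what a single 4 Gb port can handle, though the real ceiling depends on I/O size and the array.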
Do you have throughput data?
--Matt
VCP, vExpert, Unix Geek
No, we collected these numbers from PowerPath. Would the SAN give us the throughput numbers?
It might - I don't use the CX line. Your hosts certainly would.
--Matt
VCP, vExpert, Unix Geek
For some reason I can't post the screenshot, but here are the details I collected with Performance Monitor: the average throughput to the SAN disks over a period of 3 minutes (R = read, W = write, in bytes):
SERVERA: R: 4,096 W: 4,061
SERVERB: R: 4,065 W: 4,162
SERVERC: R: 5,632 W: 5,138
SERVERD: R: 12,553 W: 7,428
SERVERI: R: 9,688 W: 18,926
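As a rough sanity check, the combined throughput can be totalled and compared against one 4 Gb FC port (a sketch; the ~400 MB/s usable link rate is an assumption, and these perfmon figures are taken as bytes per second):

```python
# (Read, Write) throughput per server in bytes/sec, from the perfmon figures above.
throughput = {
    "SERVERA": (4096, 4061),
    "SERVERB": (4065, 4162),
    "SERVERC": (5632, 5138),
    "SERVERD": (12553, 7428),
    "SERVERI": (9688, 18926),
}

# Rough usable bandwidth of one 4 Gb FC link (assumption).
FC_4G_BYTES_PER_SEC = 400 * 1024 * 1024

total = sum(r + w for r, w in throughput.values())
print(f"combined: {total} B/s (~{total / 1024:.0f} KB/s)")
print(f"share of one 4 Gb port: {total / FC_4G_BYTES_PER_SEC:.4%}")
```

Even with all five servers on one host, the combined figure is a tiny fraction of a single port, which matches the conclusion below that one card is plenty.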
Well, your hosts are doing (at least in this sample) less than 500K/sec, and a reasonable number of IOPs, so 1 card should be plenty (though I would recommend 2 cards for redundancy).
--Matt
VCP, vExpert, Unix Geek
Thanks for the info!