VMware Cloud Community
surbahns
Contributor

Multiple QLogic QLE2462 Dual-Channel HBAs in an ESX Server?

We are in the process of investigating the possibility of virtualizing some of our FC SAN-attached servers. These servers are rather data intensive, and each of the 10 physical servers has 2 paths to our EMC CX SAN. When it comes to virtualizing them, we will, for the near future, continue to use the same LUNs the original physical servers used, in raw mode. These LUNs are data LUNs only; the operating systems will be held in VMDK images on a VMFS LUN.

Since each of these servers has its own connection to our SAN fabric, consolidating 3 or 4 of them onto a single VMware server would free up many fibre connections on the switch. The question has been raised whether 4 servers that each previously had their own direct SAN connection would be impacted by having to share a single path to the SAN. This made us wonder: does anyone out there run multiple HBAs in a single ESX host? Should we add 2 or maybe even 3 QLE2462 HBAs to our single ESX host box? Will ESX actively use these cards, or will it still default to just one connection with the rest seen as standby?

Thanks in advance.

1 Solution

Accepted Solutions
mcowger
Immortal

Well, your hosts are doing (at least in this sample) less than 500 KB/sec, and a reasonable number of IOPS, so one card should be plenty (though I would recommend two cards for redundancy).

--Matt
VCP, vExpert, Unix Geek
VCDX #52 blog.cowger.us

8 Replies
mcowger
Immortal

ESX supports multiple paths and multiple fibre cards. We run all of ours on 2 QLE2462s, and if you have your zoning right you can do n paths (where n < 32).

Whether you need that many isn't something we can answer with the data you've given. If each host were only running 100 IOPS and 100 Mbit, sharing a single card for what would now be 5 VMs would be more than fine. However, if each physical host were pegging its card, then sharing would be a bad plan. What's your throughput on the physical hosts?
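
As a rough way to frame that check, here is a minimal Python sketch. It assumes a 4 Gb/s QLE2462 port gives roughly 400 MB/s of usable bandwidth per direction, assumes a generous per-port IOPS ceiling (the array is usually the real limit), and uses placeholder per-host figures until real measurements are in hand.

# Back-of-the-envelope check: can one FC port carry the consolidated load?
# Assumptions (not measured): ~400 MB/s usable per 4 Gb/s FC port, and a
# generous per-port IOPS ceiling; the storage array is usually the real limit.

PORT_MB_PER_SEC = 400
PORT_IOPS_CEILING = 100_000

# Placeholder per-host load as (IOPS, MB/s); replace with measured values.
hosts = {
    "HOSTA": (900, 40),
    "HOSTB": (250, 15),
    "HOSTC": (700, 30),
    "HOSTD": (680, 25),
}

total_iops = sum(iops for iops, _ in hosts.values())
total_mb = sum(mb for _, mb in hosts.values())

print(f"Aggregate load: {total_iops} IOPS, {total_mb} MB/s")
print(f"Bandwidth used: {total_mb / PORT_MB_PER_SEC:.0%} of one port")
print(f"IOPS used:      {total_iops / PORT_IOPS_CEILING:.0%} of the assumed ceiling")

If the aggregate sits comfortably under both limits, a single active port (with a second for redundancy) should cope.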






--Matt
VCP, vExpert, Unix Geek
VCDX #52 blog.cowger.us
surbahns
Contributor

Thank you for responding. Here are the IO/s figures for the top 5 of our 10 physical servers (summed in the quick sketch after the list).

SERVERA HBA1: 445 IO/s
SERVERA HBA2: 414 IO/s
SERVERB HBA1: 200 IO/s
SERVERB HBA2: 51 IO/s
SERVERC HBA1: 461 IO/s
SERVERC HBA2: 228 IO/s
SERVERD HBA1: 449 IO/s
SERVERD HBA2: 234 IO/s
SERVERI HBA1: 80 IO/s
SERVERI HBA2: 110 IO/s
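
Summing those figures is straightforward; a quick sketch in Python, using only the numbers listed above:

# Per-HBA IO/s figures posted above, grouped by server.
io_per_sec = {
    "SERVERA": (445, 414),
    "SERVERB": (200, 51),
    "SERVERC": (461, 228),
    "SERVERD": (449, 234),
    "SERVERI": (80, 110),
}

per_server = {name: sum(hbas) for name, hbas in io_per_sec.items()}

for name, total in per_server.items():
    print(f"{name}: {total} IO/s across both HBAs")

print(f"All five servers combined: {sum(per_server.values())} IO/s")  # ~2,672 IO/s

That is roughly 2,700 IO/s combined, which is a modest load for a single 4 Gb HBA.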

mcowger
Immortal

Do you have throughput data?






--Matt
VCP, vExpert, Unix Geek
VCDX #52 blog.cowger.us
surbahns
Contributor

No, we collected these numbers from PowerPath. Would the SAN give us the throughput numbers?

mcowger
Immortal

It might - I don't use the CX line. Your hosts certainly would.

--Matt
VCP, vExpert, Unix Geek
VCDX #52 blog.cowger.us
surbahns
Contributor

For some reason I can't post the screenshot, but here are the details I collected with Performance Monitor: the average throughput to the SAN disks over a period of 3 minutes (totaled in the sketch after the list). R indicates read, W indicates write, in bytes.

SERVERA: R: 4,096 W: 4,061
SERVERB: R: 4,065 W: 4,162
SERVERC: R: 5,632 W: 5,138
SERVERD: R: 12,553 W: 7,428
SERVERI: R: 9,688 W: 18,926
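
Adding those up as a quick sketch (treating the figures as bytes per second, which is an assumption, and comparing against a single 4 Gb/s port at roughly 400 MB/s usable):

# Read/write throughput figures posted above, assumed to be bytes per second.
throughput = {
    "SERVERA": (4096, 4061),
    "SERVERB": (4065, 4162),
    "SERVERC": (5632, 5138),
    "SERVERD": (12553, 7428),
    "SERVERI": (9688, 18926),
}

total_bytes = sum(r + w for r, w in throughput.values())
port_bytes = 400 * 1024 * 1024  # ~400 MB/s usable on one 4 Gb/s FC port (assumption)

print(f"Combined: {total_bytes:,} bytes/sec (~{total_bytes / 1024:.0f} KB/sec)")
print(f"Share of one 4 Gb port: {total_bytes / port_bytes:.4%}")

That works out to roughly 74 KB/sec combined, a tiny fraction of one port's bandwidth.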

mcowger
Immortal

Well, your hosts are doing (at least in this sample) less than 500 KB/sec, and a reasonable number of IOPS, so one card should be plenty (though I would recommend two cards for redundancy).

--Matt
VCP, vExpert, Unix Geek
VCDX #52 blog.cowger.us
surbahns
Contributor

Thanks for the info!
