VMware Cloud Community
system_ict
Contributor

Storage Paths not load balanced

Hi,

I'm trying to load balance the LUNs presented to my ESX hosts across the host ports of our EVA 8000 (an active/active array).

Here is the output of esxcfg-mpath -l:

Disk vmhba0:0:0 /dev/cciss/c0d0 (69973MB) has 1 paths and policy of Fixed

Local 11:8.0 vmhba0:0:0 On active preferred

RAID Controller (SCSI-3) vmhba1:0:0 (0MB) has 8 paths and policy of Fixed

FC 16:0.0 500110a000857688<->50001fe1500b76d8 vmhba1:0:0 On active preferred

FC 16:0.0 500110a000857688<->50001fe1500b76dc vmhba1:1:0 On

FC 16:0.0 500110a000857688<->50001fe1500b76da vmhba1:2:0 On

FC 16:0.0 500110a000857688<->50001fe1500b76de vmhba1:3:0 On

FC 16:0.1 500110a00085768a<->50001fe1500b76d9 vmhba2:0:0 On

FC 16:0.1 500110a00085768a<->50001fe1500b76dd vmhba2:1:0 On

FC 16:0.1 500110a00085768a<->50001fe1500b76df vmhba2:2:0 On

FC 16:0.1 500110a00085768a<->50001fe1500b76db vmhba2:3:0 On

Disk vmhba1:0:1 /dev/sda (10240MB) has 8 paths and policy of Fixed

FC 16:0.0 500110a000857688<->50001fe1500b76d8 vmhba1:0:1 On active preferred

FC 16:0.0 500110a000857688<->50001fe1500b76dc vmhba1:1:1 On

FC 16:0.0 500110a000857688<->50001fe1500b76da vmhba1:2:1 On

FC 16:0.0 500110a000857688<->50001fe1500b76de vmhba1:3:1 On

FC 16:0.1 500110a00085768a<->50001fe1500b76d9 vmhba2:0:1 On

FC 16:0.1 500110a00085768a<->50001fe1500b76dd vmhba2:1:1 On

FC 16:0.1 500110a00085768a<->50001fe1500b76df vmhba2:2:1 On

FC 16:0.1 500110a00085768a<->50001fe1500b76db vmhba2:3:1 On

Disk vmhba1:0:2 /dev/sdb (311296MB) has 8 paths and policy of Fixed

FC 16:0.0 500110a000857688<->50001fe1500b76d8 vmhba1:0:2 On active preferred

FC 16:0.0 500110a000857688<->50001fe1500b76dc vmhba1:1:2 On

FC 16:0.0 500110a000857688<->50001fe1500b76da vmhba1:2:2 On

FC 16:0.0 500110a000857688<->50001fe1500b76de vmhba1:3:2 On

FC 16:0.1 500110a00085768a<->50001fe1500b76d9 vmhba2:0:2 On

FC 16:0.1 500110a00085768a<->50001fe1500b76dd vmhba2:1:2 On

FC 16:0.1 500110a00085768a<->50001fe1500b76df vmhba2:2:2 On

FC 16:0.1 500110a00085768a<->50001fe1500b76db vmhba2:3:2 On

Disk vmhba1:0:3 /dev/sdc (770048MB) has 8 paths and policy of Fixed

FC 16:0.0 500110a000857688<->50001fe1500b76d8 vmhba1:0:3 On

FC 16:0.0 500110a000857688<->50001fe1500b76dc vmhba1:1:3 On active preferred

FC 16:0.0 500110a000857688<->50001fe1500b76da vmhba1:2:3 On

FC 16:0.0 500110a000857688<->50001fe1500b76de vmhba1:3:3 On

FC 16:0.1 500110a00085768a<->50001fe1500b76d9 vmhba2:0:3 On

FC 16:0.1 500110a00085768a<->50001fe1500b76dd vmhba2:1:3 On

FC 16:0.1 500110a00085768a<->50001fe1500b76df vmhba2:2:3 On

FC 16:0.1 500110a00085768a<->50001fe1500b76db vmhba2:3:3 On

Disk vmhba1:0:4 /dev/sdd (311296MB) has 8 paths and policy of Fixed

FC 16:0.0 500110a000857688<->50001fe1500b76d8 vmhba1:0:4 On

FC 16:0.0 500110a000857688<->50001fe1500b76dc vmhba1:1:4 On

FC 16:0.0 500110a000857688<->50001fe1500b76da vmhba1:2:4 On

FC 16:0.0 500110a000857688<->50001fe1500b76de vmhba1:3:4 On

FC 16:0.1 500110a00085768a<->50001fe1500b76d9 vmhba2:0:4 On active preferred

FC 16:0.1 500110a00085768a<->50001fe1500b76dd vmhba2:1:4 On

FC 16:0.1 500110a00085768a<->50001fe1500b76df vmhba2:2:4 On

FC 16:0.1 500110a00085768a<->50001fe1500b76db vmhba2:3:4 On

Disk vmhba1:0:5 /dev/sde (770048MB) has 8 paths and policy of Fixed

FC 16:0.0 500110a000857688<->50001fe1500b76d8 vmhba1:0:5 On

FC 16:0.0 500110a000857688<->50001fe1500b76dc vmhba1:1:5 On

FC 16:0.0 500110a000857688<->50001fe1500b76da vmhba1:2:5 On

FC 16:0.0 500110a000857688<->50001fe1500b76de vmhba1:3:5 On

FC 16:0.1 500110a00085768a<->50001fe1500b76d9 vmhba2:0:5 On

FC 16:0.1 500110a00085768a<->50001fe1500b76dd vmhba2:1:5 On active preferred

FC 16:0.1 500110a00085768a<->50001fe1500b76df vmhba2:2:5 On

FC 16:0.1 500110a00085768a<->50001fe1500b76db vmhba2:3:5 On

Disk vmhba1:0:6 /dev/sdf (311296MB) has 8 paths and policy of Fixed

FC 16:0.0 500110a000857688<->50001fe1500b76d8 vmhba1:0:6 On

FC 16:0.0 500110a000857688<->50001fe1500b76dc vmhba1:1:6 On

FC 16:0.0 500110a000857688<->50001fe1500b76da vmhba1:2:6 On active preferred

FC 16:0.0 500110a000857688<->50001fe1500b76de vmhba1:3:6 On

FC 16:0.1 500110a00085768a<->50001fe1500b76d9 vmhba2:0:6 On

FC 16:0.1 500110a00085768a<->50001fe1500b76dd vmhba2:1:6 On

FC 16:0.1 500110a00085768a<->50001fe1500b76df vmhba2:2:6 On

FC 16:0.1 500110a00085768a<->50001fe1500b76db vmhba2:3:6 On

Disk vmhba1:0:7 /dev/sdg (770048MB) has 8 paths and policy of Fixed

FC 16:0.0 500110a000857688<->50001fe1500b76d8 vmhba1:0:7 On

FC 16:0.0 500110a000857688<->50001fe1500b76dc vmhba1:1:7 On

FC 16:0.0 500110a000857688<->50001fe1500b76da vmhba1:2:7 On

FC 16:0.0 500110a000857688<->50001fe1500b76de vmhba1:3:7 On active preferred

FC 16:0.1 500110a00085768a<->50001fe1500b76d9 vmhba2:0:7 On

FC 16:0.1 500110a00085768a<->50001fe1500b76dd vmhba2:1:7 On

FC 16:0.1 500110a00085768a<->50001fe1500b76df vmhba2:2:7 On

FC 16:0.1 500110a00085768a<->50001fe1500b76db vmhba2:3:7 On

Disk vmhba1:0:8 /dev/sdh (311296MB) has 8 paths and policy of Fixed

FC 16:0.0 500110a000857688<->50001fe1500b76d8 vmhba1:0:8 On

FC 16:0.0 500110a000857688<->50001fe1500b76dc vmhba1:1:8 On

FC 16:0.0 500110a000857688<->50001fe1500b76da vmhba1:2:8 On

FC 16:0.0 500110a000857688<->50001fe1500b76de vmhba1:3:8 On

FC 16:0.1 500110a00085768a<->50001fe1500b76d9 vmhba2:0:8 On

FC 16:0.1 500110a00085768a<->50001fe1500b76dd vmhba2:1:8 On

FC 16:0.1 500110a00085768a<->50001fe1500b76df vmhba2:2:8 On

FC 16:0.1 500110a00085768a<->50001fe1500b76db vmhba2:3:8 On active preferred

Disk vmhba1:0:9 /dev/sdi (770048MB) has 8 paths and policy of Fixed

FC 16:0.0 500110a000857688<->50001fe1500b76d8 vmhba1:0:9 On

FC 16:0.0 500110a000857688<->50001fe1500b76dc vmhba1:1:9 On

FC 16:0.0 500110a000857688<->50001fe1500b76da vmhba1:2:9 On

FC 16:0.0 500110a000857688<->50001fe1500b76de vmhba1:3:9 On

FC 16:0.1 500110a00085768a<->50001fe1500b76d9 vmhba2:0:9 On

FC 16:0.1 500110a00085768a<->50001fe1500b76dd vmhba2:1:9 On

FC 16:0.1 500110a00085768a<->50001fe1500b76df vmhba2:2:9 On active preferred

FC 16:0.1 500110a00085768a<->50001fe1500b76db vmhba2:3:9 On

Disk vmhba1:0:10 /dev/sdj (155648MB) has 8 paths and policy of Fixed

FC 16:0.0 500110a000857688<->50001fe1500b76d8 vmhba1:0:10 On active preferred

FC 16:0.0 500110a000857688<->50001fe1500b76dc vmhba1:1:10 On

FC 16:0.0 500110a000857688<->50001fe1500b76da vmhba1:2:10 On

FC 16:0.0 500110a000857688<->50001fe1500b76de vmhba1:3:10 On

FC 16:0.1 500110a00085768a<->50001fe1500b76d9 vmhba2:0:10 On

FC 16:0.1 500110a00085768a<->50001fe1500b76dd vmhba2:1:10 On

FC 16:0.1 500110a00085768a<->50001fe1500b76df vmhba2:2:10 On

FC 16:0.1 500110a00085768a<->50001fe1500b76db vmhba2:3:10 On

Disk vmhba1:0:11 /dev/sdk (1024MB) has 8 paths and policy of Fixed

FC 16:0.0 500110a000857688<->50001fe1500b76d8 vmhba1:0:11 On

FC 16:0.0 500110a000857688<->50001fe1500b76dc vmhba1:1:11 On active preferred

FC 16:0.0 500110a000857688<->50001fe1500b76da vmhba1:2:11 On

FC 16:0.0 500110a000857688<->50001fe1500b76de vmhba1:3:11 On

FC 16:0.1 500110a00085768a<->50001fe1500b76d9 vmhba2:0:11 On

FC 16:0.1 500110a00085768a<->50001fe1500b76dd vmhba2:1:11 On

FC 16:0.1 500110a00085768a<->50001fe1500b76df vmhba2:2:11 On

FC 16:0.1 500110a00085768a<->50001fe1500b76db vmhba2:3:11 On

Disk vmhba1:0:12 /dev/sdl (1024MB) has 8 paths and policy of Fixed

FC 16:0.0 500110a000857688<->50001fe1500b76d8 vmhba1:0:12 On

FC 16:0.0 500110a000857688<->50001fe1500b76dc vmhba1:1:12 On

FC 16:0.0 500110a000857688<->50001fe1500b76da vmhba1:2:12 On

FC 16:0.0 500110a000857688<->50001fe1500b76de vmhba1:3:12 On

FC 16:0.1 500110a00085768a<->50001fe1500b76d9 vmhba2:0:12 On active preferred

FC 16:0.1 500110a00085768a<->50001fe1500b76dd vmhba2:1:12 On

FC 16:0.1 500110a00085768a<->50001fe1500b76df vmhba2:2:12 On

FC 16:0.1 500110a00085768a<->50001fe1500b76db vmhba2:3:12 On

Disk vmhba1:0:13 /dev/sdm (1024MB) has 8 paths and policy of Fixed

FC 16:0.0 500110a000857688<->50001fe1500b76d8 vmhba1:0:13 On

FC 16:0.0 500110a000857688<->50001fe1500b76dc vmhba1:1:13 On

FC 16:0.0 500110a000857688<->50001fe1500b76da vmhba1:2:13 On

FC 16:0.0 500110a000857688<->50001fe1500b76de vmhba1:3:13 On

FC 16:0.1 500110a00085768a<->50001fe1500b76d9 vmhba2:0:13 On

FC 16:0.1 500110a00085768a<->50001fe1500b76dd vmhba2:1:13 On active preferred

FC 16:0.1 500110a00085768a<->50001fe1500b76df vmhba2:2:13 On

FC 16:0.1 500110a00085768a<->50001fe1500b76db vmhba2:3:13 On

Disk vmhba1:0:14 /dev/sdn (1024MB) has 8 paths and policy of Fixed

FC 16:0.0 500110a000857688<->50001fe1500b76d8 vmhba1:0:14 On

FC 16:0.0 500110a000857688<->50001fe1500b76dc vmhba1:1:14 On

FC 16:0.0 500110a000857688<->50001fe1500b76da vmhba1:2:14 On active preferred

FC 16:0.0 500110a000857688<->50001fe1500b76de vmhba1:3:14 On

FC 16:0.1 500110a00085768a<->50001fe1500b76d9 vmhba2:0:14 On

FC 16:0.1 500110a00085768a<->50001fe1500b76dd vmhba2:1:14 On

FC 16:0.1 500110a00085768a<->50001fe1500b76df vmhba2:2:14 On

FC 16:0.1 500110a00085768a<->50001fe1500b76db vmhba2:3:14 On

Disk vmhba1:0:41 /dev/sdo (16384MB) has 8 paths and policy of Fixed

FC 16:0.0 500110a000857688<->50001fe1500b76d8 vmhba1:0:41 On active preferred

FC 16:0.0 500110a000857688<->50001fe1500b76dc vmhba1:1:41 On

FC 16:0.0 500110a000857688<->50001fe1500b76da vmhba1:2:41 On

FC 16:0.0 500110a000857688<->50001fe1500b76de vmhba1:3:41 On

FC 16:0.1 500110a00085768a<->50001fe1500b76d9 vmhba2:0:41 On

FC 16:0.1 500110a00085768a<->50001fe1500b76dd vmhba2:1:41 On

FC 16:0.1 500110a00085768a<->50001fe1500b76df vmhba2:2:41 On

FC 16:0.1 500110a00085768a<->50001fe1500b76db vmhba2:3:41 On

Disk vmhba1:0:42 /dev/sdp (40960MB) has 8 paths and policy of Fixed

FC 16:0.0 500110a000857688<->50001fe1500b76d8 vmhba1:0:42 On active preferred

FC 16:0.0 500110a000857688<->50001fe1500b76dc vmhba1:1:42 On

FC 16:0.0 500110a000857688<->50001fe1500b76da vmhba1:2:42 On

FC 16:0.0 500110a000857688<->50001fe1500b76de vmhba1:3:42 On

FC 16:0.1 500110a00085768a<->50001fe1500b76d9 vmhba2:0:42 On

FC 16:0.1 500110a00085768a<->50001fe1500b76dd vmhba2:1:42 On

FC 16:0.1 500110a00085768a<->50001fe1500b76df vmhba2:2:42 On

FC 16:0.1 500110a00085768a<->50001fe1500b76db vmhba2:3:42 On

Disk vmhba1:0:43 /dev/sdq (16384MB) has 8 paths and policy of Fixed

FC 16:0.0 500110a000857688<->50001fe1500b76d8 vmhba1:0:43 On active preferred

FC 16:0.0 500110a000857688<->50001fe1500b76dc vmhba1:1:43 On

FC 16:0.0 500110a000857688<->50001fe1500b76da vmhba1:2:43 On

FC 16:0.0 500110a000857688<->50001fe1500b76de vmhba1:3:43 On

FC 16:0.1 500110a00085768a<->50001fe1500b76d9 vmhba2:0:43 On

FC 16:0.1 500110a00085768a<->50001fe1500b76dd vmhba2:1:43 On

FC 16:0.1 500110a00085768a<->50001fe1500b76df vmhba2:2:43 On

FC 16:0.1 500110a00085768a<->50001fe1500b76db vmhba2:3:43 On

Disk vmhba1:0:44 /dev/sdr (81920MB) has 8 paths and policy of Fixed

FC 16:0.0 500110a000857688<->50001fe1500b76d8 vmhba1:0:44 On active preferred

FC 16:0.0 500110a000857688<->50001fe1500b76dc vmhba1:1:44 On

FC 16:0.0 500110a000857688<->50001fe1500b76da vmhba1:2:44 On

FC 16:0.0 500110a000857688<->50001fe1500b76de vmhba1:3:44 On

FC 16:0.1 500110a00085768a<->50001fe1500b76d9 vmhba2:0:44 On

FC 16:0.1 500110a00085768a<->50001fe1500b76dd vmhba2:1:44 On

FC 16:0.1 500110a00085768a<->50001fe1500b76df vmhba2:2:44 On

FC 16:0.1 500110a00085768a<->50001fe1500b76db vmhba2:3:44 On

RAID Controller (SCSI-3) vmhba1:4:0 (0MB) has 8 paths and policy of Fixed

FC 16:0.0 500110a000857688<->50001fe1500c07a8 vmhba1:4:0 On active preferred

FC 16:0.0 500110a000857688<->50001fe1500c07ac vmhba1:5:0 On

FC 16:0.0 500110a000857688<->50001fe1500c07aa vmhba1:6:0 On

FC 16:0.0 500110a000857688<->50001fe1500c07ae vmhba1:7:0 On

FC 16:0.1 500110a00085768a<->50001fe1500c07a9 vmhba2:4:0 On

FC 16:0.1 500110a00085768a<->50001fe1500c07ad vmhba2:5:0 On

FC 16:0.1 500110a00085768a<->50001fe1500c07ab vmhba2:6:0 On

FC 16:0.1 500110a00085768a<->50001fe1500c07af vmhba2:7:0 On

I did the manual load balancing using Virtual Center by doing a manage path on every LUN.
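As a sanity check that the Fixed preferred paths really are spread across the array ports, the esxcfg-mpath -l listing above can be parsed and the preferred paths counted per EVA host port. A minimal sketch in Python (my own illustration; the regex is an assumption about the exact field layout, and the sample lines are copied from the listing above):

```python
import re
from collections import Counter

def preferred_port_counts(mpath_output):
    """Count how many LUNs prefer each EVA host-port WWPN, based on
    the 'On active preferred' flag in esxcfg-mpath -l output."""
    counts = Counter()
    # Lines look like: FC 16:0.0 <hba_wwpn><-><port_wwpn> vmhbaX:Y:Z On active preferred
    pattern = re.compile(r"FC\s+\S+\s+\S+<->(\S+)\s+\S+\s+On active preferred")
    for line in mpath_output.splitlines():
        m = pattern.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

# Sample lines copied from the listing above:
sample = """\
FC 16:0.0 500110a000857688<->50001fe1500b76d8 vmhba1:0:1 On active preferred
FC 16:0.1 500110a00085768a<->50001fe1500b76d9 vmhba2:0:1 On
FC 16:0.0 500110a000857688<->50001fe1500b76dc vmhba1:1:3 On active preferred
"""
print(preferred_port_counts(sample))
```

Run against the full listing, an even spread over the eight array ports means the manual balancing took.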

Now here is the output from esxtop:

ADAPTR CID TID LID WID NCHNS NTGTS NLUNS NVMS AQLEN LQLEN WQLEN ACTV QUED %USD LOAD CMDS/s READS/s WRITES/s MBREAD/s MBWRTN/s

vmhba0 - - - - 1 1 1 1 128 0 0 0 0 0 0.00 1.83 0.00 1.83 0.00 0.02

vmhba1 0 0 - - 1 1 19 30 4096 0 0 35 0 0 0.00 199.43 8.01 191.42 0.69 15.33

vmhba1 0 1 - - 1 1 0 0 4096 0 0 0 0 0 0.00 0.00 0.00 0.00 0.00 0.00

vmhba1 0 2 - - 1 1 0 0 4096 0 0 0 0 0 0.00 0.00 0.00 0.00 0.00 0.00

vmhba1 0 3 - - 1 1 0 0 4096 0 0 0 0 0 0.00 0.00 0.00 0.00 0.00 0.00

vmhba1 0 4 - - 1 1 1 1 4096 0 0 0 0 0 0.00 0.00 0.00 0.00 0.00 0.00

vmhba1 0 5 - - 1 1 0 0 4096 0 0 0 0 0 0.00 0.00 0.00 0.00 0.00 0.00

vmhba1 0 6 - - 1 1 0 0 4096 0 0 0 0 0 0.00 0.00 0.00 0.00 0.00 0.00

vmhba1 0 7 - - 1 1 0 0 4096 0 0 0 0 0 0.00 0.00 0.00 0.00 0.00 0.00

vmhba2 0 0 - - 1 1 0 0 4096 0 0 0 0 0 0.00 0.00 0.00 0.00 0.00 0.00

vmhba2 0 1 - - 1 1 0 0 4096 0 0 0 0 0 0.00 0.00 0.00 0.00 0.00 0.00

vmhba2 0 2 - - 1 1 0 0 4096 0 0 0 0 0 0.00 0.00 0.00 0.00 0.00 0.00

vmhba2 0 3 - - 1 1 0 0 4096 0 0 0 0 0 0.00 0.00 0.00 0.00 0.00 0.00

vmhba2 0 4 - - 1 1 0 0 4096 0 0 0 0 0 0.00 0.00 0.00 0.00 0.00 0.00

vmhba2 0 5 - - 1 1 0 0 4096 0 0 0 0 0 0.00 0.00 0.00 0.00 0.00 0.00

vmhba2 0 6 - - 1 1 0 0 4096 0 0 0 0 0 0.00 0.00 0.00 0.00 0.00 0.00

vmhba2 0 7 - - 1 1 0 0 4096 0 0 0 0 0 0.00 0.00 0.00 0.00 0.00 0.00

So it looks like my load balancing is not working, or esxtop is not reporting the right results.

Somebody please help.

Regards,

Raymond

7 Replies
Texiwill
Leadership

Hello,

Depending on the firmware, an EVA8000 is not actually active/active. There is still one active SP and one passive; if data is sent to the passive SP, it is forwarded over the backplane to the active SP. This is all fairly transparent and extremely fast.

You may want to verify this with HP as well.

In addition, ESX does not load balance on its own. You have to manually set the preferred path per datastore for any active/active array; all ESX provides is failover.

Best regards,

Edward

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
system_ict
Contributor

Hi Edward,

Thanks for the reply.

I manually load balanced the LUNs over the 8 host ports of the EVA8000, as you can see from the esxcfg-mpath -l output. The EVA8000 is on XCS 6000. esxtop is showing activity on one target only, so my question is: why?

Regards,

Raymond


whynotq
Commander

There are already some posts out there with scripts to help load balance the vmhbas:

http://www.vmware.com/community/thread.jspa?messageID=681870&#681870

http://www.vmware.com/community/thread.jspa?messageID=638291&#638291

system_ict
Contributor

Hi whynotq,

Thanks for the reply.

Still no indication of why esxtop is reporting all LUNs against one target. As you can see from the esxcfg-mpath output, the load balancing is set, but esxtop is not showing it.

Regards,

Raymond

Jae_Ellers
Virtuoso

Somewhere in the docs for esxtop there is a reference to reporting on the "canonical" path name. So even though the data is flowing over other switch ports, esxtop reports it all against the canonical path. I believe this means it's impossible to get accurate per-path info out of esxtop. You basically have to assume the active path is working and then base your measurements on the canonical path name.

The canonical path is usually the lowest-numbered path, but it can be higher-numbered if it was the first path discovered. When you run esxcfg-mpath -l, the canonical path is the one reported in the first line of each LUN:

Disk vmhba1:0:1 /dev/sda (512000MB) has 4 paths and policy of Fixed

FC 3:1.0 210000e08b82d429<->50001fe1500be089 vmhba1:0:1 On active preferred

FC 3:1.0 210000e08b82d429<->50001fe1500be08b vmhba1:1:1 On

FC 3:1.0 210000e08b82d429<->50001fe1500be08d vmhba1:2:1 On

FC 3:1.0 210000e08b82d429<->50001fe1500be08f vmhba1:3:1 On

So the canonical name for the above LUN is vmhba1:0:1, and it will show this way in esxtop regardless of which path is set as preferred.
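To make that concrete, here is a small Python sketch (my own illustration, not anything esxtop does internally) that pairs each LUN's canonical name from the "Disk" header line with whichever path is currently flagged "On active preferred", using the esxcfg-mpath -l layout shown above:

```python
import re

def canonical_vs_preferred(mpath_output):
    """Map each LUN's canonical name (taken from the 'Disk' header line)
    to the path currently flagged 'On active preferred'."""
    mapping, canonical = {}, None
    for line in mpath_output.splitlines():
        header = re.match(r"Disk (\S+)\s", line)
        if header:
            canonical = header.group(1)
        elif "On active preferred" in line and canonical:
            mapping[canonical] = line.split()[-4]  # path name is the token before 'On'
    return mapping

# LUN 3 from the original post: canonical vmhba1:0:3, preferred vmhba1:1:3
sample = """\
Disk vmhba1:0:3 /dev/sdc (770048MB) has 8 paths and policy of Fixed
FC 16:0.0 500110a000857688<->50001fe1500b76d8 vmhba1:0:3 On
FC 16:0.0 500110a000857688<->50001fe1500b76dc vmhba1:1:3 On active preferred
"""
print(canonical_vs_preferred(sample))
```

Even when the two names differ, as here, esxtop still charges all the I/O to the canonical name.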

-=-=-=-=-=-=-=-=-=-=-=-=-=-=- http://blog.mr-vm.com http://www.vmprofessional.com -=-=-=-=-=-=-=-=-=-=-=-=-=-=-
BUGCHK
Commander

"Depending on the firmware an EVA8000 is not actually active/active."

The A/A behaviour has always been the same on the EVA4000, 6000 and 8000.

"There is still one active SP and one passive, just that if data is sent to the passive SP, it is forwarded over the backplane to the active SP."

Right, the technical term used is asymmetric active/active. According to the EVA engineers it is based on an industry standard developed some years ago.

"ESX does not load balance on its own."

Yes. Depending on the EVA firmware, however, the array can transfer virtual disk ownership from one controller to the other to take load off the mirror ports. To be effective, of course, all ESX hosts should access a given virtual disk through the same controller.

You can check this via esxcfg-mpath -l:

Disk vmhba1:0:1 /dev/sda (10240MB) has 8 paths and policy of Fixed

FC 16:0.0 500110a000857688<->50001fe1500b76d8 vmhba1:0:1 On active preferred

FC 16:0.0 500110a000857688<->50001fe1500b76dc vmhba1:1:1 On

FC 16:0.0 500110a000857688<->50001fe1500b76da vmhba1:2:1 On

FC 16:0.0 500110a000857688<->50001fe1500b76de vmhba1:3:1 On

FC 16:0.1 500110a00085768a<->50001fe1500b76d9 vmhba2:0:1 On

FC 16:0.1 500110a00085768a<->50001fe1500b76dd vmhba2:1:1 On

FC 16:0.1 500110a00085768a<->50001fe1500b76df vmhba2:2:1 On

FC 16:0.1 500110a00085768a<->50001fe1500b76db vmhba2:3:1 On

If the port WWPN's last character is 8, 9, A or B, it belongs to one controller; C, D, E and F belong to the other.
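That last-nibble rule can be expressed as a one-liner; the "A"/"B" labels here are arbitrary placeholders for the two controllers, not official EVA names:

```python
def eva_controller(port_wwpn):
    """Return which EVA controller a host-port WWPN belongs to:
    WWPNs ending in 8/9/a/b are one controller, c/d/e/f the other."""
    return "A" if port_wwpn[-1].lower() in "89ab" else "B"

# From the listing above: ...d8 and ...da share a controller,
# while ...dc sits on the other one.
print(eva_controller("50001fe1500b76d8"), eva_controller("50001fe1500b76dc"))
```

Note that each HBA in the original output reaches ports on both controllers, so spreading preferred paths per HBA alone is not enough to keep a LUN on its owning controller.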

system_ict
Contributor

Hi,

So as I see it, my setup is correct, but esxtop just does not show it. Thanks for the help.

Regards,

Raymond
