VMware Cloud Community
Johannes1011
Contributor

Proactive Multicast test failed - Desired bandwidth = 0.00 MB/s

Hello everyone,

We are currently running VSAN with 3 nodes in a small test environment. Everything works fine so far, but the multicast performance test is proving a bit tricky.

According to the thread Re: Why proactive multicast performance test fails when health check passes?, it is confirmed as a bug,

but when I perform the test, the desired bandwidth is 0.00 MB/s, as you can see in the attached picture.

This confuses me a little because I thought this value would be the maximum bandwidth of the NIC.

So everything actually works fine, but for further test scenarios with different configurations and NICs it would be nice to have this test working.

The error is also slightly different when I compare my test results to those in the other thread.

So is it possible that there is another error or misconfiguration on my system, or do you think it is the same problem as the bug?

At this point I have followed the instructions from the Virtual SAN Diagnostics & Troubleshooting Reference Manual and tested all points from the checklist summary for Virtual SAN networking.
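For anyone who wants to reproduce the basic checks, here is a minimal sketch from the ESXi shell (vmk3 as the vSAN vmknic and UDP port 23451, the default vSAN agent group port, are assumptions here; adjust them to your setup):

  # Show the vSAN-enabled VMkernel interfaces and their multicast group settings
  $ esxcli vsan network list

  # Watch for incoming vSAN multicast traffic on the vSAN vmknic
  $ tcpdump-uw -i vmk3 -n udp port 23451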

Thanks for your help

Johannes

1 Reply
GEJN
Contributor

VMware Virtual SAN 6.1 Release Notes

  • Multicast performance test of Virtual SAN health check does not run on Virtual SAN network
    In some cases, depending on the routing configuration of ESXi hosts, the network multicast performance test does not run on the Virtual SAN network. 

    Workaround: Use the Virtual SAN network as the only network setting for the ESXi hosts, and conduct the network multicast performance test based on this configuration.

    If ESXi hosts have multiple network settings, you can also follow the steps listed in this example. Assume that Virtual SAN runs on the 192.168.0.0 network.

    1. Bind the multicast group address to this network on each host:

      $ esxcli network ip route ipv4 add -n 224.2.3.4/32 -g 192.168.0.0

    2. Check the routing table: $ esxcli network ip route ipv4 list
      Network      Netmask          Gateway        Interface  Source
      -----------  ---------------  -------------  ---------  ------
      default      0.0.0.0          10.160.63.253  vmk0       DHCP
      10.160.32.0  255.255.224.0    0.0.0.0        vmk0       MANUAL
      192.168.0.0  255.255.255.0    0.0.0.0        vmk3       MANUAL
      224.2.3.4    255.255.255.255  192.168.0.0    vmk3       MANUAL
    3. Run the proactive multicast network performance test, and check the result.

    4. After the test is complete, recover the routing table: $ esxcli network ip route ipv4 remove -n 224.2.3.4/32 -g 192.168.0.0
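    After step 4, you can list the routes again to confirm the temporary entry is gone (a quick sanity check, not part of the release notes):

      $ esxcli network ip route ipv4 list | grep 224.2.3.4

    No output means the routing table is back to its original state.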