VMware Communities
ddamerjian
Enthusiast

(Possibly large) packets are getting dropped in VMware Workstation network between VMs

I am stumped by an issue and I really hope someone can step in and help solve it. Google searches have not turned up anything that provides a solution.

I have a set of Linux-based VMs that talk to each other over a network and together make up a product that I support (I work for Cisco) called QvPC-DI (DI = Distributed Instance). It is basically a mobile gateway used by service providers, and I am trying to get it running on this platform for internal testing and experimentation. It works 100% of the time when I deploy only a single VM (strictly speaking two VMs, but the first is a controller/management VM that does not handle call traffic), so that ALL THE PROCESSING OF A CALL HAPPENS ON THAT ONE VM. BUT when I try with 2 VMs, where traffic must pass between processes on both VMs throughout call setup (and teardown), packets are sent from one VM but, as far as I can tell, never reach the other VM. This is also 100% reproducible.

Jumbo frames have been suggested to me as a possible factor. I can't tell you for sure how large the packets being sent back and forth are, but they could easily be larger than 1500 bytes. My working theory is that VMware Workstation is dropping these packets because of their size, but I have no solid proof of that even with all the debug logging turned on in my product, and I have not been able to find the name of any parameter that controls the maximum packet size allowed.
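For reference, this is the kind of test I have in mind to try to confirm that theory, assuming Linux guests (<VM-B-IP> below is just a placeholder for the other VM's address):

ping -M do -c 3 -s 1472 <VM-B-IP>   # fragmentation prohibited; 1472 + 28 bytes of headers = a 1500-byte IP packet, which should pass

ping -M do -c 3 -s 1600 <VM-B-IP>   # a 1628-byte packet; if this is silently lost, something in the virtual network is dropping it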

My questions are:

Is VMware dropping the packets, and if so, how can I actually see that?

Is there a "name" for this internal network (e.g. Open vSwitch, or something else)?

Is there an actual limit to the size of packets allowed on the network?

Is there a configurable parameter that can increase the maximum packet size allowed on the network?

Again, this is VMware Workstation Pro, NOT vSphere or any other VMware product.

thanks a lot!

dd

5 Replies
louyo
Virtuoso

It seems to me (not 100% sure) that Workstation does not support jumbo frames (anything over 1500 bytes). There does not seem to be an MTU setting, unless there is a vmx file entry for it.

I would guess Ethereal (Wireshark) is your best bet for debugging. If you need jumbo frames, ESX may be your best option.

Lou

bluefirestorm
Champion

The default setting in the vmx is e1000, which usually appears as an Intel PRO/1000 MT or 82545EM adapter. It is not in the list of NICs without jumbo packet support, but you might need to use the Intel drivers and PROSet software.

https://www.intel.com/content/www/us/en/support/articles/000006639/network-and-i-o/ethernet-products...

Alternatively, you could also try using e1000e or vmxnet3.

e1000e appears as an Intel 82574L NIC. As with e1000, you can get drivers for these from the Intel website instead of using the driver that ships with the guest OS.

vmxnet3 requires that VMware Tools be installed, as the vmxnet3 driver comes with it.

You can change the adapter by editing the appropriate virtual adapter(s) in the vmx configuration file.

Examples:

ethernet0.virtualDev = "e1000e"

ethernet0.virtualDev = "vmxnet3"

ddamerjian
Enthusiast

Thank you very much. I have been using tcpdump as well as some internal debug captures to try to narrow things down. IF that is a restriction, then I really need VMware to tell me so, so I don't have to do all this testing (though it is certainly valuable for learning). And why would it be a restriction? See my response to the next poster below.

ddamerjian
Enthusiast

Thanks very much, again, for responding to one of my posts; I appreciate it.

I have actually already been using vmxnet3 for all my adapters in the .vmx config file (replacing e1000), as that was a requirement for us. You mention VMware Tools, but I did not install VMware Tools and it still "works" (or maybe it doesn't fully work?).
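(As a side note, this is how I would check where the guest's vmxnet3 driver is coming from; recent Linux kernels ship it in-tree, which may be why it "works" without VMware Tools:)

modinfo vmxnet3 | grep -E '^(filename|version)'   # a path under /lib/modules/<kernel>/... means it shipped with the kernel

lsmod | grep vmxnet3                              # confirms the module is actually loaded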

OK so I did a bunch of testing and here is what I learned ...

I wanted to determine exactly what packet size would fail to make it from one VM to another, so I set a very high MTU and tried different ping sizes until I found the limit:

daves-SIP-MIP:card1-cpu0# ip link set mtu 2000 dev cpeth0

daves-SIP-MIP:card1-cpu0# ifconfig

cpeth0    Link encap:Ethernet  HWaddr 00:0C:29:F4:01:B3 

          inet addr:172.16.0.1  Bcast:172.16.63.255  Mask:255.255.192.0

          UP BROADCAST RUNNING MULTICAST MTU:2000 Metric:1

          RX packets:1123756 errors:0 dropped:572 overruns:0 frame:0

          TX packets:698947 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:20000

          RX bytes:1099416554 (1.0 GiB)  TX bytes:163267988 (155.7 MiB)

daves-SIP-MIP:card1-cpu0# ping 172.16.2.1 -s 1554

PING 172.16.2.1 (172.16.2.1) 1554(1582) bytes of data.

1562 bytes from 172.16.2.1: icmp_seq=1 ttl=64 time=5.59 ms dscp=0 [BE]

1562 bytes from 172.16.2.1: icmp_seq=2 ttl=64 time=2.14 ms dscp=0 [BE]

^C

--- 172.16.2.1 ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1000ms

rtt min/avg/max/mdev = 2.141/3.865/5.590/1.725 ms

daves-SIP-MIP:card1-cpu0# ping 172.16.2.1 -s 1555

PING 172.16.2.1 (172.16.2.1) 1555(1583) bytes of data.

^C

--- 172.16.2.1 ping statistics ---

2 packets transmitted, 0 received, 100% packet loss, time 1001ms
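(For what it's worth, a sweep like the following would have found the cutoff automatically; same target IP as above, one ping per payload size:)

for s in $(seq 1500 1600); do ping -c 1 -W 1 -s "$s" 172.16.2.1 > /dev/null 2>&1 || { echo "first failing payload size: $s"; break; }; done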

To confirm that fragmentation would occur when the packet size exceeded the MTU, and that the fragments would then make it across, I tried the following and captured the packets with tcpdump:

daves-SIP-MIP:card1-cpu0# ifconfig

cpeth0    Link encap:Ethernet  HWaddr 00:0C:29:F4:01:B3 

          inet addr:172.16.0.1  Bcast:172.16.63.255  Mask:255.255.192.0

          UP BROADCAST RUNNING MULTICAST MTU:1583  Metric:1

          RX packets:1102960 errors:0 dropped:560 overruns:0 frame:0

          TX packets:686908 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:20000

          RX bytes:1079115171 (1.0 GiB)  TX bytes:161164938 (153.6 MiB)

daves-SIP-MIP:card1-cpu0# ping 172.16.2.1 -s 1554

PING 172.16.2.1 (172.16.2.1) 1554(1582) bytes of data.

1562 bytes from 172.16.2.1: icmp_seq=1 ttl=64 time=1.69 ms dscp=0 [BE]

1562 bytes from 172.16.2.1: icmp_seq=2 ttl=64 time=2.79 ms dscp=0 [BE]

1562 bytes from 172.16.2.1: icmp_seq=3 ttl=64 time=1.83 ms dscp=0 [BE]

daves-SIP-MIP:card1-cpu0# ping 172.16.2.1 -s 1555

PING 172.16.2.1 (172.16.2.1) 1555(1583) bytes of data.

^C

--- 172.16.2.1 ping statistics ---

1727 packets transmitted, 0 received, 100% packet loss, time 1735833ms

daves-SIP-MIP:card1-cpu0# ping 172.16.2.1 -s 1556

PING 172.16.2.1 (172.16.2.1) 1556(1584) bytes of data.

1564 bytes from 172.16.2.1: icmp_seq=1 ttl=64 time=1.31 ms dscp=0 [BE]

1564 bytes from 172.16.2.1: icmp_seq=2 ttl=64 time=2.52 ms dscp=0 [BE]

1564 bytes from 172.16.2.1: icmp_seq=3 ttl=64 time=1.47 ms dscp=0 [BE]

Looking at the PCAPs, I see:

  • for payload size 1556, the total size 1584 > 1583 (the MTU), so it does fragment, and it succeeds because each of the two resulting packets is under 1583 (one large, one tiny)
  • for payload size 1555, the total size 1583 = 1583 (the MTU), so it does NOT fragment and is sent at that size, but it does NOT make it across since the network won't pass that size
  • for payload size 1554, the total size 1582 < 1583 (the MTU), so it does NOT fragment and is sent at that size, and the network does pass it through (see also the fragmentation-disabled check below)
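(A sketch of that fragmentation-disabled check, using the same target and sizes; with -M do the guest cannot quietly split anything:)

ping -M do -c 3 -s 1556 172.16.2.1   # 1584 bytes > MTU 1583: ping reports "message too long" locally instead of fragmenting

ping -M do -c 3 -s 1554 172.16.2.1   # 1582 bytes fits in one frame: replies come back only if the virtual network accepts that size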

So I'd like VMware to comment on this. At the same time, what I can't figure out is that by DEFAULT the MTU is actually set to 1420:

daves-SIP-MIP:card1-cpu0# ifconfig

cpeth0    Link encap:Ethernet  HWaddr 00:0C:29:F4:01:B3 

          inet addr:172.16.0.1  Bcast:172.16.63.255  Mask:255.255.192.0

          UP BROADCAST RUNNING MULTICAST MTU:1420 Metric:1

          RX packets:120308 errors:0 dropped:43 overruns:0 frame:0

          TX packets:83594 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:20000

          RX bytes:123095511 (117.3 MiB)  TX bytes:53932940 (51.4 MiB)

... and yet somehow our software still appears to send larger packets in the first place, which are then likely getting dropped per my observations. My understanding is that we rely on being able to send large packets between VMs for efficiency's sake. There are other "interfaces" in the VMs, but everything I can see shows them all at 1500 or under. I need to get some help internally, but I have not been able to get the proper attention so far, hence I am taking various approaches to try to draw at least some conclusions about what might be going on.
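(To rule the other interfaces out, this is the one-liner I use to list every interface's MTU at once, assuming iproute2 in the guest:)

ip -o link show | awk '{print $2, $4, $5}'   # prints "<ifname>: mtu <value>" for every interface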

thanks

dd

ddamerjian
Enthusiast
Accepted Solution

More information....

I have done some more testing and discovered that the maximum packet size allowed on the network varies depending on whether I am pinging the VMnetX adapter itself from a VM (and vice versa) or pinging between VMs that are on that VMnetX network.

1) When pinging between a VM and the VMnetX adapter itself, the largest payload size allowed is clearly 1558 (1586 including the IP and ICMP headers). I see this with both host adapters (VMnet7 in the first example below) and NAT adapters (VMnet8 in the second example):

Ethernet adapter VMware Network Adapter VMnet7:

   Connection-specific DNS Suffix  . :

   Link-local IPv6 Address . . . . . : fe80::f148:4a59:b330:38e3%34

   IPv4 Address. . . . . . . . . . . : 172.16.0.254

   Subnet Mask . . . . . . . . . . . : 255.255.192.0

   Default Gateway . . . . . . . . . :

daves-SIP-MIP:card1-cpu0# ifconfig

cpeth0    Link encap:Ethernet  HWaddr 00:0C:29:F4:01:B3 

          inet addr:172.16.0.1  Bcast:172.16.63.255  Mask:255.255.192.0

          UP BROADCAST RUNNING MULTICAST  MTU:4000 Metric:1

daves-SIP-MIP:card1-cpu0# ping 172.16.0.254 -s 1559

PING 172.16.0.254 (172.16.0.254) 1559(1587) bytes of data.

^C

--- 172.16.0.254 ping statistics ---

6 packets transmitted, 0 received, 100% packet loss, time 5003ms

daves-SIP-MIP:card1-cpu0# ping 172.16.0.254 -s 1558

PING 172.16.0.254 (172.16.0.254) 1558(1586) bytes of data.

1566 bytes from 172.16.0.254: icmp_seq=1 ttl=128 time=0.764 ms dscp=0 [BE]

1566 bytes from 172.16.0.254: icmp_seq=2 ttl=128 time=0.557 ms dscp=0 [BE]

1566 bytes from 172.16.0.254: icmp_seq=3 ttl=128 time=1.08 ms dscp=0 [BE]

1566 bytes from 172.16.0.254: icmp_seq=4 ttl=128 time=0.993 ms dscp=0 [BE]

1566 bytes from 172.16.0.254: icmp_seq=5 ttl=128 time=0.774 ms dscp=0 [BE]

1566 bytes from 172.16.0.254: icmp_seq=6 ttl=128 time=1.02 ms dscp=0 [BE]

^C

--- 172.16.0.254 ping statistics ---

6 packets transmitted, 6 received, 0% packet loss, time 5003ms

rtt min/avg/max/mdev = 0.557/0.865/1.081/0.183 ms

---------------------------------------------------------------------------

Ethernet adapter VMware Network Adapter VMnet8:

   Connection-specific DNS Suffix  . :

   Link-local IPv6 Address . . . . . : fe80::a913:a1da:e69c:44bf%32

   IPv4 Address. . . . . . . . . . . : 192.168.40.1

   Subnet Mask . . . . . . . . . . . : 255.255.255.0

   Default Gateway . . . . . . . . . :

UE2:~# ifconfig

eth0      Link encap:Ethernet  HWaddr 00:0C:29:5F:12:4A 

         inet addr:192.168.40.131  Bcast:192.168.40.255  Mask:255.255.255.0

          inet6 addr: fe80::20c:29ff:fe5f:124a/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:4000  Metric:1

UE2:~# ping 192.168.40.1 -s 1559

PING 192.168.40.1 (192.168.40.1) 1559(1587) bytes of data.

^C

--- 192.168.40.1 ping statistics ---

44 packets transmitted, 0 received, 100% packet loss, time 43022ms

UE2:~# ping 192.168.40.1 -s 1558

PING 192.168.40.1 (192.168.40.1) 1558(1586) bytes of data.

1566 bytes from 192.168.40.1: icmp_seq=1 ttl=128 time=0.387 ms

1566 bytes from 192.168.40.1: icmp_seq=2 ttl=128 time=0.387 ms

1566 bytes from 192.168.40.1: icmp_seq=3 ttl=128 time=0.334 ms

1566 bytes from 192.168.40.1: icmp_seq=4 ttl=128 time=0.338 ms

1566 bytes from 192.168.40.1: icmp_seq=5 ttl=128 time=0.315 ms

1566 bytes from 192.168.40.1: icmp_seq=6 ttl=128 time=0.359 ms

1566 bytes from 192.168.40.1: icmp_seq=7 ttl=128 time=0.594 ms

^C

--- 192.168.40.1 ping statistics ---

7 packets transmitted, 7 received, 0% packet loss, time 5999ms

rtt min/avg/max/mdev = 0.315/0.387/0.594/0.090 ms

When it fails, tcpdump on the VM clearly shows the ICMP packet being sent, while Wireshark capturing on the VMnetX interface that should receive it sees NOTHING, so the packet is being dropped somewhere in between.
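(For anyone repeating this, these are the kinds of captures I mean, assuming the interface names shown above; on the Windows host you can equally just select the "VMware Network Adapter VMnetX" interface in the Wireshark GUI:)

tcpdump -i cpeth0 -n icmp and host 172.16.0.254        # inside the sending VM: the oversized echo requests do show up here

tshark -i "VMware Network Adapter VMnet7" -f icmp      # on the Windows host (if tshark is installed): nothing arrives for the failing sizes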

2) When pinging between VMs themselves, I have seen two different results.

2a) In the first tests, using the product VMs described earlier in this thread, the largest payload allowed was 1554, and I have already provided example output for that above.

2b) In the tests I just did between plain Linux VMs, the limit is 1476 (1504 including the IP and ICMP headers), so a payload of 1477 or greater fails. Here are pings from UE2 to UE1:

UE2:~# ifconfig

eth0      Link encap:Ethernet  HWaddr 00:0C:29:5F:12:4A 

          inet addr:192.168.40.131  Bcast:192.168.40.255  Mask:255.255.255.0

          inet6 addr: fe80::20c:29ff:fe5f:124a/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST MTU:4000  Metric:1

UE2:~# ping 192.168.40.130 -s 1476

PING 192.168.40.130 (192.168.40.130) 1476(1504) bytes of data.

1484 bytes from 192.168.40.130: icmp_seq=1 ttl=64 time=0.385 ms

1484 bytes from 192.168.40.130: icmp_seq=2 ttl=64 time=0.811 ms

^C

--- 192.168.40.130 ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1000ms

rtt min/avg/max/mdev = 0.385/0.598/0.811/0.213 ms

UE2:~# ping 192.168.40.130 -s 1477

PING 192.168.40.130 (192.168.40.130) 1477(1505) bytes of data.

^C

--- 192.168.40.130 ping statistics ---

5 packets transmitted, 0 received, 100% packet loss, time 4000ms

UE1:~# ifconfig

eth0      Link encap:Ethernet  HWaddr 00:0C:29:2E:AB:8E 

          inet addr:192.168.40.130  Bcast:192.168.40.255  Mask:255.255.255.0

          inet6 addr: fe80::20c:29ff:fe2e:ab8e/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST MTU:4000  Metric:1

Again, tcpdumps taken on both the sending (UE2) and receiving (UE1) sides show the packets leaving UE2 but never arriving at UE1.

I am not sure why the results differ between 2a and 2b, but each result was consistent within its own setup.

In summary, the maximum packet size allowed over the network came out to either 1476 (1504), 1554 (1582), or 1558 (1586) bytes, quoted as ICMP payload (total IP packet) size.

Finally, after talking with VMware support, they have now told me that "jumbo frames" are not supported in VMware Workstation Pro and that there is no planned support for them in the future (they have discussed it). They pointed me to the following document, which does capture this information on p. 32:  https://www.vmware.com/pdf/ws7_performance.pdf

"Jumbo frames are not supported in Workstation, even though some driver user interfaces might offer the option to enable them"

Google searches indicate that jumbo frames are anything larger than 1500 bytes, so the 1504-byte limit (including IP/ICMP headers) I observed between plain Linux VMs is the closest match in my testing to what VMware is saying.
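(That also matches what I saw directly: the guest driver happily accepts a jumbo MTU even though the virtual network will not carry jumbo frames. For example, with the same interfaces as above:)

ip link set dev eth0 mtu 9000       # the vmxnet3/e1000 driver accepts this without complaint

ping -M do -s 8972 192.168.40.130   # ...but a full 9000-byte packet never makes it across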
