VMware Cloud Community
groundsea
Enthusiast

Network problem with SR-IOV in VIO 3.0

Hi experts,

I have deployed a VIO 3.0 environment based on DVS. I want to use SR-IOV VFs (Intel 82599) to improve network performance.

Currently the VM has already booted up, and the VF driver was loaded successfully. From vSphere, I can see the VF is in use with the command "esxcli network sriovnic vf list".
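For reference, the VF list looked something like this (illustrative output; the PCI address and world ID below are placeholders, not my exact values):

[root@109:~] esxcli network sriovnic vf list -n vmnic8
VF ID  Active  PCI Address   Owner World ID
-----  ------  ------------  --------------
0      true    005:16.0      1000052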

After I configured an IP address on the interface, it can't reach other servers, whether the other server is located on the same host or not.

I captured packets on the physical switch and found that the VM with the VF could send the ARP request out, and the other server did send an ARP reply, but the VM never received the ARP reply.

I also captured packets on the virtual switch's uplink interface. When the other server is located on the same host, the ARP reply packets can be seen on the uplink interface; but when the other server is located on a different host, the ARP reply packets never appear there.
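(In case it helps anyone reproduce this: I captured on the uplink from the ESXi shell with pktcap-uw, with something like the line below. vmnic8 is just the uplink in my setup, and 0x0806 filters for ARP frames.)

[root@109:~] pktcap-uw --uplink vmnic8 --ethtype 0x0806 -o /tmp/arp_uplink.pcap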

I have been puzzled by this for a long time... Could anyone give me some advice? Any help would be appreciated!

Haifeng.

10 Replies
lserpietri
Enthusiast

Hi Haifeng,

Did you follow this guide: VMware Integrated OpenStack Information?

Thank you!
Luca

groundsea
Enthusiast

Hi Luca,

Thanks for your rapid response.

Yes, I did follow this guide, but I think it is missing something: it doesn't mention the change needed in nova.conf.

If I don't modify the pci_alias configuration in nova.conf from [{"name": "default"}] to [{"name": "vf", "product_id": "10f8", "vendor_id": "8086", "device_type": "type-VF"}], then the VM can't be deployed successfully.
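So the relevant nova.conf fragment on my compute nodes ends up looking like this (the vendor/product IDs are the ones from my 82599 cards; adjust them for other NICs, and note that newer OpenStack releases moved this to an "alias" option under the [pci] section):

# /etc/nova/nova.conf
pci_alias = [{"name": "vf", "product_id": "10f8", "vendor_id": "8086", "device_type": "type-VF"}]

The flavor then requests a VF through the matching extra spec, e.g. pci_passthrough:alias=vf:1.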

Currently the VM is created successfully, but it can't receive any packets. -_-

gjayavelu
VMware Employee

Since the VMs could be deployed with VFs, it seems there is no issue on the VIO side.

For the connectivity issue, have you looked at the physical port configuration?

Is there any VLAN configuration on the switch port? Can you check your native/allowed VLAN settings?

groundsea
Enthusiast

Hi gjayavelu,

Yes, I have configured the VLAN on the portgroup, and the physical switch is also configured.

Currently the physical switch receives the ARP request packets that the SR-IOV VM sends, but the ARP reply packets never reach the SR-IOV VM.

groundsea
Enthusiast

I deployed 2 VMs, each with 1 SR-IOV NIC; the VLAN of the portgroup is 101. The VDS configuration is shown below:

[screenshot: VDS configuration]

The two SR-IOV interfaces' settings are shown in the two screenshots below:

[screenshots: SR-IOV interface settings for VM 1 and VM 2]

And each VM's interface was set to the IP that neutron assigned (although I have turned off the port_security_enabled option of the network).
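(For reference, I turned port security off at the network level with the neutron CLI, roughly like this; "sriov-net" is just a placeholder for my network name:)

neutron net-update sriov-net --port-security-enabled=False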

Each VM initiated ping requests to the other.

From the physical switch (6125XLG), I can see it has learned the MAC address of each VM:

<HP-6125-56>disp mac-address

MAC Address      VLAN ID    State            Port/NickName            Aging

...

fa16-3e69-d5d4   101        Learned          XGE1/0/9                 Y

00d8-0309-0501   101        Learned          XGE1/0/9                 Y

...

But neither VM can receive any packets.

Here are the physical NIC's statistics; they show that the VFs have only transmitted packets and received nothing. (I filtered out all the zero counters.)

[root@109:~] ethtool -S vmnic8 | grep -v ": 0"

NIC statistics:

     rx_packets: 374552

     rx_bytes: 236232752

     multicast: 374553

     rx_pkts_nic: 374553

     tx_pkts_nic: 350

     rx_bytes_nic: 239218270

     tx_bytes_nic: 24971

     lsc_int: 1

     rx_queue_0_packets: 374552

     rx_queue_0_bytes: 236232752

     VF 0 Tx Packets: 189

     VF 0 Tx Bytes: 8910

     VF 1 Tx Packets: 161

     VF 1 Tx Bytes: 10217

[root@109:~]

I have used the SR-IOV function in a vCenter environment before and it worked fine, so I don't know whether the problem I'm hitting now is related to VIO or not.

gjayavelu
VMware Employee

Just to confirm: are you saying that if you configure VFs directly using vSphere (without VIO), the connectivity works?

gjayavelu
VMware Employee

Can you also paste your physical port configuration (particularly the VLANs) for the uplinks (the SR-IOV NICs)?

groundsea
Enthusiast

No, I just mean that I have used SR-IOV in vSphere before, and the configuration looks the same (at the vSphere level).

groundsea
Enthusiast

I use HP C7000 blade servers; the physical switch is the 6125XLG interconnect module. The 2 VMs are located on the same blade, slot 9. Here is the configuration of the 6125XLG:

[HP-6125-56-Ten-GigabitEthernet1/0/9]disp vlan

Total VLANs: 106

The VLANs include:

1(default), 7-10, 18, 88, 101-108, 130, 199, 600, 802-803, 805, 1003-1004 

1200, 1234, 1333, 1700, 1800, 1900, 2000-2002, 2007-2008, 2100, 2500-2502 

3103-3105, 3114-3115, 3124-3125, 3167, 3201, 3203-3205, 3207-3214, 3401-3408 

3411-3418, 3421-3428, 3431-3438, 3441-3444, 3451-3454, 3461-3464, 3471-3474 

[HP-6125-56-Ten-GigabitEthernet1/0/9]disp this

#

interface Ten-GigabitEthernet1/0/9

port link-mode bridge

port link-type trunk

undo port trunk permit vlan 1

port trunk permit vlan 101 to 106 1003 to 1004 2501 to 2502

mirroring-group 1 mirroring-port both

#

return

groundsea
Enthusiast

Yes! I found the root cause!

It was the MTU configuration of the VDS: it was set to 1600. I don't know whether that is the default value when deploying VIO, but this environment's deployment was switched from NSX to VDS, so the MTU was probably set to 1600 back when I used NSX as the network backend for VIO (1600 is the usual recommendation there, to leave headroom for VXLAN encapsulation).

Now that I have changed the MTU to 1500, everything works!
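For anyone who hits the same problem: the VDS MTU can be checked per host from the ESXi shell with something like the command below (illustrative output), and it is changed in the vSphere Web Client under the distributed switch's settings.

[root@109:~] esxcli network vswitch dvs vmware list | grep -i mtu
   MTU: 1500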

Thank you for your support! Thank you!
