21 Replies Latest reply on Apr 30, 2019 2:15 AM by TolgaAsik

    All-Flash vSAN Latency & Network Discards (Switching Recommendations)

    LeslieBNS9 Enthusiast

      We have been running an All-Flash vSAN cluster for about 5 months now and noticed some spikes in latency that seemed odd.


      We have six hosts with the following configuration; all hardware is on the HCL.

      - ESXi 6.5.0 7526125, vSAN 6.6

      - SuperMicro 1028U-TR4+

      - 2 x Intel E5-2680v4 2.4GHz CPU

      - 512GB RAM

      - AOC-S3008L-L8I (Supermicro 12Gb/s Eight-Port SAS Controller)

      - 2 disk groups with (Cache: 800GB SATA, Capacity: 2x3.84TB SATA)

      - 2 X710-DA2 10Gb network adapters (Firmware: 6.01, Driver: 1.5.8 i40en)


      When first troubleshooting we noticed spikes in vmknic errors for DupAckRx, DupDataRx, and OutofOrderRx.


      Working with VMware support we updated the drivers/firmware on our X710-DA2 adapters, as the X710s have many known issues. (We specifically looked into the LRO/TSO issues this adapter is known to have.) The firmware/driver change has not made any difference in the latency spikes.


      Digging more we noticed that our switches were discarding packets multiple times every hour, on ALL of our active vSAN interfaces.


      VSAN-SWITCH1# sh queuing interface ethernet 1/2

      Ethernet1/2 queuing information:

          qos-group  sched-type  oper-bandwidth

              0       WRR            100

          Multicast statistics:

              Mcast pkts dropped                      : 0

          Unicast statistics:

          qos-group 0

          HW MTU: 16356 (16356 configured)

          drop-type: drop, xon: 0, xoff: 0


              Ucast pkts dropped                      : 182232


      VSAN-SWITCH1# sh interface ethernet 1/2 | grep discard

          0 input with dribble  0 input discard

          0 lost carrier  0 no carrier  0 babble 182232 output discard
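A single counter reading like the one above only tells you the total since the last clear; what matters is whether the discards are still climbing. As a minimal sketch (my own illustration, not from either vendor's tooling), the `show interface ... | grep discard` output can be captured periodically and diffed; the sample lines below are copied from the output above:

```python
import re

# Matches the NX-OS counter line, e.g.
# "0 lost carrier  0 no carrier  0 babble 182232 output discard"
DISCARD_RE = re.compile(r"(\d+) output discard")

def output_discards(show_interface_text: str) -> int:
    """Extract the output-discard counter from 'show interface' text."""
    m = DISCARD_RE.search(show_interface_text)
    if m is None:
        raise ValueError("no 'output discard' counter found")
    return int(m.group(1))

# Two snapshots taken some time apart reveal whether drops are ongoing.
# The second line is a hypothetical later reading, for illustration only.
before = "0 lost carrier  0 no carrier  0 babble 182232 output discard"
after = "0 lost carrier  0 no carrier  0 babble 182890 output discard"
delta = output_discards(after) - output_discards(before)
print(f"new output discards since last poll: {delta}")
```

Polling this every few minutes (via a cron job and SSH, for example) makes it easy to line the discard deltas up against vSAN latency charts.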


      Working with Cisco support, they instructed us to enable Active Buffer Monitoring to check the shared buffer usage on the ports. The switch has three buffer groups, and each group has 4MB of buffer (normally the switch has 6MB per group, but the jumbo frame configuration on this switch reduces it to 4MB), for 12MB total.


      Once we enabled this, we were able to see the buffer usage on each of our ports from the last hour.


      VSAN-SWITCH1# show hardware profile buffer monitor interface ethernet 1/2 brief

                           Maximum buffer utilization detected

                         1sec     5sec    60sec     5min      1hr

                        ------   ------   ------   ------   ------

      Ethernet1/2        384KB    384KB    768KB   4224KB   4224KB
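For anyone collecting this across many ports, the `brief` table rows are easy to parse mechanically. Here is a small sketch of my own (not a Cisco tool) that reads rows like the one above and flags any port whose peak utilization reached the ~4MB shared-group ceiling:

```python
def parse_buffer_brief(lines):
    """Parse 'show hardware profile buffer monitor ... brief' data rows
    into {interface: [1sec, 5sec, 60sec, 5min, 1hr]} peaks in KB."""
    peaks = {}
    for line in lines:
        parts = line.split()
        if len(parts) == 6 and parts[0].startswith("Ethernet"):
            # Strip the trailing "KB" unit from each column.
            peaks[parts[0]] = [int(p.rstrip("KB")) for p in parts[1:]]
    return peaks

# ~4MB per shared buffer group with the jumbo-frame config in effect
SHARED_GROUP_KB = 4 * 1024

rows = ["Ethernet1/2        384KB    384KB    768KB   4224KB   4224KB"]
for port, vals in parse_buffer_brief(rows).items():
    if max(vals) >= SHARED_GROUP_KB:
        print(f"{port}: peak {max(vals)}KB - shared buffer group exhausted")
```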


      What we found was that all of our ports were bursting a few times every hour, consuming the entire 4MB of shared buffer for their buffer group. During each burst, the interfaces in that buffer group would discard packets. These discards also correlate with the odd latency spikes we see.
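For a sense of scale (my own back-of-the-envelope arithmetic, not figures from either vendor): when several hosts burst toward one 10Gb/s egress port at once, the excess traffic lands in the shared buffer, and 4MB does not last long:

```python
BUFFER_BYTES = 4 * 1024 * 1024  # shared buffer group (~4MB with jumbo config)
LINE_RATE_BPS = 10e9            # 10Gb/s per port

def fill_time_ms(senders: int) -> float:
    """Time for `senders` line-rate flows converging on one 10Gb/s egress
    port to fill the shared buffer. The egress drains at line rate, so the
    excess arrival rate is (senders - 1) * line rate."""
    excess_bps = (senders - 1) * LINE_RATE_BPS
    return BUFFER_BYTES * 8 / excess_bps * 1000

# Even a 2-into-1 incast fills the buffer in a few milliseconds.
for n in (2, 3, 5):
    print(f"{n} senders -> buffer full in {fill_time_ms(n):.2f} ms")
```

This is why the 1-second and 5-second peaks look modest while the 5-minute and 1-hour columns show full buffers: the bursts are far shorter than the sampling interval, but long enough to exhaust a shallow shared buffer and force discards.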


      Cisco support recommended that if we were unable to change the traffic patterns on the switch (all we have on it are dedicated vSAN ports), we would need to look at a switch with deep buffers.


      I spoke with VMware support about this, and all they recommended was 10Gb switching; they did not mention anything about needing larger buffers. My concern is that we are only using six ports on this switch so far, and because of these discards we cannot add any more hosts.


      Is it normal behavior for vSAN to require deep buffers? Does anyone have recommendations for 10Gb switches with SFP+ ports to use with All-Flash vSAN?
