unsichtbare
Expert

Long Ping Time with Standard and Jumbo Frames

I was doing some testing on new systems yesterday and noticed that the RTT for a ping with jumbo frames averaged around 2.7 ms, while the same ping with standard frames averaged around 0.7 ms. Although the difference seems roughly proportional to the frame size, the absolute RTT seems excessively high. If 15 ms of latency is the tolerance for good SAN performance, 2.7 ms is an alarming portion of that. I also logged in to a number of other systems, tested against a number of other SANs, and found strikingly similar results. Any ideas or comments on the long RTT?

[root@esxi2:~] vmkping -d -s 8972 -I vmk1 10.0.0.64
PING 10.0.0.64 (10.0.0.64): 8972 data bytes
8980 bytes from 10.0.0.64: icmp_seq=0 ttl=64 time=2.920 ms
8980 bytes from 10.0.0.64: icmp_seq=1 ttl=64 time=2.804 ms
8980 bytes from 10.0.0.64: icmp_seq=2 ttl=64 time=2.591 ms

--- 10.0.0.64 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 2.591/2.772/2.920 ms

[root@esxi2:~] vmkping -d -s 1472 -I vmk1 10.0.0.64
PING 10.0.0.64 (10.0.0.64): 1472 data bytes
1480 bytes from 10.0.0.64: icmp_seq=0 ttl=64 time=0.934 ms
1480 bytes from 10.0.0.64: icmp_seq=1 ttl=64 time=0.816 ms
1480 bytes from 10.0.0.64: icmp_seq=2 ttl=64 time=0.500 ms

--- 10.0.0.64 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.500/0.750/0.934 ms

[root@esxi2:~] vmkping -I vmk1 10.0.0.64
PING 10.0.0.64 (10.0.0.64): 56 data bytes
64 bytes from 10.0.0.64: icmp_seq=0 ttl=64 time=0.470 ms
64 bytes from 10.0.0.64: icmp_seq=1 ttl=64 time=0.618 ms
64 bytes from 10.0.0.64: icmp_seq=2 ttl=64 time=0.345 ms

--- 10.0.0.64 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.345/0.478/0.618 ms

[root@esxi2:~]
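To put those numbers in perspective, here is a rough back-of-the-envelope sketch of how much of the RTT should be pure serialization (wire) time. The 10 GbE link speed is my assumption, not something confirmed above; adjust LINK_BPS for 1 GbE. (The -s 8972 payload is the usual 9000-byte MTU minus 28 bytes of ICMP and IP headers, and -d sets the don't-fragment bit.)

# Back-of-the-envelope serialization-delay sketch.
# Assumption: 10 GbE vmkernel uplink (10e9 bits/s); change LINK_BPS for 1 GbE.
# Real RTT also includes switch forwarding, NIC interrupt coalescing, and the
# target's ICMP processing, none of which are modeled here.

LINK_BPS = 10e9  # assumed link speed in bits per second

def one_way_serialization_us(payload_bytes: int) -> float:
    """Time to clock one frame onto the wire, in microseconds."""
    # payload + 8 B ICMP header + 20 B IPv4 header + 18 B Ethernet header/FCS
    frame_bits = (payload_bytes + 8 + 20 + 18) * 8
    return frame_bits / LINK_BPS * 1e6

for size in (56, 1472, 8972):
    # Round trip = two serializations (request out, reply back)
    rtt_us = 2 * one_way_serialization_us(size)
    print(f"{size:>5} B payload: ~{rtt_us:6.1f} us of RTT is pure wire time")

Even at 1 GbE the jumbo payload only adds a little over 0.1 ms of round-trip wire time compared with the standard frame, which is part of why the roughly 2 ms gap above looks suspicious to me.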

+The Invisible Admin+ If you find me useful, follow my blog: http://johnborhek.com/