VMware Cloud Community
mlubinski
Expert

One more question for people using NetApp with VMware over NFS:

Could you please attach your vmkernel log file from the last two weeks to this post? Just remove all server names from it.

I need to compare it with my log to see whether this is somehow "common" to VMware and NetApp.
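For anyone attaching logs: a quick way to scrub host names before posting, assuming the standard "Mon DD HH:MM:SS host ..." syslog layout of the vmkernel log (the function name and HOST placeholder are illustrative):

```shell
# Hedged helper: replace the hostname column of each syslog-style vmkernel line
# with a placeholder before sharing the file.
# Usage (file names illustrative): scrub vmkernel > vmkernel.scrubbed
scrub() {
    sed -E 's/^([A-Z][a-z]{2} +[0-9]+ +[0-9:]+) +[^ ]+/\1 HOST/' "$1"
}
```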

This is what I see in my logs:

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.338 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc21db80 4

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.338 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc21a310 4

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.338 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc21e798 4

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.338 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc219448 4

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.338 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc219f08 4

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc21d4c8 4

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc21bdf0 4

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc21d0c0 4

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc21c4a8 4

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc21ecf8 4

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc21df88 4

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc21dcd8 4

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc21f3b0 4

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc21d370 4

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc219040 4

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc219db0 4

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc218d90 4

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc218ee8 4

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc21bb40 4

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc21a468 4

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc2196f8 4

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc219b00 4

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc21d778 4

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc21e0e0 4

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc21e390 4

Aug 25 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028)NFSLock: 516: Stop accessing fd 0xc21e4e8 4

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.004 cpu2:1026)NFSLock: 478: Start accessing fd 0xc219040 again

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.004 cpu2:1026)NFSLock: 478: Start accessing fd 0xc21e390 again

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.004 cpu2:1026)NFSLock: 478: Start accessing fd 0xc21e4e8 again

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.008 cpu2:1026)NFSLock: 478: Start accessing fd 0xc21bb40 again

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.008 cpu2:1026)NFSLock: 478: Start accessing fd 0xc21d370 again

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.009 cpu2:1026)NFSLock: 478: Start accessing fd 0xc21e0e0 again

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.009 cpu2:1026)NFSLock: 478: Start accessing fd 0xc219b00 again

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.011 cpu2:1026)NFSLock: 478: Start accessing fd 0xc218ee8 again

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.015 cpu2:1026)NFSLock: 478: Start accessing fd 0xc21f3b0 again

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.015 cpu2:1026)NFSLock: 478: Start accessing fd 0xc21d778 again

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.015 cpu2:1026)NFSLock: 478: Start accessing fd 0xc2196f8 again

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.017 cpu2:1026)NFSLock: 478: Start accessing fd 0xc218d90 again

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.017 cpu2:1026)NFSLock: 478: Start accessing fd 0xc21dcd8 again

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.017 cpu2:1026)NFSLock: 478: Start accessing fd 0xc21a468 again

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.021 cpu2:1026)NFSLock: 478: Start accessing fd 0xc219db0 again

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.022 cpu2:1026)NFSLock: 478: Start accessing fd 0xc21df88 again

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.024 cpu2:1026)NFSLock: 478: Start accessing fd 0xc21bdf0 again

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.024 cpu2:1026)NFSLock: 478: Start accessing fd 0xc21ecf8 again

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.025 cpu2:1026)NFSLock: 478: Start accessing fd 0xc21d4c8 again

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.026 cpu2:1026)NFSLock: 478: Start accessing fd 0xc21c4a8 again

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.027 cpu2:1026)NFSLock: 478: Start accessing fd 0xc219f08 again

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.028 cpu2:1026)NFSLock: 478: Start accessing fd 0xc21d0c0 again

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.029 cpu2:1026)NFSLock: 478: Start accessing fd 0xc219448 again

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.030 cpu2:1026)NFSLock: 478: Start accessing fd 0xc21e798 again

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.034 cpu2:1026)NFSLock: 478: Start accessing fd 0xc21a310 again

Aug 25 04:03:58 dv29-011 vmkernel: 33:13:06:09.036 cpu2:1026)NFSLock: 478: Start accessing fd 0xc21db80 again

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc219040 4

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc219db0 4

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc218d90 4

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc218ee8 4

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc21bb40 4

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc21a468 4

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc2196f8 4

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc219b00 4

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc21d778 4

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc21e0e0 4

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc21e390 4

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc21e4e8 4

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc21db80 4

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc21a310 4

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc21e798 4

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc219448 4

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc219f08 4

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc21d4c8 4

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.053 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc21bdf0 4

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.053 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc21d0c0 4

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.053 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc21c4a8 4

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.053 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc21ecf8 4

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.053 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc21df88 4

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.053 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc21dcd8 4

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.053 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc21f3b0 4

Aug 26 04:04:07 dv29-011 vmkernel: 34:13:06:15.053 cpu1:1025)NFSLock: 516: Stop accessing fd 0xc21d370 4

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.641 cpu2:1080)NFSLock: 478: Start accessing fd 0xc21e390 again

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.642 cpu2:1080)NFSLock: 478: Start accessing fd 0xc21e4e8 again

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.667 cpu2:1080)NFSLock: 478: Start accessing fd 0xc219b00 again

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.668 cpu2:1080)NFSLock: 478: Start accessing fd 0xc21e0e0 again

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.670 cpu2:1080)NFSLock: 478: Start accessing fd 0xc2196f8 again

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.672 cpu2:1182)NFSLock: 478: Start accessing fd 0xc21d778 again

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.672 cpu2:1182)NFSLock: 478: Start accessing fd 0xc21a468 again

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.705 cpu2:1182)NFSLock: 478: Start accessing fd 0xc21bb40 again

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.706 cpu2:1182)NFSLock: 478: Start accessing fd 0xc218ee8 again

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.706 cpu2:1182)NFSLock: 478: Start accessing fd 0xc218d90 again

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.712 cpu2:1080)NFSLock: 478: Start accessing fd 0xc219db0 again

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.713 cpu2:1080)NFSLock: 478: Start accessing fd 0xc21bdf0 again

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.713 cpu2:1080)NFSLock: 478: Start accessing fd 0xc21d4c8 again

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.714 cpu2:1182)NFSLock: 478: Start accessing fd 0xc219f08 again

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.714 cpu2:1182)NFSLock: 478: Start accessing fd 0xc219448 again

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.718 cpu2:1080)NFSLock: 478: Start accessing fd 0xc219040 again

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.718 cpu2:1080)NFSLock: 478: Start accessing fd 0xc21d370 again

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.718 cpu2:1080)NFSLock: 478: Start accessing fd 0xc21f3b0 again

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.719 cpu2:1080)NFSLock: 478: Start accessing fd 0xc21dcd8 again

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.719 cpu2:1080)NFSLock: 478: Start accessing fd 0xc21df88 again

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.720 cpu2:1080)NFSLock: 478: Start accessing fd 0xc21ecf8 again

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.720 cpu2:1080)NFSLock: 478: Start accessing fd 0xc21c4a8 again

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.720 cpu2:1080)NFSLock: 478: Start accessing fd 0xc21d0c0 again

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.721 cpu2:1080)NFSLock: 478: Start accessing fd 0xc21e798 again

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.721 cpu2:1080)NFSLock: 478: Start accessing fd 0xc21a310 again

Aug 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.721 cpu2:1080)NFSLock: 478: Start accessing fd 0xc21db80 again

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc21a468 4

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc2196f8 4

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc219b00 4

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc21d778 4

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc21e0e0 4

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc21e390 4

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc21e4e8 4

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc21db80 4

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc21a310 4

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc21e798 4

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc219448 4

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc219f08 4

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc21d4c8 4

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc21bdf0 4

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc21d0c0 4

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc21c4a8 4

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc21ecf8 4

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc21df88 4

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc21dcd8 4

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc21f3b0 4

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc21d370 4

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc219040 4

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc219db0 4

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc218d90 4

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc218ee8 4

Aug 27 04:03:51 dv29-011 vmkernel: 35:13:05:56.744 cpu5:1029)NFSLock: 516: Stop accessing fd 0xc21bb40 4

Aug 27 04:04:26 dv29-011 vmkernel: 35:13:06:31.629 cpu3:1027)NFSLock: 478: Start accessing fd 0xc21bb40 again

Aug 27 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156)NFSLock: 478: Start accessing fd 0xc218ee8 again

Aug 27 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156)NFSLock: 478: Start accessing fd 0xc218d90 again

Aug 27 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156)NFSLock: 478: Start accessing fd 0xc219db0 again

Aug 27 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156)NFSLock: 478: Start accessing fd 0xc219040 again

Aug 27 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156)NFSLock: 478: Start accessing fd 0xc21d370 again

Aug 27 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156)NFSLock: 478: Start accessing fd 0xc21f3b0 again

Aug 27 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156)NFSLock: 478: Start accessing fd 0xc21dcd8 again

Aug 27 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156)NFSLock: 478: Start accessing fd 0xc21df88 again

Aug 27 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156)NFSLock: 478: Start accessing fd 0xc21ecf8 again

Aug 27 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156)NFSLock: 478: Start accessing fd 0xc21c4a8 again

Aug 27 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156)NFSLock: 478: Start accessing fd 0xc21d0c0 again

Aug 27 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156)NFSLock: 478: Start accessing fd 0xc21bdf0 again

Aug 27 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156)NFSLock: 478: Start accessing fd 0xc21d4c8 again

Aug 27 04:04:27 dv29-011 vmkernel: 35:13:06:32.158 cpu3:1156)NFSLock: 478: Start accessing fd 0xc219f08 again

Aug 27 04:04:27 dv29-011 vmkernel: 35:13:06:32.158 cpu3:1156)NFSLock: 478: Start accessing fd 0xc219448 again

Aug 27 04:04:27 dv29-011 vmkernel: 35:13:06:32.158 cpu3:1156)NFSLock: 478: Start accessing fd 0xc21e798 again

Aug 27 04:04:27 dv29-011 vmkernel: 35:13:06:32.158 cpu3:1156)NFSLock: 478: Start accessing fd 0xc21a310 again

Aug 27 04:04:27 dv29-011 vmkernel: 35:13:06:32.158 cpu3:1156)NFSLock: 478: Start accessing fd 0xc21db80 again

Aug 27 04:04:27 dv29-011 vmkernel: 35:13:06:32.158 cpu3:1156)NFSLock: 478: Start accessing fd 0xc21e4e8 again

Aug 27 04:04:27 dv29-011 vmkernel: 35:13:06:32.158 cpu3:1156)NFSLock: 478: Start accessing fd 0xc21e390 again

Aug 27 04:04:27 dv29-011 vmkernel: 35:13:06:32.158 cpu3:1156)NFSLock: 478: Start accessing fd 0xc21e0e0 again

Aug 27 04:04:27 dv29-011 vmkernel: 35:13:06:32.158 cpu3:1156)NFSLock: 478: Start accessing fd 0xc21d778 again

Aug 27 04:04:27 dv29-011 vmkernel: 35:13:06:32.158 cpu3:1156)NFSLock: 478: Start accessing fd 0xc219b00 again

Aug 27 04:04:27 dv29-011 vmkernel: 35:13:06:32.158 cpu3:1156)NFSLock: 478: Start accessing fd 0xc2196f8 again

Aug 27 04:04:27 dv29-011 vmkernel: 35:13:06:32.158 cpu3:1156)NFSLock: 478: Start accessing fd 0xc21a468 again

[I]If you found this or any other answer useful please consider the use of the Helpful or correct buttons to award points[/I]
Accepted Solutions
sayamanz
Contributor

So last week we saw one IOCStatus error on a Linux guest, and it wasn't during the typical time. Compare that with 10-15 (out of 60) Linux hosts every Sunday morning, plus a few more at other high-load times during the week. Nothing has crashed, however. So we've gone two Sundays now since the code upgrade and have only seen one error, with no crashes. I'm happy with that for now, though I would really like to know the root cause. Our storage admin says there is nothing about NFS in the code upgrade. The DOTF class I went to did mention a change in the kahuna process going to 7.3 which was supposed to help with load; not sure if that's it or not.

19 Replies
FredPeterson
Expert

Not normal, at least not for me, and I'm only using a tiny amount of NFS on a FAS3020.

http://blogs.vmware.com/vmtn/2008/11/nfslockdisable.html

mlubinski
Expert

I just started a test; it will give me results after a week (valid proof), and then I will update this post with the results.

jeremypage
Enthusiast

I'm getting more than just those, but here's a sample:

Aug 29 01:05:16 gsoesx03 vmkernel: 9:04:21:25.736 cpu5:4101)NFSLock: 584: Stop accessing fd 0x410000e34510 4

Aug 29 01:05:16 gsoesx03 vmkernel: 9:04:21:25.736 cpu5:4101)NFSLock: 584: Stop accessing fd 0x410000e35310 4

Aug 29 01:05:16 gsoesx03 vmkernel: 9:04:21:25.736 cpu5:4101)NFSLock: 584: Stop accessing fd 0x410000e35850 4

Aug 29 01:05:16 gsoesx03 vmkernel: 9:04:21:25.736 cpu5:4101)NFSLock: 584: Stop accessing fd 0x410000e35d90 4

Aug 29 01:05:24 gsoesx03 vmkernel: 9:04:21:33.541 cpu11:6847)NFSLock: 545: Start accessing fd 0x410000e35850 again

Aug 29 01:05:24 gsoesx03 vmkernel: 9:04:21:33.541 cpu11:6847)NFSLock: 545: Start accessing fd 0x410000e35d90 again

Aug 29 01:05:24 gsoesx03 vmkernel: 9:04:21:33.542 cpu11:6847)NFSLock: 545: Start accessing fd 0x410000e34510 again

Aug 29 01:05:24 gsoesx03 vmkernel: 9:04:21:33.542 cpu11:6847)NFSLock: 545: Start accessing fd 0x410000e35310 again

~65 VMs per ESX host, 300 VMs total on a 3070A (shared with FC and CIFS clients).

mlubinski
Expert

Oh, so you have a similar issue to mine. That means it is probably related to the NetApp itself.

Please tell me how you have your NetApp connected to the network. Can you draw your network structure?

See my infrastructure: the filers are connected to each other and to Store1/Store2 with 10G interfaces, the storage switches are connected to Store1/2 with FC interfaces (1G), and the ESX hosts are connected with 1G Cat5 cables.

If you host Windows only, then you probably have no issues at all, but with Linux guests it can cause I/O errors and kernel panics.

I am running one test now to see if I can find a workaround. Check every ESX host for messages about NFS mounts NOT RESPONDING; I am getting those, and they definitely cause Linux to crash.

I hope I will gather enough proof to take this up with NetApp.

mlubinski
Expert

Well, I did some tests, but today I got this NFSLock error in my logs again. I thought my workaround had helped, but that was probably just a coincidence, or maybe something different is going on here.

This is what I did to test it: I simply scheduled a vmkping to the vFiler IP to "keep alive" the NetApp interface. For the last few days of testing and almost the whole weekend I saw NO NFSLocks in the vmkernel logs, but yesterday and today they are back.
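The schedule amounted to a service-console cron entry along these lines (the vFiler IP, interval, and binary path are placeholders; vmkping sends the ping from the VMkernel interface rather than the service console's):

```shell
# Illustrative crontab entry on the ESX service console: vmkping the vFiler's
# NFS IP once a minute so the path never sits idle.
# IP, interval, and path are placeholders for this environment.
* * * * * /usr/sbin/vmkping -c 1 10.0.0.50 >/dev/null 2>&1
```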

Sep 7 04:50:01 ESX1 vmkernel: 78:02:40:56.988 cpu7:1039)World: vm 2766: 901: Starting world vmkping with flags 4

Sep 7 04:53:43 ESX1 vmkernel: 78:02:44:38.563 cpu6:1030)NFSLock: 516: Stop accessing fd 0xc225470 4

Sep 7 04:53:43 ESX1 vmkernel: 78:02:44:38.563 cpu6:1030)NFSLock: 516: Stop accessing fd 0xc218ee8 4

Sep 7 04:53:43 ESX1 vmkernel: 78:02:44:38.563 cpu6:1030)NFSLock: 516: Stop accessing fd 0xc227a10 4

Sep 7 04:53:43 ESX1 vmkernel: 78:02:44:38.563 cpu6:1030)NFSLock: 516: Stop accessing fd 0xc2274b0 4

Sep 7 04:53:43 ESX1 vmkernel: 78:02:44:38.563 cpu6:1030)NFSLock: 516: Stop accessing fd 0xc222818 4

Sep 7 04:53:43 ESX1 vmkernel: 78:02:44:38.563 cpu6:1030)NFSLock: 516: Stop accessing fd 0xc21d370 4

Sep 7 04:53:43 ESX1 vmkernel: 78:02:44:38.563 cpu6:1030)NFSLock: 516: Stop accessing fd 0xc227e18 4

Sep 7 04:53:43 ESX1 vmkernel: 78:02:44:38.563 cpu6:1030)NFSLock: 516: Stop accessing fd 0xc21cb60 4

Sep 7 04:53:43 ESX1 vmkernel: 78:02:44:38.563 cpu6:1030)NFSLock: 516: Stop accessing fd 0xc2297a0 4

Sep 7 04:53:43 ESX1 vmkernel: 78:02:44:38.563 cpu6:1030)NFSLock: 516: Stop accessing fd 0xc21a1b8 4

Sep 7 04:53:43 ESX1 vmkernel: 78:02:44:38.563 cpu6:1030)NFSLock: 516: Stop accessing fd 0xc21d8d0 4

Sep 7 04:53:43 ESX1 vmkernel: 78:02:44:38.563 cpu6:1030)NFSLock: 516: Stop accessing fd 0xc222568 4

Sep 7 04:53:43 ESX1 vmkernel: 78:02:44:38.563 cpu6:1030)NFSLock: 516: Stop accessing fd 0xc219f08 4

Sep 7 04:53:43 ESX1 vmkernel: 78:02:44:38.563 cpu6:1030)NFSLock: 516: Stop accessing fd 0xc228ce0 4

Sep 7 04:53:43 ESX1 vmkernel: 78:02:44:38.563 cpu6:1030)NFSLock: 516: Stop accessing fd 0xc21ee50 4

Sep 7 04:53:43 ESX1 vmkernel: 78:02:44:38.563 cpu6:1030)NFSLock: 516: Stop accessing fd 0xc2280c8 4

Sep 7 04:53:43 ESX1 vmkernel: 78:02:44:38.563 cpu6:1030)NFSLock: 516: Stop accessing fd 0xc221c00 4

Sep 7 04:53:43 ESX1 vmkernel: 78:02:44:38.563 cpu6:1559)WARNING: Swap: vm 1559: 7515: Swap sync read failed: status=195887167, retrying...

Sep 7 04:54:00 ESX1 vmkernel: 78:02:44:55.571 cpu0:1795)WARNING: Swap: vm 1795: 7515: Swap sync read failed: status=195887167, retrying...

Sep 7 04:54:02 ESX1 vmkernel: 78:02:44:57.572 cpu5:1039)WARNING: NFS: 1736: Failed to get attributes (I/O error)

Sep 7 04:54:02 ESX1 vmkernel: 78:02:44:57.572 cpu5:1039)FSS: 390: Failed with status I/O error for b00f 36 3 40 66f9e18a 820 40000000 66f9e18a 78977287 4 0 f9e18a 0 0 0

Sep 7 04:54:05 ESX1 vmkernel: 78:02:45:00.572 cpu3:1532)WARNING: Swap: vm 1532: 7515: Swap sync read failed: status=195887167, retrying...

Sep 7 04:54:09 ESX1 vmkernel: 78:02:45:05.182 cpu1:1506)VSCSI: 2803: Reset request on handle 8247 (2 outstanding commands)

Sep 7 04:54:09 ESX1 vmkernel: 78:02:45:05.183 cpu2:1061)VSCSI: 3019: Resetting handle 8247

Sep 7 04:54:09 ESX1 vmkernel: 78:02:45:05.184 cpu2:1061)VSCSI: 2871: Completing reset on handle 8247 (0 outstanding commands)

Sep 7 04:54:10 ESX1 vmkernel: 78:02:45:05.573 cpu2:1516)WARNING: Swap: vm 1516: 7515: Swap sync read failed: status=195887167, retrying...

Sep 7 04:54:10 ESX1 vmkernel: 78:02:45:06.028 cpu3:1791)NFSLock: 478: Start accessing fd 0xc228ce0 again

Sep 7 04:54:10 ESX1 vmkernel: 78:02:45:06.028 cpu3:1791)NFSLock: 478: Start accessing fd 0xc21cb60 again

The last option I have is to open a support case with NetApp.

Could you tell me your ONTAP version and what your network setup looks like?

sayamanz
Contributor

So we've been running into this same issue for quite a while. We opened a support case with NetApp and they told us to align all of our partitions using mbralign from the NetApp toolchest. We started going down that road but saw no improvement with this issue. I'd be curious what they advised you to do.

This has started to affect our Windows environment as well. We just upgraded our heads praying it would help, but no luck: we moved from 3020s to 3140s and have the same issue. We see quite a bit of NFS latency on the filer heads, anywhere from 4 to 20 seconds at times. Our reseller thinks it's networking. We are on Nortel gear, 5510s I believe, two stacked with no link aggregation. Because of the ESX networking, all traffic currently goes over one link, and I was thinking of splitting it across two by adding a new vSwitch and making the exports available accordingly. We have roughly 100 VMs spread across 5 datastores/volumes.

Have you split your exports across two vSwitches as a possible fix, with any good results? Have you had any luck with anything reducing the frequency of this issue? We expect things to go down for us at least once a week for roughly 2-4 VMs. It's always on the log rotation schedule, which we've split between Sundays @ 3am and Sundays @ 4am, and we now seem to run into issues at both times... Thanks.
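Since the failures track the guests' log-rotation window, a finer-grained version of the 3am/4am split is to give every Linux guest its own minute offset so they don't all rotate logs at once. This sketch derives a stable offset from the hostname (the echo stands in for actually editing the run-parts line in /etc/crontab):

```shell
# Derive a stable per-guest minute offset from the hostname, so each guest's
# cron.daily (and with it logrotate) fires at a different minute.
# The echo is a placeholder for editing /etc/crontab on the guest.
offset=$(( $(hostname | cksum | cut -d' ' -f1) % 60 ))
echo "schedule cron.daily at minute $offset past 4 a.m. on this guest"
```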

mlubinski
Expert

Well, our current situation is that NetApp has opened a case on this. I will not let them blame misaligned VMDKs, because I know that is not it; the NFS mounts really are disconnecting for some reason. I can see that ESX loses its connection to the NetApp, so now I need NetApp to debug it from their side. I am pretty sure NetApp is also able to monitor whether the clients connected to the volumes still have connectivity.

When I am done with this I will post my findings here. One thing is sure: when I run the vmkping script to ping the NetApp interface every second, the NFSLocks appear less frequently than before.

isi
Contributor

Hi,

we have the same problem here. Is there anything new?

Uli

bulletprooffool
Champion

Have you got NFS locks enabled or disabled?

Also, have you reviewed the NetApp/VMware best practices guide?

There are 5 advanced settings to set on the ESX hosts specifically for NFS (TcpipHeapMax, NFS locks, etc.).
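For reference, a hedged sketch of applying such settings from the service console: the option names are real ESX advanced options, but the values shown are placeholders, so take the recommended numbers from the current NetApp/VMware best-practices guide for your release.

```shell
# Hedged sketch: NFS-related advanced settings on the ESX service console.
# Values below are placeholders, not the official recommendations.
esxcfg-advcfg -s 30  /Net/TcpipHeapSize        # initial TCP/IP heap (MB)
esxcfg-advcfg -s 120 /Net/TcpipHeapMax         # max TCP/IP heap (MB)
esxcfg-advcfg -s 12  /NFS/HeartbeatFrequency   # seconds between NFS heartbeats
esxcfg-advcfg -s 10  /NFS/HeartbeatMaxFailures # failures before marking unreachable
esxcfg-advcfg -s 32  /NFS/MaxVolumes           # raise the default NFS datastore limit
```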

One day I will virtualise myself . . .
mlubinski
Expert

No solution yet. vmkping only decreased the number of NFSLock occurrences (so you can try implementing it at your site).

We are changing the guest OS timeout settings to see if the guests are still impacted, but this would only be a workaround. And yes, we followed the best practices from both NetApp and VMware (timeout settings, max heap, etc.).
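The guest-side timeout change is typically the Linux SCSI disk command timeout; a hedged sketch follows, with the sysfs root parameterised so it can be exercised outside a real guest, and 180 s as a commonly cited value rather than an official number:

```shell
# Raise each SCSI disk's command timeout inside a Linux guest so a brief NFS
# stall on the host surfaces as a slow I/O instead of a guest I/O error.
# $1 = sysfs root (normally /sys), $2 = timeout in seconds (180 is a commonly
# cited value for VMware guests; confirm against current guidance).
set_scsi_timeout() {
    for dev in "$1"/block/sd*/device/timeout; do
        [ -e "$dev" ] || continue    # no SCSI disks matched
        echo "$2" > "$dev"
    done
}
# usage on a guest: set_scsi_timeout /sys 180
```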

isi
Contributor

Hi,

we just updated our NIC drivers (nx_net.o); for now the problem is gone. Edit: version 4.0.404.

regards

uli

mlubinski
Expert

1. What servers do you have (model)?

2. By updating the NIC drivers, do you mean upgrading their firmware?

3. You wrote that you just updated the NIC drivers and the problem is gone. In my environment it is random and does not happen every minute, so how can you tell that your problem is gone? Wait a couple of days and then say whether it is solved.

sayamanz
Contributor

Edit: Talking to our storage admin, it looks like we did see errors after I made the networking changes below. We also did a code upgrade last week, and that seems to have fixed it; we're now running DOT 7.3.1.1P7. So I guess the networking changes didn't help, but there have been no errors since the DOT upgrade.

We've applied everything in the best practices guide from NetApp. I'm pretty sure we've found the fix. Here are the details of the servers:

  • IBM 3650 with 2 Intel 5345 CPUs and 32 GB RAM each. These IBMs come with dual Broadcoms on board, and we also added an Intel Pro/1000 PT 4-port card.

This weekend we didn't receive any errors, the first weekend in several weeks that we didn't. Earlier this week I changed the networking; originally we had the following:

OLD

vswitch0 - VMNetworks/SC

  • vmnic0 - broadcom

  • vmnic2 - intel

  • vmnic4 - intel

vswitch1 - NFS VMkernel

  • vmnic1 - broadcom - used primarily

  • vmnic3 - intel - redundant

  • vmnic5 - intel - not really used... so removed it

NEW

vswitch0 - same... plan to make changes in the next few days

vswitch1 - NFS VMkernel. Edited the properties for the NFS VMkernel, specifically the NIC teaming:

  • vmnic3 - intel (changed this to active)

  • vmnic1 - broadcom (changed this to standby)

  • vmnic5 - intel (removed)

In essence I think the onboard Broadcom was being overloaded during peak usage, or its drivers/firmware have a bug. Forcing the VMkernel to use the Intel primarily seems to have alleviated the problems.

I'm not sure how your networking is set up, but it is most likely the culprit. We run roughly 22 VMs per host, most of them Linux, specifically CentOS 4. I hope this helps.

I plan to take vmnic4 and vmnic5 and add another VMkernel on a separate network, along with another vif on the filer, to handle additional exports in the future. I'm not sure though, as these are the only free ports I'll have left and I don't really want to tie them up. I may recommend purchasing some new cards down the road to add one more per ESX host. What's your networking setup?

mlubinski
Expert

Well, I need to check tomorrow which network cards are used for storage. Maybe it is because of the NICs. I will try to switch them and check it out later; I will update when I am done testing.

isi
Contributor

Hi,

we're running ONTAP 7.3.1.1P5 on 3160 hardware with all FC disks; the servers are HP DL380 G6 with 72 GB RAM and 2 dual-port NetXen 10GbE adapters.

The NFSLock messages went away after we updated the network drivers; the new drivers load the firmware dynamically. There were lots and lots of NFSLock messages in the log files before the update. We didn't have any problems with our 'old' Intel quad-port Gigabit adapter.

mlubinski
Expert

By "updated network drivers", do you mean updating the ESX hosts with VMware patches?

sayamanz
Contributor

I checked the logs on the ESX host, and there have been no NFSLock errors since the DOT upgrade.

mlubinski
Expert

Thanks for this answer. I hope I will also get this upgrade done and that it solves the NFSLock issue.
