jinggege
Contributor

NFS41: NFS41_VSIGetMaxQueueDepth:3538

My ESXi 6.5 host rebooted on its own on 11-05 and again at around 11-26T03:36, and both times the logs contain the same error:

NFS41: NFS41_VSIGetMaxQueueDepth:3538

What are the likely causes of this problem? Does ESXi 6.5 have a known bug where a host that has been running for a long time reboots by itself? The vmkernel log entries from around the second reboot are below.

2017-11-26T01:31:45.454Z cpu26:68209)WARNING: kbdmode_set:440: invalid keyboard mode 4: Not supported

2017-11-26T01:32:13.829Z cpu6:65661)WARNING: ntg3: Ntg3SetWolState:983: vmnic0:WOL currently disabled in NVRAM. Please change WOL setting in NVRAM through NIC mgmt software and reboot.

2017-11-26T01:32:13.832Z cpu6:65661)WARNING: ntg3: Ntg3SetWolState:983: vmnic1:WOL currently disabled in NVRAM. Please change WOL setting in NVRAM through NIC mgmt software and reboot.

2017-11-26T01:32:13.833Z cpu6:65661)WARNING: ntg3: Ntg3SetWolState:983: vmnic2:WOL currently disabled in NVRAM. Please change WOL setting in NVRAM through NIC mgmt software and reboot.

2017-11-26T01:32:13.834Z cpu6:65661)WARNING: ntg3: Ntg3SetWolState:983: vmnic3:WOL currently disabled in NVRAM. Please change WOL setting in NVRAM through NIC mgmt software and reboot.

2017-11-26T03:36:05.853Z cpu9:69957)WARNING: NFS41: NFS41_VSIGetMaxQueueDepth:3538: Invalid arg count! (0): Usage <FS>

2017-11-26T03:36:05.853Z cpu9:69957)WARNING: NFS41: NFS41_VSIGetShares:3396: Invalid arg count! (0): Usage <FS> <worldID>

2017-11-26T03:36:12.299Z cpu39:69987)WARNING: NFS41: NFS41_VSIGetMaxQueueDepth:3538: Invalid arg count! (0): Usage <FS>

2017-11-26T03:36:12.299Z cpu39:69987)WARNING: NFS41: NFS41_VSIGetShares:3396: Invalid arg count! (0): Usage <FS> <worldID>

2017-11-26T03:36:19.787Z cpu15:69957)WARNING: PCI: 179: 0000:00:00.0: Bypassing non-ACS capable device in hierarchy

2017-11-26T03:36:19.996Z cpu15:69957)WARNING: PCI: 179: 0000:00:1c.0: Bypassing non-ACS capable device in hierarchy

2017-11-26T03:36:20.010Z cpu15:69957)WARNING: PCI: 179: 0000:00:1c.7: Bypassing non-ACS capable device in hierarchy

2017-11-26T03:36:20.148Z cpu15:69957)WARNING: PCI: 179: 0000:08:00.0: Bypassing non-ACS capable device in hierarchy

2017-11-26T03:36:22.329Z cpu37:69987)WARNING: PCI: 179: 0000:00:00.0: Bypassing non-ACS capable device in hierarchy

2017-11-26T03:36:22.552Z cpu37:69987)WARNING: PCI: 179: 0000:00:1c.0: Bypassing non-ACS capable device in hierarchy

2017-11-26T03:36:22.567Z cpu37:69987)WARNING: PCI: 179: 0000:00:1c.7: Bypassing non-ACS capable device in hierarchy

2017-11-26T03:36:22.704Z cpu37:69987)WARNING: PCI: 179: 0000:08:00.0: Bypassing non-ACS capable device in hierarchy

2017-11-26T03:38:10.094Z cpu12:66289)WARNING: NMP: nmp_DeviceRetryCommand:133: Device "naa.600a098000b66ddf000003e7593a8b11": awaiting fast path state update for failover with I/O blocked. No prior reservation exists on the device.

2017-11-26T03:38:10.670Z cpu20:66161)WARNING: NMP: nmpDeviceAttemptFailover:640: Retry world failover device "naa.600a098000b66ddf000003e7593a8b11" - issuing command 0x439d09b9f6c0
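One way to narrow down whether the host actually crashed (purple screen) or was reset by something else, assuming SSH/shell access to the host, is to look at the recorded boot events and check whether a coredump was left behind. A minimal sketch:

# vmksummary.log records heartbeat and boot/shutdown events, including entries around the last boot
grep -i boot /var/log/vmksummary.log

# Check whether a coredump target is configured and whether a dump was produced
esxcli system coredump partition get
esxcli system coredump file list

If a dump exists, the backtrace in it (or a PSOD screenshot from the server's remote console) usually says far more about the root cause than the warnings above.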

daphnissov
Immortal

What type of storage are you using here? Provide full details if you can.

Marmotte94
Enthusiast

Hi,

Maybe a bad VIB. Did you configure a dump collector on your infrastructure? Did you verify the HCL for your server?
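For completeness, a rough sketch of pointing the host at an ESXi Dump Collector over the network; the vmkernel interface and collector IP below are placeholders for your environment, and the collector service itself ships with vCenter Server:

# Example values only: vmk0 as the management vmkernel port, 192.168.1.10 as the collector host
esxcli system coredump network set --interface-name vmk0 --server-ipv4 192.168.1.10 --server-port 6500
esxcli system coredump network set --enable true

# Confirm the collector is reachable with the configured settings
esxcli system coredump network check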

Thank you,

Regards,

Please visit my blog: http://www.purplescreen.eu/
jinggege
Contributor

Hi:

I use DELL3820F storage, connected to the host through a fiber optic switch.

Thanks

daphnissov
Immortal

So you're not using NFS 4.1 then. Can you please provide more information on your configuration?
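(An easy way to confirm that is to list the NFS and NFS 4.1 mounts on the host; if both commands return nothing, the NFS41 warnings are not tied to any mounted datastore.)

# List NFS v3 and NFS v4.1 datastores mounted on this host
esxcli storage nfs list
esxcli storage nfs41 list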

jinggege
Contributor

Hi:

I have three ESXi 6.5 hosts connected to a network switch, and they connect to my storage device via a fiber optic switch.

Sorry, which part of the configuration are you asking about?

daphnissov
Immortal

What microcode is your array running? What HBAs are you using? What are their firmware and driver versions? What activities led up to the error you mentioned? How are you consuming the storage from your array, and in what configuration?
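Most of the host-side details can be pulled with a few commands, assuming SSH access; the module name in the last query is only an example, so substitute the driver reported by the adapter list. HBA firmware and the array microcode generally have to be read from the vendor's own management tools:

# ESXi version and build
vmware -vl

# Storage adapters and the driver each one is using, plus the FC adapter details
esxcli storage core adapter list
esxcli storage san fc list

# Version details of the HBA driver module (qlnativefc is just an example name)
esxcli system module get -m qlnativefc

# Installed VIBs, useful for comparing driver versions against the HCL
esxcli software vib list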
