On 2 May 2009, at 17:21, Tom McLaughlin wrote:

> Doug Rabson wrote, On 04/08/2009 03:20 PM:
>> On 5 Apr 2009, at 07:38, Tom McLaughlin wrote:
>>> Hey, I have a recent -CURRENT box which has a mount exported from an
>>> OpenBSD NFS server. Recently I enabled lockd and statd on the
>>> machine, but this has started to cause the network connection on the
>>> machine to lock up. I find the following in dmesg:
>>>
>>> nfs server exports:/mnt/raid0/net/home: lockd not responding
>>> nfs server exports:/mnt/raid0/net/home: lockd not responding
>>> nfs server exports:/mnt/raid0/net/home: lockd not responding
>>> nfs server exports:/mnt/raid0/net/home: lockd not responding
>>> nfs server exports:/mnt/raid0/net/home: lockd not responding
>>> nfs server exports:/mnt/raid0/net/home: lockd not responding
>>> nfs server exports:/mnt/raid0/net/home: lockd not responding
>>> nfs server exports:/mnt/raid0/net/home: lockd not responding
>>> NLM: failed to contact remote rpcbind, stat = 5, port = 28416
>>>
>>> Additionally I see this when trying to restart netif:
>>>
>>> em0: Could not setup receive structures
>>>
>>> I've tried building with NFS_LEGACYRPC but that has not changed
>>> anything. Additionally I've tested this on 7-STABLE, and while lockd
>>> still does not work (so it looks like I'll still have to work around
>>> my need for NFS locking), the network connection at least does not
>>> lock up. Is what I'm seeing evidence of some further problem?
>>
>> It looks as if lockd is not running on the server. The NFS locking
>> protocol needs it enabled at both ends. Also, NFS_LEGACYRPC won't
>> affect this - the record locking code always uses the new RPC code.
>
> Hi Doug, lockd is running on both ends. The problem appears to be
> the system running out of mbuf clusters when using lockd. [1]
> For now I'm mounting the particular mount with nolockd as an option
> to get around this.
> I've gotten errors with my -STABLE box using this mount with lockd
> enabled, but at least the system didn't run out of mbuf clusters and
> lose all network connectivity.

Could you please try the attached (unfortunately untested - I'm at
BSDCan) patch and see if it affects your problem? This patch attempts
to fix PR 130628, which appears similar to your issue.
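For readers hitting the same problem, the nolockd workaround Tom mentions would look roughly like this on FreeBSD. The export path is the one from his dmesg output; the local mount point /net/home is made up for the example, and netstat -m is the usual way to watch mbuf cluster usage while lockd is being exercised:

```shell
# Mount the export with NFS locking disabled (workaround only: with
# nolockd, fcntl()/flock() locks are handled locally, not on the server).
# /net/home is a hypothetical mount point.
mount -t nfs -o nolockd exports:/mnt/raid0/net/home /net/home

# Equivalent /etc/fstab entry:
# exports:/mnt/raid0/net/home  /net/home  nfs  rw,nolockd  0  0

# Watch mbuf cluster usage; exhaustion shows up as denied mbuf/cluster
# requests and the cluster count pinned at its limit.
netstat -m
```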
This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:39:47 UTC