head -r356066 reaching kern.ipc.nmbclusters on Rock64 (Cortex-A53 with 4 GiByte of RAM) while putting files on it via nfs: some evidence

From: Mark Millard <marklmi_at_yahoo.com>
Date: Fri, 27 Dec 2019 21:59:49 -0800
The following sort of sequence leads to the Rock64 no
longer responding on the console or over ethernet, after
it reports that kern.ipc.nmbclusters has been reached.
(This limits what information I have about the state of
things at the end.)

This is for a head -r356066 based non-debug-build context.

The sequence leading to the hangup was:

# mount -onoatime,hard,intr 192.168.1.???:/ /mnt
# tar -cf /mnt/usr/obj/clang-cortexA53-installkernel.tar -C /usr/obj/DESTDIRs/clang-cortexA53-installkernel/ .
# tar -cf /mnt/usr/obj/clang-cortexA53-installworld.tar -C /usr/obj/DESTDIRs/clang-cortexA53-installworld/ .
# tar -cf /mnt/usr/obj/clang-cortexA53-installworld-poud.tar -C /usr/obj/DESTDIRs/clang-cortexA53-installworld-poud/ .
(It hung up during this last one, after reporting
on the console that kern.ipc.nmbclusters had been
reached.)

Copying directory trees to such a mount with
cp -aRx, and similar operations, also lead to the
same hang.
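For anyone trying to reproduce this, one way to watch the cluster
count grow while a transfer runs is a simple polling loop over the
standard FreeBSD tools (the 5-second interval is an arbitrary choice
of mine, not anything from the original runs):

```shell
# Watch mbuf cluster usage while a transfer runs (Ctrl-C to stop).
# netstat -m summarizes mbuf/cluster usage; the sysctl shows the cap.
while :; do
    date
    sysctl kern.ipc.nmbclusters
    netstat -m | grep 'mbuf clusters'
    sleep 5
done
```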

I've not seen this issue on the Cortex-A7 (armv7,
2 GiByte), Cortex-A57 (aarch64, 8 GiByte), Cortex-A72
(aarch64, 16 GiByte), or powerpc64 (16 GiByte) contexts
where I've done similar transfers.



For reference: after a power-off/power-on and a retry
of just the last tar (which worked this time), the
figures are as reported below.

Before the more complete list of sysctl -a output
mentioning "cluster", here are the items whose values
are large compared to the other example contexts
I've been using:

. . .
vm.uma.mbuf_cluster.stats.frees: 557
vm.uma.mbuf_cluster.stats.allocs: 63807
vm.uma.mbuf_cluster.stats.current: 63250
. . .
vm.uma.mbuf_cluster.limit.items: 63739
. . .
vm.uma.mbuf_cluster.keg.pages: 31870
. . .
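As a consistency check on these numbers (my own arithmetic, not part
of the report itself): allocs minus frees matches stats.current, and
keg.pages times keg.ipers minus keg.free matches limit.items, i.e.
nearly every cluster ever allocated was still outstanding:

```shell
# Consistency check on the Rock64 figures above (POSIX shell arithmetic).
allocs=63807; frees=557
echo $((allocs - frees))         # matches stats.current: 63250
pages=31870; ipers=2; kfree=1
echo $((pages * ipers - kfree))  # matches limit.items: 63739
```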

# sysctl -a | grep clust
kern.ipc.nmbclusters: 84351
kern.geom.raid.raid1e.rebuild_cluster_idle: 100
kern.geom.raid.raid1.rebuild_cluster_idle: 100
vm.cluster_anon: 1
vm.uma.mbuf_cluster.stats.xdomain: 0
vm.uma.mbuf_cluster.stats.fails: 0
vm.uma.mbuf_cluster.stats.frees: 557
vm.uma.mbuf_cluster.stats.allocs: 63807
vm.uma.mbuf_cluster.stats.current: 63250
vm.uma.mbuf_cluster.domain.0.wss: 1
vm.uma.mbuf_cluster.domain.0.imin: 0
vm.uma.mbuf_cluster.domain.0.imax: 0
vm.uma.mbuf_cluster.domain.0.nitems: 0
vm.uma.mbuf_cluster.limit.sleeps: 0
vm.uma.mbuf_cluster.limit.sleepers: 0
vm.uma.mbuf_cluster.limit.max_items: 84351
vm.uma.mbuf_cluster.limit.items: 63739
vm.uma.mbuf_cluster.keg.efficiency: 98
vm.uma.mbuf_cluster.keg.free: 1
vm.uma.mbuf_cluster.keg.pages: 31870
vm.uma.mbuf_cluster.keg.align: 7
vm.uma.mbuf_cluster.keg.ipers: 2
vm.uma.mbuf_cluster.keg.ppera: 1
vm.uma.mbuf_cluster.keg.rsize: 2048
vm.uma.mbuf_cluster.keg.name: mbuf_cluster
vm.uma.mbuf_cluster.bucket_size_max: 253
vm.uma.mbuf_cluster.bucket_size: 251
vm.uma.mbuf_cluster.flags: 0x2008<VTOSLAB,OFFPAGE>
vm.uma.mbuf_cluster.size: 2048
vm.phys_pager_cluster: 1024
vfs.ffs.maxclustersearch: 10


Here are figures for a couple of the other
contexts, after their tars-via-nfs:

From the Cortex-A7 (armv7, 2 GiByte) context:

# sysctl -a | grep cluster
kern.ipc.nmbclusters: 26086
vm.cluster_anon: 1
vm.uma.mbuf_cluster.stats.xdomain: 0
vm.uma.mbuf_cluster.stats.fails: 0
vm.uma.mbuf_cluster.stats.frees: 1689
vm.uma.mbuf_cluster.stats.allocs: 4472
vm.uma.mbuf_cluster.stats.current: 2783
vm.uma.mbuf_cluster.domain.0.wss: 0
vm.uma.mbuf_cluster.domain.0.imin: 125
vm.uma.mbuf_cluster.domain.0.imax: 125
vm.uma.mbuf_cluster.domain.0.nitems: 125
vm.uma.mbuf_cluster.limit.sleeps: 0
vm.uma.mbuf_cluster.limit.sleepers: 0
vm.uma.mbuf_cluster.limit.max_items: 26086
vm.uma.mbuf_cluster.limit.items: 3539
vm.uma.mbuf_cluster.keg.efficiency: 98
vm.uma.mbuf_cluster.keg.free: 41
vm.uma.mbuf_cluster.keg.pages: 1790
vm.uma.mbuf_cluster.keg.align: 3
vm.uma.mbuf_cluster.keg.ipers: 2
vm.uma.mbuf_cluster.keg.ppera: 1
vm.uma.mbuf_cluster.keg.rsize: 2048
vm.uma.mbuf_cluster.keg.name: mbuf_cluster
vm.uma.mbuf_cluster.bucket_size_max: 253
vm.uma.mbuf_cluster.bucket_size: 103
vm.uma.mbuf_cluster.flags: 0x2008<VTOSLAB,OFFPAGE>
vm.uma.mbuf_cluster.size: 2048
vm.phys_pager_cluster: 1024
vfs.ffs.maxclustersearch: 10

The Cortex-A57 (aarch64, 8 GiByte) context:

# sysctl -a | grep clust
kern.ipc.nmbclusters: 168310
kern.geom.raid.raid1e.rebuild_cluster_idle: 100
kern.geom.raid.raid1.rebuild_cluster_idle: 100
vm.cluster_anon: 1
vm.uma.mbuf_cluster.stats.xdomain: 0
vm.uma.mbuf_cluster.stats.fails: 0
vm.uma.mbuf_cluster.stats.frees: 8678
vm.uma.mbuf_cluster.stats.allocs: 10702
vm.uma.mbuf_cluster.stats.current: 2024
vm.uma.mbuf_cluster.domain.0.wss: 0
vm.uma.mbuf_cluster.domain.0.imin: 25
vm.uma.mbuf_cluster.domain.0.imax: 25
vm.uma.mbuf_cluster.domain.0.nitems: 25
vm.uma.mbuf_cluster.limit.sleeps: 0
vm.uma.mbuf_cluster.limit.sleepers: 0
vm.uma.mbuf_cluster.limit.max_items: 168310
vm.uma.mbuf_cluster.limit.items: 2069
vm.uma.mbuf_cluster.keg.efficiency: 98
vm.uma.mbuf_cluster.keg.free: 1
vm.uma.mbuf_cluster.keg.pages: 1035
vm.uma.mbuf_cluster.keg.align: 7
vm.uma.mbuf_cluster.keg.ipers: 2
vm.uma.mbuf_cluster.keg.ppera: 1
vm.uma.mbuf_cluster.keg.rsize: 2048
vm.uma.mbuf_cluster.keg.name: mbuf_cluster
vm.uma.mbuf_cluster.bucket_size_max: 253
vm.uma.mbuf_cluster.bucket_size: 5
vm.uma.mbuf_cluster.flags: 0x2008<VTOSLAB,OFFPAGE>
vm.uma.mbuf_cluster.size: 2048
vm.phys_pager_cluster: 1024
vfs.ffs.maxclustersearch: 10

(The Cortex-A57 had more than just 3 tars
done over nfs.)
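Running the same consistency arithmetic on these two contexts (again
my own derivation from the numbers above) highlights the contrast:
allocs minus frees matches stats.current in each case, but on these
machines most clusters do get freed, so current stays small, unlike
on the Rock64:

```shell
# armv7 (2 GiByte):
echo $((4472 - 1689))     # 2783, matches stats.current
echo $((1790 * 2 - 41))   # 3539, matches limit.items
# Cortex-A57 (8 GiByte):
echo $((10702 - 8678))    # 2024, matches stats.current
echo $((1035 * 2 - 1))    # 2069, matches limit.items
```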



The problem seems somewhat specific to
Rock64-like contexts, though I do not know in what
specific respect(s) they have to be "like" a Rock64.
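This would not fix whatever keeps the clusters from being freed, but
as a stopgap one could raise the cap so a transfer completes;
kern.ipc.nmbclusters can be increased at runtime on FreeBSD. The
doubled value below is an arbitrary illustration, not a
recommendation:

```shell
# Raise the mbuf cluster cap at runtime (root required).
sysctl kern.ipc.nmbclusters=168702   # example: ~2x the auto-tuned 84351
# To make it persistent across boots, set it in /boot/loader.conf:
# kern.ipc.nmbclusters="168702"
```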

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)
Received on Sat Dec 28 2019 - 04:59:57 UTC
