Alexander Leidinger <Alexander at leidinger.net> wrote on Sun Jun 24 10:03:49 UTC 2018:

> Short:
> Can it be that enabling numa in the kernel is the reason why some
> people see instability with zfs and usage of swap while a lot of free
> RAM is available?

[It will likely be a few months before I again have access to the
environment these notes are based on. It has been about a month since
I last had access.]

On an AMD Ryzen Threadripper 1950X (16 cores, 2 hardware threads per
core) I recently enabled:

options NUMA
options MAXMEMDOM=2

This is a UFS context, not a ZFS one. I had not been explicitly
controlling how things run (so using defaults). This is head with
debugging disabled (via including GENERIC and overriding).

I did not see the swap usage problem while doing many buildworld and
buildkernel runs (self-hosted and cross builds for several targets),
nor when I did a poudriere bulk -a (with ALLOW_MAKE_JOBS=yes). This
was a FreeBSD native boot context at the time. (I usually run the
same drives under Hyper-V but have not seen the problem there either.)
For native FreeBSD I used -j32 (for buildworld/buildkernel but also
for the bulk -a), and under Hyper-V I used -j28. The machine has
96 GiBytes of ECC RAM total (48 GiBytes per NUMA node).

I'm not sure how common it is for NUMA to be enabled, nor how common
various MAXMEMDOM settings are. I'd not be surprised if various folks
reporting problems had neither explicitly enabled NUMA nor set an
explicit MAXMEMDOM figure. It may be that they all have ZFS in common
in fairly recent times.

(I'm ignoring examples of long-latency I/O on the same device as
in-use swap partitions: that gets into Out Of Memory process killing
without the swap being mostly used. Some reports of swap problems
involve this sort of issue on small systems unlikely to be using ZFS.)

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went away in early 2018-Mar)

Received on Sun Jun 24 2018 - 18:07:14 UTC
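For reference, the custom-kernel setup described in the message (a head kernel that includes GENERIC, overrides its debugging defaults, and enables NUMA) might look roughly like the following config(5) fragment. The config name THREADRIPPER is hypothetical, and the exact set of debugging options enabled in head's GENERIC varies over time, so this is a sketch rather than the poster's actual file:

```
# Hypothetical file: /usr/src/sys/amd64/conf/THREADRIPPER
include GENERIC
ident   THREADRIPPER

# Disable the debugging features that head's GENERIC enables
# (exact option names depend on the head revision in use)
nooptions INVARIANTS
nooptions INVARIANT_SUPPORT
nooptions WITNESS
nooptions WITNESS_SKIPSPIN

# NUMA support for the two-die Threadripper (two memory domains)
options NUMA
options MAXMEMDOM=2
```

Building would then proceed with the usual `make buildkernel KERNCONF=THREADRIPPER`.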