Re: FYI: devel/kyua 14 failures for head -r338518M based build in a Pine64+ 2GB (aarch64 / cortexA53 / A64) context [md related processes left waiting (and more)]

From: Mark Millard <marklmi at yahoo.com>
Date: Tue, 11 Sep 2018 19:24:00 -0700
[After the run, top -CawSopid shows something interesting/odd:
lots of g_eli[?] and md?? processes are still around, the
g_eli[?] ones in geli:w state and the md?? ones in mdwait.
Also there are 4 processes in aiordy state.]

On 2018-Sep-11, at 8:48 AM, Mark Millard <marklmi at yahoo.com> wrote:

> [Adding listing broken tests, but ignoring sys/cddl/zfs/ ones.
> lib/libc/string/memcmp_test:diff is one of them.]
> 
> On 2018-Sep-11, at 2:44 AM, Mark Millard <marklmi at yahoo.com> wrote:
> 
>> [No zfs use, just a UFS e.MMC filesystem on a microsd adapter.]
>> 
>> I got 14 failures. I've not enabled any configuration properties.
>> 
>> I do not know if official devel/kyua tests are part of the head ->
>> stable transition for any tier or not. I'm not claiming to know if
>> anything here could be a significant issue.
>> 
>> Someone may want to test an official aarch64 build rather than presume
>> that my personal build is good enough. But I expect that its results
>> should be strongly suggestive, even if an official test run uses a more
>> normal-for-FreeBSD configuration of an aarch64 system.
>> 
>> The e.MMC is V5.1, operating in DDR52 mode, and is faster than normal
>> configurations for the Pine64+ 2GB. TRIM is in use for the UFS file
>> system. This might let some things pass that otherwise would time out.
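>> 
>> (For anyone reproducing this: whether TRIM is enabled on a UFS file
>> system shows up in tunefs's print mode. A minimal check, where the
>> device name below is hypothetical, not my actual one:
>> 
>>   # tunefs -p /dev/mmcsd0s2a   # look for the "trim: (-t)" line
>> 
>> tunefs -t enable can turn it on, with the file system unmounted.)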
>> 
>> 
>> ===> Failed tests
>> lib/libc/resolv/resolv_test:getaddrinfo_test  ->  failed: /usr/src/lib/libc/tests/resolv/resolv_test.c:299: run_tests(_hostlist_file, METHOD_GETADDRINFO) == 0 not met  [98.834s]
>> lib/libc/ssp/ssp_test:vsnprintf  ->  failed: atf-check failed; see the output of the test for details  [0.107s]
>> lib/libc/ssp/ssp_test:vsprintf  ->  failed: atf-check failed; see the output of the test for details  [0.105s]
>> lib/libproc/proc_test:symbol_lookup  ->  failed: /usr/src/lib/libproc/tests/proc_test.c:143: memcmp(sym, &tsym, sizeof(*sym)) != 0 [0.057s]
>> lib/msun/trig_test:accuracy  ->  failed: 3 checks failed; see output for more details  [0.013s]
>> lib/msun/trig_test:special  ->  failed: 8 checks failed; see output for more details  [0.013s]
>> local/kyua/utils/stacktrace_test:dump_stacktrace__integration  ->  failed: Line 391: atf::utils::grep_file("#0", exit_handle.stderr_file().str()) not met  [4.015s]
>> local/kyua/utils/stacktrace_test:dump_stacktrace__ok  ->  failed: Line 420: atf::utils::grep_file("^frame 1$", exit_handle.stderr_file().str()) not met  [4.470s]
>> local/kyua/utils/stacktrace_test:dump_stacktrace_if_available__append  ->  failed: Line 560: atf::utils::grep_file("frame 1", exit_handle.stderr_file().str()) not met  [4.522s]
>> local/kyua/utils/stacktrace_test:find_core__found__long  ->  failed: Core dumped, but no candidates found  [3.988s]
>> local/kyua/utils/stacktrace_test:find_core__found__short  ->  failed: Core dumped, but no candidates found  [4.014s]
>> sys/kern/ptrace_test:ptrace__PT_STEP_with_signal  ->  failed: /usr/src/tests/sys/kern/ptrace_test.c:3465: WSTOPSIG(status) == SIGABRT not met  [0.017s]
>> usr.bin/indent/functional_test:nsac  ->  failed: atf-check failed; see the output of the test for details  [0.151s]
>> usr.bin/indent/functional_test:sac  ->  failed: atf-check failed; see the output of the test for details  [0.150s]
>> ===> Summary
>> Results read from /root/.kyua/store/results.usr_tests.20180911-070147-413583.db
>> Test cases: 7301 total, 212 skipped, 37 expected failures, 116 broken, 14 failed
>> Total time: 6688.125s
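>> 
>> (For anyone wanting the detail behind a specific failure: kyua can
>> replay it from the results file named above, or re-run a single test
>> case from the test suite tree. A sketch, using one of the failures
>> above purely as the example name:
>> 
>>   # kyua report --results-file=/root/.kyua/store/results.usr_tests.20180911-070147-413583.db --verbose
>>   # cd /usr/tests && kyua debug lib/msun/trig_test:special
>> 
>> I am not claiming either is needed for the summary itself.)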
>> 
>> 
>> 
>> 
>> I'll note that the console reported over 73720 messages like the
>> following (with actual figures where I've listed ????'s):
>> 
>> md????.eli: Failed to authenticate ???? bytes of data at offset ????.
>> 
>> There are also device created and destroyed/removed notices with related
>> material. Overall there were over 84852 lines reported with "GEOM_ELI:"
>> on the line.
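>> 
>> Such counts can be reproduced with simple grep arithmetic over a
>> capture of the console output, for example (the log file name here
>> is hypothetical):
>> 
>>   # grep -c 'GEOM_ELI:' console.log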
>> 
>> This did not prevent tests from passing.
>> 
>> (The huge console output is unfortunate in my view: it makes spotting
>> interesting console messages a problem while watching them go by.)
>> 
>> 
>> 
>> I did get the console message block:
>> 
>> kern.ipc.maxpipekva exceeded; see tuning(7)
>> Freed UMA keg (rtentry) was not empty (18 items).  Lost 1 pages of memory.
>> Sep 11 01:36:25 pine64 kernel: nd6_dad_timer: called with non-tentative address <REPLACED>(epair2Freed UMA keg (rtentry) was not empty (18 items).  Lost 1 pages of memory.
>> a)
>> Freed UMA keg (rtentry) was not empty (18 items).  Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items).  Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items).  Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items).  Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items).  Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items).  Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items).  Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items).  Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items).  Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items).  Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items).  Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items).  Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items).  Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items).  Lost 1 pages of memory.
>> Freed UMA keg (rtentry) was not empty (18 items).  Lost 1 pages of memory.
>> 
>> But no failure reports seemed to be associated.
>> 
>> Still, I wonder if the block of messages is significant.
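>> 
>> For context on the first line: kern.ipc.maxpipekva is a read-only
>> sysctl sized from available kernel VA at boot (see tuning(7)), so it
>> can only be raised via a loader tunable. A sketch for checking usage
>> against the limit and raising it (the value is deliberately left
>> unspecified here):
>> 
>>   # sysctl kern.ipc.pipekva kern.ipc.maxpipekva
>>   # echo 'kern.ipc.maxpipekva=...' >> /boot/loader.conf  # takes effect next boot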
>> 
>> 
>> Some other console messages seen (extracted from various places):
>> 
>> GEOM_MIRROR: Request failed (error=5). md29[READ(offset=524288, length=2048)]
>> GEOM_MIRROR: Request failed (error=6). md28[READ(offset=1048576, length=2048)]
>> GEOM_MIRROR: Request failed (error=5). md28[WRITE(offset=0, length=2048)]
>> GEOM_MIRROR: Cannot write metadata on md29 (device=mirror.KRYGpE, error=5).
>> GEOM_MIRROR: Cannot update metadata on disk md29 (error=5).
>> GEOM_MIRROR: Request failed (error=5). md28[READ(offset=0, length=131072)]
>> GEOM_MIRROR: Synchronization request failed (error=5). mirror/mirror.YQGUHJ[READ(offset=0, length=131072)]
>> GEOM_MIRROR: Request failed (error=5). md29[READ(offset=0, length=131072)]
>> 
>> Again no failure reports seemed to be associated.
>> 
>> 
>> Some or all of the following may be normal/expected:
>> 
>> Sep 11 00:05:44 pine64 kernel: pid 21057 (process_test), uid 0: exited on signal 3 (core dumped)
>> Sep 11 00:05:49 pine64 kernel: pid 21071 (sanity_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:05:54 pine64 kernel: pid 21074 (sanity_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:05:58 pine64 kernel: pid 21077 (sanity_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:06:03 pine64 kernel: pid 21080 (sanity_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:06:44 pine64 kernel: pid 23170 (cpp_helpers), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:06:49 pine64 kernel: pid 23306 (c_helpers), uid 977: exited on signal 6 (core dumped)
>> Sep 11 00:06:54 pine64 kernel: pid 23308 (cpp_helpers), uid 977: exited on signal 6 (core dumped)
>> Sep 11 00:18:44 pine64 kernel: pid 38227 (assert_test), uid 0: exited on signal 6
>> Sep 11 00:51:38 pine64 kernel: pid 39883 (getenv_test), uid 0: exited on signal 11 (core dumped)
>> Sep 11 00:51:51 pine64 kernel: pid 40063 (memcmp_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:53:26 pine64 kernel: pid 40627 (wait_test), uid 0: exited on signal 11 (core dumped)
>> Sep 11 00:53:27 pine64 kernel: pid 40632 (wait_test), uid 0: exited on signal 3
>> Sep 11 00:53:27 pine64 kernel: pid 40634 (wait_test), uid 0: exited on signal 3
>> Sep 11 07:53:32 pine64 h_fgets[41013]: stack overflow detected; terminated
>> Sep 11 00:53:32 pine64 kernel: pid 41013 (h_fgets), uid 0: exited on signal 6
>> Sep 11 07:53:33 pine64 h_gets[41049]: stack overflow detected; terminated
>> Sep 11 00:53:33 pine64 kernel: pid 41049 (h_gets), uid 0: exited on signal 6
>> Sep 11 07:53:33 pine64 h_memcpy[41066]: stack overflow detected; terminated
>> Sep 11 00:53:33 pine64 kernel: pid 41066 (h_memcpy), uid 0: exited on signal 6
>> Sep 11 07:53:33 pine64 h_memmove[41083]: stack overflow detected; terminated
>> Sep 11 00:53:33 pine64 kernel: pid 41083 (h_memmove), uid 0: exited on signal 6
>> Sep 11 07:53:33 pine64 h_memset[41100]: stack overflow detected; terminated
>> Sep 11 00:53:33 pine64 kernel: pid 41100 (h_memset), uid 0: exited on signal 6
>> Sep 11 07:53:33 pine64 h_read[41135]: stack overflow detected; terminated
>> Sep 11 00:53:33 pine64 kernel: pid 41135 (h_read), uid 0: exited on signal 6
>> Sep 11 07:53:33 pine64 h_readlink[41152]: stack overflow detected; terminated
>> Sep 11 00:53:33 pine64 kernel: pid 41152 (h_readlink), uid 0: exited on signal 6
>> Sep 11 07:53:33 pine64 h_snprintf[41169]: stack overflow detected; terminated
>> Sep 11 00:53:33 pine64 kernel: pid 41169 (h_snprintf), uid 0: exited on signal 6
>> Sep 11 07:53:34 pine64 h_sprintf[41186]: stack overflow detected; terminated
>> Sep 11 00:53:34 pine64 kernel: pid 41186 (h_sprintf), uid 0: exited on signal 6
>> Sep 11 07:53:34 pine64 h_stpcpy[41203]: stack overflow detected; terminated
>> Sep 11 00:53:34 pine64 kernel: pid 41203 (h_stpcpy), uid 0: exited on signal 6
>> Sep 11 07:53:34 pine64 h_stpncpy[41220]: stack overflow detected; terminated
>> Sep 11 00:53:34 pine64 kernel: pid 41220 (h_stpncpy), uid 0: exited on signal 6
>> Sep 11 07:53:34 pine64 h_strcat[41237]: stack overflow detected; terminated
>> Sep 11 00:53:34 pine64 kernel: pid 41237 (h_strcat), uid 0: exited on signal 6
>> Sep 11 07:53:34 pine64 h_strcpy[41254]: stack overflow detected; terminated
>> Sep 11 00:53:34 pine64 kernel: pid 41254 (h_strcpy), uid 0: exited on signal 6
>> Sep 11 07:53:34 pine64 h_strncat[41271]: stack overflow detected; terminated
>> Sep 11 00:53:34 pine64 kernel: pid 41271 (h_strncat), uid 0: exited on signal 6
>> Sep 11 07:53:34 pine64 h_strncpy[41288]: stack overflow detected; terminated
>> Sep 11 00:53:34 pine64 kernel: pid 41288 (h_strncpy), uid 0: exited on signal 6
>> Sep 11 00:53:41 pine64 kernel: pid 41478 (target_prog), uid 0: exited on signal 5 (core dumped)
>> Sep 11 00:56:53 pine64 kernel: pid 43967 (exponential_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:56:58 pine64 kernel: pid 43972 (fenv_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:57:02 pine64 kernel: pid 43974 (fma_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:57:07 pine64 kernel: pid 43990 (invtrig_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:57:13 pine64 kernel: pid 44067 (logarithm_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:57:17 pine64 kernel: pid 44069 (lrint_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:57:21 pine64 kernel: pid 44073 (nearbyint_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:57:26 pine64 kernel: pid 44075 (next_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:57:31 pine64 kernel: pid 44100 (rem_test), uid 0: exited on signal 6 (core dumped)
>> Sep 11 00:57:43 pine64 kernel: pid 44248 (exhaust_test), uid 0: exited on signal 11 (core dumped)
>> 
>> I'm not sure that they all would be expected.
> 
> ===> Broken tests
> lib/libc/string/memcmp_test:diff  ->  broken: Premature exit; test case received signal 6 (core dumped)  [3.962s]
> lib/libregex/exhaust_test:regcomp_too_big  ->  broken: Premature exit; test case received signal 11 (core dumped)  [8.997s]
> lib/msun/exponential_test:main  ->  broken: Received signal 6  [3.893s]
> lib/msun/fenv_test:main  ->  broken: Received signal 6  [4.326s]
> lib/msun/fma_test:main  ->  broken: Received signal 6  [4.315s]
> lib/msun/invtrig_test:main  ->  broken: Received signal 6  [4.345s]
> lib/msun/logarithm_test:main  ->  broken: Received signal 6  [3.921s]
> lib/msun/lrint_test:main  ->  broken: Received signal 6  [4.416s]
> lib/msun/nearbyint_test:main  ->  broken: Received signal 6  [4.389s]
> lib/msun/next_test:main  ->  broken: Received signal 6  [4.401s]
> lib/msun/rem_test:main  ->  broken: Received signal 6  [4.385s]
> sbin/growfs/legacy_test:main  ->  broken: TAP test program yielded invalid data: Load of '/tmp/kyua.5BsFl9/3782/stdout.txt' failed: Reported plan differs from actual executed tests  [0.476s]
> 
> sys/cddl/zfs/ ones ignored here: no zfs context.

One more thing of note: after kyua completed
(the Pine64+ 2GB has been mostly idle since then),
top shows:

last pid: 59782;  load averages:  0.22,  0.25,  0.19                                                                                                                            up 0+19:13:11  19:11:36
122 processes: 2 running, 119 sleeping, 1 waiting
CPU:  0.0% user,  0.0% nice,  0.0% system,  0.1% interrupt, 99.9% idle
Mem: 2164K Active, 1474M Inact, 14M Laundry, 365M Wired, 202M Buf, 122M Free
Swap: 3584M Total, 3584M Free

  PID USERNAME    THR PRI NICE   SIZE    RES SWAP STATE    C   TIME     CPU COMMAND
82157 root          1  20    -      0    16K    0 geli:w   3   0:00   0.00% [g_eli[3] md27]
82156 root          1  20    -      0    16K    0 geli:w   2   0:00   0.00% [g_eli[2] md27]
82155 root          1  20    -      0    16K    0 geli:w   1   0:00   0.00% [g_eli[1] md27]
82154 root          1  20    -      0    16K    0 geli:w   0   0:00   0.00% [g_eli[0] md27]
82147 root          1  -8    -      0    16K    0 mdwait   0   0:00   0.00% [md27]
82001 root          1  -8    -      0    16K    0 mdwait   3   0:00   0.00% [md26]
81941 root          1  20    -      0    16K    0 geli:w   3   0:00   0.00% [g_eli[3] md25]
81940 root          1  20    -      0    16K    0 geli:w   2   0:00   0.00% [g_eli[2] md25]
81939 root          1  20    -      0    16K    0 geli:w   1   0:00   0.00% [g_eli[1] md25]
81938 root          1  20    -      0    16K    0 geli:w   0   0:00   0.00% [g_eli[0] md25]
81925 root          1  -8    -      0    16K    0 mdwait   1   0:00   0.00% [md25]
81777 root          1  20    -      0    16K    0 geli:w   3   0:00   0.00% [g_eli[3] md24p1]
81776 root          1  20    -      0    16K    0 geli:w   2   0:00   0.00% [g_eli[2] md24p1]
81775 root          1  20    -      0    16K    0 geli:w   1   0:00   0.00% [g_eli[1] md24p1]
81774 root          1  20    -      0    16K    0 geli:w   0   0:00   0.00% [g_eli[0] md24p1]
81701 root          1  -8    -      0    16K    0 mdwait   2   0:00   0.00% [md24]
81598 root          1  -8    -      0    16K    0 mdwait   0   0:00   0.00% [md23]
72532 root          1  -8    -      0    16K    0 mdwait   0   0:01   0.00% [md22]
70666 root          1  -8    -      0    16K    0 mdwait   2   0:01   0.00% [md21]
70485 root          1  20    -      0    16K    0 geli:w   3   0:00   0.00% [g_eli[3] md20]
70484 root          1  20    -      0    16K    0 geli:w   2   0:00   0.00% [g_eli[2] md20]
70483 root          1  20    -      0    16K    0 geli:w   1   0:00   0.00% [g_eli[1] md20]
70482 root          1  20    -      0    16K    0 geli:w   0   0:00   0.00% [g_eli[0] md20]
70479 root          1  -8    -      0    16K    0 mdwait   2   0:00   0.00% [md20]
70413 root          1  20    -      0    16K    0 geli:w   3   0:00   0.00% [g_eli[3] md19.nop]
70412 root          1  20    -      0    16K    0 geli:w   2   0:00   0.00% [g_eli[2] md19.nop]
70411 root          1  20    -      0    16K    0 geli:w   1   0:00   0.00% [g_eli[1] md19.nop]
70410 root          1  20    -      0    16K    0 geli:w   0   0:00   0.00% [g_eli[0] md19.nop]
70393 root          1  -8    -      0    16K    0 mdwait   3   0:00   0.00% [md19]
70213 root          1  20    -      0    16K    0 geli:w   3   0:00   0.00% [g_eli[3] md18]
70212 root          1  20    -      0    16K    0 geli:w   2   0:00   0.00% [g_eli[2] md18]
70211 root          1  20    -      0    16K    0 geli:w   1   0:00   0.00% [g_eli[1] md18]
70210 root          1  20    -      0    16K    0 geli:w   0   0:00   0.00% [g_eli[0] md18]
70193 root          1  -8    -      0    16K    0 mdwait   2   0:00   0.00% [md18]
70088 root          1  -8    -      0    16K    0 mdwait   2   0:00   0.00% [md17]
59763 root          1  -8    -      0    16K    0 mdwait   3   0:01   0.00% [md16]
49482 root          1  -8    -      0    16K    0 mdwait   2   0:01   0.00% [md15]
27196 root          1  -8    -      0    16K    0 mdwait   0   0:04   0.00% [md14]
27018 root          1  -8    -      0    16K    0 mdwait   0   0:00   0.00% [md13]
26956 root          1  -8    -      0    16K    0 mdwait   0   0:00   0.00% [md12]
26364 root          1  -8    -      0    16K    0 mdwait   0   0:00   0.00% [md11]
16100 root          1  -8    -      0    16K    0 mdwait   2   0:03   0.00% [md10]
15556 root          1  -8    -      0    16K    0 mdwait   0   0:00   0.00% [md9]
15498 root          1  20    -      0    16K    0 geli:w   3   0:00   0.00% [g_eli[3] md8]
15497 root          1  20    -      0    16K    0 geli:w   2   0:00   0.00% [g_eli[2] md8]
15496 root          1  20    -      0    16K    0 geli:w   1   0:00   0.00% [g_eli[1] md8]
15495 root          1  20    -      0    16K    0 geli:w   0   0:00   0.00% [g_eli[0] md8]
15462 root          1  -8    -      0    16K    0 mdwait   2   0:00   0.00% [md8]
13400 root          1  -8    -      0    16K    0 mdwait   0   0:00   0.00% [md7]
13101 root          1  -8    -      0    16K    0 mdwait   2   0:00   0.00% [md6]
13005 root          1  20    -      0    16K    0 geli:w   3   0:00   0.00% [g_eli[3] md5]
13004 root          1  20    -      0    16K    0 geli:w   2   0:00   0.00% [g_eli[2] md5]
13003 root          1  20    -      0    16K    0 geli:w   1   0:00   0.00% [g_eli[1] md5]
13002 root          1  20    -      0    16K    0 geli:w   0   0:00   0.00% [g_eli[0] md5]
12995 root          1  -8    -      0    16K    0 mdwait   0   0:00   0.00% [md5]
12877 root          1  -8    -      0    16K    0 mdwait   3   0:00   0.00% [md4]
12719 root          1  -8    -      0    16K    0 mdwait   0   0:00   0.00% [md3]
12621 root          1  -8    -      0    16K    0 mdwait   0   0:00   0.00% [md2]
12559 root          1  20    -      0    16K    0 geli:w   3   0:00   0.00% [g_eli[3] md1]
12558 root          1  20    -      0    16K    0 geli:w   2   0:00   0.00% [g_eli[2] md1]
12557 root          1  20    -      0    16K    0 geli:w   1   0:00   0.00% [g_eli[1] md1]
12556 root          1  20    -      0    16K    0 geli:w   0   0:00   0.00% [g_eli[0] md1]
12549 root          1  -8    -      0    16K    0 mdwait   3   0:00   0.00% [md1]
12477 root          1  -8    -      0    16K    0 mdwait   2   0:00   0.00% [md0]
 1345 root          1 -16    -      0    16K    0 aiordy   1   0:00   0.00% [aiod4]
 1344 root          1 -16    -      0    16K    0 aiordy   3   0:00   0.00% [aiod3]
 1343 root          1 -16    -      0    16K    0 aiordy   2   0:00   0.00% [aiod2]
 1342 root          1 -16    -      0    16K    0 aiordy   0   0:00   0.00% [aiod1]
34265 root          1  20    0    14M  2668K    0 CPU3     3   3:10   0.28% top -CawSores
34243 root          1  23    0    12M  1688K    0 wait     2   0:00   0.00% su (sh)
34242 markmi        1  20    0    13M  1688K    0 wait     2   0:00   0.00% su
34236 markmi        1  21    0    12M  1688K    0 wait     0   0:00   0.00% -sh (sh)
34235 markmi        1  20    0    20M  1312K    0 select   1   0:09   0.01% sshd: markmi@pts/1 (sshd)
34230 root          1  21    0    20M  3460K    0 select   3   0:00   0.00% sshd: markmi [priv] (sshd)
  898 root          1  52    0    12M  1688K    0 ttyin    1   0:00   0.00% su (sh)
  897 markmi        1  21    0    13M  1688K    0 wait     2   0:00   0.00% su
  889 markmi        1  26    0    12M  1688K    0 wait     3   0:00   0.00% -sh (sh)
  888 markmi        1  20    0    21M  1016K    0 select   1   0:03   0.00% sshd: markmi@pts/0 (sshd)
  885 root          1  23    0    20M  3460K    0 select   2   0:00   0.00% sshd: markmi [priv] (sshd)
  836 root          1  20    0    12M  2164K    0 ttyin    0   0:03   0.00% -sh (sh)
  835 root          1  20    0    13M  1688K    0 wait     1   0:00   0.00% login [pam] (login)
  785 root          1  52    0    11M   884K    0 nanslp   0   0:01   0.00% /usr/sbin/cron -s
  781 smmsp         1  20    0    15M   796K    0 pause    3   0:00   0.00% sendmail: Queue runner@00:30:00 for /var/spool/clientmqueue (sendmail)
  778 root          1  20    0    15M  1832K    0 select   2   0:02   0.00% sendmail: accepting connections (sendmail)
  775 root          1  20    0    19M   788K    0 select   1   0:00   0.00% /usr/sbin/sshd
  731 root          1  20    0    18M    18M    0 select   3   0:07   0.01% /usr/sbin/ntpd -p /var/db/ntp/ntpd.pid -c /etc/ntp.conf -g
  694 root         32  52    0    11M  1112K    0 rpcsvc   0   0:00   0.00% nfsd: server (nfsd)


After the run, top -CawSopid shows something interesting/odd:
lots of g_eli[?] and md?? processes are still around, the g_eli[?]
ones in geli:w state and the md?? ones in mdwait.

There are also 4 aiod? processes in the aiordy state.
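
If those md/geli leftovers are just test scaffolding that was never
torn down, they can presumably be cleaned up by hand. A sketch, with
unit 27 taken from the top output above and the .eli layering inferred
from the process names (untried here, so treat it as a guess):

  # mdconfig -l -v        # list surviving md units and their backing
  # geli detach md27.eli  # drop the geli layer first, where present
  # mdconfig -d -u 27     # then destroy the md device itself

repeating per unit. Whether the mdwait state interferes with the
detach is part of what seems odd.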

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)