Re: VM UMA counters.

From: Santiago Martinez <sm@codenetworks.net>
Date: Wed, 20 Jan 2021 14:24:59 +0000
Hi Mark,

To answer the DRM question: yes, I am using drm-devel with amdgpu.

Here is the vmstat -s output.

Cheers

Santiago

root@tucho:/home/smartinez # vmstat -s
1882578 cpu context switches
100445 device interrupts
23777 software interrupts
1054356 traps
13811750 system calls
39 kernel threads created
1398 fork() calls
343 vfork() calls
84 rfork() calls
0 swap pager pageins
0 swap pager pages paged in
0 swap pager pageouts
0 swap pager pages paged out
12579 vnode pager pageins
138821 vnode pager pages paged in
4 vnode pager pageouts
37 vnode pager pages paged out
0 page daemon wakeups
160056 pages examined by the page daemon
0 clean page reclamation shortfalls
0 pages reactivated by the page daemon
194549 copy-on-write faults
190 copy-on-write optimized faults
697804 zero fill pages zeroed
0 zero fill pages prezeroed
2559 intransit blocking page faults
1018606 total VM faults taken
12262 page faults requiring I/O
0 pages affected by kernel thread creation
138718 pages affected by fork()
12177 pages affected by vfork()
14704 pages affected by rfork()
746501 pages freed
0 pages freed by daemon
338813 pages freed by exiting processes
418069 pages active
200941 pages inactive
1123 pages in the laundry queue
513309 pages wired down
32 virtual user pages wired down
7003759 pages free
4096 bytes per page
4311229 total name lookups
cache hits (94% pos + 2% neg) system 0% per-directory
deletions 0%, falsehits 0%, toolong 0%
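
For reference, the failing counter from the earlier message can also be
watched programmatically. The following is a minimal sketch using
sysctlbyname(3); it assumes the statistic is exported as a 64-bit
unsigned integer (as UMA zone counters generally are), and the
one-second polling interval is arbitrary:

#include <sys/types.h>
#include <sys/sysctl.h>

#include <err.h>
#include <inttypes.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	const char *oid = "vm.uma.256_Bucket.stats.fails";
	uint64_t cur, prev = 0;
	size_t len;

	for (;;) {
		len = sizeof(cur);
		/* Read the current value of the failure counter. */
		if (sysctlbyname(oid, &cur, &len, NULL, 0) != 0)
			err(1, "sysctlbyname(%s)", oid);
		/* Report the absolute value and the per-interval delta. */
		printf("fails: %" PRIu64 " (+%" PRIu64 ")\n",
		    cur, cur - prev);
		prev = cur;
		sleep(1);
	}
}

If the delta stays at zero once the system has settled after boot, the
failures are likely the benign kind described in the reply quoted below.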


On 1/20/21 2:17 PM, Mark Johnston wrote:
> On Tue, Jan 19, 2021 at 12:44:14PM +0000, Santiago Martinez wrote:
>> Hi there, sorry to ask this as it might be a silly question...
>>
>> For a few weeks I have been seeing random lockups in applications, and
>> when using truss it sometimes shows "resource temporarily unavailable".
>>
>> Now, while checking various things, I see that the
>> vm.uma.256_Bucket.stats.fails counter is increasing while the others
>> are not (at least for now).
>>
>> Here goes the output:
>>
>> vm.uma.256_Bucket.stats.xdomain: 0
>> vm.uma.256_Bucket.stats.fails: 762142
>> vm.uma.256_Bucket.stats.frees: 41935
>> vm.uma.256_Bucket.stats.allocs: 42721
>> vm.uma.256_Bucket.stats.current: 786
>>
>> root@tucho:/home/smartinez # uname -a
>> FreeBSD tucho 13.0-ALPHA1 FreeBSD 13.0-ALPHA1 #13
>> main-c256107-g7d3310c4fcdd: Tue Jan 19 10:50:12 GMT 2021
>> smartinez@tucho:/usr/obj/usr/src/amd64.amd64/sys/GENERIC-NODEBUG amd64
>>
>> My question is, is this the expected behavior?
> There are situations where bucket allocations must fail to avoid
> recursing back into the VM.  For instance, allocation of a UMA slab may
> require allocation of a radix node entry from UMA, which may attempt
> allocation of a bucket, which could trigger allocation of a slab.
>
> It's therefore normal to see a non-zero number of failures after
> booting, but after that the bucket zone's caches are populated and
> failures should become rare.  Failures might also be triggered during
> severe memory shortages.  Could you show vmstat -s from an affected
> system?  Are you using any DRM graphics drivers by any chance?
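
The recursion Mark describes above can be pictured with a rough sketch.
This is not the kernel's actual code: the ZONE_INTERNAL flag, the
structure layouts, and slab_backed_alloc() are hypothetical stand-ins
for whatever UMA really uses; only the shape of the guard matters.

#include <stddef.h>

struct bucket {
	int	nitems;		/* free items cached in this bucket */
};

struct zone {
	int		flags;
	unsigned long	fails;	/* what vm.uma.<zone>.stats.fails counts */
};

#define	ZONE_INTERNAL	0x01	/* hypothetical: zone feeds the VM itself */

/* Stand-in for the normal slab-backed allocation path. */
static struct bucket *
slab_backed_alloc(struct zone *z)
{
	(void)z;
	return (NULL);		/* details omitted */
}

static struct bucket *
bucket_alloc(struct zone *z)
{
	if (z->flags & ZONE_INTERNAL) {
		/*
		 * Filling a bucket here would re-enter slab allocation,
		 * which may need a radix node from UMA, which may want a
		 * bucket again.  Refuse rather than recurse: the caller
		 * falls back to operating without a bucket, and the
		 * refusal is what the fails counter records.
		 */
		z->fails++;
		return (NULL);
	}
	return (slab_backed_alloc(z));
}

Since such a guard fires mostly while the bucket zones' own caches are
still being filled, a burst of failures around boot is expected, which
matches the explanation in the quoted reply.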

