On Sun, Jul 18, 2010 at 10:06:06PM -0700, Doug Barton wrote:
> On 07/18/10 12:41, Kostik Belousov wrote:
> > On Sun, Jul 18, 2010 at 12:21:00PM -0700, Doug Barton wrote:
> >> On 07/18/10 03:30, Kostik Belousov wrote:
> >>> On Sun, Jul 18, 2010 at 01:14:41AM -0700, Doug Barton wrote:
> >>>> On Sat, 17 Jul 2010, Kostik Belousov wrote:
> >>>>
> >>>>> Run top in the mode where all system threads are shown separately
> >>>>> (e.g. top -HS seems to do it), then watch what thread eats the
> >>>>> processor.
> >>>>
> >>>> And the winner is!
> >>>>
> >>>>   11 root  -32    -    0K  168K WAIT  0  0:28 18.02% {swi4: clock}
> >>>>   11 root   21  -64    -    0K  168K WAIT  0  1:17 18.90% intr
> >>>>
> >>>> The first is with -H, the second without.
> >>>
> >>> Most likely it is some callout handling. Just in case, do you have
> >>> console screensaver active ?
> >>
> >> I assume you mean "saver=yes" in rc.conf, and the answer is no, I am not
> >> using that. Usually I run xscreensaver, but at the time this happened I
> >> was not. I do have DPMS enabled in my X config though.
> >>
> >> Any suggestions on how to dig deeper on this? Are there any settings I
> >> can twiddle to try and mitigate it?
> > When intr time starts accumulating again, try to do
> > "procstat -kk <intr process pid>" and correlate the clock thread tid
> > with the backtrace. Might be, it helps to guess what callouts are eating
> > the CPU.
>
> Ok, file attached.
>
> --
>
> 	Improve the effectiveness of your Internet presence with
> 	a domain name makeover!    http://SupersetSolutions.com/
>
> 	Computers are useless. They can only give you answers.
> 			-- Pablo Picasso
>
>   PID    TID COMM  TDNAME            KSTACK
>    11 100004 intr  swi1: netisr 0    mi_switch+0x200 ithread_loop+0x1da fork_exit+0xb8 fork_trampoline+0x8
>    11 100005 intr  swi4: clock       mi_switch+0x200 ithread_loop+0x1da fork_exit+0xb8 fork_trampoline+0x8
>    11 100006 intr  swi4: clock       mi_switch+0x200 ithread_loop+0x1da fork_exit+0xb8 fork_trampoline+0x8
>    11 100007 intr  swi3: vm
>    11 100014 intr  swi6: Giant task  mi_switch+0x200 ithread_loop+0x1da fork_exit+0xb8 fork_trampoline+0x8
>    11 100015 intr  swi6: task queue  mi_switch+0x200 ithread_loop+0x1da fork_exit+0xb8 fork_trampoline+0x8
>    11 100020 intr  swi2: cambio      mi_switch+0x200 ithread_loop+0x1da fork_exit+0xb8 fork_trampoline+0x8
>    11 100021 intr  swi5: +
>    11 100022 intr  irq9: acpi0       mi_switch+0x200 ithread_loop+0x1da fork_exit+0xb8 fork_trampoline+0x8
>    11 100023 intr  irq16:
>    11 100024 intr  irq256: hdac0     mi_switch+0x200 ithread_loop+0x1da fork_exit+0xb8 fork_trampoline+0x8
>    11 100026 intr  irq17: wpi0       mi_switch+0x200 ithread_loop+0x1da fork_exit+0xb8 fork_trampoline+0x8
>    11 100027 intr  irq20: hpet0 uhc  mi_switch+0x200 ithread_loop+0x1da fork_exit+0xb8 fork_trampoline+0x8
>    11 100032 intr  irq21: uhci1
>    11 100037 intr  irq22: uhci2      mi_switch+0x200 ithread_loop+0x1da fork_exit+0xb8 fork_trampoline+0x8
>    11 100042 intr  irq23: uhci3
>    11 100052 intr  irq14: ata0       mi_switch+0x200 ithread_loop+0x1da fork_exit+0xb8 fork_trampoline+0x8
>    11 100053 intr  irq15: ata1       mi_switch+0x200 ithread_loop+0x1da fork_exit+0xb8 fork_trampoline+0x8
>    11 100055 intr  irq1: atkbd0      mi_switch+0x200 ithread_loop+0x1da fork_exit+0xb8 fork_trampoline+0x8
>    11 100056 intr  irq12: psm0
>    11 100057 intr  swi0: uart

You should correlate the backtrace with the id of the cpu-consuming
thread (100005 or 100006, or both) and do periodic procstat -k runs to
see which functions are referenced most often. Might be, the suggested
dtrace solution is easier.
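
A minimal sketch of that periodic-sampling idea, assuming the intr
process pid is 11 as in the listing above and that the procstat column
layout matches it (the awk field numbers are based on that listing):

    # Sample the intr process once a second for a minute and count how
    # often each kernel stack frame shows up in the "swi4: clock" threads.
    i=0
    while [ $i -lt 60 ]; do
            procstat -kk 11
            sleep 1
            i=$((i + 1))
    done | awk '$4 == "swi4:" && $5 == "clock" { for (f = 6; f <= NF; f++) print $f }' |
            sort | uniq -c | sort -rn | head -20

If the dtrace route is taken instead, sampling on-CPU kernel stacks with
the profile provider (assuming it is available on the machine) gives a
similar picture without the polling loop:

    # Sample kernel stacks at 997 Hz for 30 seconds, then print the 20
    # hottest stacks; a non-zero arg0 means the sample landed in the kernel.
    dtrace -n 'profile-997 /arg0/ { @[stack()] = count(); }
               tick-30s { trunc(@, 20); exit(0); }'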