Re: Is anyone using the schedgraph.d script?

From: Ryan Stone <rysto32_at_gmail.com>
Date: Mon, 26 Jan 2015 22:01:50 -0500
Hm, there was one bug in that script.  I uploaded a fixed version.  The fix was:

-       printf("%d %d KTRGRAPH group:\"thread\", id:\"%s/%s tid %d\", state:\"runq add\", attributes: prio:%d, linkedto:\"%s/%s tid %d\"\n",
-           cpu, timestamp, args[0]->td_proc->p_comm, args[0]->td_name,
-           args[0]->td_tid, args[0]->td_priority, curthread->td_proc->p_comm,
-           curthread->td_name, args[0]->td_tid);
+       printf("%d %d KTRGRAPH group:\"thread\", id:\"%s/%s tid %d\", state:\"runq add\", attributes: prio:%d, linkedto:\"%s/%s tid %d\"\n",
+           cpu, timestamp, args[0]->td_proc->p_comm, args[0]->td_name,
+           args[0]->td_tid, args[0]->td_priority, curthread->td_proc->p_comm,
+           curthread->td_name, curthread->td_tid);

Note that the last printf argument used args[0]->td_tid where
curthread->td_tid was intended.
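
For reference, here is a minimal sketch of what the corrected clause might
look like in context.  I'm assuming here that the line lives in a
sched:::enqueue probe and that the script runs with the quiet option; the
actual schedgraph.d may be laid out differently:

#pragma D option quiet

/* Log a "runq add" event when a thread is placed on a run queue. */
sched:::enqueue
{
	/*
	 * args[0] is the thread being enqueued; curthread is the thread
	 * doing the enqueueing (the "linkedto" thread in the output).
	 */
	printf("%d %d KTRGRAPH group:\"thread\", id:\"%s/%s tid %d\", state:\"runq add\", attributes: prio:%d, linkedto:\"%s/%s tid %d\"\n",
	    cpu, timestamp, args[0]->td_proc->p_comm, args[0]->td_name,
	    args[0]->td_tid, args[0]->td_priority, curthread->td_proc->p_comm,
	    curthread->td_name, curthread->td_tid);
}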


One other thing that I have noticed about gathering schedgraph data this
way is that, unlike KTR, dtrace has every CPU record its events into a
CPU-local buffer.  A CPU that sees a large number of scheduler events will
therefore roll over its ring buffer much more quickly than a lightly loaded
CPU, which can make the schedgraph output confusing or misleading at the
beginning of the captured time period.  You can mitigate this by allowing
dtrace to allocate a larger ring buffer with:

#pragma D option bufsize=32m

(You can potentially tune it even higher than that, but that's a good
place to start.)
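
If you'd rather not edit the script, the same option can also be overridden
on the command line with dtrace's -x flag; for example (sched.out is just an
arbitrary name for the captured output):

dtrace -x bufsize=32m -s schedgraph.d > sched.out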


Finally, I've noticed that schedgraph seems to have problems
auto-detecting the clock frequency, so I tend to forcibly specify
1GHz (dtrace always outputs timestamps in units of ns, so this is always
correct to do with dtrace-gathered data).
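
As a rough example of how I tie this together (assuming your schedgraph.py,
like mine, accepts the clock frequency as an optional trailing argument;
check your copy's usage line, since the expected units may differ):

# gather the data, then feed it to schedgraph; the trailing 1 forces 1GHz,
# assuming the frequency argument is taken in GHz
dtrace -x bufsize=32m -s schedgraph.d > sched.out
python schedgraph.py sched.out 1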