On 26 January 2015 at 19:01, Ryan Stone <rysto32_at_gmail.com> wrote:
> Hm, there was one bug in that script. I uploaded a fixed version.
> The fix was:
>
> -    printf("%d %d KTRGRAPH group:\"thread\", id:\"%s/%s tid %d\", state:\"runq add\", attributes: prio:%d, linkedto:\"%s/%s tid %d\"\n",
> -        cpu, timestamp, args[0]->td_proc->p_comm, args[0]->td_name,
> -        args[0]->td_tid, args[0]->td_priority, curthread->td_proc->p_comm,
> -        curthread->td_name, args[0]->td_tid);
> +    printf("%d %d KTRGRAPH group:\"thread\", id:\"%s/%s tid %d\", state:\"runq add\", attributes: prio:%d, linkedto:\"%s/%s tid %d\"\n",
> +        cpu, timestamp, args[0]->td_proc->p_comm, args[0]->td_name,
> +        args[0]->td_tid, args[0]->td_priority, curthread->td_proc->p_comm,
> +        curthread->td_name, curthread->td_tid);
>
> Note that the last printf argument used args[0] instead of curthread
> as intended.

Cool! Thanks!

> One other thing that I have noticed with the schedgraph data gathering
> is that, unlike KTR, in dtrace every CPU gathers its data into a
> CPU-local buffer. This means that a CPU that sees a large number of
> scheduler events will roll over its ring buffer much more quickly than
> a lightly loaded CPU, which can lead to confusing or misleading
> schedgraph output at the beginning of the time period. You can
> mitigate this problem by allowing dtrace to allocate a larger ring
> buffer with:
>
>     #pragma D option bufsize=32m
>
> (You can potentially tune it even higher than that, but that's a good
> place to start.)
>
> Finally, I've noticed that schedgraph seems to have problems
> auto-detecting the clock frequency, so I tend to forcefully specify
> 1GHz. (dtrace always outputs time in units of ns, so this is always
> correct to do with dtrace-gathered data.)

Good to know. Is there any reason why this isn't just checked into
-HEAD and -10?

-adrian
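
(For anyone following along, here is a minimal, self-contained sketch of
the kind of D script being discussed, with the corrected printf and the
larger ring buffer applied. The probe body is taken from the diff above;
the framing around it, i.e. the shebang, the pragmas, and the choice of
sched:::enqueue as the probe for this clause, is illustrative rather than
a copy of Ryan's actual uploaded script.)

    #!/usr/sbin/dtrace -s

    /*
     * Larger per-CPU ring buffer, per the advice above, so that a busy
     * CPU does not wrap its buffer long before lightly loaded CPUs do.
     */
    #pragma D option quiet
    #pragma D option bufpolicy=ring
    #pragma D option bufsize=32m

    /*
     * sched:::enqueue fires when a thread is placed on a run queue.
     * args[0] is the enqueued struct thread *; curthread is the thread
     * doing the enqueueing.  timestamp is in ns, which is why forcing a
     * 1GHz clock frequency in schedgraph is always correct for this data.
     */
    sched:::enqueue
    {
            printf("%d %d KTRGRAPH group:\"thread\", id:\"%s/%s tid %d\", state:\"runq add\", attributes: prio:%d, linkedto:\"%s/%s tid %d\"\n",
                cpu, timestamp, args[0]->td_proc->p_comm, args[0]->td_name,
                args[0]->td_tid, args[0]->td_priority,
                curthread->td_proc->p_comm, curthread->td_name,
                curthread->td_tid);
    }

(You would capture with something like "dtrace -s sched.d -o sched.out",
stop it with Ctrl-C, and then feed the output to schedgraph.py, passing
its optional clock-frequency argument to force 1GHz as Ryan suggests;
the file names here are placeholders.)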