Paolo Pisati (SoC work on in-kernel natd) has done some work on measuring where time is spent in servicing network interrupts, and what the difference is between 4.x and 7.x, and probably has some interesting results. He ran the experiments on his laptop (1.6-1.7GHz with APIC and a bfe card), recording timestamps with the TSC at various places in the code path.

What he saw is that the basic operation of the interrupt service routine, bfe_intr(), is approximately the same on 4.x and 7.x, taking between 7-10k TSC ticks on a lightly loaded system.

On 7.x, however, there is an extra 9-10k TSC ticks (which may well be hidden in the assembly code on 4.x, but I am not sure about that), which apparently is spent half in this line in sys/i386/i386/intr_machdep.c:

	if (thread)
		isrc->is_pic->pic_disable_source(isrc, PIC_EOI);

and the other half in this block (removed in rev. 1.274, but we ran the tests on an earlier version) in sys/kern/kern_synch.c:

-	binuptime(&new_switchtime);
-	bintime_add(&p->p_rux.rux_runtime, &new_switchtime);
-	bintime_sub(&p->p_rux.rux_runtime, PCPU_PTR(switchtime));

I have good reasons to believe that on modern hardware the replacement, cpu_ticks(), is quite a bit faster. I have no idea, though, why the pic_disable_source() call is so expensive. The 4-5k TSC ticks are approx. 3us. Any clues?

Paolo should follow up in the next few days with graphs and more data.

cheers
luigi

Received on Wed Mar 22 2006 - 19:29:41 UTC
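[Editor's note: a minimal userland sketch, not Paolo's actual measurement harness, of the TSC-based timestamping technique described in the mail: read the cycle counter before and after a code path and convert the tick delta to microseconds. The 1.6 GHz frequency is the figure quoted above; the function names are illustrative.]

	/* tsc_sketch.c -- hedged illustration of TSC timestamping. */
	#include <stdint.h>
	#include <stdio.h>

	/* Read the x86 time-stamp counter. */
	static inline uint64_t
	rdtsc(void)
	{
		uint32_t lo, hi;

		__asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
		return ((uint64_t)hi << 32) | lo;
	}

	/* Convert a TSC tick delta to microseconds at a given CPU frequency. */
	static double
	ticks_to_us(uint64_t ticks, double cpu_hz)
	{
		return ((double)ticks / cpu_hz) * 1e6;
	}

	int
	main(void)
	{
		const double cpu_hz = 1.6e9;	/* 1.6 GHz, as in the mail */
		uint64_t t0, t1;

		t0 = rdtsc();
		/* ... code path under test would go here ... */
		t1 = rdtsc();
		printf("measured: %llu ticks\n", (unsigned long long)(t1 - t0));

		/* Sanity check on the arithmetic in the mail:
		 * 4500 ticks / 1.6e9 Hz ~= 2.81 us, i.e. "approx. 3us". */
		printf("4500 ticks = %.2f us\n", ticks_to_us(4500, cpu_hz));
		return (0);
	}

This also confirms the back-of-the-envelope number above: 4-5k ticks at 1.6-1.7 GHz is indeed on the order of 3 microseconds.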