Re: dhclient sucks cpu usage...

From: Bryan Venteicher <bryanv@daemoninthecloset.org>
Date: Tue, 10 Jun 2014 13:11:36 -0500 (CDT)
----- Original Message -----
> On 10.06.2014 07:03, Bryan Venteicher wrote:
> > Hi,
> >
> > ----- Original Message -----
> >> So, after finding out that nc has a stupidly small buffer size (2k
> >> even though there is space for 16k), I was still not getting as good
> >> performance using nc between machines, so I decided to generate some
> >> flame graphs to try to identify issues...  (Thanks to whoever included a
> >> full set of modules, including dtraceall, on the memstick!)
> >>
> >> So, the first one is:
> >> https://www.funkthat.com/~jmg/em.stack.svg
> >>
> >> As I was browsing around, the em_handle_que was consuming quite a bit
> >> of cpu usage for only doing ~50MB/sec over gige..  Running top -SH shows
> >> me that the taskqueue for em was consuming about 50% cpu...  Also pretty
> >> high for only 50MB/sec...  Looking closer, you'll see that bpf_mtap is
> >> consuming ~3.18% (under ether_nh_input)..  I know I'm not running tcpdump
> >> or anything, but I think dhclient uses bpf to be able to inject packets
> >> and listen in on them, so I kill off dhclient, and instantly, the
> >> taskqueue
> >> thread for em drops down to 40% CPU... (transfer rate only marginally
> >> improves, if it does)
> >>
> >> I decide to run another flame graph w/o dhclient running:
> >> https://www.funkthat.com/~jmg/em.stack.nodhclient.svg
> >>
> >> and now _rxeof drops from 17.22% to 11.94%, pretty significant...
> >>
> >> So, if you care about performance, don't run dhclient...
> >>
> > Yes, I've noticed the same issue. It can absolutely kill performance
> > in a VM guest. It is much more pronounced on only some of my systems,
> > and I hadn't tracked it down yet. I wonder if this is fallout from
> > the callout work, or if there was some bpf change.
> >
> > I've been using the kludgey workaround patch below.
> Hm, pretty interesting.
> dhclient should set up a proper filter (and it looks like it does so:
> 13:10 [0] m@ptichko s netstat -B
>    Pid  Netif   Flags      Recv      Drop     Match Sblen Hblen Command
>   1224    em0 -ifs--l  41225922         0        11     0     0 dhclient
> )
> see the "match" count.
> And BPF itself adds the cost of a read rwlock (plus bpf_filter() calls for
> each consumer on the interface).
> It should not introduce significant performance penalties.
> 


It will be a bit before I'm able to capture that. Here's a flame graph from
earlier in the year showing an absurd amount of time spent in bpf_mtap():

http://people.freebsd.org/~bryanv/vtnet/vtnet-bpf-10.svg


> >
> > diff --git a/sys/net/bpf.c b/sys/net/bpf.c
> > index cb3ed27..9751986 100644
> > --- a/sys/net/bpf.c
> > +++ b/sys/net/bpf.c
> > @@ -2013,9 +2013,11 @@ bpf_gettime(struct bintime *bt, int tstype, struct mbuf *m)
> >   			return (BPF_TSTAMP_EXTERN);
> >   		}
> >   	}
> > +#if 0
> >   	if (quality == BPF_TSTAMP_NORMAL)
> >   		binuptime(bt);
> >   	else
> > +#endif
> bpf_gettime() is called IFF the packet filter matches some traffic.
> Can you show your "netstat -B" output?
> >   		getbinuptime(bt);
> >   
> >   	return (quality);
> >
> >
> >> --
> >>    John-Mark Gurney				Voice: +1 415 225 5579
> >>
> >>       "All that I will do, has been done, All that I have, has not."
> >> _______________________________________________
> >> freebsd-current@freebsd.org mailing list
> >> http://lists.freebsd.org/mailman/listinfo/freebsd-current
> >> To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"
> >>
> > _______________________________________________
> > freebsd-net@freebsd.org mailing list
> > http://lists.freebsd.org/mailman/listinfo/freebsd-net
> > To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"
> >
> 
> 
Received on Tue Jun 10 2014 - 16:11:55 UTC

This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:40:49 UTC