Re: [RFC] Add support for hardware transmit rate limiting queues [WAS: Add support for changing the flow ID of TCP connections]

From: K. Macy <kmacy_at_freebsd.org>
Date: Wed, 20 Aug 2014 14:40:18 -0700
>
>
>
>> - uint32_t -> m_flowid_t is plain gratuitous.  Now we need to include
>>    mbuf.h in more places just to get this definition.  What's the
>>    advantage of this?  style(9) isn't too fond of typedefs either.  Also,
>>    drivers *do* need to know the width of the flowid.  At least lagg(4)
>>    looks at the high bits of the flowid (see flowid_shift in lagg).  How
>>    high it can go depends on the width of the flowid.
>>
>
> The flowid should be typedef'ed. Otherwise, how can you know its type
> when passing the flowid along through function arguments and so on?



I agree with Navdeep. Its usage should be obvious from context. This just
pollutes the namespace.
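
For illustration, a hypothetical helper (not the lagg(4) source) showing why
the flowid width matters to consumers: lagg(4) selects an egress port from
the high bits of the 32-bit flowid, so the useful range of flowid_shift
depends directly on how wide the field is.

#include <stdint.h>

/*
 * Hypothetical sketch, not the lagg(4) code: pick an egress port from
 * the high bits of a 32-bit flowid.  How large flowid_shift can
 * usefully be depends on the width of the flowid, which is why drivers
 * and consumers need to know that width.
 */
static inline uint32_t
flowid_to_port(uint32_t flowid, int flowid_shift, uint32_t nports)
{
	return ((flowid >> flowid_shift) % nports);
}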

>
>
>
>> - Interfaces can come and go, routes can change, and so the relationship
>>    between an inpcb and txringid is not stable at all.  What happens when
>>    the outbound route for an inpcb changes?
>>
>
> This is managed separately by a daemon or such.


No, it's not. Currently, unless you're using flowtables, the route and
llentry are looked up for every single outbound packet. Most users are
lightly loaded enough that they don't see the potential 8x reduction in pps
from that added overhead.
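
Conceptually (made-up names here, not the flowtable code), the trade-off is
between doing that lookup on every transmit and caching the result per flow,
revalidating it whenever the routing table changes:

#include <stddef.h>
#include <stdint.h>

/* Stand-ins for the real next-hop state and routing generation. */
struct route_entry { uint32_t nexthop; };

static uint64_t rt_generation;		/* bumped on any routing change */

static struct route_entry *
route_lookup(uint32_t dst)
{
	static struct route_entry re;	/* stub lookup for the sketch */

	re.nexthop = dst;
	return (&re);
}

struct flow_route_cache {
	struct route_entry *re;		/* cached lookup result */
	uint64_t	    gen;	/* generation it is valid for */
};

static struct route_entry *
flow_route(struct flow_route_cache *fc, uint32_t dst)
{
	/* Reuse the cached route unless the routing table has changed. */
	if (fc->re == NULL || fc->gen != rt_generation) {
		fc->re = route_lookup(dst);
		fc->gen = rt_generation;
	}
	return (fc->re);
}
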




> The problem with the "inpcb" approach you are suggesting is that it
> limits the rate control feature to traffic bound to sockets. Can your
> way of doing rate control be useful to non-socket-based firewall
> applications, for example?
>
> You also assume a 1:1 mapping between "inpcb" and the flowID, right?
> What about M:N mappings, where multiple streams should share the same
> flowID because it makes more sense?


That doesn't make any sense to me. FlowIDs are not a limited resource like
8-bit ASIDs, where clever resource management was required. An M:N mapping
would permit arbitrary interleaving of multiple streams, which simply
doesn't seem useful unless there is some critical case where it is a huge
performance win. In general, it adds complexity for a gratuitous
generalization.


>
>
>
>> - The in_ratectlreq structure that you propose is inadequate in its
>>    current form.  For example, cxgbe's hardware can do rate limiting on a
>>    per-ring as well as per-connection basis, and it allows for pps,
>>    bandwidth, or min-max limits.  I think this is the critical piece that
>>    we NIC maintainers must agree on before any code hits the core kernel:
>>    how to express a rate-limit policy in a standard way and allow for
>>    hardware assistance opportunistically.  ipfw(4)'s dummynet is probably
>>    interested in this part too, so it's great that Luigi is paying
>>    attention to this thread.
>>
>
> My "in_ratectlreq" is a work in progress.


Which means that it probably makes sense to not impinge upon the core
system until it is more refined.
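
For concreteness, a purely illustrative sketch of the kinds of fields a
standard request might have to carry; the names are invented and this is
not the in_ratectlreq from the patch:

#include <stdint.h>

/*
 * Invented names, illustration only.  A standard request would have to
 * cover per-ring as well as per-connection limits, expressed as pps,
 * bandwidth, or min/max bounds.
 */
struct ratectl_req_sketch {
	uint32_t scope;		/* per-ring or per-connection */
	uint32_t unit;		/* packets/sec or bits/sec */
	uint64_t min_rate;	/* 0 = no minimum */
	uint64_t max_rate;	/* 0 = unlimited */
	uint32_t queue;		/* ring index when scope is per-ring */
};
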



>> - The RATECTL ioctls deal with in_ratectlreq so we need to standardize
>>    the ratectlreq structure before these ioctls can be considered generic
>>    ifnet ioctls.  This is the reason cxgbetool (and not ifconfig) has a
>>    private ioctl to frob cxgbe's per-queue rate-limiters.  I did not want
>>    to add ifnet ioctls that in reality were cxgbe only.  Ditto for i2c
>>    ioctls.  Now we have multiple drivers with i2c and melifaro_at_ is doing
>>    the right thing by promoting these private ioctls to a standard ifnet
>>    ioctl.  Have you considered a private mlxtool as a stopgap measure?
>>
>
> It may end up that we need to create our own tool for this, with
> vendor-specific IOCTLs, if we cannot agree on how to do this in a general
> way.
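
For reference, the private-ioctl stopgap being discussed looks roughly like
this from userland; every name here (/dev/mynic0, MYTOOL_SET_RATE,
struct my_rate_req) is invented for illustration, not a real interface.

#include <sys/ioctl.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>

/* Invented vendor-private request and ioctl number. */
struct my_rate_req {
	uint32_t queue;		/* tx ring index */
	uint64_t max_bps;	/* 0 = unlimited */
};

#define MYTOOL_SET_RATE	_IOW('R', 1, struct my_rate_req)

int
main(void)
{
	struct my_rate_req req = { .queue = 0, .max_bps = 1000000000ULL };
	int fd = open("/dev/mynic0", O_RDWR);	/* hypothetical device node */

	if (fd < 0 || ioctl(fd, MYTOOL_SET_RATE, &req) != 0) {
		perror("rate ioctl");
		return (1);
	}
	return (0);
}
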
>
>
>
>> To summarize my take on all of this: we need a standard ratectlreq
>> structure,
>>
>
> Agree.
>
>
>>  a standard way to associate an inpcb with one,
>>
>
> Maybe.


Associating it with an inpcb doesn't exclude adding a mechanism for
supporting it in firewalls.
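
One conceivable shape for that association, sketched below with invented
names (SO_TXRATE does not exist), is a socket option that carries the rate
request down to the connection's inpcb; a firewall path would need its own,
non-socket hook.

#include <sys/socket.h>
#include <stdint.h>
#include <stdio.h>

#define SO_TXRATE	0x2000		/* invented, not an existing option */

static int
set_tx_rate(int sock, uint64_t max_bps)
{
	/* Expected to fail on a real kernel since the option is made up. */
	if (setsockopt(sock, SOL_SOCKET, SO_TXRATE, &max_bps,
	    sizeof(max_bps)) != 0) {
		perror("setsockopt");
		return (-1);
	}
	return (0);
}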

-K