On Thursday, May 29, 2014 2:24:45 pm Adrian Chadd wrote:
> On 29 May 2014 10:18, John Baldwin <jhb_at_freebsd.org> wrote:
>
> >> > It costs wired memory to increase it for the kernel.  The userland set
> >> > size can be increased rather arbitrarily, so we don't need to make it
> >> > but so large, as it is easy to bump later (even with a branch).
> >>
> >> Well, what about making the API/KBI use cpuset_t pointers for things
> >> rather than including it as a bitmask?  Do you think there'd be a
> >> noticeable performance overhead for the bits where it's indirecting
> >> through a pointer to get to the bitmask data?
> >
> > The wired memory is not due to cpuset_t.  The wired memory usage is due
> > to things that do 'struct foo foo_bits[MAXCPU]'.  The KBI issues I
> > mentioned above are 'struct rmlock' (so now you want any rmlock users to
> > malloc space, or you want rmlock_init() to call malloc?  (that seems
> > like a bad idea)).  The other one is smp_rendezvous.  Plus, it's not
> > just a pointer, you really need a (pointer, size_t) tuple similar to
> > what cpuset_getaffinity(), etc. use.
>
> Why would calling malloc be a problem?  Except for the initial setup of
> things, anything dynamically allocating structs with embedded things
> like rmlocks is already dynamically allocating them via malloc or uma.
>
> There's a larger, more fundamental problem with malloc: fragmentation and
> getting the required larger allocations.  But even a 4096-CPU box would
> only require a 512-byte malloc.  That shouldn't be that hard to do.  It'd
> just be from some memory that isn't close to the rest of the lock state.

Other similar APIs like mtx_init() don't call malloc(), so it would be
unusual behavior.  However, we have several other problems to solve before
we can move beyond 256 CPUs anyway (like pf).

-- 
John Baldwin
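
For concreteness, the two patterns being contrasted look roughly like the
sketch below when written as userland code (the kernel analogue of the
static array is 'struct foo foo_bits[MAXCPU]').  'struct foo_percpu' and
'foo_bits' are made-up stand-ins; cpuset_getaffinity(2) and the CPU_*
macros are the real FreeBSD interfaces:

    #include <sys/param.h>
    #include <sys/cpuset.h>  /* cpuset_t, CPU_* macros, cpuset_getaffinity(2) */
    #include <stdio.h>

    /*
     * Made-up stand-in for per-CPU state.  Sizing the array at compile
     * time means every consumer pays for the maximum CPU count whether
     * or not those CPUs exist; in the kernel that storage is wired.
     */
    struct foo_percpu {
            unsigned long count;
    };
    static struct foo_percpu foo_bits[CPU_SETSIZE];

    int
    main(void)
    {
            cpuset_t mask;
            int i;

            /*
             * The (pointer, size_t) tuple style: the caller passes both
             * the set and its size, so the set can grow later without
             * changing the interface or the structure layout.
             */
            if (cpuset_getaffinity(CPU_LEVEL_WHICH, CPU_WHICH_PID, -1,
                sizeof(mask), &mask) == -1) {
                    perror("cpuset_getaffinity");
                    return (1);
            }
            for (i = 0; i < CPU_SETSIZE; i++)
                    if (CPU_ISSET(i, &mask))
                            foo_bits[i].count++;
            printf("%d CPUs in this process's mask\n", CPU_COUNT(&mask));
            return (0);
    }

The embedded-array style keeps the bitmask inline (no extra indirection,
no allocation in the init path), which is why lock KPIs like mtx_init()
avoid malloc; the tuple style trades an indirection for a size that can
be bumped without breaking the KBI.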