Re: stack hogs in kernel

From: Randall Stewart <rrs_at_cisco.com>
Date: Wed, 16 Apr 2008 09:26:00 -0400
Julian Elischer wrote:
> Andrew Reilly wrote:
>> On Sat, Apr 12, 2008 at 08:16:01PM +0200, Roman Divacky wrote:
>>> On Sat, Apr 12, 2008 at 07:14:21PM +0100, Robert Watson wrote:
>>>> On Fri, 11 Apr 2008, Julian Elischer wrote:
>>>>
>>>>> 0xc05667e3 kldstat [kernel]:                2100
>>>>> 0xc07214f8 sendsig [kernel]:                1416
>>>>> 0xc04fb426 ugenread [kernel]:                1200
>>>>> 0xc070616b ipmi_smbios_identify [kernel]:        1136
>>>>> 0xc050bd26 usbd_new_device [kernel]:            1128
>>>>> 0xc0525a83 pfs_readlink [kernel]:            1092
>>>>> 0xc04fb407 ugenwrite [kernel]:                1056
>>>>> 0xc055ea33 prison_enforce_statfs [kernel]:        1044
>>>> This one, at least, is due to an issue Roman pointed out on hackers_at_ 
>>>> in the last 24 hours -- a MAXPATHLEN sized buffer on the stack.  
>>>> Looks like pfs_readlink() has the same issue.
>>> I plan to look at some of the MAXPATHLEN usage... I guess we can 
>>> shave a few
>>> tens of KBs from the kernel (static size and runtime size).
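FWIW, the usual shape of that fix is just to take the MAXPATHLEN buffer off
the stack and allocate it with malloc(9) instead.  A rough, untested sketch
(example_readlink() and fill_path() are made-up names for illustration, not
the actual pfs_readlink() code):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/malloc.h>
    #include <sys/uio.h>
    #include <sys/vnode.h>

    static int
    example_readlink(struct vnode *vp, struct uio *uio)
    {
            char *buf;
            int error;

            /* was: char buf[MAXPATHLEN]; -- 1k+ of kernel stack per call */
            buf = malloc(MAXPATHLEN, M_TEMP, M_WAITOK | M_ZERO);
            error = fill_path(vp, buf, MAXPATHLEN);  /* hypothetical helper */
            if (error == 0)
                    error = uiomove(buf, strlen(buf), uio);
            free(buf, M_TEMP);
            return (error);
    }

With M_WAITOK the allocation can sleep, so this only works in contexts that
are allowed to sleep; otherwise it would have to be M_NOWAIT with a NULL
check.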
>>
>> Why are single-digit kilobytes of memory space interesting, in this
>> context?  Is the concern about L1 data cache footprint, for performance
>> reasons?  If that is the case, the MAXPATHLEN buffer will only really
>> occupy the amount of cache actually touched.
> 
> We used to have 1 page in the beginning, but
> that quickly went to 2. We now have, I think, 4 (I should go check, I
> guess). But that was with the possibility of multiple

Last time I checked (when we first went to gcc 4.x), we were still at
two 4k stack pages.
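That works out to 2 * 4k = 8k of stack per thread, so a single frame like
the 2100-byte one in kldstat above already eats more than a quarter of the
whole budget before the rest of the call chain (or an interrupt frame) lands
on top of it.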

R

> interrupt frames all stacking on top of each other. Now that that has
> been kept to a minimum, we might be able to get back to one or two if
> we tried.  Kernel stacks are a scarce resource: they are not really
> swappable and are always present.
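For anyone who wants to experiment with that, the per-thread stack size is
KSTACK_PAGES * PAGE_SIZE, and KSTACK_PAGES can be overridden in the kernel
config file, e.g.

    options KSTACK_PAGES=1

Whether a one-page stack actually survives is exactly the question raised
above; the option is just the generic knob, nothing specific to the
stack-hog cleanups being discussed here.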
> 
> 
> 
> 
>> I've long wondered about the seemingly fanatical stack size concern in
>> kernel space.  In other domains (where I have more experience) you can
>> get good performance benefits from the essentially free memory management
>> and good cache re-use that comes from putting as much into the
>> stack/call-frame as possible. 
> 
> That is an interesting point..
> 
>>
>> Just curious.
>>
>> Cheers,
>>
> 
> _______________________________________________
> freebsd-current_at_freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-current
> To unsubscribe, send any mail to "freebsd-current-unsubscribe_at_freebsd.org"
> 


-- 
Randall Stewart
NSSTG - Cisco Systems Inc.
803-345-0369 <or> 803-317-4952 (cell)