Re: ufsstat - testers / feedback wanted!

From: Eric Anderson <anderson_at_centtech.com>
Date: Thu, 13 Oct 2005 08:27:50 -0500
Max Laier wrote:
> On Thursday 13 October 2005 13:36, Eric Anderson wrote:
> 
>>[resend to -current for broader test audience]
>>
>>I've just finished the first version of ufsstat, a tool to show local
>>filesystem statistics much like nfsstat does for NFS.  The patch and
>>tool are against 6.0, but it will probably apply and work fine under
>>-CURRENT and possibly 5.x as well.
>>
>>I'm looking for bug reports, comments/suggestions on style(9), and
>>anything else, since this is my first C project, and of course first
>>real FreeBSD contribution. :)
> 
> 
> The patch contains some jitter in the first three or four files due to older 
> versions in src-patched.  As all the statistics gathering is #ifdef'ed, it 
> should not hurt performance in the disabled case.  It will look nicer if you 
> define a macro to update statistics like:
> 
> #ifdef UFS_STATS
> #define	UFS_STATS_UPDATE(field)	ufsstats.field++
> #else
> #define	UFS_STATS_UPDATE(field)
> #endif
> 
> This way each update point takes only one line, and you don't have to do 
> the ugly:
> #ifdef UFS_STATS
>        ufsstats.fsync++;
> #endif
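> 
> which, with the macro, becomes simply:
> 
> 	UFS_STATS_UPDATE(fsync);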

Thanks - great suggestion!  I'll do that.  Any ideas on how to remove 
the FBSDID line jitter from the patches?  I mean a 'correct' way - I 
could easily do it with some hacks/scripts/etc., but maybe there is a 
better way to do this.


> Also, make sure to declare "extern struct ufsstats ufsstats" in ufsstats.h 
> under _KERNEL and define it in just one place.  As is, you don't record the 
> updates from ffs_vnops.c into the right structure.  Finally, you should 
> consider 64-bit counters for some, if not all, fields, as they will 
> overflow quickly.

Ok - I'm looking at that now.  For the 64-bit counters, I can only guess 
which fields will be used heavily.  Is the correct approach to be very 
conservative and make most of them plain int, with int64_t only for the 
ones I expect to grow large, or to just make them all 64-bit and be done 
with it?
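
For concreteness, here is roughly the shape I am aiming for (the field 
names are just examples taken from the stats below, and this sketch 
assumes everything goes 64-bit):

	/* ufsstats.h */
	struct ufsstats {
		uint64_t	create;
		uint64_t	remove;
		uint64_t	fsync;
		/* ... one counter per statistic ... */
	};

	#ifdef _KERNEL
	extern struct ufsstats ufsstats;
	#endif

with the one real definition ("struct ufsstats ufsstats;") living in a 
single kernel source file, as suggested.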


>>To use it, do this:
>>cd /tmp
>>fetch http://www.googlebit.com/software/ufsstat/ufsstat-20051011.tar.gz
>>cd /usr
>>tar xvzf /tmp/ufsstat-20051011.tar.gz
>>patch <./ufsstats.patch
>>
>>add:
>>options		UFS_STAT
>>to your kernel.
>>
>>Rebuild and install world/kernel.
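>>
>>For reference, that's the usual sequence (MYKERNEL is a placeholder 
>>for your config name):
>>
>>cd /usr/src
>>make buildworld
>>make buildkernel KERNCONF=MYKERNEL
>>make installkernel
>>make installworld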
>>
>>Now, you can use ufsstat to show you statistics from your local
>>filesystems, like this:
>>
>># ufsstat
>>    Create    Remove      Link   Symlink     Mkdir     Rmdir    Rename
>>    289048    794043      4361     12558     25796    117739         0
>>   GetAttr   SetAttr      Open     Close   ReadDir  ReadLink     VInit
>>  64868230    759824  10701553   9891642   5042948         0  45315645
>>     Chmod     Chown  Whiteout  Strategy    Access     Mknod  NewInode
>>    409782     79612         0   4020035         0         3         0
>>     Fsync SyncVnode LockVnode   RdVnode   WrVNode
>>         0         0         0         0         0
>>   ExtRead  Extwrite FndExtAtt RdExtAttr OpnExtAtt ClseExtAt ExtStrtgy
>>         0         0         0         0         0         0         0
>>
>>or watch over time with the -w switch.
>>
>>I have not done any performance testing yet to see if it impacts
>>filesystem performance by any measurable amount, so if someone does do
>>this testing before I do, please post your results!
> 
> 
> I don't think you can measure a single integer (or 64-bit) increment in the 
> face of an operation that has to access backing store.  Even if there is a 
> performance hit, you don't have to build your kernel with the option enabled.

I was thinking of doing some cumulative tests - say, 10000 assorted 
operations without the option, then the same ops (in the same order, on 
the same disk, freshly newfs'ed again) with it enabled.
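
Something like this crude loop, for example (the path and count are 
just placeholders):

	#!/bin/sh
	# time creating and then removing 10000 files on the test fs
	mkdir -p /mnt/test/bench
	time sh -c '
		i=0
		while [ $i -lt 10000 ]; do
			touch /mnt/test/bench/file$i
			i=$((i + 1))
		done
		rm -r /mnt/test/bench
	'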

> It might be (more) interesting to have these stats on a per-mountpoint basis.  
> Not sure if you have enough state available to record all of the above, but 
> since you asked for input - this might be worth investigating.

I agree, and have thought about that.  I expected this would be the 
first feature someone asked about. :)  I'm not sure whether it would be 
best to store the stats for all filesystems in one sysctl area, to have 
a sysctl for each mounted filesystem, or something else.  Remember, I'm 
*very new* to this, so any hints or poking in the right direction are 
very helpful!
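
One idea - purely a sketch, since I don't know the mount code well 
enough yet to say if this is the right spot - would be to hang a copy 
of the stats struct off struct ufsmount and key the update macro on the 
mount:

	/* hypothetical per-mount counters, e.g. in ufsmount.h */
	struct ufsmount {
		/* ... existing fields ... */
		struct ufsstats	um_stats;	/* example field name */
	};

	#ifdef UFS_STATS
	#define	UFS_STATS_UPDATE(ump, field)	((ump)->um_stats.field++)
	#else
	#define	UFS_STATS_UPDATE(ump, field)
	#endif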

Thanks for the input so far!
Eric



-- 
------------------------------------------------------------------------
Eric Anderson        Sr. Systems Administrator        Centaur Technology
Anything that works is better than anything that doesn't.
------------------------------------------------------------------------