Re: weird bugs with mmap-ing via NFS

From: Matthew Dillon <dillon_at_apollo.backplane.com>
Date: Tue, 21 Mar 2006 13:23:22 -0800 (PST)
:Hello!
:
:I have a program, that writes a file via mmap. Normally the target is on a 
:local filesystem, so there are no issues.
:
:Today, however, I tried running it on another machine writing via NFS.
:
:If the output share is mounted with default parameters, the writing succeeds, 
:but involves very high READ bandwidth (the client is not reading anything). 
:For example, here is the output of `netstat -1' on the client:
:
:            input        (Total)           output
:   packets  errs      bytes    packets  errs      bytes colls
:         2     0        152          0     0          0     0 
:      3081     0    4369834        519     0      82006     0 
:...

    You might be doing just writes to the mmap()'d memory, but the system
    doesn't know that.  The moment you touch any mmap()'d page, reading or
    writing, the system has to fault it in, which means it has to read it
    and load valid data into the page.

:When I mount with large read and write sizes:
:
:	mount_nfs -r 65536 -w 65536 -U -ointr pandora:/backup /backup
:
:it changes -- for the worse. Short time into it -- the file stops growing 
:according to the `ls -sl' run on the NFS server (pandora) at exactly 3200 FS 
:blocks (the FS was created with `-b 65536 -f 8129').
:
:At the same time, according to `systat -if' on both client and server, the  
:client continues to send (and the server continues to receive) about 30Mb of 
:some (?) data per second.
:
:The client is the freshly rebuilt FreeBSD-6.1/i386 -- with alc's recent big 
:MFC included. The server is an older 6.1/amd64 from Feb 7.
:
:Please, advise. Thanks!
:
:	-mi

    It kinda sounds like the buffer cache is getting blown out, but not
    having seen the program I can't really analyze it.

    It will always be more efficient to write to a file using write() than
    using mmap(), and it will always be far, far more efficient to write
    to an NFS file in nfs block-sized chunks rather than in smaller chunks,
    due to the way the buffer cache works.  The only write case using
    write lengths less than the NFS block size that is optimized is the
    file-append case.  All other cases (when writing less than the NFS block
    size) will have to perform a read-before-write to validate the buffer
    cache buffer.  Writes that are multiples of the NFS block size (and
    aligned to the NFS block size) should be optimized and will not have to
    perform a read-before-write.

					-Matt
					Matthew Dillon 
					<dillon_at_backplane.com>
Received on Tue Mar 21 2006 - 20:24:02 UTC

This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:38:53 UTC