Re: Increasing MAXPHYS

From: C. P. Ghost <cpghost_at_cordula.ws>
Date: Sat, 20 Mar 2010 19:13:58 +0100
On Sat, Mar 20, 2010 at 6:53 PM, Matthew Dillon
<dillon_at_apollo.backplane.com> wrote:
>
> :All of the above I have successfully tested over the last few months with a
> :MAXPHYS of 1MB on i386 and amd64 platforms.
> :
> :So my questions are:
> :- does somebody know any issues denying increasing MAXPHYS in HEAD?
> :- are there any specific opinions about value? 512K, 1MB, MD?
> :
> :--
> :Alexander Motin
>
>    (nswbuf * MAXPHYS) of KVM is reserved for pbufs, so on i386 you
>    might hit up against KVM exhaustion issues in unrelated subsystems.
>    nswbuf typically maxes out at around 256.  For i386 1MB is probably
>    too large (256M of reserved KVM is a lot for i386).  On amd64 there
>    shouldn't be a problem.

Pardon my ignorance, but wouldn't reserving that much KVM make small embedded
devices like Soekris boards with 128 MB of physical RAM totally unusable?
On my net4801, running RELENG_8:

vm.kmem_size: 40878080

hw.physmem: 125272064
hw.usermem: 84840448
hw.realmem: 134217728
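
A quick way to put those numbers side by side is a minimal C sketch like the
one below: it reads vm.kmem_size via sysctlbyname() and prints it next to the
hypothetical pbuf reservation (nswbuf * MAXPHYS, using the ~256 and 1MB figures
from the quoted mail, not values read from the kernel). Whether the pbuf map is
actually charged against the same arena that vm.kmem_size reports is glossed
over here; the point is only the relative scale on a box like this.

/*
 * Rough comparison only: nswbuf and MAXPHYS below are the figures from the
 * quoted mail, and vm.kmem_size is assumed to be exported as an unsigned
 * long (as on this RELENG_8/i386 box).
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
	unsigned long kmem_size;
	size_t len = sizeof(kmem_size);
	const unsigned long nswbuf = 256;		/* "typically maxes out at around 256" */
	const unsigned long maxphys = 1024 * 1024;	/* proposed 1MB MAXPHYS */

	if (sysctlbyname("vm.kmem_size", &kmem_size, &len, NULL, 0) != 0) {
		perror("sysctlbyname");
		return (1);
	}
	printf("vm.kmem_size:           %12lu bytes\n", kmem_size);
	printf("nswbuf * MAXPHYS (1MB): %12lu bytes\n", nswbuf * maxphys);
	return (0);
}

On the numbers above, that reservation alone (268435456 bytes) would be more
than six times the vm.kmem_size this machine reports.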

>    Diminishing returns get hit pretty quickly with larger MAXPHYS values.
>    As long as the I/O can be pipelined, the reduction in transaction rate
>    becomes less interesting once the rate falls below a certain level.
>    Off the cuff I'd say 2000 tps is a good basis for
>    considering whether it is an issue or not.  256K is actually quite
>    a reasonable value.  Even 128K is reasonable.
>
>    Nearly all the issues I've come up against in the last few years have
>    been related more to pipeline algorithms breaking down and less with
>    I/O size.  The cluster_read() code is especially vulnerable to
>    algorithmic breakdowns when fast media (such as a SSD) is involved.
>    e.g.  I/Os queued from the previous cluster op can create stall
>    conditions in subsequent cluster ops before they can issue new I/Os
>    to keep the pipeline hot.
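
To make the diminishing-returns point concrete, here is a small
back-of-the-envelope sketch: at a fixed transaction rate, throughput scales
linearly with the per-transaction I/O size, so the question is simply where
the device saturates. The 2000 tps figure is the off-the-cuff number from the
quoted mail; everything else is plain arithmetic, not measured data.

#include <stdio.h>

int
main(void)
{
	const unsigned long tps = 2000;		/* off-the-cuff rate from the mail above */
	const unsigned long sizes_kib[] = { 64, 128, 256, 512, 1024 };
	size_t i;

	/* throughput = per-transaction size * transactions per second */
	for (i = 0; i < sizeof(sizes_kib) / sizeof(sizes_kib[0]); i++)
		printf("MAXPHYS %4luK @ %lu tps -> %6.1f MB/s\n",
		    sizes_kib[i], tps, (double)(sizes_kib[i] * tps) / 1024.0);
	return (0);
}

At 256K and 2000 tps that is already about 500 MB/s, more than a single
spindle or a typical SATA SSD of this era can sustain, which is why 256K
(or even 128K) is called reasonable above.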

Thanks,
-cpghost.

-- 
Cordula's Web. http://www.cordula.ws/
Received on Sat Mar 20 2010 - 17:40:45 UTC
