Re: access to hard drives is "blocked" by writes to a flash drive

From: Konstantin Belousov <kostikbel@gmail.com>
Date: Mon, 4 Mar 2013 07:35:47 +0200
On Sun, Mar 03, 2013 at 07:01:27PM -0800, Don Lewis wrote:
> On  3 Mar, Poul-Henning Kamp wrote:
> 
> > For various reasons (see: Lemming-syncer) FreeBSD will block all I/O
> > traffic to other disks too, when these pileups get too bad.
> 
> The Lemming-syncer problem should have mostly been fixed by 231160 in
> head (231952 in stable/9 and 231967 in stable/8) a little over a year
> ago. The exceptions are atime updates, mmapped files with dirty pages,
> and quotas. Under certain workloads I still notice periodic bursts of
> seek noise. After thinking about it for a bit, I suspect that it could
> be atime updates, but I haven't tried to confirm that.
I never got a definition of what the term "Lemming syncer" means. The
(current) syncer model is to iterate over the list of active vnodes,
i.e. vnodes for which an open file exists or a mapping is established,
and initiate the necessary writes. The iteration over the active list
is performed several times during the same sync run over the
filesystem; this is considered acceptable.
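
As a rough illustration of that pass (a toy user-space model; the
struct layout and names below are invented for the example, not the
actual vnode code):

    #include <stddef.h>

    /* Toy vnode: "active" means an open file or a mapping keeps it live. */
    struct vnode {
        int           v_dirty;  /* has dirty data to flush */
        struct vnode *v_next;   /* next on the mount's active list */
    };

    /*
     * One pass over a mount's active-vnode list, initiating the
     * necessary writes.  The syncer may run several such passes
     * within a single sync run over the filesystem.
     */
    static void
    sync_active_pass(struct vnode *active_head)
    {
        struct vnode *vp;

        for (vp = active_head; vp != NULL; vp = vp->v_next) {
            if (vp->v_dirty)
                vp->v_dirty = 0;  /* stand-in for starting the writes */
        }
    }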

(Mostly) independently, the syncer thread iterates over the list of
dirty buffers and writes them.

The "wdrain" wait is independend from the syncer model used. It is entered
by a thread which intends to write in some future, but the wait is performed
before the entry into VFS is performed, in particular, before any VFS
resources are acquired. The wait sleeps when the total amount of the
buffer space for which the writes are active (runningbufspace counter)
exceeds the hirunningbufspace threshold. This way buffer cache tries to
avoid creating too long queue of the write requests.
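
A simplified user-space sketch of that throttle (the counter, the
threshold, and the function names mirror the kernel's runningbufspace,
hirunningspace, waitrunningbufspace() and runningbufwakeup(), but the
pthread plumbing and the single-threshold wakeup are mine; the real
code also uses a low-water mark):

    #include <pthread.h>

    /*
     * Buffer space with writes in flight, and the cap above which
     * would-be writers sleep.  16MB mirrors the default mentioned
     * below.
     */
    static long runningbufspace;
    static const long hirunningspace = 16 * 1024 * 1024;

    static pthread_mutex_t rb_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t rb_wdrain = PTHREAD_COND_INITIALIZER;

    /*
     * Called before a thread enters the VFS to write: sleep in
     * "wdrain" while the global in-flight write total is over the
     * threshold.
     */
    static void
    waitrunningbufspace(void)
    {
        pthread_mutex_lock(&rb_lock);
        while (runningbufspace >= hirunningspace)
            pthread_cond_wait(&rb_wdrain, &rb_lock);
        pthread_mutex_unlock(&rb_lock);
    }

    /* Account for a write being queued to the device. */
    static void
    runningbufspace_add(long size)
    {
        pthread_mutex_lock(&rb_lock);
        runningbufspace += size;
        pthread_mutex_unlock(&rb_lock);
    }

    /* From write completion: drop the accounting, wake "wdrain" sleepers. */
    static void
    runningbufwakeup(long size)
    {
        pthread_mutex_lock(&rb_lock);
        runningbufspace -= size;
        pthread_cond_broadcast(&rb_wdrain);
        pthread_mutex_unlock(&rb_lock);
    }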

If there is some device with high write-completion latency, then it is
easy to see that a load which creates an intensive queue of writes to
that device, regardless of the amount of writes to other devices,
quickly populates runningbufspace with buffers targeted at the slow
device.  Then the "wdrain" wait mechanism kicks in, slowing all
writers until the queue is processed.
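
To put rough numbers on it (the device speed here is only an
assumption for the arithmetic): with the 16MB cap and a flash device
that completes writes at about 5MB/s, the in-flight queue needs on the
order of 16/5, i.e. about three seconds, to drain below the threshold,
and for that entire window every writer in the system, including
writers to fast disks, sleeps in "wdrain".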

It could be argued that the current typical value of 16MB for
hirunningspace (tunable at runtime via the vfs.hirunningspace sysctl)
is too low, but experiments with increasing it did not provide any
measurable change in throughput or latency for some loads.

And, just to push back on the misinformation: the unmapped buffer work
has nothing to do with either the syncer or runningbufspace.

> 
> When using TCQ or NCQ, perhaps we should limit the number of outstanding
> writes per device to leave some slots open for reads.  We should
> probably also prioritize reads over writes unless we are under memory
> pressure.

Reads are allowed to start even when runningbufspace has overflowed.
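
Continuing the sketch from earlier in this message, the asymmetry is
simply that only the write path calls the throttle (again illustrative,
not the kernel code):

    /* Writes consult the throttle and may sleep in "wdrain"... */
    static void
    issue_write(long size)
    {
        waitrunningbufspace();
        runningbufspace_add(size);
        /* ... start the device write; runningbufwakeup(size) runs
         * from the completion path ... */
    }

    /* ... reads do not: they start no matter how much write I/O is
     * in flight. */
    static void
    issue_read(void)
    {
        /* ... start the device read immediately ... */
    }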

Received on Mon Mar 04 2013 - 04:35:54 UTC
