Re: access to hard drives is "blocked" by writes to a flash drive

From: Ian Lepore <ian_at_FreeBSD.org>
Date: Mon, 04 Mar 2013 08:37:20 -0700
On Mon, 2013-03-04 at 07:35 +0200, Konstantin Belousov wrote:
> On Sun, Mar 03, 2013 at 07:01:27PM -0800, Don Lewis wrote:
> > On  3 Mar, Poul-Henning Kamp wrote:
> > 
> > > For various reasons (see: Lemming-syncer) FreeBSD will block all I/O
> > > traffic to other disks too, when these pileups get too bad.
> > 
> > The Lemming-syncer problem should have mostly been fixed by 231160 in
> > head (231952 in stable/9 and 231967 in stable/8) a little over a year
> > ago. The exceptions are atime updates, mmapped files with dirty pages,
> > and quotas. Under certain workloads I still notice periodic bursts of
> > seek noise. After thinking about it for a bit, I suspect that it could
> > be atime updates, but I haven't tried to confirm that.
> I never got a definition of what the Lemming-syncer term means. The
> (current) syncer model is to iterate over the list of active vnodes,
> i.e. vnodes for which an open file exists or a mapping is established,
> and initiate the necessary writes. The iteration over the active list
> is performed several times during the same sync run over the
> filesystem; this is considered acceptable.
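
(For anyone following along without the source handy, the active-list
pass is conceptually something like the sketch below -- a rough
user-space analogue with made-up names, not the actual kernel code.)

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /*
     * Rough user-space analogue of the syncer's active-list pass
     * (made-up names, not kernel code): walk the set of "active"
     * files -- here just an array of open descriptors -- and initiate
     * the necessary writes for each.  The real syncer walks per-mount
     * vnode lists and may make several passes during one sync run.
     */
    static void
    sync_active_list(const int *fds, int nfds)
    {
        for (int i = 0; i < nfds; i++)
            if (fsync(fds[i]) == -1)
                perror("fsync");
    }

    int
    main(void)
    {
        int fd = open("/tmp/syncer-demo", O_CREAT | O_WRONLY, 0644);

        if (fd == -1) {
            perror("open");
            return (1);
        }
        (void)write(fd, "dirty data\n", 11);
        sync_active_list(&fd, 1);
        close(fd);
        return (0);
    }
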
> 
> (Mostly) independently, the syncer thread iterates over the list of
> dirty buffers and writes them.
> 
> The "wdrain" wait is independend from the syncer model used. It is entered
> by a thread which intends to write in some future, but the wait is performed
> before the entry into VFS is performed, in particular, before any VFS
> resources are acquired. The wait sleeps when the total amount of the
> buffer space for which the writes are active (runningbufspace counter)
> exceeds the hirunningbufspace threshold. This way buffer cache tries to
> avoid creating too long queue of the write requests.
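
(To put that mechanism in code: it works roughly like the model below,
a simplified user-space sketch with made-up names; the real logic lives
in the kernel's buffer cache, not here.  Build with -lpthread.)

    #include <pthread.h>

    /*
     * Simplified model of the "wdrain" throttle (made-up user-space
     * names, not the kernel code).  runningbufspace counts bytes of
     * writes currently in flight; a thread that intends to write
     * sleeps here, before acquiring any VFS resources, while that
     * count exceeds the threshold.  Note the budget is global, not
     * per device.
     */
    static pthread_mutex_t rb_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  rb_drained = PTHREAD_COND_INITIALIZER;
    static long runningbufspace;                /* bytes in flight */
    static long hirunningbufspace = 16L << 20;  /* ~16MB threshold */

    /* The "wdrain" sleep, done by a thread that is about to write. */
    static void
    wait_runningbufspace(void)
    {
        pthread_mutex_lock(&rb_lock);
        while (runningbufspace >= hirunningbufspace)
            pthread_cond_wait(&rb_drained, &rb_lock);
        pthread_mutex_unlock(&rb_lock);
    }

    /* Accounting done as each write is issued and as it completes. */
    static void
    write_started(long size)
    {
        pthread_mutex_lock(&rb_lock);
        runningbufspace += size;
        pthread_mutex_unlock(&rb_lock);
    }

    static void
    write_completed(long size)
    {
        pthread_mutex_lock(&rb_lock);
        runningbufspace -= size;
        if (runningbufspace < hirunningbufspace)
            pthread_cond_broadcast(&rb_drained);
        pthread_mutex_unlock(&rb_lock);
    }

    int
    main(void)
    {
        write_started(64 * 1024);   /* a 64K write goes out...     */
        wait_runningbufspace();     /* ...writers aren't throttled */
        write_completed(64 * 1024); /* ...and it completes         */
        return (0);
    }
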
> 
> If there is some device with high write-completion latency, then it is
> easy to see that, for a load which queues writes to that device
> intensively, runningbufspace quickly fills with buffers targeted at the
> slow device, regardless of the amount of writes to other devices.  Then
> the "wdrain" wait mechanism kicks in, slowing all writers until the
> queue is processed.
> 
> It could be argued that the current typical value of 16MB for
> hirunningbufspace is too low, but experiments with increasing it did
> not provide any measurable change in throughput or latency for some
> loads.
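
(For anyone who wants to repeat those experiments, both the threshold
and the current in-flight count are visible from userland.  Something
like the following should print them; the sysctl names are from memory
-- vfs.hirunningspace and vfs.runningbufspace -- so check `sysctl vfs`
if they differ on your system.)

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    /* Print the write-throttle threshold and the bytes currently in
     * flight.  Sysctl names are assumed, not verified against a
     * particular FreeBSD version. */
    static long
    read_long_sysctl(const char *name)
    {
        long val = 0;
        size_t len = sizeof(val);

        if (sysctlbyname(name, &val, &len, NULL, 0) == -1) {
            perror(name);
            return (-1);
        }
        return (val);
    }

    int
    main(void)
    {
        printf("hirunningspace:  %ld bytes\n",
            read_long_sysctl("vfs.hirunningspace"));
        printf("runningbufspace: %ld bytes\n",
            read_long_sysctl("vfs.runningbufspace"));
        return (0);
    }

Raising the threshold for an experiment should then just be a sysctl(8)
one-liner, no rebuild needed.
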
> 

Useful information.  I might argue that 16MB is too big, not too small.
If you've got a device that only does 2MB/sec write throughput, that's
an 8-second backlog.  Lest you think such slow devices aren't in
everyday use, a couple of years ago I struggled mightily to get an SD
card driver on an embedded system UP to that speed (from 300K/sec).
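
Spelling out that arithmetic with the two numbers above:

    16 MB / 2 MB/sec    =  8 seconds of queued writes
    16 MB / 300 KB/sec ~= 55 seconds at the speed the driver started from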

> And, just to wrestle with the misinformation: the unmapped buffer work
> has nothing to do with either the syncer or runningbufspace.
> 
> > 
> > When using TCQ or NCQ, perhaps we should limit the number of outstanding
> > writes per device to leave some slots open for reads.  We should
> > probably also prioritize reads over writes unless we are under memory
> > pressure.
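
(The per-device limit suggested there could be as simple as reserving a
tag or two for reads, along the lines of the sketch below -- purely
illustrative, with made-up names, not the CAM/ata(4) code.)

    #include <stdbool.h>
    #include <stdio.h>

    /*
     * Illustrative slot-reservation policy (made-up names): out of a
     * device's NCQ/TCQ tags, keep a couple reserved for reads so a
     * read never has to wait behind a full queue of writes.
     */
    #define DEV_TAGS      32    /* tags the device advertises */
    #define READ_RESERVE   2    /* tags writes may never occupy */

    static bool
    can_dispatch(bool is_read, int reads_in_flight, int writes_in_flight)
    {
        int total = reads_in_flight + writes_in_flight;

        if (total >= DEV_TAGS)
            return (false);     /* device queue is full */
        if (!is_read && writes_in_flight >= DEV_TAGS - READ_RESERVE)
            return (false);     /* writes must leave the read slots */
        return (true);
    }

    int
    main(void)
    {
        /* With 30 writes queued, the next write waits but a read goes. */
        printf("write ok: %d\n", can_dispatch(false, 0, 30));
        printf("read ok:  %d\n", can_dispatch(true, 0, 30));
        return (0);
    }
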
> 
> Reads are allowed to start even when runningbufspace has overflowed.

That seems to indicate that the problem isn't a failure of the
runningbufspace regulation mechanism, because when this problem happens
the most noticeable symptom is that reads take many seconds (often the
read is an attempt to launch a tool to figure out why performance is
bad).

Hmm, on the other hand, I can't guarantee that I had 'noatime' on the
mounts when I've seen this, so maybe I'd better not point fingers at any
specific code until I have better information.

I kind of like the suggestion someone else made of having a NOATIME
kernel config option.  I can't make a strong case for it, but it appeals
to me.  If it would allow significant pieces of kernel or filesystem
code to be eliminated at compile time it would be worth it.  If it
complicated the code without such savings, probably not worth it.
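
Today that's a per-mount decision, so a kernel option like that would
essentially hardwire the MNT_NOATIME behavior.  In the meantime it's
easy to check what a given mount is actually doing, e.g. with statfs(2)
on FreeBSD:

    #include <sys/param.h>
    #include <sys/mount.h>
    #include <stdio.h>

    /* Report whether the filesystem holding a path is mounted noatime. */
    int
    main(int argc, char **argv)
    {
        struct statfs sf;

        if (argc < 2) {
            fprintf(stderr, "usage: %s path\n", argv[0]);
            return (1);
        }
        if (statfs(argv[1], &sf) == -1) {
            perror("statfs");
            return (1);
        }
        printf("%s is mounted %s\n", sf.f_mntonname,
            (sf.f_flags & MNT_NOATIME) ? "noatime" : "with atime updates");
        return (0);
    }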

-- Ian
Received on Mon Mar 04 2013 - 14:37:25 UTC