Question about panic in brelse()

From: Christoph Mallon <christoph.mallon_at_gmx.de>
Date: Wed, 14 Jan 2009 08:32:13 +0100

Hi,

I sent this to hackers_at_ two days ago, but I have received no response
so far. Maybe somebody with VFS experience will see it on this list.


I am observing a failed assertion in the VFS regarding a buffer. I
investigated a bit; here are my findings, followed by a question:

Assume I have a buffer with
   b_iocmd   = BIO_WRITE
   b_ioflags = BIO_ERROR
   b_error   = EIO
   b_flags   = B_NOCACHE
passed to brelse() in kern/vfs_bio.c[0].

- This particular combination of values (line 1144) causes BIO_ERROR to
be cleared (line 1152) and B_DELWRI to be set in bdirty() (line 1031,
called from line 1153).

- Because of B_NOCACHE (line 1343), the buffer is moved to QUEUE_CLEAN
  (line 1349), and B_INVAL is set as well (line 1345).

- A few lines further down (line 1375), bundirty() is called because
both B_INVAL and B_DELWRI are set.

- bundirty() instantly panics because the buffer is not in QUEUE_NONE
(line 1075). (See the sketch below for a model of this sequence.)
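
To make the sequence easier to follow, here is a minimal userland model
of the path described above. To be clear, this is a paraphrase and not
the actual kernel code: the flag values are the stock ones from
sys/bio.h and sys/buf.h, the queue handling is reduced to a single
b_qindex field, and the line numbers in the comments refer to r183754.

#include <assert.h>
#include <errno.h>
#include <stdio.h>

/* Flag values as in stock sys/bio.h and sys/buf.h (assumed, see above). */
#define	BIO_ERROR	0x01
#define	BIO_WRITE	0x02

#define	B_ASYNC		0x00000004
#define	B_DELWRI	0x00000080
#define	B_INVAL		0x00002000
#define	B_NOCACHE	0x00008000

enum { QUEUE_NONE, QUEUE_CLEAN };

struct buf {
	int	b_iocmd;
	int	b_ioflags;
	int	b_error;
	int	b_flags;
	int	b_qindex;
};

static void
bdirty(struct buf *bp)
{
	bp->b_flags |= B_DELWRI;			/* line 1031 */
}

static void
bundirty(struct buf *bp)
{
	/* line 1075: the assertion that fires */
	assert(bp->b_qindex == QUEUE_NONE);
	bp->b_flags &= ~B_DELWRI;
}

static void
brelse(struct buf *bp)
{
	/* line 1144: failed write (EIO), so clear the error and redirty */
	if (bp->b_iocmd == BIO_WRITE && (bp->b_ioflags & BIO_ERROR) &&
	    bp->b_error == EIO && !(bp->b_flags & B_INVAL)) {
		bp->b_ioflags &= ~BIO_ERROR;		/* line 1152 */
		bdirty(bp);				/* line 1153 */
	}

	/* line 1343: B_NOCACHE moves the buffer to QUEUE_CLEAN */
	if (bp->b_flags & (B_INVAL | B_NOCACHE)) {
		bp->b_flags |= B_INVAL;			/* line 1345 */
		bp->b_qindex = QUEUE_CLEAN;		/* line 1349 */
	}

	/* line 1375: B_INVAL and B_DELWRI together trigger bundirty() */
	if ((bp->b_flags & (B_INVAL | B_DELWRI)) == (B_INVAL | B_DELWRI))
		bundirty(bp);
}

int
main(void)
{
	struct buf bp = {
		.b_iocmd	= BIO_WRITE,
		.b_ioflags	= BIO_ERROR,
		.b_error	= EIO,
		.b_flags	= B_NOCACHE | B_ASYNC,
		.b_qindex	= QUEUE_NONE,
	};

	brelse(&bp);	/* aborts in bundirty(), mirroring the panic */
	printf("not reached\n");
	return (0);
}

Compiled with a plain cc, the assert in bundirty() aborts the program,
mirroring the panic.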

My question is: Is this a bug in brelse(), or was the combination of the
B_NOCACHE flag with a failed write attempt (BIO_WRITE, BIO_ERROR, EIO)
already invalid when the buffer was passed to brelse()?

Below is a dump of the buffer taken right when the assertion triggers.
If you need any further information about this issue, please let me know.

Hopefully somebody can shed some light on this.

	Christoph


{
b_bufobj = 0xffffff0030005e00,
b_bcount = 16384,
b_caller1 = 0x0,
b_data = 0xfffffffea2c57000 "",
b_error = 5, (EIO)
b_iocmd = 2 '\002', (BIO_WRITE)
b_ioflags = 2 '\002', (BIO_DONE)
b_iooffset = 98304,
b_resid = 16384,
b_iodone = 0,
b_blkno = 192,
b_offset = 98304,
b_bobufs = { tqe_next = 0x0, tqe_prev = 0xffffff0030005e40},
b_left = 0x0,
b_right = 0x0,
b_vflags = 0,
b_freelist = {
   tqe_next = 0xfffffffe92d747c8,
   tqe_prev = 0xffffffff80d340f0
},
b_qindex = 1, (QUEUE_CLEAN)
b_flags = 41092, (B_NOCACHE | B_INVAL | B_DELWRI | B_ASYNC)
b_xflags = 33 '!',
b_lock = {
   lock_object = {
     lo_name = 0xffffffff808d01b6 "bufwait",
     lo_flags = 91947008,
     lo_data = 0,
     lo_witness = 0xfffffffe40206180
   },
   lk_lock = 18446744073709551608,
   lk_timo = 0,
   lk_pri = 80
},
b_bufsize = 16384,
b_runningbufspace = 0,
b_kvabase = 0xfffffffea2c57000 "",
b_kvasize = 16384,
b_lblkno = 192,
b_vp = 0xffffff0030005ce8,
b_dirtyoff = 0,
b_dirtyend = 0,
b_rcred = 0x0,
b_wcred = 0x0,
b_saveaddr = 0xfffffffea2c57000,
b_pager = {pg_reqpage = 0},
b_cluster = {
   cluster_head = {
     tqh_first = 0xfffffffe92d747c8,
     tqh_last = 0xfffffffe92d73ad0
   },
   cluster_entry = {
     tqe_next = 0xfffffffe92d747c8,
     tqe_prev = 0xfffffffe92d73ad0
   }
},
b_pages = {
   0xffffff00de3ce5a0, 0xffffff00de3ce610,
   0xffffff00de3ce680, 0xffffff00de3ce6f0,
   0x0 <repeats 28 times>
},
b_npages = 4,
b_dep = { lh_first = 0x0 },
b_fsprivate1 = 0x0,
b_fsprivate2 = 0x0,
b_fsprivate3 = 0x0,
b_pin_count = 0
}
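
As a sanity check on the b_flags decode above (assuming the stock values
from sys/buf.h: B_NOCACHE = 0x8000, B_INVAL = 0x2000, B_DELWRI = 0x0080,
B_ASYNC = 0x0004): 0x8000 | 0x2000 | 0x0080 | 0x0004 = 0xa084 = 41092,
which matches the value in the dump.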


[0] r183754 in head/, which is the latest revision of kern/vfs_bio.c.
