2008/2/7, Marcel Moolenaar <xcllnt_at_mac.com>:
>
> On Feb 7, 2008, at 10:41 AM, Attilio Rao wrote:
>
> > 2008/2/7, Marcel Moolenaar <xcllnt_at_mac.com>:
> >> On Feb 7, 2008, at 6:11 AM, Attilio Rao wrote:
> >>
> >>>>>>>>>> Correct lock order is devfs vnode -> devfs mount sx lock.
> >>>>>>>>>> When allocating a new devfs vnode, see devfs_allocv(), the
> >>>>>>>>>> newly created vnode is locked while the devfs mount lock is
> >>>>>>>>>> already held (see line 250 of fs/devfs/devfs_vnops.c).
> >>>>>>>>>> Nonetheless, this cannot cause deadlock, since no other
> >>>>>>>>>> thread can find the new vnode and thus perform the other
> >>>>>>>>>> lock order for this vnode lock.
> >>>>>>>>>>
> >>>>>>>>>> The fix is to shut witness up in this particular case.
> >>>>>>>>>> Attilio, how to do this?
> >>>>>>>>>
> >>>>>>>>> Just add LK_NOWITNESS for one of the locks involved in the
> >>>>>>>>> lockinit().
> >>>>>>>>
> >>>>>>>> Then we lose the useful reports of actual LORs later, don't we?
> >>>>>>>
> >>>>>>> Another solution would be to revamp the BLESSING option, which
> >>>>>>> allows 'blessing' some LORs.
> >>>>>>> jhb and I, btw, didn't want to enable it because it could lead
> >>>>>>> some less experienced developers to hide LORs under this label,
> >>>>>>> and that is something we want to avoid.
> >>>>>>
> >>>>>> This LOR should not be ignored globally. When real, it caused an
> >>>>>> easily reproducible lockup of the machine.
> >>>>>>
> >>>>>> It would be better to introduce some lockmgr flag to ignore
> >>>>>> _this_ locking.
> >>>>>
> >>>>> A flag to pass where?
> >>>>
> >>>> To the lockmgr itself at the point of acquisition, like
> >>>>     lockmgr(&lk, LK_EXCLUSIVE | LK_INTERLOCK | LK_NOWARN,
> >>>>         &interlk, ...);
> >>>
> >>> No, I really want general WITNESS support for this (as I also think
> >>> that having something more fine-grained than BLESSING will break
> >>> all the concerns jhb and I are considering now).
> >>> A simple way to do it would mean hard-coding file and line in a
> >>> witness table. While the file is ok, the line makes trouble, so we
> >>> should find an alternative way to do this. Otherwise we can
> >>> consider skipping checks for a whole function; this should not be
> >>> so difficult to achieve.
> >>>
> >>> I need to think more about this.
> >>
> >> What about a linker set that lists file regions (based on line
> >> numbers)? If you want to exclude a particular lock from WITNESS you
> >> can do something like this:
> >>     WITNESS_REGION_START(function)
> >>     lockmgr(...)
> >>     WITNESS_REGION_END
> >>
> >> WITNESS_REGION_START and WITNESS_REGION_END together create a
> >> region in the linker set, and witness can check whether a lock
> >> operation falls within that region. If it does, we can make it do
> >> something special by giving _START and/or _END a function pointer,
> >> or we can make it ignore the operation by passing NULL or something.
> >>
> >> You can safely use file & line numbers in this case. Something along
> >> those lines...
> >>
> >> Thoughts?
> >
> > Really, if I wanted to pollute consumers' code I would have used a
> > lot of simpler ideas.
>
> Where you see pollution, I see opportunity. In principle no code
> should have to use these region markers, so in the few cases where
> it's needed it not only stands out, but also allows any sized region,
> possibly covering multiple functions, to be marked.
> That's much less pollution and/or change than having to tag each and
> every lock operation.

Yes, but what I mean is that it is enough to do something like:
    WITNESS_SKIPNEXTCHECK();
    lockmgr(...)
and have the same result.

Before trying something like that, I want to explore another way of
doing things, as I don't like the idea of seeing consumers' code
"polluted" by WITNESS_* functions.

Is this clearer?

Attilio

-- 
Peace can only be achieved by understanding - A. Einstein
Received on Thu Feb 07 2008 - 18:33:00 UTC
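For concreteness, here is a minimal, hypothetical sketch of the "region" idea discussed in this thread. None of it is existing FreeBSD API: the struct witness_region layout, the witness_region_add() and witness_region_contains() helpers, and the line bounds used in main() are all assumptions. A real kernel patch would presumably have the WITNESS_REGION_START/END markers capture __FILE__ and __LINE__ at compile time and publish the descriptors through a linker set (sys/linker_set.h); the static array below only stands in for that so the lookup can be shown as a self-contained userland program.

    /*
     * Hypothetical sketch of the WITNESS region idea; nothing here is
     * existing FreeBSD API.  In a kernel implementation the region table
     * would be built by the linker from a linker set populated by the
     * proposed WITNESS_REGION_START/END markers.
     */
    #include <stdio.h>
    #include <string.h>

    struct witness_region {
            const char      *wr_file;       /* file containing the region */
            int              wr_start;      /* first source line of the region */
            int              wr_end;        /* last source line of the region */
    };

    /* Stand-in for a linker set: regions the markers would register. */
    static struct witness_region witness_regions[8];
    static int witness_nregions;

    /* Record a region; the markers would arrange this automatically. */
    static void
    witness_region_add(const char *file, int start, int end)
    {

            if (witness_nregions < 8) {
                    witness_regions[witness_nregions].wr_file = file;
                    witness_regions[witness_nregions].wr_start = start;
                    witness_regions[witness_nregions].wr_end = end;
                    witness_nregions++;
            }
    }

    /*
     * The check witness would perform on every lock operation: if the
     * acquisition site falls inside a marked region, skip the LOR check.
     */
    static int
    witness_region_contains(const char *file, int line)
    {
            int i;

            for (i = 0; i < witness_nregions; i++) {
                    if (strcmp(witness_regions[i].wr_file, file) == 0 &&
                        line >= witness_regions[i].wr_start &&
                        line <= witness_regions[i].wr_end)
                            return (1);
            }
            return (0);
    }

    int
    main(void)
    {

            /* Pretend a marked region covers lines 240-260 of devfs_vnops.c. */
            witness_region_add("fs/devfs/devfs_vnops.c", 240, 260);

            printf("line 250 skipped: %d\n",
                witness_region_contains("fs/devfs/devfs_vnops.c", 250));
            printf("line 300 skipped: %d\n",
                witness_region_contains("fs/devfs/devfs_vnops.c", 300));
            return (0);
    }

Because the markers would recompute file and line at every build, the recorded bounds track ordinary source edits on their own, which appears to be why Marcel argues that file & line numbers are safe to use here, in contrast to hard-coding them in a static witness table.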