On Tuesday, January 21, 2014 9:25:27 pm Luigi Rizzo wrote:
> Looking at how selrecord() / selwakeup() and their Linux counterparts
> poll_wait() and wake_up() are used, I noticed the following:
>
> - Linux tends to call wake_up() unconditionally
>   at the beginning of the poll handler
>
> - FreeBSD tends to call selrecord() only when it detects a blocking
>   situation, and this also requires something (a lock or a retry;
>   the lock in selinfo is not good for this) to avoid the race between
>   the blocking_test..selrecord pair and the selwakeup().
>
> FreeBSD could call selrecord unconditionally (and save the extra
> lock/retry), but selrecord is expensive as it queues the thread on
> the struct selinfo, and this takes a lock.
>
> I wonder if we could use the same optimization as Linux:
> as soon as pollscan/selscan detects a non-blocking fd,
> make selrecord a no-op (which is probably as simple
> as setting SELTD_RESCAN; and since it only goes up
> we do not need to lock to check it).

Yes, I think this would work, and setting SELTD_RESCAN is a fine way to
do it.

> This way, we would pay at most for one extra selrecord per
> poll/select.
>
> Even more interesting, it could simplify the logic and locking
> in poll handlers.
>
> As an example, in sys/uipc_socket.c :: sopoll_generic()
> we could completely decouple the locks on so_snd and so_rcv.

Splitting the locks up is not really feasible because the so_rcv lock
doubles as SOCK_LOCK() and is needed even in the POLLOUT case, since
sowritable() needs it for so_state.  What you might be able to do instead
is be a bit fancier and only lock so_snd if writing is being polled for,
by making the lock conditional on the values in events.  This would avoid
locking so_snd when just polling for read:

Index: uipc_socket.c
===================================================================
--- uipc_socket.c       (revision 261029)
+++ uipc_socket.c       (working copy)
@@ -2912,7 +2912,12 @@ sopoll_generic(struct socket *so, int events, stru
 {
         int revents = 0;

-        SOCKBUF_LOCK(&so->so_snd);
+        if (events & (POLLOUT | POLLWRNORM))
+                SOCKBUF_LOCK(&so->so_snd);
+        /*
+         * The so_rcv lock doubles as SOCK_LOCK, so it is needed for
+         * all requests.
+         */
         SOCKBUF_LOCK(&so->so_rcv);
         if (events & (POLLIN | POLLRDNORM))
                 if (soreadabledata(so))
@@ -2947,7 +2952,8 @@ sopoll_generic(struct socket *so, int events, stru
         }

         SOCKBUF_UNLOCK(&so->so_rcv);
-        SOCKBUF_UNLOCK(&so->so_snd);
+        if (events & (POLLOUT | POLLWRNORM))
+                SOCKBUF_UNLOCK(&so->so_snd);

         return (revents);
 }

-- 
John Baldwin
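
For readers skimming the archive, a minimal userland sketch of the
optimization discussed above follows.  It assumes the simplest reading of
the proposal: once the scan has seen a ready descriptor the thread will
not sleep, so a flag that only ever rises can turn every later waiter
registration into a lock-free no-op, and at most one registration (on the
first ready fd) is wasted per poll/select.  The type and function names
(scan_state, record_waiter, poll_one) are invented for the illustration;
they are not the kernel's seltd/selrecord/selscan interfaces.

/*
 * Userland model of the "stop recording waiters once something is
 * ready" idea.  All names here are made up for this sketch.
 */
#include <stdbool.h>
#include <stdio.h>

struct scan_state {
        bool rescan;    /* models SELTD_RESCAN: only ever transitions up */
        int  records;   /* number of (expensive) waiter registrations */
};

/* Models selrecord(): becomes a no-op once the rescan flag is set. */
static void
record_waiter(struct scan_state *st, int fd)
{
        if (st->rescan)
                return;         /* flag only rises, so no lock is needed */
        st->records++;          /* stands in for queueing on struct selinfo */
        printf("registered waiter on fd %d\n", fd);
}

/* Models a Linux-style poll handler: register first, then test readiness. */
static bool
poll_one(struct scan_state *st, int fd, bool ready)
{
        record_waiter(st, fd);
        return (ready);
}

int
main(void)
{
        struct scan_state st = { false, 0 };
        bool ready[] = { false, true, false, true };    /* fake readiness */
        int nready = 0;

        for (int fd = 0; fd < 4; fd++) {
                if (poll_one(&st, fd, ready[fd]))
                        nready++;
                /*
                 * Once anything is ready the caller will return without
                 * sleeping, so further registrations are pointless;
                 * raise the flag (models setting SELTD_RESCAN).
                 */
                if (nready > 0)
                        st.rescan = true;
        }
        printf("%d fds ready, %d waiters registered\n", nready, st.records);
        return (0);
}

Built with any C99 compiler, this registers waiters for fd 0 and fd 1
only; fds 2 and 3 are scanned after the flag has risen and skip the
expensive path, which is the "at most one extra selrecord" behaviour the
thread describes.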