On 5/17/11 4:03 AM, Andriy Gapon wrote:
> on 16/05/2011 23:09 John Baldwin said the following:
>> is probably just cut and pasted to match the other uses of values in
>> the smp_rv_waiters[] array.
>>
>> (atomic_add_acq_int() could spin on architectures where it is implemented
>> using compare-and-swap (e.g. sparc64) or locked-load conditional-store (e.g.
>> Alpha).)
>
> When you say "not strictly necessary", do you mean "not necessary"?
> If you do not mean that, then when could it be (non-strictly) necessary? :)
>
> Couldn't [Shouldn't] the whole:
>
>>>> /* Ensure we have up-to-date values. */
>>>> atomic_add_acq_int(&smp_rv_waiters[0], 1);
>>>> while (smp_rv_waiters[0] < smp_rv_ncpus)
>>>>         cpu_spinwait();
>
> be just replaced with:
>
> rmb();
>
> Or a proper MI function that does just a read memory barrier, if rmb() is
> not that.

No, you could replace it with:

atomic_add_acq_int(&smp_rv_waiters[0], 1);

The key being that atomic_add_acq_int() will block (either in hardware or
software) until it can safely perform the atomic operation.  That means
waiting until the write to set smp_rv_waiters[0] to 0 by the rendezvous
initiator is visible to the current CPU.

On some platforms a write by one CPU may not post instantly to other CPUs
(e.g. it may sit in a store buffer).  That is fine so long as an attempt to
update that value atomically (using cas or a conditional-store, etc.) fails.
For those platforms, the atomic(9) API is required to spin until it succeeds.

This is why the mtx code spins if it can't set MTX_CONTESTED, for example.

-- 
John Baldwin
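As an illustration of the mechanism described above, here is a minimal sketch
of an acquire-ordered atomic add built on a compare-and-swap loop using the
GCC/Clang __atomic builtins.  This is not the real machine-dependent atomic(9)
implementation, and the helper name is invented for the example; the point it
demonstrates is that the CAS keeps failing while this CPU still observes a
stale value, so the add cannot complete until the initiator's store to
smp_rv_waiters[0] has become visible:

/*
 * Illustrative sketch only -- not the actual sparc64 or Alpha code.
 * An acquire-ordered atomic add built on compare-and-swap: if the
 * value this CPU reads is stale (e.g. another CPU's store is still
 * in flight), the CAS fails and the loop retries, so the add only
 * completes once the up-to-date value is visible here.
 */
static inline unsigned int
example_add_acq_int(volatile unsigned int *p, unsigned int v)
{
	unsigned int old, new;

	do {
		/* Snapshot the value as this CPU currently sees it. */
		old = __atomic_load_n(p, __ATOMIC_RELAXED);
		new = old + v;
		/*
		 * Try to install old + v.  A stale or raced snapshot
		 * makes the compare-and-swap fail, forcing a retry.
		 */
	} while (!__atomic_compare_exchange_n(p, &old, new, 0,
	    __ATOMIC_ACQUIRE, __ATOMIC_RELAXED));

	/* Acquire: later loads/stores cannot be reordered above this. */
	return (new);
}

Under that model, the add-with-acquire by itself already forces the current
CPU to wait for and observe the rendezvous initiator's write, which is why it
can stand in for the quoted spin loop, whereas a plain read memory barrier
would not provide that wait.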