Re: proposed smp_rendezvous change

From: Andriy Gapon <avg@FreeBSD.org>
Date: Fri, 13 May 2011 21:52:16 +0300
on 13/05/2011 18:50 Max Laier said the following:
> On Friday 13 May 2011 11:28:33 Andriy Gapon wrote:
>> on 13/05/2011 17:41 Max Laier said the following:
>>> this ncpus isn't the one you are looking for.
>>
>> Thank you!
>>
>> Here's an updated patch:
> 
> Can you attach the patch, so I can apply it locally?  This code is really hard 
> to read without context.  Some more comments inline ...

Attached.

>>
>> Index: sys/kern/subr_smp.c
>> ===================================================================
>> --- sys/kern/subr_smp.c	(revision 221835)
>> +++ sys/kern/subr_smp.c	(working copy)
>> @@ -316,19 +316,14 @@
>>  	void (*local_action_func)(void*)   = smp_rv_action_func;
>>  	void (*local_teardown_func)(void*) = smp_rv_teardown_func;
>>
>> -	/* Ensure we have up-to-date values. */
>> -	atomic_add_acq_int(&smp_rv_waiters[0], 1);
>> -	while (smp_rv_waiters[0] < smp_rv_ncpus)
>> -		cpu_spinwait();
>> -
> 
> You really need this for architectures that need the memory barrier to ensure 
> consistency.  We also need to move the reads of smp_rv_* below this point to 
> provide a consistent view.

I thought that this would be handled automatically by the fact that the master CPU
sets smp_rv_waiters[0] using an atomic operation with release semantics.
I am not very proficient in these matters, though.
In any case, I fail to see why we need to require that all CPUs gather at this
point/condition.

That is, my point is that we don't start a new rendezvous until the previous one
has completely finished.  Then we set up the new rendezvous, finish the setup
with an operation with release semantics, and only then notify the target CPUs.
I can't see how the slave CPUs could observe stale values in the rendezvous
pseudo-object, but, OTOH, I am not very familiar with architectures that have
weaker memory-consistency rules than x86.

-- 
Andriy Gapon

Received on Fri May 13 2011 - 16:52:20 UTC