RCU
Suppose that you have an initially empty RCU-protected hash table with per-bucket-locked updates. Suppose that one thread concurrently inserts items A and B in that order (but into different buckets) while a second thread concurrently looks up item A then item B—and while yet a third thread concurrently looks up these two items in the opposite order. The code for these three threads might look something like the following:

 1 void t1(void)
 2 {
 3   spin_lock(chain(A));
 4   insert(A);
 5   spin_unlock(chain(A));
 6   spin_lock(chain(B));
 7   insert(B);
 8   spin_unlock(chain(B));
 9 }
10 
11 void t2(void)
12 {
13   rcu_read_lock();
14   l1 = lookup(A);
15   rcu_read_unlock();
16   rcu_read_lock();
17   l2 = lookup(B);
18   rcu_read_unlock();
19 }
20 
21 void t3(void)
22 {
23   rcu_read_lock();
24   l3 = lookup(B);
25   rcu_read_unlock();
26   rcu_read_lock();
27   l4 = lookup(A);
28   rcu_read_unlock();
29 }

Because there is absolutely nothing guaranteeing the order of t2()'s and t3()'s lookups, it is quite possible that we could end up with l1==1&&l2==0&&l3==1&&l4==0, which would mean that t2() and t3() disagree on the order of insertion: t2() saw A but not B, while t3() saw B but not A. However, this outcome is excluded if we place a synchronize_rcu() between lines 5 and 6 above.
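
For concreteness, the modified t1() might look something like the following, with the grace period separating the two insertions (t2() and t3() are unchanged). Recall that synchronize_rcu() waits for all pre-existing RCU read-side critical sections to complete before returning:

 1 void t1(void)
 2 {
 3   spin_lock(chain(A));
 4   insert(A);
 5   spin_unlock(chain(A));
 6   synchronize_rcu();
 7   spin_lock(chain(B));
 8   insert(B);
 9   spin_unlock(chain(B));
10 }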

But how can synchronize_rcu() possibly exclude this outcome given that t2()'s and t3()'s lookups can still be reordered?


