I believe that the leaked memory is "freed" when the program exits, which avoids the need to track the leaked memory and also rules out use-after-free bugs.

On weaker memory models, I suspect that you would need a memory barrier just prior to storing the pointer to the object you are attempting to add. Dependency ordering would handle traversals to newly added data.
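For concreteness, here is a minimal C11 sketch of what that publication might look like; the structure layout and function names are my own invention, not the code from the talk. The release store supplies the barrier needed on weakly ordered systems, and the consume load on the traversal side relies on dependency ordering, as noted above.

#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical octree cell; field names are assumptions. */
struct cell {
	_Atomic(struct cell *) child[8];
	double mass, x, y, z;
};

/* Publish a fully initialized cell into a child slot.  The release
 * ordering is the "memory barrier just prior to storing the pointer":
 * all initialization of *new_cell is ordered before the pointer itself
 * becomes visible.  If two threads race to the same slot, one cell is
 * simply leaked, as discussed above. */
static void publish_child(struct cell *parent, int idx, struct cell *new_cell)
{
	atomic_store_explicit(&parent->child[idx], new_cell,
			      memory_order_release);
}

/* Traversal side: a dependent load (memory_order_consume) suffices to
 * see the contents of a newly added cell, so no explicit read-side
 * barrier is needed on most architectures. */
static struct cell *read_child(struct cell *parent, int idx)
{
	return atomic_load_explicit(&parent->child[idx], memory_order_consume);
}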

The point about non-uniform n-body problems is a good one, and came up during the talk. Given your specific example, I would add the young star last using single-threaded execution, thus guaranteeing that it gets added. This sort of strategy would work well for problems with a modest number of large masses and a huge number of insignificant masses. For example, to model the solar system, one might add the sun, planets, dwarf planets, moons, and large asteroids during single-threaded execution, but only after concurrently adding the other asteroids, the comets, the Kuiper-belt objects, and so on.
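A rough sketch of that two-phase strategy follows, with insert_body() standing in for whatever the actual tree-insertion routine is called; the names and the OpenMP parallelization are assumptions on my part.

#include <stddef.h>

struct body { double mass, x, y, z; };
struct cell;				/* tree node, details elided */

/* Stand-in for the real Barnes-Hut insertion routine. */
void insert_body(struct cell *root, struct body *b);

void build_tree(struct cell *root,
		struct body *minor, size_t n_minor,
		struct body *major, size_t n_major)
{
	/* Phase 1: the huge number of insignificant masses are added
	 * concurrently; losing an occasional one to the race is tolerable. */
	#pragma omp parallel for
	for (size_t i = 0; i < n_minor; i++)
		insert_body(root, &minor[i]);

	/* Phase 2: the sun, planets, dwarf planets, moons, and large
	 * asteroids are added single-threaded, guaranteeing that none
	 * of them can be dropped. */
	for (size_t i = 0; i < n_major; i++)
		insert_body(root, &major[i]);
}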

On x86, there was an earlier suggestion that the additions be done using the x86 xchg() instruction. If an attempt to add via xchg() returned non-NULL, the CPU would then add the displaced object back into the tree. I have no idea how much performance this would give up, but it would clearly avoid any leakage.
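In C11 terms, that suggestion might look something like the following sketch; again, the structure and names are mine, and atomic_exchange() plays the role of xchg().

#include <stdatomic.h>

struct cell {
	_Atomic(struct cell *) child[8];
	/* mass, position, and so on elided */
};

/* Swap the new cell into the slot.  A NULL return means a clean
 * insertion; a non-NULL return is the evicted cell, which the caller
 * must add back into the tree, so that nothing is ever leaked. */
static struct cell *exchange_child(struct cell *parent, int idx,
				   struct cell *new_cell)
{
	return atomic_exchange(&parent->child[idx], new_cell);
}

The caller would loop, re-inserting any displaced cell, at the cost of the extra atomic operations, which is presumably where the performance question comes in.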

Another point raised during the talk was the possibility of more efficient algorithms than Barnes-Hut, but on that question I must defer to someone with actual experience with such algorithms.
