
Earlier, we noted that parallel programming requires additional planning. Failure to properly carry out this additional planning can result in deadlocks, data races, livelocks, and the usual litany of perils of parallelism. It is important to note that many of these perils are global in nature, for example, a deadlock cycle might involve any or all portions of a shared-memory parallel program.

Deadlocks are not difficult to find: once the system deadlocks, it certainly isn't going anywhere, and straightforward instrumentation can usually trace out the deadlock cycle. Although repairing deadlocks can occasionally be extremely challenging, it is quite often a simple matter of adjusting lock-acquisition order.

Not much of a problem, that is, if you have access to the source code. If different pieces of the source code are owned by different organizations, the process of fixing the code can easily be subordinated to a finger-pointing exercise. After all, breaking the deadlock cycle at any point fixes the problem, so which of the players is going to blink first and prepare the fix? And when one of the players does provide a fix, how can anyone else be sure that it actually fixes the problem, as opposed to greatly reducing its probability of occurrence? Or even not so greatly reducing its probability of occurrence?

In short, given the current state of the shared-memory-parallel software engineering art, it seems best to place an address-space boundary between code from different proprietary players — and between proprietary code and FOSS code.

Of course, this latter is something that numerous GPL boosters, including many in the Linux kernel community, have been advocating for quite a few years. Those of you who dismissed their stance as irrational free-software religion just might want to think again. ;-)


Dec. 18th, 2009 01:44 am (UTC)
FOSS is certainly no panacea, but it can provide additional options in situations like this one.