

I had the good fortune to attend a panel on parallel-programming education at the recent SC09 conference. To their credit, many of the panelists noted that parallel programming was not all that hard, and many of them took their more sequential-minded colleagues to task for telling students that parallel programming is difficult. Of course, many of them also said that the locking-plus-threads programming paradigm is difficult, never mind the fact that locking-plus-threads has been used very successfully in a great many software projects.

Nevertheless, this panel's message was a welcome change of pace from the many years of hand-wringing over parallel programming. And although I have given much thought to whether and why parallel programming might be difficult, this panel's message motivated me to step back and think on it some more.

There have been any number of people claiming that parallel programming is completely counter-intuitive, often with little or no supporting evidence. In contrast, we need to think carefully about this important issue, and the place to start is with sequential programming. Please note that it is easy to make a solid case for sequential programming being counter-intuitive. If you don't believe me, talk to people who took up programming and mastered it, but who nevertheless found it unappealing.

My discussions with such people uncovered the following major insults to their intuition:

  1. People expect intelligent beings, whether organic or inorganic, to have some degree of common sense. Despite the decades of research sacrificed at the altar of artificial intelligence, computers remain almost completely devoid of common sense.
  2. People also expect intelligent beings to have some understanding of their intent, which is called theory of mind. If anything, computers' lack of theory of mind is even more profound than their lack of common sense.
  3. People expect to be able to make a fragmentary plan and nevertheless have that plan achieve the desired results. Computers are famously unforgiving of fragmentary plans, though perhaps GPS units are moving in the right direction. (See the sketch just after this list.)
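
To make the third point concrete, here is a small sequential C sketch of my own devising (the function and its name are purely illustrative, not drawn from any real program). The plan is simply "count the words": a human executor would cope with unforeseen input as a matter of course, but the computer does exactly what the code says and nothing more.

    #include <stdio.h>

    /* The plan: "count the words in the string".  The plan is
     * fragmentary: it says nothing about what to do with a NULL
     * pointer, and the computer will not fill that gap with
     * common sense. */
    static int count_words(const char *s)
    {
        int n = 0;
        int in_word = 0;

        for (; *s != '\0'; s++) {
            if (*s != ' ' && !in_word)
                n++;
            in_word = (*s != ' ');
        }
        return n;
    }

    int main(void)
    {
        printf("%d\n", count_words("start hunting in a random direction"));  /* prints 6 */
        /* count_words(NULL) would crash: the computer executes the
         * fragmentary plan literally rather than revising it. */
        return 0;
    }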

The first two points should be uncontroversial. If you doubt them, consider the failures of the much-maligned Clippy and Microsoft Bob. These two products attempted to relate to the user as a person would, thus raising common-sense and theory-of-mind expectations that they proved completely incapable of meeting.

The reliance people cheerfully place on fragmentary plans deserves further discussion. This reliance is apparently due to the expectation that the person executing the plan will be intelligent, have common sense, and understand the intent behind the plan, especially in the real-world common case where the person making the plan and the person executing it are one and the same. In that case, the plan will be revised as a matter of course to account for unexpected events and obstacles. This approach worked quite well through much of human history, which is no surprise: far better to start hunting in a random direction than to starve to death while planning for all possible situations that might arise during the hunt.

Unfortunately, even the best human reflexes simply cannot keep up with a 5GHz CPU. Even if we imagine a hyper-caffeinated 5GHz superhero, there are many millions of computers to be kept up with. And so the modern microprocessor invalidates untold millennia of evolution, frustrating untold numbers of would-be computer professionals.

Please note that these three inappropriate intuitions have absolutely nothing to do with parallel processing. This should be no surprise, as the universe is, always has been, and always will be highly concurrent. Given that, at least as far as we know, human beings have spent their entire history living in this universe, shouldn't we expect concurrency to be intuitive?

Of course, concurrency being intuitive does not necessarily mean that concurrent programming is intuitive. After all, the typical American-football player must deal with 21 other players on the field, the ball, and a few coaches and umpires, which should be excellent concurrency training. And perhaps it is excellent training, but I know of only one football player who went on to be a parallel programmer. Then again, he is an extremely talented and capable parallel programmer, so perhaps the generation being raised by soccer moms will show us old guys a thing or three about concurrency.

All that aside, the usual examples of parallel-programming difficulties (deadlocks, race conditions, memory misordering, performance and scalability issues) are really examples of item #3 above: failure to plan properly. In other words, parallel programming does not add new types of counter-intuitive behavior, but rather exacerbates the counter-intuitive behavior already present in sequential programming. One of the parallel-education panelists at the recent SC09 conference claimed that parallel programming represented only about 5% additional difficulty over and above that of sequential programming, and this line of reasoning certainly supports that claim.
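
To see how such difficulties reduce to planning failures, consider the following minimal pthreads sketch (my own, deliberately buggy, with illustrative names). The plan is "each thread adds its share to the counter", but the plan is fragmentary: it says nothing about how concurrent updates are to be coordinated.

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 2
    #define NLOOPS   1000000

    long counter = 0;  /* shared, and fatally unprotected */

    void *incrementer(void *arg)
    {
        for (int i = 0; i < NLOOPS; i++)
            counter++;  /* load, add, store: interleavings lose updates */
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];

        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, incrementer, NULL);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);

        /* Sequential intuition expects 2000000, but the printed
         * value is usually smaller, and varies from run to run. */
        printf("counter = %ld\n", counter);
        return 0;
    }

The fix is precisely the missing planning step: protect the increment with a pthread_mutex_t, or use an atomic operation. The resulting race is not some new species of counter-intuitive behavior, just the familiar fragmentary-plan failure executed millions of times per second.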

Nevertheless, 5% of the effort required to truly master sequential programming is a substantial undertaking, so the remainder of this series of blog posts will examine exactly what makes parallel programming such a challenge, and how to surmount it.

Comments

paulmck
Jan. 4th, 2010 05:15 pm (UTC)
Debugging has always been the hard part...
And indeed the difficulty of debugging single-threaded programs was held up as an example of why we could not expect reliable software to be produced, at least not unless we used whatever the speaker was pushing. This was a common position put forth in the 1970s and early 1980s with respect to loop termination, memory allocation/freeing, and much else besides.

Of course, cynics would argue that these guys were correct, that we still don't have truly reliable software. But this would apply to single-threaded and parallel software alike.

That said, debugging parallel programs is in my opinion indeed somewhat more difficult than debugging single-threaded programs. The industry-wide state of the software-validation and tooling arts has not quite caught up to parallelism, but much progress is being made. More on that later.