
Parallel Programming: Parallel Paranoia

It is all too easy to feel smug when remembering all the dismissals of parallelism, some as recent as five years ago. These dismissals portrayed parallelism as an aberration that would soon be swept into the dustbin of computing history by inexorable Moore's-Law increases in CPU frequency. It is easy to lampoon these dismissals as shortsighted or even wrongheaded, but doing so is counterproductive. As always, the reality is much more complex and much more instructive than any lampoonery, entertaining though the lampoonery might be.

But let me give you an example.

At a 1995 workshop, I was approached by a software architect asking for advice on parallelism. This guy was quite forward-thinking: even though he understood little about parallelism, he clearly understood that hardware-cost trends meant that he was going to have to deal with it sooner or later. Not many people can legitimately claim to have been ten years ahead of their fellows, but this guy was one of them.

So he asked me how best to go about parallelizing his application. After a bit of discussion, I learned that it looked roughly like this:

[Figure: multiple single-threaded application instances, one per user, each connected to a shared database]

Here, an instance of the application is spawned for each user in order to mediate that user's interaction with the database. I asked if the application's execution was a significant component of the overall response time, and learned that it was not. I therefore told him that the best way to handle parallel systems was to continue running single-threaded per-user instances, and to let the database deal with the shared-memory parallelism. He seemed both happy with and relieved by this answer.
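For concreteness, here is a minimal sketch of that process-per-user pattern, assuming a toy TCP front end and a hypothetical handle_user() function standing in for the real per-user database work; it illustrates the shape of the design rather than the architect's actual code:

/*
 * Minimal sketch of the process-per-user model: the parent accepts one
 * connection per user and forks a single-threaded child to mediate that
 * user's interaction with the database.  The handle_user() function is a
 * placeholder for the real per-user logic; all shared-memory parallelism
 * is left to the database engine.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static void handle_user(int fd)
{
	/* Placeholder: issue database queries on behalf of this one user. */
	const char greeting[] = "connected to a single-threaded instance\n";

	write(fd, greeting, sizeof(greeting) - 1);
}

int main(void)
{
	struct sockaddr_in addr;
	int listenfd = socket(AF_INET, SOCK_STREAM, 0);

	if (listenfd < 0) {
		perror("socket");
		exit(1);
	}
	signal(SIGCHLD, SIG_IGN);	/* Let the kernel reap exited children. */
	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	addr.sin_port = htons(9000);
	if (bind(listenfd, (struct sockaddr *)&addr, sizeof(addr)) != 0 ||
	    listen(listenfd, 16) != 0) {
		perror("bind/listen");
		exit(1);
	}
	for (;;) {
		int fd = accept(listenfd, NULL, NULL);

		if (fd < 0)
			continue;
		if (fork() == 0) {		/* One process per user... */
			handle_user(fd);	/* ...each of them single-threaded. */
			close(fd);
			exit(0);
		}
		close(fd);	/* Parent: the child now owns the connection. */
	}
}

The point is that each per-user instance stays single-threaded, so all of the shared-memory parallelism lives inside the database rather than in the application.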

Why happy and relieved? Keep in mind that in the great majority of cases, parallelism is a performance optimization, and is but one potential optimization of many. Furthermore, performance is but one of many features that are of interest to the user. So the choice of parallelism can be an unwise one, especially when you have no parallel-savvy developers on your project and when parallel machines are, as they were in 1995, a good order of magnitude more expensive than single-CPU machines. Adding parallelism to your project too soon might benefit a tiny fraction of your users, but the resulting deadlocks, race conditions, and other instability would greatly annoy your entire user base. Going parallel too soon could thus jeopardize your project's very existence. Therefore, simple Darwinian selection could easily generate reactions to parallelism ranging from strong distaste to absolute paranoia.

But then in the early part of this decade, the rules suddenly changed. Parallel systems are now quite cheap and readily available. So, should I give different advice to the same project today?

Regardless of the answer to this question, it is clear that some projects now need to lose their parallel paranoia if they are to thrive in our new multi-core world.

Comments

(Anonymous)
Jan. 27th, 2011 12:53 am (UTC)
Yes, you have missed something: all the theory, and most of the practice, on this. At least theoretically, 2 cores at 1 GHz are better for power consumption than 1 core at 2 GHz, assuming your tasks parallelize.
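For those wondering where that claim comes from, the usual back-of-the-envelope model (a rough sketch, not part of the comment above) assumes dynamic power P ≈ C·V²·f and that the required supply voltage scales roughly linearly with clock frequency, so per-core power scales roughly as the cube of frequency:

% Rough model: P ≈ C V^2 f with V roughly proportional to f, so P ∝ f^3 per core.
\[
  P_{\text{one core at } 2f} \;\propto\; (2f)^3 = 8f^3,
  \qquad
  P_{\text{two cores at } f} \;\propto\; 2 \cdot f^3 = 2f^3 .
\]

Under those admittedly rough assumptions, the two slower cores draw something like a quarter of the power of the single faster core, provided the workload really does parallelize across both.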