
Parallel Programming: Heeding History

Given that parallel systems have been in existence for decades, it is worth asking why they have caused so much fuss over the past few years. Many argue that this is due to the end of Moore's-Law-induced frequency scaling, while others note that the business models of some corporations would suffer if people no longer felt the need to buy new computers every few years.

Although there is no doubt some validity to both of these arguments, the real reason is economics. Sure, parallel systems have been commercially available for some decades, but how many people could afford to splash out $1M in the 1980s? $100K in the early 1990s? $10K in the late 1990s? In contrast, how about a few hundred of 2009's sadly inflated US dollars?

As the price of parallel systems has plummeted, the number of situations where it makes economic sense to use them has increased exponentially. This in turn means that the demand for parallel software has also grown suddenly, outstripping the supply of developers with parallel-programming experience. Voilà, a parallel-software crisis.

But this is most definitely not the first software crisis. A very similar crisis arose in the late 1970s, with a very similar history. A computer cost millions of dollars in the 1960s, tens of thousands with the advent of the minicomputer in the early 1970s, and mere thousands with the advent of the microcomputer and the personal computer in the late 1970s and early 1980s. Then as now, as the price of computer systems plummeted, the number of situations where it made economic sense to use them increased exponentially. Then as now, the demand for computer software grew suddenly, outstripping the supply of programmers. Then as now, a software crisis was proclaimed.

Many new programming languages were put forward to deal with this crisis, and these can be categorized, not into the good, the bad, and the ugly, but rather into the good, the fad, and the ugly.

The programming languages in the “ugly” category are still with us, though the fraction of code written using them has decreased. We still use various flavors of shell, sed, awk, and C, as well as holdovers from earlier times, including FORTRAN and COBOL. I myself used the Bourne shell and C for production software in the early 1980s, and would never have guessed that I would still be using them more than a quarter century later. They are simply too ugly — and too useful — to die.

The programming languages in the “fad” category include darlings such as PASCAL, MODULA, Scheme, Eiffel, Smalltalk, and CLU. There are no doubt a few developers still playing with these toys, but these darlings never were able to deliver on the promises made by their proponents, and never managed to gain a large developer base. (And given that I designed, coded, and put a 50,000-line PASCAL program into production, I know whereof I speak.)

So what does it mean for a programming language to be “good”? Given that the goal is to solve a software crisis, the only reasonable measures of goodness are: (1) an increase in productivity of existing developers by orders of magnitude, (2) an increase in the fraction of the population who can use computers, again by orders of magnitude, or preferably (3) both.

So, what were the “good” programming languages that solved the Great Software Crisis of the late 1970s and early 1980s?

And what lessons should we draw from the Great Software Crisis to help us deal with the Great Parallel Software Crisis?

Comments

(Anonymous)
Dec. 8th, 2009 02:49 pm (UTC)
Excellent article (as always, but this one struck a chord). Thank you.
paulmck
Dec. 9th, 2009 05:30 am (UTC)
Glad you liked it!!!
And I hope that the chord it struck was interesting and pleasing.