Parallel Programming: Announcement

As some of you know, I have been working on a book on parallel programming. My thought had been to complete it, then announce it. But I finally realized that it really never will be complete, at least not as long as people keep coming up with new parallel-programming ideas, which I most definitely hope will continue for a very long time.

So, here it is!

Comments

(Anonymous)
Jan. 5th, 2011 06:27 pm (UTC)
What, specifically, is the focus of the book?
Is this intended to be an introduction to parallel programming as a concept (using POSIX as an illustration), or more of an introduction to parallel programming as a practical endeavour?

If the former, there's really not a whole lot that needs to be added to the book. It's very clearly written and covers most of what anyone would need to understand parallelism. The only chapter I don't see is one on interconnects. This matters because interconnects define what will (and will not) work in the way of techniques. It's not that the interconnects are directly important, but they do place limits on those things that are.

On the other hand, if it's a book on practical parallel programming, you'd need a section covering the ideas in communication. "Conflicting Visions of the Future" is excellent as-is and might well be the correct place to discuss things like RDMA, but it doesn't feel right for discussing message passing versus networked inter-process communication.

I have absolutely no idea if you'd want to cover instruction-level parallelism (as per Cilk and Cilk++, but also an area a lot of early parallel research looked into). It's such a totally different beast from regular parallelism.

Bottom line, great book and I'd love to see if there's anything I could usefully contribute, but to avoid wasting your time or anyone else's, I'd want to know what would be interesting to you as the author-in-chief and editor-in-chief.
paulmck
Jan. 5th, 2011 07:34 pm (UTC)
Re: What, specifically, is the focus of the book?
An introduction to parallel programming as a practical endeavor, with emphasis on helping people understand how to adjust the overall design of their software so as to get the most parallelism benefit for the least pain, time, and trouble.

The focus at the moment is primarily on shared memory, because such systems are cheap, easy to set up, and readily available; because my own experience has been primarily with shared memory; and because the most likely audience is people using multi-core systems.

When you talk about interconnects, are you thinking in terms of the internal interconnects in shared-memory/multicore systems, or in terms of the message-passing interconnects used in large supercomputing clusters?

"Conflicting Visions of the Future" should be for topics where there is genuine uncertainty and conflict. Possible topics include limits to hardware and software scalability, multicore-computing fads from various groups, and what sorts of parallel-debug assists are required. That said, this book is not to be a marketing vehicle for either vendors or academia.

I of course welcome contributions! One question to ask yourself is "what area am I expert in that a lot of people need to know or will soon need to know?" Thoughts?
(Anonymous)
Jan. 5th, 2011 10:28 pm (UTC)
Re: What, specifically, is the focus of the book?
In terms of interconnects, I'm thinking buses (in the case of multicores and SMP) and Ethernet (for budget message-passing clusters, MOSIX and Kerrighed users, and remote calls, including RPC, CORBA, and Linux's TIPC). I wish I could include fast serial links, but the Transputer died in the 1990s and Intel's iWarp died soon after. Those are more Visions of the Past, which I consider a shame.

In other words, stuff that makes a system almost transparently scalable. Explicit high-level stuff like MPI-2 and Bulk Synchronous Parallel gets ugly, there are masses of tedious detail, and frankly most of the important stuff (avoiding deadlocks, avoiding simultaneous writes, etc.) is stuff you already cover. Some aspects of parallelism are universal.
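
(To make the "universal" point concrete: the standard deadlock-avoidance discipline is to acquire locks in a fixed global order, no matter what the interconnect underneath looks like. A minimal pthreads sketch, with a hypothetical account structure and transfer() function, locks assumed initialized with PTHREAD_MUTEX_INITIALIZER:)

    #include <pthread.h>

    struct account {
        pthread_mutex_t lock;
        long balance;
    };

    /* Take the two locks in address order, so that two threads
     * transferring between the same pair of accounts in opposite
     * directions cannot each hold one lock while waiting on the
     * other's. */
    static void transfer(struct account *from, struct account *to,
                         long amount)
    {
        struct account *first  = (from < to) ? from : to;
        struct account *second = (from < to) ? to : from;

        pthread_mutex_lock(&first->lock);
        pthread_mutex_lock(&second->lock);
        from->balance -= amount;
        to->balance   += amount;
        pthread_mutex_unlock(&second->lock);
        pthread_mutex_unlock(&first->lock);
    }

Any pair of threads touching the same two accounts then always contends on the same lock first, which is exactly the kind of reasoning that carries over unchanged from buses to clusters.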

In the case of Ethernet, for example, there's the question of how to pretend to have shared memory. (Distributed Shared Memory schemes tend to be rare and are often badly written.) There's also the question of whether you can use process migration systems like MOSIX or Kerrighed to do MIMD-style parallel processing more effectively than you could on a multi-core or SMP computer.

Also on Ethernet, you can very easily pass around a lot of data to a lot of machines simultaneously using Scalable Reliable Multicast, which, incidentally, MPI does not use: MPI iterates through the list of destinations in a collective call and sends messages sequentially. Implementations of SRM and NORM (NACK-Oriented Reliable Multicast) are widely available for most platforms, Windows and Linux included, so the approach is vendor-independent. They are not, however, significantly used in industry, although one would think that MISD-style parallelism would find the mechanism extremely valuable.
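
(For a sense of scale: the raw IP multicast that SRM and NORM layer their NACK-based reliability on top of takes only a few lines of socket code, and a single sendto() reaches every subscribed host. A minimal sender sketch, with a hypothetical group address and port, error handling omitted:)

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* 239.0.0.1:5000 is a hypothetical administratively-scoped
         * multicast group. */
        struct sockaddr_in group;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        const char *msg = "same payload to every listener";

        memset(&group, 0, sizeof(group));
        group.sin_family = AF_INET;
        group.sin_addr.s_addr = inet_addr("239.0.0.1");
        group.sin_port = htons(5000);

        /* One sendto() reaches every host that has joined the
         * group, in contrast to MPI's loop over destinations. */
        sendto(fd, msg, strlen(msg), 0,
               (struct sockaddr *)&group, sizeof(group));
        close(fd);
        return 0;
    }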

For many buses, the time it takes to switch a lane from one direction or target to another is absurdly high. The cost of working at such a fine level of detail is also absurdly high, since many parallel-processing languages (UPC, Erlang, etc.) are too high-level to let you do much tuning. The question is how to schedule communication so as to get the best use out of the system, and that's a hard problem.
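
(One interconnect-sensitive knob that C, unlike those higher-level languages, does expose is data placement. A sketch, assuming GCC extensions and 64-byte cache lines, of padding per-thread counters so that one thread's updates never force the coherence interconnect to bounce another thread's line:)

    #define NR_THREADS 4   /* hypothetical thread count */
    #define CACHE_LINE 64  /* assumed line size; common on x86 */

    /* Give each thread's counter its own cache line, so that
     * updates from different CPUs do not ping-pong a shared
     * line between cores (false sharing). */
    struct counter {
        unsigned long value;
        char pad[CACHE_LINE - sizeof(unsigned long)];
    } __attribute__((aligned(CACHE_LINE)));

    static struct counter per_thread_count[NR_THREADS];

Each thread increments only its own slot and a reader sums all the slots, trading a slightly slower read for contention-free updates.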

"What area am I expert in that a lot of people need to know or will soon need to know?" If I knew that, I'd be rich. :) I'm expert in plenty of areas, I frequently find use for that expertise, but anticipating which areas would be useful for others has always been a tough one for me.