Now that processor manufacturers have largely given up trying to squeeze more performance out of individual processors and have refocused their attention on providing us with more processors instead, the biggest gains in performance are to be had by using parallel techniques in our programs so as to make use of these extra cores. Parallel Haskell is aimed at providing access to multiple processors in a natural and robust way.
You might wonder whether the compiler could automatically parallelize programs for us. After all, it should be easier to do this in a purely functional language, where the only dependencies between computations are data dependencies, which are mostly perspicuous and thus readily analyzed. However, even in a purely functional language, automatic parallelization is thwarted by an age-old problem: To make the program faster, we have to gain more from parallelism than we lose due to the overhead of adding it, and compile-time analysis cannot make good judgments in this area. An alternative approach is to use runtime profiling to find good candidates for parallelization and to feed this information back into the compiler. Even this, however, has not been terribly successful in practice.
Fully automatic parallelization is still a pipe dream. However, the parallel programming models provided by Haskell do succeed in eliminating some mundane or error-prone aspects traditionally associated with parallel programming:
- Parallel programming in Haskell is deterministic: a parallel program always produces the same answer regardless of how many processors it runs on, so parallel programs can be debugged without actually running them in parallel.

- Parallel Haskell programs are high-level and declarative and do not explicitly deal with concepts like synchronization or communication. The programmer indicates where the parallelism is, and the details of actually running the program in parallel are left to the runtime system. This is both a blessing and a curse: a blessing because the fiddly, error-prone low-level details are handled for you, and a curse because being further removed from those details can make it harder to diagnose the problem when the expected speedup fails to materialize.
The main thing that the parallel Haskell programmer has to think about is partitioning: dividing up the problem into pieces that can be computed in parallel. Ideally, you want enough tasks to keep all the processors busy continuously. However, your efforts may be frustrated in two ways:

- Granularity: if the tasks are too small, the overhead of managing them outweighs any benefit from running them in parallel; if they are too large or too few, some processors may sit idle (a brief sketch of controlling granularity follows this list).

- Data dependencies: when one task depends on the result of another, the two must be performed sequentially, limiting the parallelism available.
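To make the granularity point concrete, here is a minimal, illustrative sketch using parListChunk from the parallel package's Control.Parallel.Strategies module. The workload function expensive and the chunk size of 100 are hypothetical choices for illustration, not recommendations.

    import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

    -- A hypothetical stand-in for some expensive per-element work.
    expensive :: Integer -> Integer
    expensive n = sum [1 .. n * 100]

    -- Evaluate the mapped list in parallel, 100 elements per task,
    -- so that each spark carries enough work to outweigh the
    -- overhead of creating and scheduling it.
    results :: [Integer]
    results = map expensive [1 .. 10000] `using` parListChunk 100 rdeepseq

    main :: IO ()
    main = print (sum results)

Tuning the chunk size trades task-management overhead against idle processors, which is exactly the granularity balance described above.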
In the following chapters, we will describe the various parallel programming models that Haskell provides:
- The Eval monad and Evaluation Strategies, which are suitable for expressing parallelism in Haskell programs that are not heavily numerical or array-based. These programming models are well established, and there are many good examples of using them to achieve parallelism. (A small sketch follows this list.)

- The Par monad, a more recent parallel programming model that also aims at parallelizing ordinary Haskell code but with a different trade-off: it affords the programmer more control in exchange for some of the conciseness and modularity of Strategies. (A corresponding sketch also follows the list.)
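As a first taste of the Eval monad, here is a minimal sketch; the slow fib is purely a hypothetical workload chosen to keep the cores busy. Build with ghc -O2 -threaded -rtsopts and run with +RTS -N2 to use two cores.

    import Control.Parallel.Strategies (rpar, rseq, runEval)

    -- A deliberately slow Fibonacci, a hypothetical stand-in for
    -- some expensive computation.
    fib :: Integer -> Integer
    fib n | n < 2     = n
          | otherwise = fib (n - 1) + fib (n - 2)

    -- rpar sparks its argument for evaluation in parallel, rseq
    -- evaluates its argument in the current thread, and the final
    -- rseq waits for the sparked result before runEval returns.
    twoFibs :: (Integer, Integer)
    twoFibs = runEval $ do
      a <- rpar (fib 35)
      b <- rseq (fib 34)
      _ <- rseq a
      return (a, b)

    main :: IO ()
    main = print twoFibs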
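And a corresponding sketch of the Par monad, assuming the monad-par package is available; here spawn forks a child computation and get waits for its result.

    import Control.Monad.Par (get, runPar, spawn)

    fib :: Integer -> Integer
    fib n | n < 2     = n
          | otherwise = fib (n - 1) + fib (n - 2)

    -- spawn forks a child computation and returns an IVar that
    -- will eventually hold its result; get blocks until the IVar
    -- is filled. The resulting dataflow graph is deterministic.
    twoFibs :: (Integer, Integer)
    twoFibs = runPar $ do
      i <- spawn (return (fib 35))
      j <- spawn (return (fib 34))
      a <- get i
      b <- get j
      return (a, b)

    main :: IO ()
    main = print twoFibs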
Parallelizing Haskell code can be a joyful experience: Adding a small annotation to your program can suddenly make it run several times faster on a multicore machine. It can also be a frustrating experience. As we’ll see over the course of the next few chapters, there are a number of pitfalls waiting to trap you. Some of these are Haskell-specific, and some are part and parcel of parallel programming in any language. Hopefully by the end you’ll have built up enough of an intuition for parallel programming that you’ll be able to achieve decent parallel speedups in your own code using the techniques covered.
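The kind of "small annotation" meant here is, for example, par from Control.Parallel. A minimal sketch, again with a hypothetical slow fib as the workload:

    import Control.Parallel (par, pseq)

    fib :: Integer -> Integer
    fib n | n < 2     = n
          | otherwise = fib (n - 1) + fib (n - 2)

    -- x `par` e sparks x for parallel evaluation and returns e;
    -- y `pseq` e forces y before e, keeping this thread busy with
    -- y while the spark for x runs on another core.
    parSum :: Integer
    parSum = x `par` (y `pseq` x + y)
      where
        x = fib 35
        y = fib 34

    main :: IO ()
    main = print parSum

The only difference from the sequential version is the annotation on parSum; remove it and the program still computes the same answer, just on one core.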
Keep in mind while reading this part of the book that obtaining reliable results with parallelism is inherently difficult because in today’s complex computing devices, performance depends on a vast number of interacting components. For this reason, the results I get from running the examples on my computers might differ somewhat from the results you get on your hardware. Hopefully the difference isn’t huge—if it is, that might indicate a problem in GHC that you should report. The important thing is to be aware that performance is fragile, especially where parallelism is concerned.