Concurrent Haskell is the collective name for the facilities that Haskell provides for programming with multiple threads of control. Unlike parallel programming, where the goal is to make the program run faster by using more CPUs, the goal in concurrent programming is usually to write a program with multiple interactions. These interactions might be with the user via a user interface of some kind, with other systems, or indeed between different subsystems within the same program. Concurrency allows us to write a program in which each of these interactions is described separately but all happen at the same time. As we shall see, concurrency is a powerful tool for structuring programs with multiple interactions.
In many application areas today, some kind of concurrency is a necessity. A typical user-facing application will have an interface that must remain responsive while the application is downloading data from the network or calculating some results. Often these applications may be interacting with multiple servers over the network at the same time; a web browser, for example, will have many concurrent connections open to the sites that the user is browsing, while all the time maintaining a responsive user interface. Server-side applications also need concurrency in order to manage multiple client interactions simultaneously.
Haskell takes the view that concurrency is a useful abstraction because it allows each interaction to be programmed separately, resulting in greater modularity. An abstraction that is too expensive will simply go unused, so GHC provides lightweight threads, allowing concurrency to be used in a wide range of applications without worrying about the overhead.
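To give a feel for how cheap GHC's threads are, here is a small sketch (my own example, not from the text) that forks ten thousand threads, each signalling completion through its own MVar; forkIO and MVar are introduced properly in the coming chapters.

```haskell
import Control.Concurrent (forkIO, newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forM_, replicateM)

main :: IO ()
main = do
  let n = 10000
  -- one MVar per thread, used purely as a completion signal
  mvars <- replicateM n newEmptyMVar
  -- forkIO creates a lightweight thread; forking 10,000 is routine
  forM_ mvars $ \m -> forkIO (putMVar m ())
  -- wait for every thread to finish
  mapM_ takeMVar mvars
  putStrLn ("spawned " ++ show n ++ " threads")
```

Each lightweight thread costs on the order of a kilobyte of memory, which is why this style scales to very large numbers of threads.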
Haskell’s philosophy is to provide a set of very simple but general features that you can use to build higher-level functionality. So while the built-in functionality may seem quite sparse, in practice it is general enough to implement elaborate abstractions. Furthermore, because these abstractions are not built in, you can make your own choices about which programming model to adopt, or to program down to the low-level interfaces for performance.
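As a taste of building on the low-level interfaces, the following sketch layers a simple mutual-exclusion lock on top of MVar; the Lock type and its operations are my own illustrative names, not a standard API.

```haskell
import Control.Concurrent.MVar

-- A lock is just an MVar holding (): full means unlocked.
newtype Lock = Lock (MVar ())

newLock :: IO Lock
newLock = Lock <$> newMVar ()

-- Acquire the lock, run the action, then release the lock.
-- (A production version would use bracket for exception safety.)
withLock :: Lock -> IO a -> IO a
withLock (Lock m) act = do
  takeMVar m
  r <- act
  putMVar m ()
  return r

main :: IO ()
main = do
  lock <- newLock
  withLock lock (putStrLn "in critical section")
```

A dozen lines of code over a single primitive yields a usable abstraction; this pattern of composing small pieces recurs throughout the book.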
Therefore, to learn Concurrent Haskell, we can start from the low-level interfaces and then explore how to combine them and build on top to create higher-level abstractions, which is exactly the approach taken in this book. The aim is that by building up the implementations of higher-level abstractions using the low-level features, the higher-level abstractions will be more accessible and less mysterious than if we had just described an API. Furthermore, by seeing examples of how to build higher-level abstractions, you should be able to go away and build your own variations or entirely new libraries.
Haskell does not take a stance on which concurrent programming model is best: actors, shared memory, and transactions are all supported, for example. (Conversely, Haskell does take a stance on parallel programming; we strongly recommend that you use one of the deterministic programming models from Part I for parallel programming.) Haskell provides all of these concurrent programming models and more—but this flexibility is a double-edged sword. The advantage is that you can choose from a wide range of tools and pick the one best suited to the task at hand, but the disadvantage is that it can be hard to decide which tool is best for the job. Hopefully by demonstrating a series of examples using each of the programming models that Haskell provides, this book will help you develop an intuition for which tool to pick for your own projects.
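To make the choice of model concrete, here is the same shared counter incremented first via shared mutable state (MVar) and then via a transaction (STM); both facilities are covered in detail later, and this sketch assumes the stm package that ships with GHC.

```haskell
import Control.Concurrent.MVar
import Control.Concurrent.STM

main :: IO ()
main = do
  -- shared-memory style: an MVar guards the counter
  mv <- newMVar (0 :: Int)
  modifyMVar_ mv (return . (+1))
  readMVar mv >>= print

  -- transactional style: the increment runs atomically
  tv <- newTVarIO (0 :: Int)
  atomically (modifyTVar' tv (+1))
  readTVarIO tv >>= print
```

Neither style is universally better: MVars are simple and fast for point-to-point communication, while STM composes more safely when several pieces of state must change together.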
In the following chapters we’re going on a tour of Concurrent Haskell, starting with the basics of threads and communication in Chapter 7 through Chapter 10, moving on to some higher-level abstractions in Chapter 11, and then we’ll look at how to build multithreaded network applications in Chapter 12. Chapter 13 deals with using Concurrent Haskell to achieve parallelism, and in Chapter 14 we look at writing distributed programs that run on multiple computers. Finally, Chapter 15 will present some techniques for debugging and performance-tuning and talk about the interaction between Concurrent Haskell and foreign code.