Nov 9

Overview

I want to stress the difference between parallel programming and concurrent programming. Parallel programming is concerned with computing a deterministic result as quickly as possible; deterministic here means that the same result is computed every time the program runs. Concurrent programming, by contrast, is a technique for managing non-determinism: a concurrent program may execute differently and produce different results every time it runs, because it depends on external actors or agents.

More Details

Parallel and Distributed Computing

One simple method to implement parallelism is to use multiple physical computers communicating with each other over the network. Each computer runs a program designed using the features we have discussed so far this semester. See Distributed computing.
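
To make this concrete, here is a minimal sketch (my own example, not from the lecture) of two such programs written with Python's standard socket module. They are two separate ordinary sequential programs; the parallelism comes from running them on different machines at the same time. The port number 12345, the hostname, and the message are placeholder choices.

    # server.py -- waits for one connection, reads a message, sends a reply
    import socket

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.bind(("0.0.0.0", 12345))    # listen on port 12345 (arbitrary choice)
        server.listen()
        conn, addr = server.accept()       # block until another computer connects
        with conn:
            data = conn.recv(1024)         # receive up to 1024 bytes
            conn.sendall(b"got: " + data)  # reply to the other program

    # client.py -- run on a different computer (or the same one, for testing)
    import socket

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
        client.connect(("server.example.edu", 12345))  # placeholder hostname
        client.sendall(b"hello over the network")
        print(client.recv(1024))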

Concurrency

Two events are concurrent if we cannot tell by looking at the program which will happen first. Note that it is tempting to think of concurrent events as happening at the same time but this is incorrect.
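
As a small illustration (my own example), the following Python program starts two threads that each print a message. Nothing in the program determines which message appears first, so the two print events are concurrent.

    import threading

    def announce(name):
        print(name, "finished")   # which thread prints first can vary from run to run

    a = threading.Thread(target=announce, args=("task A",))
    b = threading.Thread(target=announce, args=("task B",))
    a.start()
    b.start()
    a.join()
    b.join()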

Implementation

We can implement distributed computing without any major new features: we just write programs that communicate with each other over the network. But for other types of parallel computing, and for concurrency, we need some assistance from the kernel and/or the CPU.

Threads

Until a few years ago, in popular languages like C, C++, Java, C#, Objective-C, Perl, Python, and others, the main way to express concurrency was a technique called threaded programming. Threaded programming is tremendously difficult and is easily the number one source of concurrency bugs (the most infamous being the race condition in the Therac-25 radiation therapy machine, which caused massive overdoses that killed several patients).
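
To see why threads are so error-prone, here is a sketch (my own example) of the classic "lost update" race: two threads each perform a read-modify-write on a shared counter without any locking, so one thread can overwrite the other's work and the final total is usually wrong, and different on every run.

    import threading

    counter = 0

    def worker():
        global counter
        for _ in range(1_000_000):
            value = counter      # read the shared variable
            value = value + 1    # modify the local copy
            counter = value      # write it back -- the other thread may have
                                 # updated counter in the meantime, and that
                                 # update is silently lost

    t1 = threading.Thread(target=worker)
    t2 = threading.Thread(target=worker)
    t1.start(); t2.start()
    t1.join(); t2.join()
    print(counter)   # we expect 2000000, but it usually prints far less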

Over the years, safety-critical code (like code run by NASA, medical devices, fighter jets, telecommunications for 911, etc.) has been written in programming languages like Ada, Erlang, and Haskell, which have much better ways of specifying concurrency than threads. In the last four or five years, the popular programming languages (including Python) have (finally) been adding language features which replace threaded programming, are easier to use, and still provide similar benefits. So as new programmers, I suggest you avoid threads if possible and learn these new techniques. There are too many to mention here, but in MCS 275 we spend a few weeks on this. If you are interested in threads, this guide is very good (and also highlights the complexity of threads).
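
For example, one of the newer styles available in Python is async/await with the standard asyncio module. Here is a minimal sketch (my own example) of two tasks that wait concurrently without any explicit threads; the task names and delays are arbitrary.

    import asyncio

    async def fetch(name, delay):
        await asyncio.sleep(delay)          # stand-in for waiting on a network request
        print(name, "done after", delay, "seconds")
        return name

    async def main():
        # Both tasks wait at the same time, so this takes about 2 seconds, not 3.
        results = await asyncio.gather(fetch("task A", 1), fetch("task B", 2))
        print(results)

    asyncio.run(main())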

Multicore Computing

As transistors have gotten smaller, chip designers like Intel and AMD (and others) have realized that they can pack more transistors into a given area, but that there is no way to make a single fetch/decode/execute circuit much faster: adding transistors to one fetch/decode/execute circuit actually slows it down, because the electrical signals need to travel further. Therefore, they simply put multiple fetch/decode/execute circuits into the same package. Each fetch/decode/execute circuit is designed like the one we discussed and is called a core.

We can take advantage of multiple cores in the following ways:
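
For example, here is a minimal sketch (my own, using Python's standard multiprocessing module) that spreads work across cores by running a separate worker process for each one; the function and inputs are placeholders.

    from multiprocessing import Pool

    def slow_square(n):
        # Stand-in for an expensive, deterministic computation.
        total = 0
        for _ in range(1_000_000):
            total += n * n
        return n * n

    if __name__ == "__main__":
        # Pool() starts one worker process per core by default; map() splits
        # the inputs among the workers, so the squares are computed in parallel.
        with Pool() as pool:
            print(pool.map(slow_square, range(10)))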

Exercises

Read the first chapter of The Little Book of Semaphores - 6 pages, available for free online. You don't have to turn anything in, and we won't check that you actually read it, but the content might appear on the final exam.