Concurrency - New Frontier?

A major change is occurring in how computers increase their performance. For many years, processor manufacturers have improved the performance of their wares by making them run faster. This approach is starting to reach the end of its usefulness, however, partly due to limits like the speed at which electrons can be persuaded to travel and other hard-to-change physical properties of the world we live in.

Alternative ways to speed up computing exist, and one of these seems to be the New Way now being trodden. The chosen path involves having multiple processors which can execute different bits of code at the same time, enabling more work to be done in a given amount of time. Servers have used this approach for years by having multiple physical processors, of course, but it is now becoming mainstream on desktop computers because manufacturers have started to build more than one logical processor onto a single physical processor chip. Each logical processor is called a core. Both Intel and AMD have now been producing multi-core processors for long enough that they are common in desktop computers.

This change in how new chips deliver faster computing has a problem. Most applications written for desktop computers have not been built with multiple processors in mind and, as such, will not benefit from extra processors to anywhere near the extent that is theoretically possible. A program needs to be explicitly written to use more than one processor; a program which is not written in this manner will be confined to a single core and will not gain the benefits of parallel execution. As more of the effort to speed up chips goes into adding processors (for example, more cores on a single chip) rather than making each one faster, these programs will not see any improvement in their execution time.
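
As a rough sketch of what "explicitly written" means, here is a small example (I have used Java purely for illustration; the point is not about any particular language) which splits a sum over an array between two threads. The plain loop stays on one core no matter how many are available; the threaded version has to be told how to divide the work.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class ParallelSum {
        public static void main(String[] args) throws Exception {
            long[] data = new long[10_000_000];
            for (int i = 0; i < data.length; i++) data[i] = i;

            // A straightforward loop: correct, but uses only one core.
            long serial = 0;
            for (long v : data) serial += v;

            // To use a second core, the work has to be divided up explicitly.
            ExecutorService pool = Executors.newFixedThreadPool(2);
            int mid = data.length / 2;
            Future<Long> lower = pool.submit(() -> sum(data, 0, mid));
            Future<Long> upper = pool.submit(() -> sum(data, mid, data.length));
            long parallel = lower.get() + upper.get();
            pool.shutdown();

            System.out.println(serial + " == " + parallel);
        }

        static long sum(long[] data, int from, int to) {
            long total = 0;
            for (int i = from; i < to; i++) total += data[i];
            return total;
        }
    }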

One of the major reasons that many, if not most, applications are not programmed with multiple processors in mind is simply that it is much more difficult to do so. Visualising the way a program works when multiple parts of it can be executing at the same time is much harder than when things always happen one after another in a predictable order.
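
To make that concrete, here is a tiny sketch (Java again, my choice) of the kind of behaviour that makes concurrent programs hard to reason about: two threads incrementing a shared counter without any synchronisation will usually not produce the total you would predict, and the answer changes from run to run.

    public class LostUpdates {
        static int counter = 0;  // shared, unsynchronised state

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) {
                    counter++;  // read-modify-write, not atomic
                }
            };
            Thread a = new Thread(work);
            Thread b = new Thread(work);
            a.start();
            b.start();
            a.join();
            b.join();
            // You might expect 200000, but the two threads' increments
            // interleave and overwrite one another, so the printed value
            // is usually lower and varies between runs.
            System.out.println(counter);
        }
    }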

Programming for multiple processors is often termed concurrent programming. The tools currently available in mainstream programming to aid the programmer with the rather difficult task of concurrent programming are primitive in comparison to the advances which have been made in other areas. For this reason, I believe work to make it easier to write efficient and correct concurrent programs is one of the most important current research topics in computer science.
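
By way of example, the mainstream answer to the lost updates above is a lock; in Java that means a synchronized block around the shared counter (again a sketch of my own, not a reference to any particular framework). It works, but the programmer has to spot every piece of shared state, take the right lock around every access, and avoid deadlocks entirely by hand, which is the sort of low-level bookkeeping I have in mind when I say the tools are primitive.

    public class CountedUpdates {
        static int counter = 0;
        static final Object lock = new Object();

        static void increment() {
            synchronized (lock) {  // only one thread at a time may run this
                counter++;
            }
        }

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) increment();
            };
            Thread a = new Thread(work);
            Thread b = new Thread(work);
            a.start();
            b.start();
            a.join();
            b.join();
            System.out.println(counter);  // now reliably 200000
        }
    }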

To try to consolidate some of the reading I have done on the subject, I am going to write some posts here about the current state of tooling for concurrent programming and about some of the new approaches and technologies that should help with creating concurrent programs which take advantage of all your processor's, uhm, processors.
