Types of parallelism

Parallelism can be defined as the concept of dividing a big problem into smaller ones and solving those smaller problems on multiple processors at the same time. Parallelism is sometimes confused with concurrency, even though not every form of parallel processing is concurrent; some types, such as bit-level parallelism, are not concurrent. Also, concurrency may involve working on different tasks, as in time-sharing multitasking, while parallelism means working on a specific task in parallel.

Bit-level parallelism

Bit-level parallelism divides the bits of an operation so that a value wider than the processor's word can be processed in smaller pieces. For example:

  • Doing an arithmetic operation on 16-bit numbers on an 8-bit processor requires dividing the operation into two 8-bit operations.

There are not many distinct examples of bit-level parallelism because it is very simple and always involves the same pattern of dividing operations on larger bit widths into operations on smaller ones.

The advantage of bit-level parallelism is that it is independent of the application: it runs at the processor level and does not need to be considered in the programming logic. The disadvantage of bit-level parallelism is that it is limited by the number of bits[1]. We can tell that a program will use bit-level parallelism when, for example, it uses a 64-bit type like int64 on a 32-bit PC.
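
To make this concrete, here is a minimal sketch in C# (with a helper name of my own choosing, not compiler output) that mimics what a 32-bit processor must do to add two 64-bit numbers: split each operand into two 32-bit halves, add the low halves, and carry into the high halves.

    using System;

    class BitLevelDemo
    {
        // Mimics a 64-bit addition performed as two 32-bit operations,
        // the way a 32-bit processor handles an int64 value.
        static ulong Add64On32(ulong a, ulong b)
        {
            uint aLow = (uint)a, aHigh = (uint)(a >> 32);
            uint bLow = (uint)b, bHigh = (uint)(b >> 32);

            uint low = aLow + bLow;              // first 32-bit operation
            uint carry = low < aLow ? 1u : 0u;   // did the low half overflow?
            uint high = aHigh + bHigh + carry;   // second 32-bit operation

            return ((ulong)high << 32) | low;
        }

        static void Main()
        {
            ulong x = 4294967295, y = 10;        // x is the largest 32-bit value
            Console.WriteLine(Add64On32(x, y) == x + y); // True
        }
    }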

Instruction-level parallelism

To better understand instruction-level parallelism, you should read about Flynn's taxonomy, specifically the multiple instruction, multiple data (MIMD) architecture, namely superscalar processors; it is also applicable to the multiple instruction, single data (MISD) architecture in VLIW processors. This type of parallelism happens mainly at the hardware level, and it includes any architecture that executes more than one instruction in a single CPU clock cycle.
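
Although this extraction happens in hardware, we can see in code when it is possible. In the illustrative sketch below (method names are mine, and whether the CPU actually overlaps the work is an assumption about the hardware), the first method's three operations are independent of each other, so a superscalar processor may issue them in the same clock cycle, while the second method forms a dependency chain that must run serially no matter how many execution units exist.

    static (int, int, int) Independent(int x, int y)
    {
        // No operation depends on another: a superscalar CPU
        // can execute all three in a single clock cycle.
        int a = x + y;
        int b = x * 2;
        int c = y - 3;
        return (a, b, c);
    }

    static int DependentChain(int x, int y)
    {
        // Each step consumes the previous result,
        // so these instructions cannot overlap.
        int d = x + y;
        int e = d * 2;
        int f = e - 3;
        return f;
    }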

Data parallelism

Data parallelism is the processing of multiple data units, such as the elements of an array, at the same time by applying the same operation to all of them. With data parallelism we start to see code in high-level languages. The simplest examples of data parallelism in the .NET Framework are the parallel for and for-each loops.
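
Here is a minimal sketch of the parallel for loop, using Parallel.For from System.Threading.Tasks: the same operation (squaring) is applied to every element of the array, and the runtime partitions the index range across the available cores.

    using System;
    using System.Threading.Tasks;

    class DataParallelDemo
    {
        static void Main()
        {
            int[] data = new int[1000];
            for (int i = 0; i < data.Length; i++) data[i] = i;

            // The same operation on every element; the iterations are
            // independent, so the runtime spreads them over the cores.
            Parallel.For(0, data.Length, i =>
            {
                data[i] = data[i] * data[i];
            });

            Console.WriteLine(data[999]); // 998001
        }
    }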

SIMD (single instruction, multiple data) is the architecture where data parallelism is implemented at the hardware level.
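
.NET also exposes SIMD directly through System.Numerics.Vector<T>; the sketch below assumes that API, and whether real SIMD instructions are emitted depends on the JIT and the hardware. Each + adds Vector<int>.Count elements at once when supported.

    using System;
    using System.Numerics;

    class SimdDemo
    {
        static void Main()
        {
            int[] a = { 1, 2, 3, 4, 5, 6, 7, 8 };
            int[] b = { 8, 7, 6, 5, 4, 3, 2, 1 };
            int[] sum = new int[a.Length];

            int lanes = Vector<int>.Count; // elements per SIMD register
            int i = 0;
            for (; i <= a.Length - lanes; i += lanes)
            {
                // One vector add processes `lanes` pairs at once.
                (new Vector<int>(a, i) + new Vector<int>(b, i)).CopyTo(sum, i);
            }
            for (; i < a.Length; i++) sum[i] = a[i] + b[i]; // scalar remainder

            Console.WriteLine(string.Join(",", sum)); // 9,9,9,9,9,9,9,9
        }
    }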

Task parallelism

Task parallelism is the mode of parallelism where the tasks are divided among the processors to be processed simultaneously. The .NET Framework example of task parallelism is the thread. Threads use context switching and time slicing, or they run on different processors in multiprocessor systems. Currently I have about 3,300 threads running on my PC and 8 cores, so we can say that time slicing is being done on my multiple processors to handle all of these threads.
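
A minimal sketch of task parallelism with System.Threading.Thread: two threads perform two different tasks (a sum and a count, both chosen here just for illustration), and on a multicore machine the operating system can schedule them on separate cores.

    using System;
    using System.Threading;

    class TaskParallelDemo
    {
        static void Main()
        {
            // Two *different* tasks running at the same time; this is
            // what distinguishes task parallelism from data parallelism.
            var sumThread = new Thread(() =>
            {
                long sum = 0;
                for (int i = 1; i <= 1000000; i++) sum += i;
                Console.WriteLine("Sum: " + sum);
            });

            var countThread = new Thread(() =>
            {
                int evens = 0;
                for (int i = 1; i <= 1000000; i++) if (i % 2 == 0) evens++;
                Console.WriteLine("Evens: " + evens);
            });

            sumThread.Start();
            countThread.Start();
            sumThread.Join();   // wait for both tasks to finish
            countThread.Join();
        }
    }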

How to know whether a computer supports parallel computing?

Every computer should support one or more types of parallelism. The simplest one is bit-level parallelism: every computer, even a 1-bit one, can support this type of parallelism when performing operations on larger bit widths. On the other hand, we find instruction-level parallelism only in more complicated systems, such as superscalar computers. Data and task parallelism are also very common in operating systems and applications.
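
As a quick practical check, here is a sketch using standard .NET APIs: Environment.ProcessorCount reports how many logical processors are available for data and task parallelism, and IntPtr.Size shows the word size that determines when bit-level parallelism kicks in.

    using System;

    class ParallelSupportCheck
    {
        static void Main()
        {
            // Cores available for data and task parallelism.
            Console.WriteLine("Logical processors: " + Environment.ProcessorCount);

            // Operations on types wider than this word size fall back on
            // bit-level parallelism (the splitting shown earlier).
            Console.WriteLine("Word size: " + (IntPtr.Size * 8) + " bits");
            Console.WriteLine("64-bit process: " + Environment.Is64BitProcess);
        }
    }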

__________________________________

[1] Ronald Sass and Andrew G. Schmidt, Embedded Systems Design with Platform FPGAs: Principles and Practices, Elsevier, 2010, Chapter 5, p. 250.
