Glossary
Array
An arrangement of memory elements in one or more planes. A one-dimensional array is called a vector; a multi-dimensional array is called a matrix.
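As a small illustrative sketch (Python is our choice here, not the glossary's), a vector and a matrix can be represented with nested lists:

```python
# A one-dimensional array (vector) and a two-dimensional array (matrix),
# represented with plain Python lists purely for illustration.
vector = [1.0, 2.0, 3.0]             # one dimension: 3 elements
matrix = [[1.0, 2.0, 3.0],           # two dimensions: 2 rows x 3 columns
          [4.0, 5.0, 6.0]]

print(len(vector))                   # -> 3
print(len(matrix), len(matrix[0]))   # -> 2 3
```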
Compiler
A compiler is a computer program that translates one computer language (the source language) into another computer language (the target language). Read more about the Gedae compiler, a compiler built for multiprocessors.
CPU/Socket/Processor/Core
This varies, depending upon whom you talk to. In the past, a CPU (Central Processing Unit) was a singular execution component for a computer. Then, multiple CPUs were incorporated into a node. Then, individual CPUs were subdivided into multiple "cores", each being a unique execution unit. CPUs with multiple cores are sometimes called "sockets"; the usage is vendor dependent. The result is a node with multiple CPUs, each containing multiple cores. The nomenclature can be confusing.
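As a minimal illustration (our sketch, in Python), a program can at least query how many logical cores the operating system exposes, though standard APIs generally do not distinguish sockets from cores:

```python
import os

# os.cpu_count() reports the number of logical cores visible to the OS;
# it does not say how they are grouped into sockets or physical CPUs.
print("logical cores on this node:", os.cpu_count())
```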
Cell Broadband Engine
The Cell Broadband Engine (Cell/B.E.) is a microprocessor jointly developed by IBM, Toshiba, and Sony. The Cell architecture is intended to be scalable from hand-held devices to mainframe computers by utilizing parallel processing. Sony uses the chip in its PlayStation 3 game console.
Granularity
In parallel computing, granularity is a qualitative measure of the ratio of computation to communication (see the sketch after this list):
- Coarse: relatively large amounts of computational work are done between communication events
- Fine: relatively small amounts of computational work are done between communication events
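A toy sketch of the idea (our construction, in Python): the same total computation is performed in both runs, but the amount of work done between "communication" events differs.

```python
def communicate(partial_result):
    """Stand-in for a message send or synchronization event."""
    pass

def run(total_work, chunk_size):
    done = 0
    while done < total_work:
        work = min(chunk_size, total_work - done)
        partial = sum(i * i for i in range(work))  # the computation
        communicate(partial)                       # the communication event
        done += work

run(1_000_000, chunk_size=100_000)  # coarse-grained: 10 communications
run(1_000_000, chunk_size=100)      # fine-grained: 10,000 communications
```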
Language
An artificial language used to write instructions that can be translated into machine language and then executed by a computer. Read more about the Gedae language, built for multiprocessors.
Parallel Overhead
The amount of time required to coordinate parallel tasks, as opposed to doing useful work (a rough measurement sketch follows the list). Parallel overhead can include factors such as:
- Task start-up time
- Synchronizations
- Data communications
- Software overhead imposed by parallel compilers, libraries, tools, operating system, etc.
- Task termination time
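As a rough sketch of the idea (ours, using Python threads), time a batch of do-nothing tasks: since they perform no useful work, nearly all of the elapsed time is start-up, synchronization, and termination overhead.

```python
import threading
import time

def trivial_task():
    pass  # no useful work: elapsed time below is almost pure overhead

start = time.perf_counter()
threads = [threading.Thread(target=trivial_task) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
print(f"overhead for 100 do-nothing threads: {elapsed:.4f} seconds")
```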
Partitioning
The breaking down, or decomposition, of work into multiple tasks or threads. There are two basic ways to partition computational work among parallel tasks: domain decomposition and functional decomposition (a domain-decomposition sketch is shown below).
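A minimal domain-decomposition sketch (our own, in Python): the data is split into nearly equal contiguous chunks, one per task, and each task would apply the same operation to its own chunk.

```python
def domain_decompose(data, ntasks):
    """Split data into ntasks nearly equal, contiguous chunks."""
    base, extra = divmod(len(data), ntasks)
    chunks, start = [], 0
    for rank in range(ntasks):
        size = base + (1 if rank < extra else 0)
        chunks.append(data[start:start + size])
        start += size
    return chunks

data = list(range(10))
for rank, chunk in enumerate(domain_decompose(data, 3)):
    print(f"task {rank} owns {chunk}")
```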
Load Balancing
Load balancing refers to the practice of distributing work among tasks so that all tasks are kept busy all of the time. It can be considered a minimization of task idle time. Load balancing is important to parallel programs for performance reasons. For example, if all tasks are subject to a barrier synchronization point, the slowest task will determine the overall performance.
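A sketch of dynamic load balancing (ours, using a shared work queue in Python): tasks pull items as they finish, so faster tasks naturally process more items and none sits idle while work remains.

```python
import queue
import threading

work = queue.Queue()
for item in range(20):
    work.put(item)

def worker(rank, counts):
    while True:
        try:
            item = work.get_nowait()   # grab the next available item
        except queue.Empty:
            return                     # no work left; this task is done
        _ = item * item                # stand-in for real computation
        counts[rank] += 1

counts = [0] * 4
threads = [threading.Thread(target=worker, args=(r, counts)) for r in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("items processed per task:", counts)
```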
Multi-threading
Multitasking within a single program. It allows multiple threads of execution to take place concurrently within the same program, each thread processing a different transaction or message.
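A minimal sketch (ours, in Python) of two threads of execution running concurrently within one program, each handling a different message:

```python
import threading

def handle(message):
    # Each thread processes its own transaction or message.
    print(f"{threading.current_thread().name} processing {message!r}")

t1 = threading.Thread(target=handle, args=("transaction A",), name="thread-1")
t2 = threading.Thread(target=handle, args=("transaction B",), name="thread-2")
for t in (t1, t2):
    t.start()
for t in (t1, t2):
    t.join()
```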
Node
A standalone "computer in a box", usually comprising multiple CPUs/processors/cores. Nodes are networked together to form a supercomputer.
Scalability
Refers to a parallel system's (hardware and/or software) ability to demonstrate a proportionate increase in parallel speedup with the addition of more processors (a worked speedup example follows the list). Factors that contribute to scalability include:
- Hardware - particularly memory-CPU bandwidth and network communications
- Application algorithm
- Parallel overhead
- Characteristics of your specific application and coding
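To make "proportionate increase in parallel speedup" concrete, here is a small worked example. The formulas are the standard speedup S(p) = T(1)/T(p) and efficiency E(p) = S(p)/p; the timings are invented for illustration, not measurements.

```python
t_serial = 100.0                          # T(1): one-processor run time (s)
t_parallel = {2: 52.0, 4: 28.0, 8: 17.0}  # T(p): hypothetical measurements

for p, tp in t_parallel.items():
    speedup = t_serial / tp     # S(p) = T(1) / T(p)
    efficiency = speedup / p    # E(p) = S(p) / p; 1.0 is perfect scaling
    print(f"p={p}: speedup={speedup:.2f}, efficiency={efficiency:.2f}")
```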
Synchronization
The coordination of parallel tasks in real time, very often associated with communications. Often implemented by establishing a synchronization point within an application where a task may not proceed further until another task (or tasks) reaches the same or logically equivalent point. Synchronization usually involves waiting by at least one task, and can therefore cause a parallel application's wall clock execution time to increase.
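A sketch of a synchronization point (ours, using a Python barrier): no task passes the barrier until all have reached it, so the slowest task sets the pace while the others spend wall-clock time waiting.

```python
import random
import threading
import time

barrier = threading.Barrier(3)

def task(rank):
    time.sleep(random.uniform(0.0, 0.2))  # unequal amounts of "work"
    print(f"task {rank} reached the barrier")
    barrier.wait()                        # block until all 3 tasks arrive
    print(f"task {rank} proceeding")

threads = [threading.Thread(target=task, args=(r,)) for r in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```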
Thread
A portion of a program that can run independently of and concurrently with other portions of the program.