Graphics Processing Unit (GPU)
A GPU is a processor specialised for graphical calculations and parallel computing.
General Purpose Computation on Graphics Processing Unit (GPGPU)
GPGPU is a programming approach, accessed through interfaces (APIs) such as CUDA or OpenCL, by which computations arising in numerical simulations can be executed on the graphics processor (GPU) of the graphics card. GPUs have a massively parallel architecture, i.e. thousands of cores, which is particularly suited to SIMD operations (single instruction, multiple data). Many substeps of a CAE simulation are SIMD operations, e.g. solving systems of linear equations.
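The data-parallel SIMD pattern can be illustrated with a minimal CPU-side sketch using NumPy array operations, where one instruction is applied to many elements at once; the small linear system here is only a stand-in for the systems arising in a CAE simulation step (GPU array libraries such as CuPy expose essentially the same interface):

```python
# Illustrative sketch of the SIMD pattern that GPUs exploit: one instruction
# applied to many data elements at once. NumPy runs this on the CPU; GPU
# libraries offer the same array-style interface.
import numpy as np

# Assemble a small, diagonally dominant (hence well-conditioned) system A x = b,
# a stand-in for the linear systems arising in a CAE simulation substep.
n = 4
A = np.eye(n) * 5.0 + np.ones((n, n))
b = np.arange(1.0, n + 1.0)

# Element-wise operations like this map directly onto SIMD lanes / GPU cores:
scaled_b = 2.0 * b                # one multiply issued over all elements

# Solving the system is the kind of substep that GPGPU offloads.
# By construction, A @ b is the right-hand side whose solution is b itself:
x = np.linalg.solve(A, A @ b)

print(np.allclose(x, b))          # -> True
```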
Transferring such stages to the GPU can boost performance significantly. For example, a current NVIDIA Tesla K40 GPU achieves a peak of 1.43 TFlops in double precision or 4.29 TFlops in single precision. Although a single CPU core is typically faster than a single GPU core, the far higher number of GPU cores yields a substantial reduction in runtime; for comparison, an NVIDIA Tesla K40 has a total of 2880 cores.
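The claim that a single CPU core outruns a single GPU core can be checked with a back-of-the-envelope calculation from the figures quoted above (K40: 1.43 TFlops double precision over 2880 cores; the 0.024 TFlops per CPU core is taken from the comparison below):

```python
# Back-of-the-envelope check using the figures quoted in the text.
gpu_peak_tflops = 1.43            # NVIDIA Tesla K40, double precision
gpu_cores = 2880
cpu_core_tflops = 0.024           # Intel Xeon E5-2690 v2, per core

per_gpu_core_tflops = gpu_peak_tflops / gpu_cores    # ~0.0005 TFlops per core
core_ratio = cpu_core_tflops / per_gpu_core_tflops

# One CPU core delivers roughly 48x the peak of a single GPU core --
# the GPU wins only through the sheer number of its cores.
print(round(core_ratio))          # -> 48
```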
This performance increase becomes particularly obvious when the theoretical peak performance of a CPU and a GPU are compared: an Intel Xeon E5-2690 v2 CPU delivers 0.024 TFlops per core, while an NVIDIA Tesla K40 GPU with 1.43 TFlops offers roughly sixty times that performance.
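The factor of sixty follows directly from the two peak figures:

```python
# Ratio of theoretical peak performance, using the values from the text.
cpu_tflops_per_core = 0.024   # Intel Xeon E5-2690 v2, per core
gpu_tflops = 1.43             # NVIDIA Tesla K40, double precision

speedup = gpu_tflops / cpu_tflops_per_core
print(round(speedup))          # -> 60
```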
Grid Computing
Resource-intensive numerical computing can quickly exhaust one's own computing resources. Grid computing refers to the additional use of distributed computing resources that are bundled in a network and act as a substitute for a supercomputer. Whenever necessary, computing power and storage capacity from other resources can thus be accessed via the Internet or a VPN.
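The underlying idea, splitting one large job into independent work units and farming them out to whatever resources are available, can be sketched locally; here threads and a hypothetical `simulate_chunk` function stand in for the remote machines a real grid would reach over the network:

```python
# Minimal local sketch of the grid-computing idea: independent work units are
# distributed across a pool of workers. Threads stand in for the remote
# machines a real grid would reach via the Internet or a VPN.
from concurrent.futures import ThreadPoolExecutor

def simulate_chunk(chunk):
    """Hypothetical work unit, e.g. one partition of a simulation domain."""
    return sum(x * x for x in chunk)

# Split one large job into independent chunks...
work = [range(i * 1000, (i + 1) * 1000) for i in range(8)]

# ...and let the pool of "grid nodes" process them in parallel.
with ThreadPoolExecutor(max_workers=4) as pool:
    partial_results = list(pool.map(simulate_chunk, work))

# Recombining the partial results reproduces the monolithic computation.
total = sum(partial_results)
print(total == sum(x * x for x in range(8000)))   # -> True
```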