Glossary

The power to question is the basis
of all human progress.

Indira Gandhi

Are you an engineer who can't make sense of IT jargon? Are you new here and wondering what all those abbreviations mean? Have you been simulating the most beautiful cases on your desktop for years, but still don't know what CAE as a Service is supposed to be? Our glossary may have the right answer for you! Are you missing certain terms? Please help us and send us your suggestions!

A

Abaqus

Abaqus is a software suite for finite element analysis and computer-aided engineering, originally released in 1978.

ANSYS

ANSYS, Inc. is a developer of engineering simulation software; several of its products are described below.

ANSYS CFX

ANSYS CFX software is a high-performance, general-purpose fluid dynamics program that engineers have applied to solve wide-ranging fluid flow problems for over 20 years.

ANSYS Fluent

ANSYS Fluent is a CFD software tool to model flow, turbulence, heat transfer, and reactions for industrial purposes.

ANSYS Mechanical

ANSYS Mechanical is FEA software for stress, thermal, modal, fatigue and nonlinear structural simulations.

ANSYS Workbench

ANSYS Workbench functions as the main simulation environment and simplifies the setup of computational problems.

B

Bare-Metal-Server

Bare metal is our preferred method of making resources available to customers. Each CPU 24/7 client is assigned a customised simulation environment with access to exclusively dedicated physical servers. In the interests of security, performance and flexibility, and in order to make the appropriate customer-specific adaptations, we do not use any virtualisation. With us, no data is lost.

Broadwell

Broadwell is Intel’s codename for the 14-nanometre die shrink of its Haswell microarchitecture. It offers around 5% better IPC than its predecessor Haswell, together with improved thermal behaviour.

C

Computer Aided Design (CAD)

CAD refers to the use of computer systems to aid in the creation, modification, analysis or optimisation of a design. Today CAD applications are complex expert systems for the design and construction of technical solutions. Almost every CAD application has meanwhile added a third dimension, so CAD also describes the development of virtual models of three-dimensional objects by means of a computer.

Computer Aided Engineering (CAE)

CAE comprises all computer-based working processes used in engineering. Areas covered include, amongst others, stress analysis of components and assemblies using finite element analysis (FEA); thermal and fluid flow analysis using computational fluid dynamics (CFD); multibody dynamics (MBD); and kinematics. In general, there are three phases in any computer-aided engineering task:

  • Pre-processing – defining the model and the environmental factors to be applied to it;
  • Analysis – solving the model, usually performed on high-powered computers;
  • Post-processing – visualising the results.

CAE as a Service (CAEaaS)

https://www.cpu-24-7.com/cae/cae-as-a-service/

Cluster Computing

Cluster computing is a form of distributed computing: a virtual supercomputer is made up of a cluster of loosely coupled computers. To achieve the desired computing performance, individual servers are interconnected. Cluster computing is used in areas that require high-performance computing to solve CPU-intensive problems, such as the various calculations in computer-aided engineering (CAE).

The individual cluster computers are controlled and supplied with new tasks by one central head node. Software running on each cluster computer solves a certain part of the task and sends the result back to the head node. The head node’s software divides a big task into a number of subtasks and reassembles all partial results after the calculation. In this way a high-performance computer system is built from many individual components, with symmetric multiprocessing (SMP) within each node contributing to the overall computing power.
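The divide–scatter–gather pattern described above can be sketched on a single machine, with a thread pool standing in for the compute nodes (purely illustrative code; a real cluster distributes the subtasks over a network, and all names here are made up):

```python
# Head-node pattern: split one big task into subtasks, hand them to
# "compute nodes" (here: worker threads), then reassemble the results.
from concurrent.futures import ThreadPoolExecutor

def solve_subtask(chunk):
    # Each "node" solves its part of the task (a toy sum of squares).
    return sum(x * x for x in chunk)

def head_node(task, n_workers=4):
    # Divide the big task into subtasks ...
    chunks = [task[i::n_workers] for i in range(n_workers)]
    # ... scatter them to the workers ...
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(solve_subtask, chunks)
    # ... and reassemble the partial results.
    return sum(partials)

print(head_node(list(range(100))))  # 328350, the sum of squares 0..99
```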

CD-adapco

CD-adapco is a global engineering simulation company with a unique vision for Multidisciplinary Design eXploration (MDX). Various simulation tools, led by the flagship product STAR-CCM+®, allow customers to discover better designs, faster. CD-adapco is a Siemens business.

Computational Fluid Dynamics (CFD)

CFD is a branch of fluid mechanics that uses numerical analysis and algorithms to solve and analyze problems that involve fluid flows. Computers are used to perform the calculations required to simulate the interaction of liquids and gases with surfaces defined by boundary conditions. CFD is a cost-effective alternative to experiments in wind tunnels or water channels.

COMSOL

COMSOL was founded in Stockholm, Sweden in 1986. It now provides simulation software for product design and research through a worldwide network of offices and distributors. Its flagship products are software environments for modeling and simulating any physics-based system and for building applications, which can be used at any stage of the product development cycle.

Cores vs. CPUs vs. Nodes

Cores are the individual processor cores of a CPU. Each core mostly has its own resources (including level 1 and level 2 caches) and is logically independent of the other cores. Consequently each core can operate separately from the others and can, for example, run its own part of a simulation. Normally a core executes the consecutive cycles of one process at a time, without sacrificing performance.

The CPU is the actual chip. There are always two CPUs in our compute nodes, which are therefore built on dual-socket mainboards. The CPU provides the individual cores as well as the shared level 3 (L3) cache, through which the cores can exchange data.

A node is the actual “computer” or server, which is mainly defined by the mainboard. A CPU 24/7 node always has two CPUs, each with between 6 and 14 cores, making a total of between 12 and 28 cores.

CPU Socket

The socket is the mount on the system’s motherboard that houses the CPU and provides the electrical contacts to it.

D

Dassault Systems

Dassault Systèmes is a global software development business headquartered in France. It specialises in 3D design and product lifecycle management (PLM) software.

Data Transfer

In discussions with customers we are often asked about the possibilities for data transfer. Firstly we must make it clear: the transfer of data between your terminal and our resources is at all times authenticated and encrypted, without exception, from start to finish. Well-known standard protocols such as Secure Shell (SSH), Secure Copy (SCP) or FTP over TLS (FTPS) are used for the transfer.

The advantage of SSH is that the protocol can also be used for accessing clusters via the terminal. As for the performance of the data transfers, we achieve transfer speeds of up to 10 Gbit/s over fast symmetrical links. Remember, you can test the transfer rate free of charge, in particular to rule out any ’last mile’ problems. In individual cases encrypted hard drives can also be dispatched.

Digital Prototyping

Digital prototyping refers to the development of prototypes with the help of virtual 3D computer models and simulations.

E

E5-2600

CPU 24/7 uses the latest Intel Xeon E5-2600 series of processors. In contrast to other processors with large numbers of cores, such as the AMD Opteron, this CPU series provides top-level per-core performance at 3.0 GHz (E5-2690 v2) together with very good energy efficiency.

Ethernet

Ethernet comprises protocols and hardware for communication within clusters. In contrast to InfiniBand, Ethernet is characterised by a significantly higher latency, but it now reaches bandwidths of up to 100 Gbit/s.

F

Finite Element Analysis (FEA)

FEA is a numerical solution method to determine how an object reacts to various physical circumstances.

Finite Element Method (FEM)

The finite element method (FEM) is a numerical technique for finding approximate solutions to boundary value problems for partial differential equations. It is also referred to as finite element analysis (FEA). FEM subdivides a large problem into smaller, simpler parts, called finite elements.

The simple equations that model these finite elements are then assembled into a larger system of equations that models the entire problem. FEM then uses variational methods from the calculus of variations to approximate a solution by minimizing an associated error function. FEM is best understood from its practical application, known as finite element analysis (FEA).
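The idea is easiest to see in one dimension. The sketch below (a minimal illustration in Python, not taken from any commercial FEA package) solves -u'' = 1 on [0, 1] with u(0) = u(1) = 0 using linear elements: the interval is subdivided into elements, the element contributions are assembled into a tridiagonal system, and the system is solved directly.

```python
# Minimal 1-D finite element sketch: solve -u''(x) = 1 on [0, 1]
# with u(0) = u(1) = 0, using n linear elements on a uniform mesh.

def fem_poisson_1d(n):
    h = 1.0 / n                        # element size
    # Assembled tridiagonal stiffness matrix and load vector
    # for the n - 1 interior nodes.
    diag = [2.0 / h] * (n - 1)
    off = [-1.0 / h] * (n - 2)         # sub- and super-diagonal (symmetric)
    rhs = [h] * (n - 1)                # load for f(x) = 1
    # Thomas algorithm: forward elimination ...
    for i in range(1, n - 1):
        m = off[i - 1] / diag[i - 1]
        diag[i] -= m * off[i - 1]
        rhs[i] -= m * rhs[i - 1]
    # ... and back substitution.
    u = [0.0] * (n - 1)
    u[-1] = rhs[-1] / diag[-1]
    for i in range(n - 3, -1, -1):
        u[i] = (rhs[i] - off[i] * u[i + 1]) / diag[i]
    return u                           # nodal values at x = h, 2h, ...

nodal = fem_poisson_1d(8)
print(nodal[3])   # value at x = 0.5
```

For this model problem the linear-element solution reproduces the exact solution u(x) = x(1 - x)/2 at the nodes, e.g. u(0.5) = 0.125, which makes the sketch easy to check.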

Flops

FLOPS (floating-point operations per second) are a unit that indicates the performance capability of a computer. Theoretical FLOPS indicate the peak performance of an individual compute node in an HPC cluster; the decisive factor is the number of floating-point operations per second per core. Some advice: if you want to compare costs on a project basis, take the number of TFLOPS-hours as the coefficient. The price per TFLOPS-hour relates the performance of the servers to their costs.
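The arithmetic behind both figures can be sketched in a few lines (the 8 FLOPs per cycle per core is an assumed value for an AVX-capable core, and the price is a made-up example; check your CPU's datasheet and your actual rate):

```python
# Theoretical peak performance of a compute node, and the TFLOPS-hour
# cost coefficient suggested above. All inputs are illustrative.

def peak_tflops(cores, clock_ghz, flops_per_cycle):
    # cores x GHz x FLOPs/cycle gives GFLOPS; divide by 1000 for TFLOPS.
    return cores * clock_ghz * flops_per_cycle / 1000.0

def price_per_tflops_hour(price_per_node_hour, node_tflops):
    return price_per_node_hour / node_tflops

# A dual-socket node with 2 x 10 cores at 3.0 GHz, assuming 8 double-
# precision FLOPs per cycle per core:
node_peak = peak_tflops(cores=20, clock_ghz=3.0, flops_per_cycle=8)
print(node_peak)  # 0.48 TFLOPS

# With a hypothetical rate of 2.40 EUR per node-hour:
print(price_per_tflops_hour(2.40, node_peak))  # 5.0 EUR per TFLOPS-hour
```

Note that 3.0 GHz x 8 FLOPs/cycle = 0.024 TFLOPS per core, consistent with the per-core figure quoted in the GPGPU entry.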

Fluid Mechanics

Fluid mechanics is the branch of physics that studies the mechanics of fluids (liquids, gases, and plasmas) and the forces on them. Fluid mechanics has a wide range of applications, including mechanical engineering, chemical engineering, geophysics, astrophysics, and biology. Fluid mechanics can be divided into fluid statics, the study of fluids at rest; and fluid dynamics, the study of the effect of forces on fluid motion.

Fluid mechanics, especially fluid dynamics, is an active field of research with many problems that are partly or wholly unsolved. Fluid mechanics can be mathematically complex, and can best be solved by numerical methods, typically using computers. A modern discipline, called computational fluid dynamics (CFD), is devoted to this approach to solving fluid mechanics problems.

FTP

FTP is a standard network protocol used to transfer computer files between a client and server on a computer network.

FTPS

FTPS is an extension to the commonly used File Transfer Protocol (FTP) that adds support for the Transport Layer Security (TLS) and the Secure Sockets Layer (SSL) cryptographic protocols.

G

Graphics Processing Unit (GPU)

A GPU is a graphics processor specialised for graphical calculations and, increasingly, general-purpose computing.

General Purpose Computation on Graphics Processing Unit (GPGPU)

GPGPU is the technique of executing computations, such as those occurring in numerical simulations, on the graphics processor (GPU) of a graphics card, typically via programming interfaces (APIs) such as CUDA or OpenCL. GPUs have a massively parallel architecture, i.e. thousands of cores are used, especially for SIMD operations (single instruction, multiple data). Many substeps in a CAE simulation use SIMD operations, e.g. when solving systems of linear equations.

Transferring such stages to GPUs enables performance to be boosted significantly. For example, a current NVIDIA Tesla K40 GPU can achieve a maximum of 1.43 TFLOPS (double precision) or 4.29 TFLOPS (single precision). Although a single CPU core is faster than a single GPU core, the significantly higher number of GPU cores results in an additional acceleration of the runtime: an NVIDIA Tesla K40 GPU has a total of 2880 cores.

This performance increase becomes particularly obvious when the theoretical performance of a CPU and a GPU are compared: an Intel Xeon E5-2690 v2 CPU delivers 0.024 TFLOPS per core, while an NVIDIA Tesla K40 GPU with 1.43 TFLOPS offers roughly sixty times that performance.
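The "sixty times" figure follows directly from the numbers quoted above:

```python
# Per-core CPU performance vs whole-GPU performance (figures from the text).
cpu_core_tflops = 0.024   # one Intel Xeon E5-2690 v2 core, double precision
gpu_tflops = 1.43         # NVIDIA Tesla K40, double precision

speedup = gpu_tflops / cpu_core_tflops
print(round(speedup))     # 60, the "sixty times" quoted above
```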

Grid Computing

When it comes to resource-intensive numerical computing, one’s own computing resources may quickly be exhausted. Grid computing means the additional use of distributed computing resources that are bundled in a network and act as a substitute for a supercomputer. Hence, whenever necessary, computing power and storage capacity from other resources can be used via the internet or a VPN.

H

Haswell

Haswell is an Intel processor microarchitecture, manufactured on a 22 nm process. Haswell is the predecessor of Broadwell.

Heat Transfer

Heat transfer has broad application to the functioning of numerous devices and systems. Heat-transfer principles may be used to preserve, increase, or decrease temperature in a wide variety of circumstances. Heat transfer methods are used in numerous disciplines, such as automotive engineering, thermal management of electronic devices and systems, climate control, insulation, materials processing, and power station engineering.

High Performance Computing (HPC)

High-performance computing (HPC) is the use of parallel processing for running advanced application programs efficiently and quickly. The term applies especially to systems that function above a teraflop, i.e. 10^12 floating-point operations per second.

HPC is becoming more important in commercial and scientific computing as a means of calculating, modelling and simulating complex systems and of processing large amounts of data. Typical applications can be found in CAE, e.g. crash tests or CFD, as well as in meteorology, astrophysics, biology, quantum chemistry and genetics.

HPC Cluster

HPC clusters in a CPU 24/7 high security computing centre usually consist of one or more head nodes and several compute nodes, which are freely scalable according to the application and specific requirements.

The head node is a server in the HPC cluster that distributes the individual calculations to the compute nodes with the aid of software. Head nodes may also be referred to as access servers, because they are usually also used for remote visualisation.

The compute nodes are servers in the HPC cluster with high performance CPUs, on which the computations or jobs are carried out. They are also equipped with a large memory and are connected to other nodes by means of a high performance InfiniBand network.

The head and compute nodes are preceded by one or more security access gateways and adaptive firewall appliances, such as VPN. A high performance, parallel file system with data tiering, several replication levels and a native InfiniBand connection form the basis for the HPC cluster. The HPC cluster is completed by management units, consisting of a choice of a licence server, GPU server or queuing system, as well as many other items.

Please note that CPU 24/7 HPC clusters are not based on a shared IT infrastructure and consequently their use is confined to the specific client. Customers and their data will be isolated by completely separate systems. Do you require other individual components? No problem! We will provide the basic framework and you supply us with the details.

HPC On Demand

HPC On Demand is a means for companies to temporarily access scalable HPC resources that can be activated and intensively used as burst capacities for CAE projects. For users the acquisition of comparatively cost-intensive hardware and specific software solutions represents a large investment, tying up capital, expertise and manpower in areas outside those of the company’s main business activities.

CPU 24/7 offers many solutions for meeting HPC On Demand requirements, all of which are simple, secure, readily available and perfectly designed to meet your requirements. Remote and on-demand, for engineering consultancies and major clients alike.

I

InfiniBand

InfiniBand is a specification describing a serial transfer technology, supplied by vendors such as Mellanox, for data networks in HPC clusters. CPU 24/7 uses high-performance InfiniBand links to compute your numerical simulations. QDR and FDR are among the current transfer standards, offering rates of 40 Gbit/s and 56 Gbit/s respectively. In the meantime, EDR (Enhanced Data Rate) has been released with 100 Gbit/s, and experts expect the 200 Gbit/s era to arrive in 2017.

Comparing 1 Gbit/s Ethernet with EDR InfiniBand, there is a 100x increase in bandwidth; even against a faster Ethernet solution such as 10 Gbit/s, the increase is still 10x. From a latency perspective, InfiniBand enables applications to communicate in less than a microsecond, compared to Ethernet’s tens if not hundreds of microseconds. Assuming an Ethernet solution performing at 10 Gbit/s, InfiniBand thus has at least a 10x advantage in both bandwidth and latency.
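Why latency matters as much as bandwidth can be sketched with the usual latency-plus-bandwidth model for one message (the latency figures below are illustrative assumptions in the ranges quoted above, not measurements):

```python
# Time for one message = start-up latency + transmission time.
# 1 Gbit/s carries 1000 bits per microsecond, hence the division below.

def transfer_time_us(size_bytes, latency_us, bandwidth_gbit_s):
    return latency_us + size_bytes * 8 / (bandwidth_gbit_s * 1e3)

# A small 4 KiB message, typical of tightly coupled solver communication:
eth_10g = transfer_time_us(4096, latency_us=50.0, bandwidth_gbit_s=10)
ib_edr = transfer_time_us(4096, latency_us=0.9, bandwidth_gbit_s=100)
print(round(eth_10g, 2), round(ib_edr, 2))  # 53.28 1.23 (microseconds)
```

For small messages the start-up latency dominates, which is why InfiniBand's sub-microsecond latency pays off in HPC even when the bandwidth gap alone would not.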

Intel® Cluster Ready

This is a certification programme used for improved harmonisation of cluster systems and applications. The Intel® Cluster Ready certificate provides you with a guarantee that all our cluster components will work together perfectly to provide high performance. Our HPC clusters have Intel® Cluster Ready certification.

J

Job Scheduler

A job scheduler enables computations to be sent to a queue so that they can automatically begin as soon as the necessary resources are available. This can be particularly effective if computations are being carried out at night or outside normal business hours.

Job scheduling is also useful if a large number of users are working on the cluster simultaneously, but there are insufficient resources to execute all jobs in parallel with one another.

With CPU 24/7, queuing systems are deployed exclusively to meet specific customer requirements. Long waits in the queue and competition to prioritise outstanding jobs are therefore avoided.
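The core idea of a job queue can be sketched in a few lines (a deliberately simplified toy, with made-up names; real queuing systems handle priorities, time limits, reservations and much more):

```python
# Toy FIFO job queue: a job waits until enough cores are free, then starts.
from collections import deque

class ToyScheduler:
    def __init__(self, total_cores):
        self.free_cores = total_cores
        self.queue = deque()      # submitted but not yet started
        self.running = []         # (name, cores) of running jobs

    def submit(self, name, cores):
        self.queue.append((name, cores))
        self._dispatch()

    def finish(self, name):
        # A running job completes and releases its cores.
        for job in self.running:
            if job[0] == name:
                self.running.remove(job)
                self.free_cores += job[1]
                break
        self._dispatch()

    def _dispatch(self):
        # Start queued jobs strictly in submission order while cores allow.
        while self.queue and self.queue[0][1] <= self.free_cores:
            name, cores = self.queue.popleft()
            self.free_cores -= cores
            self.running.append((name, cores))

sched = ToyScheduler(total_cores=28)
sched.submit("cfd_run", 24)           # starts immediately
sched.submit("fem_run", 12)           # waits: only 4 cores are free
print([j[0] for j in sched.running])  # ['cfd_run']
sched.finish("cfd_run")               # frees 24 cores -> fem_run starts
print([j[0] for j in sched.running])  # ['fem_run']
```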

K

Key User

A key user is a cluster user with special authorisation. The details are specified prior to use. Specific restrictions on the allocation of user rights have an important part to play in our system security policies.

L

Licenses

What use is the best computing performance if the applications cannot be used? The dynamic aspect of HPC Cloud Computing creates a need to provide more flexible, time-dependent user licences. Annual licences are rarely worthwhile for small and medium-sized companies with widely fluctuating workloads, and therefore on demand models provide a good alternative.

To ensure the most effective functioning of the hardware and software we handle all aspects of licence management, from consultancy and procurement to integration and support, including updates and upgrades. We also provide you with such open source products as OpenFOAM and numerous commercial products from ANSYS, CD-adapco, NUMECA, COMSOL and many more.

Ready-made workflows and in-depth partnerships ensure that we can not only obtain and integrate the required licences promptly, but also provide a qualitative impetus by making existing licensing models more flexible. We will also be happy to install your own in-house code on your customised HPC cluster.

LS-DYNA

LS-DYNA is an FEM computing program that enables the simulation and investigation of highly nonlinear physical processes on the computer.

M

Mellanox Technologies

Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end InfiniBand and Ethernet interconnect solutions and services for servers and storage.

Message Passing Interface (MPI)

MPI is a standard for exchanging messages in parallel computing on distributed computers, where the aim is always maximum portability and performance. CPU 24/7 uses various implementations, such as Open MPI and Intel MPI, for this purpose. The impact on cluster performance can vary depending on the particular application. We have extensive knowledge of these effects and can regulate and optimise them, for example using the relevant software packages or acceleration engines.

Multiphysics

Multiphysics refers to simulations that involve multiple physical models or multiple simultaneous physical phenomena, for example combining chemical kinetics with fluid mechanics, or finite elements with molecular dynamics. Multiphysics typically involves solving coupled systems of partial differential equations.

Many physical simulations include coupled systems, such as electric and magnetic fields for electromagnetism, pressure and velocity for sound, or the real and the imaginary part of the quantum mechanical wave function. Another case is the mean field approximation for the electronic structure of atoms, where the electric field and the electron wave functions are coupled.

N

Network File System (NFS)

NFS enables several nodes to access a common file system. An NFS server exports a directory of its local storage, which is “mounted” by several NFS clients; access then takes place as if the directory were mounted locally on the client. The disadvantage is that each file is stored just once on the NFS server (a single storage node), so performance declines as more nodes access it.

O

OpenMP

OpenMP (Open Multi-Processing) is an application programming interface (API) that supports multi-platform shared memory multiprocessing programming in C, C++, and Fortran, on most platforms, processor architectures and operating systems. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior.

OpenMP uses a portable, scalable model that gives programmers a simple and flexible interface for developing parallel applications for platforms ranging from the standard desktop computer to the supercomputer. An application built with the hybrid model of parallel programming can run on a computer cluster using both OpenMP and the Message Passing Interface (MPI), or more transparently through the use of OpenMP extensions for non-shared-memory systems.

Operating System

For our HPC clusters the clear favourite is SUSE Linux Enterprise Server (SLES) 11, which has regularly demonstrated its long-term stability and reliability. Unlike most others, this system is also designated a “Supported Platform” by many manufacturers in the HPC field, so trouble-free operation with the applications is guaranteed. The brand-new SLES 12 is currently undergoing rigorous trials and will only be introduced once it has proven to be equally robust.

P

Parallel File System

In contrast to NFS, with the parallel file system a number of nodes can access the same data simultaneously and in coordination, because this data is distributed among several storage nodes. This improves the performance of the entire system significantly, especially when access involves a high volume of writing or reading that requires large bandwidths, as is commonly the case with HPC.

Petaflop

One petaFLOPS (PFLOPS) equals 10^15 FLOPS.

Private Managed Cloud

In an age of rapid technological development, companies have recognised the benefits of making their own IT infrastructure more flexible. For many companies, transferring data to the public cloud is not an option because of data protection and information security issues, which is not surprising given the sensitivity of data that can have a decisive effect on competitiveness. Alternatives are therefore needed. The private cloud could offer just such an alternative, if the total cost of ownership were not disproportionately high.

The private managed cloud provides a compromise. In technical terms, a private managed cloud is a cloud solution that is not based on a shared infrastructure and is intended exclusively for a particular client, but is managed by a third-party provider in accordance with individual agreements. In operational and security terms, this means that working on CPU 24/7 HPC resources is, in principle, like working at an additional business location of one’s own.

Q

Queuing System

Queuing systems ensure that hardware and software resources are explicitly allocated to individual jobs and that the workload is ideally distributed among the available resources. If you have several computing jobs or wish to start them at a particular time, these are usually placed in the job scheduler’s queue.

CPU 24/7 uses queuing systems only for client-specific environments. So take care: some other providers offer special terms for the use of a global, non-exclusive queue. In that case a number of clients must share a computer pool or infrastructure simultaneously, and it is not always clear how jobs are prioritised in the queue, when a job starts, what happens if it is interrupted, or whether a security-critical situation may arise.

R

Remote Visualisation

Remote visualisation finally allows computational engineers to make qualified statements about their results. In the age of “big data” the visualisation of computations and simulations can be controlled most effectively through remote visualisation software such as VNC, NoMachine NX or NICE DCV.

One major advantage of remote visualisation is that the vast amounts of data are not stored locally and capacity does not have to be provided for uploads and downloads. Instead access takes place remotely from a local computer to a computer or server at another location. CPU 24/7 can offer you various open source and commercial possibilities for desktop visualisation.

Rigid Body Simulation

Rigid body simulation describes the simulation of the movement of virtually modelled spatial objects. These movements can be translational or rotational.

S

Security

CPU 24/7 HPC clusters are characterized by the following key security features:

Data Centre Security

The HPC clusters are located in an ISO 27001-certified, verifiable, high-security computing centre in Berlin, Germany. The equipment in the data centre conforms to Tier 4 and includes redundant infrastructure, access controls, authorisation concepts and video monitoring.

Server Security

Each CPU 24/7 customer is provided with a non-virtualised, customised simulation environment with access to exclusively dedicated and private bare-metal servers. A CPU 24/7 solution is not based on a shared IT infrastructure and consequently its use is confined to the specific client.

Network Security

Customers and their data are isolated by completely separate systems. In technical terms, the connection and operation are equivalent to adding an additional business location. Secure, authenticated industry-standard connections and encrypted protocols (e.g. SCP, SSH or SFTP) are used for communication.

Application and Platform Security

CPU 24/7 operates well-integrated, effective patch and change management in order to avoid operating faults and to minimise security vulnerabilities, which can then be resolved quickly. CPU 24/7 verifies that patches are compatible on test systems before adopting them in production.

Data Security

Data is stored solely in German high-security data centres and is handled in accordance with the Federal Data Protection Act and the guidelines of the Federal Office for Information Security (BSI). Data deletion is carried out according to common data deletion standards. Data media that are no longer in use are destroyed by appropriate means.

Additional Individual Agreements

On request, CPU 24/7 sets up individual, enterprise-class security and availability agreements as well as wide-ranging confidentiality agreements, service level agreements or a system security policy to meet the customer’s security and availability requirements 100%.

Secure Shell (SSH)

Secure Shell (SSH) is a cryptographic network protocol for operating network services securely over an unsecured network. SSH provides a secure channel over an unsecured network in a client-server architecture, connecting an SSH client application with an SSH server. Common applications include remote command-line login and remote command execution, but any network service can be secured with SSH. SSH is typically used to log into a remote machine and execute commands, and it can transfer files using the associated SSH file transfer (SFTP) or secure copy (SCP) protocols.

STAR-CCM+

STAR-CCM+ is a CFD tool released by CD-adapco that employs a client-server architecture. It provides a complete GUI for meshing, modelling and visualisation.

Stationary Calculation

A stationary (steady-state) simulation is a calculation that considers the final state of an object rather than its temporal changes. The term is usually used for simulations of temperature and magnetic fields; the energy of the component then consists of potential energy only. In structural mechanics, for example, we speak of a stationary event when a vibration shows no change over time.

T

Teraflop

One teraFLOPS (TFLOPS) equals 10^12 FLOPS.

Thread

A thread is a sequence of programmed instructions that follows a strict order, one step at a time, in other words operating sequentially. Threads are generated and controlled by the relevant application software. In principle several threads can be combined within one process, where they are executed either sequentially or alternately, which yields no time advantage on a single core. Best performance is always the objective in the HPC field, and therefore a process usually comprises a single thread, which then runs on one specific core.
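The one-thread, strictly sequential execution described above can be sketched with the standard library (names are illustrative):

```python
# A single worker thread executes its instructions strictly in order.
import threading

results = []

def worker(name, steps):
    # The steps inside one thread always run sequentially.
    for i in range(steps):
        results.append((name, i))

t = threading.Thread(target=worker, args=("rank0", 3))
t.start()
t.join()          # wait for the thread to finish
print(results)    # [('rank0', 0), ('rank0', 1), ('rank0', 2)]
```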

TOP500 List

The TOP500 List is an international ranking of supercomputers based on the LINPACK benchmark (a measure of a system’s floating-point computing power). The TOP500 table shows the 500 most powerful commercially available computer systems in the world.

Transient Calculation

The property of a parameter not to remain constant over time is called non-stationarity or transience. In structural mechanics this is also called dynamic simulation. The opposite is a stationary calculation.

Transient calculations are of great importance in engineering, particularly in hydraulic engineering, flow calculations, heat transfer calculations and other numerical calculations in which objects that change their state over time are examined. The additional, differentiated consideration of a temporal component in a transient calculation requires more computing power to resolve all physical effects within a reasonable time.

Tests in the automotive sector showed that transient calculations on CPU 24/7 resources were 5-6 times faster than on in-house workstations.

U

User Management

NIS (Network Information Service) is primarily deployed for user management in our adapted cluster systems: all user accounts are administered centrally and made available to the individual nodes as a directory. In CAE Express, users are administered by means of an LDAP (Lightweight Directory Access Protocol) server, which grants users access exclusively to the nodes they have booked.

V

Virtual Network Computing (VNC)

In computing, Virtual Network Computing (VNC) is a graphical desktop sharing system that uses the Remote Frame Buffer protocol (RFB) to remotely control another computer. It transmits the keyboard and mouse events from one computer to another, relaying the graphical screen updates back in the other direction, over a network. VNC is a solution for remote visualisation.

Virtual Private Network (VPN)

Proof that we can adapt flexibly to our clients’ security requirements: setting up a VPN is often a basic prerequisite for a contract. A VPN enables an encrypted connection to be set up across an insecure network with the aid of VPN gateways. This involves the creation of a site-to-site tunnel, terminated by two permanently installed VPN firewalls at the two locations. All data traffic intended for the remote network runs through this “tunnel”.

Another possibility is to install VPN client software on a computer, thereby setting up a tunnel to the VPN gateway of a particular location, through which the encrypted data runs.

W

Wall-clock Time

In computing, wall time or wall-clock time is the actual time, usually measured in seconds, that elapses from the start to the completion of a task, or in other words, that a program takes to run or to execute its assigned tasks/jobs.
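The distinction between wall-clock time and CPU time is easy to demonstrate with the standard library: waiting costs wall time but almost no CPU time.

```python
# Wall-clock time vs CPU time for a task that mostly waits.
import time

start_wall = time.perf_counter()   # wall-clock timer
start_cpu = time.process_time()    # CPU-time timer for this process

time.sleep(0.2)                    # waiting: wall time passes, CPU is idle

wall = time.perf_counter() - start_wall
cpu = time.process_time() - start_cpu
print(wall >= 0.2, cpu < 0.2)      # True True
```

For an HPC job the wall-clock time is what the user waits for (and what is billed), while the accumulated CPU time over all cores can be far larger.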

X

x86/x64 Architecture

The term refers to the register and address width of the CPU. CPUs with a width of 64 bits can access a considerably larger address space (in theory 2^64 bytes = 16 EiB (exbibytes), i.e. about 16 billion GiB). Moreover, 64-bit CPUs are significantly faster, because 64 rather than 32 bits can be processed in a single cycle. There are no longer any 32-bit CPUs in the HPC field.
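The address-space arithmetic checks out directly:

```python
# 64-bit address space expressed in EiB and GiB.
address_space_bytes = 2 ** 64
gib = 2 ** 30
eib = 2 ** 60

print(address_space_bytes // eib)   # 16 (EiB)
print(address_space_bytes // gib)   # 17179869184 (GiB), i.e. ~16 billion GiB
```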

Z

Zuse, Konrad

Konrad Zuse is the “godfather” of the first freely programmable computer using binary switching and floating-point arithmetic. “In the past the public was protected from many bureaucratic abuses by bureaucratic lethargy. Now we have computers, which do everything in milliseconds, […]”. Nowadays we would be more likely to say nanoseconds.