High Performance Clusters (HPC)

HPC installations can use either the workstation or the thin client install package.

Note: The install folder of a workstation install should not be exported across nodes. This makes the thin client install the better choice when the nodes share a common file system.
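As a rough aid, an administrator could check whether a candidate install folder sits on a shared file system before installing. The sketch below is illustrative only; the install path and the set of file system types treated as shared are assumptions, not part of the product.

    # Sketch: warn if an install folder resides on a network mount (Linux).
    # The path and the list of "shared" filesystem types are placeholders.
    import os

    INSTALL_DIR = "/opt/simulation/workstation"   # hypothetical install path
    NETWORK_FS = {"nfs", "nfs4", "cifs", "lustre", "gpfs"}

    def mount_fs_type(path):
        """Return the filesystem type of the mount containing 'path'."""
        path = os.path.realpath(path)
        best, fstype = "", ""
        with open("/proc/mounts") as mounts:
            for line in mounts:
                _, mnt, fs = line.split()[:3]
                prefix = mnt.rstrip("/") + "/"
                if (path == mnt or path.startswith(prefix)) and len(mnt) > len(best):
                    best, fstype = mnt, fs
        return fstype

    if mount_fs_type(INSTALL_DIR) in NETWORK_FS:
        print("Warning: install folder is on a shared filesystem; "
              "consider the thin client install instead.")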

The Simulation Compute Manager (SCM) must be started on each node before running a study. The SCM uses node-local folders to store its logs, temporary files, and databases. See the SCM configuration documentation for details.
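The exact start command depends on your installation; the sketch below only illustrates the idea of launching the SCM on every node with a node-local working folder. The node names, the start_scm.sh launcher, its --workdir option, and the /var/tmp/scm location are all hypothetical placeholders.

    # Sketch: start the SCM on each node over SSH with a node-local work folder.
    # Command name, option, paths, and host names are placeholders.
    import subprocess

    NODES = ["node01", "node02", "node03"]          # hypothetical host names
    START_CMD = "/opt/simulation/bin/start_scm.sh"  # hypothetical start script
    LOCAL_WORK = "/var/tmp/scm"                     # node-local logs/temp/databases

    for node in NODES:
        # Each node keeps its own working folder; nothing here is shared.
        remote = f"mkdir -p {LOCAL_WORK} && {START_CMD} --workdir {LOCAL_WORK}"
        result = subprocess.run(["ssh", node, remote],
                                capture_output=True, text=True)
        status = "started" if result.returncode == 0 else f"failed: {result.stderr.strip()}"
        print(f"{node}: {status}")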

The solvers are threaded, not Message Passing Interface (MPI) enabled, and by default use as many threads as there are physical cores on the execution machine. For jobs that spawn child jobs, such as DoE, Optimization, or Runner Balance Studies, the SCM can distribute the child studies to other nodes in order to spread the workload.
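To make the threading and distribution behaviour concrete, the following sketch reads the physical core count (using the third-party psutil package) and assigns hypothetical child studies to nodes round-robin. The node names and study labels are placeholders; this is not the SCM's actual scheduling logic.

    # Sketch: default thread count equals the physical core count, and child
    # studies are spread across nodes round-robin.
    import itertools
    import psutil

    # Threaded (non-MPI) solver: default to one thread per physical core.
    default_threads = psutil.cpu_count(logical=False) or 1
    print(f"Default solver threads on this machine: {default_threads}")

    NODES = ["node01", "node02", "node03"]                  # hypothetical host names
    child_studies = [f"design_point_{i}" for i in range(8)]  # e.g. DoE points

    assignment = {}
    for study, node in zip(child_studies, itertools.cycle(NODES)):
        assignment.setdefault(node, []).append(study)

    for node, studies in assignment.items():
        print(node, "->", studies)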

Your cluster should consist of a number of hosts, or nodes, which may or may not be identical, each with its own address and each running the same version of Linux. The cluster is firewalled against outside communication, with the following exception:

Nodes within the cluster are either not firewalled from one another, or can have ports opened on demand. See the additional notes on using the Portable Batch System (PBS).
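A simple connectivity probe, such as the sketch below, can help confirm that the required ports are open between nodes. The node names and port numbers are placeholders; use the ports your SCM and batch system actually require.

    # Sketch: check that service ports are reachable between cluster nodes.
    # Host names and port numbers are placeholders.
    import socket

    NODES = ["node01", "node02", "node03"]   # hypothetical host names
    PORTS = [15001, 15002]                   # hypothetical service ports

    for node in NODES:
        for port in PORTS:
            try:
                with socket.create_connection((node, port), timeout=2):
                    print(f"{node}:{port} reachable")
            except OSError as exc:
                print(f"{node}:{port} blocked or closed ({exc})")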