
High Performance Computing

ITS supports high-performance research computing by providing both symmetric multiprocessing (SMP) shared-memory computers and High Performance Computing (HPC) clusters for parallel programming applications.
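
On the SMP shared-memory systems, parallelism is typically expressed with threads that share a single address space, most commonly via OpenMP. The following is a minimal sketch (not taken from ITS documentation) showing how a loop can be split across the cores of one machine with an OpenMP reduction:

/*
 * Minimal OpenMP sketch of shared-memory (SMP) parallelism: each thread
 * sums a slice of the array, and the reduction clause combines the results.
 * Compile with any OpenMP-capable compiler, e.g. gcc -fopenmp sum.c -o sum
 */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    static double data[N];
    double total = 0.0;

    for (int i = 0; i < N; i++)
        data[i] = 1.0;

    /* Threads share the data[] array; only the loop iterations are divided. */
    #pragma omp parallel for reduction(+:total)
    for (int i = 0; i < N; i++)
        total += data[i];

    printf("sum = %f using up to %d threads\n", total, omp_get_max_threads());
    return 0;
}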

A multi-terabyte storage area network (SAN) is available to provide researchers with secure storage of research data.

Available Resources Include:

The Turing cluster contains 52 multi-core compute nodes and 17 Appro GPU nodes. The standard compute nodes consist of 28 Ivy Bridge nodes and 20 Sandy Bridge nodes, while each GPU node contains 4 NVIDIA Tesla GPUs, for a total of 68 GPUs. Compute nodes have between 32 GB and 128 GB of RAM. The cluster is highly redundant, with clustered head nodes and a dedicated login node.
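
The four Tesla cards in each GPU node can be enumerated from a host program through the CUDA runtime API. The sketch below is illustrative only and assumes the CUDA toolkit is installed on the GPU nodes (module names and paths vary by cluster):

/*
 * Sketch: enumerate the GPUs visible on a GPU node using the CUDA runtime API.
 * Assumes the CUDA toolkit is available; compile with nvcc query.c -o query
 * (or link against -lcudart with a host compiler).
 */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "no CUDA-capable devices found\n");
        return 1;
    }

    for (int i = 0; i < count; i++) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s, %.1f GB memory\n",
               i, prop.name, prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}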


The Nikola cluster contains 52 multi-core compute nodes and 17 Appro GPU nodes. The standard compute nodes consist of 24 Ivy Bridge nodes and 28 Sandy Bridge nodes, while each GPU node contains 4 NVIDIA Tesla GPUs, for a total of 68 GPUs. Compute nodes have between 32 GB and 128 GB of RAM. The cluster is highly redundant, with clustered head nodes and a dedicated login node.

Zorka is a 40-node parallel computing cluster. Each node contains two dual-core 2.992 GHz processors and 8 GB of RAM.
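
Jobs that span multiple nodes on these clusters use distributed-memory parallelism, normally via MPI. The sketch below is a generic MPI example, assuming an MPI compiler wrapper (mpicc) and launcher (mpirun) are available; the exact commands depend on the scheduler and MPI stack configured on each cluster:

/*
 * Minimal MPI sketch of distributed-memory parallelism across cluster nodes:
 * every rank reports where it is running, and rank 0 prints the job size.
 * Compile with mpicc hello.c -o hello and launch with mpirun -np <ranks> ./hello.
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &len);

    printf("rank %d of %d running on %s\n", rank, size, host);

    if (rank == 0)
        printf("job spans %d MPI ranks\n", size);

    MPI_Finalize();
    return 0;
}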

A storage area network (SAN) provides 50 terabytes of storage and tape archive for research data. The Research SAN architecture consists of five 6130 Fibre Channel disk arrays, four SATA disk arrays, and a StorageTek L500 tape library, connected redundantly via QLogic Fibre Channel switches.


Software

ODU currently provides a variety of software packages for use in the cluster environments; a short usage sketch for one of the listed libraries follows the list:

Software Resources:
Abaqus, ANSYS, ATLAS, Bio, CHARMM, COMSOL, FFTW, Fluent, g03, google-perftools, GotoBLAS, GROMACS, IMB_2.3, Intel cluster compiler suite, LAPACK, Mathematica, MATLAB, METIS, MUMPS, ParMETIS, PETSc, PGI Cluster Development Kit, R, smxgauss, SuperLU_DIST, Valgrind, Zoltan
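
As an illustration of using one of the listed libraries, the sketch below calls FFTW from C to compute a small 1-D transform. It assumes FFTW 3 headers and libraries are on the compiler search path; the actual module or link flags depend on how the package is installed on the cluster:

/*
 * Sketch of calling one of the listed libraries (FFTW) from C: a forward
 * 1-D complex-to-complex transform. Compile with e.g. cc fft.c -lfftw3 -lm
 * once the FFTW module or install location is on the compiler search path.
 */
#include <stdio.h>
#include <fftw3.h>

#define N 8

int main(void)
{
    fftw_complex *in  = fftw_malloc(sizeof(fftw_complex) * N);
    fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * N);

    /* Create the plan first, then fill the input: FFTW may touch the
       arrays while planning with some flags. */
    fftw_plan plan = fftw_plan_dft_1d(N, in, out, FFTW_FORWARD, FFTW_ESTIMATE);

    for (int i = 0; i < N; i++) {
        in[i][0] = (double) i;  /* real part      */
        in[i][1] = 0.0;         /* imaginary part */
    }

    fftw_execute(plan);

    for (int i = 0; i < N; i++)
        printf("out[%d] = %f + %fi\n", i, out[i][0], out[i][1]);

    fftw_destroy_plan(plan);
    fftw_free(in);
    fftw_free(out);
    return 0;
}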