
High Performance Computing

ITS supports high performance research computing by providing both symmetric multiprocessing (SMP) shared-memory computers and high performance computing (HPC) clusters for parallel applications.

A multi-terabyte storage area network (SAN) provides researchers with secure storage for research data.

HPC Resources

The Turing cluster contains 140 multi-core compute nodes, each with 20 cores and 128 GB of RAM. The cluster also provides 4 high-memory nodes with 768 GB of RAM each, 17 Appro GPU nodes with 4 NVIDIA GPUs each, and 10 Intel Xeon Phi nodes with 2 Xeon Phi MIC cards each. An FDR InfiniBand network provides fast intra-cluster communication. The cluster is highly redundant, with clustered head nodes and a dedicated login node. Users connect to the login node over aggregated 10 Gb connections.

The Turing cluster supports a variety of research including fluid dynamics, genomics, molecular dynamics and oceanographic research.


A storage area network (SAN) provides 50 terabytes of disk storage and a tape archive for research data. The research SAN consists of five 6130 Fibre Channel disk arrays, four SATA disk arrays, and a StorageTek L500 tape library, connected redundantly via QLogic Fibre Channel switches.

ODU currently provides a variety of software packages for use in the cluster environment:

Software Resources
Abaqus, Ansys, Atlas, Bio, Charmm, COMSOL, FFTW, Fluent, g03, google-perftools, GotoBLAS, GROMACS, IMB_2.3, Intel cluster compiler suite, LAPACK, Mathematica, MATLAB, METIS, MUMPS, ParMETIS, PETSc, PGI Cluster Development Kit, R, smxgauss, SuperLU_DIST, Valgrind, Zoltan
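
Many clusters expose installed packages through the Environment Modules system. Whether Turing uses it, and the exact module names, are assumptions here, so treat the commands below as a sketch to try after logging in:

```shell
# List software available through Environment Modules (assumed to be installed on the cluster).
module avail

# Load a package into your environment before running it; "matlab" is a hypothetical module name.
module load matlab

# Show which modules are currently loaded.
module list
```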

New to High Performance Computing

The following information gives new users a general understanding of how to use the cluster resources.

What is it?

The Turing ODU Community Cluster is a heterogeneous group of commodity Intel-based servers (primarily Cray and Dell) linked by management software (Bright Cluster Manager) and a job scheduler (Grid Engine) that lets users submit distributed, parallel, and interactive research applications and jobs.
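
Once connected, work is typically handed to Grid Engine as a batch script rather than run directly on the login node. The following is a minimal sketch only: the parallel environment name, time limit syntax, and program name are assumptions that will vary by site, so check with the HPC group for Turing's actual settings.

```shell
#!/bin/bash
# Minimal Grid Engine batch script sketch (site-specific names below are assumptions).
#$ -N my_job            # job name
#$ -cwd                 # run from the directory the job was submitted from
#$ -pe mpi 20           # parallel environment and slot count ("mpi" is a hypothetical name)
#$ -l h_rt=01:00:00     # wall-clock time limit
#$ -j y                 # merge stderr into stdout
#$ -o my_job.out        # output file

echo "Running on $(hostname) with $NSLOTS slots"
./my_program            # placeholder: replace with your application
```

Submit the script with `qsub my_job.sh`, check its status with `qstat`, and cancel it with `qdel <jobid>`.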

1. Getting an Account

To connect to the Turing cluster, the "High Performance Computing (HPC) Service" must be enabled within your ODU MIDAS services. To do this, send an email to itshelp@odu.edu requesting access.

2. Connecting to the Turing Cluster (turing.hpc.odu.edu)

Step 1: Open an SSH Client

  • Windows - PuTTY (http://www.putty.org)
  • OSX - Terminal.app (Applications → Utilities → Terminal)
  • Linux/Unix - Terminal or Console/Konsole

Step 2: Connect to Turing

  • Windows - PuTTY
    • Host: turing.hpc.odu.edu
    • Username: MIDASID
    • Password: MIDAS Password
  • OSX / Linux / Unix
    • Type: ssh MIDASID@turing.hpc.odu.edu and press [Enter]
    • If asked, accept the RSA host key prompt
    • Type your MIDAS password and press [Enter]

3. Transferring Data Files

  • Download the program FileZilla (https://filezilla-project.org)
  • Install FileZilla
  • Connect to the remote host - turing.hpc.odu.edu
    • Use your MIDAS ID and password to login
  • Drag and drop needed files from the local computer (the computer you are connecting from) to the remote host (the Turing cluster)
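
If you prefer the command line to FileZilla, OSX/Linux/Unix users can copy files over the same SSH connection with scp. The file names below are placeholders:

```shell
# Copy a local file to your home directory on Turing (placeholder file names).
scp results.dat MIDASID@turing.hpc.odu.edu:~/

# Copy a file from Turing back to the current local directory.
scp MIDASID@turing.hpc.odu.edu:~/output.log .
```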

4. Getting Help

Appointments: The HPC Group would be glad to meet with you to discuss your research. To set up a meeting, please email hpc@odu.edu.