
Meet Zorka, Old Dominion University's First Teraflop Compute Cluster

Old Dominion University's first teraflop compute cluster, which has been given the name Zorka, has been installed on the fourth floor of the E.V. Williams Engineering and Computational Sciences Building and is already winning rave reviews from the university's research community.

A teraflop equals 1,000 gigaflops, or one trillion floating-point operations per second, a level of performance Zorka can reach even when running at partial capacity. (An average desktop system peaks near 5 gigaflops.) The new Dell high-performance cluster can handle the data crunching required for complex studies and simulations in fields such as aerospace engineering, mathematics, oceanography and bioelectric engineering.

Michael Sachon, assistant director for research computing in ODU's Office of Computing and Communications Services (OCCS), said the Zorka cluster is rated at 1.5 teraflops and comprises the following (a rough back-of-the-envelope reading of these numbers appears after the list):

• Forty compute nodes, each with two 3-gigahertz dual-core Intel processors and 8 gigabytes of memory, providing 160 processor cores for parallel or serial applications.

• Four symmetric multiprocessor (SMP) nodes, each with four 2.4-gigahertz quad-core processors and 32 gigabytes of memory, providing 64 additional processor cores for large, shared memory applications.

• Four input/output (I/O) nodes supplying disk space to research applications: 9 terabytes of parallel file system disk and 3 terabytes of network file system (NFS) disk.

• A 20 gigabit-per-second InfiniBand fabric connecting the compute nodes and I/O nodes.
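Putting those figures together gives a sense of where a teraflop-class rating comes from. The sketch below is a rough back-of-the-envelope estimate only: it multiplies core counts by clock speed and by an assumed number of floating-point operations per core per clock cycle. That last number is an illustrative assumption, not a published specification, so the result will not match the quoted 1.5-teraflop rating exactly.

    # Rough peak-performance estimate from the hardware figures above (illustrative only).
    # FLOPS_PER_CYCLE is an assumption; processors of this era were often credited
    # with 2 or 4 double-precision floating-point operations per core per cycle.
    FLOPS_PER_CYCLE = 2

    compute_cores = 40 * 2 * 2   # 40 nodes x two dual-core processors = 160 cores
    compute_ghz = 3.0
    smp_cores = 4 * 4 * 4        # 4 SMP nodes x four quad-core processors = 64 cores
    smp_ghz = 2.4

    peak_gflops = (compute_cores * compute_ghz + smp_cores * smp_ghz) * FLOPS_PER_CYCLE
    print(f"Estimated peak: {peak_gflops:.0f} gigaflops (~{peak_gflops / 1000:.1f} teraflops)")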

In plainer language, Zorka offers the nifty combination of high-performance computing and fast disk space within the cluster, allowing applications to run at very high speed and with low latency. This setup solves a problem akin to traffic congestion that researchers had encountered with the university's older equipment. Sachon refers to it as a "bottleneck" in the link between the compute processors and the external disk space that, during peak usage, was like the Hampton Roads Bridge-Tunnel on a Friday afternoon in the summer.

Michael Dinniman, a research scientist with ODU's Center for Coastal Physical Oceanography, is an example of a satisfied customer. He moved quickly to begin using Zorka in his computer modeling to simulate the ocean circulation and sea-ice dynamics on the western side of the Antarctic Peninsula. His simulations, which will provide valuable information about effects of climate change, are running on Zorka more than four times faster than they did on the university's aging Orion cluster.

One aspect of his work, Dinniman said, is modeling the interaction between the ocean and the enormous ice sheets on Antarctica that slide off the continent in some coastal areas. "We want to look at possible changes in the melting at the base of these ice shelves due to warmer ocean waters," he explained. "Changes in the basal melting of the ice shelves may change the speed at which ice sheets slide off the continent. If more ice slides off, that will change the sea level."

When Dinniman ran the model on the older cluster, he would be allocated 16 central processing units (CPUs) and each year of simulation would take about six days to run. He said he usually got only a couple of model years of useful simulation, which made it difficult for him to see the broad picture of inter-annual variability. "For example, some runs only covered 2001-02, and it would be impossible to say which year was the anomalous one. With the new cluster, so far I am able to get more CPUs per model run and each CPU is faster. If I use 32 CPUs, which is what I'm using right now, it only takes about 32 hours to run one year of simulation. At this speed, we can actually do simulations spanning a decade or two and look at things like trying to tell if current conditions are different from the 1990s and, if so, why."
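Dinniman's run times imply a wall-clock speedup consistent with the "more than four times faster" figure mentioned above; the quick check below is simply arithmetic on the numbers he quotes.

    # Wall-clock time per simulated year, from the run times quoted above.
    old_hours_per_year = 6 * 24   # about six days per model year on 16 CPUs of the older Orion cluster
    new_hours_per_year = 32       # about 32 hours per model year on 32 CPUs of Zorka

    speedup = old_hours_per_year / new_hours_per_year
    print(f"Wall-clock speedup: about {speedup:.1f}x")   # 4.5x, i.e. more than four times faster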

Sachon smiles when he hears how pleased Dinniman is with Zorka. "That's what I want to hear," he said. "I'm hearing good things from other researchers, too."

Sachon said he and his colleagues are frankly surprised that they were able to implement this architecture within their budget constraints. The ODU experts worked on the project with a Dell, Inc., group led by major account manager Tim Wilkinson. "Tim and his Dell team were very supportive of what we wanted to do from a technology perspective, and also understood our need to have maintenance for the equipment for its life cycle," Sachon said. "We are very appreciative of Dell working with us to advance ODU research computing."

Wilkinson was pleased with the outcome, as well, and said in a telephone interview, "Mike and his team deserve praise, too."

The name Zorka derives from that of another ODU computer cluster, Mileva: Albert Einstein's first wife, Mileva Maric Einstein, had a younger sister named Zorka.

In addition to Sachon, the university's Research Computing Group includes Ruben Igloria, who focuses on high-performance computing applications and is the lead systems administrator; Mahantesh Halappanavar, who focuses on parallel programming and grid computing; Amit Kumar, who focuses on parallel applications and cluster and research storage; and George McLeod, a systems engineer for Geographic Information Systems.

The team set out to deliver technological advancements found in modern supercomputers, such as multi-core processors, fast disk I/O and high-speed interconnects. Another goal was a scalable architecture. For example, the chassis for Zorka's InfiniBand switch can be scaled up to 144 ports and has a capacity of 5.76 terabits per second. "Although only 48 ports were initially purchased, we can add new servers to this architecture very economically and each server will have access to 12 terabytes of disk space delivered over InfiniBand," Sachon said. Because many science and engineering applications are sensitive to communication latency, they can benefit greatly from the high-speed InfiniBand switch. The system has been built to efficiently handle serial applications with large memory requirements, as well as parallel applications built for shared-memory or distributed-memory architectures.
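The quoted switch capacity lines up with the fabric speed given earlier, at least under one plausible reading: 144 ports, each carrying 20 gigabits per second in each direction, add up to 5.76 terabits per second of aggregate bidirectional bandwidth. The short check below assumes that bidirectional counting convention.

    # Aggregate capacity of a fully populated 144-port switch chassis,
    # assuming each port carries 20 Gb/s in each direction (a counting assumption).
    ports = 144
    gbps_per_direction = 20
    directions = 2

    total_tbps = ports * gbps_per_direction * directions / 1000
    print(f"Aggregate switch capacity: {total_tbps:.2f} terabits/second")   # 5.76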

ODU's high-performance computing team has installed many science and engineering applications for the research community, including Abaqus, Ansys, Charmm, Fluent, Gromacs, Matlab, Quant and SuperLU, as well as bioinformatics tools such as ClustalW, EMBOSS, HMMER, MrBayes and MPI-BLAST. Zorka has both GNU and Intel compilers, along with scientific libraries such as Intel MKL, ATLAS, FFTW, GotoBLAS and GSL. The team can assist users with deploying and running their applications.

Zorka can be accessed via three log-in nodes that use round-robin domain name system (DNS) load balancing to distribute users across the three servers. The lightweight directory access protocol (LDAP)-based authentication system allows users to log in with a MIDAS username and password.
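As a rough illustration of how that round-robin arrangement behaves from a user's point of view: a single cluster hostname resolves to the addresses of all three log-in nodes, and successive lookups rotate the order, so connections spread across the servers. The hostname below is a placeholder invented for the example, not the cluster's actual address.

    import socket

    # Placeholder login hostname (hypothetical); round-robin DNS would return one
    # address record per log-in node and rotate their order across lookups.
    HOSTNAME = "zorka.hpc.example.edu"

    try:
        records = socket.getaddrinfo(HOSTNAME, 22, proto=socket.IPPROTO_TCP)
        addresses = sorted({rec[4][0] for rec in records})
        print(f"{HOSTNAME} resolves to {len(addresses)} address(es): {addresses}")
    except socket.gaierror as err:
        print(f"Lookup failed (placeholder hostname): {err}")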

The Research Computing Group has prepared a user guide for Zorka. This and other information about the cluster can be obtained by calling or e-mailing msachon@odu.edu (ext. 4856) or rigloria@odu.edu (ext. 4842).
