Overview
UMIACS manages high-performance computing systems on a variety of platforms and operating systems in support of its faculty and research programs. In addition to managing the hardware, data center facilities, file systems, and networks associated with these systems, the Institute's systems and research staff maintain a development environment with a comprehensive set of compilers, parallel programming tools, and libraries for scientific and numerical computing, as well as software applications specific to each lab's research area.
Rather than building a single very large system, the Institute works closely with faculty and researchers to customize each HPC system for their needs. The resulting systems are highly heterogeneous, incorporating a wide range of computing architectures and parallel programming models on hardware from many different original equipment manufacturers.
Applications
These systems support faculty members who need significant computing resources for their research. In addition to helping them study new methods for parallel and distributed computing and software engineering, they support several novel applications.
Selected Facilities
The Chimera Cluster
The Chimera Cluster is a high-performance computing and visualization cluster that takes advantage of the synergies afforded by coupling central processing units (CPUs), graphics processing units (GPUs), large-scale tiled displays, and storage. It is built from fifty-seven servers, each equipped with two Intel Nehalem processors, twenty-four GB of main memory, an NVIDIA programmable GPU, an Infiniband host channel adapter, and a Gigabit Ethernet host bus adapter. It also includes a 100-megapixel display wall built from twenty-five 30-inch LCD panels.
The cluster supports research and development in a variety of areas, including graphics and visualization, scientific computing, computational biology, real-time virtual audio, and real-time computer vision. It also supports Computer Science Department classes on information processing, visualization, and computer graphics and on high performance computing.
The Crocco Cluster
The Crocco Cluster is a high-performance computer that supports the Crocco Laboratory's research in computational fluid dynamics and the numerical simulation of turbulent flows, surface reactions, and fluid interaction. The cluster is built from 96 servers, each equipped with two quad-core Intel Xeon 2.96 GHz processors, twenty-four gigabytes of 1.3 GHz main memory, a Gigabit Ethernet adapter, and an Infiniband interconnect.
The cluster is primarily used for developing and running the lab's numerical simulations, whose algorithms are implemented with the Message Passing Interface (MPI) and its input/output interface, MPI-IO. It is also used to store, analyze, and extend a large database of turbulent flows that includes over 50 terabytes of both experimental and computational data. Since 2007, the lab has used the cluster to contribute new flows to the iCFDdatabase, a popular and open database of numerical flow simulations.
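As an illustration of this I/O pattern, the following minimal C sketch shows how the ranks of an MPI program can use MPI-IO to write their blocks of a distributed field to a single shared file. The file name, block size, and data values are illustrative placeholders, not details of the lab's actual simulation codes.

    /* Minimal MPI-IO sketch: each rank writes its block of a distributed
     * array to one shared file at a rank-dependent offset.
     * Build with an MPI compiler wrapper, e.g. mpicc -std=c99. */
    #include <mpi.h>

    #define LOCAL_N 1024   /* values owned by each rank (hypothetical) */

    int main(int argc, char **argv)
    {
        int rank;
        double buf[LOCAL_N];
        MPI_File fh;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Fill the local block with dummy data. */
        for (int i = 0; i < LOCAL_N; i++)
            buf[i] = rank + i * 1e-6;

        /* All ranks open the same file; rank-based offsets keep blocks disjoint. */
        MPI_File_open(MPI_COMM_WORLD, "flow_field.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        MPI_Offset offset = (MPI_Offset)rank * LOCAL_N * sizeof(double);
        MPI_File_write_at_all(fh, offset, buf, LOCAL_N, MPI_DOUBLE,
                              MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }

The collective write (MPI_File_write_at_all) lets the MPI library coordinate the ranks' requests into large, contiguous file accesses, which is the usual motivation for MPI-IO in simulation output.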
The Bug and Skoll Clusters
The Bug and Skoll clusters support computer science research in parallel and distributed computing, software engineering, and quality assurance. The Bug cluster is a Myrinet-connected distributed-memory system built from sixty-four servers that support parallelism through the Message Passing Interface and OpenMP, as well as fine-grained system performance measurements based on Linux Performance Counters. The Skoll cluster is a serial computing system that supports the Skoll Project's implementation of a Distributed Continuous Quality Assurance (DCQA) system on 120 servers for a wide range of applications using the Linux operating system and VMware.
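For context, the sketch below shows the hybrid MPI-plus-OpenMP style of parallelism the Bug cluster is described as supporting: MPI ranks span nodes while OpenMP threads share work within a node. The reduction loop is a placeholder, not an application drawn from the cluster, and the build command is only a typical example (e.g. mpicc -fopenmp).

    /* Minimal hybrid MPI + OpenMP sketch: each rank's threads compute a
     * partial sum, and MPI_Reduce combines the per-rank results. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;

        /* Request a threading level that allows OpenMP outside MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        double local_sum = 0.0;

        /* Threads within the rank share one partial reduction. */
        #pragma omp parallel for reduction(+:local_sum)
        for (int i = 0; i < 1000000; i++)
            local_sum += 1.0 / (1.0 + i + rank);

        double global_sum = 0.0;
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("%d ranks x up to %d threads per rank, sum = %f\n",
                   nranks, omp_get_max_threads(), global_sum);

        MPI_Finalize();
        return 0;
    }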