High Performance Computing

The Miami Redhawk cluster is available for use by faculty, staff, and students. Contact the RCS group to arrange a discussion about how the cluster can support your research and teaching efforts.

For larger computing needs, the RCS group can assist researchers with using resources available from the Ohio Supercomputer Center.


Faculty can request account activation on the cluster management website. Students can ask a faculty member to sponsor an account for them.


There are different ways to access the Redhawk cluster. Note that when you are accessing the cluster, you are connecting to the head or login node of the cluster. See the section on using the cluster for information about accessing the compute nodes.

Command-line access is available using SSH; files can be transferred with SFTP or SCP.
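As a sketch, typical SSH and file-transfer commands look like the following. The hostname and username shown are illustrative placeholders, not the cluster's actual address; submit a Connection Instruction Request for the real connection details.

```shell
# Connect to the cluster head (login) node over SSH.
# "redhawk.example.edu" is a placeholder hostname.
ssh yourusername@redhawk.example.edu

# Copy a local file to your home directory on the cluster with SCP.
scp results.csv yourusername@redhawk.example.edu:~/

# Or open an interactive SFTP session for transferring files.
sftp yourusername@redhawk.example.edu
```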

Access to the cluster with a full Linux desktop is also available using a tool called NoMachine or NX.

For more information on how to connect to the Redhawk cluster, submit a Connection Instruction Request.


Miami's current HPC cluster consists of:

  • 2 login nodes – 24 cores, 384 GB of memory each. Machine names:
    • mualhplp01
    • mualhplp02
  • 26 compute nodes – 24 cores, Intel Xeon Gold 6126 2.6 GHz processors, 96 GB of memory each. Machine names:
    • mualhpcp10.mpi-mualhpcp26.mpi
    • mualhpcp28.mpi-mualhpcp35.mpi
    • mualhpcp37.mpi
  • 5 compute nodes – 24 cores, Intel Xeon Gold 6226 2.7 GHz processors, 96 GB of memory each. Machine names:
    • mualhpcp42.mpi-mualhpcp45.mpi
    • mualhpcp47.mpi
  • 2 large memory nodes – 24 cores, Intel Xeon Gold 6126 2.6 GHz processors, 1.5 TB of memory each. Machine names:
    • mualhpcp27.mpi
    • mualhpcp36.mpi
  • 4 GPU nodes – 24 cores, Intel Xeon Gold 6126 2.6 GHz processors, 96 GB of memory, and 2 Nvidia Tesla V100-PCIE-16GB GPUs each. Machine names:
    • mualhpcp38.mpi-mualhpcp41.mpi
  • Shared storage system with approximately 30 TB of storage, expandable.


Most software on the cluster is managed with the modules tool.
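As a sketch of how the modules tool is typically used (the module name and version below are illustrative, not a package guaranteed to be installed on Redhawk):

```shell
# List the software modules available on the cluster.
module avail

# Load a module to add its software to your environment.
# "gcc/9.2.0" is an illustrative name; use one from the avail listing.
module load gcc/9.2.0

# Show which modules are currently loaded.
module list

# Unload everything when you are done.
module purge
```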

The HPC Software table lists software installed on the Redhawk cluster.

Contact the RCS group to request installation of additional packages or with other questions about software on the cluster.

Using the Cluster

Cluster usage is broken into two categories – interactive and batch. Interactive use allows the user to interact with a cluster node directly, running software and receiving output in real time. In batch usage, work is submitted to the cluster and executes when the needed resources are available, with optional e-mail notifications to the user when the job starts and ends.
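As a minimal sketch of the batch workflow under Slurm, a job script might look like the following. The job name, resource values, program name, and e-mail address are all illustrative; adjust them to your own work.

```shell
#!/bin/bash
#SBATCH --job-name=example          # name shown in the queue (illustrative)
#SBATCH --ntasks=1                  # number of tasks (processes) to run
#SBATCH --time=01:00:00             # wall-clock time limit
#SBATCH --mail-type=BEGIN,END       # optional e-mail when the job starts and ends
#SBATCH --mail-user=you@miamioh.edu # address for notifications (illustrative)

# Commands below run on a compute node once resources are available.
./my_program
```

Submit the script with `sbatch myjob.sh`. For interactive use, `srun --pty bash` requests a shell on a compute node instead of running through a script.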

We recently switched the scheduler and resource manager from Moab/Torque to Slurm and have prepared side-by-side comparison charts for translating between Torque and Slurm. We also provide wrapper scripts for Slurm commands that allow users to continue using basic Moab/Torque commands and scripts during a transitional period.
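As a small sample of the kind of translation the comparison charts cover, some common Torque commands and their standard Slurm equivalents are:

```shell
# Common Torque commands and their Slurm equivalents (illustrative subset):
#   qsub job.sh    ->  sbatch job.sh     # submit a batch job
#   qstat          ->  squeue            # show queued and running jobs
#   qdel <jobid>   ->  scancel <jobid>   # cancel a job
#   qstat -q       ->  sinfo             # show queue/partition status
```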

More details can be found at:


Governance is provided through the HPC advisory committee, a group of faculty and university administrators. For information about policies, please contact Research Computing Support.