About
The University of Cincinnati’s Advanced Research Computing (ARC)
center offers a readily accessible hybrid CPU/GPU computing cluster,
supporting the next generation of computational and data science researchers
while developing a highly competitive workforce.
We will partner with you to utilize the core suite of HPC services and
resources. With the ARC resources, researchers can advance theoretical
knowledge and expand the realm of discovery, generating leading-edge
research and applications suitable for innovation and commercialization in
line with UC’s Next Lives Here strategic direction.
This sustainable high-performance computing (HPC) infrastructure, together with
technical support services, accelerates time to discovery and enables
sophisticated, increasingly realistic modeling, simulation, and data analysis.
It will also help bridge users to the local, regional, and national HPC
ecosystem.
Impact
ARC resources support all disciplines, including healthcare, sciences,
engineering and social sciences/humanities, in their quest to harness big
data via analytics, modeling and simulation, visualization, artificial
intelligence and machine learning.
Partners
The ARC center is a collaboration between the Office of Research,
University of Cincinnati faculty, the Office of Information Technologies
(UCIT) technical and research services teams, the College of Engineering and
Applied Sciences (CEAS) technical staff, Indiana University Information
Technology Services' Chief HPC Systems Architect, and XSEDE
Capabilities and Resource Integration (XCRI) HPC Systems Administration
staff. This partnership is made possible as part of a long-term commitment
by UC to create an environment to advance the University of
Cincinnati’s leadership position in innovative research and impact.
Faculty Advisory Committee – ARC Pilot
- Chair, CEAS, Aerospace Engineering
- Member, CoM, Environmental Health
- Ad-hoc member, Executive Director, Digital Futures Resilience Platform
- Ad-hoc member, Assoc. Director, IT@UC Research Computing Services
- HPC System Administrator and Facilitator
- Linux System Administrator
HPC Cluster Available Hardware/Software
ARC is equipped with 50 teraFLOPS of peak CPU performance and 2 NVIDIA Tesla
V100 GPU nodes (224 teraFLOPS of deep learning peak performance), connected
with a high-performance 100 Gb/s Omni-Path (OPA) interconnect, a significant
step forward in both bandwidth and latency.
Hardware
- 50 teraFLOPS of peak CPU performance
- Intel Xeon Gold 6148 (2.4 GHz, 20 cores/40 threads), 192 GB RAM per node
- Plans to increase it to 140 teraFLOPS peak CPU performance in
the next year
- 224 teraFLOPS deep learning peak performance
- NVIDIA Tesla V100 32 GB passive GPUs
- Plans to increase it to 896 teraFLOPS deep learning peak
performance in the next year
- ZFS Storage Node – 96 TB raw storage
- Omni-Path HPC networking infrastructure
- Maximum Omni-Path bandwidth between nodes: 100 Gb/s
Software
- OpenHPC environment
- Warewulf cluster provisioning system, with jobs managed by the Slurm workload manager
- Singularity containers
- Development tools, including compilers, OpenMP and MPI (Open MPI) libraries
for parallel code development, debuggers, and open-source AI tools (see the
example sketch after this list)
- FLEXlm license manager being installed so that individual researchers can
easily maintain and use their licensed software
- User login is based on UC Active Directory (UC/AD), which enables shared
user groups and easier access
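The toolchain above supports standard MPI and OpenMP workflows. As an illustration only (not ARC documentation), the sketch below shows a minimal hybrid MPI/OpenMP "hello world" in C; the compiler wrapper name (mpicc) and the suggestion to launch it through Slurm are assumptions about a typical OpenHPC setup, and actual module, partition, and account names should come from the ARC team.

```c
/*
 * hello_hybrid.c - minimal MPI + OpenMP sanity check.
 * Sketch only: assumes an MPI compiler wrapper (e.g., mpicc) and
 * OpenMP support, consistent with ARC's listed development tools.
 *
 * Build: mpicc -fopenmp -O2 hello_hybrid.c -o hello_hybrid
 * Run:   normally launched through Slurm (sbatch/srun); partition
 *        and account names depend on the actual cluster configuration.
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's ID    */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total MPI processes  */

    /* Each MPI rank runs an OpenMP thread team on its node. */
    #pragma omp parallel
    printf("MPI rank %d of %d, OpenMP thread %d of %d\n",
           rank, size, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}
```

In a typical Slurm workflow, a batch script would request nodes, tasks, and threads with #SBATCH directives and launch the binary with srun; contact the ARC team for the exact settings on this cluster.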
Reports
- ARC Cluster Report
- ARC 6-month Status Report, June 2019
Getting Access
ARC pilot HPC Cluster access (CPUs, GPUs, basic storage)
Currently, access to and usage of the cluster are provided at no cost on a first-come, first-served basis. Fair-share scheduling is used to distribute resources equitably among users. If you have a specific need or deadline, please contact us and we will work with you to get your jobs done.
Faculty can immediately gain access to the HPC cluster resources by filling out the ARC Computing Resources Request (you must login with your UC Credentials to access the form).
Cost: No cost
System Wide Downtime: All ARC HPC systems are taken down for regular maintenance on the second Tuesday of every month.
Contribute nodes to the cluster – priority access to your nodes plus access to additional shared resources
Faculty can use their HPC and research computing funding to contribute nodes to the central cluster. Priority access is given to the owner of the nodes; however, when not in use by the owner, the nodes can be shared with others. This is a good option for faculty who need full access to their nodes periodically and can take advantage of additional shared resources in the cluster. Using the central resource maximizes the amount of compute a faculty member can purchase, because the HPC infrastructure (networking, racks, head/management nodes, support) is provided at no cost. Contact: arc_info@uc.edu
Cost: Nodes contributed to the cluster must be consistent with current cluster hardware configurations. The ARC team can work with you to review your needs and provide an estimate for your purchase.
How to Cite ARC
Thanks for choosing ARC! Please include the following acknowledgment in your publications:
“This research was supported in part through research cyberinfrastructure resources and services provided by the Advanced Research Computing (ARC) center at the University of Cincinnati, Cincinnati, OH, USA.”