Increasing the Performance of Data Centers by Combining Remote GPU Virtualization with Slurm

Sergio Iserte, Javier Prades, Carlos Reaño, Federico Silla

Research output: Chapter in Book/Report/Conference proceeding (Chapter)

23 Citations (Scopus)

Abstract

© 2016 IEEE. The use of Graphics Processing Units (GPUs) presents several side effects, such as increased acquisition costs and larger space requirements. Furthermore, GPUs consume a non-negligible amount of energy even while idle. Additionally, GPU utilization is usually low for most applications. Using the virtual GPUs provided by the remote GPU virtualization mechanism may address the concerns associated with the use of these devices. However, in the same way that workload managers map physical GPU resources to applications, virtual GPUs should also be scheduled before job execution. Nevertheless, current workload managers are not able to deal with virtual GPUs. In this paper we analyze the performance attained by a cluster using the rCUDA remote GPU virtualization middleware and a modified version of the Slurm workload manager, which is now able to map remote virtual GPUs to jobs. Results show that cluster throughput is doubled while total energy consumption is reduced by up to 40%. GPU utilization is also increased.
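A key property of remote GPU virtualization is that CUDA applications need no source changes: the rCUDA middleware intercepts CUDA runtime calls and forwards them to GPUs installed in other nodes. The minimal CUDA sketch below is not taken from the paper; it simply enumerates the devices visible to a job, which under rCUDA would be the remote virtual GPUs assigned by the modified Slurm scheduler (typically exposed through rCUDA configuration variables such as RCUDA_DEVICE_COUNT, mentioned here only for illustration).

// Minimal sketch (illustrative, not from the paper): an unmodified CUDA
// program that enumerates the GPUs it can see. With rCUDA, runtime calls
// are forwarded over the network, so the devices reported here may be
// virtual GPUs hosted on remote cluster nodes.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    std::printf("Visible GPUs: %d\n", count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // With remote GPU virtualization, this device may physically
        // reside in another node of the cluster.
        std::printf("  Device %d: %s\n", i, prop.name);
    }
    return 0;
}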
Original language: English
Title of host publication: Proceedings - 2016 16th IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing, CCGrid 2016
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 98-101
Number of pages: 4
ISBN (Print): 9781509024520
DOIs
Publication status: Published - 18 Jul 2016

Publication series

Name: Proceedings - 2016 16th IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing, CCGrid 2016

Keywords

  • CUDA
  • GPGPU
  • HPC
  • InfiniBand
  • Slurm
  • data centers
  • rCUDA
  • virtualization
