Improving the management efficiency of GPU workloads in data centers through GPU virtualization

    Research output: Contribution to journal › Article



    Graphics processing units (GPUs) are currently used in data centers to reduce the execution time of compute-intensive applications. However, the use of GPUs presents several side effects, such as increased acquisition costs and larger space requirements. Furthermore, GPUs consume a non-negligible amount of energy even while idle. Additionally, GPU utilization is usually low for most applications. Similarly to the use of virtual machines, using virtual GPUs may address the concerns associated with these devices. In this regard, the remote GPU virtualization mechanism could be leveraged to share the GPUs present in the computing facility among the nodes of the cluster. This would increase overall GPU utilization, thus reducing the negative impact of the increased costs mentioned before. Reducing the number of GPUs installed in the cluster could also be possible. However, in the same way as job schedulers map GPU resources to applications, virtual GPUs should also be scheduled before job execution. Nevertheless, current job schedulers are not able to deal with virtual GPUs. In this paper, we analyze the performance attained by a cluster using the remote Compute Unified Device Architecture (rCUDA) middleware and a modified version of the Slurm scheduler, which is now able to assign remote GPUs to jobs. Results show that cluster throughput, measured as jobs completed per time unit, is doubled while total energy consumption is reduced by up to 40%. GPU utilization is also increased.
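    As a minimal sketch of the mechanism the abstract describes, an unmodified CUDA application can be pointed at remote GPUs through the environment variables documented by the rCUDA project. The variable names below follow the rCUDA user guide; the server addresses and the application name are placeholders, and this is an illustration rather than the exact configuration used in the paper.

    ```shell
    #!/bin/sh
    # Sketch: expose two remote GPUs to a CUDA application via rCUDA.
    # The rCUDA client library (a drop-in replacement for libcudart)
    # reads these variables to locate the remote GPU servers.
    export RCUDA_DEVICE_COUNT=2              # number of remote GPUs visible to the app
    export RCUDA_DEVICE_0=192.168.0.10:0     # first remote GPU: <server address>:<GPU index>
    export RCUDA_DEVICE_1=192.168.0.11:0     # second remote GPU, hosted on another node
    # ./my_cuda_app                          # placeholder: unmodified CUDA binary linked
                                             # against rCUDA's libcudart
    ```

    In the paper's setup, the modified Slurm scheduler would set variables of this kind on behalf of the job, so that remote GPUs are assigned before execution just as local ones are.
    
    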

    Documents

    • Improving the management efficiency of GPU workloads in data centers through GPU virtualization

      Rights statement: Copyright 2019 Wiley. This work is made available online in accordance with the publisher’s policies. Please refer to any applicable terms of use of the publisher.

      Accepted author manuscript, 1 MB, PDF document

      Embargo ends: 10/04/2020


    Original language: English
    Article number: e5275
    Pages (from-to): 1-16
    Journal: Concurrency Computation
    Journal publication date: 10 Apr 2019
    Early online date: 10 Apr 2019
    DOIs
    Publication status: Early online date - 10 Apr 2019

      Research areas

    • CUDA, data centers, GPU, InfiniBand, rCUDA, Slurm

    ID: 168413823