Improving DRAM Energy-efficiency

Lev Mukhanov, Georgios Karakonstantis*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

The rapid growth of the IoT has triggered an exponential increase in the data transferred to and from Cloud and emerging Edge servers. As a result, recent projections forecast that the DRAM subsystem will soon be responsible for more than 40% of the overall power consumption within most servers [1]. One of the reasons for the high energy consumed by DRAM devices is the use of pessimistic operating parameters, such as voltage, refresh rate and timing parameters, set by the vendors. Vendors use these conservative parameters to guard against possible failures induced by charge leakage and cell-to-cell interference. Moreover, such failures prevent further scaling of the size of DRAM cells. This reality has led researchers to question whether such pessimistic parameters can be relaxed and whether the induced failures can be handled at the hardware or software level. In this chapter, we discuss the challenges related to DRAM reliability and present a systematic study on exceeding the conservative DRAM operating margins to improve the energy efficiency of Edge servers. We demonstrate a machine learning-based technique that enables us to scale down the DRAM operating parameters, together with a hardware/software stack that handles the induced failures.
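To make the approach concrete, the sketch below illustrates one way a machine learning model could be used to predict whether a relaxed DRAM operating point is failure-free. This is a minimal, hypothetical example and not the chapter's actual implementation: the features (supply voltage, refresh interval, temperature), the synthetic labels, and the random-forest choice are all assumptions made purely for illustration.

```python
# Minimal sketch (NOT the chapter's implementation): a classifier that
# predicts whether a DRAM module runs error-free at a relaxed
# voltage/refresh setting, trained on hypothetical characterization data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features per operating point: supply voltage (V),
# refresh interval (ms), and temperature (C). Labels mark whether the
# point was observed to be failure-free during offline characterization.
n = 2000
voltage = rng.uniform(1.05, 1.35, n)     # assumed DDR-like voltage range
refresh_ms = rng.uniform(64, 512, n)     # relaxed refresh intervals
temp_c = rng.uniform(25, 85, n)
X = np.column_stack([voltage, refresh_ms, temp_c])

# Synthetic ground truth: lower voltage, longer refresh intervals and
# higher temperature all make charge-leakage failures more likely.
risk = (1.35 - voltage) * 3 + refresh_ms / 512 + temp_c / 85
y = (risk + rng.normal(0, 0.15, n) < 1.6).astype(int)  # 1 = safe point

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")

# At run time, one would select the most aggressive setting the model
# predicts safe, leaving residual failures to the HW/SW handling stack.
```

In such a scheme the classifier only narrows the search for safe operating points; any misprediction would surface as a DRAM failure, which is why the chapter pairs parameter scaling with a hardware/software stack that catches the induced errors.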

Original language: English
Title of host publication: Computing at the EDGE. New challenges for service provision
Publisher: Springer International Publishing AG
Pages: 123-140
Number of pages: 18
ISBN (Electronic): 9783030745363
ISBN (Print): 9783030745356
DOIs
Publication status: Published - 20 Sep 2022

Bibliographical note

Publisher Copyright:
© Springer Nature Switzerland AG 2022.

ASJC Scopus subject areas

  • Engineering (all)
