Low latency execution guarantee under uncertainty in serverless platforms

M. Reza HoseinyFarahabady*, Javid Taheri, Albert Y. Zomaya, Zahir Tari

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Serverless computing recently emerged as a new run-time paradigm that disentangles the client from the burden of provisioning physical computing resources, leaving such difficulty on the service provider's side. However, an unsolved problem in such an environment is how to cope with the challenges of executing several co-running applications while fulfilling the Quality of Service (QoS) level requested by all application owners. In practice, developing an efficient mechanism to reach the requested performance level (such as p-99 latency and throughput) is limited by the controller's awareness of the dynamics of the underlying platform (resource availability, performance interference among consolidated workloads, etc.). In this paper, we develop an adaptive feedback controller that copes with the buffer instability of serverless platforms when several collocated applications run in a shared environment. The goal is to support low-latency execution by managing the arrival event rate of each application when shared resource contention causes significant throughput degradation among workloads with different priorities. The key component of the proposed architecture is a continuous management of the server-side internal buffers of each application, which provides a low-latency feedback control mechanism based on each application's requested QoS level (e.g., buffer information) and the throughput of the worker nodes. The empirical results confirm the response stability of high-priority workloads under dynamic conditions caused by low-priority applications. We evaluate the performance of the proposed solution with respect to the response time and the QoS violation rate of high-priority applications on a serverless platform with four worker nodes set up in our in-house virtualized cluster. We compare the proposed architecture against the default resource management policy of Apache OpenWhisk, which is extensively used in commercial serverless platforms.
The results show that our approach incurs very low overhead (less than 0.7%) while improving the p-99 latency of high-priority applications by 64%, on average, in the presence of dynamic high-traffic conditions.
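The abstract describes a feedback loop that throttles each application's admitted event rate using the occupancy of its server-side buffer and the observed worker throughput. The paper's actual controller and its OpenWhisk integration are not reproduced here; the following is a minimal, purely illustrative sketch of that general idea (a proportional controller), in which every class name, parameter, and gain value is an assumption rather than the authors' design:

```python
# Hypothetical sketch of buffer-based feedback admission control.
# All names, parameters, and the proportional law are illustrative
# assumptions; they do not reproduce the paper's controller.

class AdmissionController:
    def __init__(self, target_occupancy, gain=0.5,
                 min_rate=1.0, max_rate=1000.0):
        self.target = target_occupancy  # desired buffer occupancy (events)
        self.gain = gain                # proportional gain of the loop
        self.min_rate = min_rate        # floor on admitted events/sec
        self.max_rate = max_rate        # cap on admitted events/sec
        self.rate = max_rate            # start with the gate wide open

    def update(self, buffer_occupancy, throughput):
        # Error signal: how far the server-side buffer sits above target.
        error = buffer_occupancy - self.target
        # Track worker throughput, but back off proportionally to the
        # buffer excess so a congested buffer drains instead of growing.
        self.rate = throughput - self.gain * error
        # Clamp to the allowed admission-rate range.
        self.rate = max(self.min_rate, min(self.max_rate, self.rate))
        return self.rate
```

In this sketch, a high-priority application could be given a larger `target_occupancy` (or a smaller `gain`) than a low-priority one, so that under contention the low-priority arrival rates are cut first, which matches the behavior the abstract reports for high-priority workloads.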

Original language: English
Title of host publication: Parallel and Distributed Computing, Applications and Technologies: 22nd International Conference, PDCAT 2021, Proceedings
Editors: Hong Shen, Yingpeng Sang, Yong Zhang, Nong Xiao, Hamid R. Arabnia, Geoffrey Fox, Ajay Gupta, Manu Malek
Publisher: Springer Cham
Number of pages: 12
ISBN (Electronic): 9783030967727
ISBN (Print): 9783030967710
Publication status: Published - 16 Mar 2022
Externally published: Yes
Event: 22nd International Conference on Parallel and Distributed Computing, Applications and Technologies 2021 - Guangzhou, China
Duration: 17 Dec 2021 - 19 Dec 2021

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13148 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: 22nd International Conference on Parallel and Distributed Computing, Applications and Technologies 2021
Abbreviated title: PDCAT 2021


Keywords

  • Dynamic controller of computer systems
  • Quality of Service (QoS)
  • Serverless computing
  • Virtualized platforms

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science

