Alleviating I/O Interference in Virtualized Systems with VM-Aware Persistency Control

Research output: Contribution to journal › Article › peer-review

Abstract

Consolidating multiple servers into a single physical machine is now commonplace in cloud infrastructures. Virtualized systems often place the virtual disks of multiple virtual machines (VMs) on the same underlying storage device while striving to guarantee each VM's performance service level objective (SLO). Unfortunately, sync operations issued by one VM can make it hard to satisfy the performance SLO by disturbing the I/O activities of other VMs. In this paper, we experimentally show that the disk cache flush operation incurs significant I/O interference among VMs, and revisit the internal architecture and flush mechanism of flash memory-based SSDs. We then present vFLUSH, a novel VM-aware flush mechanism that supports per-VM persistency control for the disk cache flush operation. We also discuss the long-tail latency issue in vFLUSH and an efficient scheme for mitigating it. Our evaluation with various micro- and macro-benchmarks shows that vFLUSH reduces the average latency of disk cache flush operations by up to 58.5%, thereby improving throughput by up to 1.93×. The scheme for alleviating the long-tail latency problem, applied on top of vFLUSH, reduces tail latency by up to 75.9% at the cost of a modest 2.9-7.2% throughput degradation.
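The interference scenario described above can be made concrete with a small sketch. The following Python snippet (illustrative only, not taken from the paper) shows how an ordinary durable write in one VM's guest application ends in an fsync(), which propagates down the virtualized I/O stack and ultimately triggers a cache flush on the shared physical device, persisting pending writes from all co-located VMs' virtual disks:

```python
import os
import tempfile

def durable_write(path: str, data: bytes) -> None:
    """Write data and force it to stable storage, as a database or
    journaling file system inside a VM would."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.write(fd, data)
        # fsync() travels down the guest kernel, the virtual disk layer,
        # and the host block layer, eventually issuing a device-level
        # flush (e.g., SATA FLUSH CACHE or NVMe Flush). On a shared SSD
        # this drains the write cache for *every* VM's pending data --
        # the cross-VM interference that vFLUSH targets.
        os.fsync(fd)
    finally:
        os.close(fd)

path = os.path.join(tempfile.mkdtemp(), "journal.log")
durable_write(path, b"commit-record")
```

Because the conventional flush command carries no VM identity, the device cannot restrict the flush to the issuing VM's data; a VM-aware (per-VM) flush, as proposed here, narrows the persistence scope accordingly.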

Original language: English
Article number: 9460993
Pages (from-to): 89263-89275
Number of pages: 13
Journal: IEEE Access
Volume: 9
State: Published - 2021

Keywords

  • Disk cache flush operation
  • map table
  • virtual machine
  • write buffer
