
ACM Transactions on Storage (TOS)

NEWS

LATEST ISSUE

Volume 15, Issue 1, March 2019 is now available.


We would like to thank our colleagues who served as reviewers for ACM TOS between 2016 and 2018. Through this list, we express our deepest gratitude for the time and effort you devoted to providing valuable comments. Thank you!

CALL FOR PAPERS

Special Issue on Computational Storage

Since the first hard disk drive (HDD) was introduced in 1956, storage devices have remained “dumb” for more than 60 years. However, the ever-growing demand for big data processing and recent advances in storage technology are reshaping the traditional CPU-centric computing paradigm. Many studies show that the energy consumed by data movement is starting to exceed the energy consumed by computation. In particular, the advent of high-performance solid-state drives (SSDs) based on non-volatile memory (e.g., NAND flash memory, 3D XPoint) opens up new opportunities for a storage-centric computing paradigm.

Read more.

Forthcoming Articles
CORES: Towards Scan-Optimized Columnar Storage for Nested Records

Because records are transformed in the storage layer, the unnecessary processing costs caused by unwanted fields or unsatisfied rows can be heavy for complex schemas, wasting significant computational resources in large-scale analytical workloads. We present CORES (Column-Oriented Regeneration Embedding Scheme) to push highly selective filters down into column-based storage, where each filter consists of several filtering conditions on a field. By applying highly selective filters to the column scan in storage, we demonstrate that both the I/O and the deserialization cost can be significantly reduced by introducing a fine-grained, bitset-based composition. We also generalize this technique with two pairwise operations, rollup and drilldown, so that a series of conjunctive filters can effectively deliver their payloads in a nested schema. The proposed methods are implemented on an open-source platform. For practical purposes, we highlight how to construct a nested column storage effectively and how to drive multiple filters efficiently with a cost model. We apply this design to the nested relational model, especially when hierarchical entities are frequently required by ad hoc queries. The experiments, covering a real workload and a modified TPC-H benchmark, demonstrate that CORES improves performance by 0.7X to 26.9X over state-of-the-art platforms on scan-intensive workloads.
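To make the bitset-based filter composition concrete, here is a minimal, purely illustrative Python sketch (not the CORES implementation; the column names, predicates, and data are hypothetical) of how per-column filters can be combined as bitsets so that only surviving rows are deserialized:

```python
# Hypothetical sketch: per-column filters produce bitsets, conjunctive
# filters are composed with bitwise AND, and only surviving rows are
# deserialized. Column names, predicates, and data are illustrative.

def column_filter_bitset(column_values, predicate):
    """Scan one column and mark the rows that satisfy the filter."""
    bits = 0
    for i, value in enumerate(column_values):
        if predicate(value):
            bits |= 1 << i
    return bits

def materialize(columns, bits):
    """Deserialize only the rows whose bit is set in the combined bitset."""
    n_rows = len(next(iter(columns.values())))
    return [{name: col[i] for name, col in columns.items()}
            for i in range(n_rows) if (bits >> i) & 1]

# Toy columnar data with two pushed-down filters.
columns = {
    "price":    [5, 120, 43, 999, 7],
    "category": ["a", "b", "a", "a", "c"],
}
combined = (column_filter_bitset(columns["price"], lambda p: p > 40)
            & column_filter_bitset(columns["category"], lambda c: c == "a"))
print(materialize(columns, combined))   # rows 2 and 3 survive
```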

ZoneTier: A Zone-based Storage Tiering and Caching Co-Design to Integrate SSDs with SMR Drives

Integrating solid-state drives (SSDs) and host-aware shingled magnetic recording (HA-SMR) drives can potentially build a cost-effective, high-performance storage system. However, existing SSD tiering and caching designs in such a hybrid system are not fully matched to the intrinsic properties of HA-SMR drives, because they do not consider how to handle non-sequential writes (NSWs). We propose ZoneTier, a zone-based storage tiering and caching co-design, to effectively control all NSWs by leveraging the host-aware property of HA-SMR drives. ZoneTier exploits the real-time data layout of SMR zones to optimize zone placement, reshapes NSWs generated by zone demotions into the sequential writes SMR drives prefer, and transforms the unavoidable NSWs into cleaning-friendly write traffic for SMR zones. ZoneTier can be easily extended to host-managed SMR drives using a proactive cleaning policy. We implement a prototype of ZoneTier with user-space data management algorithms and real SSD and HA-SMR drives, manipulated through the functions provided by libzbc and libaio. Our experiments show that ZoneTier reduces zone relocation overhead by 29.41% on average, shortens the performance recovery time of HA-SMR drives after cleaning by up to 33.37%, and improves performance by up to 32.31% compared with existing hybrid storage designs.
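The following Python sketch illustrates, under simplified assumptions, the general idea of staging out-of-order writes per zone and replaying them sequentially on demotion; it is not ZoneTier's actual algorithm, and it does not use the real libzbc/libaio APIs:

```python
# Hypothetical sketch of reshaping non-sequential writes (NSWs) into
# sequential writes before demoting a zone to an HA-SMR drive.

from collections import defaultdict

class ZoneBuffer:
    """Stages writes per zone (e.g., in the SSD tier) so a later demotion
    can replay them to the SMR zone strictly in ascending LBA order."""

    def __init__(self):
        self.pending = defaultdict(dict)   # zone_id -> {lba: data}

    def write(self, zone_id, lba, data):
        # Out-of-order writes are absorbed here instead of hitting the SMR zone.
        self.pending[zone_id][lba] = data

    def demote_zone(self, zone_id, smr_write):
        # Replay buffered blocks in LBA order: the SMR drive now sees a
        # sequential write stream that needs no later cleaning.
        for lba in sorted(self.pending[zone_id]):
            smr_write(zone_id, lba, self.pending[zone_id][lba])
        self.pending.pop(zone_id, None)

buf = ZoneBuffer()
buf.write(7, 300, b"c"); buf.write(7, 100, b"a"); buf.write(7, 200, b"b")
buf.demote_zone(7, lambda z, lba, d: print(f"zone {z} lba {lba} <- {d!r}"))
```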

Mitigating Synchronous I/O Overhead in File Systems on Open-Channel SSDs

Synchronous I/O has long been a design challenge in file systems. Although open-channel SSDs provide better performance and endurance to file systems, they still suffer from synchronous I/O due to amplified writes and poorer hot/cold data grouping. The reason lies in the conflicting design choices between flash write and read/erase operations: while fine-grained logging improves write performance and endurance, it hurts indexing and data-grouping efficiency for read and erase operations. In this paper, we propose a flash-friendly data layout that introduces a built-in persistent staging layer to provide balanced read, write, and garbage collection performance. Based on this layout, we design a new flash file system named StageFS, which decouples content updates from structure updates. Content updates are logically logged to the staging layer in a persistence-efficient way, which achieves better write performance and lower write amplification. The updated content is then reorganized into the normal data area as a structure update, with improved hot/cold grouping and page-level indexing, which is friendlier to read and garbage collection operations. Evaluation results show that, compared with F2FS, StageFS improves performance by up to 211.4% and achieves low garbage collection overhead for workloads with frequent synchronization.
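A minimal sketch of the decoupling idea, assuming a simple key-value view of file pages (illustrative only, not the StageFS code): synchronous writes append to a staging log, and a later reorganization step installs the data into a page-indexed main area.

```python
# Hypothetical sketch of decoupling content and structure updates with a
# persistent staging layer; all names and structures are illustrative.

class StagedStore:
    def __init__(self):
        self.staging_log = []      # persistence-efficient append-only log
        self.page_index = {}       # (file_id, page_no) -> data, read-friendly

    def sync_write(self, file_id, page_no, data):
        # Content update: one log append makes the write durable, keeping
        # fsync latency and write amplification low.
        self.staging_log.append((file_id, page_no, data))

    def reorganize(self):
        # Structure update, done lazily: fold staged content into
        # page-level indexed storage with better hot/cold grouping.
        for file_id, page_no, data in self.staging_log:
            self.page_index[(file_id, page_no)] = data
        self.staging_log.clear()

    def read(self, file_id, page_no):
        # Recent staged data wins over the reorganized copy.
        for fid, pno, data in reversed(self.staging_log):
            if (fid, pno) == (file_id, page_no):
                return data
        return self.page_index.get((file_id, page_no))

fs = StagedStore()
fs.sync_write("f1", 0, b"hello")   # durable after one log append
fs.reorganize()                    # later structure update
print(fs.read("f1", 0))            # b'hello'
```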

Level Hashing: A High-performance and Flexible-resizing Persistent Hashing Index Structure

Non-volatile memory (NVM) as persistent memory is expected to substitute for or complement DRAM in the memory hierarchy, thanks to its non-volatility, high density, and near-zero standby power. However, due to the requirement of data consistency and the hardware limitations of NVM, traditional indexing techniques originally designed for DRAM become inefficient in persistent memory. To efficiently index data in persistent memory, this paper proposes a write-optimized, high-performance hashing index scheme, called level hashing, with a low-overhead consistency guarantee and cost-efficient resizing. Level hashing provides a sharing-based two-level hash table, which achieves constant-scale worst-case time complexity for search, insertion, deletion, and update, and rarely incurs extra NVM writes. To resize this hash table cost-efficiently, level hashing leverages an in-place resizing scheme that rehashes only 1/3 of the buckets, instead of the entire table, to expand a hash table, and 2/3 of the buckets to shrink one, thus significantly reducing the number of rehashed buckets and improving resizing performance. Experimental results demonstrate that level hashing achieves 1.4×–3.0× speedup for insertions, 1.2×–2.1× speedup for updates, 4.3× speedup for expanding, and 1.4× speedup for shrinking a hash table, while maintaining high search and deletion performance, compared with state-of-the-art hashing schemes.
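The sketch below illustrates the two-level structure and the expansion that rehashes only the bottom level (one third of all buckets) in plain Python. It is a much-simplified illustration under our own assumptions, omitting the slot layout, NVM write ordering, and consistency machinery of the actual scheme:

```python
# Simplified, illustrative two-level hash table: a top level of N buckets,
# a shared bottom level of N/2 buckets, two candidate buckets per key at
# each level, and an expansion that rehashes only the bottom level while
# the old top level is reused untouched as the new bottom level.

class LevelHashSketch:
    SLOTS = 4                                    # entries per bucket

    def __init__(self, n_top=8):
        self.top = [dict() for _ in range(n_top)]
        self.bottom = [dict() for _ in range(n_top // 2)]

    def _buckets(self, key):
        h1, h2 = hash(("h1", key)), hash(("h2", key))
        return (self.top[h1 % len(self.top)], self.top[h2 % len(self.top)],
                self.bottom[h1 % len(self.bottom)],
                self.bottom[h2 % len(self.bottom)])

    def insert(self, key, value):
        for bucket in self._buckets(key):
            if key in bucket or len(bucket) < self.SLOTS:
                bucket[key] = value
                return
        self._expand()                           # all four candidates full
        self.insert(key, value)

    def get(self, key):
        for bucket in self._buckets(key):
            if key in bucket:
                return bucket[key]
        return None

    def _expand(self):
        # Only the old bottom level (1/3 of the buckets) is rehashed;
        # the old top level becomes the new bottom level as-is.
        old_bottom = self.bottom
        self.top, self.bottom = [dict() for _ in range(2 * len(self.top))], self.top
        for bucket in old_bottom:
            for k, v in bucket.items():
                self.insert(k, v)

lh = LevelHashSketch()
for i in range(100):
    lh.insert(f"key{i}", i)                      # triggers expansions
print(lh.get("key42"), len(lh.top), len(lh.bottom))
```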

SolarDB: Towards a Shared-Everything Database on Distributed Log-Structured Storage

Efficient transaction processing over large databases is a key requirement for many mission-critical applications. Although modern databases have achieved good performance through horizontal partitioning, their performance deteriorates when cross-partition distributed transactions have to be executed. This paper presents Solar, a distributed relational database system that has been successfully tested at a large commercial bank. The key features of Solar include: 1) a shared-everything architecture based on a two-layer log-structured merge-tree; 2) a new concurrency control algorithm that works with the log-structured storage and ensures efficient, non-blocking transaction processing even when the storage layer is compacting data among nodes in the background; and 3) fine-grained data access that effectively minimizes and balances network communication within the cluster. In our empirical evaluations on TPC-C, SmallBank, and a real-world workload, Solar outperforms existing shared-nothing systems by up to 50x when close to 5% or more of the transactions are distributed.
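As a rough illustration of a two-layer log-structured store of the kind described above (not SolarDB's engine; names and structures are hypothetical), writes land in a delta layer, reads consult the delta before the base layer, and a compaction step merges the two:

```python
# Minimal illustrative two-layer log-structured merge store.

class TwoLayerLSM:
    def __init__(self):
        self.delta = {}   # recent writes, memory-resident layer
        self.base = {}    # bulk storage layer (sorted runs in a real system)

    def put(self, key, value):
        self.delta[key] = value          # transaction writes land here

    def get(self, key):
        if key in self.delta:            # newest version wins
            return self.delta[key]
        return self.base.get(key)

    def compact(self):
        # In a real system this runs in the background without blocking
        # transactions; here it is simply a merge of the two layers.
        self.base.update(self.delta)
        self.delta.clear()

db = TwoLayerLSM()
db.put("acct:42", 100)
print(db.get("acct:42"))   # 100, served from the delta layer
db.compact()
print(db.get("acct:42"))   # 100, now served from the base layer
```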

Characterizing output behaviors of a production supercomputer: analysis and implications

This paper studies the output behavior of the Titan supercomputer and its Lustre file stores. We introduce a statistical benchmarking methodology that collects and combines samples across time and settings: 1) to measure the performance impact of parameter choices under interference in the production setting; and 2) to derive the performance of individual stages and components in the multi-stage write pipelines, and their variation over time. We find that Titan's I/O system is highly variable, with two major implications. First, stragglers lessen the benefit of coupled I/O parallelism. I/O parallelism is most effective when the application distributes the I/O load so that each target stores files for multiple clients and each client writes files on multiple targets, in a balanced way with minimal contention. Second, our results suggest that the potential benefit of dynamic adaptation is limited. In particular, it is not fruitful to attempt to identify "good locations" in the machine or in the file system: component performance is driven by transient load conditions, and past performance is not a useful predictor of future performance. For example, we do not observe predictable diurnal load patterns.
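A toy model, with made-up straggler probability and slowdown factors, can illustrate the first implication: a striped write completes only when its slowest target finishes, so the observed speedup falls well short of the ideal as the number of targets grows. This is not the paper's methodology or data, only a sketch of the effect:

```python
# Illustrative model of stragglers limiting coupled I/O parallelism.
# The straggler probability and slowdown factor are arbitrary.

import random

def coupled_write_speedup(n_targets, straggler_prob=0.05, slowdown=10.0):
    # Each target writes 1/n of the data; an occasional straggler runs
    # `slowdown` times slower than a healthy target.
    per_target_time = [(1.0 / n_targets) *
                       (slowdown if random.random() < straggler_prob else 1.0)
                       for _ in range(n_targets)]
    return 1.0 / max(per_target_time)    # ideal speedup would be n_targets

random.seed(0)
for n in (1, 4, 16, 64):
    mean = sum(coupled_write_speedup(n) for _ in range(2000)) / 2000
    print(f"{n:3d} targets: observed speedup ~{mean:5.1f}x (ideal {n}x)")
```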

Introduction to the Special Section on OSDI'18
