
SCIENCE CHINA Information Sciences, Volume 61, Issue 5: 050104(2018) https://doi.org/10.1007/s11432-017-9401-y

IO dependent SSD cache allocation for elastic Hadoop applications

  • Received: Nov 15, 2017
  • Accepted: Mar 29, 2018
  • Published: Apr 18, 2018

Abstract

Elastic Hadoop applications consisting of multiple virtual machines (VMs) are widely used to support big data analysis and processing. In this scenario, flash-based solid state drives (SSDs) are usually deployed on hypervisors and used as caches to improve IO performance. However, existing SSD caching schemes are mostly VM-centric, focusing on the low-level IO performance metrics of individual VMs. They may not optimize the performance of elastic Hadoop applications, i.e., the job completion time (JCT), because the VMs inside an application differ in importance even when they exhibit similar low-level IO patterns. Considering the IO dependency among VMs and inferring their importance, which we regard as application-centric metrics, can potentially improve performance further. We present an IO dependency based requirement model to characterize the SSD cache requirement of each VM inside an elastic Hadoop application, and then use it in a genetic algorithm (GA) based approach to compute nearly optimal VM weights for allocating per-VM SSD cache space and the capacity of I/O operations per second (IOPS). Furthermore, we present a tool, AC-SSD, based on this approach and introduce closed-loop adaptation to react to continuously changing workloads. The evaluation shows that, compared with a shared cache, AC-SSD reduces the JCT by up to 39% for IO-sensitive workloads, by up to 29% for continuously changing workloads, and by over 12.5% across different data scales.
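To make the allocation idea concrete, the following is a minimal Python sketch of how GA-derived per-VM weights could be translated into SSD cache space and IOPS capacity shares; the function name `allocate` and its parameters are illustrative assumptions and do not reflect the AC-SSD interface described in the paper.

```python
def allocate(weights, total_cache_gb, total_iops):
    """Split host-side SSD cache space and IOPS capacity among VMs in
    proportion to their GA-derived weights.

    `weights` maps a VM id to a (cache-space weight, IOPS weight) pair.
    All names here are hypothetical, not the AC-SSD API.
    """
    ws_sum = sum(w[0] for w in weights.values())
    wi_sum = sum(w[1] for w in weights.values())
    return {
        vm: (total_cache_gb * ws / ws_sum, total_iops * wi / wi_sum)
        for vm, (ws, wi) in weights.items()
    }

# Example: three VMs of one elastic Hadoop application sharing a 100 GB,
# 50000-IOPS SSD cache on the hypervisor.
print(allocate({"vm1": (0.5, 0.2), "vm2": (0.3, 0.5), "vm3": (0.2, 0.3)},
               total_cache_gb=100, total_iops=50000))
```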


Acknowledgment

This work was supported by the National Key Research and Development Program of China (Grant No. 2016YFB1000103), the National Natural Science Foundation of China (Grant No. 61572480), the Tianjin Massive Data Processing Technology Laboratory, and the Youth Innovation Promotion Association, Chinese Academy of Sciences (Grant No. 2015088).



  • Figure 1

    VM-centric SSD cache allocation.

  • Figure 2

    Application-centric SSD cache allocation.

  • Figure 3

    (Color online) Job completion time of 4 benchmarks when allocating different sizes of SSD cache.

  • Figure 4

    (Color online) The changes of throughput and IOPS during the execution of the K-Means benchmark.

  • Figure 5

    IO dependency based requirement model.

  • Figure 6

    (Color online) Execution of the closed-loop.

  • Figure 7

    (Color online) Job completion time of TestDFSIO for 3 clusters under 3 modes.

  • Figure 8

    (Color online) Throughput of TestDFSIO for 3 clusters under 3 modes.

  • Figure 9

    (Color online) Job completion time for 3 clusters running different workloads. (a) Cluster 1; (b) cluster 2; (c) cluster 3.

  • Figure 10

    (Color online) Reduction of JCT in the dynamic environment. (a) Cluster 1; (b) cluster 2; (c) cluster 3.

  • Figure 11

    (Color online) Reduction of job completion time for different data scales. (a) Small; (b) large.

  • Algorithm 1 Fitness calculation

    Require: target genome; CPU time, IO usage (IO time, bandwidth, and IOPS), and network throughput of each VM.
    Output: fitness of the target genome.

    Initialization;
    for all chromosomes c in the target genome do
        VM c.v ← VM bound to c;
        Application c.app ← application bound to c;
        Weight c.ws ← weight of cache space bound to c;
        Weight c.wi ← weight of IOPS capacity bound to c;
        IO sensitivity c.s ← c.v.IOTime / c.v.CPUTime;
        Random access intensity c.r ← c.v.TotalIOPS / c.v.TotalBandwidth;
        // Calculate the IO dependency
        IO dependency c.d ← 0;
        for all network throughput t between c.v and the VMs in c.app do
            c.d ← c.d + t / c.v.TotalBandwidth;
        end for
    end for
    // Normalization (Total(x) is the sum of x over all chromosomes)
    for all chromosomes c in the target genome do
        c.s ← c.s / Total(c.s);
        c.r ← c.r / Total(c.r);
        c.d ← c.d / Total(c.d);
    end for
    // Calculate the match degree
    for all chromosomes c in the target genome do
        Match degree c.m ← (c.ws − c.s)² + (c.wi − c.r)² + (c.ws − c.d)² + (c.wi − c.d)²;
    end for
    Fitness f ← 4 × count_VM − Σ c.m;
    return f
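For readers who prefer executable code, below is a minimal Python rendering of Algorithm 1. The `Chromosome` data structure and its field names are assumptions introduced for illustration; only the metric formulas and the fitness expression follow the pseudocode above.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Chromosome:
    vm_id: str
    ws: float          # weight of cache space bound to this chromosome
    wi: float          # weight of IOPS capacity bound to this chromosome
    io_time: float     # accumulated IO time of the VM
    cpu_time: float    # accumulated CPU time of the VM
    total_iops: float  # observed IOPS of the VM
    total_bw: float    # observed bandwidth of the VM
    # network throughput from this VM to the other VMs of the same application
    throughput_to: Dict[str, float] = field(default_factory=dict)

def fitness(genome: List[Chromosome]) -> float:
    # Per-VM metrics: IO sensitivity, random access intensity, IO dependency
    s = [c.io_time / c.cpu_time for c in genome]
    r = [c.total_iops / c.total_bw for c in genome]
    d = [sum(c.throughput_to.values()) / c.total_bw for c in genome]

    # Normalization so that each metric sums to 1 across the genome
    def normalize(xs):
        total = sum(xs)
        return [x / total if total > 0 else 0.0 for x in xs]
    s, r, d = normalize(s), normalize(r), normalize(d)

    # Match degree: distance between the encoded weights and the measured metrics
    m = [(c.ws - si) ** 2 + (c.wi - ri) ** 2 + (c.ws - di) ** 2 + (c.wi - di) ** 2
         for c, si, ri, di in zip(genome, s, r, d)]

    # Higher fitness means the genome's weights match the VMs' IO behavior better
    return 4 * len(genome) - sum(m)
```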
