  • 1
    Online Resource
    Angle Publishing Co., Ltd.; 2022
    In: 網際網路技術學刊 (Journal of Internet Technology), Angle Publishing Co., Ltd., Vol. 23, No. 6 (2022-11), p. 1185-1190
    Abstract: Spark is currently the most widely used distributed computing framework, and its key data abstraction concept, Resilient Distributed Dataset (RDD), brings significant performance improvements in big data computing. In application scenarios, Spark jobs often need to replace RDDs due to insufficient memory. Spark uses the Least Recently Used (LRU) algorithm by default as the cache replacement strategy. This algorithm only considers the most recent use time of RDDs as the replacement basis. This characteristic may cause the RDDs that need to be reused to be evicted when performing cache replacement, resulting in a decrease in Spark performance. In response to the above problems, this paper proposes a memory-aware Spark cache replacement strategy, which comprehensively considers the cluster memory usage, RDD size, RDD dependencies, usage times and other information when performing cache replacement and selects the RDDs to be evicted. Furthermore, this paper designs extensive corresponding experiments to test and analyze the performance of the memory-aware Spark cache replacement strategy. The experimental data show that the proposed strategy can improve the performance by up to 13% compared with the LRU algorithm in different scenarios.
    Type of Medium: Online Resource
    ISSN: 1607-9264
    Uniform Title: A Memory-Aware Spark Cache Replacement Strategy
    Language: Unknown
    Publisher: Angle Publishing Co., Ltd.
    Publication Date: 2022
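
    Note: The abstract describes ranking cached RDDs by more than recency when choosing eviction victims. The Scala sketch below illustrates that general idea only; the CachedRdd fields, the scoring formula, and all names are assumptions made for illustration, not the paper's actual algorithm or Spark's internal API.

    // Illustrative sketch: score cached RDDs by combining remaining reuse count,
    // lineage fan-out, and size, then evict the lowest-scoring RDDs first
    // (instead of the default LRU order). Formula and fields are hypothetical.
    final case class CachedRdd(
      id: Int,
      sizeBytes: Long,      // memory footprint of the cached data
      remainingUses: Int,   // downstream stages still expected to reference this RDD
      dependents: Int       // RDDs derived from this one (lineage fan-out)
    )

    object MemoryAwareEviction {
      // Lower score = better eviction candidate: large, rarely reused, few dependents.
      def score(r: CachedRdd): Double =
        (r.remainingUses + r.dependents).toDouble / math.max(r.sizeBytes, 1L)

      // Pick RDDs to evict until at least `bytesNeeded` would be freed.
      def selectVictims(cached: Seq[CachedRdd], bytesNeeded: Long): Seq[CachedRdd] = {
        val ranked = cached.sortBy(score)   // cheapest-to-lose first
        var freed = 0L
        ranked.takeWhile { r =>
          val stillNeeded = freed < bytesNeeded
          if (stillNeeded) freed += r.sizeBytes
          stillNeeded
        }
      }

      def main(args: Array[String]): Unit = {
        val cached = Seq(
          CachedRdd(1, 512L * 1024 * 1024, remainingUses = 0, dependents = 1),
          CachedRdd(2, 128L * 1024 * 1024, remainingUses = 3, dependents = 4),
          CachedRdd(3, 256L * 1024 * 1024, remainingUses = 1, dependents = 0)
        )
        // Free roughly 600 MB; the large, no-longer-reused RDD 1 is evicted first,
        // whereas plain LRU would ignore reuse counts and dependencies entirely.
        selectVictims(cached, 600L * 1024 * 1024).foreach(r => println(s"evict RDD ${r.id}"))
      }
    }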