  • 1
UID: almahu_9948180605102882
Format: XI, 262 p., 120 illus., 69 illus. in color, online resource.
    Edition: 1st ed. 2019.
    ISBN: 9783030328139
    Series Statement: Information Systems and Applications, incl. Internet/Web, and HCI ; 11459
    Content: This book constitutes the refereed proceedings of the First International Symposium on Benchmarking, Measuring, and Optimization, Bench 2018, held in Seattle, WA, USA, in December 2018. The 20 full papers presented were carefully reviewed and selected from 51 submissions. The papers are organized in topical sections named: AI Benchmarking; Cloud; Big Data; Modelling and Prediction; and Algorithm and Implementations.
    Note: AI Benchmarking -- Cloud -- Big Data -- Modeling and Prediction -- Algorithm and Implementations.
    In: Springer eBooks
    Additional Edition: Printed edition: ISBN 9783030328122
    Additional Edition: Printed edition: ISBN 9783030328146
    Language: English
  • 2
UID: b3kat_BV046229889
Format: 1 online resource (xi, 262 pages), 120 illustrations, 69 in color
    ISBN: 9783030328139
    Series Statement: Lecture notes in computer science 11459
Additional Edition: Also published in print: ISBN 978-3-030-32812-2
Additional Edition: Also published in print: ISBN 978-3-030-32814-6
    Language: English
    Subjects: Computer Science
Keywords: Information technology ; Benchmarking ; Cloud Computing ; Big Data ; Conference proceedings
URL: Full text (publisher's URL)
  • 3
UID: b3kat_BV047692623
    Format: 1 online resource (268 pages)
    ISBN: 9783030328139
    Series Statement: Lecture Notes in Computer Science Ser. v.11459
Note: Description based on publisher supplied metadata and other sources.
Contents: Intro -- BenchCouncil: Benchmarking and Promoting Innovative Techniques -- Organization -- Contents
  AI Benchmarking:
    AIBench: Towards Scalable and Comprehensive Datacenter AI Benchmarking -- 1 Introduction -- 2 Related Work -- 3 Datacenter AI Benchmark Suite-AIBench -- 3.1 Datacenter AI Micro Benchmarks -- 3.2 Datacenter AI Component Benchmarks -- 3.3 Application Benchmarks -- 3.4 AI Competition -- 4 Conclusion -- References
    HPC AI500: A Benchmark Suite for HPC AI Systems -- 1 Introduction -- 2 Deep Learning in Scientific Computing -- 2.1 Extreme Weather Analysis -- 2.2 High Energy Physics -- 2.3 Cosmology -- 2.4 Summary -- 3 Benchmarking Methodology and Decisions -- 3.1 Methodology -- 3.2 The Selected Datasets -- 3.3 The Selected Workloads -- 3.4 Metrics -- 4 Reference Implementation -- 4.1 Component Benchmarks -- 4.2 Micro Benchmarks -- 5 Conclusion -- References
    Edge AIBench: Towards Comprehensive End-to-End Edge Computing Benchmarking -- 1 Introduction -- 2 Related Work -- 3 The Summary of Edge AIBench -- 3.1 ICU Patient Monitor -- 3.2 Surveillance Camera -- 3.3 Smart Home -- 3.4 Autonomous Vehicle -- 3.5 A Federated Learning Framework Testbed -- 4 Conclusion -- References
    AIoT Bench: Towards Comprehensive Benchmarking Mobile and Embedded Device Intelligence -- 1 Introduction -- 2 Benchmarking Requirements -- 3 AIoT Bench -- 4 Related Work -- 5 Conclusion -- References
    A Survey on Deep Learning Benchmarks: Do We Still Need New Ones? -- 1 Introduction -- 2 A Survey on Deep Learning Benchmarks -- 2.1 Stanford DAWNBench -- 2.2 Baidu DeepBench -- 2.3 Facebook AI Performance Evaluation Platform -- 2.4 ICT BigDataBench -- 2.5 Other Benchmarks -- 3 Discussion -- 3.1 Benchmark Comparison -- 3.2 Observations -- 4 Conclusion and Future Work -- References
  Cloud:
    Benchmarking VM Startup Time in the Cloud -- 1 Introduction -- 2 Related Works -- 3 Methodology -- 3.1 Environment Setup -- 3.2 Algorithm -- 4 Result -- 4.1 By Instance Type -- 4.2 By Time of the Day -- 4.3 By Instance Location -- 4.4 By Cluster -- 5 Conclusions and Future Work -- References
    An Open Source Cloud-Based NoSQL and NewSQL Database Benchmarking Platform for IoT Data -- Abstract -- 1 Introduction -- 2 Background -- 2.1 NewSQL -- 2.2 NoSQL -- 2.3 MongoDB -- 2.4 VoltDB -- 2.5 Apache Kafka -- 2.6 Cloud Computing -- 3 Benchmarking Framework -- 3.1 Framework Components -- 3.2 Architecture -- 3.3 Data Generation and Consumption Algorithms -- 4 Experiments -- 4.1 System Configuration, Sensor Data Structure, and Formats -- 4.2 Experiment I: Data Injection with Different Volume and Velocity -- 4.3 Experiment II: Transactional Data Processing on High Volume of Data -- 4.4 Experiment III: Analytical Data Processing on High Volume of Data -- 4.5 Findings from Experiments -- 5 Related Work -- 6 Conclusion -- Acknowledgment -- References
    Scalability Evaluation of Big Data Processing Services in Clouds -- 1 Introduction -- 2 Related Work -- 2.1 Big Data Benchmarks -- 2.2 Scalability Evaluation of Big Data Processing Systems -- 3 Evaluation Model -- 4 Experiments -- 4.1 Experiment Environment -- 4.2 Scale-Out Analysis -- 4.3 Scale-Up Experiment -- 4.4 Experimental Results -- 5 Conclusion -- References
    PAIE: A Personal Activity Intelligence Estimator in the Cloud -- 1 Introduction -- 2 Related Work -- 3 Overview of PAIE -- 4 Statistic Issues -- 4.1 PAI Computing Mechanism -- 4.2 Statistical Modeling -- 4.3 PAI Estimating and Error Bounding -- 5 Implementing over Storm -- 6 Performance Evaluation -- 6.1 Experiment Methodology -- 6.2 Performance Analysis -- 6.3 Scalability Evaluation -- 7 Conclusions -- References
    DCMIX: Generating Mixed Workloads for the Cloud Data Center -- 1 Introduction -- 2 Related Work -- 3 DCMIX -- 3.1 Workloads -- 3.2 Mixed Workload Generator -- 4 System Entropy -- 5 Experiment and Experimental Analysis -- 5.1 Experimental Configurations and Methodology -- 5.2 Experiment Results and Observations -- 5.3 Summary -- 6 Conclusion -- References
    Machine-Learning Based Spark and Hadoop Workload Classification Using Container Performance Patterns -- 1 Introduction -- 1.1 Background -- 1.2 Problem -- 1.3 Limitations of Previous Approaches -- 1.4 Our Contribution -- 1.5 Resource Managers and Containers -- 2 Evaluation Methodology -- 2.1 Container Performance Metrics -- 2.2 Workloads and Workload Transitions -- 2.3 Parameter Settings -- 2.4 Hardware and Software -- 3 Results -- 3.1 Steady State Workload Characteristics -- 3.2 Dynamic Workload Characteristics - Workload Transitions -- 4 Identifying and Classifying Workloads -- 5 Detecting Workload Transitions -- 6 Relative Value and Importance of Container Performance Metrics -- 7 Conclusion -- References
    Testing Raft-Replicated Database Systems -- 1 Introduction -- 2 Background -- 2.1 Replicated State Machines -- 2.2 Raft Overview -- 3 System Model -- 4 Evaluation Metrics -- 4.1 Correctness -- 4.2 Performance -- 4.3 Scalability -- 5 Test Dimensions -- 5.1 Fault Type -- 5.2 Data Operation Type -- 5.3 System Configuration -- 6 Experiments -- 6.1 Experimental Setups -- 6.2 Recovery Time -- 6.3 Throughput and Latency -- 6.4 Stability -- 7 Related Work -- 8 Conclusion -- References
  Big Data:
    Benchmarking for Transaction Processing Database Systems in Big Data Era -- 1 Introduction -- 2 Requirements and Challenges -- 2.1 Data Generation -- 2.2 Workload Generation -- 2.3 Measurement Definition -- 2.4 Others -- 3 PeakBench: Benchmarking Transaction Processing Database Systems on Intensive Workloads -- 3.1 Business Description -- 3.2 Implementation of Benchmark Tool -- 3.3 Workloads -- 4 Test of PeakBench -- 5 Related Work -- 6 Conclusion -- References
    UMDISW: A Universal Multi-Domain Intelligent Scientific Workflow Framework for the Whole Life Cycle of Scientific Data -- 1 Introduction -- 2 The Status of UMDISW in System Architecture -- 3 The Model of UMDISW -- 3.1 Workflow and Task -- 3.2 Data Flow and Information Flow -- 3.3 Data Node and Algorithm Node -- 3.4 Example -- 4 The Structure and Execution of UMDISW -- 4.1 Running Service Layer -- 4.2 Workflow Execution Layer -- 4.3 Data Resource Layer -- 5 The Application Scenario of UMDISW -- 5.1 Fully Automated Workflow -- 5.2 Semi-custom Workflow -- 5.3 Fully Custom Workflow -- 6 The Implementation of UMDISW -- 7 Conclusion -- References
    MiDBench: Multimodel Industrial Big Data Benchmark -- 1 Introduction -- 2 Big Data Benchmarking Requirements -- 3 Related Work -- 4 Our Benchmarking Methodology -- 4.1 BoM Data Scenario Analysis -- 4.2 Analysis of Time Series Data Scenario -- 4.3 Unstructured Data Scenario Analysis -- 5 Synthetic Data Generation -- 6 Workload Characterization Experiments -- 6.1 Performance Tests on BoM Database Systems -- 6.2 Performance Tests on Time Series Database Systems -- 6.3 Performance Tests on Unstructured Database Systems -- 7 Conclusion -- References
  Modelling and Prediction:
    Power Characterization of Memory Intensive Applications: Analysis and Implications -- 1 Motivation -- 2 Evolution of Server Energy Efficiency -- 2.1 Metrics of Energy Efficiency and Energy Proportionality -- 2.2 Experiment Setup -- 3 Experiment Results and Observations -- 3.1 Results of SPECpower Workload -- 3.2 Results of STREAM Workload -- 3.3 Insights on Energy Efficiency of Memory Intensive Applications -- 4 Related Work -- 5 Conclusions -- References
    Multi-USVs Coordinated Detection in Marine Environment with Deep Reinforcement Learning -- 1 Introduction -- 2 Background -- 2.1 USV Overview -- 2.2 Reinforcement Learning -- 3 Approach -- 3.1 Single-USV RL -- 3.2 Multi-USVs Coordinated Detection -- 4 Results and Discussion -- 5 Conclusion -- References
    EC-Bench: Benchmarking Onload and Offload Erasure Coders on Modern Hardware Architectures -- 1 Introduction -- 2 Background -- 2.1 Erasure Coding -- 2.2 Onload and Offload Erasure Coders -- 3 EC-Bench Design -- 3.1 Design -- 3.2 Parameter Space -- 3.3 Metrics -- 4 Evaluation -- 4.1 Open Source Libraries -- 4.2 Experimental Setup -- 4.3 Experimental Results -- 5 Related Work -- 6 Conclusion -- References
  Algorithm and Implementations:
    Benchmarking SpMV Methods on Many-Core Platforms -- 1 Introduction -- 2 Benchmarking Methodology -- 2.1 Selected SpMV Methods -- 2.2 Selected Features -- 2.3 Hardware Configuration -- 2.4 Data Sets -- 2.5 Experimental Methods -- 3 Experimental Results -- 3.1 SpMV Performance -- 3.2 Best-Method Analysis -- 3.3 Correlation Analysis of Performance and Sparse Pattern -- 4 Conclusion -- References
    Benchmarking Parallel K-Means Cloud Type Clustering from Satellite Data -- 1 Introduction -- 2 Background -- 2.1 Cloud Joint Histograms -- 2.2 K-means Clustering -- 3 Implementation Details -- 3.1 OpenMP Based Implementation -- 3.2 OpenMP and MPI Based Implementation -- 3.3 Spark Based Implementation -- 4 Results -- 4.1 Code Validity -- 4.2 Performance -- 4.3 Cross Comparison -- 5 Related Work -- 6 Conclusions -- References
  Correction to: MiDBench: Multimodel Industrial Big Data Benchmark -- Correction to: Chapter "MiDBench: Multimodel Industrial Big Data Benchmark" in: C. Zheng and J. Zhan (Eds.): Benchmarking, Measuring, and Optimizing, LNCS 11459, https://doi.org/10.1007/978-3-030-32813-9_15 -- Author Index
Additional Edition: Also published in print: Zheng, Chen: Benchmarking, Measuring, and Optimizing. Cham: Springer International Publishing AG, 2019. ISBN 9783030328122
    Language: English
    Subjects: Computer Science
Keywords: Information technology ; Benchmarking ; Cloud Computing ; Big Data ; Conference proceedings
  • 4
UID: b3kat_BV046319124
Format: xi, 262 pages, illustrations, diagrams
    ISBN: 9783030328122
    Series Statement: Lecture notes in computer science 11459
Additional Edition: Also published online: ISBN 978-3-030-32813-9
    Language: English
    Subjects: Computer Science
Keywords: Information technology ; Benchmarking ; Cloud Computing ; Big Data ; Conference proceedings