  • 1
    Online Resource
    Singapore, Singapore : Springer
    UID: b3kat_BV047047632
    Format: 1 online resource
    ISBN: 9789811576836
    Additional Edition: Also available as a print edition, ISBN 978-981-15-7682-9
    Language: English
    URL: Full text (free of charge)
    Author information: Sato, Mitsuhisa
  • 2
    Online Resource
    Springer Nature | Singapore : Springer Singapore
    UID: almahu_9948620818102882
    Format: 1 online resource (IX, 262 p. 367 illus., 57 illus. in color.)
    Edition: 1st ed. 2021.
    ISBN: 981-15-7683-1
    Content: XcalableMP is a directive-based parallel programming language based on Fortran and C, supporting a Partitioned Global Address Space (PGAS) model for distributed-memory parallel systems. This open access book presents the XcalableMP language, from its programming model and basic concepts to the experience and performance of applications written in XcalableMP. XcalableMP was adopted as the parallel programming language project in the FLAGSHIP 2020 project, which developed the Japanese flagship supercomputer Fugaku, in order to improve the productivity of parallel programming. XcalableMP is now available on Fugaku, and its performance is enhanced by the Fugaku interconnect, Tofu-D. The global-view programming model of XcalableMP, inherited from High Performance Fortran (HPF), provides an easy and useful way to parallelize data-parallel programs with directives for distributed global arrays, work distribution, and shadow communication. The local-view programming model adopts coarray notation from Coarray Fortran (CAF) to describe explicit communication in a PGAS model. The language specification was designed and proposed by the XcalableMP Specification Working Group organized in the PC Cluster Consortium, Japan. The Omni XcalableMP compiler is a production-level reference implementation of the XcalableMP compiler for C and Fortran 2008, developed by RIKEN CCS and the University of Tsukuba. XcalableMP programs have been run and evaluated on Fugaku as well as on the K computer. A performance study showed that XcalableMP achieves scalable performance comparable to message passing interface (MPI) versions, with a clean and easy-to-understand programming style requiring little effort. (A short global-view sketch follows this record.)
    Note: Chapter 1: XcalableMP programming model and language -- Chapter 2: Design and Performance Evaluation of the Omni XcalableMP Compiler -- Chapter 3: Coarrays in the Context of XcalableMP -- Chapter 4: XcalableACC: an Integration of XcalableMP and OpenACC -- Chapter 5: Mixed-language programming with XMP and Python -- Chapter 6: Three-dimensional Fluid Code with XcalableMP -- Chapter 7: Hybrid-View Data Model Programming of Nuclear Fusion Simulation Code in XcalableMP -- Chapter 8: Parallelization of Atomic Image Reconstruction from X-ray Fluorescence Holograms by XcalableMP -- Chapter 9: Multi-SPMD programming model with YML and XcalableMP -- Chapter 10: XcalableMP 2.0 and Future Directions. , English
    Additional Edition: ISBN 981-15-7682-3
    Language: English
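The abstract above describes the global-view model only in prose; as a quick orientation, here is a minimal sketch of what that style looks like in XcalableMP C. It is not taken from the book: the directive spellings (nodes/template/distribute/align/loop with square brackets), the loop reduction clause, and the xmpcc compile command follow my reading of the XcalableMP specification and the Omni compiler documentation, so treat them as assumptions.

```c
/* Minimal global-view sketch (assumed XcalableMP C syntax, Omni compiler).
 * Hypothetical file axpy_sum.c; compile with something like: xmpcc axpy_sum.c */
#include <stdio.h>
#define N 1024

#pragma xmp nodes p[4]                  /* execute on 4 nodes (processes)   */
#pragma xmp template t[N]               /* global index space of length N   */
#pragma xmp distribute t[block] onto p  /* block-distribute the template    */

double a[N], b[N];
#pragma xmp align a[i] with t[i]        /* map array elements onto template */
#pragma xmp align b[i] with t[i]

int main(void)
{
    double sum = 0.0;

#pragma xmp loop on t[i]                /* each node iterates over its own block */
    for (int i = 0; i < N; i++) {
        a[i] = (double)i;
        b[i] = 2.0 * i;
    }

#pragma xmp loop on t[i] reduction(+:sum)  /* partial sums combined globally */
    for (int i = 0; i < N; i++)
        sum += a[i] + b[i];

    /* after the reduction every node holds the global sum,
       so this prints one identical line per node */
    printf("sum = %f\n", sum);
    return 0;
}
```

Roughly speaking, the Omni compiler translates these directives source-to-source into runtime calls on top of MPI, so the hand-written MPI equivalent (explicit decomposition plus an allreduce) is replaced by a few directives.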
  • 3
    Online Resource
    [Place of publication not identified] : Springer Nature
    UID: gbv_1778426123
    Format: 1 online resource (262 p.)
    ISBN: 9789811576836
    Content: XcalableMP is a directive-based parallel programming language based on Fortran and C, supporting a Partitioned Global Address Space (PGAS) model for distributed-memory parallel systems. This open access book presents the XcalableMP language, from its programming model and basic concepts to the experience and performance of applications written in XcalableMP. XcalableMP was adopted as the parallel programming language project in the FLAGSHIP 2020 project, which developed the Japanese flagship supercomputer Fugaku, in order to improve the productivity of parallel programming. XcalableMP is now available on Fugaku, and its performance is enhanced by the Fugaku interconnect, Tofu-D. The global-view programming model of XcalableMP, inherited from High Performance Fortran (HPF), provides an easy and useful way to parallelize data-parallel programs with directives for distributed global arrays, work distribution, and shadow communication. The local-view programming model adopts coarray notation from Coarray Fortran (CAF) to describe explicit communication in a PGAS model. The language specification was designed and proposed by the XcalableMP Specification Working Group organized in the PC Cluster Consortium, Japan. The Omni XcalableMP compiler is a production-level reference implementation of the XcalableMP compiler for C and Fortran 2008, developed by RIKEN CCS and the University of Tsukuba. XcalableMP programs have been run and evaluated on Fugaku as well as on the K computer. A performance study showed that XcalableMP achieves scalable performance comparable to message passing interface (MPI) versions, with a clean and easy-to-understand programming style requiring little effort. (A short shadow/reflect sketch follows this record.)
    Note: English
    Language: English
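The same abstract mentions shadow (halo) communication inherited from HPF. Below is a small sketch of the shadow/reflect pattern for a one-dimensional stencil, again under the assumption that the XcalableMP C directive forms are as I recall them from the specification; the helper function sweep and the array names are hypothetical.

```c
/* Halo-exchange sketch (assumed XcalableMP C syntax): shadow declares halo
 * cells on the distributed array, reflect exchanges them between nodes. */
#define N 1024

#pragma xmp nodes p[*]                  /* node set sized at execution time */
#pragma xmp template t[N]
#pragma xmp distribute t[block] onto p

double u[N], unew[N];
#pragma xmp align u[i] with t[i]
#pragma xmp align unew[i] with t[i]
#pragma xmp shadow u[1]                 /* one halo element on each side */

void sweep(void)                        /* hypothetical helper */
{
#pragma xmp reflect (u)                 /* update halos from neighbouring nodes */

#pragma xmp loop on t[i]                /* owner-computes over the interior */
    for (int i = 1; i < N - 1; i++)
        unew[i] = 0.5 * (u[i - 1] + u[i + 1]);
}
```

The point of the abstract's claim is that the halo width, the exchange, and the per-node loop bounds are all derived from the directives rather than written as explicit sends and receives.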
  • 4
    UID: b3kat_BV045389047
    Format: 1 online resource (VIII, 317 pages, 183 illus., 112 illus. in color)
    ISBN: 9789811319242
    Additional Edition: Also available as a print edition, ISBN 978-981-131-923-5
    Additional Edition: Also available as a print edition, ISBN 978-981-131-925-9
    Language: English
    URL: Full text (original publisher's URL)
    Author information: Sato, Mitsuhisa
  • 5
    UID: b3kat_BV036581569
    Format: 1 online resource (X, 173 pages), diagrams
    ISBN: 9783642132162 , 9783642132179
    Series Statement: Lecture notes in computer science 6132
    Language: English
    Subjects: Computer Science
    Keywords: OpenMP ; Conference proceedings
    URL: Full text (license required)
    Author information: Sato, Mitsuhisa
  • 6
    Online Resource
    Singapore : Springer Singapore Pte. Limited
    UID: almahu_9949301343102882
    Format: 1 online resource (265 pages)
    ISBN: 9789811576836
    Note: Intro -- Preface -- Contents -- XcalableMP Programming Model and Language -- 1 Introduction -- 1.1 Target Hardware -- 1.2 Execution Model -- 1.3 Data Model -- 1.4 Programming Models -- 1.4.1 Partitioned Global Address Space -- 1.4.2 Global-View Programming Model -- 1.4.3 Local-View Programming Model -- 1.4.4 Mixture of Global View and Local View -- 1.5 Base Languages -- 1.5.1 Array Section in XcalableMP C -- 1.5.2 Array Assignment Statement in XcalableMP C -- 1.6 Interoperability -- 2 Data Mapping -- 2.1 nodes Directive -- 2.2 template Directive -- 2.3 distribute Directive -- 2.3.1 Block Distribution -- 2.3.2 Cyclic Distribution -- 2.3.3 Block-Cyclic Distribution -- 2.3.4 Gblock Distribution -- 2.3.5 Distribution of Multi-Dimensional Templates -- 2.4 align Directive -- 2.5 Dynamic Allocation of Distributed Array -- 2.6 template_fix Construct -- 3 Work Mapping -- 3.1 task and tasks Construct -- 3.1.1 task Construct -- 3.1.2 tasks Construct -- 3.2 loop Construct -- 3.2.1 Reduction Computation -- 3.2.2 Parallelizing Nested Loop -- 3.3 array Construct -- 4 Data Communication -- 4.1 shadow Directive and reflect Construct -- 4.1.1 Declaring Shadow -- 4.1.2 Updating Shadow -- 4.2 gmove Construct -- 4.2.1 Collective Mode -- 4.2.2 In Mode -- 4.2.3 Out Mode -- 4.3 barrier Construct -- 4.4 reduction Construct -- 4.5 bcast Construct -- 4.6 wait_async Construct -- 4.7 reduce_shadow Construct -- 5 Local-View Programming -- 5.1 Introduction -- 5.2 Coarray Declaration -- 5.3 Put Communication -- 5.4 Get Communication -- 5.5 Synchronization -- 5.5.1 Sync All -- 5.5.2 Sync Images -- 5.5.3 Sync Memory -- 6 Procedure Interface -- 7 XMPT Tool Interface -- 7.1 Overview -- 7.2 Specification -- 7.2.1 Initialization -- 7.2.2 Events -- References -- Implementation and Performance Evaluation of Omni Compiler -- 1 Overview -- 2 Implementation -- 2.1 Operation Flow. 
, 2.2 Example of Code Translation -- 2.2.1 Distributed Array -- 2.2.2 Loop Statement -- 2.2.3 Communication -- 3 Installation -- 3.1 Overview -- 3.2 Get Source Code -- 3.2.1 From GitHub -- 3.2.2 From Our Website -- 3.3 Software Dependency -- 3.4 General Installation -- 3.4.1 Build and Install -- 3.4.2 Set PATH -- 3.5 Optional Installation -- 3.5.1 OpenACC -- 3.5.2 XcalableACC -- 3.5.3 One-Sided Library -- 4 Creation of Execution Binary -- 4.1 Compile -- 4.2 Execution -- 4.2.1 XcalableMP and XcalableACC -- 4.2.2 OpenACC -- 4.3 Cooperation with Profiler -- 4.3.1 Scalasca -- 4.3.2 tlog -- 5 Performance Evaluation -- 5.1 Experimental Environment -- 5.2 EP STREAM Triad -- 5.2.1 Design -- 5.2.2 Implementation -- 5.2.3 Evaluation -- 5.3 High-Performance Linpack -- 5.3.1 Design -- 5.3.2 Implementation -- 5.3.3 Evaluation -- 5.4 Global Fast Fourier Transform -- 5.4.1 Design -- 5.4.2 Implementation -- 5.4.3 Evaluation -- 5.5 RandomAccess -- 5.5.1 Design -- 5.5.2 Implementation -- 5.5.3 Evaluation -- 5.6 Discussion -- 6 Conclusion -- References -- Coarrays in the Context of XcalableMP -- 1 Introduction -- 2 Requirements from Language Specifications -- 2.1 Images Mapped to XMP Nodes -- 2.2 Allocation of Coarrays -- 2.3 Communication -- 2.4 Synchronization -- 2.5 Subarrays and Data Contiguity -- 2.6 Coarray C Language Specifications -- 3 Implementation -- 3.1 Omni XMP Compiler Framework -- 3.2 Allocation and Registration -- 3.2.1 Three Methods of Memory Management -- 3.2.2 Initial Allocation for Static Coarrays -- 3.2.3 Runtime Allocation for Allocatable Coarrays -- 3.3 PUT/GET Communication -- 3.3.1 Determining the Possibility of DMA -- 3.3.2 Buffering Communication Methods -- 3.3.3 Non-blocking PUT Communication -- 3.3.4 Optimization of GET Communication -- 3.4 Runtime Libraries -- 3.4.1 Fortran Wrapper -- 3.4.2 Upper-layer Runtime (ULR) Library. 
, 3.4.3 Lower-layer Runtime (LLR) Library -- 3.4.4 Communication Libraries -- 4 Evaluation -- 4.1 Fundamental Performance -- 4.2 Non-blocking Communication -- 4.3 Application Program -- 4.3.1 Coarray Version of the Himeno Benchmark -- 4.3.2 Measurement Result -- 4.3.3 Productivity -- 5 Related Work -- 6 Conclusion -- References -- XcalableACC: An Integration of XcalableMP and OpenACC -- 1 Introduction -- 1.1 Hardware Model -- 1.2 Programming Model -- 1.2.1 XMP Extensions -- 1.2.2 OpenACC Extensions -- 1.3 Execution Model -- 1.4 Data Model -- 2 XcalableACC Language -- 2.1 Data Mapping -- Example -- 2.2 Work Mapping -- Restriction -- Example 1 -- Example 2 -- 2.3 Data Communication and Synchronization -- Example -- 2.4 Coarrays -- Restriction -- Example -- 2.5 Handling Multiple Accelerators -- 2.5.1 devices Directive -- Example -- 2.5.2 on_device Clause -- 2.5.3 layout Clause -- Example -- 2.5.4 shadow Clause -- Example -- 2.5.5 barrier_device Construct -- Example -- 3 Omni XcalableACC Compiler -- 4 Performance of Lattice QCD Application -- 4.1 Overview of Lattice QCD -- 4.2 Implementation -- 5 Performance Evaluation -- 5.1 Result -- 5.2 Discussion -- 6 Productivity Improvement -- 6.1 Requirement for Productive Parallel Language -- 6.2 Quantitative Evaluation by Delta Source Lines of Codes -- 6.3 Discussion -- References -- Mixed-Language Programming with XcalableMP -- 1 Background -- 2 Translation by Omni Compiler -- 3 Functions for Mixed-Language -- 3.1 Function to Call MPI Program from XMP Program -- 3.2 Function to Call XMP Program from MPI Program -- 3.3 Function to Call XMP Program from Python Program -- 3.3.1 From Parallel Python Program -- 3.3.2 From Sequential Python Program -- 4 Application to Order/Degree Problem -- 4.1 What Is Order/Degree Program -- 4.2 Implementation -- 4.3 Evaluation -- 5 Conclusion -- References. 
, Three-Dimensional Fluid Code with XcalableMP -- 1 Introduction -- 2 Global-View Programming Model -- 2.1 Domain Decomposition Methods -- 2.2 Performance on the K Computer -- 2.2.1 Comparison with Hand-Coded MPI Program -- 2.2.2 Optimization for SIMD -- 2.2.3 Optimization for Allocatable Arrays -- 3 Local-View Programming Model -- 3.1 Communications Using Coarray -- 3.2 Performance on the K Computer -- 4 Summary -- References -- Hybrid-View Programming of Nuclear Fusion Simulation Code in XcalableMP -- 1 Introduction -- 2 Nuclear Fusion Simulation Code -- 2.1 Gyrokinetic PIC Simulation -- 2.2 GTC -- 3 Implementation of GTC-P by Hybrid-view Programming -- 3.1 Hybrid-View Programming Model -- 3.2 Implementation Based on the XMP-Localview Model: XMP-localview -- 3.3 Implementation Based on the XMP-Hybridview Model: XMP-Hybridview -- 4 Performance Evaluation -- 4.1 Experimental Setting -- 4.2 Results -- 4.3 Productivity and Performance -- 5 Related Research -- 6 Conclusion -- References -- Parallelization of Atomic Image Reconstruction from X-ray Fluorescence Holograms with XcalableMP -- 1 Introduction -- 2 X-ray Fluorescence Holography -- 2.1 Reconstruction of Atomic Images -- 2.2 Analysis Procedure of XFH -- 3 Parallelization -- 3.1 Parallelization of Reconstruction of Two-Dimensional Atomic Images by OpenMP -- 3.2 Parallelization of Reconstruction of Three-dimensional Atomic Images by XcalableMP -- 4 Performance Evaluation -- 4.1 Performance Results of Reconstruction of Two-Dimensional Atomic Images -- 4.2 Performance Results of Reconstruction of Three-dimensional Atomic Images -- 4.3 Comparison of Parallelization with MPI -- 5 Conclusion -- References -- Multi-SPMD Programming Model with YML and XcalableMP -- 1 Introduction -- 2 Background: International Collaborations for the Post-Petascale and Exascale Computing -- 3 Multi-SPMD Programming Model. , 3.1 Overview -- 3.2 YML -- 3.3 OmniRPC-MPI -- 4 Application Development in the mSPMD Programming Environment -- 4.1 Task Generator -- 4.2 Workflow Development -- 4.3 Workflow Execution -- 5 Experiments -- 6 Eigen Solver on the mSPMD Programming Model -- 6.1 Implicitly Restarted Arnoldi Method (IRAM), Multiple Implicitly Restarted Arnoldi Method (MIRAM) and Their Implementations for the mSPMD Programming Model -- 6.2 Experiments -- 7 Fault-Tolerance Features in the mSPMD Programming Model -- 7.1 Overview and Implementation -- 7.2 Experiments -- 8 Runtime Correctness Check for the mSPMD Programming Model -- 8.1 Overview and Implementation -- 8.2 Experiments -- 9 Summary -- References -- XcalableMP 2.0 and Future Directions -- 1 Introduction -- 2 XcalableMP on Fugaku -- 2.1 Performance of XcalableMP Global View Programming -- 2.2 Performance of XcalableMP Local View Programming -- 3 Global Task Parallel Programming -- 3.1 OpenMP and XMP Tasklet Directive -- 3.2 A Proposal for Global Task Parallel Programming -- 3.3 Prototype Design of Code Transformation -- 3.4 Preliminary Performance -- 3.5 Communication Optimization for Manycore Clusters -- 4 Retrospectives and Challenges for Future PGAS Models -- 4.1 Low-Level Communication Layer for PGAS Model -- 4.2 XcalableMP as a DSL for Stencil Applications -- 4.3 XcalableMP API: Compiler-Free Approach -- 4.4 Global Task Parallel Programming Model for Accelerators -- References.
    Additional Edition: Print version: Sato, Mitsuhisa: XcalableMP PGAS Programming Language. Singapore : Springer Singapore Pte. Limited, c2020. ISBN 9789811576829
    Language: English
    Keywords: Electronic books.
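The contents note above lists XcalableMP's data-communication constructs (reflect, gmove, bcast, reduction) and the array-section assignment extension for C. As one more hedged illustration, the sketch below uses the gmove construct for a shifted copy between two distributed arrays; the directive and the [start:length] section syntax follow my reading of the specification and are assumptions, as is the helper name shift_copy.

```c
/* gmove sketch (assumed XcalableMP C syntax): the assignment that follows the
 * directive may reference misaligned sections of global arrays, and the
 * required communication is generated by the compiler and runtime. */
#define N 1024

#pragma xmp nodes p[*]
#pragma xmp template t[N]
#pragma xmp distribute t[block] onto p

double src[N], dst[N];
#pragma xmp align src[i] with t[i]
#pragma xmp align dst[i] with t[i]

void shift_copy(void)                   /* hypothetical helper */
{
    /* dst[0..N-2] receives src[1..N-1]; elements owned by different
       nodes are transferred automatically (collective mode). */
#pragma xmp gmove
    dst[0:N-1] = src[1:N-1];
}
```

Compared with the shadow/reflect pattern shown earlier, gmove handles one-off, possibly irregular copies, whereas shadow/reflect targets repeated nearest-neighbour exchanges.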
  • 7
    UID: kobvindex_ZLB15184897
    Format: X, 173 pages, diagrams, 24 cm
    ISBN: 9783642132162
    Series Statement: Lecture notes in computer science 6132
    Note: Includes bibliographical references; text in English
    Language: English
    Keywords: OpenMP ; Conference ; Tsukuba 〈2010〉
    Author information: Sato, Mitsuhisa
  • 8
    UID: gbv_1646449517
    Format: Online resource (XV, 564 pp, digital)
    ISBN: 9783540478478
    Series Statement: Lecture Notes in Computer Science 2327
    Content: Invited Papers -- The Gilgamesh MIND Processor-in-Memory Architecture for Petaflops-Scale Computing -- The UK e-Science Program and the Grid -- SPEC HPC2002: The Next High-Performance Computer Benchmark -- Award Papers -- Language and Compiler Support for Hybrid-Parallel Programming on SMP Clusters -- Parallelizing Merge Sort onto Distributed Memory Parallel Computers -- Networks -- Avoiding Network Congestion with Local Information -- Improving InfiniBand Routing through Multiple Virtual Networks -- Architectures I -- Minerva: An Adaptive Subblock Coherence Protocol for Improved SMP Performance -- Active Memory Clusters: Efficient Multiprocessing on Commodity Clusters -- The Impact of Alias Analysis on VLIW Scheduling -- Low-Cost Value Predictors Using Frequent Value Locality -- Architectures II -- Integrated I-cache Way Predictor and Branch Target Buffer to Reduce Energy Consumption -- A Comprehensive Analysis of Indirect Branch Prediction -- High Performance and Energy Efficient Serial Prefetch Architecture -- A Programmable Memory Hierarchy for Prefetching Linked Data Structures -- HPC Systems -- Block Red-Black Ordering Method for Parallel Processing of ICCG Solver -- Integrating Performance Analysis in the Uintah Software Development Cycle -- Performance of Adaptive Mesh Refinement Scheme for Hydrodynamics on Simulations of Expanding Supernova Envelope -- Earth Simulator -- An MPI Benchmark Program Library and Its Application to the Earth Simulator -- Parallel Simulation of Seismic Wave Propagation -- Large-Scale Parallel Computing of Cloud Resolving Storm Simulator -- Short Papers -- Routing Mechanism for Static Load Balancing in a Partitioned Computer System with a Fully Connected Network -- Studying New Ways for Improving Adaptive History Length Branch Predictors -- Speculative Clustered Caches for Clustered Processors -- The Effects of Timing Dependence and Recursion on Parallel Program Schemata -- Cache Line Impact on 3D PDE Solvers -- An EPIC Processor with Pending Functional Units -- Software Energy Optimization of Real Time Preemptive Tasks by Minimizing Cache-Related Preemption Costs -- Distributed Genetic Algorithm with Multiple Populations Using Multi-agent -- Numerical Weather Prediction on the Supercomputer Toolkit -- OpenTella: A Peer-to-Peer Protocol for the Load Balancing in a System Formed by a Cluster from Clusters -- Power Estimation of a C Algorithm Based on the Functional-Level Power Analysis of a Digital Signal Processor -- Irregular Assignment Computations on cc-NUMA Multiprocessors -- International Workshop on OpenMP: Experiences and Implementations (WOMPEI 2002) -- Large System Performance of SPEC OMP2001 Benchmarks -- A Shared Memory Benchmark in OpenMP -- Performance Evaluation of the Hitachi SR8000 Using OpenMP Benchmarks -- Communication Bandwidth of Parallel Programming Models on Hybrid Architectures -- Performance Comparisons of Basic OpenMP Constructs -- SPMD OpenMP versus MPI on a IBM SMP for 3 Kernels of the NAS Benchmarks -- Parallel Iterative Solvers for Unstructured Grids Using an OpenMP/MPI Hybrid Programming Model for the GeoFEM Platform on SMP Cluster Architectures -- A Parallel Computing Model for the Acceleration of a Finite Element Software -- Towards OpenMP Execution on Software Distributed Shared Memory Systems -- Dual-Level Parallelism Exploitation with OpenMP in Coastal Ocean Circulation Modeling -- Static Coarse Grain Task Scheduling with Cache Optimization Using OpenMP -- HPF International Workshop: Experiences and 
Progress (HiWEP 2002) -- High Performance Fortran - History, Status and Future -- Performance Evaluation for Japanese HPF Compilers with Special Benchmark Suite -- Evaluation of the HPF/JA Extensions on Fujitsu VPP Using the NAS Parallel Benchmarks -- Three-Dimensional Electromagnetic Particle-in-Cell Code Using High Performance Fortran on PC Cluster -- Towards a Lightweight HPF Compiler -- Parallel I/O Support for HPF on Computational Grids -- Optimization of HPF Programs with Dynamic Recompilation Technique.
    Content: I wish to welcome all of you to the International Symposium on High Performance Computing 2002 (ISHPC2002) and to Kansai Science City, which is not far from the ancient capitals of Japan: Nara and Kyoto. ISHPC2002 is the fourth in the ISHPC series, which consists, to date, of ISHPC ’97 (Fukuoka, November 1997), ISHPC ’99 (Kyoto, May 1999), and ISHPC2000 (Tokyo, October 2000). The success of these symposia indicates the importance of this area and the strong interest of the research community. With all of the recent drastic changes in HPC technology trends, HPC has had and will continue to have a significant impact on computer science and technology. I am pleased to serve as General Chair at a time when HPC plays a crucial role in the era of the IT (Information Technology) revolution. The objective of this symposium is to exchange the latest research results in software, architecture, and applications in HPC in a more informal and friendly atmosphere. I am delighted that the symposium is, like past successful ISHPCs, comprised of excellent invited talks, panels, workshops, as well as high-quality technical papers on various aspects of HPC. We hope that the symposium will provide an excellent opportunity for lively exchange and discussion about directions in HPC technologies and all the participants will enjoy not only the symposium but also their stay in Kansai Science City.
    Note: Includes bibliographical references and index
    Additional Edition: ISBN 9783540436744
    Additional Edition: Print edition under the title: High performance computing. Berlin : Springer, 2002. ISBN 354043674X
    Language: English
    Subjects: Computer Science
    Keywords: Supercomputer ; Scientific computing ; Conference proceedings
    URL: Full text (license required)
  • 9
    UID: kobvindex_ZIB000002470
    Format: 564 pages
    ISBN: 3-540-43674-X
    Series Statement: Lecture notes in computer science 2327
  • 10
    UID: almahu_9947364212902882
    Format: XII, 207 p. , online resource.
    ISBN: 9783540693031
    Series Statement: Lecture Notes in Computer Science, 4935
    Content: This book constitutes the thoroughly refereed post-workshop proceedings of the Third International Workshop on OpenMP, IWOMP 2007, held in Beijing, China, in June 2007. The 14 revised full papers and 8 revised short papers presented were carefully reviewed and selected from 28 submissions. The papers address all topics related to OpenMP, such as OpenMP performance analysis and modeling, OpenMP performance and correctness tools and proposed OpenMP extensions, as well as applications in various domains, e.g., scientific computation, video games, computer graphics, multimedia, information retrieval, optimization, text processing, data mining, finance, signal and image processing, and numerical solvers.
    Note: A Proposal for Task Parallelism in OpenMP -- Support for Fine Grained Dependent Tasks in OpenMP -- Performance Evaluation of a Multi-zone Application in Different OpenMP Approaches -- Transactional Memory and OpenMP -- OpenMP on Multicore Architectures -- Supporting OpenMP on Cell -- CMP Cache Architecture and the OpenMP Performance -- Exploiting Loop-Level Parallelism for SIMD Arrays Using OpenMP -- OpenMP Extensions for Irregular Parallel Applications on Clusters -- Optimization Strategies Using Hybrid MPI+OpenMP Parallelization for Large-Scale Data Visualization on Earth Simulator -- An Investigation on Testing of Parallelized Code with OpenMP -- Loading OpenMP to Cell: An Effective Compiler Framework for Heterogeneous Multi-core Chip -- OpenMP Implementation of Parallel Linear Solver for Reservoir Simulation -- Parallel Data Flow Analysis for OpenMP Programs -- Design and Implementation of OpenMPD: An OpenMP-Like Programming Language for Distributed Memory Systems -- A New Memory Allocation Model for Parallel Search Space Data Structures with OpenMP -- Implementation of OpenMP Work-Sharing on the Cell Broadband Engine Architecture -- Toward an Automatic Code Layout Methodology -- An Efficient OpenMP Runtime System for Hierarchical Architectures -- Problems, Workarounds and Possible Solutions Implementing the Singleton Pattern with C++ and OpenMP -- Web Service Call Parallelization Using OpenMP -- Distributed Implementation of OpenMP Based on Checkpointing Aided Parallel Execution.
    In: Springer eBooks
    Additional Edition: Printed edition: ISBN 9783540693024
    Language: English