Export
  • 1
    UID: gbv_1851410457
    Format: 1 online resource (234 pages)
    ISBN: 9781000791563
    Content: The work reflected in this book was done in the scope of the European project P-SOCRATES, funded under the FP7 framework program of the European Commission.
    Note: Description based on publisher-supplied metadata and other sources.
    Additional Edition: ISBN 9788793609693
    Additional Edition: Also published as a print edition: ISBN 9788793609693
    Language: English
    Keywords: Electronic books.
  • 2
    Online Resource
    Gistrup, Denmark : River Publishers
    UID: gbv_1870232917
    Format: 1 online resource (xxv, 207 pages), illustrations (some color).
    ISBN: 9788793609693, 8793609698, 9788793609624, 8793609620, 9781003338413, 1003338410, 9781000791563, 1000791564, 9781000794687, 1000794687
    Series Statement: River Publishers Series in Information Science and Technology
    Language: English
  • 3
    Online Resource
    Milton : River Publishers
    UID: almahu_9949517345402882
    Format: 1 online resource (234 pages)
    ISBN: 9781000791563
    Content: The work reflected in this book was done in the scope of the European project P-SOCRATES, funded under the FP7 framework program of the European Commission.
    Additional Edition: Print version: Pinho, Luís Miguel. High Performance Embedded Computing. Milton : River Publishers, c2018. ISBN 9788793609693
    Language: English
    Keywords: Electronic books.
  • 4
    Online Resource
    Gistrup, Denmark : River Publishers
    UID: almahu_9949385600802882
    Format: 1 online resource (xxv, 207 pages), illustrations (some color).
    ISBN: 9788793609693, 8793609698, 9788793609624, 8793609620, 9781003338413, 1003338410, 9781000791563, 1000791564, 9781000794687, 1000794687
    Series Statement: River Publishers Series in Information Science and Technology
    Content: Nowadays, computing systems are so prevalent that we live in a cyber-physical world dominated by them, from pacemakers to cars and airplanes. These systems demand ever more computational performance to process large amounts of data from multiple sources within guaranteed processing times; actuating outside the required timing bounds may cause the system to fail, which is vital for systems such as planes, cars, business monitoring, and e-trading. High-Performance and Time-Predictable Embedded Computing presents recent advances in software architecture and tools to support such complex systems, enabling the design of embedded computing devices able to deliver high performance whilst guaranteeing the timing bounds the application requires. Technical topics discussed in the book include: parallel embedded platforms; programming models; mapping and scheduling of parallel computations; timing and schedulability analysis; and runtimes and operating systems. The work reflected in this book was done in the scope of the European project P-SOCRATES, funded under the FP7 framework program of the European Commission. High-Performance and Time-Predictable Embedded Computing is ideal for personnel in the computer, communication, and embedded industries, as well as academic staff and master's/research students in computer science, embedded systems, cyber-physical systems, and the internet of things.
    Additional Edition: Print version: High-performance and time-predictable embedded computing. Gistrup, Denmark : River Publishers, [2018] ISBN 9788793609693
    Language: English
    Keywords: Electronic books.
  • 5
    Online Resource
    Gistrup, Denmark : River Publishers
    UID: almahu_9949378080702882
    Format: 1 online resource (236 pages).
    Edition: 1st ed.
    ISBN: 1-00-333841-0, 1-000-79156-4, 1-003-33841-0, 1-000-79468-7, 87-93609-62-0
    Series Statement: River Publishers series in information science and technology
    Content: Journal of Cyber Security and Mobility provides an in-depth and holistic view of security and solutions from practical to theoretical aspects. It covers topics that are equally valuable for practitioners as well as those new in the field.
    Note: Front Cover -- Half Title page -- RIVER PUBLISHERS SERIES IN INFORMATION SCIENCE AND TECHNOLOGY -- Title page -- Copyright page -- Contents -- Preface -- List of Contributors -- List of Figures -- List of Tables -- List of Abbreviations -- Chapter 1 - Introduction -- 1.1 Introduction -- 1.1.1 The Convergence of High-performance and Embedded Computing Domains -- 1.1.2 Parallelization Challenge -- 1.2 The P-SOCRATES Project -- 1.3 Challenges Addressed in This Book -- 1.3.1 Compiler Analysis of Parallel Programs -- 1.3.2 Predictable Scheduling of Parallel Tasks on Many-core Systems -- 1.3.3 Methodology for Measurement-based Timing Analysis -- 1.3.4 Optimized OpenMP Tasking Runtime System -- 1.3.5 Real-time Operating Systems -- 1.4 The UpScale SDK -- 1.5 Summary -- References -- Chapter 2 - Manycore Platforms -- 2.1 Introduction -- 2.2 Manycore Architectures -- 2.2.1 Xeon Phi -- 2.2.2 Pezy SC -- 2.2.3 NVIDIA Tegra X1 -- 2.2.4 Tilera Tile -- 2.2.5 STMicroelectronics STHORM -- 2.2.6 Epiphany-V -- 2.2.7 TI Keystone II -- 2.2.8 Kalray MPPA-256 -- 2.2.8.1 The I/O subsystem -- 2.2.8.2 The Network-on-Chip (NoC) -- 2.2.8.3 The Host-to-IOS communication protocol -- 2.2.8.4 Internal architecture of the compute clusters -- 2.2.8.5 The shared memory -- 2.3 Summary -- References -- Chapter 3 - Predictable Parallel Programming with OpenMP -- 3.1 Introduction -- 3.1.1 Introduction to Parallel Programming Models -- 3.1.1.1 POSIX threads -- 3.1.1.2 OpenCL™ -- 3.1.1.3 NVIDIA® CUDA -- 3.1.1.4 Intel® Cilk™ Plus -- 3.1.1.5 Intel® TBB -- 3.1.1.6 OpenMP -- 3.2 The OpenMP Parallel Programming Model -- 3.2.1 Introduction and Evolution of OpenMP -- 3.2.2 Parallel Model of OpenMP -- 3.2.2.1 Execution model -- 3.2.2.2 Acceleration model -- 3.2.2.3 Memory model -- 3.2.3 An OpenMP Example -- 3.3 Timing Properties of the OpenMP Tasking Model -- 3.3.1 Sporadic DAG Scheduling Model of Parallel Applications -- 3.3.2 Understanding the OpenMP Tasking Model -- 3.3.3 OpenMP and Timing Predictability -- 3.3.3.1 Extracting the DAG of an OpenMP program -- 3.3.3.2 WCET analysis is applied to tasks and task parts -- 3.3.3.3 DAG-based scheduling must not violate the TSCs -- 3.4 Extracting the Timing Information of an OpenMP Program -- 3.4.1 Parallel Structure Stage -- 3.4.1.1 Parallel control flow analysis -- 3.4.1.2 Induction variables analysis -- 3.4.1.3 Reaching definitions and range analysis -- 3.4.1.4 Putting all together: The wave-front example -- 3.4.2 Task Expansion Stage -- 3.4.2.1 Control flow expansion and synchronization predicate resolution -- 3.4.2.2 tid: A unique task instance identifier -- 3.4.2.3 Missing information when deriving the DAG -- 3.4.3 Compiler Complexity -- 3.5 Summary -- References -- Chapter 4 - Mapping, Scheduling, and Schedulability Analysis -- 4.1 Introduction -- 4.2 System Model -- 4.3 Partitioned Scheduler -- 4.3.1 The Optimality of EDF on Preemptive Uniprocessors -- 4.3.2 FP-scheduling Algorithms -- 4.3.3 Limited Preemption Scheduling -- 4.3.4 Limited Preemption Schedulability Analysis -- 4.4 Global Scheduler with Migration Support -- 4.4.1 Migration-based Scheduler -- 4.4.2 Putting All Together -- 4.4.3 Implementation of a Limited Preemption Scheduler -- 4.5 Overall Schedulability Analysis -- 4.5.1 Model Formalization -- 4.5.2 Critical Interference of cp-tasks -- 4.5.3 Response Time Analysis -- 4.5.3.1 Inter-task interference -- 4.5.3.2 Intra-task interference -- 4.5.3.3 Computation of cp-task parameters -- 4.5.4 Non-conditional DAG Tasks -- 4.5.5 Series-Parallel Conditional DAG Tasks -- 4.5.6 Schedulability Condition -- 4.6 Specializing Analysis for Limited Pre-emption Global/Dynamic Approach -- 4.6.1 Blocking Impact of the Largest NPRs (LP-max) -- 4.6.2 Blocking Impact of the Largest Parallel NPRs (LP-ILP) -- 4.6.2.1 LP worst-case workload of a task executing on c cores -- 4.6.2.2 Overall LP worst-case workload -- 4.6.2.3 Lower-priority interference -- 4.6.3 Computation of Response Time Factors of LP-ILP -- 4.6.3.1 Worst-case workload of τi executing on c cores -- 4.6.3.2 Overall LP worst-case workload of lp(k) per execution scenario sl -- 4.6.4 Complexity -- 4.7 Specializing Analysis for the Partitioned/Static Approach -- 4.7.1 ILP Formulation -- 4.7.1.1 Tied tasks -- 4.7.1.2 Untied tasks -- 4.7.1.3 Complexity -- 4.7.2 Heuristic Approaches -- 4.7.2.1 Tied tasks -- 4.7.2.2 Untied tasks -- 4.7.3 Integrating Interference from Additional RT Tasks -- 4.7.4 Critical Instant -- 4.7.5 Response-time Upper Bound -- 4.8 Scheduling for I/O Cores -- 4.9 Summary -- References -- Chapter 5 - Timing Analysis Methodology -- 5.1 Introduction -- 5.1.1 Static WCET Analysis Techniques -- 5.1.2 Measurement-based WCET Analysis Techniques -- 5.1.3 Hybrid WCET Techniques -- 5.1.4 Measurement-based Probabilistic Techniques -- 5.2 Our Choice of Methodology for WCET Estimation -- 5.2.1 Why Not Use Static Approaches? -- 5.2.2 Why Use Measurement-based Techniques? -- 5.3 Description of Our Timing Analysis Methodology -- 5.3.1 Intrinsic vs. Extrinsic Execution Times -- 5.3.2 The Concept of Safety Margins -- 5.3.3 Our Proposed Timing Methodology at a Glance -- 5.3.4 Overview of the Application Structure -- 5.3.5 Automatic Insertion and Removal of the Trace-points -- 5.3.5.1 How to insert the trace-points -- 5.3.5.2 How to remove the trace-points -- 5.3.6 Extract the Intrinsic Execution Time: The Isolation Mode -- 5.3.7 Extract the Extrinsic Execution Time: The Contention Mode -- 5.3.8 Extract the Execution Time in Real Situation: The Deployment Mode -- 5.3.9 Derive WCET Estimates -- 5.4 Summary -- References -- Chapter 6 - OpenMP Runtime -- 6.1 Introduction -- 6.2 Offloading Library Design -- 6.3 Tasking Runtime -- 6.3.1 Task Dependency Management -- 6.4 Experimental Results -- 6.4.1 Offloading Library -- 6.4.2 Tasking Runtime -- 6.4.2.1 Applications with a linear generation pattern -- 6.4.2.2 Applications with a recursive generation pattern -- 6.4.2.3 Applications with mixed patterns -- 6.4.2.4 Impact of cutoff on LINEAR and RECURSIVE applications -- 6.4.2.5 Real applications -- 6.4.3 Evaluation of the Task Dependency Mechanism -- 6.4.3.1 Performance speedup and memory usage -- 6.4.3.2 The task dependency mechanism on the MPPA -- 6.5 Summary -- References -- Chapter 7 - Embedded Operating Systems -- 7.1 Introduction -- 7.2 State of The Art -- 7.2.1 Real-time Support in Linux -- 7.2.1.1 Hard real-time support -- 7.2.1.2 Latency reduction -- 7.2.1.3 Real-time CPU scheduling -- 7.2.2 Survey of Existing Embedded RTOSs -- 7.2.3 Classification of Embedded RTOSs -- 7.3 Requirements for The Choice of The Run Time System -- 7.3.1 Programming Model -- 7.3.2 Preemption Support -- 7.3.3 Migration Support -- 7.3.4 Scheduling Characteristics -- 7.3.5 Timing Analysis -- 7.4 RTOS Selection -- 7.4.1 Host Processor -- 7.4.2 Manycore Processor -- 7.5 Operating System Support -- 7.5.1 Linux -- 7.5.2 ERIKA Enterprise Support -- 7.5.2.1 Exokernel support -- 7.5.2.2 Single-ELF multicore ERIKA Enterprise -- 7.5.2.3 Support for limited preemption, job, and global scheduling -- 7.5.2.4 New ERIKA Enterprise primitives -- 7.5.2.5 New data structures -- 7.5.2.6 Dynamic task creation -- 7.5.2.7 IRQ handlers as tasks -- 7.5.2.8 File hierarchy -- 7.5.2.9 Early performance estimation -- 7.6 Summary -- References -- Index -- About the Editors -- Back Cover.
    Additional Edition: ISBN 87-93609-69-8
    Language: English
  • 6
    Online Resource
    Gistrup, Denmark : River Publishers
    UID: edocfu_9960868942502883
    Format: 1 online resource (236 pages).
    Edition: 1st ed.
    ISBN: 1-00-333841-0, 1-000-79156-4, 1-003-33841-0, 1-000-79468-7, 87-93609-62-0
    Series Statement: River Publishers series in information science and technology
    Content: Journal of Cyber Security and Mobility provides an in-depth and holistic view of security and solutions from practical to theoretical aspects. It covers topics that are equally valuable for practitioners as well as those new in the field.
    Note: Front Cover -- Half Title page -- RIVER PUBLISHERS SERIES IN INFORMATION SCIENCE AND TECHNOLOGY -- Title page -- Copyright page -- Contents -- Preface -- List of Contributors -- List of Figures -- List of Tables -- List of Abbreviations -- Chapter 1 - Introduction -- 1.1 Introduction -- 1.1.1 The Convergence of High-performance and Embedded Computing Domains -- 1.1.2 Parallelization Challenge -- 1.2 The P-SOCRATES Project -- 1.3 Challenges Addressed in This Book -- 1.3.1 Compiler Analysis of Parallel Programs -- 1.3.2 Predictable Scheduling of Parallel Tasks on Many-core Systems -- 1.3.3 Methodology for Measurement-based Timing Analysis -- 1.3.4 Optimized OpenMP Tasking Runtime System -- 1.3.5 Real-time Operating Systems -- 1.4 The UpScale SDK -- 1.5 Summary -- References -- Chapter 2 - Manycore Platforms -- 2.1 Introduction -- 2.2 Manycore Architectures -- 2.2.1 Xeon Phi -- 2.2.2 Pezy SC -- 2.2.3 NVIDIA Tegra X1 -- 2.2.4 Tilera Tile -- 2.2.5 STMicroelectronics STHORM -- 2.2.6 Epiphany-V -- 2.2.7 TI Keystone II -- 2.2.8 Kalray MPPA-256 -- 2.2.8.1 The I/O subsystem -- 2.2.8.2 The Network-on-Chip (NoC) -- 2.2.8.3 The Host-to-IOS communication protocol -- 2.2.8.4 Internal architecture of the compute clusters -- 2.2.8.5 The shared memory -- 2.3 Summary -- References -- Chapter 3 - Predictable Parallel Programming with OpenMP -- 3.1 Introduction -- 3.1.1 Introduction to Parallel Programming Models -- 3.1.1.1 POSIX threads -- 3.1.1.2 OpenCL™ -- 3.1.1.3 NVIDIA® CUDA -- 3.1.1.4 Intel® Cilk™ Plus -- 3.1.1.5 Intel® TBB -- 3.1.1.6 OpenMP -- 3.2 The OpenMP Parallel Programming Model -- 3.2.1 Introduction and Evolution of OpenMP -- 3.2.2 Parallel Model of OpenMP -- 3.2.2.1 Execution model -- 3.2.2.2 Acceleration model -- 3.2.2.3 Memory model -- 3.2.3 An OpenMP Example -- 3.3 Timing Properties of the OpenMP Tasking Model -- 3.3.1 Sporadic DAG Scheduling Model of Parallel Applications -- 3.3.2 Understanding the OpenMP Tasking Model -- 3.3.3 OpenMP and Timing Predictability -- 3.3.3.1 Extracting the DAG of an OpenMP program -- 3.3.3.2 WCET analysis is applied to tasks and task parts -- 3.3.3.3 DAG-based scheduling must not violate the TSCs -- 3.4 Extracting the Timing Information of an OpenMP Program -- 3.4.1 Parallel Structure Stage -- 3.4.1.1 Parallel control flow analysis -- 3.4.1.2 Induction variables analysis -- 3.4.1.3 Reaching definitions and range analysis -- 3.4.1.4 Putting all together: The wave-front example -- 3.4.2 Task Expansion Stage -- 3.4.2.1 Control flow expansion and synchronization predicate resolution -- 3.4.2.2 tid: A unique task instance identifier -- 3.4.2.3 Missing information when deriving the DAG -- 3.4.3 Compiler Complexity -- 3.5 Summary -- References -- Chapter 4 - Mapping, Scheduling, and Schedulability Analysis -- 4.1 Introduction -- 4.2 System Model -- 4.3 Partitioned Scheduler -- 4.3.1 The Optimality of EDF on Preemptive Uniprocessors -- 4.3.2 FP-scheduling Algorithms -- 4.3.3 Limited Preemption Scheduling -- 4.3.4 Limited Preemption Schedulability Analysis -- 4.4 Global Scheduler with Migration Support -- 4.4.1 Migration-based Scheduler -- 4.4.2 Putting All Together -- 4.4.3 Implementation of a Limited Preemption Scheduler -- 4.5 Overall Schedulability Analysis -- 4.5.1 Model Formalization -- 4.5.2 Critical Interference of cp-tasks -- 4.5.3 Response Time Analysis -- 4.5.3.1 Inter-task interference -- 4.5.3.2 Intra-task interference -- 4.5.3.3 Computation of cp-task parameters -- 4.5.4 Non-conditional DAG Tasks -- 4.5.5 Series-Parallel Conditional DAG Tasks -- 4.5.6 Schedulability Condition -- 4.6 Specializing Analysis for Limited Pre-emption Global/Dynamic Approach -- 4.6.1 Blocking Impact of the Largest NPRs (LP-max) -- 4.6.2 Blocking Impact of the Largest Parallel NPRs (LP-ILP) -- 4.6.2.1 LP worst-case workload of a task executing on c cores -- 4.6.2.2 Overall LP worst-case workload -- 4.6.2.3 Lower-priority interference -- 4.6.3 Computation of Response Time Factors of LP-ILP -- 4.6.3.1 Worst-case workload of τi executing on c cores -- 4.6.3.2 Overall LP worst-case workload of lp(k) per execution scenario sl -- 4.6.4 Complexity -- 4.7 Specializing Analysis for the Partitioned/Static Approach -- 4.7.1 ILP Formulation -- 4.7.1.1 Tied tasks -- 4.7.1.2 Untied tasks -- 4.7.1.3 Complexity -- 4.7.2 Heuristic Approaches -- 4.7.2.1 Tied tasks -- 4.7.2.2 Untied tasks -- 4.7.3 Integrating Interference from Additional RT Tasks -- 4.7.4 Critical Instant -- 4.7.5 Response-time Upper Bound -- 4.8 Scheduling for I/O Cores -- 4.9 Summary -- References -- Chapter 5 - Timing Analysis Methodology -- 5.1 Introduction -- 5.1.1 Static WCET Analysis Techniques -- 5.1.2 Measurement-based WCET Analysis Techniques -- 5.1.3 Hybrid WCET Techniques -- 5.1.4 Measurement-based Probabilistic Techniques -- 5.2 Our Choice of Methodology for WCET Estimation -- 5.2.1 Why Not Use Static Approaches? -- 5.2.2 Why Use Measurement-based Techniques? -- 5.3 Description of Our Timing Analysis Methodology -- 5.3.1 Intrinsic vs. Extrinsic Execution Times -- 5.3.2 The Concept of Safety Margins -- 5.3.3 Our Proposed Timing Methodology at a Glance -- 5.3.4 Overview of the Application Structure -- 5.3.5 Automatic Insertion and Removal of the Trace-points -- 5.3.5.1 How to insert the trace-points -- 5.3.5.2 How to remove the trace-points -- 5.3.6 Extract the Intrinsic Execution Time: The Isolation Mode -- 5.3.7 Extract the Extrinsic Execution Time: The Contention Mode -- 5.3.8 Extract the Execution Time in Real Situation: The Deployment Mode -- 5.3.9 Derive WCET Estimates -- 5.4 Summary -- References -- Chapter 6 - OpenMP Runtime -- 6.1 Introduction -- 6.2 Offloading Library Design -- 6.3 Tasking Runtime -- 6.3.1 Task Dependency Management -- 6.4 Experimental Results -- 6.4.1 Offloading Library -- 6.4.2 Tasking Runtime -- 6.4.2.1 Applications with a linear generation pattern -- 6.4.2.2 Applications with a recursive generation pattern -- 6.4.2.3 Applications with mixed patterns -- 6.4.2.4 Impact of cutoff on LINEAR and RECURSIVE applications -- 6.4.2.5 Real applications -- 6.4.3 Evaluation of the Task Dependency Mechanism -- 6.4.3.1 Performance speedup and memory usage -- 6.4.3.2 The task dependency mechanism on the MPPA -- 6.5 Summary -- References -- Chapter 7 - Embedded Operating Systems -- 7.1 Introduction -- 7.2 State of The Art -- 7.2.1 Real-time Support in Linux -- 7.2.1.1 Hard real-time support -- 7.2.1.2 Latency reduction -- 7.2.1.3 Real-time CPU scheduling -- 7.2.2 Survey of Existing Embedded RTOSs -- 7.2.3 Classification of Embedded RTOSs -- 7.3 Requirements for The Choice of The Run Time System -- 7.3.1 Programming Model -- 7.3.2 Preemption Support -- 7.3.3 Migration Support -- 7.3.4 Scheduling Characteristics -- 7.3.5 Timing Analysis -- 7.4 RTOS Selection -- 7.4.1 Host Processor -- 7.4.2 Manycore Processor -- 7.5 Operating System Support -- 7.5.1 Linux -- 7.5.2 ERIKA Enterprise Support -- 7.5.2.1 Exokernel support -- 7.5.2.2 Single-ELF multicore ERIKA Enterprise -- 7.5.2.3 Support for limited preemption, job, and global scheduling -- 7.5.2.4 New ERIKA Enterprise primitives -- 7.5.2.5 New data structures -- 7.5.2.6 Dynamic task creation -- 7.5.2.7 IRQ handlers as tasks -- 7.5.2.8 File hierarchy -- 7.5.2.9 Early performance estimation -- 7.6 Summary -- References -- Index -- About the Editors -- Back Cover.
    Additional Edition: ISBN 87-93609-69-8
    Language: English
  • 7
    Online Resource
    Gistrup, Denmark : River Publishers
    UID: edoccha_9960868942502883
    Format: 1 online resource (236 pages).
    Edition: 1st ed.
    ISBN: 1-00-333841-0, 1-000-79156-4, 1-003-33841-0, 1-000-79468-7, 87-93609-62-0
    Series Statement: River Publishers series in information science and technology
    Content: Journal of Cyber Security and Mobility provides an in-depth and holistic view of security and solutions from practical to theoretical aspects. It covers topics that are equally valuable for practitioners as well as those new in the field.
    Note: Front Cover -- Half Title page -- RIVER PUBLISHERS SERIES IN INFORMATION SCIENCE AND TECHNOLOGY -- Title page -- Copyright page -- Contents -- Preface -- List of Contributors -- List of Figures -- List of Tables -- List of Abbreviations -- Chapter 1 - Introduction -- 1.1 Introduction -- 1.1.1 The Convergence of High-performance and Embedded Computing Domains -- 1.1.2 Parallelization Challenge -- 1.2 The P-SOCRATES Project -- 1.3 Challenges Addressed in This Book -- 1.3.1 Compiler Analysis of Parallel Programs -- 1.3.2 Predictable Scheduling of Parallel Tasks on Many-core Systems -- 1.3.3 Methodology for Measurement-based Timing Analysis -- 1.3.4 Optimized OpenMP Tasking Runtime System -- 1.3.5 Real-time Operating Systems -- 1.4 The UpScale SDK -- 1.5 Summary -- References -- Chapter 2 - Manycore Platforms -- 2.1 Introduction -- 2.2 Manycore Architectures -- 2.2.1 Xeon Phi -- 2.2.2 Pezy SC -- 2.2.3 NVIDIA Tegra X1 -- 2.2.4 Tilera Tile -- 2.2.5 STMicroelectronics STHORM -- 2.2.6 Epiphany-V -- 2.2.7 TI Keystone II -- 2.2.8 Kalray MPPA-256 -- 2.2.8.1 The I/O subsystem -- 2.2.8.2 The Network-on-Chip (NoC) -- 2.2.8.3 The Host-to-IOS communication protocol -- 2.2.8.4 Internal architecture of the compute clusters -- 2.2.8.5 The shared memory -- 2.3 Summary -- References -- Chapter 3 - Predictable Parallel Programming with OpenMP -- 3.1 Introduction -- 3.1.1 Introduction to Parallel Programming Models -- 3.1.1.1 POSIX threads -- 3.1.1.2 OpenCL™ -- 3.1.1.3 NVIDIA® CUDA -- 3.1.1.4 Intel® Cilk™ Plus -- 3.1.1.5 Intel® TBB -- 3.1.1.6 OpenMP -- 3.2 The OpenMP Parallel Programming Model -- 3.2.1 Introduction and Evolution of OpenMP -- 3.2.2 Parallel Model of OpenMP -- 3.2.2.1 Execution model -- 3.2.2.2 Acceleration model -- 3.2.2.3 Memory model -- 3.2.3 An OpenMP Example -- 3.3 Timing Properties of the OpenMP Tasking Model -- 3.3.1 Sporadic DAG Scheduling Model of Parallel Applications -- 3.3.2 Understanding the OpenMP Tasking Model -- 3.3.3 OpenMP and Timing Predictability -- 3.3.3.1 Extracting the DAG of an OpenMP program -- 3.3.3.2 WCET analysis is applied to tasks and task parts -- 3.3.3.3 DAG-based scheduling must not violate the TSCs -- 3.4 Extracting the Timing Information of an OpenMP Program -- 3.4.1 Parallel Structure Stage -- 3.4.1.1 Parallel control flow analysis -- 3.4.1.2 Induction variables analysis -- 3.4.1.3 Reaching definitions and range analysis -- 3.4.1.4 Putting all together: The wave-front example -- 3.4.2 Task Expansion Stage -- 3.4.2.1 Control flow expansion and synchronization predicate resolution -- 3.4.2.2 tid: A unique task instance identifier -- 3.4.2.3 Missing information when deriving the DAG -- 3.4.3 Compiler Complexity -- 3.5 Summary -- References -- Chapter 4 - Mapping, Scheduling, and Schedulability Analysis -- 4.1 Introduction -- 4.2 System Model -- 4.3 Partitioned Scheduler -- 4.3.1 The Optimality of EDF on Preemptive Uniprocessors -- 4.3.2 FP-scheduling Algorithms -- 4.3.3 Limited Preemption Scheduling -- 4.3.4 Limited Preemption Schedulability Analysis -- 4.4 Global Scheduler with Migration Support -- 4.4.1 Migration-based Scheduler -- 4.4.2 Putting All Together -- 4.4.3 Implementation of a Limited Preemption Scheduler -- 4.5 Overall Schedulability Analysis -- 4.5.1 Model Formalization -- 4.5.2 Critical Interference of cp-tasks -- 4.5.3 Response Time Analysis -- 4.5.3.1 Inter-task interference -- 4.5.3.2 Intra-task interference -- 4.5.3.3 Computation of cp-task parameters -- 4.5.4 Non-conditional DAG Tasks -- 4.5.5 Series-Parallel Conditional DAG Tasks -- 4.5.6 Schedulability Condition -- 4.6 Specializing Analysis for Limited Pre-emption Global/Dynamic Approach -- 4.6.1 Blocking Impact of the Largest NPRs (LP-max) -- 4.6.2 Blocking Impact of the Largest Parallel NPRs (LP-ILP) -- 4.6.2.1 LP worst-case workload of a task executing on c cores -- 4.6.2.2 Overall LP worst-case workload -- 4.6.2.3 Lower-priority interference -- 4.6.3 Computation of Response Time Factors of LP-ILP -- 4.6.3.1 Worst-case workload of τi executing on c cores -- 4.6.3.2 Overall LP worst-case workload of lp(k) per execution scenario sl -- 4.6.4 Complexity -- 4.7 Specializing Analysis for the Partitioned/Static Approach -- 4.7.1 ILP Formulation -- 4.7.1.1 Tied tasks -- 4.7.1.2 Untied tasks -- 4.7.1.3 Complexity -- 4.7.2 Heuristic Approaches -- 4.7.2.1 Tied tasks -- 4.7.2.2 Untied tasks -- 4.7.3 Integrating Interference from Additional RT Tasks -- 4.7.4 Critical Instant -- 4.7.5 Response-time Upper Bound -- 4.8 Scheduling for I/O Cores -- 4.9 Summary -- References -- Chapter 5 - Timing Analysis Methodology -- 5.1 Introduction -- 5.1.1 Static WCET Analysis Techniques -- 5.1.2 Measurement-based WCET Analysis Techniques -- 5.1.3 Hybrid WCET Techniques -- 5.1.4 Measurement-based Probabilistic Techniques -- 5.2 Our Choice of Methodology for WCET Estimation -- 5.2.1 Why Not Use Static Approaches? -- 5.2.2 Why Use Measurement-based Techniques? -- 5.3 Description of Our Timing Analysis Methodology -- 5.3.1 Intrinsic vs. Extrinsic Execution Times -- 5.3.2 The Concept of Safety Margins -- 5.3.3 Our Proposed Timing Methodology at a Glance -- 5.3.4 Overview of the Application Structure -- 5.3.5 Automatic Insertion and Removal of the Trace-points -- 5.3.5.1 How to insert the trace-points -- 5.3.5.2 How to remove the trace-points -- 5.3.6 Extract the Intrinsic Execution Time: The Isolation Mode -- 5.3.7 Extract the Extrinsic Execution Time: The Contention Mode -- 5.3.8 Extract the Execution Time in Real Situation: The Deployment Mode -- 5.3.9 Derive WCET Estimates -- 5.4 Summary -- References -- Chapter 6 - OpenMP Runtime -- 6.1 Introduction -- 6.2 Offloading Library Design -- 6.3 Tasking Runtime -- 6.3.1 Task Dependency Management -- 6.4 Experimental Results -- 6.4.1 Offloading Library -- 6.4.2 Tasking Runtime -- 6.4.2.1 Applications with a linear generation pattern -- 6.4.2.2 Applications with a recursive generation pattern -- 6.4.2.3 Applications with mixed patterns -- 6.4.2.4 Impact of cutoff on LINEAR and RECURSIVE applications -- 6.4.2.5 Real applications -- 6.4.3 Evaluation of the Task Dependency Mechanism -- 6.4.3.1 Performance speedup and memory usage -- 6.4.3.2 The task dependency mechanism on the MPPA -- 6.5 Summary -- References -- Chapter 7 - Embedded Operating Systems -- 7.1 Introduction -- 7.2 State of The Art -- 7.2.1 Real-time Support in Linux -- 7.2.1.1 Hard real-time support -- 7.2.1.2 Latency reduction -- 7.2.1.3 Real-time CPU scheduling -- 7.2.2 Survey of Existing Embedded RTOSs -- 7.2.3 Classification of Embedded RTOSs -- 7.3 Requirements for The Choice of The Run Time System -- 7.3.1 Programming Model -- 7.3.2 Preemption Support -- 7.3.3 Migration Support -- 7.3.4 Scheduling Characteristics -- 7.3.5 Timing Analysis -- 7.4 RTOS Selection -- 7.4.1 Host Processor -- 7.4.2 Manycore Processor -- 7.5 Operating System Support -- 7.5.1 Linux -- 7.5.2 ERIKA Enterprise Support -- 7.5.2.1 Exokernel support -- 7.5.2.2 Single-ELF multicore ERIKA Enterprise -- 7.5.2.3 Support for limited preemption, job, and global scheduling -- 7.5.2.4 New ERIKA Enterprise primitives -- 7.5.2.5 New data structures -- 7.5.2.6 Dynamic task creation -- 7.5.2.7 IRQ handlers as tasks -- 7.5.2.8 File hierarchy -- 7.5.2.9 Early performance estimation -- 7.6 Summary -- References -- Index -- About the Editors -- Back Cover.
    Additional Edition: ISBN 87-93609-69-8
    Language: English