  • 1
    UID: almafu_BV049526971
    Format: 1 online resource (XIX, 396 p., 41 illus.)
    ISBN: 978-3-031-40180-0
    Series Statement: International series in operations research & management science 349
    Additional Edition: Also available as a print edition, ISBN 978-3-031-40179-4
    Additional Edition: Also available as a print edition, ISBN 978-3-031-40181-7
    Additional Edition: Also available as a print edition, ISBN 978-3-031-40182-4
    Language: English
    Subjects: Economics
    URL: Full text (URL of the original publisher)
    Author information: Pickl, Stefan 1967-
  • 2
    UID: almahu_9949709239202882
    Format: XIX, 396 p., 41 illus., online resource.
    Edition: 1st ed. 2024.
    ISBN: 9783031401800
    Series Statement: International Series in Operations Research & Management Science, 349
    Content: This book presents recent findings and results concerning, in particular, the solution of finite state-space Markov decision problems and the determination of Nash equilibria for related stochastic games with average and total expected discounted reward payoffs. In addition, it focuses on a new class of stochastic games: stochastic positional games, which extend and generalize the classic deterministic positional games. It presents new algorithmic results on the suitable implementation of quasi-monotonic programming techniques. Moreover, it describes applications of positional games within a class of multi-objective discrete control problems and hierarchical control problems on networks. Given its scope, the book will benefit all researchers and graduate students who are interested in Markov theory, control theory, optimization and games. (A minimal value-iteration sketch for the discounted problems mentioned here follows this record.)
    Note: Discrete Markov Processes and Numerical Algorithms for Markov Chains -- Markov Decision Processes and Stochastic Control Problems on Networks -- Stochastic Games and Positional Games on Networks.
    In: Springer Nature eBook
    Additional Edition: Printed edition: ISBN 9783031401794
    Additional Edition: Printed edition: ISBN 9783031401817
    Additional Edition: Printed edition: ISBN 9783031401824
    Language: English
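The record above describes solution methods for finite state-space Markov decision problems with discounted rewards. As a rough, illustrative sketch only (not taken from the book or from this catalog record), the following Python fragment shows standard value iteration for such a problem; the function name, array layout, and the toy two-state data are assumptions made for this example.

    # Minimal illustrative sketch: value iteration for a finite state-space
    # Markov decision problem with a discounted reward criterion.
    # All names and the toy data below are assumptions, not taken from the book.
    import numpy as np

    def value_iteration(P, R, gamma=0.9, tol=1e-8):
        # P[a, s, s2]: probability of moving from state s to s2 under action a
        # R[a, s]:     expected one-step reward for taking action a in state s
        # gamma:       discount factor (0 < gamma < 1 guarantees convergence)
        n_actions, n_states, _ = P.shape
        V = np.zeros(n_states)
        while True:
            # Q[a, s] = immediate reward + discounted expected future value
            Q = R + gamma * (P @ V)
            V_new = Q.max(axis=0)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new, Q.argmax(axis=0)  # optimal values, stationary policy
            V = V_new

    # Toy two-state, two-action instance (purely hypothetical numbers)
    P = np.array([[[0.8, 0.2], [0.1, 0.9]],
                  [[0.5, 0.5], [0.6, 0.4]]])
    R = np.array([[1.0, 0.0],
                  [0.5, 2.0]])
    values, policy = value_iteration(P, R)
    print(values, policy)

The contraction property of the discounted Bellman operator is what makes this simple fixed-point iteration converge; the book's later chapters treat the game-theoretic generalizations, which this sketch does not cover.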
  • 3
    UID: edoccha_9961418405502883
    Format: 1 online resource (412 pages)
    Edition: First edition.
    ISBN: 3-031-40180-8
    Series Statement: International Series in Operations Research and Management Science Series ; Volume 349
    Note: Intro -- Preface -- Contents -- Notation -- 1 Discrete Markov Processes and Numerical Algorithms for Markov Chains -- 1.1 Definitions and Some Preliminary Results -- 1.1.1 Stochastic Processes and Markov Chains -- 1.1.2 State-Time Probabilities in a Markov Chain -- 1.1.3 Limiting Probabilities and Stationary Distributions -- 1.1.4 Definition of the Limiting Matrix -- 1.1.5 Classification of States for a Markov Chain -- 1.1.6 An Algorithm for Determining the Limiting Matrix -- 1.1.7 An Approximation Algorithm for Limiting Probabilities Based on the Ergodicity Condition -- 1.2 Asymptotic Behavior of State-Time Probabilities -- 1.3 Determining the Limiting Matrix Based on the z-Transform -- 1.3.1 Main Results -- 1.3.2 Constructing the Characteristic Polynomial -- 1.3.3 Determining the z-Transform Function -- 1.3.4 The Algorithm for Calculating the Limiting Matrix -- 1.4 An Approach for Finding the Differential Matrices -- 1.4.1 The General Scheme of the Algorithm -- 1.4.2 Linear Recurrent Equations and Their Main Properties -- 1.4.3 The Main Results and the Algorithm -- 1.4.4 Comments on the Complexity of the Algorithm -- 1.5 An Algorithm to Find the Limiting and Differential Matrices -- 1.5.1 The Representation of the z-Transform -- 1.5.2 Expansion of the z-Transform -- 1.5.3 The Main Conclusion and the Algorithm -- 1.6 Fast Computing Schemes for Limiting and Differential Matrices -- 1.6.1 Fast Matrix Multiplication and Matrix Inversion -- 1.6.2 Determining the Characteristic Polynomial and Resuming the Matrix Polynomial -- 1.6.3 A Modified Algorithm to Find the Limiting Matrix -- 1.6.4 A Modified Algorithm to Find Differential Matrices -- 1.7 Dynamic Programming Algorithms for Markov Chains -- 1.7.1 Determining the State-Time Probabilities with Restrictions on the Number of Transitions. 
    1.7.2 An Approach to Finding the Limiting Probabilities Based on Dynamic Programming -- 1.7.3 A Modified Algorithm to Find the Limiting Matrix -- 1.7.4 Calculation of the First Hitting Probability of a State -- 1.8 State-Time Probabilities for Non-stationary Markov Processes -- 1.9 Markov Processes with Rewards -- 1.9.1 The Expected Total Reward -- 1.9.2 Asymptotic Behavior of the Expected Total Reward -- 1.9.3 The Expected Total Reward for Non-stationary Processes -- 1.9.4 The Variance of the Expected Total Reward -- 1.10 Markov Processes with Discounted Rewards -- 1.11 Semi-Markov Processes with Rewards -- 1.12 Expected Total Reward for Processes with Stopping States -- 2 Markov Decision Processes and Stochastic Control Problems on Networks -- 2.1 Markov Decision Processes -- 2.1.1 Model Formulation and Basic Problems -- 2.1.2 Optimality Criteria for Markov Decision Processes -- 2.2 Finite Horizon Markov Decision Problems -- 2.2.1 Optimality Equations for Finite Horizon Problems -- 2.2.2 The Backward Induction Algorithm -- 2.3 Discounted Markov Decision Problems -- 2.3.1 The Optimality Equation and Algorithms -- 2.3.2 The Linear Programming Approach -- 2.3.3 A Nonlinear Model for the Discounted Problem -- 2.3.4 The Quasi-monotonic Programming Approach -- 2.4 Average Markov Decision Problems -- 2.4.1 The Main Results for the Unichain Model -- 2.4.2 Linear Programming for a Unichain Problem -- 2.4.3 A Nonlinear Model for the Unichain Problem -- 2.4.4 Optimality Equations for Multichain Processes -- 2.4.5 Linear Programming for Multichain Problems -- 2.4.6 A Nonlinear Model for the Multichain Problem -- 2.4.7 A Quasi-monotonic Programming Approach -- 2.5 Stochastic Discrete Control Problems on Networks -- 2.5.1 Deterministic Discrete Optimal Control Problems -- 2.5.2 Stochastic Discrete Optimal Control Problems -- 2.6 Average Stochastic Control Problems on Networks -- 2.6.1 Problem Formulation -- 2.6.2 Algorithms for Solving Average Control Problems -- 2.6.3 Linear Programming for Unichain Control Problems -- 2.6.4 Optimality Equations for an Average Control Problem -- 2.6.5 Linear Programming for Multichain Control Problems -- 2.6.6 An Iterative Algorithm Based on a Unichain Model -- 2.6.7 Markov Decision Problems vs. Control on Networks
    -- 2.7 Discounted Control Problems on Networks -- 2.7.1 Problem Formulation -- 2.7.2 Optimality Equations and Algorithms -- 2.8 Decision Problems with Stopping States -- 2.8.1 Problem Formulation and Main Results -- 2.8.2 Optimal Control on Networks with Stopping States -- 2.9 Deterministic Control Problems on Networks -- 2.9.1 Dynamic Programming for Finite Horizon Problems -- 2.9.2 Optimal Paths in Networks with Rated Costs -- 2.9.3 Control Problems with Varying Time of Transitions -- 2.9.4 Reduction of the Problem in the Case of Unit Time of State Transitions -- 3 Stochastic Games and Positional Games on Networks -- 3.1 Foundation and Development of Stochastic Games -- 3.2 Nash Equilibria Results for Non-cooperative Games -- 3.3 Formulation of Stochastic Games -- 3.3.1 The Framework of an m-Person Stochastic Game -- 3.3.2 Stationary, Non-stationary, and Markov Strategies -- 3.3.3 Stochastic Games in Pure and Mixed Strategies -- 3.4 Stationary Equilibria for Discounted Stochastic Games -- 3.5 On Nash Equilibria for Average Stochastic Games -- 3.5.1 Stationary Equilibria for Unichain Games -- 3.5.2 Some Results for Multichain Stochastic Games -- 3.5.3 Equilibria for Two-Player Average Stochastic Games -- 3.5.4 The Big Match and the Paris Match -- 3.5.5 A Cubic Three-Person Average Game -- 3.6 Stochastic Positional Games -- 3.6.1 The Framework of a Stochastic Positional Game -- 3.6.2 Positional Games in Pure and Mixed Strategies -- 3.6.3 Stationary Equilibria for Average Positional Games -- 3.6.4 Average Positional Games on Networks -- 3.6.5 Pure Stationary Nash Equilibria for Unichain Stochastic Positional Games -- 3.6.6 Pure Nash Equilibria Conditions for Cyclic Games -- 3.6.7 Pure Stationary Equilibria for Two-Player Zero-Sum Average Positional Games -- 3.6.8 Pure Stationary Equilibria for Discounted Stochastic Positional Games -- 3.6.9 Pure Nash Equilibria for Discounted Games on Networks -- 3.7 Single-Controller Stochastic Games -- 3.7.1 Single-Controller Discounted Stochastic Games -- 3.7.2 Single-Controller Average Stochastic Games -- 3.8 Switching Controller Stochastic Games -- 3.8.1 Formulation of Switching Controller Stochastic Games -- 3.8.2 Discounted Switching Controller Stochastic Games -- 3.8.3 Average Switching Controller Stochastic Games -- 3.9 Stochastic Games with a Stopping State -- 3.9.1 Stochastic Positional Games with a Stopping State -- 3.9.2 Positional Games on Networks with a Stopping State -- 3.10 Nash Equilibria for Dynamic c-Games on Networks -- 3.11 Two-Player Zero-Sum Positional Games on Networks -- 3.11.1 An Algorithm for Games on Acyclic Networks -- 3.11.2 The Main Results for the Games on Arbitrary Networks -- 3.11.3 Determining the Optimal Strategies of the Players -- 3.11.4 An Algorithm for Zero-Sum Dynamic c-Games -- 3.12 Acyclic l-Games on Networks -- 3.12.1 Problem Formulation -- 3.12.2 The Main Properties of Acyclic l-Games -- 3.12.3 An Algorithm for Solving Acyclic l-Games -- 3.13 Determining the Optimal Strategies for Cyclic Games -- 3.13.1 Problem Formulation and the Main Properties -- 3.13.2 Some Preliminary Results -- 3.13.3 The Reduction of Cyclic Games to Ergodic Ones -- 3.13.4 An Algorithm for Ergodic Cyclic Games -- 3.13.5 An Algorithm Based on the Reduction of Acyclic l-Games
    -- 3.13.6 A Dichotomy Method for Cyclic Games -- 3.14 Multi-Objective Control Based on the Concept of Non-cooperative Games: Nash Equilibria -- 3.14.1 Stationary and Non-stationary Control Models -- 3.14.2 Infinite Horizon Multi-Objective Control Problems -- 3.15 Hierarchical Control and Stackelberg's Optimization Principle -- 3.16 Multi-Objective Control Based on the Concept of Cooperative Games: Pareto Optima -- 3.17 Alternate Players' Control Conditions and Nash Equilibria for Dynamic Games in Positional Form -- 3.18 Stackelberg Solutions for Hierarchical Control Problems -- 3.18.1 Stackelberg Solutions for Static Games -- 3.18.2 Hierarchical Control on Networks -- 3.18.3 Optimal Stackelberg Strategies on Acyclic Networks -- 3.18.4 An Algorithm for Hierarchical Control Problems -- References -- Index.
    Additional Edition: Print version: Lozovanu, Dmitrii. Markov Decision Processes and Stochastic Positional Games. Cham : Springer International Publishing AG, c2023. ISBN 9783031401794
    Language: English