  • 1
    UID: almahu_9949198400802882
    Extent: VIII, 565 p., online resource.
    Edition: 1st ed. 2002.
    ISBN: 9781461508052
    Series: International Series in Operations Research & Management Science, 40
    Contents: Eugene A. Feinberg, Adam Shwartz. This volume deals with the theory of Markov Decision Processes (MDPs) and their applications. Each chapter was written by a leading expert in the respective area. The papers cover major research areas and methodologies, and discuss open questions and future research directions. The papers can be read independently, given the basic notation and concepts of Section 1.2. Most chapters should be accessible to graduate or advanced undergraduate students in the fields of operations research, electrical engineering, and computer science. 1.1 AN OVERVIEW OF MARKOV DECISION PROCESSES The theory of Markov Decision Processes, also known under several other names including sequential stochastic optimization, discrete-time stochastic control, and stochastic dynamic programming, studies sequential optimization of discrete-time stochastic systems. The basic object is a discrete-time stochastic system whose transition mechanism can be controlled over time. Each control policy defines the stochastic process and the values of objective functions associated with this process. The goal is to select a "good" control policy. In real life, decisions that humans and computers make on all levels usually have two types of impacts: (i) they cost or save time, money, or other resources, or they bring revenues, as well as (ii) they have an impact on the future, by influencing the dynamics. In many situations, decisions with the largest immediate profit may not be good in view of future events. MDPs model this paradigm and provide results on the structure and existence of good policies and on methods for their calculation.
    Note: 1 Introduction -- I Finite State and Action Models -- 2 Finite State and Action MDPs -- 3 Bias Optimality -- 4 Singular Perturbations of Markov Chains and Decision Processes -- II Infinite State Models -- 5 Average Reward Optimization Theory for Denumerable State Spaces -- 6 Total Reward Criteria -- 7 Mixed Criteria -- 8 Blackwell Optimality -- 9 The Poisson Equation for Countable Markov Chains: Probabilistic Methods and Interpretations -- 10 Stability, Performance Evaluation, and Optimization -- 11 Convex Analytic Methods in Markov Decision Processes -- 12 The Linear Programming Approach -- 13 Invariant Gambling Problems and Markov Decision Processes -- III Applications -- 14 Neuro-Dynamic Programming: Overview and Recent Trends -- 15 Markov Decision Processes in Finance and Dynamic Options -- 16 Applications of Markov Decision Processes in Communication Networks -- 17 Water Reservoir Applications of Markov Decision Processes.
    In: Springer Nature eBook
    Other edition: Printed edition: ISBN 9780792374596
    Other edition: Printed edition: ISBN 9781461352488
    Other edition: Printed edition: ISBN 9781461508069
    Language: English
    Subject areas: Economics, Mathematics
    Keyword(s): Aufsatzsammlung (collection of articles)
    URL: Full text (original publisher's URL)
  • 2
    Online resource
    Boston, MA : Springer
    UID: gbv_749140461
    Extent: Online resource (VIII, 565 p), digital
    Edition: Springer eBook Collection. Business and Economics
    ISBN: 9781461508052
    Series: International Series in Operations Research & Management Science, 40
    Contents: The theory of Markov Decision Processes, also known under several other names including sequential stochastic optimization, discrete-time stochastic control, and stochastic dynamic programming, studies sequential optimization of discrete-time stochastic systems. Fundamentally, this is a methodology that examines and analyzes a discrete-time stochastic system whose transition mechanism can be controlled over time. Each control policy defines the stochastic process and the values of objective functions associated with this process. Its objective is to select a "good" control policy. In real life, decisions that humans and computers make on all levels usually have two types of impacts: (i) they cost or save time, money, or other resources, or they bring revenues, as well as (ii) they have an impact on the future, by influencing the dynamics. In many situations, decisions with the largest immediate profit may not be good in view of future events. Markov Decision Processes (MDPs) model this paradigm and provide results on the structure and existence of good policies and on methods for their calculation. MDPs are attractive to many researchers because they are important from both the practical and the intellectual points of view. MDPs provide tools for the solution of important real-life problems. In particular, many business and engineering applications use MDP models. Analysis of various problems arising in MDPs leads to a large variety of interesting mathematical and computational problems. Accordingly, the Handbook of Markov Decision Processes is split into three parts: Part I deals with models with finite state and action spaces, Part II deals with infinite state problems, and Part III examines specific applications. Individual chapters are written by leading experts on the subject.
    Other edition: ISBN 9781461352488
    Other edition: Also available as print edition: ISBN 9780792374596
    Other edition: Also available as print edition: ISBN 9781461352488
    Other edition: Also available as print edition: ISBN 9781461508069
    Language: English
    URL: Full text (license required)
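The abstracts above describe the MDP paradigm: a discrete-time stochastic system whose transitions can be controlled, with the goal of selecting a "good" policy that balances immediate reward against future impact. As an illustrative sketch only (the transition probabilities, rewards, and discount factor below are invented, not taken from the book), value iteration on a hypothetical two-state, two-action MDP:

```python
import numpy as np

# Hypothetical MDP: 2 states, 2 actions (all numbers invented for illustration).
# P[s, a, s'] = probability of moving to state s' when taking action a in state s.
P = np.array([
    [[0.8, 0.2], [0.1, 0.9]],   # transitions from state 0 under actions 0, 1
    [[0.5, 0.5], [0.0, 1.0]],   # transitions from state 1 under actions 0, 1
])
# R[s, a] = immediate reward for taking action a in state s.
R = np.array([
    [1.0, 0.0],
    [0.0, 2.0],
])
gamma = 0.9                      # discount factor weighing future impact

def value_iteration(P, R, gamma, tol=1e-10):
    """Return the optimal value function and a greedy optimal policy."""
    V = np.zeros(P.shape[0])
    while True:
        # Q[s, a] = R[s, a] + gamma * sum_{s'} P[s, a, s'] * V[s']
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

V, policy = value_iteration(P, R, gamma)
print(V, policy)
```

Here the myopically unattractive action 1 in state 0 (immediate reward 0) is optimal, because it steers the system toward state 1, where action 1 pays 2 forever; this is exactly the "largest immediate profit may not be good in view of future events" point made in the abstract.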