Export
  • 1
    Book
Cambridge: Cambridge University Press
UID: almafu_BV046831306
Format: xviii, 518 pages: illustrations, diagrams
    ISBN: 978-1-108-48682-8
Additional Edition: Also published as online edition (EPUB), ISBN 978-1-108-57140-1
    Language: English
    Subjects: Computer Science , Economics
Keywords: Markov decision process ; optimization ; machine learning ; multi-armed bandit
  • 2
    Online Resource
Cambridge: Cambridge University Press
UID: almahu_9948557362702882
Format: 1 online resource (xviii, 518 pages): digital, PDF file(s)
    ISBN: 9781108571401 (ebook)
    Content: Decision-making in the face of uncertainty is a significant challenge in machine learning, and the multi-armed bandit model is a commonly used framework to address it. This comprehensive and rigorous introduction to the multi-armed bandit problem examines all the major settings, including stochastic, adversarial, and Bayesian frameworks. A focus on both mathematical intuition and carefully worked proofs makes this an excellent reference for established researchers and a helpful resource for graduate students in computer science, engineering, statistics, applied mathematics and economics. Linear bandits receive special attention as one of the most useful models in applications, while other chapters are dedicated to combinatorial bandits, ranking, non-stationary problems, Thompson sampling and pure exploration. The book ends with a peek into the world beyond bandits with an introduction to partial monitoring and learning in Markov decision processes.
Note: Title from publisher's bibliographic system (viewed on 07 Jul 2020). Contents: Foundations of probability -- Stochastic processes and Markov chains -- Stochastic bandits -- Concentration of measure -- The explore-then-commit algorithm -- The upper confidence bound algorithm -- The upper confidence bound algorithm: asymptotic optimality -- The upper confidence bound algorithm: minimax optimality -- The upper confidence bound algorithm: Bernoulli noise -- The Exp3 algorithm -- The Exp3-IX algorithm -- Lower bounds: basic ideas -- Foundations of information theory -- Minimax lower bounds -- Instance dependent lower bounds -- High probability lower bounds -- Contextual bandits -- Stochastic linear bandits -- Confidence bounds for least squares estimators -- Optimal design for least squares estimators -- Stochastic linear bandits with finitely many arms -- Stochastic linear bandits with sparsity -- Minimax lower bounds for stochastic linear bandits -- Asymptotic lower bounds for stochastic linear bandits -- Foundations of convex analysis -- Exp3 for adversarial linear bandits -- Follow the regularized leader and mirror descent -- The relation between adversarial and stochastic linear bandits -- Combinatorial bandits -- Non-stationary bandits -- Ranking -- Pure exploration -- Foundations of Bayesian learning -- Bayesian bandits -- Thompson sampling -- Partial monitoring -- Markov decision processes.
    Additional Edition: Print version: ISBN 9781108486828
    Language: English
  • 3
    Online Resource
Cambridge: Cambridge University Press
UID: gbv_1726824764
Format: 1 online resource (xviii, 518 pages): digital, PDF file(s)
ISBN: 9781108571401, 9781108486828
Content: (identical to the description in record 2)
    Note: Title from publisher's bibliographic system (viewed on 07 Jul 2020)
Additional Edition: Also published as print edition, ISBN 9781108486828
    Language: English
    Subjects: Mathematics
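The content descriptions above summarize the multi-armed bandit model, and the table of contents in record 2 names the upper confidence bound algorithm. As a quick illustration (not taken from the book), here is a minimal UCB1 sketch for Bernoulli-reward arms; the names `arm_means` and `horizon` and the simulated reward setup are illustrative assumptions:

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Run UCB1 on simulated Bernoulli arms with the given true means.

    Plays each arm once, then always pulls the arm maximizing
    empirical mean + sqrt(2 * ln(t) / n_i). Returns pulls per arm.
    """
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k   # number of pulls per arm
    sums = [0.0] * k   # cumulative reward per arm
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1  # initialization: play each arm once
        else:
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts

# Over a long horizon, the best arm (mean 0.8) should attract
# the large majority of pulls, with suboptimal arms pulled O(log T) times.
pulls = ucb1([0.2, 0.5, 0.8], horizon=5000)
```

This is the basic algorithm covered in the "upper confidence bound" chapters; the book also analyzes its asymptotic and minimax optimality, which this sketch does not address.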