  • 1
    Book
    Amsterdam [et al.]: Elsevier, Academic Press
  • 2
    Online resource
    Amsterdam: Elsevier Academic Press
    UID: edoccha_BV042527205
    Extent: 1 online resource (xxi, 1050 pages): illustrations, diagrams
    ISBN: 978-0-12-801722-7, 0-12-801722-8
    Note: Includes bibliographical references and index. - "This tutorial text gives a unifying perspective on machine learning by covering both probabilistic and deterministic approaches--which are based on optimization techniques--together with the Bayesian inference approach, whose essence lies in the use of a hierarchy of probabilistic models"--Publisher's website
    Other edition: Also published as a print edition, ISBN 978-0-12-801522-3
    Language: English
    Subject areas: Computer science
    Subject(s): Machine learning ; Bayesian methods ; Optimization
    URL: Full text (URL of the original publisher)
  • 3
    Online resource
    Amsterdam: Elsevier Academic Press
    UID: edocfu_BV042527205
    Extent: 1 online resource (xxi, 1050 pages): illustrations, diagrams
    ISBN: 978-0-12-801722-7, 0-12-801722-8
    Note: Includes bibliographical references and index. - "This tutorial text gives a unifying perspective on machine learning by covering both probabilistic and deterministic approaches--which are based on optimization techniques--together with the Bayesian inference approach, whose essence lies in the use of a hierarchy of probabilistic models"--Publisher's website
    Other edition: Also published as a print edition, ISBN 978-0-12-801522-3
    Language: English
    Subject areas: Computer science
    Subject(s): Machine learning ; Bayesian methods ; Optimization
    URL: Full text (URL of the original publisher)
  • 4
    Online resource
    Amsterdam: Elsevier Academic Press
    UID: almafu_BV042527205
    Extent: 1 online resource (xxi, 1050 pages): illustrations, diagrams
    ISBN: 978-0-12-801722-7, 0-12-801722-8
    Note: Includes bibliographical references and index. - "This tutorial text gives a unifying perspective on machine learning by covering both probabilistic and deterministic approaches--which are based on optimization techniques--together with the Bayesian inference approach, whose essence lies in the use of a hierarchy of probabilistic models"--Publisher's website
    Other edition: Also published as a print edition, ISBN 978-0-12-801522-3
    Language: English
    Subject areas: Computer science
    Subject(s): Machine learning ; Bayesian methods ; Optimization
    URL: Full text (URL of the original publisher)
  • 5
    Online resource
    Amsterdam, [Netherlands]: Academic Press
    UID: almahu_9948320730502882
    Extent: 1 online resource (1,075 pages): illustrations
    ISBN: 9780128017227 (e-book)
    Other edition: Print version: Theodoridis, Sergios. Machine learning: a Bayesian and optimization perspective. Amsterdam, [Netherlands]: Academic Press, c2015. ISBN 9780128015223
    Language: English
    Subject(s): Electronic books.
  • 6
    Online resource
    Amsterdam, [Netherlands]: Academic Press
    UID: edoccha_9958115550302883
    Extent: 1 online resource (1075 p.)
    Edition: First edition.
    ISBN: 0-12-801722-8
    Series: .NET Developers Series
    Content: This tutorial text gives a unifying perspective on machine learning by covering both probabilistic and deterministic approaches (which are based on optimization techniques) together with the Bayesian inference approach, whose essence lies in the use of a hierarchy of probabilistic models. The book presents the major machine learning methods as they have been developed in different disciplines, such as statistics, statistical and adaptive signal processing, and computer science. Focusing on the physical reasoning behind the mathematics, all the various methods and techniques are explained in depth, supported by examples and problems, giving the student and researcher an invaluable resource for understanding and applying machine learning concepts. The book builds carefully from the basic classical methods to the most recent trends, with chapters written to be as self-contained as possible, making the text suitable for different courses: pattern recognition, statistical/adaptive signal processing, and statistical/Bayesian learning, as well as short courses on sparse modeling, deep learning, and probabilistic graphical models. All major classical techniques are covered: mean/least-squares regression and filtering, Kalman filtering, stochastic approximation and online learning, Bayesian classification, decision trees, logistic regression, and boosting methods. So are the latest trends: sparsity, convex analysis and optimization, online distributed algorithms, learning in RKH spaces, Bayesian inference, graphical and hidden Markov models, particle filtering, deep learning, dictionary learning, and latent variable modeling. Case studies (protein folding prediction, optical character recognition, text authorship identification, fMRI data analysis, change point detection, hyperspectral image unmixing, target localization, channel equalization, and echo cancellation) show how the theory can be applied. MATLAB code for all the main algorithms is available on an accompanying website, enabling the reader to experiment with the code.
    Note: Description based upon print version of record. Contents: Front Cover; Machine Learning: A Bayesian and Optimization Perspective; Copyright; Contents; Preface; Acknowledgments; Notation; Dedication; Chapter 1: Introduction; 1.1 What Machine Learning is About; 1.1.1 Classification; 1.1.2 Regression; 1.2 Structure and a Road Map of the Book; References; Chapter 2: Probability and Stochastic Processes; 2.1 Introduction; 2.2 Probability and Random Variables; 2.2.1 Probability; Relative frequency definition; Axiomatic definition; 2.2.2 Discrete Random Variables; Joint and conditional probabilities; Bayes theorem; 2.2.3 Continuous Random Variables; 2.2.4 Mean and Variance; Complex random variables; 2.2.5 Transformation of Random Variables; 2.3 Examples of Distributions; 2.3.1 Discrete Variables; The Bernoulli distribution; The Binomial distribution; The Multinomial distribution; 2.3.2 Continuous Variables; The uniform distribution; The Gaussian distribution; The central limit theorem; The exponential distribution; The beta distribution; The gamma distribution; The Dirichlet distribution; 2.4 Stochastic Processes; 2.4.1 First and Second Order Statistics; 2.4.2 Stationarity and Ergodicity; 2.4.3 Power Spectral Density; Properties of the autocorrelation sequence; Power spectral density; Transmission through a linear system; Physical interpretation of the PSD; 2.4.4 Autoregressive Models; 2.5 Information Theory; 2.5.1 Discrete Random Variables; Information; Mutual and conditional information; Entropy and average mutual information; 2.5.2 Continuous Random Variables; Average mutual information and conditional information; Relative entropy or Kullback-Leibler divergence; 2.6 Stochastic Convergence; Convergence everywhere; Convergence almost everywhere; Convergence in the mean-square sense; Convergence in probability; Convergence in distribution; Problems; References; Chapter 3: Learning in Parametric Modeling: Basic Concepts and Directions; 3.1 Introduction; 3.2 Parameter Estimation: The Deterministic Point of View; 3.3 Linear Regression; 3.4 Classification; Generative versus discriminative learning; Supervised, semisupervised, and unsupervised learning; 3.5 Biased Versus Unbiased Estimation; 3.5.1 Biased or Unbiased Estimation?; 3.6 The Cramér-Rao Lower Bound; 3.7 Sufficient Statistic; 3.8 Regularization; Inverse problems: Ill-conditioning and overfitting; 3.9 The Bias-Variance Dilemma; 3.9.1 Mean-Square Error Estimation; 3.9.2 Bias-Variance Tradeoff; 3.10 Maximum Likelihood Method; 3.10.1 Linear Regression: The Nonwhite Gaussian Noise Case; 3.11 Bayesian Inference; 3.11.1 The Maximum A Posteriori Probability Estimation Method; 3.12 Curse of Dimensionality; 3.13 Validation; Cross-validation; 3.14 Expected and Empirical Loss Functions; 3.15 Nonparametric Modeling and Estimation; Problems; References; Chapter 4: Mean-Square Error Linear Estimation; 4.1 Introduction; 4.2 Mean-Square Error Linear Estimation: The Normal Equations; 4.2.1 The Cost Function Surface.
    Other edition: ISBN 0-12-801522-5
    Language: English
  • 7
    Online resource
    Amsterdam, [Netherlands]: Academic Press
    UID: edocfu_9958115550302883
    Extent: 1 online resource (1075 p.)
    Edition: First edition.
    ISBN: 0-12-801722-8
    Series: .NET Developers Series
    Content: This tutorial text gives a unifying perspective on machine learning by covering both probabilistic and deterministic approaches (which are based on optimization techniques) together with the Bayesian inference approach, whose essence lies in the use of a hierarchy of probabilistic models. The book presents the major machine learning methods as they have been developed in different disciplines, such as statistics, statistical and adaptive signal processing, and computer science. Focusing on the physical reasoning behind the mathematics, all the various methods and techniques are explained in depth, supported by examples and problems, giving the student and researcher an invaluable resource for understanding and applying machine learning concepts. The book builds carefully from the basic classical methods to the most recent trends, with chapters written to be as self-contained as possible, making the text suitable for different courses: pattern recognition, statistical/adaptive signal processing, and statistical/Bayesian learning, as well as short courses on sparse modeling, deep learning, and probabilistic graphical models. All major classical techniques are covered: mean/least-squares regression and filtering, Kalman filtering, stochastic approximation and online learning, Bayesian classification, decision trees, logistic regression, and boosting methods. So are the latest trends: sparsity, convex analysis and optimization, online distributed algorithms, learning in RKH spaces, Bayesian inference, graphical and hidden Markov models, particle filtering, deep learning, dictionary learning, and latent variable modeling. Case studies (protein folding prediction, optical character recognition, text authorship identification, fMRI data analysis, change point detection, hyperspectral image unmixing, target localization, channel equalization, and echo cancellation) show how the theory can be applied. MATLAB code for all the main algorithms is available on an accompanying website, enabling the reader to experiment with the code.
    Note: Description based upon print version of record. Contents: Front Cover; Machine Learning: A Bayesian and Optimization Perspective; Copyright; Contents; Preface; Acknowledgments; Notation; Dedication; Chapter 1: Introduction; 1.1 What Machine Learning is About; 1.1.1 Classification; 1.1.2 Regression; 1.2 Structure and a Road Map of the Book; References; Chapter 2: Probability and Stochastic Processes; 2.1 Introduction; 2.2 Probability and Random Variables; 2.2.1 Probability; Relative frequency definition; Axiomatic definition; 2.2.2 Discrete Random Variables; Joint and conditional probabilities; Bayes theorem; 2.2.3 Continuous Random Variables; 2.2.4 Mean and Variance; Complex random variables; 2.2.5 Transformation of Random Variables; 2.3 Examples of Distributions; 2.3.1 Discrete Variables; The Bernoulli distribution; The Binomial distribution; The Multinomial distribution; 2.3.2 Continuous Variables; The uniform distribution; The Gaussian distribution; The central limit theorem; The exponential distribution; The beta distribution; The gamma distribution; The Dirichlet distribution; 2.4 Stochastic Processes; 2.4.1 First and Second Order Statistics; 2.4.2 Stationarity and Ergodicity; 2.4.3 Power Spectral Density; Properties of the autocorrelation sequence; Power spectral density; Transmission through a linear system; Physical interpretation of the PSD; 2.4.4 Autoregressive Models; 2.5 Information Theory; 2.5.1 Discrete Random Variables; Information; Mutual and conditional information; Entropy and average mutual information; 2.5.2 Continuous Random Variables; Average mutual information and conditional information; Relative entropy or Kullback-Leibler divergence; 2.6 Stochastic Convergence; Convergence everywhere; Convergence almost everywhere; Convergence in the mean-square sense; Convergence in probability; Convergence in distribution; Problems; References; Chapter 3: Learning in Parametric Modeling: Basic Concepts and Directions; 3.1 Introduction; 3.2 Parameter Estimation: The Deterministic Point of View; 3.3 Linear Regression; 3.4 Classification; Generative versus discriminative learning; Supervised, semisupervised, and unsupervised learning; 3.5 Biased Versus Unbiased Estimation; 3.5.1 Biased or Unbiased Estimation?; 3.6 The Cramér-Rao Lower Bound; 3.7 Sufficient Statistic; 3.8 Regularization; Inverse problems: Ill-conditioning and overfitting; 3.9 The Bias-Variance Dilemma; 3.9.1 Mean-Square Error Estimation; 3.9.2 Bias-Variance Tradeoff; 3.10 Maximum Likelihood Method; 3.10.1 Linear Regression: The Nonwhite Gaussian Noise Case; 3.11 Bayesian Inference; 3.11.1 The Maximum A Posteriori Probability Estimation Method; 3.12 Curse of Dimensionality; 3.13 Validation; Cross-validation; 3.14 Expected and Empirical Loss Functions; 3.15 Nonparametric Modeling and Estimation; Problems; References; Chapter 4: Mean-Square Error Linear Estimation; 4.1 Introduction; 4.2 Mean-Square Error Linear Estimation: The Normal Equations; 4.2.1 The Cost Function Surface.
    Other edition: ISBN 0-12-801522-5
    Language: English
  • 8
    Online resource
    Amsterdam, [Netherlands]: Academic Press
    UID: almahu_9948391930502882
    Extent: 1 online resource (1075 p.)
    Edition: First edition.
    ISBN: 0-12-801722-8
    Series: .NET Developers Series
    Content: This tutorial text gives a unifying perspective on machine learning by covering both probabilistic and deterministic approaches (which are based on optimization techniques) together with the Bayesian inference approach, whose essence lies in the use of a hierarchy of probabilistic models. The book presents the major machine learning methods as they have been developed in different disciplines, such as statistics, statistical and adaptive signal processing, and computer science. Focusing on the physical reasoning behind the mathematics, all the various methods and techniques are explained in depth, supported by examples and problems, giving the student and researcher an invaluable resource for understanding and applying machine learning concepts. The book builds carefully from the basic classical methods to the most recent trends, with chapters written to be as self-contained as possible, making the text suitable for different courses: pattern recognition, statistical/adaptive signal processing, and statistical/Bayesian learning, as well as short courses on sparse modeling, deep learning, and probabilistic graphical models. All major classical techniques are covered: mean/least-squares regression and filtering, Kalman filtering, stochastic approximation and online learning, Bayesian classification, decision trees, logistic regression, and boosting methods. So are the latest trends: sparsity, convex analysis and optimization, online distributed algorithms, learning in RKH spaces, Bayesian inference, graphical and hidden Markov models, particle filtering, deep learning, dictionary learning, and latent variable modeling. Case studies (protein folding prediction, optical character recognition, text authorship identification, fMRI data analysis, change point detection, hyperspectral image unmixing, target localization, channel equalization, and echo cancellation) show how the theory can be applied. MATLAB code for all the main algorithms is available on an accompanying website, enabling the reader to experiment with the code.
    Note: Description based upon print version of record. Contents: Front Cover; Machine Learning: A Bayesian and Optimization Perspective; Copyright; Contents; Preface; Acknowledgments; Notation; Dedication; Chapter 1: Introduction; 1.1 What Machine Learning is About; 1.1.1 Classification; 1.1.2 Regression; 1.2 Structure and a Road Map of the Book; References; Chapter 2: Probability and Stochastic Processes; 2.1 Introduction; 2.2 Probability and Random Variables; 2.2.1 Probability; Relative frequency definition; Axiomatic definition; 2.2.2 Discrete Random Variables; Joint and conditional probabilities; Bayes theorem; 2.2.3 Continuous Random Variables; 2.2.4 Mean and Variance; Complex random variables; 2.2.5 Transformation of Random Variables; 2.3 Examples of Distributions; 2.3.1 Discrete Variables; The Bernoulli distribution; The Binomial distribution; The Multinomial distribution; 2.3.2 Continuous Variables; The uniform distribution; The Gaussian distribution; The central limit theorem; The exponential distribution; The beta distribution; The gamma distribution; The Dirichlet distribution; 2.4 Stochastic Processes; 2.4.1 First and Second Order Statistics; 2.4.2 Stationarity and Ergodicity; 2.4.3 Power Spectral Density; Properties of the autocorrelation sequence; Power spectral density; Transmission through a linear system; Physical interpretation of the PSD; 2.4.4 Autoregressive Models; 2.5 Information Theory; 2.5.1 Discrete Random Variables; Information; Mutual and conditional information; Entropy and average mutual information; 2.5.2 Continuous Random Variables; Average mutual information and conditional information; Relative entropy or Kullback-Leibler divergence; 2.6 Stochastic Convergence; Convergence everywhere; Convergence almost everywhere; Convergence in the mean-square sense; Convergence in probability; Convergence in distribution; Problems; References; Chapter 3: Learning in Parametric Modeling: Basic Concepts and Directions; 3.1 Introduction; 3.2 Parameter Estimation: The Deterministic Point of View; 3.3 Linear Regression; 3.4 Classification; Generative versus discriminative learning; Supervised, semisupervised, and unsupervised learning; 3.5 Biased Versus Unbiased Estimation; 3.5.1 Biased or Unbiased Estimation?; 3.6 The Cramér-Rao Lower Bound; 3.7 Sufficient Statistic; 3.8 Regularization; Inverse problems: Ill-conditioning and overfitting; 3.9 The Bias-Variance Dilemma; 3.9.1 Mean-Square Error Estimation; 3.9.2 Bias-Variance Tradeoff; 3.10 Maximum Likelihood Method; 3.10.1 Linear Regression: The Nonwhite Gaussian Noise Case; 3.11 Bayesian Inference; 3.11.1 The Maximum A Posteriori Probability Estimation Method; 3.12 Curse of Dimensionality; 3.13 Validation; Cross-validation; 3.14 Expected and Empirical Loss Functions; 3.15 Nonparametric Modeling and Estimation; Problems; References; Chapter 4: Mean-Square Error Linear Estimation; 4.1 Introduction; 4.2 Mean-Square Error Linear Estimation: The Normal Equations; 4.2.1 The Cost Function Surface.
    Other edition: ISBN 0-12-801522-5
    Language: English
  • 9
    Online resource
    Amsterdam: Elsevier Academic Press
    UID: b3kat_BV042527205
    Extent: 1 online resource (xxi, 1050 pages): illustrations, diagrams
    ISBN: 9780128017227, 0128017228
    Note: Includes bibliographical references and index. - "This tutorial text gives a unifying perspective on machine learning by covering both probabilistic and deterministic approaches--which are based on optimization techniques--together with the Bayesian inference approach, whose essence lies in the use of a hierarchy of probabilistic models"--Publisher's website
    Other edition: Also published as a print edition, ISBN 978-0-12-801522-3
    Language: English
    Subject areas: Computer science
    Subject(s): Machine learning ; Bayesian methods ; Optimization
    URL: Full text (URL of the original publisher)