Format:
606 pages, diagrams
ISBN:
9783031106019, 9783031106040
Content:
Dimensionality reduction, also known as manifold learning, is an area of machine learning used for extracting informative features from data for a better representation of the data or a better separation between classes. This book presents a cohesive review of linear and nonlinear dimensionality reduction and manifold learning. Three main aspects of dimensionality reduction are covered: spectral dimensionality reduction, probabilistic dimensionality reduction, and neural network-based dimensionality reduction, which take geometric, probabilistic, and information-theoretic points of view on dimensionality reduction, respectively. The necessary background and preliminaries on linear algebra, optimization, and kernels are also explained to ensure a comprehensive understanding of the algorithms.

The tools introduced in this book can be applied to various applications involving feature extraction, image processing, computer vision, and signal processing. The book is suitable for a wide audience who would like to acquire a deep understanding of the various ways to extract, transform, and understand the structure of data. The intended audiences are academics, students, and industry professionals. Academic researchers and students can use this book as a textbook for machine learning and dimensionality reduction. Data scientists, machine learning scientists, computer vision scientists, and computer scientists can use it as a reference. It can also be helpful to statisticians in the field of statistical learning and to applied mathematicians in the fields of manifolds and subspace analysis. Industry professionals, including applied engineers, data engineers, and engineers in various fields of science dealing with machine learning, can use it as a guidebook for feature extraction from their data, as raw data in industry often require preprocessing.

The book is grounded in theory but provides thorough explanations and diverse examples to improve the reader's comprehension of the advanced topics. Advanced methods are explained step by step so that readers of all levels can follow the reasoning and come to a deep understanding of the concepts. This book does not assume an advanced theoretical background in machine learning and provides the necessary background, although an undergraduate-level background in linear algebra and calculus is recommended.
Note:
Chapter 1: Introduction
Part 1: Preliminaries and Background
Chapter 2: Background on Linear Algebra
Chapter 3: Background on Kernels
Chapter 4: Background on Optimization
Part 2: Spectral Dimensionality Reduction
Chapter 5: Principal Component Analysis
Chapter 6: Fisher Discriminant Analysis
Chapter 7: Multidimensional Scaling, Sammon Mapping, and Isomap
Chapter 8: Locally Linear Embedding
Chapter 9: Laplacian-based Dimensionality Reduction
Chapter 10: Unified Spectral Framework and Maximum Variance Unfolding
Chapter 11: Spectral Metric Learning
Part 3: Probabilistic Dimensionality Reduction
Chapter 12: Factor Analysis and Probabilistic Principal Component Analysis
Chapter 13: Probabilistic Metric Learning
Chapter 14: Random Projection
Chapter 15: Sufficient Dimension Reduction and Kernel Dimension Reduction
Chapter 16: Stochastic Neighbour Embedding
Chapter 17: Uniform Manifold Approximation and Projection (UMAP)
Part 4: Neural Network-based Dimensionality Reduction
Chapter 18: Restricted Boltzmann Machine and Deep Belief Network
Chapter 19: Deep Metric Learning
Chapter 20: Variational Autoencoders
Chapter 21: Adversarial Autoencoders
Additional Edition:
ISBN: 9783031106026
Language:
English
DOI:
10.1007/978-3-031-10602-6