UID:
almahu_9949292205402882
Format:
1 online resource (xix, 248 pages)
Edition:
1st ed. 2015.
ISBN:
1-4302-5990-6
Series Statement:
The expert's voice in machine learning
Content:
Machine learning techniques provide cost-effective alternatives to traditional methods for extracting underlying relationships between information and data and for predicting future events by processing existing information to train models. Efficient Learning Machines explores the major topics of machine learning, including knowledge discovery, classification, genetic algorithms, neural networking, kernel methods, and biologically inspired techniques. Mariette Awad and Rahul Khanna’s synthetic approach weaves together theoretical exposition, design principles, and practical applications of efficient machine learning. Their experiential emphasis, expressed in their close analysis of sample algorithms throughout the book, aims to equip engineers, students of engineering, and system designers to design and create new and more efficient machine learning systems.

Readers of Efficient Learning Machines will learn how to recognize and analyze the problems that machine learning technology can solve for them, how to implement and deploy standard solutions to sample problems, and how to design new systems and solutions. Advances in computing performance, storage, memory, unstructured information retrieval, and cloud computing have coevolved with a new generation of machine learning paradigms and big data analytics, which the authors present in the conceptual context of their traditional precursors.

Awad and Khanna explore current developments in the deep learning techniques of deep neural networks, hierarchical temporal memory, and cortical algorithms. Nature suggests sophisticated learning techniques that deploy simple rules to generate highly intelligent and organized behaviors with adaptive, evolutionary, and distributed properties. The authors examine the most popular biologically inspired algorithms, together with a sample application to distributed datacenter management. They also discuss machine learning techniques for addressing problems of multi-objective optimization, in which solutions in real-world systems are constrained and evaluated based on how well they perform with respect to multiple objectives in aggregate. Two chapters on support vector machines and their extensions focus on recent improvements to the classification and regression techniques at the core of machine learning.
Note:
Includes index.
Contents at a Glance:
Chapter 1: Machine Learning; Key Terminology; Developing a Learning Machine; Machine Learning Algorithms; Popular Machine Learning Algorithms; C4.5; k-Means; Support Vector Machines; Apriori; Expectation Maximization; PageRank; AdaBoost (Adaptive Boosting); k-Nearest Neighbors; Naive Bayes; Classification and Regression Trees; Challenging Problems in Data Mining Research; Scaling Up for High-Dimensional Data and High-Speed Data Streams; Mining Sequence Data and Time Series Data; Mining Complex Knowledge from Complex Data; Distributed Data Mining and Mining Multi-Agent Data; Data Mining Process-Related Problems; Security, Privacy, and Data Integrity; Dealing with Nonstatic, Unbalanced, and Cost-Sensitive Data; Summary; References
Chapter 2: Machine Learning and Knowledge Discovery; Knowledge Discovery; Classification; Clustering; Dimensionality Reduction; Collaborative Filtering; Machine Learning: Classification Algorithms; Logistic Regression; Random Forest; Hidden Markov Model; Multilayer Perceptron; Machine Learning: Clustering Algorithms; k-Means Clustering; Fuzzy k-Means (Fuzzy c-Means); Streaming k-Means; Streaming Step; Ball k-Means Step; Machine Learning: Dimensionality Reduction; Singular Value Decomposition; Principal Component Analysis; Lanczos Algorithm; Initialize; Algorithm; Machine Learning: Collaborative Filtering; User-Based Collaborative Filtering; Item-Based Collaborative Filtering; Alternating Least Squares with Weighted-λ-Regularization; Machine Learning: Similarity Matrix; Pearson Correlation Coefficient; Spearman Rank Correlation Coefficient; Euclidean Distance; Jaccard Similarity Coefficient; Summary; References
Chapter 3: Support Vector Machines for Classification; SVM from a Geometric Perspective; SVM Main Properties; Hard-Margin SVM; Soft-Margin SVM; Kernel SVM; Multiclass SVM; SVM with Imbalanced Datasets; Improving SVM Computational Requirements; Case Study of SVM for Handwriting Recognition; Preprocessing; Feature Extraction; Hierarchical, Three-Stage SVM; Experimental Results; Complexity Analysis; References
Chapter 4: Support Vector Regression; SVR Overview; SVR: Concepts, Mathematical Model, and Graphical Representation; Kernel SVR and Different Loss Functions: Mathematical Model and Graphical Representation; Bayesian Linear Regression; Asymmetrical SVR for Power Prediction: Case Study; References
Chapter 5: Hidden Markov Model; Discrete Markov Process; Definition 1; Definition 2; Definition 3; Introduction to the Hidden Markov Model; Essentials of the Hidden Markov Model; The Three Basic Problems of HMM; Solutions to the Three Basic Problems of HMM; Solution to Problem 1; Forward Algorithm; Backward Algorithm; Scaling; Solution to Problem 2; Initialization; Recursion; Termination; State Sequence Backtracking; Solution to Problem 3
Additional Edition:
ISBN 1-4302-5989-2
Language:
English
DOI:
10.1007/978-1-4302-5990-9