  • 1
    Online resource
    Cambridge : Cambridge University Press
    UID: almahu_9948633269202882
    Extent: 1 online resource (xvi, 470 pages), digital, PDF file(s)
    ISBN: 9781108332873 (ebook)
    Abstract: With a machine learning approach and less focus on linguistic details, this gentle introduction to natural language processing develops fundamental mathematical and deep learning models for NLP under a unified framework. NLP problems are systematically organised by their machine learning nature, including classification, sequence labelling, and sequence-to-sequence problems. Topics covered include statistical machine learning and deep learning models, text classification and structured prediction models, generative and discriminative models, supervised and unsupervised learning with latent variables, neural networks, and transition-based methods. Rich connections are drawn between concepts throughout the book, equipping students with the tools needed to establish a deep understanding of NLP solutions, adapt existing models, and confidently develop innovative models of their own. Featuring a host of examples, intuition, and end-of-chapter exercises, plus sample code available as an online resource, this textbook is an invaluable tool for upper-undergraduate and graduate students.
    Note: Title from publisher's bibliographic system (viewed on 01 Jan 2021).
    Other edition: Print version: ISBN 9781108420211
    Language: English
  • 2
    Online resource
    Cambridge, England : Cambridge University Press
    UID: almafu_9961294064502883
    Extent: 1 online resource (xvi, 470 pages), digital, PDF file(s)
    Edition: First edition.
    ISBN: 9781108352178, 1108352170, 9781108349772, 1108349773, 9781108332873, 1108332870
    Abstract: With a machine learning approach and less focus on linguistic details, this gentle introduction to natural language processing develops fundamental mathematical and deep learning models for NLP under a unified framework. NLP problems are systematically organised by their machine learning nature, including classification, sequence labelling, and sequence-to-sequence problems. Topics covered include statistical machine learning and deep learning models, text classification and structured prediction models, generative and discriminative models, supervised and unsupervised learning with latent variables, neural networks, and transition-based methods. Rich connections are drawn between concepts throughout the book, equipping students with the tools needed to establish a deep understanding of NLP solutions, adapt existing models, and confidently develop innovative models of their own. Featuring a host of examples, intuition, and end-of-chapter exercises, plus sample code available as an online resource, this textbook is an invaluable tool for upper-undergraduate and graduate students.
    Note: Title from publisher's bibliographic system (viewed on 01 Jan 2021).
    Contents: Cover -- Half-title page -- Title page -- Copyright page -- Contents -- Preface -- Acknowledgements -- Notation -- Part I Basics -- 1 Introduction -- 1.1 What is NLP? -- 1.2 NLP Tasks -- 1.3 NLP from a Machine Learning Perspective -- Summary -- Chapter Notes -- Exercises -- 2 Counting Relative Frequencies -- 2.1 Probabilistic Modelling -- 2.2 n-gram Language Models -- 2.3 A Probabilistic Model for Text Classification -- Summary -- Chapter Notes -- Exercises -- 3 Feature Vectors -- 3.1 Modelling Documents in Vector Spaces -- 3.2 Multi-Class Classification -- 3.3 Discriminative Linear Models -- 3.4 Vector Spaces and Model Training -- Summary -- Chapter Notes -- Exercises -- 4 Discriminative Linear Classifiers -- 4.1 Log-Linear Models -- 4.2 SGD Training of SVMs -- 4.3 A Generalised Linear Model -- 4.4 Choosing and Combining Models -- Summary -- Chapter Notes -- Exercises -- 5 A Perspective from Information Theory -- 5.1 The Maximum Entropy Principle -- 5.2 KL-Divergence and Cross-Entropy -- 5.3 Mutual Information -- Summary -- Chapter Notes -- Exercises -- 6 Hidden Variables -- 6.1 Expectation Maximisation -- 6.2 Using EM for Training Models with Hidden Variables -- 6.3 Theory behind EM -- Summary -- Chapter Notes -- Exercises -- Part II Structures -- 7 Generative Sequence Labelling -- 7.1 Sequence Labelling -- 7.2 Hidden Markov Models -- 7.3 Finding Marginal Probabilities -- 7.4 EM for Unsupervised HMM Training -- Summary -- Chapter Notes -- Exercises -- 8 Discriminative Sequence Labelling -- 8.1 Locally Trained Models for Discriminative Sequence Labelling -- 8.2 The Label Bias Problem -- 8.3 Conditional Random Fields -- 8.4 Structured Perceptron -- 8.5 Structured SVM -- Summary -- Chapter Notes -- Exercises -- 9 Sequence Segmentation -- 9.1 Segmentation by Sequence Labelling -- 9.2 Discriminative Models for Sequence Segmentation -- 9.3 Structured Perceptron and Beam Search -- Summary -- Chapter Notes -- Exercises -- 10 Predicting Tree Structures -- 10.1 Generative Constituent Parsing -- 10.2 More Features for Constituent Parsing -- 10.3 Reranking -- 10.4 Beyond Sequences and Trees -- Summary -- Chapter Notes -- Exercises -- 11 Transition-Based Methods for Structured Prediction -- 11.1 Transition-Based Structured Prediction -- 11.2 Transition-Based Constituent Parsing -- 11.3 Transition-Based Dependency Parsing -- 11.4 Joint Parsing Models -- Summary -- Chapter Notes -- Exercises -- 12 Bayesian Network -- 12.1 A General Probabilistic Model -- 12.2 Training Bayesian Networks -- 12.3 Inference -- 12.4 Latent Dirichlet Allocation -- 12.5 Bayesian IBM Model 1 -- Summary -- Chapter Notes -- Exercises -- Part III Deep Learning -- 13 Neural Network -- 13.1 From One Layer to Multiple Layers -- 13.2 Building a Text Classifier without Manual Features -- 13.3 Improving Neural Network Training -- Summary -- Chapter Notes -- Exercises -- 14 Representation Learning -- 14.1 Recurrent Neural Network -- 14.2 Neural Attention -- 14.3 Representing Trees -- 14.4 Representing Graphs -- 14.5 Analysing Representation -- 14.6 More on Neural Network Training -- Summary -- Chapter Notes -- Exercises -- 15 Neural Structured Prediction -- 15.1 Local Graph-Based Models -- 15.2 Local Transition-Based Models -- 15.3 Global Structured Models -- Summary -- Chapter Notes -- Exercises -- 16 Working with Two Texts -- 16.1 Sequence-to-Sequence Models -- 16.2 Text Matching Models -- Summary -- Chapter Notes -- Exercises -- 17 Pre-training and Transfer Learning -- 17.1 Neural Language Models and Word Embedding -- 17.2 Contextualised Word Representations -- 17.3 Transfer Learning -- Summary -- Chapter Notes -- Exercises -- 18 Deep Latent Variable Models -- 18.1 Introducing Latent Variables into a Neural Network Model -- 18.2 Working with Categorical Latent Variables -- 18.3 Working with Structured Latent Variables -- 18.4 Variational Inference -- 18.5 Neural Topic Models -- 18.6 VAEs for Language Modelling -- Summary -- Chapter Notes -- Exercises -- Bibliography -- Index.
    Other edition: ISBN 9781108420211
    Other edition: ISBN 1108420214
    Language: English
  • 3
    Book
    Cambridge ; New York ; Melbourne ; New Delhi ; Singapore : Cambridge University Press
    UID: b3kat_BV047218955
    Extent: xvi, 470 pages, diagrams
    ISBN: 9781108420211, 1108420214
    Language: English
    Subject areas: Computer science ; Comparative studies / non-European languages and literatures
    Subject headings: Natural language ; Computational linguistics