Online resources (excluding journals)
PPN: 
1780712162
Title: 
Person(s): 
Edition: 
1st ed. 2021.
Language(s): 
English
Published: 
Cham : Springer International Publishing ; Cham : Imprint: Springer, 2021
Extent: 
1 online resource (XXXVI, 630 p., 330 illus., 146 illus. in color)
Series: 
Bibliographic relationship: 
Also available as (print edition): ISBN 9783030884826
Also available as (print edition): ISBN 9783030884840
ISBN: 
978-3-030-88483-3
Other editions: 978-3-030-88482-6 (print edition), 978-3-030-88484-0 (print edition)
Identifier: 
DOI: 10.1007/978-3-030-88483-3
Subject headings: 
More on this topic: 
Dewey Decimal Classification: 006.3
Book Industry Communication: UYQ
bisacsh: COM004000
Contents: 
Posters - Fundamentals of NLP -- Syntax and Coherence - The Effect on Automatic Argument Quality Assessment -- ExperienceGen 1.0: A Text Generation Challenge Which Requires Deduction and Induction Ability -- Machine Translation and Multilinguality -- SynXLM-R: Syntax-enhanced XLM-R in Translation Quality Estimation -- Machine Learning for NLP -- Memetic Federated Learning for Biomedical Natural Language Processing -- Information Extraction and Knowledge Graph -- Event Argument Extraction via a Distance-Sensitive Graph Convolutional Network -- Exploit Vague Relation: An Augmented Temporal Relation Corpus and Evaluation -- Searching Effective Transformer for Seq2Seq Keyphrase Generation -- Prerequisite Learning with Pre-trained Language and Graph Embedding Models -- Summarization and Generation -- Variational Autoencoder with Interactive Attention for Affective Text Generation -- CUSTOM: Aspect-Oriented Product Summarization for E-Commerce -- Question Answering -- FABERT: A Feature Aggregation BERT-Based Model for Document Reranking -- Generating Relevant, Correct and Fluent Answers in Natural Answer Generation -- GeoCQA: A Large-scale Geography-Domain Chinese Question Answering Dataset from Examination -- Dialogue Systems -- Generating Informative Dialogue Responses with Keywords-Guided Networks -- Zero-Shot Deployment for Cross-Lingual Dialogue System -- MultiWOZ 2.3: A multi-domain task-oriented dialogue dataset enhanced with annotation corrections and co-reference annotation -- EmoDialoGPT: Enhancing DialoGPT with Emotion -- Social Media and Sentiment Analysis -- BERT-based Meta-learning Approach with Looking Back for Sentiment Analysis of Literary Book Reviews -- ISWR: an Implicit Sentiment Words Recognition Model Based on Sentiment Propagation -- An Aspect-Centralized Graph Convolutional Network for Aspect-based Sentiment Classification -- NLP Applications and Text Mining -- Capturing Global Informativeness in Open Domain Keyphrase Extraction -- Background 
Semantic Information Improves Verbal Metaphor Identification -- Multimodality and Explainability -- Towards unifying the explainability evaluation methods for NLP -- Explainable AI Workshop -- Detecting Covariate Drift with Explanations -- A Data-Centric Approach Towards Deducing Bias in Artificial Intelligence Systems for Textual Contexts -- Student Workshop -- Enhancing Model Robustness via Lexical Distilling -- Multi-stage Multi-modal Pre-training for Video Representation -- Nested Causality Extraction on Traffic Accident Texts as Question Answering -- Evaluation Workshop -- MSDF: A General Open-Domain Multi-Skill Dialog Framework -- RoKGDS: A Robust Knowledge Grounded Dialog System -- Enhanced Few-shot Learning with Multiple-Pattern-Exploiting Training -- BIT-Event at NLPCC-2021 Task 3: Subevent Identification via Adversarial Training -- Few-shot Learning for Chinese NLP tasks -- When Few-shot Learning Meets Large-scale Knowledge-enhanced Pre-training: Alibaba at FewCLUE -- TKB²ert: Two-stage Knowledge Infused Behavioral Fine-tuned BERT -- A Unified Information Extraction System Based on Role Recognition and Combination -- A Simple but Effective System for Multi-format Information Extraction -- A Hierarchical Sequence Labeling Model for Argument Pair Extraction -- Distant finetuning with discourse relations for stance classification -- The Solution of Xiaomi AI Lab to the 2021 Language and Intelligence Challenge: Multi-Format Information Extraction Task -- A Unified Platform for Information Extraction with Two-stage Process -- Overview of the NLPCC 2021 Shared Task: AutoIE2 -- Task 1 - Argumentative Text Understanding for AI Debater (AIDebater) -- Two Stage Learning for Argument Pairs Extraction -- Overview of Argumentative Text Understanding for AI Debater Challenge -- ACE: A Context-Enhanced model for Interactive Argument Pair Identification -- Context-Aware and Data-Augmented Transformer for Interactive Argument Pair Identification -- ARGUABLY AI 
Debater-NLPCC 2021 Task 3: Argument Pair Extraction from Peer Review and Rebuttals -- Sentence Rewriting for Fine-Tuned Model Based on Dictionary: Taking the Track 1 of NLPCC 2021 Argumentative Text Understanding for AI Debater as an Example -- Knowledge Enhanced transformers System for Claim Stance Classification.
This two-volume set of LNAI 13028 and LNAI 13029 constitutes the refereed proceedings of the 10th CCF Conference on Natural Language Processing and Chinese Computing, NLPCC 2021, held in Qingdao, China, in October 2021. The 66 full papers, 23 poster papers, and 27 workshop papers presented were carefully reviewed and selected from 446 submissions. They are organized in the following areas: Fundamentals of NLP; Machine Translation and Multilinguality; Machine Learning for NLP; Information Extraction and Knowledge Graph; Summarization and Generation; Question Answering; Dialogue Systems; Social Media and Sentiment Analysis; NLP Applications and Text Mining; and Multimodality and Explainability.
 
Note: 
Reproductions (e.g. copies, downloads) are permitted only for individual chapters or pages and only for personal scholarly use. No distribution to third parties. No systematic downloading by robots.
Full text: 