AAAI 2021: Accepted NLP Papers by Subfield

Source: 深度學(xué)習(xí)自然語言處理 (WeChat official account) · 2021-01-07 14:30

I recently compiled the AAAI 2021 accepted-paper lists for each NLP subfield. A few papers have probably slipped through; they will be added as they are found. The full AAAI 2021 accepted-paper list (PDF) is available here: https://aaai.org/Conferences/AAAI-21/wp-content/uploads/2020/12/AAAI-21_Accepted-Paper-List.Main_.Technical.Track_.pdf
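
Lists like these are typically assembled by scanning the accepted-paper titles for subfield keywords. As a rough illustration (the keyword map and helper below are hypothetical assumptions, not the script actually used for this compilation), such grouping can be sketched as:

```python
# Hypothetical sketch: group accepted-paper titles into NLP subfields
# by case-insensitive keyword matching. The keyword map is illustrative.

KEYWORDS = {
    "Sentiment Analysis": ["sentiment analysis"],
    "Named Entity Recognition": ["named entity recognition"],
    "Machine Translation": ["machine translation"],
    "Knowledge Distillation": ["knowledge distillation"],
}

def group_by_subfield(titles):
    """Return {subfield: [matching titles]}; a title may land in several lists."""
    groups = {field: [] for field in KEYWORDS}
    for title in titles:
        lowered = title.lower()
        for field, phrases in KEYWORDS.items():
            if any(p in lowered for p in phrases):
                groups[field].append(title)
    return groups

sample = [
    "Retrospective Reader for Machine Reading Comprehension",
    "CrossNER: Evaluating Cross-Domain Named Entity Recognition",
    "Reinforced Multi-Teacher Selection for Knowledge Distillation",
]
groups = group_by_subfield(sample)
print(groups["Named Entity Recognition"])
# → ['CrossNER: Evaluating Cross-Domain Named Entity Recognition']
```

Because a title can match several keyword sets, one paper may fall under multiple headings, which is also why a few titles below appear in more than one list.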

NLP Paper Lists by Subfield

Sentiment Analysis

Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis

An Adaptive Hybrid Framework for Cross-Domain Aspect-Based Sentiment Analysis

Bridging Towers of Multi-Task Learning with a Gating Mechanism for Aspect-Based Sentiment Analysis and Sequential Metaphor Identification

Human-Level Interpretable Learning for Aspect-Based Sentiment Analysis

A Joint Training Dual-MRC Framework for Aspect Based Sentiment Analysis

Quantum Cognitively Motivated Decision Fusion for Video Sentiment Analysis

Context-Guided BERT for Targeted Aspect-Based Sentiment Analysis

Segmentation of Tweets with URLs and its Applications to Sentiment Analysis

A Neural Group-Wise Sentiment Analysis Model with Data Sparsity Awareness

Syntax

Encoder-Decoder Based Unified Semantic Role Labeling with Label-Aware Syntax

Code Completion by Modeling Flattened Abstract Syntax Trees as Graphs

Story Ending Generation with Multi-Level Graph Convolutional Networks over Dependency Trees

Named Entity Recognition

Multi-Modal Graph Fusion for Named Entity Recognition with Targeted Visual Guidance

CrossNER: Evaluating Cross-Domain Named Entity Recognition

A Supervised Multi-Head Self-Attention Network for Nested Named Entity Recognition

Nested Named Entity Recognition with Partially-Observed TreeCRFs

Continual Learning for Named Entity Recognition

Knowledge-Aware Named Entity Recognition with Alleviating Heterogeneity

Denoising Distantly Supervised Named Entity Recognition via a Hypergeometric Probabilistic Model

MTAAL: Multi-Task Adversarial Active Learning for Medical Named Entity Recognition and Normalization

Dialogue / Question Answering

Reinforced History Backtracking for Conversational Question Answering

Quantum-Inspired Neural Network for Conversational Emotion Recognition

DialogXL: All-in-One XLNet for Multi-Party Conversation Emotion Recognition

Learning from My Friends: Few-Shot Personalized Conversation Systems via Social Networks

NaturalConv: A Chinese Dialogue Dataset Towards Multi-Turn Topic-Driven Conversation

Conversational Neuro-Symbolic Commonsense Reasoning

Keyword-Guided Neural Conversational Model

Infusing Multi-Source Knowledge with Heterogeneous Graph Neural Network for Emotional Conversation Generation

MultiTalk: A Highly-Branching Dialog Testbed for Diverse Conversations

Knowledge-Driven Data Construction for Zero-Shot Evaluation in Commonsense Question Answering

What the Role Is vs. What Plays the Role: Semi-Supervised Event Argument Extraction via Dual Question Answering

Regularizing Attention Networks for Anomaly Detection in Visual Question Answering

TSQA: Tabular Scenario Based Question Answering

Unanswerable Question Correction in Question Answering over Personal Knowledge Base

Benchmarking Knowledge-Enhanced Commonsense Question Answering via Knowledge-to-Text Transformation

Dynamic Neuro-Symbolic Knowledge Graph Construction for Zero-Shot Commonsense Question Answering

Asking the Right Questions: Learning Interpretable Action Models through Query Answering

HopRetriever: Retrieve Hops over Wikipedia to Answer Complex Questions

Relation Extraction

FL-MSRE: A Few-Shot Learning Based Approach to Multimodal Social Relation Extraction

Multi-View Inference for Relation Extraction with Uncertain Knowledge

GDPNet: Refining Latent Multi-View Graph for Relation Extraction

Progressive Multi-Task Learning with Controlled Information Flow for Joint Entity and Relation Extraction

Curriculum-Meta Learning for Order-Robust Continual Relation Extraction

Document-Level Relation Extraction with Reconstruction

Document-Level Relation Extraction with Adaptive Thresholding and Localized Context Pooling

Entity Structure Within and Throughout: Modeling Mention Dependencies for Document Level Relation Extraction

Empower Distantly Supervised Relation Extraction with Collaborative Adversarial Training

Clinical Temporal Relation Extraction with Probabilistic Soft Logic Regularization and Global Inference

A Unified Multi-Task Learning Framework for Joint Extraction of Entities and Relations

Event Extraction

GATE: Graph Attention Transformer Encoder for Cross-Lingual Relation and Event Extraction

What the Role Is vs. What Plays the Role: Semi-Supervised Event Argument Extraction via Dual Question Answering

Span-Based Event Coreference Resolution

Machine Translation

Self-Supervised Bilingual Syntactic Alignment for Neural Machine Translation

Empirical Regularization for Synthetic Sentence Pairs in Unsupervised Neural Machine Translation

Efficient Object-Level Visual Context Modeling for Multimodal Machine Translation: Masking Irrelevant Objects Helps Grounding

Lexically Constrained Neural Machine Translation with Explicit Alignment Guidance

Finding Sparse Structure for Domain Specific Neural Machine Translation

Meta-Curriculum Learning for Domain Adaptation in Neural Machine Translation

Guiding Non-Autoregressive Neural Machine Translation Decoding with Reordering Information

Synchronous Interactive Decoding for Multilingual Neural Machine Translation

Accelerating Neural Machine Translation with Partial Word Embedding Compression

DirectQE: Direct Pretraining for Machine Translation Quality Estimation

We Don't Speak the Same Language: Interpreting Polarization through Machine Translation

Knowledge Graphs

Dual Quaternion Knowledge Graph Embeddings

Type-Augmented Relation Prediction in Knowledge Graphs

ChronoR: Rotation Based Temporal Knowledge Graph Embedding

PASSLEAF: A Pool-Based Semi-Supervised Learning Framework for Uncertain Knowledge Graph Embedding

KG-BART: Knowledge Graph-Augmented BART for Generative Commonsense Reasoning

Answering Complex Queries in Knowledge Graphs with Bidirectional Sequence Encoders

GaussianPath: A Bayesian Multi-Hop Reasoning Framework for Knowledge Graph Reasoning

Topology-Aware Correlations between Relations for Inductive Link Prediction in Knowledge Graphs

Learning from History: Modeling Temporal Knowledge Graphs with Sequential Copy Generation Networks

Neural Latent Space Model for Dynamic Networks and Temporal Knowledge Graphs

Knowledge Graph Embeddings with Projective Transformations

(Comet-) Atomic 2020: On Symbolic and Neural Commonsense Knowledge Graphs

Randomized Generation of Adversary-Aware Fake Knowledge Graphs to Combat Intellectual Property Theft

Dynamic Knowledge Graph Alignment

Dynamic Neuro-Symbolic Knowledge Graph Construction for Zero-Shot Commonsense Question Answering

Reading Comprehension

Semantics Altering Modifications for Evaluating Comprehension in Machine Reading

Retrospective Reader for Machine Reading Comprehension

Reasoning in Dialog: Improving Response Generation by Context Reading Comprehension

VisualMRC: Machine Reading Comprehension on Document Images

A Bidirectional Multi-Paragraph Reading Model for Zero-Shot Entity Linking

Bidirectional Machine Reading Comprehension for Aspect Sentiment Triplet Extraction

Audio-Oriented Multimodal Machine Comprehension via Dynamic Inter- and Intra-Modality Attention

Text Generation

A Theoretical Analysis of the Repetition Problem in Text Generation

TextGAIL: Generative Adversarial Imitation Learning for Text Generation

Towards Faithfulness in Open Domain Table-to-Text Generation from an Entity-Centric View

Perception Score: A Learned Metric for Open-Ended Text Generation Evaluation

Generate Your Counterfactuals: Towards Controlled Counterfactual Generation for Text

Write-a-Speaker: Text-Based Emotional and Rhythmic Talking-Head Generation

Stylized Dialogue Response Generation Using Stylized Unpaired Texts

Knowledge Distillation

PSSM-Distil: Protein Secondary Structure Prediction (PSSP) on Low-Quality PSSM by Knowledge Distillation with Contrastive Learning

LRC-BERT: Latent-Representation Contrastive Knowledge Distillation for Natural Language Understanding

Peer Collaborative Learning for Online Knowledge Distillation

Few-Shot Class-Incremental Learning via Relation Knowledge Distillation

Harmonized Dense Knowledge Distillation Training for Multi-Exit Architectures

Diverse Knowledge Distillation for End-to-End Person Search

Learning to Augment for Data-Scarce Domain BERT Knowledge Distillation

Data-Free Knowledge Distillation with Soft Targeted Transfer Set Synthesis

ALP-KD: Attention-Based Layer Projection for Knowledge Distillation

Progressive Network Grafting for Few-Shot Knowledge Distillation

Show, Attend and Distill: Knowledge Distillation via Attention-Based Feature Matching

Stochastic Precision Ensemble: Self-Knowledge Distillation for Quantized Deep Neural Networks

Reinforced Multi-Teacher Selection for Knowledge Distillation

Few-Shot and Zero-Shot Learning

Tailoring Embedding Function to Heterogeneous Few-Shot Tasks by Global and Local Feature Adaptors

Learning Intact Features by Erasing-Inpainting for Few-Shot Classification

PTN: A Poisson Transfer Network for Semi-Supervised Few-Shot Learning

Few-Shot Lifelong Learning

Partial Is Better Than All: Revisiting Fine-Tuning Strategy for Few-Shot Learning

Few-Shot Font Generation with Localized Style Representations and Factorization

Few-Shot One-Class Classification via Meta-Learning

Relative and Absolute Location Embedding for Few-Shot Node Classification on Graph

FL-MSRE: A Few-Shot Learning Based Approach to Multimodal Social Relation Extraction

Learning a Few-Shot Embedding Model with Contrastive Learning

Learning from My Friends: Few-Shot Personalized Conversation Systems via Social Networks

Looking Wider for Better Adaptive Representation in Few-Shot Learning

Few-Shot Class-Incremental Learning via Relation Knowledge Distillation

Attributes-Guided and Pure-Visual Attention Alignment for Few-Shot Recognition

Task Cooperation for Semi-Supervised Few-Shot Learning

SALNet: Semi-Supervised Few-Shot Text Classification with Attention-Based Lexicon Construction

StarNet: Towards Weakly Supervised Few-Shot Object Detection

Progressive Network Grafting for Few-Shot Knowledge Distillation

Few-Shot Learning for Multi-Label Intent Detection

Incremental Embedding Learning via Zero-Shot Translation

Task Aligned Generative Meta-Learning for Zero-Shot Learning

Knowledge-Driven Data Construction for Zero-Shot Evaluation in Commonsense Question Answering

Generalized Zero-Shot Learning via Disentangled Representation

DASZL: Dynamic Action Signatures for Zero-Shot Learning

Semantic-Guided Reinforced Region Embedding for Generalized Zero-Shot Learning

Extracting Zero-Shot Structured Information from Form-like Documents: Pretraining with Keys and Triggers

A Bidirectional Multi-Paragraph Reading Model for Zero-Shot Entity Linking

Leveraging Table Content for Zero-Shot Text-to-SQL with Meta-Learning

Meta-Learning Framework with Applications to Zero-Shot Time-Series Forecasting

Dynamic Neuro-Symbolic Knowledge Graph Construction for Zero-Shot Commonsense Question Answering

Cross-Domain

Error-Aware Density Isomorphism Reconstruction for Unsupervised Cross-Domain Crowd Counting

SD-Pose: Semantic Decomposition for Cross-Domain 6D Object Pose Estimation

Dynamic Hybrid Relation Exploration Network for Cross-Domain Context-Dependent Semantic Parsing

An Adaptive Hybrid Framework for Cross-Domain Aspect-Based Sentiment Analysis

CrossNER: Evaluating Cross-Domain Named Entity Recognition

Learning Cycle-Consistent Cooperative Networks via Alternating MCMC Teaching for Unsupervised Cross-Domain Translation

Cross-Domain Grouping and Alignment for Domain Adaptive Semantic Segmentation

Embracing Domain Differences in Fake News: Cross-Domain Fake News Detection Using Multi-Modal Data

Cross-Lingual

FILTER: An Enhanced Fusion Method for Cross-Lingual Language Understanding

On the Importance of Word Order Information in Cross-Lingual Sequence Labeling

XL-WSD: An Extra-Large and Cross-Lingual Evaluation Framework for Word Sense Disambiguation

GATE: Graph Attention Transformer Encoder for Cross-Lingual Relation and Event Extraction

Multilingual

How Linguistically Fair Are Multilingual Pre-Trained Language Models?

Synchronous Interactive Decoding for Multilingual Neural Machine Translation

Adversarial Meta Sampling for Multilingual Low-Resource Speech Recognition

Analogy Training Multilingual Encoders

Multilingual Transfer Learning for QA Using Translation as Data Augmentation

New BERT Variants

RpBERT: A Text-Image Relation Propagation-Based BERT Model for Multimodal NER

LRC-BERT: Latent-Representation Contrastive Knowledge Distillation for Natural Language Understanding

U-BERT: Pre-Training User Representations for Improved Recommendation

ActionBert: Leveraging User Actions for Semantic Understanding of User Interfaces

RareBERT: Transformer Architecture for Rare Disease Patient Identification Using Administrative Claims

DialogBERT: Discourse-Aware Response Generation via Learning to Recover and Rank Utterances

Degree Planning with PLAN-BERT: Multi-Semester Recommendation Using Future Courses of Interest

Multimodal

Audio-Oriented Multimodal Machine Comprehension via Dynamic Inter- and Intra-Modality Attention

SMIL: Multimodal Learning with Severely Missing Modality

RpBERT: A Text-Image Relation Propagation-Based BERT Model for Multimodal NER

Confidence-Aware Non-Repetitive Multimodal Transformers for TextCaps

Efficient Object-Level Visual Context Modeling for Multimodal Machine Translation: Masking Irrelevant Objects Helps Grounding

Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis

FL-MSRE: A Few-Shot Learning Based Approach to Multimodal Social Relation Extraction

Multi-Modal Multi-Label Emotion Recognition with Heterogeneous Hierarchical Message Passing

Multi-Modal Graph Fusion for Named Entity Recognition with Targeted Visual Guidance

Robust Multi-Modality Person Re-Identification

Deep Probabilistic Imaging: Uncertainty Quantification and Multi-Modal Solution Characterization for Computational Imaging

VMLoc: Variational Fusion for Learning-Based Multimodal Camera Localization

MUFASA: Multimodal Fusion Architecture Search for Electronic Health Records

Multimodal Fusion via Teacher-Student Network for Indoor Action Recognition

Dynamic Graph Representation Learning for Video Dialog via Multi-Modal Shuffled Transformers

Learning Intuitive Physics with Multimodal Generative Models

MERL: Multimodal Event Representation Learning in Heterogeneous Embedding Spaces

Theoretical Analyses of Multi-Objective Evolutionary Algorithms on Multi-Modal Objectives

Embracing Domain Differences in Fake News: Cross-Domain Fake News Detection Using Multi-Modal Data

Noise Estimation Using Density Estimation for Self-Supervised Multimodal Learning

Humor Knowledge Enriched Transformer for Understanding Multimodal Humor

MELINDA: A Multimodal Dataset for Biomedical Experiment Method Classification

Editor: xj

Original title: 【AAAI2021】NLP所有方向論文列表(情感分析、句法、NER、對話/問答、關(guān)系抽取、KD等) ("AAAI 2021 NLP paper lists for all subfields: sentiment analysis, syntax, NER, dialogue/QA, relation extraction, KD, etc.")

Source: WeChat official account 深度學(xué)習(xí)自然語言處理 (WeChat ID: zenRRan). Please credit the source when reprinting.
