In Section 16.1, we discussed the problem of sentiment analysis. This task aims to classify a single text sequence into predefined categories, such as a set of sentiment polarities. However, when there is a need to decide whether one sentence can be inferred from another, or to eliminate redundancy by identifying sentences that are semantically equivalent, knowing how to classify one text sequence is insufficient. Instead, we need to be able to reason over pairs of text sequences.
16.4.1. Natural Language Inference
Natural language inference studies whether a hypothesis can be inferred from a premise, where both are a text sequence. In other words, natural language inference determines the logical relationship between a pair of text sequences. Such relationships usually fall into three types:
Entailment: the hypothesis can be inferred from the premise.
Contradiction: the negation of the hypothesis can be inferred from the premise.
Neutral: all the other cases.
Natural language inference is also known as the recognizing textual entailment task. For example, the following pair will be labeled as entailment because "showing affection" in the hypothesis can be inferred from "hugging one another" in the premise.
Premise: Two women are hugging each other.
Hypothesis: Two women are showing affection.
The following is an example of contradiction, as "running the coding example" indicates "not sleeping" rather than "sleeping".
Premise: A man is running the coding example from Dive into Deep Learning.
Hypothesis: The man is sleeping.
The third example shows a neutrality relationship because neither "famous" nor "not famous" can be inferred from the fact that "are performing for us". The examples above can be written as labeled pairs, as sketched below.
Premise: The musicians are performing for us.
Hypothesis: The musicians are famous.
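These three relationships make natural language inference a three-class classification problem over sentence pairs. As a small illustration (the tuple layout below is our own choice, but the 0/1/2 label convention matches the dataset code later in this section), the three examples above can be represented as:

# The examples above as (premise, hypothesis, label) triples, using the
# label convention adopted later in this section:
# 0 = entailment, 1 = contradiction, 2 = neutral.
examples = [
    ('Two women are hugging each other.',
     'Two women are showing affection.', 0),
    ('A man is running the coding example from Dive into Deep Learning.',
     'The man is sleeping.', 1),
    ('The musicians are performing for us.',
     'The musicians are famous.', 2),
]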
Natural language inference has been a central topic for understanding natural language. It enjoys wide applications ranging from information retrieval to open-domain question answering. To study this problem, we will begin by investigating a popular natural language inference benchmark dataset.
16.4.2. The Stanford Natural Language Inference (SNLI) Dataset
The Stanford Natural Language Inference (SNLI) corpus is a collection of over 500,000 labeled English sentence pairs (Bowman et al., 2015). We download and store the extracted SNLI dataset in the path ../data/snli_1.0.
import os
import re

import torch
from torch import nn
from d2l import torch as d2l

#@save
d2l.DATA_HUB['SNLI'] = (
    'https://nlp.stanford.edu/projects/snli/snli_1.0.zip',
    '9fcde07509c7e87ec61c640c1b2753d9041758e4')

data_dir = d2l.download_extract('SNLI')
Downloading ../data/snli_1.0.zip from https://nlp.stanford.edu/projects/snli/snli_1.0.zip...
16.4.2.1. Reading the Dataset
The original SNLI dataset contains much richer information than what we really need in our experiments. Thus, we define a function read_snli to extract only part of the dataset, then return lists of premises, hypotheses, and their labels.
#@save
def read_snli(data_dir, is_train):
    """Read the SNLI dataset into premises, hypotheses, and labels."""
    def extract_text(s):
        # Remove parse-tree parentheses that we will not use
        s = re.sub(r'\(', '', s)
        s = re.sub(r'\)', '', s)
        # Substitute two or more consecutive whitespace with a single space
        s = re.sub(r'\s{2,}', ' ', s)
        return s.strip()
    label_set = {'entailment': 0, 'contradiction': 1, 'neutral': 2}
    file_name = os.path.join(data_dir, 'snli_1.0_train.txt'
                             if is_train else 'snli_1.0_test.txt')
    with open(file_name, 'r') as f:
        # Tab-separated rows: row[0] is the gold label, row[1] and row[2]
        # are the parsed premise and hypothesis
        rows = [row.split('\t') for row in f.readlines()[1:]]
    premises = [extract_text(row[1]) for row in rows if row[0] in label_set]
    hypotheses = [extract_text(row[2]) for row in rows if row[0] in label_set]
    labels = [label_set[row[0]] for row in rows if row[0] in label_set]
    return premises, hypotheses, labels
Now let us print the first 3 pairs of premises and hypotheses, as well as their labels ("0", "1", and "2" correspond to "entailment", "contradiction", and "neutral", respectively).
train_data = read_snli(data_dir, is_train=True)
for x0, x1, y in zip(train_data[0][:3], train_data[1][:3], train_data[2][:3]):
    print('premise:', x0)
    print('hypothesis:', x1)
    print('label:', y)

premise: A person on a horse jumps over a broken down airplane .
hypothesis: A person is training his horse for a competition .
label: 2
premise: A person on a horse jumps over a broken down airplane .
hypothesis: A person is at a diner , ordering an omelette .
label: 1
premise: A person on a horse jumps over a broken down airplane .
hypothesis: A person is outdoors , on a horse .
label: 0
The training set has about 550,000 pairs, and the testing set has about 10,000 pairs. The following shows that the three labels "entailment", "contradiction", and "neutral" are balanced in both the training set and the testing set.
test_data = read_snli(data_dir, is_train=False)
for data in [train_data, test_data]:
    print([[row for row in data[2]].count(i) for i in range(3)])

[183416, 183187, 182764]
[3368, 3237, 3219]
16.4.2.2. Defining a Class for Loading the Dataset
Below we define a class for loading the SNLI dataset by inheriting from torch.utils.data.Dataset. The argument num_steps in the class constructor specifies the length of a text sequence so that each minibatch of sequences will have the same shape. In other words, tokens after the first num_steps ones in a longer sequence are trimmed, while the special token "<pad>" will be appended to shorter sequences until their length becomes num_steps. By implementing the __getitem__ function, we can arbitrarily access the premise, hypothesis, and label with the index idx.
#@save
class SNLIDataset(torch.utils.data.Dataset):
    """A customized dataset to load the SNLI dataset."""
    def __init__(self, dataset, num_steps, vocab=None):
        self.num_steps = num_steps
        all_premise_tokens = d2l.tokenize(dataset[0])
        all_hypothesis_tokens = d2l.tokenize(dataset[1])
        if vocab is None:
            self.vocab = d2l.Vocab(all_premise_tokens + all_hypothesis_tokens,
                                   min_freq=5, reserved_tokens=['<pad>'])
        else:
            self.vocab = vocab
        self.premises = self._pad(all_premise_tokens)
        self.hypotheses = self._pad(all_hypothesis_tokens)
        self.labels = torch.tensor(dataset[2])
        print('read ' + str(len(self.premises)) + ' examples')

    def _pad(self, lines):
        return torch.tensor([d2l.truncate_pad(
            self.vocab[line], self.num_steps, self.vocab['<pad>'])
                             for line in lines])

    def __getitem__(self, idx):
        return (self.premises[idx], self.hypotheses[idx]), self.labels[idx]

    def __len__(self):
        return len(self.premises)
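The _pad method relies on the d2l.truncate_pad helper. For reference, here is a minimal sketch of the truncate-and-pad logic it implements (illustrative only; the actual helper ships with the d2l package):

def truncate_pad(line, num_steps, padding_token):
    # Truncate or pad a list of token indices to exactly num_steps items
    if len(line) > num_steps:
        return line[:num_steps]  # Truncate tokens beyond num_steps
    # Pad with the padding token until the length reaches num_steps
    return line + [padding_token] * (num_steps - len(line))

truncate_pad([7, 8, 9], 5, 0)              # -> [7, 8, 9, 0, 0]
truncate_pad([7, 8, 9, 10, 11, 12], 5, 0)  # -> [7, 8, 9, 10, 11]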
16.4.2.3. Putting All Things Together
Now we can invoke the read_snli function and the SNLIDataset class to download the SNLI dataset and return DataLoader instances for both the training and testing sets, together with the vocabulary of the training set. It is noteworthy that we must use the vocabulary constructed from the training set as that of the testing set. As a result, any new token from the testing set will be unknown to the model trained on the training set.
#@save
def load_data_snli(batch_size, num_steps=50):
    """Download the SNLI dataset and return data iterators and vocabulary."""
    num_workers = d2l.get_dataloader_workers()
    data_dir = d2l.download_extract('SNLI')
    train_data = read_snli(data_dir, True)
    test_data = read_snli(data_dir, False)
    train_set = SNLIDataset(train_data, num_steps)
    test_set = SNLIDataset(test_data, num_steps, train_set.vocab)
    train_iter = torch.utils.data.DataLoader(train_set, batch_size,
                                             shuffle=True,
                                             num_workers=num_workers)
    test_iter = torch.utils.data.DataLoader(test_set, batch_size,
                                            shuffle=False,
                                            num_workers=num_workers)
    return train_iter, test_iter, train_set.vocab
Here we set the batch size to 128 and the sequence length to 50, and invoke the load_data_snli function to get the data iterators and vocabulary. Then we print the vocabulary size.
train_iter, test_iter, vocab = load_data_snli(128, 50)
len(vocab)

read 549367 examples
read 9824 examples
18678
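Because the vocabulary is built from the training set only, any test-set token that did not occur at least min_freq times in training falls back to the unknown token. A quick hedged check (this assumes d2l.Vocab's usual convention that out-of-vocabulary tokens map to the index of '<unk>', exposed as vocab.unk):

# 'sometokenneverseen' is a hypothetical out-of-vocabulary token.
print(vocab['sometokenneverseen'] == vocab.unk)  # expected: True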
Now we print the shape of the first minibatch. In contrast to sentiment analysis, we have two inputs X[0] and X[1] representing pairs of premises and hypotheses.
for X, Y in train_iter:
    print(X[0].shape)
    print(X[1].shape)
    print(Y.shape)
    break

torch.Size([128, 50])
torch.Size([128, 50])
torch.Size([128])
16.4.3. Summary
Natural language inference studies whether a hypothesis can be inferred from a premise, where both are a text sequence.
In natural language inference, relationships between premises and hypotheses include entailment, contradiction, and neutral.
The Stanford Natural Language Inference (SNLI) corpus is a popular benchmark dataset of natural language inference.
16.4.4. Exercises
Machine translation has long been evaluated based on superficial n-gram matching between an output translation and a ground-truth translation. Can you design a measure for evaluating machine translation results by using natural language inference?
How can we change hyperparameters to reduce the vocabulary size?