Step 2: Import the Libraries and Inspect the Data
Next, we import the necessary libraries and take a look at the dataset. We use the Keras framework running on TensorFlow 2.0.
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.layers import Embedding, LSTM, Dense, Dropout, Bidirectional
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import regularizers
import tensorflow.keras.utils as ku
import numpy as np
import tensorflow as tf
import pickle

data = open('stories.txt', encoding="utf8").read()
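Since this step also involves taking a look at the dataset, a minimal way to do that is to print its size and a short excerpt (the slice length here is an arbitrary choice):

# Quick look at the raw corpus: total number of characters and an excerpt.
print(len(data))
print(data[:300])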
Step 3: Preprocess the Data with NLP Libraries
First, we convert all of the text to lowercase and split it by line to obtain a Python list of sentences. We lowercase the text because the same word means the same thing regardless of case: for example, "Doctor" and "doctor" both mean doctor, but the model would otherwise treat them as different words.
We then tokenize the words so they can later be turned into vectors. Fitting the tokenizer builds a word_index attribute, a dictionary of key-value pairs in which each key is a word and each value is that word's token id.
# Converting the text to lowercase and splitting it
corpus = data.lower().split("\n")

# Tokenization
tokenizer = Tokenizer()
tokenizer.fit_on_texts(corpus)
total_words = len(tokenizer.word_index) + 1
print(total_words)
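As a quick sanity check of the word_index mapping described above, you can print a few of its entries. The ids shown in the comment are illustrative only; the actual values depend on the contents of stories.txt:

# Peek at the first few entries of the word -> token-id dictionary.
print(list(tokenizer.word_index.items())[:5])
# e.g. [('the', 1), ('and', 2), ('a', 3), ('to', 4), ('he', 5)]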
The next step is to convert each sentence into a list of values based on these token indices. This turns a line of text (such as "frozen grass crunched beneath the steps") into the list of tokens corresponding to its words.
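For example, converting that line would look like this (the ids in the comment are hypothetical; the actual values depend on the corpus):

# Convert one line of text into its list of token ids.
print(tokenizer.texts_to_sequences(["frozen grass crunched beneath the steps"])[0])
# e.g. [13, 48, 121, 207, 1, 89]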
We then iterate over the token lists and make every sequence the same length; otherwise, training the neural network on them could be difficult. The key step is to walk through all of the sequences and find the longest one. Once we know the maximum sequence length, we pad every sequence so that they all share that length.
At the same time, we need to split each sequence into input data (features) and output data (labels): the input is everything except the last token, and the output is that last token.
Now we one-hot encode the labels, because this is really a classification problem: given a sequence of words, we classify which word from the corpus comes next.
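For intuition, ku.to_categorical simply turns an integer class id into a one-hot vector. A tiny standalone example before we apply it to the real labels below:

# One-hot encoding a single label with 5 classes.
print(ku.to_categorical([2], num_classes=5))
# [[0. 0. 1. 0. 0.]]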
# create input sequences using list of tokens
input_sequences = []
for line in corpus:
    token_list = tokenizer.texts_to_sequences([line])[0]
    for i in range(1, len(token_list)):
        n_gram_sequence = token_list[:i+1]
        input_sequences.append(n_gram_sequence)

# pad sequences
max_sequence_len = max([len(x) for x in input_sequences])
print(max_sequence_len)
input_sequences = np.array(pad_sequences(input_sequences,
                                         maxlen=max_sequence_len,
                                         padding='pre'))

# create predictors and label
predictors, label = input_sequences[:, :-1], input_sequences[:, -1]
label = ku.to_categorical(label, num_classes=total_words)
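To make the n-gram expansion and the predictor/label split concrete, here is a small illustrative trace on one made-up token list (the ids are invented for the example):

# Hypothetical ids for a 6-word line such as "frozen grass crunched beneath the steps".
demo = [13, 48, 121, 207, 1, 89]
demo_grams = [demo[:i+1] for i in range(1, len(demo))]   # [13,48], [13,48,121], ...
demo_padded = pad_sequences(demo_grams, maxlen=max_sequence_len, padding='pre')
print(demo_padded[:, :-1])   # predictors: everything except the last token
print(demo_padded[:, -1])    # labels: the last token of each n-gram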
Step 4: Build the Model
With the training data ready, we can build the model we need:
model = Sequential()
model.add(Embedding(total_words, 300, input_length=max_sequence_len-1))
model.add(Bidirectional(LSTM(200, return_sequences=True)))
model.add(Dropout(0.2))
model.add(LSTM(100))
# integer division so the layer width is an int, not a float
model.add(Dense(total_words // 2, activation='relu',
                kernel_regularizer=regularizers.l2(0.001)))
model.add(Dense(total_words, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
print(model.summary())
history = model.fit(predictors, label, epochs=200, verbose=0)
The first layer is an Embedding layer. Its first argument is the number of words the model can handle; since we want to cover the whole vocabulary, we pass total_words. Its second argument is the dimensionality of the word-embedding vectors; you can tune it freely and will get different predictions. Its third argument is the length of the input sequences; because each input sequence is the original sequence minus its last token, we subtract one. This is followed by the bidirectional LSTM layer, a second LSTM layer, and the Dense layers. For the loss function we use categorical cross-entropy, and for the optimizer we choose the Adam algorithm.
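To sanity-check the architecture, one option is to print each layer's output shape; with the hyperparameters above, you would expect the Embedding layer to output (None, max_sequence_len-1, 300) and the bidirectional LSTM (None, max_sequence_len-1, 400), since the two directions are concatenated:

# Print each layer's name and output shape to verify the data flow.
for layer in model.layers:
    print(layer.name, layer.output_shape)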
Step 5: Analyze the Results
After training, we mainly examine the accuracy and the loss.
import matplotlib.pyplot as plt

acc = history.history['accuracy']
loss = history.history['loss']
epochs = range(len(acc))

plt.plot(epochs, acc, 'b', label='Training accuracy')
plt.title('Training accuracy')

plt.figure()
plt.plot(epochs, loss, 'b', label='Training loss')
plt.title('Training loss')
plt.legend()
plt.show()
The curves show that the training accuracy keeps improving while the loss keeps decreasing, indicating that the model fits the training data well.
Step 6: Save the Model
The following code saves the trained model to disk, which makes later deployment easier.
# serialize model to JSON
model_json = model.to_json()
with open("model.json", "w") as json_file:
    json_file.write(model_json)

# serialize weights to HDF5
model.save_weights("model.h5")
print("Saved model to disk")
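To use the model later, you will also need to restore it, and the tokenizer must be persisted as well so that seed text can be encoded at prediction time (this is what the pickle import earlier is useful for). A minimal sketch, where the file name tokenizer.pkl is just an assumed choice:

# Restore the architecture from JSON and the weights from HDF5.
from tensorflow.keras.models import model_from_json

with open("model.json") as json_file:
    loaded_model = model_from_json(json_file.read())
loaded_model.load_weights("model.h5")

# Save the fitted tokenizer alongside the model (file name is arbitrary).
with open("tokenizer.pkl", "wb") as f:
    pickle.dump(tokenizer, f)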
Step 7: Make Predictions
Next, we apply the trained model to predict words and generate a story. First, the user provides a seed sentence; we preprocess it, feed it into the LSTM model, and obtain one predicted word. Appending that word to the seed and repeating the process generates the story. The code is as follows:
seed_text = "As i walked, my heart sank"
next_words = 100

for _ in range(next_words):
    token_list = tokenizer.texts_to_sequences([seed_text])[0]
    token_list = pad_sequences([token_list], maxlen=max_sequence_len-1,
                               padding='pre')
    predicted = model.predict_classes(token_list, verbose=0)
    output_word = ""
    for word, index in tokenizer.word_index.items():
        if index == predicted:
            output_word = word
            break
    seed_text += " " + output_word

print(seed_text)
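One caveat: Sequential.predict_classes, used above, was removed in later TensorFlow releases (2.6 and up). If you are running a newer version, the equivalent replacement inside the loop is:

# Equivalent of predict_classes on newer TensorFlow versions.
predicted = np.argmax(model.predict(token_list, verbose=0), axis=-1)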
The generated story is as follows:
As i walked, my heart sank until he was alarmed by the voice of the hunter and realised what could have happened with him he flew away the boy crunched before it disguised herself as another effort to pull out the bush which he did the next was a small tree which the child had to struggle a lot to pull out finally the old man showed him a bigger tree and asked the child to pull it out the boy did so with ease and they walked on the morning she was asked how she had slept as a while they came back with me