TensorFlow RNN Tutorial and Code

Introduction:
I've been looking at TensorFlow for a while now, so I decided to work through the tutorial on GitHub, type the code out myself, and organize my thoughts along the way.
The RNN part breaks down into three steps (a small sketch of how the input is shaped follows this list):
  1. Define the parameters, both data-related and training-related.
  2. Define the model, the loss function, and the optimizer.
  3. Train: prepare the data, feed it in, and report the results.
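Before the full script, here is a minimal sketch (my addition, not part of the original code) of the key idea: each flattened 784-pixel MNIST image is reshaped into a sequence of 28 time steps, one 28-pixel row per step, so the LSTM reads the image row by row. The array names below are illustrative only.

import numpy as np

# A flattened MNIST batch: 128 images, 784 pixels each (placeholder data).
flat_batch = np.zeros((128, 784), dtype=np.float32)

# Reshape into (batch, n_steps, n_input) = (128, 28, 28):
# 28 time steps per image, each step a row of 28 pixels.
seq_batch = flat_batch.reshape(128, 28, 28)
print(seq_batch.shape)  # (128, 28, 28)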

Code:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
from tensorflow.contrib import rnn

# Load the MNIST dataset (downloaded to ./data on first run).
mnist = input_data.read_data_sets("./data", one_hot=True)

# Training hyperparameters.
learning_rate = 0.001
training_iters = 100000  # total number of training examples to process
batch_size = 128
display_step = 10        # print progress every display_step batches

# Network parameters: each 28x28 image is read as a sequence of
# 28 time steps, one row of 28 pixels per step.
n_input = 28    # input size per time step (pixels per row)
n_steps = 28    # number of time steps (rows per image)
n_hidden = 128  # hidden units in the LSTM cell
n_classes = 10  # MNIST digit classes (0-9)

# Placeholders for the input sequences and the one-hot labels.
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_classes])

# Weights and biases of the final fully connected (output) layer.
weights = {'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))}
biases = {'out': tf.Variable(tf.random_normal([n_classes]))}

def RNN(x, weights, biases):
    # static_rnn expects a length-n_steps list of (batch, n_input) tensors,
    # so unstack the (batch, n_steps, n_input) input along the time axis.
    x = tf.unstack(x, n_steps, 1)
    lstm_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
    outputs, states = rnn.static_rnn(lstm_cell, x, dtype=tf.float32)
    # Classify using the LSTM output at the last time step.
    return tf.matmul(outputs[-1], weights['out']) + biases['out']

# Build the graph: predictions, softmax cross-entropy loss, and Adam optimizer.
pred = RNN(x, weights, biases)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Evaluation ops: fraction of predictions that match the labels.
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

init=tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    step = 1
    # Train until roughly training_iters examples have been processed.
    while step * batch_size < training_iters:
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        # Reshape the flattened images into (batch, n_steps, n_input) sequences.
        batch_x = batch_x.reshape(batch_size, n_steps, n_input)
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
        if step % display_step == 0:
            # Evaluate loss and accuracy on the current mini-batch.
            loss, acc = sess.run([cost, accuracy], feed_dict={x: batch_x, y: batch_y})
            print("Iter " + str(step * batch_size) + ", Minibatch Loss= " +
                  "{:.6f}".format(loss) + ", Training Accuracy= " +
                  "{:.5f}".format(acc))
        step += 1
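    # (Optional sketch, my addition and not reflected in the log below:
    # evaluate the trained model on a small slice of the MNIST test set.
    # test_len, test_data and test_label are illustrative names.)
    test_len = 128
    test_data = mnist.test.images[:test_len].reshape((-1, n_steps, n_input))
    test_label = mnist.test.labels[:test_len]
    print("Testing Accuracy:",
          sess.run(accuracy, feed_dict={x: test_data, y: test_label}))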


Output:

/anaconda/bin/python2.7 /Users/xxxx/PycharmProjects/TF_3/tf_rnn.py
Extracting ./data/train-images-idx3-ubyte.gz
Extracting ./data/train-labels-idx1-ubyte.gz
Extracting ./data/t10k-images-idx3-ubyte.gz
Extracting ./data/t10k-labels-idx1-ubyte.gz
2017-07-15 16:41:15.125981: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-15 16:41:15.125994: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-07-15 16:41:15.125997: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-15 16:41:15.126002: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
Iter 1280, Minibatch Loss= 1.842738, Training Accuracy= 0.33594
Iter 2560, Minibatch Loss= 1.489123, Training Accuracy= 0.50000
Iter 3840, Minibatch Loss= 1.300060, Training Accuracy= 0.57812
Iter 5120, Minibatch Loss= 1.244872, Training Accuracy= 0.62500
Iter 6400, Minibatch Loss= 0.947143, Training Accuracy= 0.71094
Iter 7680, Minibatch Loss= 0.709695, Training Accuracy= 0.75781
Iter 8960, Minibatch Loss= 0.799844, Training Accuracy= 0.76562
Iter 10240, Minibatch Loss= 0.594611, Training Accuracy= 0.83594
Iter 11520, Minibatch Loss= 0.529350, Training Accuracy= 0.82031
Iter 12800, Minibatch Loss= 0.624426, Training Accuracy= 0.82031
Iter 14080, Minibatch Loss= 0.481889, Training Accuracy= 0.82812
Iter 15360, Minibatch Loss= 0.449692, Training Accuracy= 0.84375
Iter 16640, Minibatch Loss= 0.418820, Training Accuracy= 0.85938
Iter 17920, Minibatch Loss= 0.412161, Training Accuracy= 0.85156
Iter 19200, Minibatch Loss= 0.256099, Training Accuracy= 0.90625
Iter 20480, Minibatch Loss= 0.227309, Training Accuracy= 0.90625
Iter 21760, Minibatch Loss= 0.431014, Training Accuracy= 0.85938
Iter 23040, Minibatch Loss= 0.377097, Training Accuracy= 0.87500
Iter 24320, Minibatch Loss= 0.268153, Training Accuracy= 0.89844
Iter 25600, Minibatch Loss= 0.170557, Training Accuracy= 0.95312
Iter 26880, Minibatch Loss= 0.286947, Training Accuracy= 0.91406
Iter 28160, Minibatch Loss= 0.189623, Training Accuracy= 0.94531
Iter 29440, Minibatch Loss= 0.228949, Training Accuracy= 0.95312
Iter 30720, Minibatch Loss= 0.157198, Training Accuracy= 0.94531
Iter 32000, Minibatch Loss= 0.205744, Training Accuracy= 0.93750
Iter 33280, Minibatch Loss= 0.195218, Training Accuracy= 0.92188
Iter 34560, Minibatch Loss= 0.177956, Training Accuracy= 0.92969
Iter 35840, Minibatch Loss= 0.131563, Training Accuracy= 0.96875
Iter 37120, Minibatch Loss= 0.215156, Training Accuracy= 0.92969
Iter 38400, Minibatch Loss= 0.232274, Training Accuracy= 0.94531
Iter 39680, Minibatch Loss= 0.324053, Training Accuracy= 0.91406
Iter 40960, Minibatch Loss= 0.196385, Training Accuracy= 0.93750
Iter 42240, Minibatch Loss= 0.151221, Training Accuracy= 0.95312
Iter 43520, Minibatch Loss= 0.242021, Training Accuracy= 0.95312
Iter 44800, Minibatch Loss= 0.304008, Training Accuracy= 0.90625
Iter 46080, Minibatch Loss= 0.185177, Training Accuracy= 0.93750
Iter 47360, Minibatch Loss= 0.190960, Training Accuracy= 0.94531
Iter 48640, Minibatch Loss= 0.141995, Training Accuracy= 0.94531
Iter 49920, Minibatch Loss= 0.199995, Training Accuracy= 0.94531
Iter 51200, Minibatch Loss= 0.193773, Training Accuracy= 0.92188
Iter 52480, Minibatch Loss= 0.151757, Training Accuracy= 0.94531
Iter 53760, Minibatch Loss= 0.153755, Training Accuracy= 0.94531
Iter 55040, Minibatch Loss= 0.141472, Training Accuracy= 0.93750
Iter 56320, Minibatch Loss= 0.168057, Training Accuracy= 0.96094
Iter 57600, Minibatch Loss= 0.135691, Training Accuracy= 0.96094
Iter 58880, Minibatch Loss= 0.097003, Training Accuracy= 0.97656
Iter 60160, Minibatch Loss= 0.274090, Training Accuracy= 0.92188
Iter 61440, Minibatch Loss= 0.147230, Training Accuracy= 0.95312
Iter 62720, Minibatch Loss= 0.106019, Training Accuracy= 0.96094
Iter 64000, Minibatch Loss= 0.101133, Training Accuracy= 0.97656
Iter 65280, Minibatch Loss= 0.169548, Training Accuracy= 0.93750
Iter 66560, Minibatch Loss= 0.101966, Training Accuracy= 0.96094
Iter 67840, Minibatch Loss= 0.106501, Training Accuracy= 0.96875
Iter 69120, Minibatch Loss= 0.082817, Training Accuracy= 0.96875
Iter 70400, Minibatch Loss= 0.192926, Training Accuracy= 0.96094
Iter 71680, Minibatch Loss= 0.086935, Training Accuracy= 0.96875
Iter 72960, Minibatch Loss= 0.052052, Training Accuracy= 0.98438
Iter 74240, Minibatch Loss= 0.129968, Training Accuracy= 0.95312
Iter 75520, Minibatch Loss= 0.058070, Training Accuracy= 0.99219
Iter 76800, Minibatch Loss= 0.089518, Training Accuracy= 0.96875
Iter 78080, Minibatch Loss= 0.106092, Training Accuracy= 0.98438
Iter 79360, Minibatch Loss= 0.223101, Training Accuracy= 0.92188
Iter 80640, Minibatch Loss= 0.069419, Training Accuracy= 0.97656
Iter 81920, Minibatch Loss= 0.050585, Training Accuracy= 0.99219
Iter 83200, Minibatch Loss= 0.048002, Training Accuracy= 0.98438
Iter 84480, Minibatch Loss= 0.094293, Training Accuracy= 0.96875
Iter 85760, Minibatch Loss= 0.152253, Training Accuracy= 0.96094
Iter 87040, Minibatch Loss= 0.085382, Training Accuracy= 0.97656
Iter 88320, Minibatch Loss= 0.147018, Training Accuracy= 0.95312
Iter 89600, Minibatch Loss= 0.099780, Training Accuracy= 0.96094
Iter 90880, Minibatch Loss= 0.118362, Training Accuracy= 0.93750
Iter 92160, Minibatch Loss= 0.110498, Training Accuracy= 0.96094
Iter 93440, Minibatch Loss= 0.077664, Training Accuracy= 0.98438
Iter 94720, Minibatch Loss= 0.070865, Training Accuracy= 0.96094
Iter 96000, Minibatch Loss= 0.156309, Training Accuracy= 0.94531
Iter 97280, Minibatch Loss= 0.116825, Training Accuracy= 0.94531
Iter 98560, Minibatch Loss= 0.099852, Training Accuracy= 0.96875
Iter 99840, Minibatch Loss= 0.116358, Training Accuracy= 0.96875

Process finished with exit code 0


Original article: http://www.tensorflownews.com/2017/07/15/tensorflow-rnn-turorial-mnist-code/
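A closing note: the code above targets the TensorFlow 1.x API; tensorflow.contrib.rnn and tensorflow.examples.tutorials.mnist were removed in TensorFlow 2.x. As a rough sketch (my addition, not from the original article, assuming a TF 2.x install with tf.keras), an equivalent model might look like this:

import tensorflow as tf

# Load MNIST and scale pixels to [0, 1]; each 28x28 image is already a
# (n_steps, n_input) = (28, 28) sequence, so no reshape is needed.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# One LSTM layer over the 28 rows, then a softmax over the 10 digit classes.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(128, input_shape=(28, 28)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=128, epochs=2,
          validation_data=(x_test, y_test))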

