
Dropout not computed in TensorFlow

I am trying to set things up so that dropout is computed only during training, but somehow the model does not seem to see the dropout layers: changing the drop probabilities has no effect at all. I suspect there is a logic error in my code, but I can't find it. I'm also fairly new to this area, so please bear with my inexperience. Any help would be greatly appreciated. Here is the code. I first create a boolean placeholder:

Train = tf.placeholder(tf.bool,shape=())
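
To make explicit how such a flag receives its value, here is a minimal self-contained sketch (the names are illustrative and separate from the model code): the placeholder is filled per sess.run call through feed_dict, not at graph-construction time.

import tensorflow as tf

# Illustrative sketch: the boolean placeholder gets its value per run call.
Train = tf.placeholder(tf.bool, shape=())
flag_as_float = tf.cast(Train, tf.float32)

with tf.Session() as sess:
    print(sess.run(flag_as_float, feed_dict={Train: True}))   # 1.0 during training
    print(sess.run(flag_as_float, feed_dict={Train: False}))  # 0.0 during testing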

It is then passed as True (training) or False (testing) through the feed dictionary. Forward propagation is implemented as follows:

def forward_prop_cost(X, parameters, string, drop_probs, Train):
    """
    Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX

    Arguments:
    X -- input dataset placeholder, of shape (input size, number of examples)
    parameters -- python dictionary containing your parameters "W1", "b1", ...
    string -- 'ReLU' or 'tanh'
    drop_probs -- drop probabilities for each layer; first and last == 0
    Train -- boolean placeholder, True during training

    Returns:
    ZL -- the output of the last LINEAR unit
    """
    L = len(drop_probs) - 1
    activations = [X]

    if string == 'ReLU':
        for i in range(1, L):
            Zi = tf.matmul(parameters['W' + str(i)], activations[i - 1]) + parameters['b' + str(i)]
            # intended to apply dropout only during training on layers with a non-zero drop probability
            if (Train == True and drop_probs[i] != 0):
                Ai = tf.nn.dropout(tf.nn.relu(Zi), drop_probs[i])
            else:
                Ai = tf.nn.relu(Zi)
            activations.append(Ai)
    elif string == 'tanh':   # needs update!
        for i in range(1, L):
            Zi = tf.matmul(parameters['W' + str(i)], activations[i - 1]) + parameters['b' + str(i)]
            Ai = tf.nn.dropout(tf.nn.tanh(Zi), drop_probs[i])
            activations.append(Ai)

    ZL = tf.matmul(parameters['W' + str(L)], activations[L - 1]) + parameters['b' + str(L)]

    return ZL
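
A note on the mechanics involved here: in TensorFlow 1.x graph mode, a Python-level check such as Train == True is evaluated once while the graph is being built (it compares a tensor object against a Python bool), whereas branching on a boolean tensor at run time is normally written with tf.cond. Below is a minimal sketch of one layer gated that way; the names are illustrative and not part of the code above.

import tensorflow as tf

# Minimal sketch with illustrative names: dropout applied only when the
# boolean tensor `training` is True, decided inside the graph via tf.cond.
training = tf.placeholder(tf.bool, shape=())
x = tf.placeholder(tf.float32, shape=(None, 4))

hidden = tf.nn.relu(x)
keep_prob = 0.6  # tf.nn.dropout in TF 1.x expects the keep probability
hidden = tf.cond(training,
                 lambda: tf.nn.dropout(hidden, keep_prob),
                 lambda: hidden)

with tf.Session() as sess:
    data = [[1.0, 2.0, 3.0, 4.0]]
    print(sess.run(hidden, feed_dict={x: data, training: True}))   # dropout active
    print(sess.run(hidden, feed_dict={x: data, training: False}))  # dropout bypassed

tf.layers.dropout with its training argument wraps the same pattern in a single call.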

The model function is then called, and at the end the value of Train is passed as True or False depending on which dataset is being used.

def model(X_train, Y_train, X_test, Y_test,hidden = [12288,25,12,6], string = 'ReLU',drop_probs = [0.,0.4,0.2,0.],
          regular_param = 0.0, starter_learning_rate = 0.0001,
          num_epochs = 1500, minibatch_size = 32, print_cost = True, learning_decay = False):
    '''
    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    '''

    ops.reset_default_graph()   
    tf.set_random_seed(1)                             
    seed = 3                                          
    (n_x, m) = X_train.shape                          # (n_x: input size, m : number of examples in the train set)
    n_y = Y_train.shape[0]                            # n_y : output size
    costs = []                                        # To keep track of the cost

    graph = tf.Graph()

    X, Y ,Train = create_placeholders(n_x, n_y)
    parameters = initialize_parameters(hidden)


    ZL = forward_prop_cost(X, parameters, 'ReLU', drop_probs, Train)
    cost = compute_cost(ZL, Y, parameters, regular_param)
    if learning_decay == True:
        increasing = tf.Variable(0, trainable=False)
        learning_rate = tf.train.exponential_decay(starter_learning_rate,increasing * minibatch_size,m, 0.95, staircase=True)
        optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost,global_step=increasing)
    else:
        optimizer = tf.train.AdamOptimizer(learning_rate = starter_learning_rate).minimize(cost)

    # Initialize all the variables
    init = tf.global_variables_initializer()

    # Start the session to compute the tensorflow graph
    with tf.Session() as sess:

        # Run the initialization
        sess.run(init, { Train: True } )

        # Do the training loop
        for epoch in range(num_epochs):

            epoch_cost = 0.                       
            num_minibatches = int(m / minibatch_size) 
            seed = seed + 1
            minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)

            for minibatch in minibatches:

                (minibatch_X, minibatch_Y) = minibatch

                _ , minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})

                epoch_cost += minibatch_cost / num_minibatches

            # Print the cost every 100 epochs
            if print_cost == True and epoch % 100 == 0:
                print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
            if print_cost == True and epoch % 5 == 0:
                costs.append(epoch_cost)

        # plot the cost
        plt.plot(np.squeeze(costs))
        plt.ylabel('cost')
        plt.xlabel('epochs (per 5)')
        plt.title("Learning rate = " + str(starter_learning_rate))
        plt.show()

        parameters = sess.run(parameters)
        print ("Parameters have been trained!")

        # Calculate accuracy on the test set
        correct_prediction = tf.equal(tf.argmax(ZL), tf.argmax(Y))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

        print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train, Train: True}))
        print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test, Train: False}))

        return parameters
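
For completeness, a call to the routine above might look like this (the dataset variables are placeholders for whatever data is prepared beforehand, not values from the post):

# hypothetical usage; X_train, Y_train, X_test, Y_test must be loaded elsewhere
parameters = model(X_train, Y_train, X_test, Y_test,
                   hidden=[12288, 25, 12, 6],
                   drop_probs=[0., 0.4, 0.2, 0.],
                   num_epochs=1500, minibatch_size=32)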

Source: StackOverflow, /questions/59379758/dropout-not-computed-in-tensorflow
