Week 4 Programming Assignment (2) - Deep Neural Network for Image Classification: Application (Part 2)


3 - Architecture of your model


Now that you are familiar with the dataset, it is time to build a deep neural network to distinguish cat images from non-cat images.

You will build two different models:

  • A 2-layer neural network
  • An L-layer deep neural network

You will then compare the performance of these models, and also try out different values for $L$.

Let's look at the two architectures.


3.1 - 2-layer neural network


<caption><center> <u>Figure 2</u>: 2-layer neural network.

The model can be summarized as: INPUT -> LINEAR -> RELU -> LINEAR -> SIGMOID -> OUTPUT. </center></caption>

<u>Detailed Architecture of figure 2</u>:

  • The input is a (64,64,3) image which is flattened to a vector of size $(12288,1)$.
  • The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ of size $(n^{[1]}, 12288)$.
  • You then add a bias term and take its relu to get the following vector: $[a_0^{[1]}, a_1^{[1]}, ..., a_{n^{[1]}-1}^{[1]}]^T$.
  • You then repeat the same process.
  • You multiply the resulting vector by $W^{[2]}$ and add your intercept (bias).
  • Finally, you take the sigmoid of the result. If it is greater than 0.5, you classify it as a cat (see the shape sketch after this list).
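
To make the shapes concrete, here is a minimal NumPy sketch of that forward pass for a single image. It is for illustration only: the random image and the naive weight initialization are placeholders, not the assignment's initialize_parameters.

import numpy as np   # already imported earlier in the notebook

np.random.seed(1)
n_x, n_h = 64 * 64 * 3, 7                         # flattened input size and an illustrative hidden size
image = np.random.rand(64, 64, 3)                 # stand-in for one RGB image
x = image.reshape(n_x, 1)                         # flatten to a (12288, 1) column vector

W1, b1 = np.random.randn(n_h, n_x) * 0.01, np.zeros((n_h, 1))   # W1 has shape (n_h, 12288)
W2, b2 = np.random.randn(1, n_h) * 0.01, np.zeros((1, 1))

A1 = np.maximum(0, np.dot(W1, x) + b1)            # LINEAR -> RELU, shape (n_h, 1)
A2 = 1 / (1 + np.exp(-(np.dot(W2, A1) + b2)))     # LINEAR -> SIGMOID, shape (1, 1)
prediction = (A2 > 0.5).astype(int)               # 1 ("cat") if the probability exceeds 0.5
print(A1.shape, A2.shape, prediction)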


3.2 - L-layer deep neural network


It is hard to represent an L-layer deep neural network with the above representation. However, here is a simplified network representation:


<caption><center> <u>Figure 3</u>: L-layer neural network.

The model can be summarized as: [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID</center></caption>

<u>Detailed Architecture of figure 3</u>:

  • The input is a (64,64,3) image which is flattened to a vector of size (12288,1).
  • The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ and then you add the intercept $b^{[1]}$. The result is called the linear unit.
  • Next, you take the relu of the linear unit. This process could be repeated several times for each $(W^{[l]}, b^{[l]})$ depending on the model architecture.
  • Finally, you take the sigmoid of the final linear unit. If it is greater than 0.5, you classify it as a cat (see the loop sketch after this list).
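
As a rough sketch of this repeated pattern (assuming numpy is imported as np and parameters is a dictionary with keys W1, b1, ..., WL, bL; this is not the cached L_model_forward you implemented in the previous assignment):

def L_forward_sketch(X, parameters):
    # Illustration only: no caches are stored, so this cannot be used for backpropagation.
    L = len(parameters) // 2                      # number of layers with parameters
    A = X
    for l in range(1, L):                         # [LINEAR -> RELU] repeated (L-1) times
        Z = np.dot(parameters["W" + str(l)], A) + parameters["b" + str(l)]
        A = np.maximum(0, Z)
    ZL = np.dot(parameters["W" + str(L)], A) + parameters["b" + str(L)]
    AL = 1 / (1 + np.exp(-ZL))                    # final LINEAR -> SIGMOID
    return AL                                     # classify as "cat" wherever AL > 0.5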


3.3 - General methodology


As usual you will follow the Deep Learning methodology to build the model:

1. Initialize parameters / Define hyperparameters

2. Loop for num_iterations:

a. Forward propagation

b. Compute cost function

c. Backward propagation

d. Update parameters (using parameters, and grads from backprop)

3. Use the trained parameters to predict labels
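
In code, these steps map roughly onto the outline below. forward_propagation and backward_propagation are placeholder names used only for this outline; the concrete helper functions for the two-layer case are listed in the next section.

# Outline only (placeholder helper names, not a graded cell):
# parameters = initialize_parameters(layers_dims)                        # 1. initialize parameters
# for i in range(num_iterations):                                        # 2. optimization loop
#     AL, caches = forward_propagation(X, parameters)                    #    a. forward propagation
#     cost = compute_cost(AL, Y)                                         #    b. compute cost
#     grads = backward_propagation(AL, Y, caches)                        #    c. backward propagation
#     parameters = update_parameters(parameters, grads, learning_rate)   #    d. update parameters
# predictions = predict(test_x, test_y, parameters)                      # 3. predict labels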

Let's now implement those two models!


4 - Two-layer neural network


Question:  Use the helper functions you have implemented in the previous assignment to build a 2-layer neural network with the following structure: LINEAR -> RELU -> LINEAR -> SIGMOID. The functions you may need and their inputs are:

def initialize_parameters(n_x, n_h, n_y):
    ...
    return parameters 
def linear_activation_forward(A_prev, W, b, activation):
    ...
    return A, cache
def compute_cost(AL, Y):
    ...
    return cost
def linear_activation_backward(dA, cache, activation):
    ...
    return dA_prev, dW, db
def update_parameters(parameters, grads, learning_rate):
    ...
    return parameters

### CONSTANTS DEFINING THE MODEL ####
n_x = 12288     # num_px * num_px * 3
n_h = 7
n_y = 1
layers_dims = (n_x, n_h, n_y)

# GRADED FUNCTION: two_layer_model
def two_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):
    """
    Implements a two-layer neural network: LINEAR->RELU->LINEAR->SIGMOID.
    Arguments:
    X -- input data, of shape (n_x, number of examples)
    Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
    layers_dims -- dimensions of the layers (n_x, n_h, n_y)
    num_iterations -- number of iterations of the optimization loop
    learning_rate -- learning rate of the gradient descent update rule
    print_cost -- If set to True, this will print the cost every 100 iterations 
    Returns:
    parameters -- a dictionary containing W1, W2, b1, and b2
    """
    np.random.seed(1)
    grads = {}
    costs = []                              # to keep track of the cost
    m = X.shape[1]                           # number of examples
    (n_x, n_h, n_y) = layers_dims
    # Initialize parameters dictionary, by calling one of the functions you'd previously implemented
    ### START CODE HERE ### (≈ 1 line of code)
    parameters = initialize_parameters(n_x, n_h, n_y)
    ### END CODE HERE ###
    # Get W1, b1, W2 and b2 from the dictionary parameters.
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    # Loop (gradient descent)
    for i in range(0, num_iterations):
        # Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID. Inputs: "X, W1, b1, W2, b2". Output: "A1, cache1, A2, cache2".
        ### START CODE HERE ### (≈ 2 lines of code)
        A1, cache1 = linear_activation_forward(X, W1, b1, activation="relu")
        A2, cache2 = linear_activation_forward(A1, W2, b2, activation="sigmoid")
        ### END CODE HERE ###
        # Compute cost
        ### START CODE HERE ### (≈ 1 line of code)
        cost = compute_cost(A2, Y)
        ### END CODE HERE ###
        # Initializing backward propagation
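        # Derivative of the cross-entropy loss with respect to A2: -(Y/A2 - (1 - Y)/(1 - A2)), element-wise.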
        dA2 = - (np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))
        # Backward propagation. Inputs: "dA2, cache2, cache1". Outputs: "dA1, dW2, db2; also dA0 (not used), dW1, db1".
        ### START CODE HERE ### (≈ 2 lines of code)
        dA1, dW2, db2 = linear_activation_backward(dA2, cache2, activation="sigmoid")
        dA0, dW1, db1 = linear_activation_backward(dA1, cache1, activation="relu")
        ### END CODE HERE ###
        # Set grads['dW1'] to dW1, grads['db1'] to db1, grads['dW2'] to dW2, grads['db2'] to db2
        grads['dW1'] = dW1
        grads['db1'] = db1
        grads['dW2'] = dW2
        grads['db2'] = db2
        # Update parameters.
        ### START CODE HERE ### (approx. 1 line of code)
        parameters = update_parameters(parameters, grads, learning_rate)
        ### END CODE HERE ###
        # Retrieve W1, b1, W2, b2 from parameters
        W1 = parameters["W1"]
        b1 = parameters["b1"]
        W2 = parameters["W2"]
        b2 = parameters["b2"]
        # Print the cost every 100 iterations, and record it for the plot below
        if print_cost and i % 100 == 0:
            print("Cost after iteration {}: {}".format(i, np.squeeze(cost)))
            costs.append(cost)
    # plot the cost
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()
    return parameters


Run the cell below to train your parameters and check that your model runs: the cost should decrease. It may take up to 5 minutes to run 2500 iterations. Check that "Cost after iteration 0" matches the expected output below; if not, click the square (⬛) in the notebook's toolbar to stop the cell and try to find your error.

parameters = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2500, print_cost=True)

Cost after iteration 0: 0.693049735659989
Cost after iteration 100: 0.6464320953428849
Cost after iteration 200: 0.6325140647912678
Cost after iteration 300: 0.6015024920354665
Cost after iteration 400: 0.5601966311605748
Cost after iteration 500: 0.515830477276473
Cost after iteration 600: 0.4754901313943325
Cost after iteration 700: 0.43391631512257495
Cost after iteration 800: 0.4007977536203886
Cost after iteration 900: 0.35807050113237987
Cost after iteration 1000: 0.3394281538366413
Cost after iteration 1100: 0.30527536361962654
Cost after iteration 1200: 0.2749137728213015
Cost after iteration 1300: 0.24681768210614827
Cost after iteration 1400: 0.1985073503746611
Cost after iteration 1500: 0.17448318112556593
Cost after iteration 1600: 0.1708076297809661
Cost after iteration 1700: 0.11306524562164737
Cost after iteration 1800: 0.09629426845937163
Cost after iteration 1900: 0.08342617959726878
Cost after iteration 2000: 0.0743907870431909
Cost after iteration 2100: 0.06630748132267938
Cost after iteration 2200: 0.05919329501038176
Cost after iteration 2300: 0.05336140348560564
Cost after iteration 2400: 0.048554785628770226


(Cost plot: training cost versus iterations (per hundreds), titled "Learning rate = 0.0075"; the cost decreases steadily.)


Expected Output:

<table>
<tr> <td> Cost after iteration 0 </td> <td> 0.6930497356599888 </td> </tr>
<tr> <td> Cost after iteration 100 </td> <td> 0.6464320953428849 </td> </tr>
<tr> <td> ... </td> <td> ... </td> </tr>
<tr> <td> Cost after iteration 2400 </td> <td> 0.048554785628770206 </td> </tr>
</table>

Good thing you built a vectorized implementation! Otherwise it might have taken 10 times longer to train this.

Now, you can use the trained parameters to classify images from the dataset. To see your predictions on the training and test sets, run the cell below.
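
For intuition, a predict helper like this typically runs a forward pass with the trained parameters, thresholds the activations at 0.5, and reports the fraction of correctly predicted labels. The sketch below is an assumption about that behaviour for the two-layer parameters (using numpy as np); it is not the implementation of the provided predict function.

def predict_sketch(X, y, parameters):
    # Illustration only: forward pass with the trained two-layer parameters,
    # threshold at 0.5, then compare against the true labels y of shape (1, m).
    W1, b1 = parameters["W1"], parameters["b1"]
    W2, b2 = parameters["W2"], parameters["b2"]
    A1 = np.maximum(0, np.dot(W1, X) + b1)
    A2 = 1 / (1 + np.exp(-(np.dot(W2, A1) + b2)))
    p = (A2 > 0.5).astype(int)
    print("Accuracy: " + str(np.mean(p == y)))
    return p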

predictions_train = predict(train_x, train_y, parameters)

Accuracy: 1.0


Expected Output:

<table>
<tr> <td> Accuracy </td> <td> 1.0 </td> </tr>
</table>

predictions_test = predict(test_x, test_y, parameters)

Accuracy: 0.72


Expected Output:

<table>
<tr> <td> Accuracy </td> <td> 0.72 </td> </tr>
</table>

Note: You may notice that running the model on fewer iterations (say 1500) gives better accuracy on the test set. This is called "early stopping" and we will talk about it in the next course. Early stopping is a way to prevent overfitting.

Congratulations! It seems that your 2-layer neural network has better performance (72%) than the logistic regression implementation (70%, assignment week 2). Let's see if you can do even better with an $L$-layer model.
