Week 4 Programming Assignment (2) - Deep Neural Network for Image Classification: Application (Part 3)


5) L-layer Neural Network


Question: Use the helper functions you have implemented previously to build an $L$-layer neural network with the following structure: [LINEAR -> RELU]$\times$(L-1) -> LINEAR -> SIGMOID. The functions you may need and their inputs are:

def initialize_parameters_deep(layer_dims):
    ...
    return parameters 
def L_model_forward(X, parameters):
    ...
    return AL, caches
def compute_cost(AL, Y):
    ...
    return cost
def L_model_backward(AL, Y, caches):
    ...
    return grads
def update_parameters(parameters, grads, learning_rate):
    ...
    return parameters
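
For reference, here is a minimal sketch of what initialize_parameters_deep might look like. It assumes the 1/sqrt(fan-in) weight scaling used by this assignment's utilities (without that scaling, a network this deep trains poorly); your Part 1 implementation may differ in details.

import numpy as np

def initialize_parameters_deep(layer_dims):
    # layer_dims[0] is the input size; the remaining entries are layer widths.
    parameters = {}
    L = len(layer_dims)                # number of entries = number of layers + 1
    for l in range(1, L):
        # Scale weights by 1/sqrt(fan-in) so activations neither explode nor vanish
        parameters["W" + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) / np.sqrt(layer_dims[l - 1])
        parameters["b" + str(l)] = np.zeros((layer_dims[l], 1))
    return parameters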

### CONSTANTS ###
layers_dims = [12288, 20, 7, 5, 1] #  5-layer model

# GRADED FUNCTION: L_layer_model
def L_layer_model(X, Y, layers_dims, learning_rate=0.0075, num_iterations=3000, print_cost=False):  # lr was 0.009
    """
    Implements an L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.
    Arguments:
    X -- data, numpy array of shape (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 1 if cat, 0 if non-cat), of shape (1, number of examples)
    layers_dims -- list containing the input size and each layer size, of length (number of layers + 1).
    learning_rate -- learning rate of the gradient descent update rule
    num_iterations -- number of iterations of the optimization loop
    print_cost -- if True, it prints the cost every 100 steps
    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    """
    np.random.seed(1)
    costs = []                         # keep track of cost
    # Parameters initialization.
    ### START CODE HERE ###
    parameters = initialize_parameters_deep(layers_dims)
    ### END CODE HERE ###
    # Loop (gradient descent)
    for i in range(0, num_iterations):
        # Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
        ### START CODE HERE ### (≈ 1 line of code)
        AL, caches = L_model_forward(X, parameters)
        ### END CODE HERE ###
        # Compute cost.
        ### START CODE HERE ### (≈ 1 line of code)
        cost = compute_cost(AL, Y)
        ### END CODE HERE ###
        # Backward propagation.
        ### START CODE HERE ### (≈ 1 line of code)
        grads = L_model_backward(AL, Y, caches)
        ### END CODE HERE ###
        # Update parameters.
        ### START CODE HERE ### (≈ 1 line of code)
        parameters = update_parameters(parameters, grads, learning_rate)
        ### END CODE HERE ###
        # Print and record the cost every 100 iterations
        if print_cost and i % 100 == 0:
            print("Cost after iteration %i: %f" % (i, cost))
            costs.append(cost)
    # plot the cost
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
    plt.title("Learning rate = " + str(learning_rate))
    plt.show()
    return parameters
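
Of these helpers, update_parameters is the easiest to restate: gradient descent subtracts learning_rate times each gradient from the corresponding parameter. A minimal sketch (again, your own Part 1 implementation may differ slightly):

def update_parameters(parameters, grads, learning_rate):
    L = len(parameters) // 2           # number of layers with weights (W1..WL, b1..bL)
    for l in range(1, L + 1):
        # Gradient descent step: theta = theta - alpha * d_theta
        parameters["W" + str(l)] = parameters["W" + str(l)] - learning_rate * grads["dW" + str(l)]
        parameters["b" + str(l)] = parameters["b" + str(l)] - learning_rate * grads["db" + str(l)]
    return parameters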


You will now train the model as a 5-layer neural network.

Run the cell below to train your model. The cost should decrease on every iteration. It may take up to 5 minutes to run 2500 iterations. Check that the "Cost after iteration 0" matches the expected output below; if not, click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.

parameters = L_layer_model(train_x, train_y, layers_dims, num_iterations = 2500, print_cost = True)

Cost after iteration 0: 0.771749
Cost after iteration 100: 0.672053
Cost after iteration 200: 0.648263
Cost after iteration 300: 0.611507
Cost after iteration 400: 0.567047
Cost after iteration 500: 0.540138
Cost after iteration 600: 0.527930
Cost after iteration 700: 0.465477
Cost after iteration 800: 0.369126
Cost after iteration 900: 0.391747
Cost after iteration 1000: 0.315187
Cost after iteration 1100: 0.272700
Cost after iteration 1200: 0.237419
Cost after iteration 1300: 0.199601
Cost after iteration 1400: 0.189263
Cost after iteration 1500: 0.161189
Cost after iteration 1600: 0.148214
Cost after iteration 1700: 0.137775
Cost after iteration 1800: 0.129740
Cost after iteration 1900: 0.121225
Cost after iteration 2000: 0.113821
Cost after iteration 2100: 0.107839
Cost after iteration 2200: 0.102855
Cost after iteration 2300: 0.100897
Cost after iteration 2400: 0.092878


[Figure: training cost decreasing over the iterations (output_30_1.png)]


Expected Output:

<table>

<tr>

<td> Cost after iteration 0</td>

<td> 0.771749 </td>

</tr>

<tr>

<td> Cost after iteration 100</td>

<td> 0.672053 </td>

</tr>

<tr>

<td> ...</td>

<td> ... </td>

</tr>

<tr>

<td> Cost after iteration 2400</td>

<td> 0.092878 </td>

</tr>

</table>

pred_train = predict(train_x, train_y, parameters)

Accuracy: 0.985645933014

pred_test = predict(test_x, test_y, parameters)

Accuracy: 0.8


Expected Output:

<table>

<tr>

<td> Test Accuracy</td>

<td> 0.8 </td>

</tr>

</table>
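
The predict helper is provided by the assignment's utilities rather than graded here. Conceptually it runs one forward pass and thresholds the sigmoid output at 0.5; a minimal sketch, assuming the L_model_forward used above:

def predict(X, y, parameters):
    # Forward propagate, then classify as cat (1) if the sigmoid output exceeds 0.5
    AL, _ = L_model_forward(X, parameters)
    p = (AL > 0.5).astype(int)
    print("Accuracy: " + str(np.mean(p == y)))
    return p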

Congrats! It seems that your 5-layer neural network has better performance (80%) than your 2-layer neural network (72%) on the same test set.

This is good performance for this task. Nice job!

In the next course, "Improving deep neural networks", you will learn how to obtain even higher accuracy by systematically searching for better hyperparameters (learning_rate, layers_dims, num_iterations, and others you will meet there).


6) Results Analysis


First, let's take a look at some images the L-layer model labeled incorrectly. The cell below displays a few of these mislabeled images.

print_mislabeled_images(classes, test_x, test_y, pred_test)


[Figure: a few mislabeled test images with predicted vs. true class (output_37_0.png)]
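
print_mislabeled_images also comes from the assignment's utilities. As a rough idea of how such a helper can work, here is a sketch assuming 64x64x3 flattened images and 0/1 labels (the provided version may differ):

def print_mislabeled_images(classes, X, y, p):
    # p + y equals 1 exactly where prediction and label disagree (one is 0, the other 1)
    mislabeled = np.where((p + y) == 1)[1]
    for i, index in enumerate(mislabeled):
        plt.subplot(1, len(mislabeled), i + 1)
        plt.imshow(X[:, index].reshape(64, 64, 3), interpolation="nearest")
        plt.axis("off")
        plt.title("Prediction: " + classes[int(p[0, index])].decode("utf-8")
                  + "\nClass: " + classes[int(y[0, index])].decode("utf-8"))
    plt.show()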


A few types of images the model tends to do poorly on include:

  • Cat body in an unusual position
  • Cat appears against a background of a similar color
  • Unusual cat color and species
  • Camera angle
  • Brightness of the picture
  • Scale variation (cat is very large or small in image)


7) Test with your own image (optional/ungraded exercise)


Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that:

1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.

2. Add your image to this Jupyter Notebook's directory, in the "images" folder

3. Change your image's name in the following code

4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!

## START CODE HERE ##
my_image = "my_image.jpg" # change this to the name of your image file 
my_label_y = [1] # the true class of your image (1 -> cat, 0 -> non-cat)
## END CODE HERE ##
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((num_px*num_px*3,1))
my_predicted_image = predict(my_image, my_label_y, parameters)
plt.imshow(image)
print ("y = " + str(np.squeeze(my_predicted_image)) + ", your L-layer model predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") +  "\" picture.")

Accuracy: 1.0
y = 1.0, your L-layer model predicts a "cat" picture.


[Figure: the uploaded image with its predicted class (output_40_1.png)]
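
Note: scipy.ndimage.imread and scipy.misc.imresize were removed in newer SciPy releases, so the cell above only runs in the old environment this notebook was written for. If it fails for you, here is a sketch of equivalent preprocessing with Pillow (it assumes num_px, my_label_y, predict, and parameters are defined as above; the division by 255 matches how the training data was standardized earlier in the assignment):

import numpy as np
from PIL import Image

fname = "images/my_image.jpg"  # the same image file as above
image = np.array(Image.open(fname).convert("RGB").resize((num_px, num_px)))
my_image = image.reshape((num_px * num_px * 3, 1)) / 255.  # flatten and rescale like the training set
my_predicted_image = predict(my_image, my_label_y, parameters)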
