[Artificial Intelligence] 45 Questions to Test a Data Scientist on the Basics of Deep Learning (with Solutions) (Part 2)

Summary: [Artificial Intelligence] 45 Questions to Test a Data Scientist on the Basics of Deep Learning (with Solutions)

Q16. Which of the following architectures has feedback connections?

A. Recurrent Neural network

B. Convolutional Neural Network

C. Restricted Boltzmann Machine

D. None of these

Solution: (A)

Option A is correct: a Recurrent Neural Network has feedback connections, since its hidden state is fed back as an input at the next time step.

Q17. What is the sequence of the following tasks in a perceptron?

  1. Initialize weights of perceptron randomly
  2. Go to the next batch of dataset
  3. If the prediction does not match the output, change the weights
  4. For a sample input, compute an output

A. 1, 2, 3, 4

B. 4, 3, 2, 1

C. 3, 1, 2, 4

D. 1, 4, 3, 2

Solution: (D)

Sequence D is correct.
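
As a rough illustration (the AND-gate data, learning rate, and epoch count below are made up for this sketch), this minimal NumPy example follows exactly that order: random initialization, compute an output for a sample, change the weights on a mismatch, then move on to the next pass over the data.

```python
import numpy as np

# Minimal perceptron sketch: initialize weights randomly (step 1), for a
# sample compute an output (step 4), change the weights if the prediction
# is wrong (step 3), then move on to the next pass over the data (step 2).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # toy inputs
y = np.array([0, 0, 0, 1])                                   # AND labels

w = rng.normal(size=2)   # step 1: random weights
b = 0.0
lr = 0.1

for epoch in range(10):                  # step 2: next pass over the data
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)       # step 4: compute an output
        if pred != target:               # step 3: mismatch -> update weights
            w += lr * (target - pred) * xi
            b += lr * (target - pred)

print(w, b)
```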

Q18. Suppose that you have to minimize the cost function by changing the parameters. Which of the following techniques could be used for this?

A. Exhaustive Search

B. Random Search

C. Bayesian Optimization

D. Any of these

Solution: (D)

Any of the above-mentioned techniques can be used to search the parameter space and minimize the cost function.
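
For a concrete (toy) picture, here is a hedged NumPy sketch of two of those techniques on a made-up one-parameter cost function; Bayesian optimization is usually delegated to a dedicated library, so it is only mentioned here.

```python
import numpy as np

# Hypothetical one-parameter cost function with its minimum at theta = 3.
def cost(theta):
    return (theta - 3.0) ** 2 + 1.0

rng = np.random.default_rng(42)

# Exhaustive (grid) search: evaluate the cost on a coarse grid.
grid = np.linspace(-10, 10, 201)
best_grid = grid[np.argmin([cost(t) for t in grid])]

# Random search: evaluate the cost at uniformly sampled candidates.
samples = rng.uniform(-10, 10, size=200)
best_random = samples[np.argmin([cost(t) for t in samples])]

print(best_grid, best_random)  # both land near theta = 3
```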

Q19. First-order gradient descent would not work correctly (i.e. may get stuck) in which of the following graphs?

A.

B.

C.

D. None of these

Solution: (B)

This is a classic example of the saddle point problem in gradient descent: the gradient is zero at a saddle point even though it is not a minimum.
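
A small NumPy sketch of the idea, using the textbook saddle function f(x, y) = x² − y² (chosen here purely for illustration):

```python
import numpy as np

# f(x, y) = x**2 - y**2 has a saddle point at (0, 0): the gradient is zero
# there even though it is not a minimum.  Plain first-order gradient
# descent started on the x-axis walks straight into the saddle and stalls.
def grad(p):
    x, y = p
    return np.array([2 * x, -2 * y])

p = np.array([1.0, 0.0])   # start exactly on the axis leading to the saddle
lr = 0.1
for _ in range(100):
    p = p - lr * grad(p)

print(p)  # ~[0, 0]: stuck at the saddle, the gradient is (numerically) zero
```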

Q20. The graph below shows the accuracy of a trained 3-layer convolutional neural network vs. the number of parameters (i.e. the number of feature kernels).

The trend suggests that as you increase the width of a neural network, the accuracy increases till a certain threshold value, and then starts decreasing.

What could be the possible reason for this decrease?

A. Even if the number of kernels increases, only a few of them are used for prediction

B. As the number of kernels increases, the predictive power of the neural network decreases

C. As the number of kernels increases, they start to correlate with each other, which in turn encourages overfitting

D. None of these

Solution: (C)

As mentioned in option C, the possible reason could be kernel correlation.

Q21. Suppose we have a one-hidden-layer neural network as shown above. The hidden layer in this network acts as a dimensionality reducer. Now, instead of using this hidden layer, we replace it with a dimensionality reduction technique such as PCA.

Would the network that uses a dimensionality reduction technique always give the same output as the network with the hidden layer?

A. Yes

B. No

Solution: (B)

Because PCA extracts the directions of maximum variance among correlated features, whereas a hidden layer learns whatever features are most useful for prediction, so the two will not always give the same output.

Q22. Can a neural network model the function (y=1/x)?

A. Yes

B. No

Solution: (A)

Option A is true; for example, the activation function itself could be a reciprocal function, allowing the network to model y = 1/x.

Q23. In which neural net architecture, does weight sharing occur?

A. Convolutional neural Network

B. Recurrent Neural Network

C. Fully Connected Neural Network

D. Both A and B

Solution: (D)

Option D is correct: weight sharing occurs in both CNNs (the same kernel is reused across spatial locations) and RNNs (the same weights are reused across time steps).
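
As a toy illustration of weight sharing (the kernel and input values below are arbitrary), a 1-D convolution applies the very same weights at every position:

```python
import numpy as np

# One shared set of weights (the kernel) is reused at every input position,
# as in a CNN; an RNN analogously reuses one weight matrix at every time step.
kernel = np.array([1.0, 0.0, -1.0])
x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0])

conv = np.array([x[i:i + 3] @ kernel for i in range(len(x) - 2)])
print(conv)  # every output value was produced by the identical kernel weights
```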

Q24. Batch Normalization is helpful because

A. It normalizes (changes) all the input before sending it to the next layer

B. It returns the normalized mean and standard deviation of the weights

C. It is a very efficient backpropagation technique

D. None of these

Solution: (A)

To read more about batch normalization, refer to this video.
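
For intuition, here is a hedged NumPy sketch of the forward pass of batch normalization (the mini-batch shape and the helper name `batch_norm` are made up for this example): each feature is normalized before being passed to the next layer, then rescaled by learnable parameters.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize a mini-batch of activations before they reach the next
    layer (option A): zero mean and unit variance per feature, followed by
    a learnable scale (gamma) and shift (beta)."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.randn(32, 4) * 5 + 10   # a hypothetical mini-batch, 4 features
out = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(out.mean(axis=0).round(3), out.std(axis=0).round(3))  # ~0 and ~1
```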

Q25. Instead of trying to achieve absolute zero error, we set a metric called the Bayes error, which is the error we hope to achieve. What could be the reason for using the Bayes error?

A. Input variables may not contain complete information about the output variable

B. System (that creates input-output mapping) may be stochastic

C. Limited training data

D. All the above

Solution: (D)

In reality, achieving perfectly accurate prediction is a myth, so we should aim for an "achievable" level of error instead.

Q26. The number of neurons in the output layer should match the number of classes (where the number of classes is greater than 2) in a supervised learning task. True or False?

A. True

B. False

Solution: (B)

It depends on the output encoding. If it is one-hot encoding, then it's true. But you can also have two outputs for four classes and take the binary values as the four classes (00, 01, 10, 11).

Q27. In a neural network, which of the following techniques is used to deal with overfitting?

A. Dropout

B. Regularization

C. Batch Normalization

D. All of these

Solution: (D)

All of the techniques can be used to deal with overfitting.
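
As one concrete example, below is a minimal sketch of (inverted) dropout in NumPy; the drop probability and activation shapes are placeholders.

```python
import numpy as np

def dropout(activations, p_drop=0.5, training=True):
    """Inverted dropout: randomly zero units during training and rescale
    the survivors so that no change is needed at inference time."""
    if not training:
        return activations
    mask = (np.random.rand(*activations.shape) >= p_drop) / (1.0 - p_drop)
    return activations * mask

h = np.random.randn(4, 8)          # hypothetical hidden-layer activations
print(dropout(h, p_drop=0.5))      # roughly half the units are zeroed out
```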

Q28. Y = ax^2 + bx + c (polynomial equation of degree 2)

Can this equation be represented by a neural network with a single hidden layer and a linear threshold?

A. Yes

B. No

Solution: (B)

The answer is no, because a linear threshold restricts the neural network: composing linear layers simply yields another linear transformation, which cannot represent a degree-2 polynomial.
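
A quick NumPy check of that reasoning (random weights chosen only for illustration): stacking linear layers collapses into a single linear map, which cannot bend into a parabola.

```python
import numpy as np

# With purely linear units, stacking layers collapses into one linear map:
# W2 @ (W1 @ x + b1) + b2 == (W2 @ W1) @ x + (W2 @ b1 + b2).
# A single linear function cannot match y = a*x**2 + b*x + c everywhere.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 1)), rng.normal(size=3)
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)

x = np.array([2.0])
two_layer = W2 @ (W1 @ x + b1) + b2
collapsed = (W2 @ W1) @ x + (W2 @ b1 + b2)
print(np.allclose(two_layer, collapsed))  # True: still just linear in x
```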

Q29. What is a dead unit in a neural network?

A. A unit which is not updated during training by any of its neighbours

B. A unit which does not respond completely to any of the training patterns

C. The unit which produces the biggest sum-squared error

D. None of these

Solution: (A)

Option A is correct.

Q30. Which of the following statements is the best description of early stopping?

A. Train the network until a local minimum in the error function is reached

B. Simulate the network on a test dataset after every epoch of training. Stop training when the generalization error starts to increase

C. Add a momentum term to the weight update in the Generalized Delta Rule, so that training converges more quickly

D. A faster version of backpropagation, such as the 'Quickprop' algorithm

Solution: (B)

Option B is correct.

Q31. What if we use a learning rate that’s too large?

A. Network will converge

B. Network will not converge

C. Can’t Say

Solution: (B)

Option B is correct because the error rate would become erratic and explode.
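
A tiny sketch on the toy objective f(w) = w², just to illustrate the effect: with too large a step size the iterates overshoot further each step instead of converging.

```python
# Gradient descent on f(w) = w**2 (gradient 2*w).  The update
# w <- w - lr*2*w = (1 - 2*lr)*w shrinks w only if |1 - 2*lr| < 1;
# with lr too large the iterates oscillate and blow up instead.
def run(lr, steps=15, w=1.0):
    for _ in range(steps):
        w = w - lr * 2 * w
    return w

print(run(lr=0.1))   # small step: converges toward 0
print(run(lr=1.1))   # too large: the error explodes
```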

Q32. The network shown in Figure 1 is trained to recognize the characters H and T as shown below:

What would be the output of the network?

A.

B.

C.

D. Could be A or B depending on the weights of the neural network

Solution: (D)

Without knowing the weights and biases of the neural network, we cannot say what output it would give.

Q33. Suppose a convolutional neural network is trained on the ImageNet dataset (an object recognition dataset). This trained model is then given a completely white image as an input. The output probabilities for this input would be equal for all classes. True or False?

A. True

B. False

Solution: (B)

There would be some neurons which do not activate for white pixels as input, so the class probabilities would not be equal.

Q34. When a pooling layer is added to a convolutional neural network, translation invariance is preserved. True or False?

A. True

B. False

Solution: (A)

Translation invariance is induced when you use pooling.
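
A minimal 1-D illustration (made-up signal values): after max pooling, a feature shifted by one position inside its pooling window produces exactly the same output.

```python
import numpy as np

# Max pooling over small windows makes the output insensitive to small
# translations: shifting a feature by one position inside its pooling
# window leaves the pooled result unchanged.
def max_pool_1d(x, size=2):
    return x.reshape(-1, size).max(axis=1)

signal = np.array([0, 9, 0, 0, 0, 0, 0, 0])
shifted = np.array([9, 0, 0, 0, 0, 0, 0, 0])   # same feature, shifted by 1
print(max_pool_1d(signal))    # [9 0 0 0]
print(max_pool_1d(shifted))   # [9 0 0 0] -> identical after pooling
```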

Q35. Which gradient descent technique is more advantageous when the data is too large to fit in RAM at once?

A. Full Batch Gradient Descent

B. Stochastic Gradient Descent

Solution: (B)

Option B is correct: SGD updates the weights using one sample (or a small mini-batch) at a time, so the entire dataset never has to be held in RAM.
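
A hedged NumPy sketch of mini-batch SGD on a synthetic linear-regression problem (all sizes, the learning rate, and the step count are made up): each update touches only a small slice of the data, which is why the full dataset never needs to sit in memory.

```python
import numpy as np

# Mini-batch SGD on a synthetic linear-regression problem: each update uses
# only a small slice of the data, so the full dataset never has to be held
# in RAM (it could even be streamed from disk batch by batch).
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
true_w = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y = X @ true_w + 0.1 * rng.normal(size=10_000)

w = np.zeros(5)
lr, batch = 0.01, 32
for step in range(2_000):
    idx = rng.integers(0, len(X), size=batch)   # load just one mini-batch
    xb, yb = X[idx], y[idx]
    grad = 2 * xb.T @ (xb @ w - yb) / batch     # MSE gradient on the batch
    w -= lr * grad

print(np.round(w, 2))   # close to true_w despite never using all rows at once
```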

Q36. The graph represents the gradient flow, per epoch of training, of a four-hidden-layer neural network trained with the sigmoid activation function. The neural network suffers from the vanishing gradient problem.

Which of the following statements is true?

A. Hidden layer 1 corresponds to D, Hidden layer 2 corresponds to C, Hidden layer 3 corresponds to B and Hidden layer 4 corresponds to A

B. Hidden layer 1 corresponds to A, Hidden layer 2 corresponds to B, Hidden layer 3 corresponds to C and Hidden layer 4 corresponds to D

Solution: (A)

This describes the vanishing gradient problem: as backpropagation proceeds toward the earliest layers, the gradients shrink and those layers learn the least.
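
A back-of-the-envelope NumPy illustration: the sigmoid derivative is at most 0.25, so the gradient reaching earlier layers is a product of several such small factors and shrinks layer by layer.

```python
import numpy as np

# The sigmoid derivative is at most 0.25, so backpropagating through several
# sigmoid layers multiplies many small factors: the earliest hidden layer
# (layer 1) receives the smallest gradient, matching option A.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = 0.5
local_grad = sigmoid(z) * (1 - sigmoid(z))     # <= 0.25
for n_layers in (1, 2, 3, 4):
    print(n_layers, local_grad ** n_layers)    # shrinks with every extra layer
```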

Q37. For a classification task, instead of random weight initializations in a neural network, we set all the weights to zero. Which of the following statements is true?

A. There will not be any problem and the neural network will train properly

B. The neural network will train but all the neurons will end up recognizing the same thing

C. The neural network will not train as there is no net gradient change

D. None of these

Solution: (B)

Option B is correct: with identical (zero) initial weights, every neuron in a layer receives the same gradient, so they all end up learning the same thing.
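
A small NumPy sketch of this symmetry problem (toy data, arbitrary layer sizes): starting from all-zero weights, every hidden unit receives the same gradient at every step, so the columns of the first weight matrix remain identical throughout training.

```python
import numpy as np

# With all-zero initialization, the hidden units are perfectly symmetric:
# they receive identical gradients at every step, so they never diverge
# from one another and all learn the same feature (option B).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 3))
y = (X[:, :1] > 0).astype(float)        # toy binary labels

W1 = np.zeros((3, 4))                   # 3 inputs -> 4 hidden units
W2 = np.zeros((4, 1))
for _ in range(200):
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    d_out = out - y                     # gradient of the cross-entropy loss
    dW2 = h.T @ d_out
    d_h = (d_out @ W2.T) * h * (1 - h)
    dW1 = X.T @ d_h
    W1 -= 0.1 * dW1
    W2 -= 0.1 * dW2

print(np.round(W1, 3))   # all 4 columns (hidden units) are identical
```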

Q38. There is a plateau at the start. This is happening because the neural network gets stuck at local minima before going on to global minima.

To avoid this, which of the following strategy should work?

A. Increase the number of parameters, as the network would not get stuck at local minima

B. Decrease the learning rate by 10 times at the start and then use momentum

C. Jitter the learning rate, i.e. change the learning rate for a few epochs

D. None of these

Solution: (C)

Option C can be used to take a neural network out of a local minimum in which it is stuck.

Q39. For an image recognition problem (recognizing a cat in a photo), which architecture of neural network would be better suited to solve the problem?

A. Multi Layer Perceptron

B. Convolutional Neural Network

C. Recurrent Neural network

D. Perceptron

Solution: (B)

A Convolutional Neural Network would be better suited for image-related problems because it inherently takes into account the spatial relationships between nearby locations in an image.

Q40. Suppose that while training, you encounter this issue: the error suddenly increases after a couple of iterations.

You determine that there must be a problem with the data. You plot the data and find that the original data is somewhat skewed, which may be causing the problem.

What will you do to deal with this challenge?

A. Normalize

B. Apply PCA and then Normalize

C. Take Log Transform of the data

D. None of these

Solution: (B)

First you would remove the correlations in the data (PCA) and then zero-center and scale it (normalization).
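
A hedged NumPy sketch of that recipe on made-up skewed, correlated data: decorrelate with PCA (project onto the covariance eigenvectors), then normalize each resulting component.

```python
import numpy as np

# Made-up skewed, correlated data; option B: decorrelate with PCA first,
# then normalize each resulting component.
rng = np.random.default_rng(0)
raw = rng.lognormal(mean=0.0, sigma=1.0, size=(500, 2))    # skewed features
data = raw @ np.array([[1.0, 0.8], [0.0, 0.6]])            # make them correlated

centered = data - data.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
pca = centered @ eigvecs                                   # decorrelated components

normalized = (pca - pca.mean(axis=0)) / pca.std(axis=0)    # zero mean, unit variance
print(np.round(np.cov(normalized, rowvar=False), 2))       # ~identity matrix
```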

Q41. Which of the following is a decision boundary of Neural Network?

A) B

B) A

C) D

D) C

E) All of these

Solution: (E)

A neural network is said to be a universal function approximator, so it can theoretically represent any decision boundary.

Q42. In the graph below, we observe that the error has many "ups and downs".

Should we be worried?

A. Yes, because this means there is a problem with the learning rate of the neural network.

B. No, as long as there is a cumulative decrease in both training and validation error, we don’t need to worry.

Solution: (B)

Option B is correct. In order to decrease these "ups and downs", try increasing the batch size.

Q43. Which factors should be considered when selecting the depth of a neural network?

  1. Type of neural network (eg. MLP, CNN etc)
  2. Input data
  3. Computational power, i.e. hardware and software capabilities
  4. Learning Rate
  5. The output function to map

A. 1, 2, 4, 5

B. 2, 3, 4, 5

C. 1, 3, 4, 5

D. All of these

Solution: (D)

All of the above factors are important when selecting the depth of a neural network.

Q44. Consider the scenario. The problem you are trying to solve has a small amount of data. Fortunately, you have a pre-trained neural network that was trained on a similar problem. Which of the following methodologies would you choose to make use of this pre-trained network?

A. Re-train the model for the new dataset

B. Assess on every layer how the model performs and only select a few of them

C. Fine tune the last couple of layers only

D. Freeze all the layers except the last, re-train the last layer

Solution: (D)

If the new dataset is mostly similar to the original one, the best method is to re-train only the last layer, as all the previous layers work as feature extractors.
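
As an illustration only, here is a hedged Keras sketch of option D, assuming TensorFlow/Keras is available; MobileNetV2 stands in for the pre-trained network and the 5-class output layer is a placeholder for the new task.

```python
from tensorflow import keras

# Reuse a pre-trained backbone as a frozen feature extractor and re-train
# only the new final layer (option D).  MobileNetV2 and the 5-class output
# are placeholders; running this downloads the ImageNet weights.
base = keras.applications.MobileNetV2(weights="imagenet",
                                      include_top=False,
                                      pooling="avg",
                                      input_shape=(224, 224, 3))
base.trainable = False                               # freeze all pre-trained layers

model = keras.Sequential([
    base,
    keras.layers.Dense(5, activation="softmax"),     # the only trainable layer
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(new_images, new_labels, epochs=5)        # train just the last layer
```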

Q45. Increasing the size of a convolutional kernel would necessarily increase the performance of a convolutional network. True or False?

A. True

B. False

Solution: (B)

Increasing kernel size would not necessarily increase performance. This depends heavily on the dataset.

End Notes

I hope you enjoyed taking the test and found the solutions helpful. The test focused on conceptual knowledge of deep learning.

We tried to clear up all your doubts through this article, but if we have missed something, let me know in the comments below. If you have any suggestions or improvements that you think we should include in the next skill test, let us know by leaving your feedback in the comments section.

