ML/DL Papers: A Collection of Commonly Cited Domestic and International References for Machine Learning and Deep Learning (recommended for bookmarking, continuously updated)

Summary: A collection of the reference (References) lists commonly cited in machine learning and deep learning papers, both domestic and international (recommended for bookmarking, continuously updated).

III. Computer Vision (CV)


1. 《ImageNet Classification with Deep Convolutional Neural Networks》


Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton (University of Toronto)


REFERENCES

[1] R.M. Bell and Y. Koren. Lessons from the netflix prize challenge. ACM SIGKDD Explorations Newsletter, 9(2):75–79, 2007.
[2] A. Berg, J. Deng, and L. Fei-Fei. Large scale visual recognition challenge 2010. www.imagenet.org/challenges. 2010.
[3] L. Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
[4] D. Cireşan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. arXiv preprint arXiv:1202.2745, 2012.
[5] D.C. Cireşan, U. Meier, J. Masci, L.M. Gambardella, and J. Schmidhuber. High-performance neural networks for visual object classification. arXiv preprint arXiv:1102.0183, 2011.
[6] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009.
[7] J. Deng, A. Berg, S. Satheesh, H. Su, A. Khosla, and L. Fei-Fei. ILSVRC-2012, 2012. URL http://www.image-net.org/challenges/LSVRC/2012/.
[8] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. Computer Vision and Image Understanding, 106(1):59–70, 2007.
[9] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. Technical Report 7694, California Institute of Technology, 2007. URL http://authors.library.caltech.edu/7694.
[10] G.E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R.R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[11] K. Jarrett, K. Kavukcuoglu, M. A. Ranzato, and Y. LeCun. What is the best multi-stage architecture for object recognition? In International Conference on Computer Vision, pages 2146–2153. IEEE, 2009.
[12] A. Krizhevsky. Learning multiple layers of features from tiny images. Master's thesis, Department of Computer Science, University of Toronto, 2009.
[13] A. Krizhevsky. Convolutional deep belief networks on CIFAR-10. Unpublished manuscript, 2010.
[14] A. Krizhevsky and G.E. Hinton. Using very deep autoencoders for content-based image retrieval. In ESANN, 2011.
[15] Y. Le Cun, B. Boser, J.S. Denker, D. Henderson, R.E. Howard, W. Hubbard, L.D. Jackel, et al. Handwritten digit recognition with a back-propagation network. In Advances in Neural Information Processing Systems, 1990.
[16] Y. LeCun, F.J. Huang, and L. Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on, volume 2, pages II–97. IEEE, 2004.
[17] Y. LeCun, K. Kavukcuoglu, and C. Farabet. Convolutional networks and applications in vision. In Circuits and Systems (ISCAS), Proceedings of 2010 IEEE International Symposium on, pages 253–256. IEEE, 2010.
[18] H. Lee, R. Grosse, R. Ranganath, and A.Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 609–616. ACM, 2009.
[19] T. Mensink, J. Verbeek, F. Perronnin, and G. Csurka. Metric Learning for Large Scale Image Classification: Generalizing to New Classes at Near-Zero Cost. In ECCV - European Conference on Computer Vision, Florence, Italy, October 2012.
[20] V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. In Proc. 27th International Conference on Machine Learning, 2010.
[21] N. Pinto, D.D. Cox, and J.J. DiCarlo. Why is real-world visual object recognition hard? PLoS Computational Biology, 4(1):e27, 2008.
[22] N. Pinto, D. Doukhan, J.J. DiCarlo, and D.D. Cox. A high-throughput screening approach to discovering good forms of biologically inspired visual representation. PLoS Computational Biology, 5(11):e1000579, 2009.
[23] B.C. Russell, A. Torralba, K.P. Murphy, and W.T. Freeman. LabelMe: a database and web-based tool for image annotation. International Journal of Computer Vision, 77(1):157–173, 2008.
[24] J. Sánchez and F. Perronnin. High-dimensional signature compression for large-scale image classification. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1665–1672. IEEE, 2011.
[25] P.Y. Simard, D. Steinkraus, and J.C. Platt. Best practices for convolutional neural networks applied to visual document analysis. In Proceedings of the Seventh International Conference on Document Analysis and Recognition, volume 2, pages 958–962, 2003.

[26] S.C. Turaga, J.F. Murray, V. Jain, F. Roth, M. Helmstaedter, K. Briggman, W. Denk, and H.S. Seung. Convolutional networks can learn to generate affinity graphs for image segmentation. Neural Computation, 2010.



2. 《Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks》


Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun


REFERENCES

[1] K. He, X. Zhang, S. Ren, and J. Sun, "Spatial pyramid pooling in deep convolutional networks for visual recognition," in European Conference on Computer Vision (ECCV), 2014.
[2] R. Girshick, "Fast R-CNN," in IEEE International Conference on Computer Vision (ICCV), 2015.
[3] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in International Conference on Learning Representations (ICLR), 2015.
[4] J. R. Uijlings, K. E. van de Sande, T. Gevers, and A. W. Smeulders, "Selective search for object recognition," International Journal of Computer Vision (IJCV), 2013.
[5] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
[6] C. L. Zitnick and P. Dollár, "Edge boxes: Locating object proposals from edges," in European Conference on Computer Vision (ECCV), 2014.
[7] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[8] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, "Object detection with discriminatively trained part-based models," IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2010.
[9] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun, "Overfeat: Integrated recognition, localization and detection using convolutional networks," in International Conference on Learning Representations (ICLR), 2014.
[10] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," in Neural Information Processing Systems (NIPS), 2015.
[11] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, "The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results," 2007.
[12] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, "Microsoft COCO: Common Objects in Context," in European Conference on Computer Vision (ECCV), 2014.
[13] S. Song and J. Xiao, "Deep sliding shapes for amodal 3d object detection in rgb-d images," arXiv:1511.02300, 2015.
[14] J. Zhu, X. Chen, and A. L. Yuille, "DeePM: A deep part-based model for object detection and semantic part localization," arXiv:1511.07131, 2015.
[15] J. Dai, K. He, and J. Sun, "Instance-aware semantic segmentation via multi-task network cascades," arXiv:1512.04412, 2015.
[16] J. Johnson, A. Karpathy, and L. Fei-Fei, "Densecap: Fully convolutional localization networks for dense captioning," arXiv:1511.07571, 2015.
[17] D. Kislyuk, Y. Liu, D. Liu, E. Tzeng, and Y. Jing, "Human curation and convnets: Powering item-to-item recommendations on pinterest," arXiv:1511.04003, 2015.
[18] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," arXiv:1512.03385, 2015.
[19] J. Hosang, R. Benenson, and B. Schiele, "How good are detection proposals, really?" in British Machine Vision Conference (BMVC), 2014.
[20] J. Hosang, R. Benenson, P. Dollár, and B. Schiele, "What makes for effective detection proposals?" IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2015.
[21] N. Chavali, H. Agrawal, A. Mahendru, and D. Batra, "Object-Proposal Evaluation Protocol is 'Gameable'," arXiv:1505.05836, 2015.
[22] J. Carreira and C. Sminchisescu, "CPMC: Automatic object segmentation using constrained parametric min-cuts," IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2012.
[23] P. Arbeláez, J. Pont-Tuset, J. T. Barron, F. Marqués, and J. Malik, "Multiscale combinatorial grouping," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
[24] B. Alexe, T. Deselaers, and V. Ferrari, "Measuring the objectness of image windows," IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2012.
[25] C. Szegedy, A. Toshev, and D. Erhan, "Deep neural networks for object detection," in Neural Information Processing Systems (NIPS), 2013.
[26] D. Erhan, C. Szegedy, A. Toshev, and D. Anguelov, "Scalable object detection using deep neural networks," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
[27] C. Szegedy, S. Reed, D. Erhan, and D. Anguelov, "Scalable, high-quality object detection," arXiv:1412.1441 (v1), 2015.
[28] P. O. Pinheiro, R. Collobert, and P. Dollár, "Learning to segment object candidates," in Neural Information Processing Systems (NIPS), 2015.
[29] J. Dai, K. He, and J. Sun, "Convolutional feature masking for joint object and stuff segmentation," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[30] S. Ren, K. He, R. Girshick, X. Zhang, and J. Sun, "Object detection networks on convolutional feature maps," arXiv:1504.06066, 2015.
[31] J. K. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio, "Attention-based models for speech recognition," in Neural Information Processing Systems (NIPS), 2015.
[32] M. D. Zeiler and R. Fergus, "Visualizing and understanding convolutional neural networks," in European Conference on Computer Vision (ECCV), 2014.
[33] V. Nair and G. E. Hinton, "Rectified linear units improve restricted boltzmann machines," in International Conference on Machine Learning (ICML), 2010.
[34] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, and A. Rabinovich, "Going deeper with convolutions," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[35] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, "Backpropagation applied to handwritten zip code recognition," Neural Computation, 1989.
[36] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, "ImageNet Large Scale Visual Recognition Challenge," International Journal of Computer Vision (IJCV), 2015.
[37] A. Krizhevsky, I. Sutskever, and G. Hinton, "Imagenet classification with deep convolutional neural networks," in Neural Information Processing Systems (NIPS), 2012.
[38] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, "Caffe: Convolutional architecture for fast feature embedding," arXiv:1408.5093, 2014.
[39] K. Lenc and A. Vedaldi, "R-CNN minus R," in British Machine Vision Conference (BMVC), 2015.



3. 《Mask R-CNN》


Kaiming He, Georgia Gkioxari, Piotr Dollár, Ross Girshick

Facebook AI Research (FAIR)


References

[1] M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele. 2D human pose estimation: New benchmark and state of the art analysis. In CVPR, 2014.
[2] P. Arbeláez, J. Pont-Tuset, J. T. Barron, F. Marqués, and J. Malik. Multiscale combinatorial grouping. In CVPR, 2014.
[3] A. Arnab and P. H. Torr. Pixelwise instance segmentation with a dynamically instantiated network. In CVPR, 2017.
[4] M. Bai and R. Urtasun. Deep watershed transform for instance segmentation. In CVPR, 2017.
[5] S. Bell, C. L. Zitnick, K. Bala, and R. Girshick. Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks. In CVPR, 2016.
[6] Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh. Realtime multi-person 2d pose estimation using part affinity fields. In CVPR, 2017.
[7] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The Cityscapes dataset for semantic urban scene understanding. In CVPR, 2016.
[8] J. Dai, K. He, Y. Li, S. Ren, and J. Sun. Instance-sensitive fully convolutional networks. In ECCV, 2016.
[9] J. Dai, K. He, and J. Sun. Convolutional feature masking for joint object and stuff segmentation. In CVPR, 2015.
[10] J. Dai, K. He, and J. Sun. Instance-aware semantic segmentation via multi-task network cascades. In CVPR, 2016.
[11] J. Dai, Y. Li, K. He, and J. Sun. R-FCN: Object detection via region-based fully convolutional networks. In NIPS, 2016.
[12] R. Girshick. Fast R-CNN. In ICCV, 2015.
[13] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
[14] R. Girshick, F. Iandola, T. Darrell, and J. Malik. Deformable part models are convolutional neural networks. In CVPR, 2015.
[15] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Simultaneous detection and segmentation. In ECCV, 2014.
[16] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. In CVPR, 2015.
[17] Z. Hayder, X. He, and M. Salzmann. Shape-aware instance segmentation. In CVPR, 2017.
[18] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, 2014.
[19] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
[20] J. Hosang, R. Benenson, P. Dollár, and B. Schiele. What makes for effective detection proposals? PAMI, 2015.
[21] J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama, et al. Speed/accuracy trade-offs for modern convolutional object detectors. In CVPR, 2017.
[22] M. Jaderberg, K. Simonyan, A. Zisserman, and K. Kavukcuoglu. Spatial transformer networks. In NIPS, 2015.
[23] A. Kirillov, E. Levinkov, B. Andres, B. Savchynskyy, and C. Rother. InstanceCut: from edges to instances with multicut. In CVPR, 2017.
[24] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[25] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1989.
[26] Y. Li, H. Qi, J. Dai, X. Ji, and Y. Wei. Fully convolutional instance-aware semantic segmentation. In CVPR, 2017.
[27] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. In CVPR, 2017.
[28] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.
[29] S. Liu, J. Jia, S. Fidler, and R. Urtasun. SGN: Sequential grouping networks for instance segmentation. In ICCV, 2017.
[30] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
[31] V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. In ICML, 2010.
[32] G. Papandreou, T. Zhu, N. Kanazawa, A. Toshev, J. Tompson, C. Bregler, and K. Murphy. Towards accurate multi-person pose estimation in the wild. In CVPR, 2017.
[33] P. O. Pinheiro, R. Collobert, and P. Dollár. Learning to segment object candidates. In NIPS, 2015.
[34] P. O. Pinheiro, T.-Y. Lin, R. Collobert, and P. Dollár. Learning to refine object segments. In ECCV, 2016.
[35] I. Radosavovic, P. Dollár, R. Girshick, G. Gkioxari, and K. He. Data distillation: Towards omni-supervised learning. arXiv:1712.04440, 2017.
[36] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015.
[37] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In TPAMI, 2017.
[38] A. Shrivastava, A. Gupta, and R. Girshick. Training region-based object detectors with online hard example mining. In CVPR, 2016.
[39] A. Shrivastava, R. Sukthankar, J. Malik, and A. Gupta. Beyond skip connections: Top-down modulation for object detection. arXiv:1612.06851, 2016.
[40] C. Sun, A. Shrivastava, S. Singh, and A. Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In ICCV, 2017.
[41] C. Szegedy, S. Ioffe, and V. Vanhoucke. Inception-v4, inception-resnet and the impact of residual connections on learning. In ICLR Workshop, 2016.
[42] J. R. Uijlings, K. E. van de Sande, T. Gevers, and A. W. Smeulders. Selective search for object recognition. IJCV, 2013.
[43] X. Wang, R. Girshick, A. Gupta, and K. He. Non-local neural networks. arXiv:1711.07971, 2017.
[44] S.-E. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh. Convolutional pose machines. In CVPR, 2016.
[45] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. In CVPR, 2017.


 

Several ways to export correctly formatted citations for a paper (a sample BibTeX entry is shown after the list below)

1. Using Baidu Scholar (百度学术)


2. Using Google / Wikipedia

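Whichever tool you use, a correctly exported entry should at least carry the authors, title, venue, and year. As a rough sketch of what such an export looks like in BibTeX, here is a hand-written entry for the AlexNet paper listed above (reference [37] of the Faster R-CNN list); the citation key krizhevsky2012imagenet and the field layout are illustrative choices, not the literal output of Baidu Scholar or Google:

@inproceedings{krizhevsky2012imagenet,
  author    = {Alex Krizhevsky and Ilya Sutskever and Geoffrey E. Hinton},
  title     = {ImageNet Classification with Deep Convolutional Neural Networks},
  booktitle = {Advances in Neural Information Processing Systems (NIPS)},
  year      = {2012}
}

In a LaTeX document this entry would then be cited with \cite{krizhevsky2012imagenet} and rendered by BibTeX in whatever bibliography style the venue requires.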
