DL: Deep Learning Algorithms (A Collection of Neural Network Models): Notes on "THE NEURAL NETWORK ZOO", with Explanations and Reflections (Part 9)

Summary: An overview of deep learning algorithms (a collection of neural network models) from "THE NEURAL NETWORK ZOO", with explanations and reflections.

DRN


[Figure: DRN diagram]


      Deep residual networks (DRN) are very deep FFNNs with extra connections that pass the input from one layer not only to the next layer but also to a later layer (often 2 to 5 layers downstream). Instead of trying to find a solution for mapping some input to some output across, say, 5 layers, the network is forced to learn to map some input to some output plus that same input. Basically, it adds an identity to the solution, carrying the older input over and serving it fresh to a later layer. It has been shown that these networks are very effective at learning patterns up to 150 layers deep, far more than the regular 2 to 5 layers one could otherwise expect to train. However, it has been proven that these networks are in essence just RNNs without the explicit time-based construction, and they are often compared to LSTMs without gates.
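
      To make the skip connection concrete, here is a minimal PyTorch-style sketch (not code from the paper) of a residual block: the block computes some function F(x) over a couple of layers and outputs F(x) + x, so only the residual has to be learned. The layer sizes and the use of fully connected layers are illustrative assumptions.

```python
# A minimal residual block sketch: the block learns F(x) and outputs F(x) + x,
# i.e. the identity (the older input) is carried over to the later layer.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # two fully connected layers stand in for the 2-to-5 layers being skipped
        self.body = nn.Sequential(
            nn.Linear(dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # output = F(x) + x: the layers only have to learn the residual mapping
        return self.body(x) + x

x = torch.randn(8, 64)
deep_net = nn.Sequential(*[ResidualBlock(64) for _ in range(10)])  # stacks cheaply to great depth
print(deep_net(x).shape)  # torch.Size([8, 64])
```

      Stacking many such blocks is what lets gradients reach the early layers, which is why depths of 100+ layers remain trainable.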


He, Kaiming, et al. “Deep residual learning for image recognition.” arXiv preprint arXiv:1512.03385 (2015).

Original Paper PDF



DNC


[Figure: DNC diagram]


       Differentiable Neural Computers (DNC) are enhanced Neural Turing Machines with scalable memory, inspired by how memories are stored in the human hippocampus. The idea is to take the classical Von Neumann computer architecture and replace the CPU with an RNN, which learns when and what to read from the RAM. Besides having a large bank of numbers as memory (which may be resized without retraining the RNN), the DNC has three attention mechanisms. These mechanisms allow the RNN to query the similarity of a piece of input to the memory's entries, the temporal relationship between any two entries in memory, and whether a memory entry was recently updated, which makes it less likely to be overwritten when there is no empty memory available.
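
       As a rough illustration of the first of those attention mechanisms, the NumPy sketch below implements content-based addressing over a memory bank: the controller emits a key, each memory slot is scored by cosine similarity, and a softmax turns the scores into differentiable read weights. Function and variable names (content_addressing, beta, etc.) are made up for illustration and are not taken from the paper.

```python
# A hedged sketch of DNC-style content-based addressing (one of three mechanisms).
import numpy as np

def content_addressing(memory, key, beta):
    """memory: (N, W) bank of N slots; key: (W,) query from the controller;
    beta: scalar sharpness. Returns read weights over the N slots."""
    # cosine similarity between the key and every memory row
    sim = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    logits = beta * sim
    weights = np.exp(logits - logits.max())       # softmax over slots
    return weights / weights.sum()

memory = np.random.randn(16, 8)    # 16 slots of width 8 (resizable without retraining the RNN)
key = np.random.randn(8)
w = content_addressing(memory, key, beta=5.0)
read_vector = w @ memory           # differentiable "read" from the RAM
print(w.shape, read_vector.shape)  # (16,) (8,)
```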


Graves, Alex, et al. “Hybrid computing using a neural network with dynamic external memory.” Nature 538 (2016): 471-476.

Original Paper PDF




NTM

[Figure: NTM diagram]



      Neural Turing machines (NTM) can be understood as an abstraction of LSTMs and an attempt to un-black-box neural networks (and give us some insight into what is going on in there). Instead of coding a memory cell directly into a neuron, the memory is separated out. It is an attempt to combine the efficiency and permanency of regular digital storage with the efficiency and expressive power of neural networks. The idea is to have a content-addressable memory bank and a neural network that can read from and write to it. The "Turing" in Neural Turing Machines comes from them being Turing complete: the ability to read, write and change state based on what is read means the machine can represent anything a Universal Turing Machine can represent.
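
      The sketch below (illustrative, not the paper's implementation) shows the soft, fully differentiable read and write that such a memory bank allows: the controller addresses every slot with a weight, reads a weighted sum, and writes by partially erasing and then adding to each slot in proportion to its weight.

```python
# Illustrative NTM-style "blurry" read/write over an external memory matrix.
import numpy as np

def ntm_read(memory, w):
    # weighted sum over memory rows: r = sum_i w_i * M_i
    return w @ memory

def ntm_write(memory, w, erase, add):
    # each slot is partially erased, then added to, in proportion to its weight w_i
    memory = memory * (1 - np.outer(w, erase))
    return memory + np.outer(w, add)

M = np.zeros((8, 4))                          # 8 slots of width 4
w = np.array([0.7, 0.3, 0, 0, 0, 0, 0, 0])    # soft address produced by the controller
M = ntm_write(M, w, erase=np.ones(4), add=np.array([1.0, 2.0, 3.0, 4.0]))
print(ntm_read(M, w))                         # read vector blends the addressed slots
```

      Because every step is a smooth weighted operation, the whole read/write process stays differentiable and can be trained end to end with the controller network.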


Graves, Alex, Greg Wayne, and Ivo Danihelka. “Neural turing machines.” arXiv preprint arXiv:1410.5401 (2014).

Original Paper PDF



CN

[Figure: Capsule Network (CN) diagram]



      Capsule Networks (CapsNet) are biology-inspired alternatives to pooling, where neurons are connected with multiple weights (a vector) instead of just one weight (a scalar). This allows neurons to transfer more information than simply which feature was detected, such as where a feature is in the picture or what colour and orientation it has. The learning process involves a local form of Hebbian learning that values correct predictions of output in the next layer.
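
       One concrete piece of this vector-in, vector-out idea is the "squash" non-linearity from the dynamic-routing paper, sketched below in PyTorch: it rescales a capsule's output vector so that its length lies in [0, 1) and can act as a detection probability, while its orientation (which carries pose-like properties) is preserved. The tensor shapes here are illustrative.

```python
# Squash non-linearity applied to capsule output vectors.
import torch

def squash(s: torch.Tensor, dim: int = -1, eps: float = 1e-8) -> torch.Tensor:
    # v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||): length in [0, 1), direction kept
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

s = torch.randn(32, 10, 16)    # batch of 32, 10 capsules, 16-dimensional pose vectors
v = squash(s)
print(v.norm(dim=-1).max())    # all lengths are squashed below 1
```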


Sabour, Sara, Nicholas Frosst, and Geoffrey E. Hinton. “Dynamic Routing Between Capsules.” In Advances in neural information processing systems (2017): 3856-3866.

Original Paper PDF



KN

[Figure: Kohonen network (KN) diagram]



      Kohonen networks (KN, also known as self-organising (feature) maps, SOM, SOFM) utilise competitive learning to classify data without supervision. Input is presented to the network, after which the network assesses which of its neurons most closely match that input. These neurons are then adjusted to match the input even better, dragging along their neighbours in the process. How much the neighbours are moved depends on their distance to the best matching unit.
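
      A compact NumPy sketch of one training step follows, assuming a 2-D grid of neurons, a Gaussian neighbourhood and a fixed learning rate (all illustrative choices): the best matching unit is found, and every neuron is pulled towards the input with a strength that decays with its distance on the map from that unit.

```python
# One self-organising-map update: find the BMU, drag it and its neighbours
# towards the input, with neighbours moving less the farther they sit on the map.
import numpy as np

def som_step(weights, grid, x, lr=0.5, sigma=1.0):
    """weights: (K, D) codebook vectors; grid: (K, 2) neuron positions on the map."""
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best matching unit
    dist = np.linalg.norm(grid - grid[bmu], axis=1)        # distance on the map, not in data space
    h = np.exp(-dist ** 2 / (2 * sigma ** 2))              # Gaussian neighbourhood function
    return weights + lr * h[:, None] * (x - weights)       # neighbours are dragged along

rng = np.random.default_rng(0)
grid = np.array([[i, j] for i in range(5) for j in range(5)])  # 5x5 map of neurons
weights = rng.random((25, 3))
for x in rng.random((1000, 3)):                                # unlabelled data, no supervision
    weights = som_step(weights, grid, x)
```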


Kohonen, Teuvo. “Self-organized formation of topologically correct feature maps.” Biological cybernetics 43.1 (1982): 59-69.

Original Paper PDF




AN

[Figure: Attention network (AN) diagram]



      Attention networks (AN) can be considered a class of networks that includes the Transformer architecture. They use an attention mechanism to combat information decay by separately storing previous network states and switching attention between those states. The hidden states of each iteration in the encoding layers are stored in memory cells. The decoding layers are connected to the encoding layers, but they also receive data from the memory cells, filtered by an attention context. This filtering step adds context for the decoding layers, stressing the importance of particular features. The attention network producing this context is trained using the error signal from the output of the decoding layer. Moreover, the attention context can be visualised, giving valuable insight into which input features correspond to which output features.
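
      A minimal sketch of that filtering step, under the simplifying assumption of plain dot-product attention: the decoder state scores every stored encoder state, a softmax turns the scores into attention weights, and the weighted sum is the context vector handed to the decoding layer. The function name and dimensions are illustrative.

```python
# Dot-product attention over stored encoder states producing a context vector.
import torch
import torch.nn.functional as F

def attention_context(decoder_state, encoder_states):
    """decoder_state: (D,); encoder_states: (T, D) hidden states kept in memory cells."""
    scores = encoder_states @ decoder_state   # similarity of each stored state to the query
    weights = F.softmax(scores, dim=0)        # attention distribution over time steps
    context = weights @ encoder_states        # weighted sum = attention context
    return context, weights

enc = torch.randn(12, 32)      # 12 encoder steps, 32-dimensional hidden states
dec = torch.randn(32)
ctx, w = attention_context(dec, enc)
print(ctx.shape, w.sum())      # torch.Size([32]) tensor(1.)
```

      The weight vector w is exactly what gets visualised in attention heat maps, showing which input positions the decoder attended to for a given output.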


Jaderberg, Max, et al. “Spatial Transformer Networks.” In Advances in neural information processing systems (2015): 2017-2025.

Original Paper PDF



Follow us on twitter for future updates and posts. We welcome comments and feedback, and thank you for reading!


[Update 22 April 2019] Included Capsule Networks, Differentiable Neural Computers and Attention Networks in the Neural Network Zoo; Support Vector Machines were removed; updated links to the original articles. The previous version of this post can be found here.


