DL: Overview of Deep Learning Algorithms (A Collection of Neural Network Models) — Explanation of and Reflections on THE NEURAL NETWORK ZOO (Part 5)

Summary: DL: Overview of Deep Learning Algorithms (A Collection of Neural Network Models) — Explanation of and Reflections on THE NEURAL NETWORK ZOO

MC

[Figure: Markov chain (MC) topology diagram]



      Markov chains (MC, or discrete-time Markov chains, DTMC) are in a sense the predecessors to BMs and HNs. They can be understood as follows: from the node where I am now, what are the odds of my moving to any of my neighbouring nodes? They are memoryless (the Markov property), meaning the next state depends only on the current state, not on the path taken to reach it. While not really neural networks, they do resemble them and form the theoretical basis for BMs and HNs. Like BMs, RBMs and HNs, MCs aren't always considered neural networks, and Markov chains aren't always fully connected either.
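      To make the transition idea concrete, here is a minimal sketch of a discrete-time Markov chain in Python (NumPy). The three weather states and the transition probabilities are invented purely for illustration; the only point is that the next state is sampled from the row of probabilities indexed by the current state.

```python
import numpy as np

# Hypothetical 3-state chain; the states and probabilities are invented for illustration.
states = ["sunny", "cloudy", "rainy"]

# P[i, j] = probability of moving from state i to state j; each row sums to 1.
P = np.array([
    [0.7, 0.2, 0.1],   # from sunny
    [0.3, 0.4, 0.3],   # from cloudy
    [0.2, 0.4, 0.4],   # from rainy
])

rng = np.random.default_rng(0)

def sample_chain(start, steps):
    """Walk the chain; the next state depends only on the current one (memorylessness)."""
    trajectory = [start]
    for _ in range(steps):
        trajectory.append(rng.choice(len(states), p=P[trajectory[-1]]))
    return [states[i] for i in trajectory]

print(sample_chain(start=0, steps=10))
```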


Hayes, Brian. “First links in the Markov chain.” American Scientist 101.2 (2013): 252.

Original Paper PDF



HN

[Figure: Hopfield network (HN) topology diagram]



      A Hopfield network (HN) is a network in which every neuron is connected to every other neuron; it is a completely entangled plate of spaghetti, since every node also plays every role. Each node acts as an input before training, is hidden during training, and acts as an output afterwards. The network is trained by setting the values of the neurons to the desired pattern, after which the weights can be computed. The weights do not change after this.

      Once trained for one or more patterns, the network will always converge to one of the learned patterns, because the network is only stable in those states. Note that it does not always conform to the desired state (it's not a magic black box, sadly). It stabilises in part because the total "energy" or "temperature" of the network is reduced incrementally during training. Each neuron has an activation threshold that scales with this temperature; if the summed input exceeds it, the neuron takes one of two states (usually -1 or 1, sometimes 0 or 1).

      Updating the network can be done synchronously or, more commonly, one neuron at a time. When updating one by one, a fair random sequence is created to decide which cells update in what order (fair random meaning that each of the n options occurs exactly once every n items). This way you can tell when the network is stable (done converging): once every cell has been updated and none of them changed, the network is stable (annealed). These networks are often called associative memory because they converge to the stored state most similar to the input; just as humans who see half a table can imagine the other half, this network will converge to a table when presented with half a table and half noise.
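      As a rough illustration of the procedure above (set the neurons to the desired pattern, compute the weights, then update asynchronously until nothing changes), here is a small NumPy sketch. It uses the classic Hebbian outer-product rule for the weights and a fair random update order; the 25-unit pattern and the amount of noise are arbitrary choices for illustration, not anything taken from the original article.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(patterns):
    """Hebbian rule: sum of outer products of the stored +/-1 patterns, zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, max_sweeps=10):
    """Asynchronous updates in a fair random order (each neuron exactly once per sweep)."""
    state = state.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(state)):
            new = 1 if W[i] @ state >= 0 else -1
            changed |= (new != state[i])
            state[i] = new
        if not changed:            # a full sweep with no change: the network has settled
            break
    return state

pattern = np.where(rng.random(25) > 0.5, 1, -1)        # one stored 25-unit pattern
W = train(pattern[None, :])
noisy = pattern.copy()
noisy[:10] = np.where(rng.random(10) > 0.5, 1, -1)     # overwrite 10 units with random values
print("recovered stored pattern:", np.array_equal(recall(W, noisy), pattern))
```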


Hopfield, John J. “Neural networks and physical systems with emergent collective computational abilities.” Proceedings of the National Academy of Sciences 79.8 (1982): 2554-2558.

Original Paper PDF



BM

[Figure: Boltzmann machine (BM) topology diagram]



      Boltzmann machines (BM) are a lot like HNs, except that some neurons are marked as input neurons while the others remain "hidden". The input neurons become output neurons at the end of a full network update. A BM starts with random weights and learns through back-propagation or, more recently, through contrastive divergence (where a Markov chain is used to determine the gradient between two informational gains).

      Compared with an HN, the neurons mostly have binary activation patterns. As hinted by their being trained with MCs, BMs are stochastic networks. The training and running process of a BM is fairly similar to that of an HN: one sets the input neurons to certain clamped values, after which the network is set free (it doesn't get a sock). While free, the cells can take any value, and we repeatedly go back and forth between the input and hidden neurons.

      The activation is controlled by a global temperature value; lowering it lowers the energy of the cells, and this lower energy causes their activation patterns to stabilise. Given the right temperature, the network reaches an equilibrium.
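      The stochastic, temperature-controlled activation can be sketched in a few lines: each unit switches on with probability sigmoid(net input / T), so lowering T makes the behaviour more deterministic and lets the states settle. The weights below are random placeholders rather than a trained model, and the annealing schedule is an arbitrary choice made only for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

n_units = 6
W = rng.normal(0.0, 0.5, size=(n_units, n_units))
W = (W + W.T) / 2                  # symmetric connections, as in an HN
np.fill_diagonal(W, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def anneal(W, steps=200, t_start=5.0, t_end=0.1):
    """Update one unit at a time while the global temperature is lowered."""
    s = rng.integers(0, 2, size=n_units)              # binary states in {0, 1}
    for step in range(steps):
        T = t_start + (t_end - t_start) * step / (steps - 1)
        i = rng.integers(n_units)                     # pick a unit to update
        p_on = sigmoid(W[i] @ s / T)                  # stochastic activation
        s[i] = 1 if rng.random() < p_on else 0
    return s

print("states after annealing:", anneal(W))
```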


Hinton, Geoffrey E., and Terrence J. Sejnowski. “Learning and relearning in Boltzmann machines.” Parallel Distributed Processing: Explorations in the Microstructure of Cognition 1 (1986): 282-317.

Original Paper PDF



RBM


[Figure: restricted Boltzmann machine (RBM) topology diagram]


     Restricted Boltzmann machines (RBM) are remarkably similar to BMs (surprise) and therefore also similar to HNs. The biggest difference between BMs and RBMs is that RBMs are more usable precisely because they are more restricted: rather than trigger-happily connecting every neuron to every other neuron, they only connect each group of neurons to the other group, so no input neuron is directly connected to another input neuron and no hidden-to-hidden connections are made either.

     RBMs can be trained like FFNNs, with a twist: instead of passing data forward and then back-propagating, you forward pass the data and then backward pass the data (back to the first layer). After that you train with forward-and-back-propagation.
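     The forward-then-backward pass described above is essentially one step of contrastive divergence (CD-1). Below is a minimal NumPy sketch of such a step; the layer sizes, learning rate and input vector are placeholders, and the bias terms are omitted to keep it short.

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden, lr = 6, 3, 0.1
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W):
    h0_prob = sigmoid(v0 @ W)                              # forward pass: visible -> hidden
    h0 = (rng.random(n_hidden) < h0_prob).astype(float)    # sample binary hidden states
    v1_prob = sigmoid(h0 @ W.T)                            # backward pass: reconstruct visible
    h1_prob = sigmoid(v1_prob @ W)                          # forward pass on the reconstruction
    # positive correlations minus negative correlations drive the weight update
    return W + lr * (np.outer(v0, h0_prob) - np.outer(v1_prob, h1_prob))

v0 = rng.integers(0, 2, size=n_visible).astype(float)      # one binary "training" vector
W = cd1_step(v0, W)
print("updated weight matrix shape:", W.shape)
```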


Smolensky, Paul. Information Processing in Dynamical Systems: Foundations of Harmony Theory. No. CU-CS-321-86. University of Colorado at Boulder, Dept. of Computer Science, 1986.

Original Paper PDF



