Original paper:
https://arxiv.org/pdf/2007.06559.pdf
https://arxiv.org/abs/2007.06559
Comments:
ICML 2020
[Submitted on 13 Jul 2020]
Subjects: Machine Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV); Social and Information Networks (cs.SI); Machine Learning (stat.ML)
Cite as: arXiv:2007.06559 [cs.LG]
(or arXiv:2007.06559v1 [cs.LG] for this version)
Abstract
Neural networks are often represented as graphs of connections between neurons. However, despite their wide use, there is currently little understanding of the relationship between the graph structure of the neural network and its predictive performance. Here we systematically investigate how the graph structure of neural networks affects their predictive performance. To this end, we develop a novel graph-based representation of neural networks called relational graph, where layers of neural network computation correspond to rounds of message exchange along the graph structure. Using this representation we show that: (1) a "sweet spot" of relational graphs leads to neural networks with significantly improved predictive performance; (2) a neural network's performance is approximately a smooth function of the clustering coefficient and average path length of its relational graph; (3) our findings are consistent across many different tasks and datasets; (4) the sweet spot can be identified efficiently; (5) top-performing neural networks have graph structures surprisingly similar to those of real biological neural networks. Our work opens new directions for the design of neural architectures and the understanding of neural networks in general.
1. Introduction
Deep neural networks consist of neurons organized into layers and connections between them. The architecture of a neural network can be captured by its “computational graph”, where neurons are represented as nodes and directed edges link neurons in different layers. Such a graphical representation demonstrates how the network passes and transforms information from its input neurons, through hidden layers, all the way to the output neurons (McClelland et al., 1986). While it has been widely observed that the performance of neural networks depends on their architecture (LeCun et al., 1998; Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; Szegedy et al., 2015; He et al., 2016), there is currently little systematic understanding of the relation between a neural network’s accuracy and its underlying graph structure. This is especially important for neural architecture search, which today exhaustively searches over all possible connectivity patterns (Ying et al., 2019). From this perspective, several open questions arise:
Is there a systematic link between the network structure and its predictive performance?
What are structural signatures of well-performing neural networks?
How do such structural signatures generalize across tasks and datasets?
Is there an efficient way to check whether a given neural network is promising or not?
Establishing such a relation is both scientifically and practically important because it would have direct consequences on designing more efficient and more accurate architectures. It would also inform the design of new hardware architectures that execute neural networks. Understanding the graph structures that underlie neural networks would also advance the science of deep learning.
However, establishing the relation between network architecture and its accuracy is nontrivial, because it is unclear how to map a neural network to a graph (and vice versa). The natural choice would be to use the computational graph representation, but it has many limitations: (1) Lack of generality: computational graphs are constrained by the allowed graph properties, e.g., these graphs have to be directed and acyclic (DAGs), bipartite at the layer level, and single-in-single-out at the network level (Xie et al., 2019). This limits the use of the rich tools developed for general graphs. (2) Disconnection with biology/neuroscience: biological neural networks have a much richer and less templatized structure (Fornito et al., 2013). There are information exchanges, rather than just single-directional flows, in brain networks (Stringer et al., 2018). Such biological or neurological models cannot be simply represented by directed acyclic graphs.
Here we systematically study the relationship between the graph structure of a neural network and its predictive performance. We develop a new way of representing a neural network as a graph, which we call relational graph. Our key insight is to focus on message exchange, rather than just on directed data flow. As a simple example, for a fixed-width fully-connected layer, we can represent one input channel and one output channel together as a single node, and an edge in the relational graph represents the message exchange between the two nodes (Figure 1(a)). Under this formulation, using an appropriate message exchange definition, we show that the relational graph can represent many types of neural network layers (a fully-connected layer, a convolutional layer, etc.), while getting rid of many constraints of computational graphs (such as directed, acyclic, bipartite, single-in-single-out). One neural network layer corresponds to one round of message exchange over a relational graph, and to obtain deep networks, we perform message exchange over the same graph for several rounds. Our new representation enables us to build neural networks that are richer and more diverse and analyze them using well-established tools of network science (Barabási & Pósfai, 2016).
We then design a graph generator named WS-flex that allows us to systematically explore the design space of neural networks (i.e., relational graphs). Based on insights from neuroscience, we characterize neural networks by the clustering coefficient and average path length of their relational graphs (Figure 1(c)). Furthermore, our framework is flexible and general, as we can translate relational graphs into diverse neural architectures, including Multilayer Perceptrons (MLPs), Convolutional Neural Networks (CNNs), ResNets, etc., with controlled computational budgets (Figure 1(d)).
Using standard image classification datasets CIFAR-10 and ImageNet, we conduct a systematic study on how the architecture of neural networks affects their predictive performance. We make several important empirical observations:
A “sweet spot” of relational graphs leads to neural networks with significantly improved performance under controlled computational budgets (Section 5.1).
A neural network’s performance is approximately a smooth function of the clustering coefficient and average path length of its relational graph (Section 5.2).
Our findings are consistent across many architectures (MLPs, CNNs, ResNets, EfficientNet) and tasks (CIFAR-10, ImageNet) (Section 5.3).
The sweet spot can be identified efficiently. Identifying a sweet spot only requires a few samples of relational graphs and a few epochs of training (Section 5.4).
Well-performing neural networks have graph structure surprisingly similar to those of real biological neural networks (Section 5.5).
Our results have implications for designing neural network architectures, advancing the science of deep learning and improving our understanding of neural networks in general.
Figure 1: Overview of our approach. (a) A layer of a neural network can be viewed as a relational graph where we connect nodes that exchange messages. (b) More examples of neural network layers and relational graphs. (c) We explore the design space of relational graphs according to their graph measures, including average path length and clustering coefficient, where the complete graph corresponds to a fully-connected layer. (d) We translate these relational graphs to neural networks and study how their predictive performance depends on the graph measures of their corresponding relational graphs.
2. Neural Networks as Relational Graphs
To explore the graph structure of neural networks, we first introduce the concept of our relational graph representation and its instantiations. We demonstrate how our representation can capture diverse neural network architectures under a unified framework. Using the language of graphs in the context of deep learning helps bring the two worlds together and establishes a foundation for our study.
2.1. Message Exchange over Graphs
We start by revisiting the definition of a neural network from the graph perspective. We define a graph G = (V, E) by its node set V = {v_1, ..., v_n} and edge set E ⊆ {(v_i, v_j) | v_i, v_j ∈ V}. We assume each node v has a node feature scalar/vector x_v.
Table 1: Diverse neural architectures expressed in the language of relational graphs. These architectures are usually implemented as complete relational graphs, while we systematically explore more graph structures for these architectures.
Figure 2: Example of translating a 4-node relational graph to a 4-layer 65-dim MLP. We highlight the message exchange for node x_1. Using different definitions of x_i, f_i(·), AGG(·) and R (those defined in Table 1), relational graphs can be translated to diverse neural architectures.
We call graph G a relational graph when it is associated with message exchanges between neurons. Specifically, a message exchange is defined by a message function, whose input is a node’s feature and output is a message, and an aggregation function, whose input is a set of messages and output is the updated node feature. At each round of message exchange, each node sends messages to its neighbors, and aggregates incoming messages from its neighbors. Each message is transformed at each edge through a message function f(·), then they are aggregated at each node via an aggregation function AGG(·). Suppose we conduct R rounds of message exchange, then the r-th round of message exchange for a node v can be described as

x_v^(r+1) = AGG^(r)({f_v^(r)(x_u^(r)), ∀u ∈ N(v)}),    (1)

where N(v) = {u | (u, v) ∈ E} is the neighborhood of node v.
Equation 1 provides a general definition for message exchange. In the remainder of this section, we discuss how this general message exchange definition can be instantiated as different neural architectures. We summarize the different instantiations in Table 1, and provide a concrete example of instantiating a 4-layer 65-dim MLP in Figure 2.
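The general message exchange in Equation 1 can be sketched in a few lines of Python. This is an illustrative toy (scalar node features, a per-edge linear message function f(x_u) = w_uv · x_u, and sum aggregation; all names are our own), not the paper's implementation:

```python
def message_exchange_round(features, neighbors, weights):
    """One round of Equation 1: each node v aggregates (here: sums) the
    messages f(x_u) = w_uv * x_u from every neighbor u in N(v)."""
    return {v: sum(weights[(u, v)] * features[u] for u in nbrs)
            for v, nbrs in neighbors.items()}

# 3-node path graph 0-1-2 with self-loops and unit edge weights (illustrative).
neighbors = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}
weights = {(u, v): 1.0 for v, nbrs in neighbors.items() for u in nbrs}
x = {0: 1.0, 1: 2.0, 2: 3.0}
x = message_exchange_round(x, neighbors, weights)  # x^(r) -> x^(r+1)
```

Stacking R such rounds over the same graph corresponds to an R-layer network in this view.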
2.2. Fixed-width MLPs as Relational Graphs
A Multilayer Perceptron (MLP) consists of layers of computation units (neurons), where each neuron performs a weighted sum over scalar inputs and outputs, followed by some non-linearity. Suppose the r-th layer of an MLP takes x^(r) as input and x^(r+1) as output, then a neuron computes:

x_i^(r+1) = σ(Σ_j w_ij^(r) x_j^(r)),    (2)

where w_ij^(r) is the trainable weight, x_j^(r) is the j-th dimension of the input x^(r), x_i^(r+1) is the i-th dimension of the output x^(r+1), and σ is the non-linearity.
The above discussion reveals that a fixed-width MLP can be viewed as a complete relational graph with a special message exchange function. Therefore, a fixed-width MLP is a special case under a much more general model family, where the message function, aggregation function, and most importantly, the relational graph structure can vary.
This insight allows us to generalize fixed-width MLPs from using a complete relational graph to any general relational graph G. Based on the general definition of message exchange in Equation 1, we have:

x_i^(r+1) = σ(Σ_{j ∈ N(i)} w_ij^(r) x_j^(r)),    (3)

where N(i) is the neighborhood of node i in the relational graph.
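As a hedged sketch of Equation 3, a general relational graph can be realized by masking the weight matrix of a dense layer with the graph's adjacency pattern (self-loops included so each node keeps its own feature); a complete graph recovers the ordinary fully-connected layer. The NumPy code below is illustrative, not the authors' code:

```python
import numpy as np

def masked_mlp_layer(x, W, mask):
    """One round of Equation 3: zero out weights on absent edges, then ReLU."""
    return np.maximum((W * mask) @ x, 0.0)

n = 4
rng = np.random.default_rng(0)
W = rng.standard_normal((n, n))          # trainable weights w_ij
x = rng.standard_normal(n)               # node features x^(r)

complete = np.ones((n, n))               # complete graph = ordinary dense layer
ring = np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
ring[0, -1] = ring[-1, 0] = 1.0          # 4-node ring with self-loops

y_dense = masked_mlp_layer(x, W, complete)
y_ring = masked_mlp_layer(x, W, ring)
```

Changing only the mask changes the relational graph while keeping the layer's parametrization fixed.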
2.3. General Neural Networks as Relational Graphs
The graph viewpoint in Equation 3 lays the foundation of representing fixed-width MLPs as relational graphs. In this section, we discuss how we can further generalize relational graphs to general neural networks.
Variable-width MLPs as relational graphs. An important design consideration for general neural networks is that layer width often varies throughout the network. For example, in CNNs, a common practice is to double the layer width (number of feature channels) after spatial down-sampling.
Note that under this definition, the maximum number of nodes of a relational graph is bounded by the width of the narrowest layer in the corresponding neural network (since the feature dimension for each node must be at least 1).
Modern neural architectures as relational graphs. Finally, we generalize relational graphs to represent modern neural architectures with more sophisticated designs. For example, to represent a ResNet (He et al., 2016), we keep the residual connections between layers unchanged. To represent neural networks with bottleneck transform (He et al., 2016), a relational graph alternately applies message exchange with 3×3 and 1×1 convolution; similarly, in the efficient computing setup, the widely used separable convolution (Howard et al., 2017; Chollet, 2017) can be viewed as alternately applying message exchange with 3×3 depth-wise convolution and 1×1 convolution.
Overall, relational graphs provide a general representation for neural networks. With proper definitions of node features and message exchange, relational graphs can represent diverse neural architectures, as is summarized in Table 1.
3. Exploring Relational Graphs
In this section, we describe in detail how we design and explore the space of relational graphs defined in Section 2, in order to study the relationship between the graph structure of neural networks and their predictive performance. Three main components are needed to make progress: (1) graph measures that characterize graph structural properties, (2) graph generators that can generate diverse graphs, and (3) a way to control the computational budget, so that the differences in performance of different neural networks are due to their diverse relational graph structures.
3.1. Selection of Graph Measures
Given the complex nature of graph structure, graph measures are often used to characterize graphs. In this paper, we focus on one global graph measure, average path length, and one local graph measure, clustering coefficient. Notably, these two measures are widely used in network science (Watts & Strogatz, 1998) and neuroscience (Sporns, 2003; Bassett & Bullmore, 2006). Specifically, average path length measures the average shortest path distance between any pair of nodes; clustering coefficient measures the proportion of edges between the nodes within a given node’s neighborhood, divided by the number of edges that could possibly exist between them, averaged over all the nodes. There are other graph measures that can be used for analysis, which are included in the Appendix.
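The two measures can be computed with standard network-science tooling; for instance, with networkx (our tooling choice, not necessarily the paper's), the 64-node complete graph used as the baseline sits at clustering coefficient 1 and average path length 1, while a regular ring lattice sits toward the opposite corner of the space:

```python
import networkx as nx

G_complete = nx.complete_graph(64)          # corresponds to a fully-connected layer
G_ring = nx.watts_strogatz_graph(64, 4, 0)  # regular ring lattice (no rewiring)

L_complete = nx.average_shortest_path_length(G_complete)  # average path length
C_complete = nx.average_clustering(G_complete)            # clustering coefficient
L_ring = nx.average_shortest_path_length(G_ring)
C_ring = nx.average_clustering(G_ring)
```

For the complete graph both measures equal 1; the ring lattice has much longer average paths and intermediate clustering.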
3.2. Design of Graph Generators
Given selected graph measures, we aim to generate diverse graphs that can cover a large span of graph measures, using a graph generator. However, such a goal requires careful generator designs: classic graph generators can only generate a limited class of graphs, while recent learning-based graph generators are designed to imitate given exemplar graphs (Kipf & Welling, 2017; Li et al., 2018b; You et al., 2018a,b; 2019a).
Limitations of existing graph generators. To illustrate the limitation of existing graph generators, we investigate the following classic graph generators: (1) the Erdős–Rényi (ER) model, which can sample graphs with a given node and edge number uniformly at random (Erdős & Rényi, 1960); (2) the Watts–Strogatz (WS) model, which can generate graphs with small-world properties (Watts & Strogatz, 1998); (3) the Barabási–Albert (BA) model, which can generate scale-free graphs (Albert & Barabási, 2002); (4) the Harary model, which can generate graphs with maximum connectivity (Harary, 1962); (5) regular ring lattice graphs (ring graphs); (6) complete graphs. For all types of graph generators, we control the number of nodes to be 64, enumerate all possible discrete parameters, and grid search over all continuous parameters of the graph generator. We generate 30 random graphs with different random seeds under each parameter setting. In total, we generate 486,000 WS graphs, 53,000 ER graphs, 8,000 BA graphs, 1,800 Harary graphs, 54 ring graphs and 1 complete graph (more details provided in the Appendix). In Figure 3, we can observe that graphs generated by those classic graph generators have a limited span in the space of average path length and clustering coefficient.
WS-flex graph generator. Here we propose the WS-flex graph generator, which can generate graphs with a wide coverage of graph measures; notably, WS-flex graphs almost encompass all the graphs generated by the classic random generators mentioned above, as is shown in Figure 3. The WS-flex generator generalizes the WS model by relaxing the constraint that all the nodes have the same degree before random rewiring. Specifically, the WS-flex generator is parametrized by node count n, average degree k and rewiring probability p. The number of edges is determined as e = ⌊n·k/2⌋. The generator first creates a ring graph where each node connects to its ⌊e/n⌋ neighboring nodes; then it randomly picks e mod n nodes and connects each node to one closest neighboring node; finally, all the edges are randomly rewired with probability p. We use the WS-flex generator to smoothly sample within the space of clustering coefficient and average path length, then sub-sample 3942 graphs for our experiments, as is shown in Figure 1(c).
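A sketch of the WS-flex procedure described above, assuming networkx; the exact tie-breaking and rewiring details are our reading of the description rather than the authors' code:

```python
import random
import networkx as nx

def ws_flex(n, k, p, seed=0):
    """WS-flex sketch: ring lattice with possibly non-uniform degree,
    followed by random rewiring of each edge with probability p."""
    rng = random.Random(seed)
    e = n * k // 2                        # e = floor(n * k / 2) edges in total
    G = nx.Graph()
    G.add_nodes_from(range(n))
    # Ring lattice: connect each node to its floor(e / n) nearest neighbors.
    for d in range(1, e // n + 1):
        for i in range(n):
            G.add_edge(i, (i + d) % n)
    # Distribute the remaining e mod n edges among randomly picked nodes,
    # each connecting to its closest not-yet-connected neighbor.
    d = e // n + 1
    for i in rng.sample(range(n), e % n):
        G.add_edge(i, (i + d) % n)
    # Randomly rewire each edge with probability p.
    for u, v in list(G.edges()):
        if rng.random() < p:
            w = rng.randrange(n)
            if w != u and not G.has_edge(u, w):
                G.remove_edge(u, v)
                G.add_edge(u, w)
    return G

G = ws_flex(n=64, k=5, p=0.1)  # 64 nodes, average degree 5
```

Setting p = 0 recovers the deterministic lattice; sweeping k and p traces out the (C, L) space of Figure 1(c).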
3.3. Controlling Computational Budget
To compare the neural networks translated by these diverse graphs, it is important to ensure that all networks have approximately the same complexity, so that the differences in performance are due to their relational graph structures. We use FLOPS (# of multiply-adds) as the metric. We first compute the FLOPS of our baseline network instantiations (i.e. complete relational graph), and use them as the reference complexity in each experiment. As described in Section 2.3, a relational graph structure can be instantiated as a neural network with variable width, by partitioning dimensions or channels into disjoint set of node features. Therefore, we can conveniently adjust the width of a neural network to match the reference complexity (within 0.5% of baseline FLOPS) without changing the relational graph structures. We provide more details in the Appendix.
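The width-matching step can be sketched as follows, using a rough FLOP model in which each hidden layer of a graph-masked MLP costs about density × width² multiply-adds (the numbers and function names here are illustrative approximations, not the paper's exact accounting, which matches budgets within 0.5% of the baseline):

```python
def mlp_flops(width, density, n_hidden=4, d_in=3072, d_out=10):
    """Approximate multiply-add count of a graph-masked MLP: a relational
    graph keeping a `density` fraction of edges scales each hidden layer."""
    return d_in * width + n_hidden * density * width * width + width * d_out

def match_width(reference, density):
    """Smallest-error width whose FLOP count matches the reference budget."""
    width = 1
    while mlp_flops(width, density) < reference:
        width += 1
    return min((width - 1, width),
               key=lambda w: abs(mlp_flops(w, density) - reference))

reference = mlp_flops(512, density=1.0)  # complete-graph baseline, 512 units
w = match_width(reference, density=0.5)  # a sparser graph affords wider layers
```

The key point is that sparsifying the relational graph frees budget that is spent on extra width, so all compared networks cost roughly the same.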
4. Experimental Setup
Considering the large number of candidate graphs (3942 in total) that we want to explore, we first investigate graph structure of MLPs on the CIFAR-10 dataset (Krizhevsky, 2009) which has 50K training images and 10K validation images. We then further study the larger and more complex task of ImageNet classification (Russakovsky et al., 2015), which consists of 1K image classes, 1.28M training images and 50K validation images.
4.1. Base Architectures
For CIFAR-10 experiments, we use a 5-layer MLP with 512 hidden units as the baseline architecture. The input to the MLP is a 3072-d flattened vector of the (32×32×3) image; the output is a 10-d prediction. Each MLP layer has a ReLU non-linearity and a BatchNorm layer (Ioffe & Szegedy, 2015). We train the model for 200 epochs with batch size 128, using a cosine learning rate schedule (Loshchilov & Hutter, 2016) with an initial learning rate of 0.1 (annealed to 0, no restarting). We train all MLP models with 5 different random seeds and report the averaged results.
For ImageNet experiments, we use three ResNet-family architectures, including (1) ResNet-34, which only consists of basic blocks of 3×3 convolutions (He et al., 2016); (2) ResNet-34-sep, a variant where we replace all 3×3 dense convolutions in ResNet-34 with 3×3 separable convolutions (Chollet, 2017); (3) ResNet-50, which consists of bottleneck blocks (He et al., 2016) of 1×1, 3×3, 1×1 convolutions. Additionally, we use the EfficientNet-B0 architecture (Tan & Le, 2019), which achieves good performance in the small computation regime. Finally, we use a simple 8-layer CNN with 3×3 convolutions. The model has 3 stages with [64, 128, 256] hidden units. Stride-2 convolutions are used for down-sampling. The stem and head layers are the same as in a ResNet. We train all the ImageNet models for 100 epochs using a cosine learning rate schedule with an initial learning rate of 0.1. Batch size is 256 for ResNet-family models and 512 for EfficientNet-B0. We train all ImageNet models with 3 random seeds and report the averaged performance. All the baseline architectures have a complete relational graph structure. The reference computational complexity is 2.89e6 FLOPS for MLP, 3.66e9 FLOPS for ResNet-34, 0.55e9 FLOPS for ResNet-34-sep, 4.09e9 FLOPS for ResNet-50, 0.39e9 FLOPS for EfficientNet-B0, and 0.17e9 FLOPS for the 8-layer CNN. Training an MLP model roughly takes 5 minutes on an NVIDIA Tesla V100 GPU, and training a ResNet model on ImageNet roughly takes a day on 8 Tesla V100 GPUs with data parallelism. We provide more details in the Appendix.
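The cosine schedule used above (initial learning rate 0.1, annealed to 0, no restarts) follows Loshchilov & Hutter (2016); a minimal sketch of the per-epoch learning rate:

```python
import math

def cosine_lr(epoch, total_epochs=200, lr_init=0.1, lr_min=0.0):
    """Cosine annealing from lr_init down to lr_min over total_epochs."""
    return lr_min + 0.5 * (lr_init - lr_min) * (
        1 + math.cos(math.pi * epoch / total_epochs))

schedule = [cosine_lr(e) for e in range(200)]  # one learning rate per epoch
```

For the ImageNet runs the same formula applies with total_epochs = 100.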
Figure 4: Key results. The computational budgets of all the experiments are rigorously controlled. Each visualized result is averaged over at least 3 random seeds. A complete graph with C = 1 and L = 1 (lower right corner) is regarded as the baseline. (a)(c) Graph measures vs. neural network performance. The best graphs significantly outperform the baseline complete graphs. (b)(d) Single graph measure vs. neural network performance. Relational graphs that fall within the given range are shown as grey points. The overall smooth function is indicated by the blue regression line. (e) Consistency across architectures. Correlations of the performance of the same set of 52 relational graphs when translated to different neural architectures are shown. (f) Summary of all the experiments. Best relational graphs (the red crosses) consistently outperform the baseline complete graphs across different settings. Moreover, we highlight the “sweet spots” (red rectangular regions), in which relational graphs are not statistically worse than the best relational graphs (bins with red crosses). Bin values of the 5-layer MLP on CIFAR-10 are averaged over all the relational graphs whose C and L fall into the given bin.
4.2. Exploration with Relational Graphs
For all the architectures, we instantiate each sampled relational graph as a neural network, using the corresponding definitions outlined in Table 1. Specifically, we replace all the dense layers (linear layers, 3×3 and 1×1 convolution layers) with their relational graph counterparts. We leave the input and output layer unchanged and keep all the other designs (such as down-sampling, skip-connections, etc.) intact. We then match the reference computational complexity for all the models, as discussed in Section 3.3.
For CIFAR-10 MLP experiments, we study 3942 sampled relational graphs of 64 nodes as described in Section 3.2. For ImageNet experiments, due to high computational cost, we sub-sample 52 graphs uniformly from the 3942 graphs. Since EfficientNet-B0 is a small model with a layer that has only 16 channels, we cannot reuse the 64-node graphs sampled for other setups. We re-sample 48 relational graphs with 16 nodes following the same procedure in Section 3.