Introduction
The Spektral library is also described in a paper:
"Graph Neural Networks in TensorFlow and Keras with Spektral"
https://arxiv.org/abs/2006.12138
GitHub: https://github.com/danielegrattarola/spektral/
From the abstract: "In this paper we present Spektral, an open-source Python library for building graph neural networks with TensorFlow and the Keras application programming interface. Spektral implements a large set of methods for deep learning on graphs, including message-passing and pooling operators, as well as utilities for processing graphs and loading popular benchmark datasets. The purpose of this library is to provide the essential building blocks for creating graph neural networks, focusing on the guiding principles of user-friendliness and quick prototyping on which Keras is based. Spektral is therefore suitable for absolute beginners and expert deep learning practitioners alike."
Main layers
Spektral implements many popular graph deep learning layers, including:
- Graph Convolutional Networks (GCN)
- Chebyshev convolutions
- GraphSAGE
- ARMA convolutions
- Edge-Conditioned Convolutions (ECC)
- Graph attention networks (GAT)
- Approximated Personalized Propagation of Neural Predictions (APPNP)
- Graph Isomorphism Networks (GIN)
- Diffusion Convolutions
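Most of these convolution layers are variants of one core idea: propagate node features over the (normalized) adjacency matrix, then apply a learned linear transformation. As a rough illustration, here is a minimal NumPy sketch of the GCN propagation rule X' = D^{-1/2}(A + I)D^{-1/2} X W from Kipf & Welling, which is the operation the GCN layer implements (the toy graph, features, and weights below are made up for illustration):

```python
import numpy as np

# Toy 3-node path graph, one-hot node features, and a dummy weight matrix.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.eye(3)            # node features (one-hot)
W = np.ones((3, 2))      # learned weights (here just ones)

A_hat = A + np.eye(3)    # add self-loops
deg = A_hat.sum(axis=1)  # degrees of the self-loop-augmented graph
D_inv_sqrt = np.diag(deg ** -0.5)
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization

X_out = A_norm @ X @ W   # one GCN propagation step
print(X_out.shape)       # (3, 2): one 2-dim embedding per node
```

In Spektral this normalization is handled for you by the `LayerPreprocess(GCNConv)` transform, so you never build `A_norm` by hand.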
It also provides many pooling layers, including:
- MinCut pooling
- DiffPool
- Top-K pooling
- Self-Attention Graph (SAG) pooling
- Global pooling
- Global gated attention pooling
- SortPool
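The global pooling layers reduce an entire graph to a single vector, which is what you need for graph-level tasks such as molecule classification. A minimal NumPy sketch of the simplest case, global sum pooling (the feature values are made up for illustration):

```python
import numpy as np

# Node-feature matrix for one graph: 3 nodes, 2 features each.
X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# Global sum pooling: sum over the node axis to get one graph-level vector.
graph_embedding = X.sum(axis=0)
print(graph_embedding)  # [ 9. 12.]
```

The hierarchical pooling layers (MinCut, DiffPool, Top-K, SAG) instead coarsen the graph gradually, learning which nodes or clusters to keep at each stage.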
Installation
Install with pip:

```shell
pip install spektral
```
Install from source:

```shell
git clone https://github.com/danielegrattarola/spektral.git
cd spektral
python setup.py install  # Or 'pip install .'
```
Implementing a GCN with Spektral
The API will feel very familiar to TensorFlow/Keras users:
```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.losses import CategoricalCrossentropy
from tensorflow.keras.optimizers import Adam

from spektral.data.loaders import SingleLoader
from spektral.datasets.citation import Citation
from spektral.layers import GCNConv
from spektral.models.gcn import GCN
from spektral.transforms import AdjToSpTensor, LayerPreprocess

learning_rate = 1e-2
seed = 0
epochs = 200
patience = 10
data = "cora"

tf.random.set_seed(seed=seed)  # make weight initialization reproducible

# Load data
dataset = Citation(
    data, normalize_x=True, transforms=[LayerPreprocess(GCNConv), AdjToSpTensor()]
)


# We convert the binary masks to sample weights so that we can compute the
# average loss over the nodes (following original implementation by
# Kipf & Welling)
def mask_to_weights(mask):
    return mask.astype(np.float32) / np.count_nonzero(mask)


weights_tr, weights_va, weights_te = (
    mask_to_weights(mask)
    for mask in (dataset.mask_tr, dataset.mask_va, dataset.mask_te)
)

model = GCN(n_labels=dataset.n_labels, n_input_channels=dataset.n_node_features)
model.compile(
    optimizer=Adam(learning_rate),
    loss=CategoricalCrossentropy(reduction="sum"),
    weighted_metrics=["acc"],
)

# Train model
loader_tr = SingleLoader(dataset, sample_weights=weights_tr)
loader_va = SingleLoader(dataset, sample_weights=weights_va)
model.fit(
    loader_tr.load(),
    steps_per_epoch=loader_tr.steps_per_epoch,
    validation_data=loader_va.load(),
    validation_steps=loader_va.steps_per_epoch,
    epochs=epochs,
    callbacks=[EarlyStopping(patience=patience, restore_best_weights=True)],
)

# Evaluate model
print("Evaluating model.")
loader_te = SingleLoader(dataset, sample_weights=weights_te)
eval_results = model.evaluate(loader_te.load(), steps=loader_te.steps_per_epoch)
print("Done.\n" "Test loss: {}\n" "Test accuracy: {}".format(*eval_results))
```
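A detail worth highlighting in the script above is `mask_to_weights`: the Cora dataset marks the train/validation/test splits with boolean node masks, and this helper converts a mask into per-node sample weights so that, combined with the `sum`-reduced loss, training computes the average loss over just the masked nodes. A quick standalone check of that behavior:

```python
import numpy as np

# Same helper as in the GCN script above.
def mask_to_weights(mask):
    return mask.astype(np.float32) / np.count_nonzero(mask)

# Toy mask: 3 of 5 nodes belong to this split.
mask = np.array([True, False, True, True, False])
weights = mask_to_weights(mask)

print(weights)        # [0.3333 0.     0.3333 0.3333 0.    ] (approximately)
print(weights.sum())  # ~1.0: the weights sum to one over the masked nodes
```

Because the weights sum to one, a `reduction="sum"` loss weighted by them is exactly the mean loss over the split's nodes, matching the original Kipf & Welling implementation.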