⏱️ Benchmarks

Here we compare the speed of popular GNN models encoded in PyNeuraLogic against their implementations in three widely used GNN frameworks in their latest versions, namely PyTorch Geometric (PyG) (2.0.2), Deep Graph Library (DGL) (0.6.1), and Spektral (1.0.6).

The benchmarks compare the average training time per epoch of three architectures: GCN (two GCNConv layers), GraphSAGE (two GraphSAGEConv layers), and GIN (five GINConv layers).
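To make the comparison concrete, here is a minimal, framework-free sketch of what one GCN-style layer computes: a degree-normalised aggregation over each node's neighbourhood (with self-loops) followed by a shared linear transform. This is an illustrative pure-Python implementation, not the code used in the benchmarks.

```python
import math

def gcn_layer(adj, features, weight):
    """One GCN-style propagation step (illustrative sketch).

    adj: mapping node -> list of neighbour nodes
    features: list of per-node feature vectors
    weight: shared linear transform, shape (in_dim, out_dim)
    """
    n = len(features)
    # Add self-loops so each node also keeps its own features.
    neigh = [set(adj[i]) | {i} for i in range(n)]
    deg = [len(neigh[i]) for i in range(n)]
    out = []
    for i in range(n):
        # Symmetrically degree-normalised sum over the neighbourhood.
        agg = [0.0] * len(features[0])
        for j in neigh[i]:
            norm = 1.0 / math.sqrt(deg[i] * deg[j])
            for k, v in enumerate(features[j]):
                agg[k] += norm * v
        # Apply the shared linear transform.
        out.append([sum(agg[k] * weight[k][c] for k in range(len(agg)))
                    for c in range(len(weight[0]))])
    return out
```

Stacking two such layers (with nonlinearities in between) corresponds to the two-layer GCN benchmarked here; GraphSAGE and GIN differ mainly in how the neighbourhood aggregation and transform are defined.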

Datasets are picked from the common TUDataset benchmark collection and are loaded into PyNeuraLogic, DGL, and PyG via PyG's dataset loader; the Spektral benchmark uses Spektral's own dataset loader.

We compare the frameworks on a binary graph classification task using only node features. This keeps the introduced architectures directly reusable across all frameworks. Dataset statistics are listed below.
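The per-epoch figures below can be reproduced with a simple wall-clock harness of the following shape. This is a generic sketch, not the exact measurement code used for these benchmarks; the `train_epoch` callable and the warm-up count are assumptions.

```python
import time

def avg_epoch_time(train_epoch, epochs=10, warmup=2):
    """Average wall-clock time per training epoch.

    `train_epoch` is any zero-argument callable that runs one epoch.
    The first `warmup` epochs are excluded so one-off setup costs
    (JIT compilation, caching, ...) do not skew the average.
    """
    for _ in range(warmup):
        train_epoch()
    start = time.perf_counter()
    for _ in range(epochs):
        train_epoch()
    return (time.perf_counter() - start) / epochs
```

The same harness can be wrapped around each framework's training loop, so all frameworks are timed under identical conditions.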

Due to its declarative nature, PyNeuraLogic has to transform each dataset into a logic form and then into a computation graph. The time spent on this preprocessing is labeled as "Dataset Build Time". Note that this transformation happens only once, before training.
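The logic-form step can be pictured as turning a graph's edge list and node features into ground facts. The sketch below is only illustrative: the predicate names (`edge`, `feature`) and the string encoding are assumptions, not PyNeuraLogic's actual internal representation.

```python
def graph_to_facts(edges, node_features):
    """Illustrative sketch of the one-off preprocessing step:
    encode a graph as a list of ground logic facts.

    edges: iterable of (u, v) node-index pairs
    node_features: list of per-node feature vectors
    """
    # One fact per edge, e.g. "edge(0, 1)."
    facts = [f"edge({u}, {v})." for u, v in edges]
    # One fact per node carrying its feature vector.
    facts += [f"feature({i}, {list(f)})."
              for i, f in enumerate(node_features)]
    return facts
```

Because this build happens once, its cost is amortised over all training epochs, which is why it is reported separately from the per-epoch times.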

Average Time Per Epoch

|                    | GCN     | GraphSAGE | GIN     |
|--------------------|---------|-----------|---------|
| Spektral           | 0.1238s | 0.1547s   | 0.2491s |
| Deep Graph Library | 0.1287s | 0.1795s   | 0.5214s |
| PyTorch Geometric  | 0.0897s | 0.1099s   | 0.3399s |
| PyNeuraLogic       | 0.0083s | 0.0119s   | 0.0393s |

Dataset Build Time

|              | GCN     | GraphSAGE | GIN     |
|--------------|---------|-----------|---------|
| PyNeuraLogic | 1.4265s | 1.9372s   | 2.3662s |

Dataset Statistics

| Num. of graphs | Avg. num. of nodes | Avg. num. of edges | Num. of node features |
|----------------|--------------------|--------------------|-----------------------|
| 188            | ~17.9              | ~19.7              | 7                     |