The Generalized Neural Network Bundle (GNN) provides a friendly, parallelized ECL interface to the popular Tensorflow Neural Network package. The newest version of GNN — Version 3.0 — provides a number of important new features:
- Native support for Tensorflow 2.0
- Full access to Tensorflow pre-trained models
- Easier saving and restoration of trained models
- More flexible distributed training
I’ll discuss each of these features below. This discussion assumes some general knowledge of GNN and Tensorflow.
Native Support for Tensorflow 2.0
In GNN Version 2, we added support for Tensorflow 2.0. This was done, however, using Tensorflow’s “compatibility mode,” which allows Tensorflow 2.0 to support the version 1.5 interface. While this approach was functional, it limited performance and access to some of the features provided by Tensorflow 2.0. GNN Version 3 now natively utilizes the Tensorflow 2.0 interface, maximizing performance and providing access to the full feature set.
Tensorflow Pre-trained Models
Tensorflow provides a rich set of pre-trained models that were difficult to utilize in previous GNN versions. Some of the pre-trained models could not be used at all with the “compatibility mode” interface.
GNN 3.0 not only allows use of any pre-trained model, but also allows extension of pre-trained models by adding layers above or below them.
For example, in GNN, you would typically define the layers of the Neural Network using a layer definition:
// ldef provides the set of Keras layers that form the neural network. These are
// provided as strings representing the Python layer definitions as would be provided
// to Keras. Note that the symbol 'tf' is available for use (import tensorflow as tf),
// as is the symbol 'layers' (from tensorflow.keras import layers).
// Recall that in Keras, the input_shape must be present in the first layer.
// Note that this shape is the shape of a single observation.
ldef := ['''layers.Dense(256, activation='tanh', input_shape=(5,))''',
         '''layers.Dense(256, activation='relu')''',
         '''layers.Dense(1, activation=None)'''];
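To make the mechanism concrete, the sketch below shows how layer-definition strings like these can be evaluated on the Python side. The `layers` object here is a stub namespace (an assumption for illustration, so the example runs without Tensorflow installed); GNN evaluates the same kind of strings with `layers` bound to the real `tensorflow.keras.layers` module. This is an illustrative sketch of the idea, not GNN's actual implementation.

```python
class StubLayer:
    """Records the constructor arguments a layer string was called with."""
    def __init__(self, name, *args, **kwargs):
        self.name, self.args, self.kwargs = name, args, kwargs

class StubLayers:
    """Minimal stand-in for tensorflow.keras.layers."""
    def __getattr__(self, name):
        return lambda *args, **kwargs: StubLayer(name, *args, **kwargs)

# The same layer strings as in the ldef above, minus the ECL triple quotes.
ldef = ["layers.Dense(256, activation='tanh', input_shape=(5,))",
        "layers.Dense(256, activation='relu')",
        "layers.Dense(1, activation=None)"]

# Each string is an ordinary Python expression; evaluating it in a namespace
# where 'layers' is defined yields one layer object per string.
namespace = {"layers": StubLayers()}
model = [eval(s, namespace) for s in ldef]

for layer in model:
    print(layer.name, layer.args, layer.kwargs)
```

Because the strings are plain Python expressions, anything legal as a Keras layer constructor call is legal in an ldef entry.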
This is unchanged in GNN, but in Version 3, you can alternatively specify a pre-trained model, which is known by an “application” path in Tensorflow. So you could create the full ResNet50 (a 50-layer image-recognition network) with the following layer definition:
ldef := ['''applications.resnet50.ResNet50()'''];
You could also load all of the pre-trained weights for the ImageNet image set:
ldef := ['''applications.resnet50.ResNet50(weights = "imagenet")'''];
Finally, you could combine a pre-defined (and optionally pre-trained) model with custom pre- or post-processing by adding layers above or below the application.
ldef := ['''layers.Dense(...)''',
         '''applications.resnet50.ResNet50(weights = "imagenet")''',
         '''layers.Dense(...)'''];
Of course, the shapes of the added layers must be compatible with the inputs and outputs of the pre-defined model.
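The compatibility rule can be illustrated with a small shape-checking sketch (not GNN code). The ResNet50 shapes below are its Keras defaults (224x224x3 RGB input; 1000 ImageNet class scores out); the Dense layer sizes are hypothetical choices for the example.

```python
# Each stage of a hypothetical stack: (name, input shape, output shape).
stack = [
    ("ResNet50",  (224, 224, 3), (1000,)),  # pre-trained application
    ("Dense(64)", (1000,),       (64,)),    # custom layer added after it
    ("Dense(1)",  (64,),         (1,)),     # final output layer
]

def check_stack(stack):
    """Return True if each stage's output shape matches the next stage's input shape."""
    return all(out_shape == stack[i + 1][1]
               for i, (_, _, out_shape) in enumerate(stack[:-1]))

print(check_stack(stack))  # the 1000-wide ResNet50 output feeds a 1000-input Dense
```

If the first added Dense layer expected, say, 512 inputs instead of 1000, the stack would be rejected; Keras applies exactly this kind of adjacency check when the layers are actually assembled.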
Saving and Restoring Trained Models
Once you’ve trained a GNN model, you will likely want to persist that model so that it can be used to make predictions in the future. In previous versions of GNN, you could save the trained weights, but in order to utilize the model, you would need to instantiate the exact same set of layers (network topology) that you used in training. This was error-prone because if the weights did not exactly match the network, the predictions could fail or produce poor results.
A new facility in GNN 3.0 provides a single call that saves both the topology and trained weights of the network. This is returned as a specialized dataset that can hold both sets of information. That dataset can be persisted to a file and later retrieved. A second call will take that trained model dataset and recreate the neural network within Tensorflow, restoring both the topology and weights.
GNN_Model := GNNI.GNN_Model; // The record structure for a trained model.
// modelId is the id returned from Fit(...) when the model is trained.
trained_model := GNNI.GetModel(modelId);
OUTPUT(trained_model, GNN_Model, 'mytrainedmodelfile');
...
// In another program
trained_model := DATASET('mytrainedmodelfile', GNN_Model, THOR);
GNNI.SetModel(sessionId, trained_model);
// Model is now ready to make predictions.
Flexible Distributed Training
GNN parallelizes both the training and prediction processes. While prediction is fully parallelizable (linear scaling), training is only partially parallelizable (sub-linear scaling). This is because training requires periodic synchronization of weights across nodes. If you double the number of nodes, the speed of training will not double. At some point, it becomes counterproductive to add more nodes, since doing so will actually slow down training. Once the cost of synchronization outweighs the gain from parallelization, we’ve passed the optimal number of nodes for training. Additionally, due to the power of GPUs, a single node with a GPU may train faster than many nodes without GPUs. The optimal number of nodes, depending on the particular neural network topology, is typically between 5 and 15; if you have a node with a GPU, then the optimal number of nodes may be 1.
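The trade-off can be sketched with a toy cost model (illustrative only; the compute and synchronization constants are invented for the example, not measured from GNN). Per-epoch compute time shrinks as nodes are added, but synchronization cost grows, so total epoch time has a minimum at some intermediate node count.

```python
def epoch_time(nodes, compute=100.0, sync_per_node=1.5):
    """Toy estimate of seconds per epoch: compute splits across nodes,
    while weight-synchronization cost grows with the node count."""
    return compute / nodes + sync_per_node * nodes

# Evaluate a 30-node cluster at every possible node count.
times = {n: epoch_time(n) for n in range(1, 31)}
best = min(times, key=times.get)

print(f"fastest at {best} nodes: {times[best]:.1f}s per epoch")
print(f"all 30 nodes: {times[30]:.1f}s per epoch")
```

Under these made-up constants the optimum lands inside the 5-15 node range described above, and using the full 30-node cluster is roughly twice as slow as the optimum; the real optimum for GNN depends on the network topology and hardware.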
A new parameter has been added to the GNNI Fit(…) function to allow better control over training. The limitNodes parameter allows you to set the number of nodes used during training. It defaults to 0 — use all nodes — but can be set to any number less than the number of nodes in the cluster. For example, if you are running on a 30 node cluster, you may be better off setting limitNodes to 10. If your first node has a GPU, it may be better to set limitNodes to 1.
GNN Version 3 provides significant new functionality, and broadens GNN’s application for neural networks of all types.