Radial basis function network for two classes of the MNIST dataset
In this blog post, we will discuss radial basis function networks (RBFNs). RBFNs seem to have lost the popularity race to neural networks, yet they offer a powerful generalization capacity. RBFNs are similar to neural networks in the sense that they use multiple layers of perceptrons to define a function over the input space. However, in an RBFN the first layer generalizes the input space: it computes the distances between fixed center points and the input data. The resulting activation functions are radially symmetric, hence the name radial basis function networks.
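To make the first layer concrete, here is a minimal Python/NumPy sketch of a Gaussian RBF hidden layer (the blog's actual code is MATLAB; the function name `rbf_activations` and the toy inputs are hypothetical):

```python
import numpy as np

def rbf_activations(X, centers, sigma):
    """Hidden-layer output: one Gaussian unit per center.
    Hypothetical helper; the post's repository uses MATLAB."""
    # squared Euclidean distance between every input and every center
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    # radially symmetric activation: depends only on distance to the center
    return np.exp(-d2 / (2 * sigma ** 2))

X = np.array([[0.0, 0.0], [1.0, 1.0]])
centers = np.array([[0.0, 0.0], [2.0, 2.0]])
Phi = rbf_activations(X, centers, sigma=1.0)  # shape (2 inputs, 2 centers)
```

An input that coincides with a center activates that unit maximally (value 1), and the activation decays with distance, which is exactly the "generalization" of the input space described above.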
This code is a mere proof of concept; we explore the world of RBFNs with a basic implementation. The dataset consists of two selected classes from the MNIST dataset. We implement both exact interpolation and approximate interpolation:
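The difference between the two variants can be sketched as follows, again as a hedged Python/NumPy stand-in for the MATLAB code (the toy data and helper `rbf_design` are assumptions, not the post's actual implementation):

```python
import numpy as np

def rbf_design(X, centers, sigma):
    """Design matrix of Gaussian RBF activations (hypothetical helper)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))      # toy stand-in for the MNIST images
t = (X[:, 0] > 0).astype(float)   # binary targets, as in the two-class setup

# Exact interpolation: every training point is a center, so the design
# matrix is square and the fit passes through every target exactly.
Phi_exact = rbf_design(X, X, sigma=1.0)
w_exact = np.linalg.solve(Phi_exact, t)

# Approximate interpolation: fewer centers than points, solved by
# least squares, so the fit only approximates the targets.
centers = X[:5]
Phi_approx = rbf_design(X, centers, sigma=1.0)
w_approx, *_ = np.linalg.lstsq(Phi_approx, t, rcond=None)
```

Exact interpolation requires solving a system as large as the training set, which is why approximate interpolation with a smaller set of centers is the more practical choice for MNIST-sized data.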
RBFNs have no deterministic threshold, even when the targets are 0 and 1. Therefore, we evaluate the accuracy over a range of possible thresholds:

It is also interesting to vary the regularization on the final perceptron or the variance of the radial basis functions:
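For the regularization on the final layer, a common choice is ridge (L2) regression on the output weights. The sketch below assumes that is the scheme meant here; the helper name and the random toy data are hypothetical:

```python
import numpy as np

def ridge_weights(Phi, t, lam):
    """Regularized least squares for the output layer:
    w = (Phi^T Phi + lam * I)^{-1} Phi^T t  (assumed scheme)."""
    k = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(k), Phi.T @ t)

rng = np.random.default_rng(1)
Phi = rng.normal(size=(30, 8))   # toy design matrix of RBF activations
t = rng.normal(size=30)

# Larger lam shrinks the weights toward zero, trading fit for smoothness.
w_small = ridge_weights(Phi, t, lam=1e-3)
w_large = ridge_weights(Phi, t, lam=1e3)
```

Varying the RBF variance has a related effect: a larger sigma makes each basis function broader and the fitted function smoother, while a small sigma lets the network fit (and overfit) fine detail.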


This implementation of an RBFN is basic. Anyone who would like to take it further can consider extending it.
The code is in the associated GitHub repository; RBFN_two_MNIST_main.m is the file to start from.
As always, I am curious about any comments and questions. Reach me at romijndersrob@gmail.com