tiny_dnn
1.0.0
A header only, dependency-free deep learning framework in C++11
MNIST is a well-known dataset of handwritten digits. We'll build an auto-encoder from paired convolution and deconvolution layers, 4 layers in total, without any fully connected layers.
We implement the forward pass of deconvolution as a typical upsampling process with convolutional kernels. The backward pass is implemented using the fact that a deconvolution produces the same result as a zero-padded convolution with a reversed kernel, i.e. the kernel's element indices are reversed when viewed from the output.
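To illustrate this equivalence (a minimal standalone sketch in plain C++, not tiny-dnn's actual implementation), the following 1-D, stride-1 example computes a deconvolution both ways and verifies the outputs match:

```cpp
#include <cassert>
#include <iostream>
#include <vector>

// 1-D deconvolution (transposed convolution, stride 1):
// each input element scatters a scaled copy of the kernel into the output.
std::vector<double> deconv1d(const std::vector<double>& x,
                             const std::vector<double>& k) {
    std::vector<double> y(x.size() + k.size() - 1, 0.0);
    for (size_t i = 0; i < x.size(); ++i)
        for (size_t j = 0; j < k.size(); ++j)
            y[i + j] += x[i] * k[j];
    return y;
}

// Convolution of x, zero-padded by (k.size()-1) on each side,
// with the index-reversed kernel.
std::vector<double> padded_conv_reversed(const std::vector<double>& x,
                                         const std::vector<double>& k) {
    const size_t p = k.size() - 1;
    std::vector<double> xp(x.size() + 2 * p, 0.0);
    for (size_t i = 0; i < x.size(); ++i) xp[i + p] = x[i];

    std::vector<double> y(x.size() + k.size() - 1, 0.0);
    for (size_t i = 0; i < y.size(); ++i)
        for (size_t j = 0; j < k.size(); ++j)
            y[i] += xp[i + j] * k[k.size() - 1 - j];  // reversed index
    return y;
}

int main() {
    std::vector<double> x{1, 2, 3}, k{0.5, -1, 2};
    assert(deconv1d(x, k) == padded_conv_reversed(x, k));
    std::cout << "both formulations give identical results\n";
}
```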
You can add layers from top to bottom with operator <<. We recommend pairing the convolution and deconvolution layers as conv1 - conv2 ... deconv2 - deconv1, as in the sketch below.
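For example, a 4-layer auto-encoder might be assembled like this (a sketch: the layer sizes and channel counts are illustrative, assuming 32x32 padded inputs and 5x5 kernels, and are not necessarily those of the shipped example):

```cpp
#include "tiny_dnn/tiny_dnn.h"
using namespace tiny_dnn;
using namespace tiny_dnn::activation;

network<sequential> nn;

// paired encoder/decoder: conv1 - conv2 ... deconv2 - deconv1
nn << convolutional_layer<tan_h>(32, 32, 5, 1, 6)     // conv1:   32x32x1 -> 28x28x6
   << convolutional_layer<tan_h>(28, 28, 5, 6, 6)     // conv2:   28x28x6 -> 24x24x6
   << deconvolutional_layer<tan_h>(24, 24, 5, 6, 6)   // deconv2: 24x24x6 -> 28x28x6
   << deconvolutional_layer<tan_h>(28, 28, 5, 6, 1);  // deconv1: 28x28x6 -> 32x32x1
```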
tiny-dnn supports the idx format, so all you have to do is call the parse_mnist_images and parse_mnist_labels functions.
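For example (the file names are the standard MNIST idx file names; adjust the paths to your setup):

```cpp
#include "tiny_dnn/tiny_dnn.h"
using namespace tiny_dnn;

std::vector<label_t> train_labels;
std::vector<vec_t>   train_images;

parse_mnist_labels("train-labels.idx1-ubyte", &train_labels);
// rescale pixel values to [-1.0, 1.0] and pad 2px on each side (28x28 -> 32x32)
parse_mnist_images("train-images.idx3-ubyte", &train_images, -1.0, 1.0, 2, 2);
```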
> Note:
> Original MNIST images are 28x28, centered, with pixel values in [0,255].
> This code rescales the values from [0,255] to [-1.0,1.0] and adds a 2px border on each side (so each image is 32x32).
If you want to use another format to train networks, see the Data Format page.
Just use operator << and operator >> with ostream/istream:
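For example (the file name is arbitrary; nn is the network from the snippets above):

```cpp
#include <fstream>

// save the trained network
std::ofstream ofs("deconv_ae_weights");
ofs << nn;

// ... later, restore it into a network with the same architecture
std::ifstream ifs("deconv_ae_weights");
ifs >> nn;
```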
This example visualizes the output images and the deconvolution kernels. You can modify the code below to select what to visualize: layer outputs, convolution kernels, or deconvolution kernels.
You can do so like this:
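Here is a sketch of the usual tiny-dnn visualization idiom (the layer index 0 and the file names are illustrative; nn and the includes are assumed from the snippets above):

```cpp
// save the output of each layer as an image
for (size_t i = 0; i < nn.depth(); i++) {
    auto out_img = nn[i]->output_to_image();
    out_img.save("layer_" + std::to_string(i) + ".png");
}

// save the kernels of the first convolution layer as an image;
// use deconvolutional_layer<tan_h> instead to inspect deconvolution kernels
auto weight_img = nn.at<convolutional_layer<tan_h>>(0).weight_to_image();
weight_img.save("conv1_kernels.png");
```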
We replaced a convolutional layer in a LeNet-5-like architecture with a deconvolutional layer for the MNIST digit recognition task and got an acceptable accuracy of over 98%. You can do the same to carry out the classification example:
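The exact architecture used in the example is not reproduced here; as a hedged sketch, one plausible LeNet-5-like arrangement with the C3 convolution swapped for a deconvolution might look like this (all kernel sizes and channel counts are illustrative):

```cpp
network<sequential> nn;

nn << convolutional_layer<tan_h>(32, 32, 5, 1, 6)     // C1: 32x32x1  -> 28x28x6
   << average_pooling_layer<tan_h>(28, 28, 6, 2)      // S2: 28x28x6  -> 14x14x6
   << deconvolutional_layer<tan_h>(14, 14, 3, 6, 16)  // C3 as deconv: 14x14x6 -> 16x16x16
   << average_pooling_layer<tan_h>(16, 16, 16, 2)     // S4: 16x16x16 -> 8x8x16
   << convolutional_layer<tan_h>(8, 8, 8, 16, 120)    // C5: 8x8x16   -> 1x1x120
   << fully_connected_layer<tan_h>(120, 10);          // F6: 10 classes
```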