tiny_dnn
1.0.0
A header-only, dependency-free deep learning framework in C++11
Include tiny_dnn.h:
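A minimal sketch of the boilerplate, assuming the header sits at its usual `tiny_dnn/tiny_dnn.h` path; the `using` directives are only a convenience for the snippets below:

```cpp
#include "tiny_dnn/tiny_dnn.h"

// pull in the core types, the activations (relu, tan_h, identity, ...)
// and the short layer aliases (conv, max_pool, fc, ...)
using namespace tiny_dnn;
using namespace tiny_dnn::activation;
using namespace tiny_dnn::layers;
```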
Declare the model as `network`. There are two types of network: `network<sequential>` and `network<graph>`. The sequential model is easier to construct.
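For example (`net` is an arbitrary variable name):

```cpp
// an empty feed-forward model; layers are appended with operator<<
network<sequential> net;
```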
Stack layers:
Some layers take an activation as a template parameter: `max_pool<relu>` means "apply a relu activation after the pooling". If the layer has no successive activation, use `max_pool<identity>` instead.
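A sketch of a small LeNet-style stack; the shapes are illustrative, chosen for a 32x32 single-channel input and 10 output classes:

```cpp
net << conv<tan_h>(32, 32, 5, 1, 6)   // in:32x32x1, 5x5 kernel, 6 feature maps
    << max_pool<relu>(28, 28, 6, 2)   // in:28x28x6, 2x2 pooling, then relu
    << fc<tan_h>(14 * 14 * 6, 120)    // in:14x14x6, out:120
    << fc<identity>(120, 10);         // in:120, out:10, no activation
```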
Declare the optimizer:
In addition to gradient descent, you can use modern optimizers such as adagrad, adadelta, and adam.
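For example, picking adagrad (the optimizers share one interface, so adam or adadelta can be swapped in the same way):

```cpp
// adagrad with its default learning rate
adagrad opt;
```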
Now you can start training:
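A sketch of a fit call, assuming `x_data` and `y_data` are training inputs and target vectors you have already loaded (both `std::vector<vec_t>`); the mse loss, batch size, and epoch count are illustrative:

```cpp
size_t batch_size = 16;   // mini-batch size
int    epochs     = 30;   // passes over the training set

// minimize mse between net(x_data[i]) and y_data[i]
net.fit<mse>(opt, x_data, y_data, batch_size, epochs);
```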
If you don't have the target vectors but do have the class IDs, you can alternatively use `train`:
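The call mirrors `fit`, but takes class IDs (`std::vector<label_t>`; `y_labels` here is a hypothetical name) instead of target vectors:

```cpp
// y_labels[i] is the class id of x_data[i]
net.train<mse>(opt, x_data, y_labels, batch_size, epochs);
```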
Validate the training result:
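A sketch using `test` and `get_loss`; `x_test`, `y_test_labels`, and `y_test_vectors` stand in for your held-out data:

```cpp
// per-class success/failure statistics on the test set
result res = net.test(x_test, y_test_labels);
double accuracy = 100.0 * res.num_success / res.num_total;

// average mse loss against the target vectors
double loss = net.get_loss<mse>(x_test, y_test_vectors);
```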
Generate predictions on new data:
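For a single sample `new_x` (a `vec_t`), `predict` returns the raw output vector and `predict_label` the arg-max class:

```cpp
vec_t   scores = net.predict(new_x);        // one value per output unit
label_t label  = net.predict_label(new_x);  // index of the strongest output
```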
Save the trained parameters and model:
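A sketch; `"my-network"` is an arbitrary file name:

```cpp
net.save("my-network");     // serialize weights and architecture together

network<sequential> net2;
net2.load("my-network");    // restore later into a fresh object
```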
For a more in-depth look at tiny-dnn, check out MNIST classification, where you can see an end-to-end example. You will find tiny-dnn's API in How-to.