This section explains the API changes from v0.0.1 to v0.1.0.
## How to specify the loss and the optimizer
In v0.0.1, the loss function and the optimization algorithm are treated as template parameters of `network`:

```cpp
// v0.0.1
network<mse, adagrad> net;
net.train(x_data, y_label, n_batch, n_epoch);
```
From v0.1.0, they are passed as arguments to the `train`/`fit` functions:

```cpp
// v0.1.0
network<sequential> net;
adagrad opt;
net.train<mse>(opt, x_data, y_label, n_batch, n_epoch);
```
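Because the loss and the optimizer are no longer part of the network type, the same `net` object can be trained with different combinations of them. A minimal sketch; `cross_entropy` and `gradient_descent` are assumed here to be an available loss type and optimizer, neither name is taken from this document:

```cpp
// Sketch (v0.1.0 style): swap loss/optimizer per call without changing the network type.
network<sequential> net;

adagrad opt_a;
net.train<mse>(opt_a, x_data, y_label, n_batch, n_epoch);            // mse loss + adagrad

gradient_descent opt_b;                                              // assumed optimizer name
net.train<cross_entropy>(opt_b, x_data, y_label, n_batch, n_epoch);  // assumed loss name
```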
## Training API for regression
In v0.0.1, regression and classification share the same API:

```cpp
// v0.0.1
net.train(x_data, y_data, n_batch, n_epoch);  // regression
net.train(x_data, y_label, n_batch, n_epoch); // classification
```
From v0.1.0, these are separated into `fit` (regression) and `train` (classification):

```cpp
// v0.1.0
net.fit<mse>(opt, x_data, y_data, n_batch, n_epoch);    // regression
net.train<mse>(opt, x_data, y_label, n_batch, n_epoch); // classification
```
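The two entry points differ mainly in the type of the target data. A minimal sketch, assuming the library defines `vec_t` (a vector of floating-point values) and `label_t` (an integral class id); these type names are assumptions, not taken from the text above:

```cpp
// Sketch: fit consumes real-valued target vectors, train consumes class labels.
network<sequential> net;       // layer definitions omitted
adagrad opt;

std::vector<vec_t>   x_data;   // input samples
std::vector<vec_t>   y_data;   // regression targets (assumed vec_t)
std::vector<label_t> y_label;  // classification targets (assumed label_t)

const size_t n_batch = 16;
const size_t n_epoch = 10;

net.fit<mse>(opt, x_data, y_data, n_batch, n_epoch);    // regression
net.train<mse>(opt, x_data, y_label, n_batch, n_epoch); // classification
```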
## The default mode of weight re-initialization
In v0.0.1, the default mode of weight initialization in the `train` function is `reset_weights=true`, so every call to `train` re-initializes the weights, discarding weights loaded from a file or learned by a previous call:

```cpp
// v0.0.1
std::ifstream is("model");
is >> net;
net.train(x_data, y_data, n_batch, n_epoch); // the loaded weights are re-initialized before training
net.train(x_data, y_data, n_batch, n_epoch); // the second call re-initializes them again
```
From v0.1.0, the default for both `fit` and `train` is `reset_weights=false`, so the loaded (or previously trained) weights are kept and training continues from them:

```cpp
// v0.1.0
std::ifstream is("model");
is >> net;
net.fit<mse>(opt, x_data, y_data, n_batch, n_epoch);  // the loaded weights are used as the starting point
net.fit<mse>(opt, x_data, y_data, n_batch, n_epoch);  // the second call continues from the result of the first
net2.fit<mse>(opt, x_data, y_data, n_batch, n_epoch); // a separate network instance is trained independently
```
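If the old behavior is wanted, i.e. training from freshly initialized weights, the reset has to be requested explicitly. A minimal sketch, assuming the network class exposes an `init_weight()` method for this purpose; the method name is an assumption, not taken from the text above:

```cpp
// Sketch (v0.1.0 style): opt into re-initialization explicitly.
std::ifstream is("model");
is >> net;          // load pre-trained weights

net.init_weight();  // assumed API: discard the loaded weights and re-initialize them
net.fit<mse>(opt, x_data, y_data, n_batch, n_epoch); // trains from scratch, as v0.0.1 did by default
```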