tiny_dnn
1.0.0
A header-only, dependency-free deep learning framework in C++11
tiny_dnn::weight_init::lecun Class Reference
Use fan-in (the number of input weights for each neuron) for scaling. More...
#include <weight_init.h>
Public Member Functions
lecun (float_t value)
void fill (vec_t *weight, serial_size_t fan_in, serial_size_t fan_out) override
Public Member Functions inherited from scalable
scalable (float_t value)
void scale (float_t value)
Additional Inherited Members
Protected Attributes inherited from scalable
float_t scale_
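For context, a minimal usage sketch of the interface listed above. Assumptions not stated on this page: the umbrella header tiny_dnn/tiny_dnn.h pulls in weight_init.h, the class sits in the tiny_dnn::weight_init namespace, and the layer sizes below are made up for illustration.

```cpp
#include <tiny_dnn/tiny_dnn.h>  // assumed umbrella header that includes weight_init.h
#include <iostream>

int main() {
  using namespace tiny_dnn;

  // Weight vector for a hypothetical fully connected layer:
  // fan_in = 64 inputs per neuron, fan_out = 32 output neurons.
  vec_t weights(64 * 32);

  weight_init::lecun init(float_t(1));  // scaling factor passed to the constructor
  init.fill(&weights, 64, 32);          // scales by fan-in, per the brief description above

  std::cout << "first initialized weight: " << weights[0] << "\n";
  return 0;
}
```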
Use fan-in (the number of input weights for each neuron) for scaling.
Y. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller, "Efficient BackProp", in Neural Networks: Tricks of the Trade, Springer, 1998.
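The reference above recommends drawing initial weights with a spread that shrinks as fan-in grows. A minimal sketch of that idea in plain C++ follows; it illustrates the scaling rule, not tiny_dnn's exact fill implementation, and the uniform distribution with a 1/sqrt(fan_in) bound is an assumption consistent with the brief description.

```cpp
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Fill `weight` uniformly from [-s, s] with s = scale / sqrt(fan_in),
// so neurons with more inputs start with smaller weights.
void lecun_like_fill(std::vector<float> *weight,
                     std::size_t fan_in,
                     float scale = 1.0f) {
  const float bound = scale / std::sqrt(static_cast<float>(fan_in));
  std::mt19937 rng(std::random_device{}());
  std::uniform_real_distribution<float> dist(-bound, bound);
  for (auto &w : *weight) w = dist(rng);
}
```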