tiny_dnn
1.0.0
A header-only, dependency-free deep learning framework in C++11
▼Nmodels | |
Calexnet | |
▼Ntiny_dnn | |
►Nactivation | |
►Ncore | |
►Ndetail | |
►Nweight_init | |
CDevice | |
CProgram | |
CProgramHash | |
CProgramManager | |
CTensor | |
CConv2dGradOp | |
CConv2dOp | |
CConv2dLibDNNForwardOp | |
CConv2dLibDNNBackwardOp | |
CConv2dOpenCLForwardOp | |
CConv2dOpenCLBackwardOp | |
CFullyConnectedGradOp | |
CFullyConnectedOp | |
CMaxPoolGradOp | |
CMaxPoolOp | |
Ctimer | |
Cprogress_display | |
Celementwise_add_layer | Element-wise addition of N vectors: y_i = x0_i + x1_i + ...
Caverage_pooling_layer | Average pooling with trainable weights |
Caverage_unpooling_layer | Average unpooling with trainable weights
Cbatch_normalization_layer | Batch Normalization |
Cconcat_layer | Concat N layers along depth |
Cconvolutional_layer | 2D convolution layer |
Cdeconvolutional_layer | 2D deconvolution layer |
Cdropout_layer | Applies dropout to the input |
Cfeedforward_layer | Single-input, single-output network with activation function |
Cfully_connected_layer | Computes the fully-connected (matmul) operation
Cinput_layer | |
Clayer | Base class of all kinds of NN layers
Clinear_layer | Element-wise operation: f(x) = h(scale*x+bias) |
Clrn_layer | Local response normalization |
Cmax_pooling_layer | Applies a max-pooling operation to the spatial data
Cmax_unpooling_layer | Applies a max-unpooling operation to the spatial data
Cpartial_connected_layer | |
Cpower_layer | Element-wise pow: y = scale*x^factor |
Cquantized_convolutional_layer | 2D convolution layer |
Cquantized_deconvolutional_layer | 2D deconvolution layer |
Cquantized_fully_connected_layer | Computes the fully-connected (matmul) operation
Cslice_layer | Slice an input data into multiple outputs along a given slice dimension |
Cmse | |
Cabsolute | |
Cabsolute_eps | |
Ccross_entropy | |
Ccross_entropy_multiclass | |
Cresult | |
Cnetwork | A model of neural networks in tiny-dnn (see the usage sketch after this index)
Cnode | Base class of all kinds of tiny-dnn data
Cedge | Class containing input/output data |
Cnode_tuple | |
Cnodes | Base class of various network types (sequential, multi-in/multi-out)
Csequential | Single-input, single-output feedforward network |
Cgraph | Generic graph network |
Coptimizer | Base class of optimizers; usesHessian is true if the optimizer uses the Hessian (2nd-order derivative of the loss function)
Cstateful_optimizer | |
Cadagrad | Adaptive gradient method |
CRMSprop | RMSprop optimizer
Cadam | Adam optimizer (Kingma & Ba, 2015)
Cgradient_descent | SGD without momentum |
Cmomentum | SGD with momentum |
►Caligned_allocator | |
Cdeserialization_helper | |
Cgraph_visualizer | Utility for graph visualization |
Cimage | Simple image utility class |
Cnn_error | Error exception class for tiny-dnn |
Cnn_warn | Warning class for tiny-dnn (for debug) |
Cnn_info | Info class for tiny-dnn (for debug) |
Cnn_not_implemented_error | |
Cblocked_range | |
Crandom_generator | |
Cserialization_helper | |
Cindex3d | |
▼Nvectorize | |
►Ndetail | |
Cfoobar |
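
Usage sketch: the example below shows how the classes in this index typically fit together. A network<sequential> is assembled from layer objects (convolutional_layer, average_pooling_layer, fully_connected_layer), trained with a loss class (mse) and an optimizer (adagrad), and evaluated through a result object. It follows the pattern of the LeNet-style sample shipped with tiny-dnn; the layer aliases (conv, ave_pool, fc), the concrete layer sizes, and the dataset variables are illustrative assumptions, not part of this index.

    #include <iostream>
    #include <vector>
    #include "tiny_dnn/tiny_dnn.h"          // header-only: no separate linking step

    using namespace tiny_dnn;
    using namespace tiny_dnn::activation;
    using namespace tiny_dnn::layers;

    int main() {
      // single-input, single-output feedforward model (classes network + sequential)
      network<sequential> net;

      // LeNet-like topology built from the layer classes listed above; sizes are illustrative only
      net << conv<tan_h>(32, 32, 5, 1, 6)   // convolutional_layer: 32x32x1 in, 5x5 kernel, 6 maps
          << ave_pool<tan_h>(28, 28, 6, 2)  // average_pooling_layer: 2x2 subsampling
          << fc<tan_h>(14 * 14 * 6, 120)    // fully_connected_layer
          << fc<identity>(120, 10);         // output layer, 10 classes

      // optimizer classes from this index: gradient_descent, momentum, adagrad, RMSprop, adam
      adagrad opt;

      // assumed to be filled elsewhere, e.g. by the bundled MNIST parser
      std::vector<vec_t>   train_images, test_images;
      std::vector<label_t> train_labels, test_labels;

      size_t batch_size = 16;
      int    epochs     = 10;
      net.train<mse>(opt, train_images, train_labels, batch_size, epochs);

      // result holds the confusion matrix of a test run
      result res = net.test(test_images, test_labels);
      res.print_detail(std::cout);
      return 0;
    }

The loss classes other than mse (absolute, absolute_eps, cross_entropy, cross_entropy_multiclass) can be substituted as the template argument of train, and any of the listed optimizer classes can replace adagrad; both choices are orthogonal to the layer graph.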