tiny_dnn  1.0.0
A header-only, dependency-free deep learning framework in C++11
tiny_dnn::nodes Class Reference [abstract]

Base class for the various network types (sequential, multi-in/multi-out).

#include <nodes.h>

Inheritance diagram for tiny_dnn::nodes: derived classes are tiny_dnn::sequential and tiny_dnn::graph.

Public Types

typedef std::vector< layerptr_t >::iterator iterator
 
typedef std::vector< layerptr_t >::const_iterator const_iterator
 

Public Member Functions

virtual void backward (const std::vector< tensor_t > &first)=0
 propagate gradient
 
virtual std::vector< tensor_t > forward (const std::vector< tensor_t > &first)=0
 
virtual void update_weights (optimizer *opt, int batch_size)
 update weights and clear all gradients
 
virtual void setup (bool reset_weight)
 setup all weights, must be called before forward/backward
 
void clear_grads ()
 
size_t size () const
 
iterator begin ()
 
iterator end ()
 
const_iterator begin () const
 
const_iterator end () const
 
layer * operator[] (size_t index)
 
const layer * operator[] (size_t index) const
 
serial_size_t in_data_size () const
 
serial_size_t out_data_size () const
 
template<typename T >
const T & at (size_t index) const
 
template<typename T >
T & at (size_t index)
 
virtual float_t target_value_min (int out_channel=0) const
 
virtual float_t target_value_max (int out_channel=0) const
 
void save (std::ostream &os) const
 
void load (std::istream &is)
 
virtual void load (const std::vector< float_t > &vec)
 
void label2vec (const label_t *t, serial_size_t num, std::vector< vec_t > *vec) const
 
template<typename OutputArchive >
void save_model (OutputArchive &oa) const
 
template<typename InputArchive >
void load_model (InputArchive &ia)
 
template<typename OutputArchive >
void save_weights (OutputArchive &oa) const
 
template<typename InputArchive >
void load_weights (InputArchive &ia)
 

Protected Member Functions

template<typename T >
void push_back (T &&node)
 
template<typename T >
void push_back (std::shared_ptr< T > node)
 
std::vector< tensor_t > reorder_for_layerwise_processing (const std::vector< tensor_t > &input)
 
template<typename T >
void push_back_impl (T &&node, std::true_type)
 
template<typename T >
void push_back_impl (T &&node, std::false_type)
 

Protected Attributes

std::vector< std::shared_ptr< layer > > own_nodes_
 
std::vector< layerptr_t > nodes_
 

Detailed Description

Base class for the various network types (sequential, multi-in/multi-out).

This class holds a list of pointers to Node objects and provides the entry point for forward / backward operations. A Node is a computational unit of tiny-dnn (for example, a convolution layer). Currently two implementations are available: sequential and graph.

nodes accepts a node as an lvalue, an rvalue, or a shared_ptr. If the given node is an rvalue or a shared_ptr, nodes stores a shared_ptr<node> to keep it alive. If it is an lvalue, tiny-dnn holds only a raw pointer (to avoid a double free), as the example below shows.

sequential s;
s.add(fc<tan_h>(100, 200));                   // rvalue, moved into nodes

s.add(std::make_shared<fc<tan_h>>(200, 100)); // shared_ptr, shared by nodes

fc<softmax> out(100, 10);
s.add(out);                                   // lvalue, hold raw-pointer only
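
The same entry points can be driven directly for a single training step. The sketch below is illustrative only: it assumes the usual tiny-dnn typedefs (tensor_t is a std::vector of vec_t), the tiny_dnn::adagrad optimizer, and that the fc<> alias and the activation classes are visible via the tiny_dnn::layers and tiny_dnn::activation namespaces; layer sizes and values are arbitrary.

#include <tiny_dnn/tiny_dnn.h>

using namespace tiny_dnn;
using namespace tiny_dnn::layers;      // assumed: fc<> alias
using namespace tiny_dnn::activation;  // assumed: tan_h / softmax

void train_step() {
    sequential net;
    net.add(fc<tan_h>(2, 8));
    net.add(fc<softmax>(8, 2));
    net.setup(true);                              // allocate and initialize all weights

    // mini-batch of one sample with a single input channel
    std::vector<tensor_t> in = { { vec_t{ 0.5f, -0.5f } } };
    std::vector<tensor_t> out = net.forward(in);

    // dE/dy: gradient of the cost w.r.t. the output, same shape as out
    std::vector<tensor_t> grad = { { vec_t{ 0.1f, -0.1f } } };
    net.backward(grad);

    adagrad opt;
    net.update_weights(&opt, 1);                  // batch_size = 1; also clears gradients
}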

Member Function Documentation

◆ backward()

virtual void tiny_dnn::nodes::backward ( const std::vector< tensor_t > &  first)
pure virtual

propagate gradient

Parameters
first: gradient of the cost function (dE/dy)
worker_index: id of worker-task

Implemented in tiny_dnn::graph, and tiny_dnn::sequential.
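
The shape of first mirrors the forward output: one tensor_t per sample in the mini-batch, one vec_t per output channel (this layout is an assumption based on typical usage, not stated by this reference). The hypothetical helper below, assuming a single output channel and a mean-squared-error cost E = 1/2 * ||y - t||^2 (so dE/dy = y - t), shows one way the gradient could be assembled before calling backward().

#include <cstddef>
#include <vector>
#include <tiny_dnn/tiny_dnn.h>

// Hypothetical helper, not part of tiny-dnn: builds dE/dy for an MSE cost
// and hands it to nodes::backward(). Assumes one output channel per sample.
void backward_mse(tiny_dnn::nodes &net,
                  const std::vector<tiny_dnn::tensor_t> &out,      // result of net.forward(...)
                  const std::vector<tiny_dnn::vec_t> &targets) {   // one target per sample
    std::vector<tiny_dnn::tensor_t> grads = out;                   // dE/dy has the same shape as out
    for (std::size_t sample = 0; sample < out.size(); ++sample) {
        for (std::size_t i = 0; i < out[sample][0].size(); ++i) {
            grads[sample][0][i] = out[sample][0][i] - targets[sample][i];
        }
    }
    net.backward(grads);
}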

◆ forward()

virtual std::vector<tensor_t> tiny_dnn::nodes::forward ( const std::vector< tensor_t > &  first)
pure virtual
Parameters
first: input data vectors
worker_index: id of worker-task

Implemented in tiny_dnn::graph, and tiny_dnn::sequential.
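
Custom network types implement these two pure virtual functions themselves; every other member of nodes listed above has a default implementation. The skeleton below is a sketch only (the class name and its pass-through behaviour are hypothetical); a real implementation traverses nodes_ the way tiny_dnn::sequential and tiny_dnn::graph do.

#include <tiny_dnn/tiny_dnn.h>

// Hypothetical skeleton of a nodes implementation. Only forward() and
// backward() are pure virtual in the base class.
class passthrough_nodes : public tiny_dnn::nodes {
public:
    // forward: consume the input data vectors and return the network output
    std::vector<tiny_dnn::tensor_t> forward(
            const std::vector<tiny_dnn::tensor_t> &first) override {
        return first;   // placeholder: a real implementation walks nodes_ front to back
    }

    // backward: receive dE/dy and propagate gradients towards the inputs
    void backward(const std::vector<tiny_dnn::tensor_t> &first) override {
        (void)first;    // placeholder: a real implementation walks nodes_ back to front
    }
};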


The documentation for this class was generated from the following file:
nodes.h