Classes

- class AdaBoost: AdaBoost (adaptive boosting).
- class AdaBoost_ECOC: AdaBoost.ECC with exponential cost and Hamming distance.
- class Aggregating: An abstract class for aggregating.
- class Bagging: Bagging (bootstrap aggregating).
- class Boosting: Boosting generates a linear combination of hypotheses.
- struct _boost_gd
- class Cascade: Aggregates hypotheses in a cascade (sequential) manner.
- class CGBoost: CGBoost (Conjugate Gradient Boosting).
- struct _boost_cg
- class DataFeeder: Feeds (randomly split) training and testing data.
- class dataset: Class template for storing, retrieving, and manipulating a vector of input-output style data.
- class FeedForwardNN
- class LearnModel: A unified interface for learning models.
- class LPBoost: LPBoost (Linear-Programming Boosting).
- struct MgnCost: Cost proxy used in MgnBoost.
- class MgnBoost: MgnBoost (margin-maximizing boosting).
- struct _mgn_gd
- class MultiClass_ECOC: Multiclass classification using error-correcting output codes.
- class NNLayer: A layer in a neural network.
- struct _search: Interface used in iterative optimization algorithms.
- struct _gradient_descent: Gradient descent.
- struct _gd_weightdecay: Gradient descent with weight decay.
- struct _gd_momentum: Gradient descent with momentum.
- struct _gd_adaptive
- struct _line_search
- struct _conjugate_gradient
- class Perceptron: A perceptron, invented by Frank Rosenblatt in 1957, models an artificial neural network consisting of a single neuron.
- class Pulse: Multi-transition pulse functions (step functions).
- class Stump: Decision stump.
- class SVM
Namespaces

- namespace cost
- namespace details
- namespace kernel
- namespace op: Operators used in optimization.
Typedefs

- typedef std::vector< std::vector< REAL > > WMAT
- typedef std::vector< DataWgt > JointWgt
- typedef const_shared_ptr< JointWgt > pJointWgt
- typedef var_shared_ptr< LearnModel > pLearnModel
- typedef std::vector< REAL > Input
- typedef std::vector< REAL > Output
- typedef dataset< Input, Output > DataSet
- typedef std::vector< REAL > DataWgt
- typedef const_shared_ptr< DataSet > pDataSet
- typedef const_shared_ptr< DataWgt > pDataWgt
- typedef std::vector< int > ECOC_VECTOR
- typedef std::vector< ECOC_VECTOR > ECOC_TABLE
- typedef std::map< REAL, REAL >::iterator MI
- typedef svm_node * p_svm_node
Enumerations

- enum ECOC_TYPE { ONE_VS_ONE, ONE_VS_ALL }
Functions

- DataSet * load_data (std::istream &, UINT, UINT, UINT): Load a data set from a stream.
- DataSet * load_data (std::istream &is, UINT n)
- template<class SEARCH> void iterative_optimize (SEARCH s): Main search routine.
- bool ldivide (RMAT &A, const RVEC &b, RVEC &x)
- void update_wgt (RVEC &wgt, const RVEC &dir, const RMAT &X, const RVEC &y)
- void dset_extract (const pDataSet &ptd, RMAT &X, RVEC &y)
- void dset_mult_wgt (const pDataWgt &ptw, RVEC &y)
- UINT randcdf (REAL r, const RVEC &cdf)
- bool ldivide (std::vector< std::vector< REAL > > &A, const std::vector< REAL > &b, std::vector< REAL > &x)
- p_svm_node fill_svm_node (const Input &x, struct svm_node *pool)

Variables

- const kernel::RBF _svm_ker (0.5)
Detailed Description

The idea is to separate the learning models from the optimization techniques. vectorop.h provides the default vector operations.
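To illustrate this separation, here is a minimal sketch of the search-object pattern: a routine like iterative_optimize drives any type exposing a _search-style stepping interface, so the model and the optimizer stay decoupled. The method names and the toy search type below are illustrative assumptions, not LEMGA's actual interface.

```cpp
#include <cassert>
#include <cmath>

// Stand-in for the main search routine: it drives any "search" type
// that exposes initialize/step/satisfied (names assumed for illustration).
template <class SEARCH>
void iterative_optimize (SEARCH s) {
    s.initialize();
    while (!s.satisfied())
        s.step();
}

// A toy gradient-descent search minimizing f(w) = (w - 3)^2.
struct toy_gd {
    double &w;                                   // parameter being optimized
    explicit toy_gd (double &w_) : w(w_) {}
    void initialize () { w = 0; }
    void step () { w -= 0.1 * 2 * (w - 3); }     // w <- w - lr * f'(w)
    bool satisfied () const { return std::fabs(w - 3) < 1e-6; }
};
```

Swapping in a different struct (momentum, weight decay, conjugate gradient) changes the optimization technique without touching the model's code.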
Typedef Documentation

- DataSet: Definition at line 26 of file learnmodel.h.
- DataWgt: Definition at line 27 of file learnmodel.h.
- ECOC_TABLE: Definition at line 21 of file multiclass_ecoc.h.
- ECOC_VECTOR: Definition at line 20 of file multiclass_ecoc.h.
- Input: Definition at line 23 of file learnmodel.h.
- JointWgt: Definition at line 16 of file adaboost_ecoc.h.
- Output: Definition at line 24 of file learnmodel.h.
- pDataSet: Definition at line 28 of file learnmodel.h.
- pDataWgt: Definition at line 29 of file learnmodel.h.
- pJointWgt: Definition at line 17 of file adaboost_ecoc.h.
- pLearnModel: Definition at line 17 of file aggregating.h.
- WMAT: Definition at line 27 of file adaboost_ecoc.cpp.

Enumeration Type Documentation

- ECOC_TYPE: Definition at line 22 of file multiclass_ecoc.h.
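As an illustration of the ECOC typedefs and the ONE_VS_ALL enumerator, a one-vs-all coding table can be built as below. This is a sketch using the typedef shapes from this page (with REAL taken as double is not needed here, since codewords are int); it is not the library's own construction code.

```cpp
#include <cassert>
#include <vector>

typedef std::vector<int> ECOC_VECTOR;          // one codeword (row of the table)
typedef std::vector<ECOC_VECTOR> ECOC_TABLE;   // codewords for all classes

// One-vs-all coding: class i is +1 in column i and -1 elsewhere, so
// binary classifier i separates class i from all remaining classes.
ECOC_TABLE one_vs_all (int n_class) {
    ECOC_TABLE t(n_class, ECOC_VECTOR(n_class, -1));
    for (int i = 0; i < n_class; ++i)
        t[i][i] = 1;
    return t;
}
```

Decoding then picks the class whose codeword is closest (e.g., in Hamming distance) to the vector of binary predictions.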
Function Documentation

- dset_extract: Definition at line 179 of file perceptron.cpp. Referenced by Perceptron::train().
- dset_mult_wgt: Definition at line 189 of file perceptron.cpp.
- fill_svm_node: Definition at line 92 of file svm.cpp. Referenced by SVM::kernel(), SVM::operator()(), and SVM::signed_margin().
- iterative_optimize: Main search routine. Definition at line 74 of file optimize.h. Referenced by MgnBoost::train(), FeedForwardNN::train(), CGBoost::train_gd(), and Boosting::train_gd().
- ldivide: Solves A x = b (i.e., computes inv(A) * b) when A is symmetric and positive-definite; only the upper triangular part of A is needed. Definition at line 120 of file perceptron.cpp. References Cholesky_decomp() and Cholesky_linsol().
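The Cholesky route that ldivide takes can be sketched as follows. This is a plain self-contained implementation for illustration (LEMGA's Cholesky_decomp()/Cholesky_linsol() are not reproduced here), using double in place of REAL.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

typedef std::vector< std::vector<double> > Mat;
typedef std::vector<double> Vec;

// Solve A x = b for symmetric positive-definite A via Cholesky:
// factor A = L L^T, then solve L y = b (forward) and L^T x = y (backward).
// Only the upper triangle of A is read. Returns false if A is not SPD.
bool cholesky_solve (const Mat &A, const Vec &b, Vec &x) {
    const int n = (int) b.size();
    Mat L(n, Vec(n, 0));
    for (int j = 0; j < n; ++j) {
        double d = A[j][j];
        for (int k = 0; k < j; ++k) d -= L[j][k] * L[j][k];
        if (d <= 0) return false;                // not positive-definite
        L[j][j] = std::sqrt(d);
        for (int i = j+1; i < n; ++i) {
            double s = A[j][i];                  // upper triangle of A
            for (int k = 0; k < j; ++k) s -= L[i][k] * L[j][k];
            L[i][j] = s / L[j][j];
        }
    }
    Vec y(n);
    for (int i = 0; i < n; ++i) {                // forward: L y = b
        double s = b[i];
        for (int k = 0; k < i; ++k) s -= L[i][k] * y[k];
        y[i] = s / L[i][i];
    }
    x.assign(n, 0);
    for (int i = n-1; i >= 0; --i) {             // backward: L^T x = y
        double s = y[i];
        for (int k = i+1; k < n; ++k) s -= L[k][i] * x[k];
        x[i] = s / L[i][i];
    }
    return true;
}
```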
- load_data (std::istream &is, UINT n): An easier-to-use version in which the output dimension is fixed at 1 and the input dimension is auto-detected. This version requires that each row of the stream be one sample. Definition at line 46 of file learnmodel.cpp. References dataset::append() and dataset::size().
- load_data (std::istream &, UINT, UINT, UINT): Load a data set from a stream. Each sample consists of first the input and then the output; numbers are separated by spaces. Definition at line 37 of file learnmodel.cpp. Referenced by DataFeeder::DataFeeder().
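The format described above (one sample per row, input values followed by output values, whitespace-separated) can be parsed with a sketch like this. The function name and containers are illustrative, not LEMGA's implementation, which returns a DataSet built via dataset::append().

```cpp
#include <cassert>
#include <sstream>
#include <utility>
#include <vector>

typedef std::vector<double> Input;
typedef std::vector<double> Output;

// Read up to n samples, each with in_dim input values followed by
// out_dim output values, separated by whitespace. Stops early on
// EOF or a parse error and returns the samples actually read.
std::vector< std::pair<Input, Output> >
read_samples (std::istream &is, unsigned n, unsigned in_dim, unsigned out_dim) {
    std::vector< std::pair<Input, Output> > data;
    for (unsigned i = 0; i < n; ++i) {
        Input x(in_dim); Output y(out_dim);
        for (unsigned j = 0; j < in_dim; ++j)  is >> x[j];
        for (unsigned j = 0; j < out_dim; ++j) is >> y[j];
        if (!is) break;
        data.push_back(std::make_pair(x, y));
    }
    return data;
}
```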
- randcdf: Definition at line 324 of file perceptron.cpp.
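Given its signature randcdf(REAL r, const RVEC &cdf), this maps a uniform random number r to an index through a cumulative distribution, which is the usual way boosting and bagging resample by example weight. A sketch of the standard inverse-CDF lookup follows; LEMGA's own implementation may differ (e.g., a linear scan), so treat this as an assumption.

```cpp
#include <cassert>
#include <vector>

// Return the smallest index i with r <= cdf[i], by binary search.
// cdf must be nondecreasing with cdf.back() == 1 (a cumulative
// distribution over indices); r is expected in (0, 1].
unsigned sample_cdf (double r, const std::vector<double> &cdf) {
    unsigned lo = 0, hi = (unsigned) cdf.size() - 1;
    while (lo < hi) {
        unsigned mid = (lo + hi) / 2;
        if (cdf[mid] < r) lo = mid + 1;
        else hi = mid;
    }
    return lo;
}
```

For example, with per-sample weights {0.2, 0.3, 0.5} the CDF is {0.2, 0.5, 1.0}, and r drawn uniformly from (0, 1] selects index i with probability equal to weight i.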
- update_wgt: Updates the weight vector wgt along the direction dir; if necessary, the whole of wgt is negated. Definition at line 131 of file perceptron.cpp. References DOTPROD.