gbmCMA {CMA}    R Documentation

Tree-based Gradient Boosting

Description
Roughly speaking, boosting combines 'weak learners' in a weighted manner into a stronger ensemble. This method calls the function gbm.fit from the package gbm. The 'weak learners' here are simple trees that need only very few splits (default: 1).

For S4 method information, see gbmCMA-methods.
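As a rough illustration of what the wrapper does, the sketch below fits boosted stumps on toy data by calling gbm.fit directly. The argument choices (distribution = "bernoulli", interaction.depth = 1) are assumptions made for illustration, not the verified internals of gbmCMA.

## Minimal sketch: boosted stumps via gbm.fit, roughly what gbmCMA wraps.
## The settings below are illustrative assumptions, not gbmCMA's internals.
library(gbm)
set.seed(1)
n <- 100; p <- 20
X <- as.data.frame(matrix(rnorm(n * p), nrow = n))  # toy "expression" data
y <- rbinom(n, size = 1, prob = 0.5)                # binary labels coded 0/1
fit <- gbm.fit(x = X, y = y,
               distribution = "bernoulli",  # binary classification
               interaction.depth = 1,       # stumps: a single split per tree
               n.trees = 100, verbose = FALSE)
## predicted class probabilities on the training data
head(predict(fit, newdata = X, n.trees = 100, type = "response"))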
Usage

gbmCMA(X, y, f, learnind, ...)

Arguments
X
    Gene expression data. Can be one of the following:
    - a matrix (rows correspond to observations, columns to variables),
    - a data.frame, when f is not missing,
    - an object of class ExpressionSet.

y
    Class labels. Can be one of the following:
    - a numeric vector,
    - a factor,
    - a character string naming the class-label variable, if X is an
      ExpressionSet.
    The class labels are re-coded to range from 0 to K-1, where K is the
    total number of different classes in the learning set.

f
    A two-sided formula, if X is a data.frame. The left part corresponds
    to the class labels, the right to the variables.

learnind
    An index vector specifying the observations that belong to the
    learning set. May be missing; in that case, the learning set consists
    of all observations and predictions are made on the learning set.

...
    Further arguments passed to the function gbm.fit from the package of
    the same name. Worth mentioning are n.trees (the number of boosting
    iterations, set to 500 in the example below), interaction.depth (the
    number of splits per tree, default 1), and shrinkage (the learning
    rate); see the sketch after this list.
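As a hedged example of how extra gbm.fit arguments travel through ..., the following call sets n.trees and shrinkage explicitly; the particular values are arbitrary choices for illustration, not recommendations.

## Hedged sketch: forwarding gbm.fit arguments through '...'.
## n.trees = 200 and shrinkage = 0.01 are arbitrary illustrative values.
library(CMA)
data(golub)                          # Golub AML/ALL data shipped with CMA
golubY <- golub[, 1]                 # class labels
golubX <- as.matrix(golub[, -1])     # gene expression matrix
set.seed(111)
learnind <- sample(length(golubY), size = floor(2/3 * length(golubY)))
res <- gbmCMA(X = golubX, y = golubY, learnind = learnind,
              n.trees = 200, shrinkage = 0.01)
show(res)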
Value

An object of class cloutput.
Note

Up to now, this method can only be applied to binary classification.
Author(s)

Martin Slawski martin.slawski@campus.lmu.de

Anne-Laure Boulesteix http://www.slcmsr.net/boulesteix
References

Ridgeway, G. (1999). The state of boosting. Computing Science and Statistics, 31:172-181.

Friedman, J. (2001). Greedy function approximation: a gradient boosting machine. Annals of Statistics, 29(5):1189-1232.
See Also

compBoostCMA, dldaCMA, ElasticNetCMA, fdaCMA, flexdaCMA, knnCMA, ldaCMA, LassoCMA, nnetCMA, pknnCMA, plrCMA, pls_ldaCMA, pls_lrCMA, pls_rfCMA, pnnCMA, qdaCMA, rfCMA, scdaCMA, shrinkldaCMA, svmCMA
Examples

### load Golub AML/ALL data
data(golub)
### extract class labels
golubY <- golub[, 1]
### extract gene expression
golubX <- as.matrix(golub[, -1])
### select learningset
ratio <- 2/3
set.seed(111)
learnind <- sample(length(golubY), size = floor(ratio * length(golubY)))
### run tree-based gradient boosting (no tuning)
gbmresult <- gbmCMA(X = golubX, y = golubY, learnind = learnind, n.trees = 500)
show(gbmresult)
ftable(gbmresult)
plot(gbmresult)
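Beyond a single train/test split, gbmCMA can also be plugged into CMA's resampling workflow (GenerateLearningsets, classification, evaluation). The sketch below is a hedged illustration; the MCCV settings (niter = 5, ntrain) are arbitrary choices, not recommendations.

## Hedged sketch: repeated-split evaluation of gbmCMA with CMA's workflow.
## niter and ntrain are arbitrary illustrative values.
library(CMA)
data(golub)
golubY <- golub[, 1]
golubX <- as.matrix(golub[, -1])
set.seed(111)
ls <- GenerateLearningsets(y = golubY, method = "MCCV",
                           niter = 5, ntrain = floor(2/3 * length(golubY)))
clres <- classification(X = golubX, y = golubY, learningsets = ls,
                        classifier = gbmCMA, n.trees = 500)
## misclassification rate across the five random splits
ev <- evaluation(clres, measure = "misclassification")
show(ev)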