ElasticNetCMA {CMA}                                      R Documentation
Description

Zou and Hastie (2005) proposed a combined L1/L2 penalty for regularization and
variable selection. The Elastic Net penalty encourages a grouping effect, where
strongly correlated predictors tend to enter or leave the model together.
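In symbols, the criterion being minimized is (a standard statement of the elastic net objective from Zou and Hastie; lambda_1 and lambda_2 denote the two shrinkage intensities, and ell the model log-likelihood):

```latex
\hat{\beta} = \operatorname*{argmin}_{\beta}\; -\ell(\beta; X, y)
  + \lambda_1 \lVert \beta \rVert_1
  + \lambda_2 \lVert \beta \rVert_2^2
```

In ElasticNetCMA, the L1 part of the penalty is controlled through the argument norm.fraction and the L2 part through lambda2.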
The computation is performed by the function glmpath from the package of the
same name.

The method can also be used for variable selection alone; see GeneSelection.

For S4 method information, see ElasticNetCMA-methods.
Usage

ElasticNetCMA(X, y, f, learnind, norm.fraction = 0.1, lambda2 = 1e-3, ...)
Arguments

X: Gene expression data. Can be one of the following:
   - A matrix. Rows correspond to observations, columns to variables.
   - A data.frame, when f is not missing.
   - An object of class ExpressionSet.

y: Class labels. Can be one of the following:
   - A numeric vector with values 0, ..., K-1, where K is the total number
     of different classes in the learning set.
   - A factor with K levels.
   - A character specifying the phenotype variable, if X is an ExpressionSet.

learnind: An index vector specifying the observations that belong to the
   learning set. May be missing; in that case, the learning set consists of
   all observations and predictions are made on the learning set.

f: A two-sided formula, if X is a data.frame. The left part corresponds to
   class labels, the right to variables.

norm.fraction: L1 shrinkage intensity, expressed as the fraction of the
   coefficient L1 norm relative to the maximum possible L1 norm (which
   corresponds to fraction = 1). Lower values correspond to stronger
   shrinkage. Note that the default (0.1) need not produce good results,
   i.e. tuning of this parameter is recommended.

lambda2: L2 shrinkage intensity, a positive real number. The default (0.001)
   need not produce good results either.

...: Further arguments passed to the function glmpath from the package of
   the same name.
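The tuning recommended for norm.fraction above can be done with CMA's tune function. The sketch below assumes the usual CMA workflow (GenerateLearningsets followed by tune); the cross-validation settings and the candidate grid are illustrative choices, not package defaults.

```r
library(CMA)

### load Golub AML/ALL data and split into labels and expression matrix
data(golub)
golubY <- golub[,1]
golubX <- as.matrix(golub[,-1])

### stratified five-fold cross-validation learning sets (illustrative)
set.seed(111)
ls <- GenerateLearningsets(y = golubY, method = "CV", fold = 5, strat = TRUE)

### tune norm.fraction over a small illustrative candidate grid
tuneres <- tune(X = golubX, y = golubY, learningsets = ls,
                classifier = ElasticNetCMA,
                grids = list(norm.fraction = c(0.05, 0.1, 0.2, 0.3)))
show(tuneres)
```

The best value per learning set can then be passed on via the tuneres argument of the CMA classification routines instead of fixing norm.fraction by hand.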
Value

An object of class clvarseloutput.
Note

For a strongly related method, see LassoCMA.

Up to now, this method can only be applied to binary classification.
Author(s)

Martin Slawski, martin.slawski@campus.lmu.de

Anne-Laure Boulesteix, http://www.slcmsr.net/boulesteix
References

Zou, H., Hastie, T. (2005).
Regularization and variable selection via the elastic net.
Journal of the Royal Statistical Society B, 67(2), 301-320.

Park, M. Y., Hastie, T. (2007).
L1-regularization path algorithm for generalized linear models.
Journal of the Royal Statistical Society B, 69(4), 659-677.
See Also

compBoostCMA, dldaCMA, fdaCMA, flexdaCMA, gbmCMA, knnCMA, ldaCMA, LassoCMA,
nnetCMA, pknnCMA, plrCMA, pls_ldaCMA, pls_lrCMA, pls_rfCMA, pnnCMA, qdaCMA,
rfCMA, scdaCMA, shrinkldaCMA, svmCMA
Examples

### load Golub AML/ALL data
data(golub)
### extract class labels
golubY <- golub[,1]
### extract gene expression
golubX <- as.matrix(golub[,-1])
### select learningset
ratio <- 2/3
set.seed(111)
learnind <- sample(length(golubY), size = floor(ratio*length(golubY)))
### run ElasticNet - penalized logistic regression (no tuning)
result <- ElasticNetCMA(X = golubX, y = golubY, learnind = learnind,
                        norm.fraction = 0.2, lambda2 = 0.01)
show(result)
ftable(result)
plot(result)
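As noted in the description, the same penalty can be used for variable selection alone through GeneSelection. The sketch below assumes GeneSelection accepts method = "elasticnet" together with a learningsets object; the Monte Carlo cross-validation settings are illustrative, not defaults.

```r
library(CMA)

### load Golub AML/ALL data and split into labels and expression matrix
data(golub)
golubY <- golub[,1]
golubX <- as.matrix(golub[,-1])

### Monte Carlo cross-validation learning sets (illustrative settings)
set.seed(111)
ls <- GenerateLearningsets(y = golubY, method = "MCCV", niter = 5,
                           ntrain = floor(2/3 * length(golubY)))

### elastic-net-based variable selection only, no classification step
gsel <- GeneSelection(X = golubX, y = golubY, learningsets = ls,
                      method = "elasticnet")
show(gsel)
toplist(gsel)
```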