[R] Fractional deviance for lambda in cv.glmnet for non-negative Lasso
Pierre Maho
Tue Nov 19 12:39:48 CET 2019
Hi,
I want to solve the following optimisation problem:
\hat{\beta} = \arg\min_{\beta \geq 0} \| y - A\beta \|_2^2 + \lambda \|\beta\|_1
For that, I am using the glmnet package (cv.glmnet to choose lambda by
cross-validation, and lower.limits = 0 to impose non-negativity).
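(As a side note, if I read the glmnet documentation correctly, for the Gaussian
family glmnet actually minimises the scaled objective

\frac{1}{2N} \| y - A\beta \|_2^2 + \lambda \|\beta\|_1

so its lambda differs from the one written above by a constant factor of 2N;
that scaling should not matter for the question below.)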
I would like to modify the fdev parameter (the minimum fractional change in
deviance for stopping the lambda path), which is set through glmnet.control().
This setting seems to have no effect when lower.limits = 0 (non-negative
coefficients) is specified.
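For what it is worth, the value itself does seem to be taken by glmnet.control:
calling it with no arguments returns the current settings, so the change can at
least be verified (quick check below):

library(glmnet)
glmnet.control(fdev = 1e-1)   # raise the stopping threshold for the lambda path
glmnet.control()$fdev         # should print 0.1, so the setting itself is registered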
Here is a minimal working example:
# MWE
library(glmnet)

# Generate data
P = 100                         # number of sensors (observations)
R = 10                          # number of sources (predictors)
beta = matrix(runif(R, 0, 1))
beta[1:7] = 0                   # the optimal solution is sparse
A = replicate(R, runif(P, 0, 1))
y = A %*% beta

# Reset all glmnet control parameters to their factory values (fdev = 1e-5)
glmnet.control(factory = TRUE)
# Now set the stopping criterion for the lambda path (fdev) to a bigger value, say 1e-1
glmnet.control(fdev = 1e-1)

# Without any constraint
cvfit = glmnet::cv.glmnet(A, y, type.measure = "mse", nfolds = 10,
                          intercept = TRUE, nlambda = 100, parallel = FALSE)
cvfit_fdev = diff(cvfit$glmnet.fit$dev.ratio) / cvfit$glmnet.fit$dev.ratio[-1]
print(cvfit_fdev)   # fdev = 0.1 is respected

# With the non-negativity constraint
cvfit = glmnet::cv.glmnet(A, y, type.measure = "mse", nfolds = 10,
                          intercept = TRUE, lower.limits = 0, nlambda = 100,
                          parallel = FALSE)
cvfit_fdev = diff(cvfit$glmnet.fit$dev.ratio) / cvfit$glmnet.fit$dev.ratio[-1]
print(cvfit_fdev)   # fdev = 0.1 is not respected
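In case it is useful, here is a workaround I have been sketching (untested
beyond this toy example, and assuming fdev really is ignored on the constrained
path): apply the rule by hand, i.e. fit the constrained path once, cut the
lambda sequence at the first step whose fractional change in deviance drops
below 1e-1, and pass the truncated sequence back to cv.glmnet through its
lambda argument.

# Sketch of a manual workaround (assumption: fdev is ignored when lower.limits = 0)
fit = glmnet::glmnet(A, y, intercept = TRUE, lower.limits = 0, nlambda = 100)
frac_dev = diff(fit$dev.ratio) / fit$dev.ratio[-1]   # same quantity as cvfit_fdev above
stop_idx = which(frac_dev < 1e-1)[1]                 # first step below the threshold (NA if none)
lambda_trunc = if (is.na(stop_idx)) fit$lambda else fit$lambda[seq_len(stop_idx)]
# Cross-validate over the truncated path only
cvfit_trunc = glmnet::cv.glmnet(A, y, type.measure = "mse", nfolds = 10,
                                intercept = TRUE, lower.limits = 0,
                                lambda = lambda_trunc)

This is only a stop-gap, though; I would still prefer fdev to be honoured directly.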
I would like to know whether this is a known bug (I could not find anything
about it on Google) or whether I am simply doing something wrong.
Thanks a lot