Adding shrinkage (small step-sizes) is only possible with an adjustment in the form of a restart condition. For infinitesimal shrinkage this yields an algorithm lying between forward stagewise fitting (boosting) and least angle regression; in particular, it also performs variable selection.
While least angle regression appears to be restricted to linear regression, our approach generalizes easily, as boosting does, to arbitrary learners (fitting methods) such as trees or splines. The learner is used only to produce vectors of fitted values, which CDBoost then combines by a linear fit.
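As an illustrative sketch only (not the paper's CDBoost algorithm), the idea of using a learner solely to produce fitted-value vectors can be mimicked as follows: a regression stump is fitted repeatedly to residuals with a small shrinkage step, each fitted-value vector becomes a column of a design matrix, and the response is then fitted linearly on those columns. The stump learner, the shrinkage factor of 0.5, and the 10 rounds are all assumptions chosen for the demonstration.

```python
import numpy as np

def stump_fit(x, r):
    """Fit a one-split regression stump to residuals r; return its fitted values."""
    best_sse, best_fit = np.inf, np.full_like(r, r.mean())
    for s in np.unique(x):
        left = x <= s
        if left.all():            # no right-hand group: skip this split
            continue
        fit = np.where(left, r[left].mean(), r[~left].mean())
        sse = ((r - fit) ** 2).sum()
        if sse < best_sse:
            best_sse, best_fit = sse, fit
    return best_fit

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(200)

# Collect fitted-value vectors produced by the learner on successive residuals.
cols, resid = [], y.copy()
for _ in range(10):
    f = stump_fit(x, resid)
    cols.append(f)
    resid = resid - 0.5 * f       # shrinkage: take only a small step

# Linear (least-squares) fit of the response on the fitted-value vectors.
F = np.column_stack(cols)
beta, *_ = np.linalg.lstsq(F, y, rcond=None)
yhat = F @ beta
print("MSE:", float(((y - yhat) ** 2).mean()))
```

The point of the sketch is the interface: the base learner never needs to be linear itself; it only has to return a vector of fitted values, which is then treated as a column in an ordinary linear fit.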
The different methods are compared on simulated and real datasets. CDBoost achieves the best predictions mainly in complicated settings with correlated covariates, where it is difficult to attribute how much each covariate contributes to the response. The gain of CDBoost over boosting is especially large in low- and mid-noise problems.