CRAN Package Check Results for Package mlexperiments

Last updated on 2025-12-24 21:50:22 CET.

Flavor                             Version  Tinstall   Tcheck   Ttotal  Status  Flags
r-devel-linux-x86_64-debian-clang  0.0.8       12.01   267.49   279.50  ERROR
r-devel-linux-x86_64-debian-gcc    0.0.8        8.07   201.86   209.93  ERROR
r-devel-linux-x86_64-fedora-clang  0.0.8       21.00   467.24   488.24  OK
r-devel-linux-x86_64-fedora-gcc    0.0.8       21.00   627.98   648.98  OK
r-devel-windows-x86_64             0.0.8       13.00   427.00   440.00  OK
r-patched-linux-x86_64             0.0.8       12.09   262.64   274.73  OK
r-release-linux-x86_64             0.0.8       11.88   281.95   293.83  OK
r-release-macos-arm64              0.0.8                                OK
r-release-macos-x86_64             0.0.8        7.00   362.00   369.00  OK
r-release-windows-x86_64           0.0.8       12.00   401.00   413.00  OK
r-oldrel-macos-arm64               0.0.8                                OK
r-oldrel-macos-x86_64              0.0.8        7.00   377.00   384.00  OK
r-oldrel-windows-x86_64            0.0.8       18.00   575.00   593.00  OK

Check Details

Version: 0.0.8
Check: examples
Result: ERROR
  Running examples in ‘mlexperiments-Ex.R’ failed
  The error most likely occurred in:

  > base::assign(".ptime", proc.time(), pos = "CheckExEnv")
  > ### Name: performance
  > ### Title: performance
  > ### Aliases: performance
  >
  > ### ** Examples
  >
  > dataset <- do.call(
  +   cbind,
  +   c(sapply(paste0("col", 1:6), function(x) {
  +     rnorm(n = 500)
  +   },
  +   USE.NAMES = TRUE,
  +   simplify = FALSE
  +   ),
  +   list(target = sample(0:1, 500, TRUE))
  + ))
  >
  > fold_list <- splitTools::create_folds(
  +   y = dataset[, 7],
  +   k = 3,
  +   type = "stratified",
  +   seed = 123
  + )
  >
  > glm_optimization <- mlexperiments::MLCrossValidation$new(
  +   learner = LearnerGlm$new(),
  +   fold_list = fold_list,
  +   seed = 123
  + )
  >
  > glm_optimization$learner_args <- list(family = binomial(link = "logit"))
  > glm_optimization$predict_args <- list(type = "response")
  > glm_optimization$performance_metric_args <- list(
  +   positive = "1",
  +   negative = "0"
  + )
  > glm_optimization$performance_metric <- list(
  +   auc = metric("AUC"), sensitivity = metric("TPR"),
  +   specificity = metric("TNR")
  + )
  > glm_optimization$return_models <- TRUE
  >
  > # set data
  > glm_optimization$set_data(
  +   x = data.matrix(dataset[, -7]),
  +   y = dataset[, 7]
  + )
  >
  > cv_results <- glm_optimization$execute()
  CV fold: Fold1
  Parameter 'ncores' is ignored for learner 'LearnerGlm'.
  CV fold: Fold2
  Parameter 'ncores' is ignored for learner 'LearnerGlm'.
  CV fold: Fold3
  Parameter 'ncores' is ignored for learner 'LearnerGlm'.
  >
  > # predictions
  > preds <- mlexperiments::predictions(
  +   object = glm_optimization,
  +   newdata = data.matrix(dataset[, -7]),
  +   na.rm = FALSE,
  +   ncores = 2L,
  +   type = "response"
  + )
  Error in `[.data.table`(res, , `:=`(mean = mean(as.numeric(.SD), na.rm = na.rm), :
    attempt access index 3/3 in VECTOR_ELT
  Calls: <Anonymous> -> [ -> [.data.table
  Execution halted
Flavors: r-devel-linux-x86_64-debian-clang, r-devel-linux-x86_64-debian-gcc
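The traceback shows `predictions()` computing row-wise `mean` and `sd` over the per-fold prediction columns via a grouped `:=` with `.SDcols = colnames(res)` and `by = seq_len(nrow(res))`; the `attempt access index 3/3 in VECTOR_ELT` message suggests an out-of-bounds column access under r-devel, plausibly because the assignment adds columns while `.SDcols` still refers to the original column set. As a sketch only (the table `res` and its fold column names below are hypothetical stand-ins, not the package's internal object), an equivalent row-wise summary that avoids grouping over `.SD` entirely:

```r
library(data.table)

# Hypothetical stand-in for the table that predictions() builds:
# one column of predicted values per CV fold, one row per observation.
res <- data.table(
  Fold1 = c(0.1, 0.8, 0.4),
  Fold2 = c(0.2, 0.7, 0.5),
  Fold3 = c(0.3, 0.9, 0.6)
)

# Snapshot the fold columns as a matrix BEFORE adding summary columns,
# so the summary never touches columns it is itself creating.
m <- as.matrix(res)
res[, mean := rowMeans(m, na.rm = TRUE)]
res[, sd := apply(m, 1L, stats::sd, na.rm = TRUE)]

print(res)
```

This variant needs no per-row grouping, so it sidesteps the code path the check trips over; whether it matches the package's intended semantics for `na.rm` and non-numeric columns is an assumption.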

Version: 0.0.8
Check: tests
Result: ERROR
  Running ‘testthat.R’ [182s/468s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > # Learn more about the roles of various files in:
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    >
    > Sys.setenv("OMP_THREAD_LIMIT" = 2)
    > Sys.setenv("Ncpu" = 2)
    >
    > library(testthat)
    > library(mlexperiments)
    >
    > test_check("mlexperiments")
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold4
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold5
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    CV fold: Fold4
    CV fold: Fold5
    Testing for identical folds in 2 and 1.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold4
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold5
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold4
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold5
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold4
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold5
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    Saving _problems/test-glm_predictions-79.R
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold4
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold5
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    Saving _problems/test-glm_predictions-188.R
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    Registering parallel backend using 2 cores.
    Running initial scoring function 11 times in 2 thread(s)... 25.821 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 0.966 seconds
    Noise could not be added to find unique parameter set. Stopping process and returning results so far.
    Registering parallel backend using 2 cores.
    Running initial scoring function 11 times in 2 thread(s)... 27.299 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 1.047 seconds
    Noise could not be added to find unique parameter set. Stopping process and returning results so far.
    Registering parallel backend using 2 cores.
    Running initial scoring function 4 times in 2 thread(s)... 12.549 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 1.107 seconds
      3) Running FUN 2 times in 2 thread(s)... 4.992 seconds
    CV fold: Fold1
    Registering parallel backend using 2 cores.
    Running initial scoring function 11 times in 2 thread(s)... 15.024 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 1.182 seconds
    Noise could not be added to find unique parameter set. Stopping process and returning results so far.
    CV fold: Fold2
    Registering parallel backend using 2 cores.
    Running initial scoring function 11 times in 2 thread(s)... 15.514 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 1.76 seconds
    Noise could not be added to find unique parameter set. Stopping process and returning results so far.
    CV fold: Fold3
    Registering parallel backend using 2 cores.
    Running initial scoring function 11 times in 2 thread(s)... 14.044 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 1.209 seconds
    Noise could not be added to find unique parameter set. Stopping process and returning results so far.
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 25.531 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 1.07 seconds
      3) Running FUN 2 times in 2 thread(s)... 3.963 seconds
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    CV fold: Fold1
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 12.95 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 1.055 seconds
      3) Running FUN 2 times in 2 thread(s)... 2.257 seconds
    CV fold: Fold2
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 13.459 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 1.165 seconds
      3) Running FUN 2 times in 2 thread(s)... 2.238 seconds
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 14.172 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 0.91 seconds
      3) Running FUN 2 times in 2 thread(s)... 2.592 seconds
    CV fold: Fold1
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    CV fold: Fold2
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    CV fold: Fold3
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 6.354 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 1.301 seconds
      3) Running FUN 2 times in 2 thread(s)... 0.677 seconds
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold1
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 5.334 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 0.997 seconds
      3) Running FUN 2 times in 2 thread(s)... 0.352 seconds
    CV fold: Fold2
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 4.992 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 1.189 seconds
      3) Running FUN 2 times in 2 thread(s)... 0.472 seconds
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 6.47 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 1.051 seconds
      3) Running FUN 2 times in 2 thread(s)... 0.559 seconds
    CV fold: Fold1
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold2
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold3
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.

    [ FAIL 2 | WARN 0 | SKIP 1 | PASS 68 ]

    ══ Skipped tests (1) ═══════════════════════════════════════════════════════════
    • On CRAN (1): 'test-lints.R:10:5'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Error ('test-glm_predictions.R:73:5'): test predictions, binary - glm ───────
    Error in ``[.data.table`(res, , `:=`(mean = mean(as.numeric(.SD), na.rm = na.rm),
        sd = stats::sd(as.numeric(.SD), na.rm = na.rm)), .SDcols = colnames(res),
        by = seq_len(nrow(res)))`: attempt access index 5/5 in VECTOR_ELT
    Backtrace:
        ▆
     1. └─mlexperiments::predictions(...) at test-glm_predictions.R:73:5
     2. ├─...[]
     3. └─data.table:::`[.data.table`(...)
    ── Error ('test-glm_predictions.R:182:5'): test predictions, regression - lm ───
    Error in ``[.data.table`(res, , `:=`(mean = mean(as.numeric(.SD), na.rm = na.rm),
        sd = stats::sd(as.numeric(.SD), na.rm = na.rm)), .SDcols = colnames(res),
        by = seq_len(nrow(res)))`: attempt access index 5/5 in VECTOR_ELT
    Backtrace:
        ▆
     1. └─mlexperiments::predictions(...) at test-glm_predictions.R:182:5
     2. ├─...[]
     3. └─data.table:::`[.data.table`(...)

    [ FAIL 2 | WARN 0 | SKIP 1 | PASS 68 ]
    Error: ! Test failures.
    Execution halted
Flavor: r-devel-linux-x86_64-debian-clang
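Both test failures point at the same grouped `:=` expression, so one possible workaround (a sketch under the assumption that only the row-wise mean/sd of the fold columns is needed; `res`, `row_id`, and the fold column names are illustrative, not the package's internals) is to reshape to long format and aggregate by row, avoiding assignment by reference inside per-row groups:

```r
library(data.table)

# Hypothetical five-fold prediction table, one row per observation.
res <- data.table(Fold1 = c(1, 4), Fold2 = c(2, 5), Fold3 = c(3, 6),
                  Fold4 = c(4, 7), Fold5 = c(5, 8))
res[, row_id := .I]

# Long format: one (row_id, fold, value) triple per prediction, then a
# plain grouped aggregation that only reads, never assigns by reference.
long <- melt(res, id.vars = "row_id", variable.name = "fold")
summary_dt <- long[, .(mean = mean(value, na.rm = TRUE),
                       sd = stats::sd(value, na.rm = TRUE)),
                   by = row_id]
print(summary_dt)
```

The result can be joined back onto the wide table on `row_id` if the original column layout must be preserved.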

Version: 0.0.8
Check: tests
Result: ERROR
  Running ‘testthat.R’ [135s/356s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > # Learn more about the roles of various files in:
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    >
    > Sys.setenv("OMP_THREAD_LIMIT" = 2)
    > Sys.setenv("Ncpu" = 2)
    >
    > library(testthat)
    > library(mlexperiments)
    >
    > test_check("mlexperiments")
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold4
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold5
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    CV fold: Fold4
    CV fold: Fold5
    Testing for identical folds in 2 and 1.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold4
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold5
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold4
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold5
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold4
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold5
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    Saving _problems/test-glm_predictions-79.R
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold4
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold5
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    Saving _problems/test-glm_predictions-188.R
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    Registering parallel backend using 2 cores.
    Running initial scoring function 11 times in 2 thread(s)... 19.442 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 0.583 seconds
    Noise could not be added to find unique parameter set. Stopping process and returning results so far.
    Registering parallel backend using 2 cores.
    Running initial scoring function 11 times in 2 thread(s)... 18.284 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 0.673 seconds
    Noise could not be added to find unique parameter set. Stopping process and returning results so far.
    Registering parallel backend using 2 cores.
    Running initial scoring function 4 times in 2 thread(s)... 9.394 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 0.789 seconds
      3) Running FUN 2 times in 2 thread(s)... 3.076 seconds
    CV fold: Fold1
    Registering parallel backend using 2 cores.
    Running initial scoring function 11 times in 2 thread(s)... 9.601 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 0.594 seconds
    Noise could not be added to find unique parameter set. Stopping process and returning results so far.
    CV fold: Fold2
    Registering parallel backend using 2 cores.
    Running initial scoring function 11 times in 2 thread(s)... 13.782 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 0.568 seconds
    Noise could not be added to find unique parameter set. Stopping process and returning results so far.
    CV fold: Fold3
    Registering parallel backend using 2 cores.
    Running initial scoring function 11 times in 2 thread(s)... 14.576 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 0.632 seconds
    Noise could not be added to find unique parameter set. Stopping process and returning results so far.
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 21.019 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 0.707 seconds
      3) Running FUN 2 times in 2 thread(s)... 3.28 seconds
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    CV fold: Fold1
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 10.137 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 0.694 seconds
      3) Running FUN 2 times in 2 thread(s)... 2.059 seconds
    CV fold: Fold2
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 11.439 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 0.482 seconds
      3) Running FUN 2 times in 2 thread(s)... 1.811 seconds
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 9.864 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 0.581 seconds
      3) Running FUN 2 times in 2 thread(s)... 1.657 seconds
    CV fold: Fold1
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    CV fold: Fold2
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    CV fold: Fold3
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 2.892 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 0.477 seconds
      3) Running FUN 2 times in 2 thread(s)... 0.452 seconds
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold1
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 3.513 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 0.479 seconds
      3) Running FUN 2 times in 2 thread(s)... 0.388 seconds
    CV fold: Fold2
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 3.146 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 0.59 seconds
      3) Running FUN 2 times in 2 thread(s)... 0.398 seconds
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 3.165 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search... 0.448 seconds
      3) Running FUN 2 times in 2 thread(s)... 0.369 seconds
    CV fold: Fold1
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold2
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold3
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.

    [ FAIL 2 | WARN 0 | SKIP 1 | PASS 68 ]

    ══ Skipped tests (1) ═══════════════════════════════════════════════════════════
    • On CRAN (1): 'test-lints.R:10:5'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Error ('test-glm_predictions.R:73:5'): test predictions, binary - glm ───────
    Error in ``[.data.table`(res, , `:=`(mean = mean(as.numeric(.SD), na.rm = na.rm),
        sd = stats::sd(as.numeric(.SD), na.rm = na.rm)), .SDcols = colnames(res),
        by = seq_len(nrow(res)))`: attempt access index 5/5 in VECTOR_ELT
    Backtrace:
        ▆
     1. └─mlexperiments::predictions(...) at test-glm_predictions.R:73:5
     2. ├─...[]
     3. └─data.table:::`[.data.table`(...)
    ── Error ('test-glm_predictions.R:182:5'): test predictions, regression - lm ───
    Error in ``[.data.table`(res, , `:=`(mean = mean(as.numeric(.SD), na.rm = na.rm),
        sd = stats::sd(as.numeric(.SD), na.rm = na.rm)), .SDcols = colnames(res),
        by = seq_len(nrow(res)))`: attempt access index 5/5 in VECTOR_ELT
    Backtrace:
        ▆
     1. └─mlexperiments::predictions(...) at test-glm_predictions.R:182:5
     2. ├─...[]
     3. └─data.table:::`[.data.table`(...)

    [ FAIL 2 | WARN 0 | SKIP 1 | PASS 68 ]
    Error: ! Test failures.
    Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc