CRAN Package Check Results for Package utiml

Last updated on 2019-12-15 05:47:58 CET.

Flavor                             Version  Tinstall  Tcheck  Ttotal  Status  Flags
r-devel-linux-x86_64-debian-clang    0.1.5     10.27  127.69  137.96  ERROR
r-devel-linux-x86_64-debian-gcc      0.1.5      8.47   94.50  102.97  OK
r-devel-linux-x86_64-fedora-clang    0.1.5                    164.73  OK
r-devel-linux-x86_64-fedora-gcc      0.1.5                    155.08  OK
r-devel-windows-ix86+x86_64          0.1.5     19.00  195.00  214.00  OK
r-devel-windows-ix86+x86_64-gcc8     0.1.5     24.00  192.00  216.00  OK
r-patched-linux-x86_64               0.1.5      8.29  110.84  119.13  OK
r-patched-solaris-x86                0.1.5                    184.90  NOTE
r-release-linux-x86_64               0.1.5      9.13  111.93  121.06  OK
r-release-windows-ix86+x86_64        0.1.5     18.00  142.00  160.00  OK
r-release-osx-x86_64                 0.1.5                            OK
r-oldrel-windows-ix86+x86_64         0.1.5     15.00  131.00  146.00  OK
r-oldrel-osx-x86_64                  0.1.5                            OK

Check Details

Version: 0.1.5
Check: examples
Result: ERROR
    Running examples in 'utiml-Ex.R' failed
    The error most likely occurred in:
    
    > base::assign(".ptime", proc.time(), pos = "CheckExEnv")
    > ### Name: ns
    > ### Title: Nested Stacking for multi-label Classification
    > ### Aliases: ns
    >
    > ### ** Examples
    >
    > model <- ns(toyml, "RANDOM")
    > pred <- predict(model, toyml)
     ----------- FAILURE REPORT --------------
     --- failure: the condition has length > 1 ---
     --- srcref ---
    :
     --- package (from environment) ---
    utiml
     --- call from context ---
    subset_correction(utiml_predict(predictions[object$labels], probability),
     object$labelsets, probability)
     --- call from argument ---
    if (class(train_y) == "mldr") {
        train_y <- train_y$dataset[train_y$labels$index]
    }
     --- R stacktrace ---
    where 1: subset_correction(utiml_predict(predictions[object$labels], probability),
     object$labelsets, probability)
    where 2: predict.NSmodel(model, toyml)
    where 3: predict(model, toyml)
    
     --- value of length: 2 type: logical ---
    [1] FALSE FALSE
     --- function from context ---
    function (mlresult, train_y, probability = FALSE)
    {
        bip <- as.bipartition(mlresult)
        prob <- as.probability(mlresult)
        if (class(train_y) == "mldr") {
            train_y <- train_y$dataset[train_y$labels$index]
        }
        if (ncol(mlresult) != ncol(train_y)) {
            stop("The number of columns in the predicted result are different from the\n training data")
        }
        labelsets <- as.matrix(unique(train_y))
        rownames(labelsets) <- apply(labelsets, 1, paste, collapse = "")
        order <- names(sort(table(apply(train_y, 1, paste, collapse = "")),
            decreasing = TRUE))
        labelsets <- labelsets[order, ]
        new.pred <- t(apply(bip, 1, function(y) {
            labelsets[names(which.min(apply(labelsets, 1, function(row) {
                sum(row != y)
            }))), ]
        }))
        multilabel_prediction(new.pred, prob, probability)
    }
    <bytecode: 0x6ebe028>
    <environment: namespace:utiml>
     --- function search by body ---
    Function subset_correction in namespace utiml has this body.
     ----------- END OF FAILURE REPORT --------------
    Error in if (class(train_y) == "mldr") { : the condition has length > 1
    Calls: predict -> predict.NSmodel -> subset_correction
    Execution halted
Flavor: r-devel-linux-x86_64-debian-clang
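
All of the errors on this flavor share one root cause: the r-devel checks are run with _R_CHECK_LENGTH_1_CONDITION_ set to abort, so an if() condition of length greater than one is a hard error rather than a warning (this becomes the default in R 4.0.0). In addition, class() on a matrix returns the length-two vector c("matrix", "array") in r-devel, so a comparison such as class(train_y) == "mldr" evaluates to c(FALSE, FALSE), which is exactly the length-two logical value printed in the report. A minimal sketch of the usual repair, testing class membership with inherits() instead of comparing class() directly (the matrix m below is purely illustrative, not data from the package):

    # Sketch only, not code from utiml: why the flagged condition fails
    # under r-devel, and the inherits() test that avoids it.
    m <- matrix(0, nrow = 2, ncol = 2)
    class(m)               # c("matrix", "array") in r-devel / R >= 4.0.0
    class(m) == "mldr"     # c(FALSE, FALSE): length 2, an error inside if()
    inherits(m, "mldr")    # FALSE, always length 1
    inherits(m, "matrix")  # TRUE, always length 1

    # The guard flagged above would then read:
    # if (inherits(train_y, "mldr")) {
    #     train_y <- train_y$dataset[train_y$labels$index]
    # }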

Version: 0.1.5
Check: tests
Result: ERROR
     Running 'testthat.R' [32s/34s]
    Running the tests in 'tests/testthat.R' failed.
    Complete output:
     > library(testthat)
     > library(utiml)
     Loading required package: mldr
     Loading required package: parallel
     Loading required package: ROCR
     Loading required package: gplots
    
     Attaching package: 'gplots'
    
     The following object is masked from 'package:stats':
    
     lowess
    
     >
     > test_check("utiml")
     ----------- FAILURE REPORT --------------
     --- failure: the condition has length > 1 ---
     --- srcref ---
     :
     --- package (from environment) ---
     utiml
     --- call from context ---
     subset_correction(utiml_predict(predictions[object$labels], probability),
     object$labelsets, probability)
     --- call from argument ---
     if (class(train_y) == "mldr") {
         train_y <- train_y$dataset[train_y$labels$index]
     }
     --- R stacktrace ---
     where 1: subset_correction(utiml_predict(predictions[object$labels], probability),
     object$labelsets, probability)
     where 2: predict.NSmodel(model, test)
     where 3 at testthat/test_brclassifiers.R#8: predict(model, test)
     where 4 at testthat/test_brclassifiers.R#29: predictionTest(model)
     where 5 at testthat/test_brclassifiers.R#187: baseTest(model, "NSmodel")
     where 6: eval(code, test_env)
     where 7: eval(code, test_env)
     where 8: withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error)
     where 9: doTryCatch(return(expr), name, parentenv, handler)
     where 10: tryCatchOne(expr, names, parentenv, handlers[[1L]])
     where 11: tryCatchList(expr, names[-nh], parentenv, handlers[-nh])
     where 12: doTryCatch(return(expr), name, parentenv, handler)
     where 13: tryCatchOne(tryCatchList(expr, names[-nh], parentenv, handlers[-nh]),
     names[nh], parentenv, handlers[[nh]])
     where 14: tryCatchList(expr, classes, parentenv, handlers)
     where 15: tryCatch(withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error), error = handle_fatal,
     skip = function(e) {
     })
     where 16: test_code(desc, code, env = parent.frame())
     where 17 at testthat/test_brclassifiers.R#185: test_that("Nestest Stack", {
         model <- ns(train, "RANDOM")
         pred <- baseTest(model, "NSmodel")
         mpred <- as.matrix(pred)
         suppressWarnings(RNGversion("3.5.0"))
         set.seed(123)
         pred1 <- predict(model, test, prob = FALSE)
         expect_is(pred1, "mlresult")
         expect_equal(as.matrix(pred1), attr(pred, "classes"))
         expect_equal(as.matrix(pred), attr(pred1, "probs"))
         new.chain <- c("y5", "y4", "y3", "y2", "y1")
         model2 <- ns(train, "RANDOM", new.chain)
         expect_equal(model2$chain, new.chain)
         suppressWarnings(RNGversion("3.5.0"))
         set.seed(123)
         pred2 <- predict(model2, test)
         expect_equal(colnames(pred2), rownames(train$labels))
         suppressWarnings(RNGversion("3.5.0"))
         set.seed(123)
         pred3 <- predict(model2, test)
         expect_false(isTRUE(all.equal(pred3, pred1)))
         expect_equal(pred3, pred2)
         expect_error(ns(train, "RANDOM", chain = c("a", "b", "c",
             "d", "e")))
         expect_error(ns(train, "RANDOM", chain = c(new.chain, "extra")))
     })
     where 18: eval(code, test_env)
     where 19: eval(code, test_env)
     where 20: withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error)
     where 21: doTryCatch(return(expr), name, parentenv, handler)
     where 22: tryCatchOne(expr, names, parentenv, handlers[[1L]])
     where 23: tryCatchList(expr, names[-nh], parentenv, handlers[-nh])
     where 24: doTryCatch(return(expr), name, parentenv, handler)
     where 25: tryCatchOne(tryCatchList(expr, names[-nh], parentenv, handlers[-nh]),
     names[nh], parentenv, handlers[[nh]])
     where 26: tryCatchList(expr, classes, parentenv, handlers)
     where 27: tryCatch(withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error), error = handle_fatal,
     skip = function(e) {
     })
     where 28: test_code(NULL, exprs, env)
     where 29: source_file(path, new.env(parent = env), chdir = TRUE, wrap = wrap)
     where 30: force(code)
     where 31: doWithOneRestart(return(expr), restart)
     where 32: withOneRestart(expr, restarts[[1L]])
     where 33: withRestarts(testthat_abort_reporter = function() NULL, force(code))
     where 34: with_reporter(reporter = reporter, start_end_reporter = start_end_reporter,
     {
     reporter$start_file(basename(path))
     lister$start_file(basename(path))
     source_file(path, new.env(parent = env), chdir = TRUE,
     wrap = wrap)
     reporter$.end_context()
     reporter$end_file()
     })
     where 35: FUN(X[[i]], ...)
     where 36: lapply(paths, test_file, env = env, reporter = current_reporter,
     start_end_reporter = FALSE, load_helpers = FALSE, wrap = wrap)
     where 37: force(code)
     where 38: doWithOneRestart(return(expr), restart)
     where 39: withOneRestart(expr, restarts[[1L]])
     where 40: withRestarts(testthat_abort_reporter = function() NULL, force(code))
     where 41: with_reporter(reporter = current_reporter, results <- lapply(paths,
     test_file, env = env, reporter = current_reporter, start_end_reporter = FALSE,
     load_helpers = FALSE, wrap = wrap))
     where 42: test_files(paths, reporter = reporter, env = env, stop_on_failure = stop_on_failure,
     stop_on_warning = stop_on_warning, wrap = wrap)
     where 43: test_dir(path = test_path, reporter = reporter, env = env, filter = filter,
     ..., stop_on_failure = stop_on_failure, stop_on_warning = stop_on_warning,
     wrap = wrap)
     where 44: test_package_dir(package = package, test_path = test_path, filter = filter,
     reporter = reporter, ..., stop_on_failure = stop_on_failure,
     stop_on_warning = stop_on_warning, wrap = wrap)
     where 45: test_check("utiml")
    
     --- value of length: 2 type: logical ---
     [1] FALSE FALSE
     --- function from context ---
     function (mlresult, train_y, probability = FALSE)
     {
         bip <- as.bipartition(mlresult)
         prob <- as.probability(mlresult)
         if (class(train_y) == "mldr") {
             train_y <- train_y$dataset[train_y$labels$index]
         }
         if (ncol(mlresult) != ncol(train_y)) {
             stop("The number of columns in the predicted result are different from the\n training data")
         }
         labelsets <- as.matrix(unique(train_y))
         rownames(labelsets) <- apply(labelsets, 1, paste, collapse = "")
         order <- names(sort(table(apply(train_y, 1, paste, collapse = "")),
             decreasing = TRUE))
         labelsets <- labelsets[order, ]
         new.pred <- t(apply(bip, 1, function(y) {
             labelsets[names(which.min(apply(labelsets, 1, function(row) {
                 sum(row != y)
             }))), ]
         }))
         multilabel_prediction(new.pred, prob, probability)
     }
     <bytecode: 0x75ca9a0>
     <environment: namespace:utiml>
     --- function search by body ---
     Function subset_correction in namespace utiml has this body.
     ----------- END OF FAILURE REPORT --------------
     -- 1. Error: Nestest Stack (@test_brclassifiers.R#187) ------------------------
     the condition has length > 1
     Backtrace:
     1. utiml:::baseTest(model, "NSmodel")
     2. utiml:::predictionTest(model)
     4. utiml:::predict.NSmodel(model, test)
     5. utiml::subset_correction(...)
    
     ----------- FAILURE REPORT --------------
     --- failure: the condition has length > 1 ---
     --- srcref ---
     :
     --- package (from environment) ---
     utiml
     --- call from context ---
     multilabel_confusion_matrix(mdata, mlresult)
     --- call from argument ---
     if (class(mlresult) != "mlresult") {
         mlresult <- as.mlresult(mlresult)
     }
     --- R stacktrace ---
     where 1: multilabel_confusion_matrix(mdata, mlresult)
     where 2: multilabel_evaluate.mldr(parts$test, labels, "clp")
     where 3: multilabel_evaluate(parts$test, labels, "clp")
     where 4: eval_bare(expr, quo_get_env(quo))
     where 5: quasi_label(enquo(expected), expected.label, arg = "expected")
     where 6 at testthat/test_evaluation.R#258: expect_equal(c(clp = 0), multilabel_evaluate(parts$test, labels,
     "clp"))
     where 7: eval(code, test_env)
     where 8: eval(code, test_env)
     where 9: withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error)
     where 10: doTryCatch(return(expr), name, parentenv, handler)
     where 11: tryCatchOne(expr, names, parentenv, handlers[[1L]])
     where 12: tryCatchList(expr, names[-nh], parentenv, handlers[-nh])
     where 13: doTryCatch(return(expr), name, parentenv, handler)
     where 14: tryCatchOne(tryCatchList(expr, names[-nh], parentenv, handlers[-nh]),
     names[nh], parentenv, handlers[[nh]])
     where 15: tryCatchList(expr, classes, parentenv, handlers)
     where 16: tryCatch(withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error), error = handle_fatal,
     skip = function(e) {
     })
     where 17: test_code(desc, code, env = parent.frame())
     where 18 at testthat/test_evaluation.R#253: test_that("Label-problem measures", {
         labels <- as.matrix(parts$test$dataset[, parts$test$labels$index])
         expected <- parts$test$dataset[, parts$test$labels$index]
         expect_equal(c(clp = 0), multilabel_evaluate(parts$test,
             labels, "clp"))
         lapply(seq(ncol(labels)), function(i) {
             labels[, seq(i)] <- 1
             expect_equal(c(clp = i/ncol(labels)), multilabel_evaluate(parts$test,
                 labels, "clp"))
         })
         options(utiml.empty.prediction = TRUE)
         expect_equal(c(mlp = 0), multilabel_evaluate(parts$test,
             labels, "mlp"))
         expect_equal(c(wlp = 0), multilabel_evaluate(parts$test,
             labels, "wlp"))
         lapply(seq(ncol(labels)), function(i) {
             labels[, seq(i)] <- 0
             expect_equal(c(mlp = i/ncol(labels)), multilabel_evaluate(parts$test,
                 labels, "mlp"))
             expect_equal(c(wlp = i/ncol(labels)), multilabel_evaluate(parts$test,
                 labels, "wlp"))
         })
         lapply(seq(ncol(labels)), function(i) {
             for (j in seq(i)) {
                 labels[, j] <- ifelse(expected[, j] == 1, 0, 1)
             }
             expect_equal(c(mlp = 0), multilabel_evaluate(parts$test,
                 labels, "mlp"))
             expect_equal(c(wlp = i/ncol(labels)), multilabel_evaluate(parts$test,
                 labels, "wlp"))
         })
         options(utiml.empty.prediction = FALSE)
     })
     where 19: eval(code, test_env)
     where 20: eval(code, test_env)
     where 21: withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error)
     where 22: doTryCatch(return(expr), name, parentenv, handler)
     where 23: tryCatchOne(expr, names, parentenv, handlers[[1L]])
     where 24: tryCatchList(expr, names[-nh], parentenv, handlers[-nh])
     where 25: doTryCatch(return(expr), name, parentenv, handler)
     where 26: tryCatchOne(tryCatchList(expr, names[-nh], parentenv, handlers[-nh]),
     names[nh], parentenv, handlers[[nh]])
     where 27: tryCatchList(expr, classes, parentenv, handlers)
     where 28: tryCatch(withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error), error = handle_fatal,
     skip = function(e) {
     })
     where 29: test_code(NULL, exprs, env)
     where 30: source_file(path, new.env(parent = env), chdir = TRUE, wrap = wrap)
     where 31: force(code)
     where 32: doWithOneRestart(return(expr), restart)
     where 33: withOneRestart(expr, restarts[[1L]])
     where 34: withRestarts(testthat_abort_reporter = function() NULL, force(code))
     where 35: with_reporter(reporter = reporter, start_end_reporter = start_end_reporter,
     {
     reporter$start_file(basename(path))
     lister$start_file(basename(path))
     source_file(path, new.env(parent = env), chdir = TRUE,
     wrap = wrap)
     reporter$.end_context()
     reporter$end_file()
     })
     where 36: FUN(X[[i]], ...)
     where 37: lapply(paths, test_file, env = env, reporter = current_reporter,
     start_end_reporter = FALSE, load_helpers = FALSE, wrap = wrap)
     where 38: force(code)
     where 39: doWithOneRestart(return(expr), restart)
     where 40: withOneRestart(expr, restarts[[1L]])
     where 41: withRestarts(testthat_abort_reporter = function() NULL, force(code))
     where 42: with_reporter(reporter = current_reporter, results <- lapply(paths,
     test_file, env = env, reporter = current_reporter, start_end_reporter = FALSE,
     load_helpers = FALSE, wrap = wrap))
     where 43: test_files(paths, reporter = reporter, env = env, stop_on_failure = stop_on_failure,
     stop_on_warning = stop_on_warning, wrap = wrap)
     where 44: test_dir(path = test_path, reporter = reporter, env = env, filter = filter,
     ..., stop_on_failure = stop_on_failure, stop_on_warning = stop_on_warning,
     wrap = wrap)
     where 45: test_package_dir(package = package, test_path = test_path, filter = filter,
     reporter = reporter, ..., stop_on_failure = stop_on_failure,
     stop_on_warning = stop_on_warning, wrap = wrap)
     where 46: test_check("utiml")
    
     --- value of length: 2 type: logical ---
     [1] TRUE TRUE
     --- function from context ---
     function (mdata, mlresult)
     {
         mdim <- c(mdata$measures$num.instances, mdata$measures$num.labels)
         if (any(mdim != dim(mlresult))) {
             stop("Wrong dimension between the real and expected data")
         }
         if (class(mlresult) != "mlresult") {
             mlresult <- as.mlresult(mlresult)
         }
         expected <- mdata$dataset[, mdata$labels$index]
         bipartition <- as.bipartition(mlresult)
         scores <- as.probability(mlresult)
         ranking <- t(apply(1 - scores, 1, rank, ties.method = "first"))
         predict_and_expected <- expected & bipartition
         predict_and_nexpected <- !expected & bipartition
         npredict_and_nexpected <- !expected & !bipartition
         npredict_and_expected <- expected & !bipartition
         cm <- list(Z = bipartition, Y = expected, Fx = scores, R = ranking,
             TP = predict_and_expected, FP = predict_and_nexpected,
             TN = npredict_and_nexpected, FN = npredict_and_expected,
             Zi = rowSums(bipartition), Yi = rowSums(expected), Zl = colSums(bipartition),
             Yl = colSums(expected), TPi = rowSums(predict_and_expected),
             FPi = rowSums(predict_and_nexpected), TNi = rowSums(npredict_and_nexpected),
             FNi = rowSums(npredict_and_expected), TPl = colSums(predict_and_expected),
             FPl = colSums(predict_and_nexpected), TNl = colSums(npredict_and_nexpected),
             FNl = colSums(npredict_and_expected))
         class(cm) <- "mlconfmat"
         cm
     }
     <bytecode: 0x3cce8e8>
     <environment: namespace:utiml>
     --- function search by body ---
     Function multilabel_confusion_matrix in namespace utiml has this body.
     ----------- END OF FAILURE REPORT --------------
     -- 2. Error: Label-problem measures (@test_evaluation.R#258) ------------------
     the condition has length > 1
     Backtrace:
     1. testthat::expect_equal(...)
     5. utiml:::multilabel_evaluate.mldr(parts$test, labels, "clp")
     6. utiml::multilabel_confusion_matrix(mdata, mlresult)
    
     ----------- FAILURE REPORT --------------
     --- failure: the condition has length > 1 ---
     --- srcref ---
     :
     --- package (from environment) ---
     utiml
     --- call from context ---
     multilabel_confusion_matrix(mdata, mlresult)
     --- call from argument ---
     if (class(mlresult) != "mlresult") {
         mlresult <- as.mlresult(mlresult)
     }
     --- R stacktrace ---
     where 1: multilabel_confusion_matrix(mdata, mlresult)
     where 2: multilabel_evaluate.mldr(parts$test, as.matrix(result))
     where 3: multilabel_evaluate(parts$test, as.matrix(result))
     where 4: eval_bare(expr, quo_get_env(quo))
     where 5: quasi_label(enquo(object), label, arg = "object")
     where 6 at testthat/test_evaluation.R#330: expect_equal(multilabel_evaluate(parts$test, as.matrix(result)),
     multilabel_evaluate(parts$test, as.matrix(result)))
     where 7: eval(code, test_env)
     where 8: eval(code, test_env)
     where 9: withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error)
     where 10: doTryCatch(return(expr), name, parentenv, handler)
     where 11: tryCatchOne(expr, names, parentenv, handlers[[1L]])
     where 12: tryCatchList(expr, names[-nh], parentenv, handlers[-nh])
     where 13: doTryCatch(return(expr), name, parentenv, handler)
     where 14: tryCatchOne(tryCatchList(expr, names[-nh], parentenv, handlers[-nh]),
     names[nh], parentenv, handlers[[nh]])
     where 15: tryCatchList(expr, classes, parentenv, handlers)
     where 16: tryCatch(withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error), error = handle_fatal,
     skip = function(e) {
     })
     where 17: test_code(desc, code, env = parent.frame())
     where 18 at testthat/test_evaluation.R#314: test_that("Evaluate", {
         expect_equal(length(multilabel_evaluate(parts$test, result,
             "accuracy")), 1)
         measures <- multilabel_evaluate(parts$test, result, "example-based")
         expect_equal(length(measures), 6)
         expect_true(all(measures >= 0 & measures <= 1))
         expect_named(measures, utiml_measure_names("example-based"))
         mlconfmat <- multilabel_confusion_matrix(parts$test, result)
         expect_equal(measures, multilabel_evaluate(mlconfmat, "example-based"))
         measures <- multilabel_evaluate(mlconfmat, c("hamming-loss",
             "macro-accuracy", "micro-accuracy"))
         expect_equal(length(measures), 3)
         expect_true(measures["macro-accuracy"] == measures["micro-accuracy"])
         expect_true(measures["macro-accuracy"] + measures["hamming-loss"] ==
             1)
         expect_equal(multilabel_evaluate(parts$test, as.matrix(result)),
             multilabel_evaluate(parts$test, as.matrix(result)))
         expect_error(multilabel_evaluate(parts$test))
         expect_error(multilabel_evaluate(parts$test, result, "mymeasure"))
     })
     where 19: eval(code, test_env)
     where 20: eval(code, test_env)
     where 21: withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error)
     where 22: doTryCatch(return(expr), name, parentenv, handler)
     where 23: tryCatchOne(expr, names, parentenv, handlers[[1L]])
     where 24: tryCatchList(expr, names[-nh], parentenv, handlers[-nh])
     where 25: doTryCatch(return(expr), name, parentenv, handler)
     where 26: tryCatchOne(tryCatchList(expr, names[-nh], parentenv, handlers[-nh]),
     names[nh], parentenv, handlers[[nh]])
     where 27: tryCatchList(expr, classes, parentenv, handlers)
     where 28: tryCatch(withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error), error = handle_fatal,
     skip = function(e) {
     })
     where 29: test_code(NULL, exprs, env)
     where 30: source_file(path, new.env(parent = env), chdir = TRUE, wrap = wrap)
     where 31: force(code)
     where 32: doWithOneRestart(return(expr), restart)
     where 33: withOneRestart(expr, restarts[[1L]])
     where 34: withRestarts(testthat_abort_reporter = function() NULL, force(code))
     where 35: with_reporter(reporter = reporter, start_end_reporter = start_end_reporter,
     {
     reporter$start_file(basename(path))
     lister$start_file(basename(path))
     source_file(path, new.env(parent = env), chdir = TRUE,
     wrap = wrap)
     reporter$.end_context()
     reporter$end_file()
     })
     where 36: FUN(X[[i]], ...)
     where 37: lapply(paths, test_file, env = env, reporter = current_reporter,
     start_end_reporter = FALSE, load_helpers = FALSE, wrap = wrap)
     where 38: force(code)
     where 39: doWithOneRestart(return(expr), restart)
     where 40: withOneRestart(expr, restarts[[1L]])
     where 41: withRestarts(testthat_abort_reporter = function() NULL, force(code))
     where 42: with_reporter(reporter = current_reporter, results <- lapply(paths,
     test_file, env = env, reporter = current_reporter, start_end_reporter = FALSE,
     load_helpers = FALSE, wrap = wrap))
     where 43: test_files(paths, reporter = reporter, env = env, stop_on_failure = stop_on_failure,
     stop_on_warning = stop_on_warning, wrap = wrap)
     where 44: test_dir(path = test_path, reporter = reporter, env = env, filter = filter,
     ..., stop_on_failure = stop_on_failure, stop_on_warning = stop_on_warning,
     wrap = wrap)
     where 45: test_package_dir(package = package, test_path = test_path, filter = filter,
     reporter = reporter, ..., stop_on_failure = stop_on_failure,
     stop_on_warning = stop_on_warning, wrap = wrap)
     where 46: test_check("utiml")
    
     --- value of length: 2 type: logical ---
     [1] TRUE TRUE
     --- function from context ---
     function (mdata, mlresult)
     {
         mdim <- c(mdata$measures$num.instances, mdata$measures$num.labels)
         if (any(mdim != dim(mlresult))) {
             stop("Wrong dimension between the real and expected data")
         }
         if (class(mlresult) != "mlresult") {
             mlresult <- as.mlresult(mlresult)
         }
         expected <- mdata$dataset[, mdata$labels$index]
         bipartition <- as.bipartition(mlresult)
         scores <- as.probability(mlresult)
         ranking <- t(apply(1 - scores, 1, rank, ties.method = "first"))
         predict_and_expected <- expected & bipartition
         predict_and_nexpected <- !expected & bipartition
         npredict_and_nexpected <- !expected & !bipartition
         npredict_and_expected <- expected & !bipartition
         cm <- list(Z = bipartition, Y = expected, Fx = scores, R = ranking,
             TP = predict_and_expected, FP = predict_and_nexpected,
             TN = npredict_and_nexpected, FN = npredict_and_expected,
             Zi = rowSums(bipartition), Yi = rowSums(expected), Zl = colSums(bipartition),
             Yl = colSums(expected), TPi = rowSums(predict_and_expected),
             FPi = rowSums(predict_and_nexpected), TNi = rowSums(npredict_and_nexpected),
             FNi = rowSums(npredict_and_expected), TPl = colSums(predict_and_expected),
             FPl = colSums(predict_and_nexpected), TNl = colSums(npredict_and_nexpected),
             FNl = colSums(npredict_and_expected))
         class(cm) <- "mlconfmat"
         cm
     }
     <bytecode: 0x3cce8e8>
     <environment: namespace:utiml>
     --- function search by body ---
     Function multilabel_confusion_matrix in namespace utiml has this body.
     ----------- END OF FAILURE REPORT --------------
     -- 3. Error: Evaluate (@test_evaluation.R#330) --------------------------------
     the condition has length > 1
     Backtrace:
     1. testthat::expect_equal(...)
     5. utiml:::multilabel_evaluate.mldr(parts$test, as.matrix(result))
     6. utiml::multilabel_confusion_matrix(mdata, mlresult)
    
     ----------- FAILURE REPORT --------------
     --- failure: the condition has length > 1 ---
     --- srcref ---
     :
     --- package (from environment) ---
     utiml
     --- call from context ---
     scut_threshold.default(result, classes)
     --- call from argument ---
     if (class(expected) == "mldr") {
         expected <- expected$dataset[expected$labels$index]
     }
     --- R stacktrace ---
     where 1: scut_threshold.default(result, classes)
     where 2 at testthat/test_threshold.R#164: scut_threshold(result, classes)
     where 3: eval(code, test_env)
     where 4: eval(code, test_env)
     where 5: withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error)
     where 6: doTryCatch(return(expr), name, parentenv, handler)
     where 7: tryCatchOne(expr, names, parentenv, handlers[[1L]])
     where 8: tryCatchList(expr, names[-nh], parentenv, handlers[-nh])
     where 9: doTryCatch(return(expr), name, parentenv, handler)
     where 10: tryCatchOne(tryCatchList(expr, names[-nh], parentenv, handlers[-nh]),
     names[nh], parentenv, handlers[[nh]])
     where 11: tryCatchList(expr, classes, parentenv, handlers)
     where 12: tryCatch(withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error), error = handle_fatal,
     skip = function(e) {
     })
     where 13: test_code(desc, code, env = parent.frame())
     where 14 at testthat/test_threshold.R#154: test_that("SCut threshold", {
         thresholds <- scut_threshold(result, mlresult)
         expected <- c(lbl1 = 0.68, lbl2 = 0.8, lbl3 = 0.34)
         expect_equal(thresholds, expected)
         thresholds2 <- scut_threshold(mlresult, mlresult)
         expect_equal(thresholds, thresholds2)
         classes <- as.bipartition(mlresult)
         classes[, 3] <- 0
         thresholds <- scut_threshold(result, classes)
         expect_equal(thresholds[1], thresholds2[1])
         expect_equal(thresholds[2], thresholds2[2])
         expect_gt(thresholds[3], max(result[, 3]))
         expect_error(scut_threshold(result, mlresult, function() {
         }))
         expect_error(scut_threshold(result, mlresult, CORES = 0))
     })
     where 15: eval(code, test_env)
     where 16: eval(code, test_env)
     where 17: withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error)
     where 18: doTryCatch(return(expr), name, parentenv, handler)
     where 19: tryCatchOne(expr, names, parentenv, handlers[[1L]])
     where 20: tryCatchList(expr, names[-nh], parentenv, handlers[-nh])
     where 21: doTryCatch(return(expr), name, parentenv, handler)
     where 22: tryCatchOne(tryCatchList(expr, names[-nh], parentenv, handlers[-nh]),
     names[nh], parentenv, handlers[[nh]])
     where 23: tryCatchList(expr, classes, parentenv, handlers)
     where 24: tryCatch(withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error), error = handle_fatal,
     skip = function(e) {
     })
     where 25: test_code(NULL, exprs, env)
     where 26: source_file(path, new.env(parent = env), chdir = TRUE, wrap = wrap)
     where 27: force(code)
     where 28: doWithOneRestart(return(expr), restart)
     where 29: withOneRestart(expr, restarts[[1L]])
     where 30: withRestarts(testthat_abort_reporter = function() NULL, force(code))
     where 31: with_reporter(reporter = reporter, start_end_reporter = start_end_reporter,
     {
     reporter$start_file(basename(path))
     lister$start_file(basename(path))
     source_file(path, new.env(parent = env), chdir = TRUE,
     wrap = wrap)
     reporter$.end_context()
     reporter$end_file()
     })
     where 32: FUN(X[[i]], ...)
     where 33: lapply(paths, test_file, env = env, reporter = current_reporter,
     start_end_reporter = FALSE, load_helpers = FALSE, wrap = wrap)
     where 34: force(code)
     where 35: doWithOneRestart(return(expr), restart)
     where 36: withOneRestart(expr, restarts[[1L]])
     where 37: withRestarts(testthat_abort_reporter = function() NULL, force(code))
     where 38: with_reporter(reporter = current_reporter, results <- lapply(paths,
     test_file, env = env, reporter = current_reporter, start_end_reporter = FALSE,
     load_helpers = FALSE, wrap = wrap))
     where 39: test_files(paths, reporter = reporter, env = env, stop_on_failure = stop_on_failure,
     stop_on_warning = stop_on_warning, wrap = wrap)
     where 40: test_dir(path = test_path, reporter = reporter, env = env, filter = filter,
     ..., stop_on_failure = stop_on_failure, stop_on_warning = stop_on_warning,
     wrap = wrap)
     where 41: test_package_dir(package = package, test_path = test_path, filter = filter,
     reporter = reporter, ..., stop_on_failure = stop_on_failure,
     stop_on_warning = stop_on_warning, wrap = wrap)
     where 42: test_check("utiml")
    
     --- value of length: 2 type: logical ---
     [1] FALSE FALSE
     --- function from context ---
     function (prediction, expected, loss.function = NA, cores = getOption("utiml.cores",
         1))
     {
         if (cores < 1) {
             stop("Cores must be a positive value")
         }
         if (!is.function(loss.function)) {
             loss.function <- function(real, predicted) {
                 mean((real - predicted)^2)
             }
         }
         if (class(expected) == "mldr") {
             expected <- expected$dataset[expected$labels$index]
         }
         labels <- utiml_rename(colnames(prediction))
         thresholds <- utiml_lapply(labels, function(col) {
             scores <- prediction[, col]
             index <- order(scores)
             ones <- which(expected[index, col] == 1)
             difs <- c(Inf)
             for (i in seq(length(ones) - 1)) {
                 difs <- c(difs, ones[i + 1] - ones[i])
             }
             evaluated.thresholds <- c()
             result <- c()
             for (i in ones[which(difs > 1)]) {
                 thr <- scores[index[i]]
                 res <- loss.function(expected[, col], ifelse(scores <
                     thr, 0, 1))
                 evaluated.thresholds <- c(evaluated.thresholds, thr)
                 result <- c(result, res)
             }
             ifelse(length(ones) > 0, as.numeric(evaluated.thresholds[which.min(result)]),
                 max(scores) + 1e-04)
         }, cores)
         unlist(thresholds)
     }
     <bytecode: 0x78e2fb8>
     <environment: namespace:utiml>
     --- function search by body ---
     Function scut_threshold.default in namespace utiml has this body.
     ----------- END OF FAILURE REPORT --------------
     -- 4. Error: SCut threshold (@test_threshold.R#164) ---------------------------
     the condition has length > 1
     Backtrace:
     1. utiml::scut_threshold(result, classes)
     2. utiml:::scut_threshold.default(result, classes)
    
     ----------- FAILURE REPORT --------------
     --- failure: the condition has length > 1 ---
     --- srcref ---
     :
     --- package (from environment) ---
     utiml
     --- call from context ---
     subset_correction(mlresult, as.bipartition(mlresult))
     --- call from argument ---
     if (class(train_y) == "mldr") {
         train_y <- train_y$dataset[train_y$labels$index]
     }
     --- R stacktrace ---
     where 1 at testthat/test_threshold.R#174: subset_correction(mlresult, as.bipartition(mlresult))
     where 2: eval(code, test_env)
     where 3: eval(code, test_env)
     where 4: withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error)
     where 5: doTryCatch(return(expr), name, parentenv, handler)
     where 6: tryCatchOne(expr, names, parentenv, handlers[[1L]])
     where 7: tryCatchList(expr, names[-nh], parentenv, handlers[-nh])
     where 8: doTryCatch(return(expr), name, parentenv, handler)
     where 9: tryCatchOne(tryCatchList(expr, names[-nh], parentenv, handlers[-nh]),
     names[nh], parentenv, handlers[[nh]])
     where 10: tryCatchList(expr, classes, parentenv, handlers)
     where 11: tryCatch(withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error), error = handle_fatal,
     skip = function(e) {
     })
     where 12: test_code(desc, code, env = parent.frame())
     where 13 at testthat/test_threshold.R#173: test_that("Subset correction", {
         prediction <- subset_correction(mlresult, as.bipartition(mlresult))
         expect_is(prediction, "mlresult")
         expect_equal(as.probability(prediction), as.probability(mlresult))
     })
     where 14: eval(code, test_env)
     where 15: eval(code, test_env)
     where 16: withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error)
     where 17: doTryCatch(return(expr), name, parentenv, handler)
     where 18: tryCatchOne(expr, names, parentenv, handlers[[1L]])
     where 19: tryCatchList(expr, names[-nh], parentenv, handlers[-nh])
     where 20: doTryCatch(return(expr), name, parentenv, handler)
     where 21: tryCatchOne(tryCatchList(expr, names[-nh], parentenv, handlers[-nh]),
     names[nh], parentenv, handlers[[nh]])
     where 22: tryCatchList(expr, classes, parentenv, handlers)
     where 23: tryCatch(withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error), error = handle_fatal,
     skip = function(e) {
     })
     where 24: test_code(NULL, exprs, env)
     where 25: source_file(path, new.env(parent = env), chdir = TRUE, wrap = wrap)
     where 26: force(code)
     where 27: doWithOneRestart(return(expr), restart)
     where 28: withOneRestart(expr, restarts[[1L]])
     where 29: withRestarts(testthat_abort_reporter = function() NULL, force(code))
     where 30: with_reporter(reporter = reporter, start_end_reporter = start_end_reporter,
     {
     reporter$start_file(basename(path))
     lister$start_file(basename(path))
     source_file(path, new.env(parent = env), chdir = TRUE,
     wrap = wrap)
     reporter$.end_context()
     reporter$end_file()
     })
     where 31: FUN(X[[i]], ...)
     where 32: lapply(paths, test_file, env = env, reporter = current_reporter,
     start_end_reporter = FALSE, load_helpers = FALSE, wrap = wrap)
     where 33: force(code)
     where 34: doWithOneRestart(return(expr), restart)
     where 35: withOneRestart(expr, restarts[[1L]])
     where 36: withRestarts(testthat_abort_reporter = function() NULL, force(code))
     where 37: with_reporter(reporter = current_reporter, results <- lapply(paths,
     test_file, env = env, reporter = current_reporter, start_end_reporter = FALSE,
     load_helpers = FALSE, wrap = wrap))
     where 38: test_files(paths, reporter = reporter, env = env, stop_on_failure = stop_on_failure,
     stop_on_warning = stop_on_warning, wrap = wrap)
     where 39: test_dir(path = test_path, reporter = reporter, env = env, filter = filter,
     ..., stop_on_failure = stop_on_failure, stop_on_warning = stop_on_warning,
     wrap = wrap)
     where 40: test_package_dir(package = package, test_path = test_path, filter = filter,
     reporter = reporter, ..., stop_on_failure = stop_on_failure,
     stop_on_warning = stop_on_warning, wrap = wrap)
     where 41: test_check("utiml")
    
     --- value of length: 2 type: logical ---
     [1] FALSE FALSE
     --- function from context ---
     function (mlresult, train_y, probability = FALSE)
     {
         bip <- as.bipartition(mlresult)
         prob <- as.probability(mlresult)
         if (class(train_y) == "mldr") {
             train_y <- train_y$dataset[train_y$labels$index]
         }
         if (ncol(mlresult) != ncol(train_y)) {
             stop("The number of columns in the predicted result are different from the\n training data")
         }
         labelsets <- as.matrix(unique(train_y))
         rownames(labelsets) <- apply(labelsets, 1, paste, collapse = "")
         order <- names(sort(table(apply(train_y, 1, paste, collapse = "")),
             decreasing = TRUE))
         labelsets <- labelsets[order, ]
         new.pred <- t(apply(bip, 1, function(y) {
             labelsets[names(which.min(apply(labelsets, 1, function(row) {
                 sum(row != y)
             }))), ]
         }))
         multilabel_prediction(new.pred, prob, probability)
     }
     <bytecode: 0x75ca9a0>
     <environment: namespace:utiml>
     --- function search by body ---
     Function subset_correction in namespace utiml has this body.
     ----------- END OF FAILURE REPORT --------------
     -- 5. Error: Subset correction (@test_threshold.R#174) ------------------------
     the condition has length > 1
     Backtrace:
     1. utiml::subset_correction(mlresult, as.bipartition(mlresult))
    
     == testthat results ===========================================================
     [ OK: 1107 | SKIPPED: 21 | WARNINGS: 0 | FAILED: 5 ]
     1. Error: Nestest Stack (@test_brclassifiers.R#187)
     2. Error: Label-problem measures (@test_evaluation.R#258)
     3. Error: Evaluate (@test_evaluation.R#330)
     4. Error: SCut threshold (@test_threshold.R#164)
     5. Error: Subset correction (@test_threshold.R#174)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-devel-linux-x86_64-debian-clang
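
The five test failures are the same length-greater-than-one condition surfacing in three utiml internals: subset_correction (also seen in the examples check above), multilabel_confusion_matrix, and scut_threshold.default. A sketch of the corresponding inherits() rewrites, assuming the function bodies quoted in this report; the wrapper names below are hypothetical stand-ins that isolate the changed guards:

    # Hypothetical helpers showing only the rewritten guards; they assume
    # utiml is attached so that as.mlresult() is available.
    coerce_to_mlresult <- function(mlresult) {
        # was: if (class(mlresult) != "mlresult"), length 2 for a matrix
        if (!inherits(mlresult, "mlresult")) {
            mlresult <- as.mlresult(mlresult)
        }
        mlresult
    }

    extract_label_columns <- function(expected) {
        # was: if (class(expected) == "mldr")
        if (inherits(expected, "mldr")) {
            expected <- expected$dataset[expected$labels$index]
        }
        expected
    }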

Version: 0.1.5
Check: package dependencies
Result: NOTE
    Packages suggested but not available for checking: 'FSelector', 'RWeka'
Flavor: r-patched-solaris-x86

Version: 0.1.5
Check: Rd cross-references
Result: NOTE
    Package unavailable to check Rd xrefs: ‘FSelector’
Flavor: r-patched-solaris-x86