ModelOriented / DALEX
Showing 18 of 127 files from the diff.
Other files ignored by Codecov
man/dragons.Rd has changed.
man/model_info.Rd has changed.
NAMESPACE has changed.
R/data_dragons.R has changed.
README.md has changed.
DESCRIPTION has changed.
NEWS.md has changed.
man/explain.Rd has changed.
tox.ini has changed.

@@ -2,7 +2,7 @@
 #'
 #' This function calculates dataset-level explanations that explore model response as a function of selected variables.
 #' The explanations can be calculated as Partial Dependence Profile or Accumulated Local Dependence Profile.
-#' Find information how to use this function here: \url{http://ema.drwhy.ai/partialDependenceProfiles.html}.
+#' Find information how to use this function here: \url{https://ema.drwhy.ai/partialDependenceProfiles.html}.
 #' The \code{variable_profile} function is a copy of \code{model_profile}.
 #'
 #' Underneath this function calls the \code{\link[ingredients]{partial_dependence}} or
@@ -22,7 +22,7 @@
 #' @return An object of the class \code{model_profile}.
 #' It's a data frame with calculated average model responses.
 #'
-#' @references Explanatory Model Analysis. Explore, Explain, and Examine Predictive Models. \url{http://ema.drwhy.ai/}
+#' @references Explanatory Model Analysis. Explore, Explain, and Examine Predictive Models. \url{https://ema.drwhy.ai/}
 #'
 #' @name model_profile
 #' @examples

@@ -9,17 +9,17 @@
 #'
 #' \subsection{ceteris_paribus}{
 #' \itemize{
-#'  \item{color}{ a character. Either name of a color or name of a variable that should be used for coloring}
-#'  \item{size}{ a numeric. Size of lines to be plotted}
-#'  \item{alpha}{ a numeric between \code{0} and \code{1}. Opacity of lines}
-#'  \item{facet_ncol}{ number of columns for the \code{\link[ggplot2]{facet_wrap}}}
-#'  \item{variables}{ if not \code{NULL} then only \code{variables} will be presented}
-#'  \item{variable_type}{ a character. If \code{numerical} then only numerical variables will be plotted.
+#'  \item{\code{color}}{ a character. Either name of a color or name of a variable that should be used for coloring}
+#'  \item{\code{size}}{ a numeric. Size of lines to be plotted}
+#'  \item{\code{alpha}}{ a numeric between \code{0} and \code{1}. Opacity of lines}
+#'  \item{\code{facet_ncol}}{ number of columns for the \code{\link[ggplot2]{facet_wrap}}}
+#'  \item{\code{variables}}{ if not \code{NULL} then only \code{variables} will be presented}
+#'  \item{\code{variable_type}}{ a character. If \code{numerical} then only numerical variables will be plotted.
 #'        If \code{categorical} then only categorical variables will be plotted.}
-#'  \item{title}{ a character. Plot title. By default \code{"Ceteris Paribus profile"}.}
-#'  \item{subtitle}{ a character. Plot subtitle. By default \code{NULL} - then subtitle is set to "created for the XXX, YYY model",
+#'  \item{\code{title}}{ a character. Plot title. By default \code{"Ceteris Paribus profile"}.}
+#'  \item{\code{subtitle}}{ a character. Plot subtitle. By default \code{NULL} - then subtitle is set to "created for the XXX, YYY model",
 #'        where XXX, YYY are labels of given explainers.}
-#'  \item{categorical_type}{ a character. How categorical variables shall be plotted? Either \code{"lines"} (default) or \code{"bars"}.}
+#'  \item{\code{categorical_type}}{ a character. How categorical variables shall be plotted? Either \code{"lines"} (default) or \code{"bars"}.}
 #' }
 #' }
 #'

@@ -9,12 +9,12 @@
 #'
 #' \subsection{variable_importance}{
 #' \itemize{
-#'  \item{max_vars}{ maximal number of features to be included in the plot. default value is \code{10}}
-#'  \item{show_boxplots}{ logical if \code{TRUE} (default) boxplot will be plotted to show permutation data.}
-#'  \item{bar_width}{ width of bars. By default \code{10}}
-#'  \item{desc_sorting}{ logical. Should the bars be sorted descending? By default \code{TRUE}}
-#'  \item{title}{ the plot's title, by default \code{'Feature Importance'}}
-#'  \item{subtitle}{ a character. Plot subtitle. By default \code{NULL} - then subtitle is set to "created for the XXX, YYY model",
+#'  \item{\code{max_vars}}{ maximal number of features to be included in the plot. default value is \code{10}}
+#'  \item{\code{show_boxplots}}{ logical if \code{TRUE} (default) boxplot will be plotted to show permutation data.}
+#'  \item{\code{bar_width}}{ width of bars. By default \code{10}}
+#'  \item{\code{desc_sorting}}{ logical. Should the bars be sorted descending? By default \code{TRUE}}
+#'  \item{\code{title}}{ the plot's title, by default \code{'Feature Importance'}}
+#'  \item{\code{subtitle}}{ a character. Plot subtitle. By default \code{NULL} - then subtitle is set to "created for the XXX, YYY model",
 #'        where XXX, YYY are labels of given explainers.}
 #' }
 #' }

@@ -2,7 +2,7 @@
 #'
 #' This function performs model diagnostics of residuals.
 #' Residuals are calculated and plotted against predictions, true y values or selected variables.
-#' Find information how to use this function here: \url{http://ema.drwhy.ai/residualDiagnostic.html}.
+#' Find information how to use this function here: \url{https://ema.drwhy.ai/residualDiagnostic.html}.
 #'
 #' @param explainer a model to be explained, preprocessed by the \code{explain} function
 #' @param variables character - name of variables to be explained. Default \code{NULL} stands for all variables
@@ -11,7 +11,7 @@
 #' @return An object of the class \code{model_diagnostics}.
 #' It's a data frame with residuals and selected variables.
 #'
-#' @references Explanatory Model Analysis. Explore, Explain and Examine Predictive Models. \url{http://ema.drwhy.ai/}
+#' @references Explanatory Model Analysis. Explore, Explain and Examine Predictive Models. \url{https://ema.drwhy.ai/}
 #' @export
 #' @examples
 #' library(DALEX)

@@ -40,7 +40,7 @@
 #' \item \code{model_info} named list containing basic information about model, like package, version of package and type.
 #' }
 #'
-#' @references Explanatory Model Analysis. Explore, Explain and Examine Predictive Models. \url{http://ema.drwhy.ai/}
+#' @references Explanatory Model Analysis. Explore, Explain and Examine Predictive Models. \url{https://ema.drwhy.ai/}
 #' @rdname explain
 #' @export
 #' @importFrom stats predict
@@ -64,6 +64,7 @@
 #' aps_lm <- explain(aps_lm_model4, data = apartments, label = "model_4v", y = apartments$m2.price,
 #'                                    predict_function = predict)
 #'
+#' \donttest{
 #' # user provided predict_function
 #' aps_ranger <- ranger::ranger(m2.price~., data = apartments, num.trees = 50)
 #' custom_predict <- function(X.model, newdata) {
@@ -100,7 +101,6 @@
 #' aps_lm_explainer4 <- explain(aps_lm_model4, data = apartments, label = "model_4v",
 #'                              model_info = model_info)
 #'
-#' \donttest{
 #' # simple function
 #' aps_fun <- function(x) 58*x$surface
 #' aps_fun_explainer <- explain(aps_fun, data = apartments, y = apartments$m2.price, label="sfun")
@@ -162,7 +162,7 @@
     } else {
       # Setting 0 as value of n if data is not present is necessary for future checks
       n <- 0
-      verbose_cat("  -> no data avaliable! (",color_codes$red_start,"WARNING",color_codes$red_end,")\n", verbose = verbose)
+      verbose_cat("  -> no data available! (",color_codes$red_start,"WARNING",color_codes$red_end,")\n", verbose = verbose)
     }
   } else {
     n <- nrow(data)

@@ -181,11 +181,12 @@
 plot.model_performance_roc <- function(df, nlabels) {
   dfl <- split(df, factor(df$label))
   rocdfl <- lapply(dfl, function(df) {
-    pred_sorted <- df[order(df$predicted, decreasing = TRUE), ]
-
     # assuming that y = 0/1 where 1 is the positive
-    tpr <- cumsum(pred_sorted$observed)/sum(pred_sorted$observed)
-    fpr <- cumsum(1-pred_sorted$observed)/sum(1-pred_sorted$observed)
+    tpr_tmp <- tapply(df$observed, df$predicted, sum)
+    tpr <- c(0,cumsum(rev(tpr_tmp)))/sum(df$observed)
+    fpr_tmp <- tapply(1 - df$observed, df$predicted, sum)
+    fpr <- c(0,cumsum(rev(fpr_tmp)))/sum(1 - df$observed)
+
     data.frame(tpr = tpr, fpr = fpr, label = df$label[1])
   })
   rocdf <- do.call(rbind, rocdfl)

@@ -2,7 +2,7 @@
 #'
 #' From DALEX version 1.0 this function calls the \code{\link[ingredients]{accumulated_dependence}} or
 #' \code{\link[ingredients]{partial_dependence}} from the \code{ingredients} package.
-#' Find information how to use this function here: \url{http://ema.drwhy.ai/partialDependenceProfiles.html}.
+#' Find information how to use this function here: \url{https://ema.drwhy.ai/partialDependenceProfiles.html}.
 #'
 #' @param explainer a model to be explained, preprocessed by the 'explain' function
 #' @param variables character - names of variables to be explained
@@ -13,7 +13,7 @@
 #' @return An object of the class 'aggregated_profiles_explainer'.
 #' It's a data frame with calculated average response.
 #'
-#' @references Explanatory Model Analysis. Explore, Explain, and Examine Predictive Models. \url{http://ema.drwhy.ai/}
+#' @references Explanatory Model Analysis. Explore, Explain, and Examine Predictive Models. \url{https://ema.drwhy.ai/}
 #' @export
 #'
 #' @examples

@@ -2,7 +2,7 @@
 #'
 #' This function calculates individual profiles aka Ceteris Paribus Profiles.
 #' From DALEX version 1.0 this function calls the \code{\link[ingredients]{ceteris_paribus}} from the \code{ingredients} package.
-#' Find information how to use this function here: \url{http://ema.drwhy.ai/ceterisParibus.html}.
+#' Find information how to use this function here: \url{https://ema.drwhy.ai/ceterisParibus.html}.
 #'
 #' @param explainer a model to be explained, preprocessed by the \code{explain} function
 #' @param new_observation a new observation for which predictions need to be explained
@@ -14,7 +14,7 @@
 #' @return An object of the class \code{ceteris_paribus_explainer}.
 #' It's a data frame with calculated average response.
 #'
-#' @references Explanatory Model Analysis. Explore, Explain, and Examine Predictive Models. \url{http://ema.drwhy.ai/}
+#' @references Explanatory Model Analysis. Explore, Explain, and Examine Predictive Models. \url{https://ema.drwhy.ai/}
 #' @export
 #'
 #' @examples

@@ -18,7 +18,7 @@
 #' \item \code{type} - character that specifies type of the task.
 #' }
 #'
-#' @references Explanatory Model Analysis. Explore, Explain, and Examine Predictive Models. \url{http://ema.drwhy.ai/}
+#' @references Explanatory Model Analysis. Explore, Explain, and Examine Predictive Models. \url{https://ema.drwhy.ai/}
 #' @importFrom stats median weighted.mean
 #' @export
 #' @examples
@@ -140,12 +140,10 @@
 }

 model_performance_auc <- function(predicted, observed) {
-  pred <- data.frame(fitted.values = predicted, y = observed)
-  pred_sorted <- pred[order(pred$fitted.values, decreasing = TRUE), ]
-
-  # assuming that y = 0/1 where 1 is the positive
-  TPR <- cumsum(pred_sorted$y)/sum(pred_sorted$y)
-  FPR <- cumsum(1-pred_sorted$y)/sum(1-pred_sorted$y)
+  tpr_tmp <- tapply(observed, predicted, sum)
+  TPR <- c(0,cumsum(rev(tpr_tmp)))/sum(observed)
+  fpr_tmp <- tapply(1 - observed, predicted, sum)
+  FPR <- c(0,cumsum(rev(fpr_tmp)))/sum(1 - observed)

   auc <- sum(diff(FPR)*(TPR[-1] + TPR[-length(TPR)])/2)
   auc
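The new implementation builds one ROC point per unique predicted value (via tapply()), prepends the (0, 0) origin, and applies the trapezoid rule, so tied scores no longer depend on sort order. A minimal sketch of that construction in base R, with made-up toy data (not taken from the diff):

```r
# toy labels and scores; the three observations scored 0.8 include a tie
# between positives and a negative
observed  <- c(0, 0, 1, 1, 1, 0)
predicted <- c(0.1, 0.4, 0.4, 0.8, 0.8, 0.8)

# one ROC point per unique threshold (highest score first), plus the origin
tpr_tmp <- tapply(observed, predicted, sum)
TPR <- c(0, cumsum(rev(tpr_tmp))) / sum(observed)
fpr_tmp <- tapply(1 - observed, predicted, sum)
FPR <- c(0, cumsum(rev(fpr_tmp))) / sum(1 - observed)

# trapezoid rule over the ROC points
auc <- sum(diff(FPR) * (TPR[-1] + TPR[-length(TPR)]) / 2)
auc  # 13/18, matching the tie-corrected Mann-Whitney count 6.5/9
```

With the previous per-observation cumsum, the arbitrary ordering of the tied 0.8 scores would change the staircase and hence the area; grouping by unique score removes that ambiguity.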

@@ -1,7 +1,7 @@
 #' Dataset Level Variable Importance as Change in Loss Function after Variable Permutations
 #'
 #' From DALEX version 1.0 this function calls the \code{\link[ingredients]{feature_importance}}
-#' Find information how to use this function here: \url{http://ema.drwhy.ai/featureImportance.html}.
+#' Find information how to use this function here: \url{https://ema.drwhy.ai/featureImportance.html}.
 #'
 #' @param explainer a model to be explained, preprocessed by the \code{explain} function
 #' @param loss_function a function that will be used to assess variable importance. By default it is 1-AUC for classification, cross entropy for multilabel classification and RMSE for regression. A custom, user-made loss function should accept two obligatory parameters (observed, predicted), where \code{observed} stands for actual values of the target, while \code{predicted} for predicted values. If attribute "loss_accuracy" is associated with function object, then it will be plotted as name of the loss function.
@@ -10,7 +10,7 @@
 #' @param N number of observations that should be sampled for calculation of variable importance. If \code{NULL} then variable importance will be calculated on whole dataset (no sampling).
 #' @param n_sample alias for \code{N} held for backwards compatibility. number of observations that should be sampled for calculation of variable importance.
 #'
-#' @references Explanatory Model Analysis. Explore, Explain and Examine Predictive Models. \url{http://ema.drwhy.ai/}
+#' @references Explanatory Model Analysis. Explore, Explain and Examine Predictive Models. \url{https://ema.drwhy.ai/}
 #' @return An object of the class \code{feature_importance}.
 #' It's a data frame with calculated average response.
 #'

@@ -5,9 +5,9 @@
 #' From DALEX version 1.0 this function calls the \code{\link[iBreakDown]{break_down}} or
 #' \code{\link[iBreakDown:break_down_uncertainty]{shap}} functions from the \code{iBreakDown} package or
 #' \code{\link[ingredients:ceteris_paribus]{ceteris_paribus}} from the \code{ingredients} package.
-#' Find information how to use the \code{break_down} method here: \url{http://ema.drwhy.ai/breakDown.html}.
-#' Find information how to use the \code{shap} method here: \url{http://ema.drwhy.ai/shapley.html}.
-#' Find information how to use the \code{oscillations} method here: \url{http://ema.drwhy.ai/ceterisParibusOscillations.html}.
+#' Find information how to use the \code{break_down} method here: \url{https://ema.drwhy.ai/breakDown.html}.
+#' Find information how to use the \code{shap} method here: \url{https://ema.drwhy.ai/shapley.html}.
+#' Find information how to use the \code{oscillations} method here: \url{https://ema.drwhy.ai/ceterisParibusOscillations.html}.
 #'
 #' @param explainer a model to be explained, preprocessed by the \code{explain} function
 #' @param new_observation a new observation for which predictions need to be explained
@@ -25,7 +25,7 @@
 #'
 #'
 #' @aliases predict_parts_break_down predict_parts predict_parts_ibreak_down predict_parts_shap
-#' @references Explanatory Model Analysis. Explore, Explain, and Examine Predictive Models. \url{http://ema.drwhy.ai/}
+#' @references Explanatory Model Analysis. Explore, Explain, and Examine Predictive Models. \url{https://ema.drwhy.ai/}
 #'
 #' @examples
 #' library(DALEX)

@@ -52,19 +52,15 @@

 #' @rdname loss_functions
 #' @export
-# Alicja Gosiewska (agosiewska) is the author of this function
 loss_one_minus_auc <- function(observed, predicted){
+  tpr_tmp <- tapply(observed, predicted, sum)
+  TPR <- c(0,cumsum(rev(tpr_tmp)))/sum(observed)
+  fpr_tmp <- tapply(1 - observed, predicted, sum)
+  FPR <- c(0,cumsum(rev(fpr_tmp)))/sum(1 - observed)

-  pred <- data.frame(fitted.values = predicted,
-             y = observed)
-  pred_sorted <- pred[order(pred$fitted.values, decreasing = TRUE), ]
-  roc_y <- factor(pred_sorted$y)
-  levels <- levels(roc_y)
-  x <- cumsum(roc_y == levels[1])/sum(roc_y == levels[1])
-  y <- cumsum(roc_y == levels[2])/sum(roc_y == levels[2])
-  auc <- sum((x[2:length(roc_y)]  -x[1:length(roc_y)-1]) * y[2:length(roc_y)])
-  1 - auc
+  auc <- sum(diff(FPR)*(TPR[-1] + TPR[-length(TPR)])/2)
+
+  1 - auc
 }
 attr(loss_one_minus_auc, "loss_name") <- "One minus AUC"
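As the attr() call above shows, DALEX loss helpers are plain functions of (observed, predicted) tagged with a "loss_name" attribute that is picked up when labeling plots. A hedged sketch of a user-defined loss following the same convention (loss_mae below is illustrative, not a DALEX function):

```r
# a custom loss in the same shape as loss_one_minus_auc:
# a function of (observed, predicted) carrying a "loss_name" attribute
loss_mae <- function(observed, predicted) {
  mean(abs(observed - predicted))
}
attr(loss_mae, "loss_name") <- "Mean absolute error"

loss_mae(c(1, 2, 3), c(1, 1, 5))  # (0 + 1 + 2) / 3 = 1
attr(loss_mae, "loss_name")       # used as the loss label
```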

@@ -6,6 +6,7 @@
 #' @return a \code{ggplot2} object of the class \code{gg}.
 #'
 #' @examples
+#' \donttest{
 #' library("ranger")
 #' titanic_glm_model <- ranger(survived ~ gender + age + class + fare + sibsp + parch,
 #'                      data = titanic_imputed)
@@ -14,7 +15,6 @@
 #'                          y = titanic_imputed$survived)
 #' johny_d <- titanic_imputed[24, c("gender", "age", "class", "fare", "sibsp", "parch")]
 #'
-#' \donttest{
 #' pl <- predict_diagnostics(explainer_glm, johny_d, variables = NULL)
 #' plot(pl)
 #'

@@ -8,14 +8,14 @@
 #'
 #' \subsection{aggregates}{
 #' \itemize{
-#'  \item{color}{ a character. Either name of a color, or hex code for a color,
+#'  \item{\code{color}}{ a character. Either name of a color, or hex code for a color,
 #'   or \code{_label_} if models shall be colored, or \code{_ids_} if instances shall be colored}
-#'  \item{size}{ a numeric. Size of lines to be plotted}
-#'  \item{alpha}{ a numeric between \code{0} and \code{1}. Opacity of lines}
-#'  \item{facet_ncol}{ number of columns for the \code{\link[ggplot2]{facet_wrap}}}
-#'  \item{variables}{ if not \code{NULL} then only \code{variables} will be presented}
-#'  \item{title}{ a character. Partial and accumulated dependence explainers have a default value.}
-#'  \item{subtitle}{ a character. If \code{NULL} value will be dependent on model usage.}
+#'  \item{\code{size}}{ a numeric. Size of lines to be plotted}
+#'  \item{\code{alpha}}{ a numeric between \code{0} and \code{1}. Opacity of lines}
+#'  \item{\code{facet_ncol}}{ number of columns for the \code{\link[ggplot2]{facet_wrap}}}
+#'  \item{\code{variables}}{ if not \code{NULL} then only \code{variables} will be presented}
+#'  \item{\code{title}}{ a character. Partial and accumulated dependence explainers have a default value.}
+#'  \item{\code{subtitle}}{ a character. If \code{NULL} value will be dependent on model usage.}
 #' }
 #' }
 #'

@@ -0,0 +1,39 @@
+#' Plot List of Explanations
+#'
+#' @param x a list of explanations of the same class
+#' @param ... other parameters
+#'
+#' @return An object of the class \code{ggplot}.
+#'
+#' @export
+#' @examples
+#'  \donttest{
+#' library("ranger")
+#' titanic_ranger_model <- ranger(survived~., data = titanic_imputed, num.trees = 50,
+#'                                probability = TRUE)
+#' explainer_ranger  <- explain(titanic_ranger_model, data = titanic_imputed[,-8],
+#'                              y = titanic_imputed$survived)
+#' mp_ranger <- model_performance(explainer_ranger)
+#'
+#' titanic_ranger_model2 <- ranger(survived~gender + fare, data = titanic_imputed,
+#'                                 num.trees = 50, probability = TRUE)
+#' explainer_ranger2  <- explain(titanic_ranger_model2, data = titanic_imputed[,-8],
+#'                               y = titanic_imputed$survived,
+#'                               label = "ranger2")
+#' mp_ranger2 <- model_performance(explainer_ranger2)
+#'
+#' plot(list(mp_ranger, mp_ranger2), geom = "prc")
+#' plot(list(mp_ranger, mp_ranger2), geom = "roc")
+#' tmp <- list(mp_ranger, mp_ranger2)
+#' names(tmp) <- c("ranger", "ranger2")
+#' plot(tmp)
+#' }
+#'
+#
+plot.list <- function(x, ...) {
+  names(x) <- NULL
+  args <- c(x, list(...))
+  do.call(plot, args)
+}
+
+
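The names(x) <- NULL line in the new method matters because do.call() matches list names against the target function's formal arguments; stripping them forwards the explanations positionally, so S3 dispatch lands in the plot method of the first element's class. A minimal sketch with a hypothetical class (myclass is made up for illustration, not a DALEX class):

```r
# hypothetical method to trace the dispatch path of plot.list():
# plot(list(a, b)) ends up in the plot method of the first element
plot.myclass <- function(x, ...) {
  length(list(...)) + 1L  # count how many objects reached the method
}

plot.list <- function(x, ...) {
  names(x) <- NULL            # keep do.call() from matching names to formals
  args <- c(x, list(...))
  do.call(plot, args)
}

a <- structure(list(), class = "myclass")
b <- structure(list(), class = "myclass")
plot.list(list(a, b))  # both objects arrive at plot.myclass
```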

@@ -32,9 +32,11 @@
 #' aps_lm_model4 <- lm(m2.price ~., data = apartments)
 #' model_info(aps_lm_model4)
 #'
+#' \donttest{
 #' library("ranger")
 #' model_regr_rf <- ranger::ranger(status~., data = HR, num.trees = 50, probability = TRUE)
 #' model_info(model_regr_rf, is_multiclass = TRUE)
+#' }
 #'
 model_info <- function(model, is_multiclass = FALSE, ...)
   UseMethod("model_info")

@@ -3,7 +3,7 @@
 #' This function performs local diagnostics of residuals.
 #' For a single instance its neighbors are identified in the validation data.
 #' Residuals are calculated for neighbors and plotted against residuals for all data.
-#' Find information how to use this function here: \url{http://ema.drwhy.ai/localDiagnostics.html}.
+#' Find information how to use this function here: \url{https://ema.drwhy.ai/localDiagnostics.html}.
 #'
 #' @param explainer a model to be explained, preprocessed by the 'explain' function
 #' @param new_observation a new observation for which predictions need to be explained
@@ -16,11 +16,12 @@
 #' @return An object of the class 'predict_diagnostics'.
 #' It's a data frame with calculated distribution of residuals.
 #'
-#' @references Explanatory Model Analysis. Explore, Explain, and Examine Predictive Models. \url{http://ema.drwhy.ai/}
+#' @references Explanatory Model Analysis. Explore, Explain, and Examine Predictive Models. \url{https://ema.drwhy.ai/}
 #' @export
 #' @importFrom stats ks.test
 #' @importFrom graphics plot
 #' @examples
+#' \donttest{
 #' library("ranger")
 #' titanic_glm_model <- ranger(survived ~ gender + age + class + fare + sibsp + parch,
 #'                      data = titanic_imputed)
@@ -29,7 +30,6 @@
 #'                          y = titanic_imputed$survived)
 #' johny_d <- titanic_imputed[24, c("gender", "age", "class", "fare", "sibsp", "parch")]
 #'
-#' \donttest{
 #' id_johny <- predict_diagnostics(explainer_glm, johny_d, variables = NULL)
 #' id_johny
 #' plot(id_johny)

@@ -9,35 +9,35 @@
 #'
 #' \subsection{break_down}{
 #' \itemize{
-#'  \item{max_features}{ maximal number of features to be included in the plot. default value is \code{10}}
-#'  \item{min_max}{ a range of OX axis. By default \code{NA}, therefore it will be extracted from the contributions of \code{x}.
+#'  \item{\code{max_features}}{ maximal number of features to be included in the plot. default value is \code{10}}
+#'  \item{\code{min_max}}{ a range of OX axis. By default \code{NA}, therefore it will be extracted from the contributions of \code{x}.
 #'    But it can be set to some constants, useful if these plots are to be used for comparisons.}
-#'  \item{add_contributions}{ if \code{TRUE}, variable contributions will be added to the plot.}
-#'  \item{shift_contributions}{ number describing how much labels should be shifted to the right, as a fraction of range. By default equal to \code{0.05}.}
-#'  \item{vcolors}{ If \code{NA} (default), DrWhy colors are used.}
-#'  \item{vnames}{ a character vector, if specified then will be used as labels on OY axis. By default \code{NULL}.}
-#'  \item{digits}{ number of decimal places (\code{\link{round}}) or significant digits (\code{\link{signif}}) to be used.}
-#'  \item{rounding_function}{ a function to be used for rounding numbers.}
-#'  \item{plot_distributions}{ if \code{TRUE} then distributions of conditional proportions will be plotted. This requires \code{keep_distributions=TRUE} in the
+#'  \item{\code{add_contributions}}{ if \code{TRUE}, variable contributions will be added to the plot.}
+#'  \item{\code{shift_contributions}}{ number describing how much labels should be shifted to the right, as a fraction of range. By default equal to \code{0.05}.}
+#'  \item{\code{vcolors}}{ If \code{NA} (default), DrWhy colors are used.}
+#'  \item{\code{vnames}}{ a character vector, if specified then will be used as labels on OY axis. By default \code{NULL}.}
+#'  \item{\code{digits}}{ number of decimal places (\code{\link{round}}) or significant digits (\code{\link{signif}}) to be used.}
+#'  \item{\code{rounding_function}}{ a function to be used for rounding numbers.}
+#'  \item{\code{plot_distributions}}{ if \code{TRUE} then distributions of conditional proportions will be plotted. This requires \code{keep_distributions=TRUE} in the
 #'    \code{\link{break_down}}, \code{\link{local_attributions}}, or \code{\link{local_interactions}}.}
-#'  \item{baseline}{ if numeric then vertical line starts in \code{baseline}.}
-#'  \item{title}{ a character. Plot title. By default \code{"Break Down profile"}.}
-#'  \item{subtitle}{ a character. Plot subtitle. By default \code{NULL} - then subtitle is set to "created for the XXX, YYY model",
+#'  \item{\code{baseline}}{ if numeric then vertical line starts in \code{baseline}.}
+#'  \item{\code{title}}{ a character. Plot title. By default \code{"Break Down profile"}.}
+#'  \item{\code{subtitle}}{ a character. Plot subtitle. By default \code{NULL} - then subtitle is set to "created for the XXX, YYY model",
 #'        where XXX, YYY are labels of given explainers.}
-#'  \item{max_vars}{ alias for the \code{max_features} parameter.}
+#'  \item{\code{max_vars}}{ alias for the \code{max_features} parameter.}
 #' }
 #' }
 #' \subsection{shap}{
 #' \itemize{
-#'  \item{show_boxplots}{ logical if \code{TRUE} (default) boxplot will be plotted to show uncertainty of attributions.}
-#'  \item{vcolors}{ If \code{NA} (default), DrWhy colors are used.}
-#'  \item{max_features}{ maximal number of features to be included in the plot. default value is \code{10}}
-#'  \item{max_vars}{ alias for the \code{max_features} parameter.}
+#'  \item{\code{show_boxplots}}{ logical if \code{TRUE} (default) boxplot will be plotted to show uncertainty of attributions.}
+#'  \item{\code{vcolors}}{ If \code{NA} (default), DrWhy colors are used.}
+#'  \item{\code{max_features}}{ maximal number of features to be included in the plot. default value is \code{10}}
+#'  \item{\code{max_vars}}{ alias for the \code{max_features} parameter.}
 #' }
 #' }
 #' \subsection{oscillations}{
 #' \itemize{
-#'  \item{bar_width}{ width of bars. By default \code{10}}
+#'  \item{\code{bar_width}}{ width of bars. By default \code{10}}
 #' }
 #' }
 #'
Files Coverage
R: 86.53%
Project Totals (32 files): 86.53%
comment: false

coverage:
  status:
    project:
      default:
        target: auto
        threshold: 1%
        informational: true
    patch:
      default:
        target: auto
        threshold: 1%
        informational: true