library(hardhat)

Introduction

The counterpart to mold() (which you can read all about in vignette("mold", "hardhat")) is forge(). Where mold() is used to preprocess your training data, forge() is used to preprocess the new data that you will generate predictions from with your model.

Like mold(), forge() is not intended to be used interactively. Instead, it should be called from the predict() method for your model. To learn more about using forge() in a modeling package, see vignette("package", "hardhat"). The rest of this vignette will be focused on the many features that forge() offers.
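
For orientation, here is a minimal sketch of what that looks like inside a package. The my_model class, its blueprint field, and the internal prediction helper are hypothetical stand-ins, not part of hardhat's API:

# A sketch of a predict() method built around forge(). `my_model`,
# `object$blueprint`, and `predict_my_model_numeric()` are hypothetical.
predict.my_model <- function(object, new_data, ...) {
  # Preprocess the new data the same way the training data was processed
  processed <- forge(new_data, object$blueprint)

  # Generate predictions from the processed predictors
  predict_my_model_numeric(object, processed$predictors)
}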

Connection with mold()

When mold() is used, one of the returned objects is a blueprint. This is the key to preprocessing new data with forge(). For instance, assume you’ve called mold() like so:

iris_train <- iris[1:100,]
iris_test <- iris[101:150,]
iris_form <- mold(
  log(Sepal.Length) ~ Species + Petal.Width,
  iris_train,
  blueprint = default_formula_blueprint(indicators = FALSE)
)

formula_eng <- iris_form$blueprint

formula_eng
#> Formula blueprint: 
#>  
#> # Predictors: 2 
#>   # Outcomes: 1 
#>    Intercept: FALSE 
#> Novel Levels: FALSE 
#>   Indicators: FALSE

A formula blueprint is returned here, which knows about the predictors and outcomes that were used at training time, and knows that you don’t want to expand Species into dummy variables by setting indicators = FALSE.

When it is time to predict() on new data, that data is passed on to forge() along with the blueprint we just created.

forge(iris_test, formula_eng)
#> $predictors
#> # A tibble: 50 x 2
#>    Petal.Width Species  
#>          <dbl> <fct>    
#>  1         2.5 virginica
#>  2         1.9 virginica
#>  3         2.1 virginica
#>  4         1.8 virginica
#>  5         2.2 virginica
#>  6         2.1 virginica
#>  7         1.7 virginica
#>  8         1.8 virginica
#>  9         1.8 virginica
#> 10         2.5 virginica
#> # … with 40 more rows
#> 
#> $outcomes
#> NULL
#> 
#> $extras
#> $extras$offset
#> NULL

Note that in predictors, Species was not expanded because the blueprint knew about the preprocessing options that were set when mold() was called.

forge() always returns three things, and they should look familiar to you if you have used mold().

  • predictors holds a tibble of the predictors.

  • outcomes is returned as NULL by default, because most predict() methods assume you only have access to the new predictors. Alternatively, as you will read in a moment, this can contain a tibble of the new outcomes.

  • extras varies per blueprint, but is a catch-all slot to hold the same kind of extra objects that were returned by the blueprint when mold() was called.

Outcomes

Generally, when generating predictions, you only need the new predictors. However, when performing resampling you also need the processed outcomes so that you can compute cross-validated performance statistics and decide between multiple models, or between sets of hyperparameters.

You can easily request the outcomes with outcomes = TRUE. Just like the predictors, they are processed with the same steps that were applied to the outcomes at fit time.

forge(iris_test, formula_eng, outcomes = TRUE)
#> $predictors
#> # A tibble: 50 x 2
#>    Petal.Width Species  
#>          <dbl> <fct>    
#>  1         2.5 virginica
#>  2         1.9 virginica
#>  3         2.1 virginica
#>  4         1.8 virginica
#>  5         2.2 virginica
#>  6         2.1 virginica
#>  7         1.7 virginica
#>  8         1.8 virginica
#>  9         1.8 virginica
#> 10         2.5 virginica
#> # … with 40 more rows
#> 
#> $outcomes
#> # A tibble: 50 x 1
#>    `log(Sepal.Length)`
#>                  <dbl>
#>  1                1.84
#>  2                1.76
#>  3                1.96
#>  4                1.84
#>  5                1.87
#>  6                2.03
#>  7                1.59
#>  8                1.99
#>  9                1.90
#> 10                1.97
#> # … with 40 more rows
#> 
#> $extras
#> $extras$offset
#> NULL

Validation

One of the most useful things about forge() is its robustness against malformed new data. It is reasonable to enforce that the new data a user provides at prediction time has the same type as the data used at fit time. Type is defined in the vctrs sense, and for our purposes it essentially means that a number of checks on the test data have to pass, including:

  • The column names of the testing data and training data must be the same.

  • The type of each column of the testing data must be the same as the columns found in the training data. This means:

    • The classes must be the same (e.g. if it was a factor in training, it must be a factor in testing).

    • The attributes must be the same (e.g. the levels of the factors must also be the same).

Almost all of this validation is possible through the use of vctrs::vec_cast(), which forge() calls for you.
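
As a rough illustration of the underlying idea (calling vctrs directly here, rather than through forge()):

# A character vector can be cast to a factor type losslessly
# when its values are a subset of the target levels
vctrs::vec_cast(
  c("setosa", "virginica"),
  to = factor(levels = levels(iris$Species))
)
#> [1] setosa    virginica
#> Levels: setosa versicolor virginica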

Column existence

The easiest example to demonstrate is missing columns in the testing data. forge() won’t let you continue until all of the predictors that were required at training time are present in the new data.

test_missing_column <- subset(iris_test, select = -Species)

forge(test_missing_column, formula_eng)
#> Error: The following required columns are missing: 'Species'.

Column types

After the initial scan of the column names, a deeper check of each column is performed, comparing its type with the type seen at training time. For instance, what happens if the new Species column is a double rather than a factor?

test_species_double <- iris_test
test_species_double$Species <- as.double(test_species_double$Species)

forge(test_species_double, formula_eng)
#> Error: Can't convert `Species` <double> to match type of `Species` <factor<12d60>>.

An error is thrown, indicating that a double can’t be cast to a factor.

Lossless conversion

The error message above suggests that in some cases you can automatically cast from one type to another, and in fact that is true! Rather than being a double, what if Species was just a character?

test_species_character <- iris_test
test_species_character$Species <- as.character(test_species_character$Species)

forged_char <- forge(test_species_character, formula_eng)

forged_char$predictors
#> # A tibble: 50 x 2
#>    Petal.Width Species  
#>          <dbl> <fct>    
#>  1         2.5 virginica
#>  2         1.9 virginica
#>  3         2.1 virginica
#>  4         1.8 virginica
#>  5         2.2 virginica
#>  6         2.1 virginica
#>  7         1.7 virginica
#>  8         1.8 virginica
#>  9         1.8 virginica
#> 10         2.5 virginica
#> # … with 40 more rows

class(forged_char$predictors$Species)
#> [1] "factor"

levels(forged_char$predictors$Species)
#> [1] "setosa"     "versicolor" "virginica"

Interesting, so in this case we can actually convert to a factor, and the class and even the levels are all restored. The key here is that this was a lossless conversion. We lost no information when converting the character Species to a factor because the unique character values were a subset of the original levels.
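
We can check that condition directly:

# All unique character values are valid training levels,
# so the cast loses no information
all(unique(test_species_character$Species) %in% levels(iris_train$Species))
#> [1] TRUE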

An example of a conversion that would be lossy is if the character Species column had a value that was not a level in the training data.

test_species_lossy <- iris_test
test_species_lossy$Species <- as.character(test_species_lossy$Species)
test_species_lossy$Species[2] <- "im new!"

forged_lossy <- forge(test_species_lossy, formula_eng)
#> Warning: Novel levels found in column 'Species': 'im new!'. The levels have been
#> removed, and values have been coerced to 'NA'.

forged_lossy$predictors
#> # A tibble: 50 x 2
#>    Petal.Width Species  
#>          <dbl> <fct>    
#>  1         2.5 virginica
#>  2         1.9 <NA>     
#>  3         2.1 virginica
#>  4         1.8 virginica
#>  5         2.2 virginica
#>  6         2.1 virginica
#>  7         1.7 virginica
#>  8         1.8 virginica
#>  9         1.8 virginica
#> 10         2.5 virginica
#> # … with 40 more rows

In this case:

  • A lossy warning is thrown

  • The Species column is still converted to a factor with the right levels

  • The novel level is removed and its value is set to NA
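
We can confirm that, despite the novel value, the original training levels were restored:

# The novel value became NA, but the levels still match training
levels(forged_lossy$predictors$Species)
#> [1] "setosa"     "versicolor" "virginica"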

Recipes and forge()

Just like with the formula method, a recipe can be used as the preprocessor at fit and prediction time. hardhat handles calling prep(), juice(), and bake() for you at the right times. For instance, say we have a recipe that just creates dummy variables out of Species.

library(recipes)
#> Loading required package: dplyr
#> 
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#> 
#>     filter, lag
#> The following objects are masked from 'package:base':
#> 
#>     intersect, setdiff, setequal, union
#> 
#> Attaching package: 'recipes'
#> The following object is masked from 'package:stats':
#> 
#>     step

rec <- recipe(Sepal.Width ~ Sepal.Length + Species, iris_train) %>%
  step_dummy(Species)

iris_recipe <- mold(rec, iris_train)

iris_recipe$predictors
#> # A tibble: 100 x 3
#>    Sepal.Length Species_versicolor Species_virginica
#>           <dbl>              <dbl>             <dbl>
#>  1          5.1                  0                 0
#>  2          4.9                  0                 0
#>  3          4.7                  0                 0
#>  4          4.6                  0                 0
#>  5          5                    0                 0
#>  6          5.4                  0                 0
#>  7          4.6                  0                 0
#>  8          5                    0                 0
#>  9          4.4                  0                 0
#> 10          4.9                  0                 0
#> # … with 90 more rows
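
To make the division of labor concrete, this is roughly the manual recipes workflow that hardhat wraps for you (a sketch of the idea, not the exact internals):

# Approximately what hardhat automates:
prepped <- prep(rec, training = iris_train)

# At fit time: extract the processed training predictors
juice(prepped, all_predictors())

# At prediction time: apply the same steps to the new data
bake(prepped, new_data = iris_test, all_predictors())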

The blueprint is a recipe blueprint.

recipe_eng <- iris_recipe$blueprint

recipe_eng
#> Recipe blueprint: 
#>  
#> # Predictors: 2 
#>   # Outcomes: 1 
#>    Intercept: FALSE 
#> Novel Levels: FALSE

When we forge(), we can again request the outcomes, and they come back separated from the predictors, just like with the formula method.

forge(iris_test, recipe_eng, outcomes = TRUE)
#> $predictors
#> # A tibble: 50 x 3
#>    Sepal.Length Species_versicolor Species_virginica
#>           <dbl>              <dbl>             <dbl>
#>  1          6.3                  0                 1
#>  2          5.8                  0                 1
#>  3          7.1                  0                 1
#>  4          6.3                  0                 1
#>  5          6.5                  0                 1
#>  6          7.6                  0                 1
#>  7          4.9                  0                 1
#>  8          7.3                  0                 1
#>  9          6.7                  0                 1
#> 10          7.2                  0                 1
#> # … with 40 more rows
#> 
#> $outcomes
#> # A tibble: 50 x 1
#>    Sepal.Width
#>          <dbl>
#>  1         3.3
#>  2         2.7
#>  3         3  
#>  4         2.9
#>  5         3  
#>  6         3  
#>  7         2.5
#>  8         2.9
#>  9         2.5
#> 10         3.6
#> # … with 40 more rows
#> 
#> $extras
#> $extras$roles
#> NULL

A note on recipes

One complication with recipes is that, at bake() time, the predictors and the outcomes are processed together. This means you might run into a situation where the outcomes seem to be required by forge(), even when you aren’t requesting them.

rec2 <- recipe(Sepal.Width ~ Sepal.Length + Species, iris_train) %>%
  step_dummy(Species) %>%
  step_center(Sepal.Width) # Here we modify the outcome

iris_recipe2 <- mold(rec2, iris_train)

recipe_eng_center_outcome <- iris_recipe2$blueprint

If our new_data doesn’t have the outcome, baking this recipe will fail even if we don’t request that the outcomes are returned by forge().

iris_test_no_outcome <- subset(iris_test, select = -Sepal.Width)

forge(iris_test_no_outcome, recipe_eng_center_outcome)
#> Error: Can't subset columns that don't exist.
#> ✖ Column `Sepal.Width` doesn't exist.

The way around this is to use the built-in recipes argument, skip, on the step that modifies the outcome. This skips that step’s processing at bake() time.

rec3 <- recipe(Sepal.Width ~ Sepal.Length + Species, iris_train) %>%
  step_dummy(Species) %>%
  step_center(Sepal.Width, skip = TRUE)

iris_recipe3 <- mold(rec3, iris_train)

recipe_eng_skip_outcome <- iris_recipe3$blueprint

forge(iris_test_no_outcome, recipe_eng_skip_outcome)
#> $predictors
#> # A tibble: 50 x 3
#>    Sepal.Length Species_versicolor Species_virginica
#>           <dbl>              <dbl>             <dbl>
#>  1          6.3                  0                 1
#>  2          5.8                  0                 1
#>  3          7.1                  0                 1
#>  4          6.3                  0                 1
#>  5          6.5                  0                 1
#>  6          7.6                  0                 1
#>  7          4.9                  0                 1
#>  8          7.3                  0                 1
#>  9          6.7                  0                 1
#> 10          7.2                  0                 1
#> # … with 40 more rows
#> 
#> $outcomes
#> NULL
#> 
#> $extras
#> $extras$roles
#> NULL

There is a tradeoff here that you need to be aware of.

  • If you are just interested in generating predictions on completely new data, you can safely use skip = TRUE because you will almost never have access to the corresponding true outcomes to preprocess and compare against.

  • If you know you need to do resampling, you will likely have access to the outcomes during the resampling step so that you can cross-validate performance. In that case, you can’t set skip = TRUE, because the outcomes would not be processed at forge() time; but since you do have access to the outcomes there, you shouldn’t need to skip the step.

For example, if we used iris_test (which does have the outcome) with the above recipe, Sepal.Width would not get centered when forge() is called. But we probably would not have skipped that step if we knew that our test data would contain the outcome.

forge(iris_test, recipe_eng_skip_outcome, outcomes = TRUE)$outcomes
#> # A tibble: 50 x 1
#>    Sepal.Width
#>          <dbl>
#>  1         3.3
#>  2         2.7
#>  3         3  
#>  4         2.9
#>  5         3  
#>  6         3  
#>  7         2.5
#>  8         2.9
#>  9         2.5
#> 10         3.6
#> # … with 40 more rows

# Notice that the outcome values haven't been centered:
# they match the raw test data
head(iris_test$Sepal.Width)
#> [1] 3.3 2.7 3.0 2.9 3.0 3.0