Track changes in data with the lumberjack %>>%

So you are using the %>% pipe to run your data through a sequence of treatments in R. For example, you may be imputing some missing values using the simputation package. Let us first load the only realistic dataset in R.

> data(retailers, package="validate")
> head(retailers, 3)
  size incl.prob staff turnover other.rev total.rev staff.costs total.costs profit vat
1  sc0      0.02    75       NA        NA      1130          NA       18915  20045  NA
2  sc3      0.14     9     1607        NA      1607         131        1544     63  NA
3  sc3      0.14    NA     6886       -33      6919         324        6493    426  NA

This data is dirty with missings and full of errors. Let us do some imputations with simputation.

> out <- retailers %>% 
+   impute_lm(other.rev ~ turnover) %>%
+   impute_median(other.rev ~ size)
> head(out,3)
  size incl.prob staff turnover other.rev total.rev staff.costs total.costs profit vat
1  sc0      0.02    75       NA  6114.775      1130          NA       18915  20045  NA
2  sc3      0.14     9     1607  5427.113      1607         131        1544     63  NA
3  sc3      0.14    NA     6886   -33.000      6919         324        6493    426  NA

Ok, cool, we know all that. But what if you'd like to know what value was imputed with which method? That's where the lumberjack comes in.

The lumberjack operator is a 'pipe'[1] operator that allows you to track changes in data.

> library(lumberjack)
> retailers$id <- seq_len(nrow(retailers))
> out <- retailers %>>% 
+   start_log(log=cellwise$new(key="id")) %>>%
+   impute_lm(other.rev ~ turnover) %>>%
+   impute_median(other.rev ~ size) %>>%
+   dump_log(stop=TRUE)
Dumped a log at cellwise.csv
> read.csv("cellwise.csv") %>>% dplyr::arrange(key) %>>% head(3)
  step                     time                      expression key  variable old      new
1    2 2017-06-23 21:11:05 CEST impute_median(other.rev ~ size)   1 other.rev  NA 6114.775
2    1 2017-06-23 21:11:05 CEST impute_lm(other.rev ~ turnover)   2 other.rev  NA 5427.113
3    1 2017-06-23 21:11:05 CEST impute_lm(other.rev ~ turnover)   6 other.rev  NA 6341.683

So, to track changes we only need to switch from %>% to %>>% and add the start_log() and dump_log() function calls to the data pipeline. (To be sure: this works with any function, not only with simputation.) The package is on CRAN now; please see the introductory vignette for more examples and ways to customize it.

There are many ways to track changes in data. That is why the lumberjack is completely extensible. The package comes with a few loggers, but users or package authors are invited to write their own. Please see the extending lumberjack vignette for instructions.
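To give an idea, a custom logger is a reference object that records changes between pipeline steps. The sketch below is my own rough illustration, not code from the package: the add(meta, input, output) and dump() method names, and the meta$src field, are assumptions based on the extending vignette, so check there for the authoritative interface.

```r
# A minimal logger that only counts how many cells changed in each step.
# NOTE: method names and meta$src are assumptions; see the extending vignette.
cellcount <- setRefClass("cellcount",
  fields = list(counts = "numeric"),
  methods = list(
    add = function(meta, input, output){
      # count cells that differ between the input and output of this step
      changed <- sum(
        (is.na(input) != is.na(output)) |
        (!is.na(input) & !is.na(output) & input != output)
      )
      counts[[meta$src]] <<- changed
    },
    dump = function(...){
      print(counts)
    }
  )
)
```

With such a class in place, start_log(log = cellcount$new()) would attach it to a pipeline just like the cellwise logger above.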

If this post got you interested, please install the package using

> install.packages("lumberjack")

You can get started with the introductory vignette or even just use the lumberjack operator %>>% as a (close) replacement of the %>% operator.

As always, I am open to suggestions and comments, for example through the package's github page.

Also, I will be talking at useR2017 about the simputation package, but I will sneak in a bit of lumberjack as well :p.

And finally, here's a picture of a lumberjack smoking a pipe.

[1] It really should be called a function composition operator, but potayto/potahto.


Announcing the simputation package: make imputation simple

I am happy to announce that my simputation package has appeared on CRAN this weekend. This package aims to simplify missing value imputation. In particular it offers standardized interfaces that

  • make it easy to define both imputation method and imputation model;
  • for multiple variables at once;
  • while grouping data by categorical variables;
  • all fitting in the magrittr not-a-pipeline.

A few examples

To start with an example, let us first create a data set with some missings.

dat <- iris
# empty a few fields
dat[1:3,1] <- dat[3:7,2] <- dat[8:10,5] <- NA
head(dat, 10)
##    Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## 1            NA         3.5          1.4         0.2  setosa
## 2            NA         3.0          1.4         0.2  setosa
## 3            NA          NA          1.3         0.2  setosa
## 4           4.6          NA          1.5         0.2  setosa
## 5           5.0          NA          1.4         0.2  setosa
## 6           5.4          NA          1.7         0.4  setosa
## 7           4.6          NA          1.4         0.3  setosa
## 8           5.0         3.4          1.5         0.2    <NA>
## 9           4.4         2.9          1.4         0.2    <NA>
## 10          4.9         3.1          1.5         0.1    <NA>

Below, we first impute Sepal.Width and Sepal.Length by regression on Petal.Width and Species. After this, we impute Species with a decision tree model (CART), using every other variable as a predictor (including the ones just imputed).

library(magrittr)    # load the %>% operator
imputed <- dat %>% 
  impute_lm(Sepal.Width + Sepal.Length ~ Petal.Width + Species) %>%
  impute_cart(Species ~ .)
head(imputed, 10)
##    Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## 1      4.979844    3.500000          1.4         0.2  setosa
## 2      4.979844    3.000000          1.4         0.2  setosa
## 3      4.979844    3.409547          1.3         0.2  setosa
## 4      4.600000    3.409547          1.5         0.2  setosa
## 5      5.000000    3.409547          1.4         0.2  setosa
## 6      5.400000    3.561835          1.7         0.4  setosa
## 7      4.600000    3.485691          1.4         0.3  setosa
## 8      5.000000    3.400000          1.5         0.2  setosa
## 9      4.400000    2.900000          1.4         0.2  setosa
## 10     4.900000    3.100000          1.5         0.1  setosa

The package is pretty forgiving when imputation fails. For example, if one of the predictors is missing, the affected fields simply remain unimputed, and if one of the models cannot be fitted, only a warning is issued (not shown here).

dat %>% impute_lm(Sepal.Length ~ Sepal.Width + Species) %>% head(3)
##   Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## 1     5.076579         3.5          1.4         0.2  setosa
## 2     4.675654         3.0          1.4         0.2  setosa
## 3           NA          NA          1.3         0.2  setosa

So here, the third Sepal.Length value could not be imputed since the predictor Sepal.Width is missing.
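Because fields that one method leaves unimputed are simply picked up by the next step in the chain, a cruder fallback can follow the regression. A sketch reusing impute_median from earlier (this particular combination is my own choice, just for illustration):

```r
library(magrittr)
library(simputation)

dat %>%
  impute_lm(Sepal.Length ~ Sepal.Width + Species) %>%  # model-based first
  impute_median(Sepal.Length ~ Species) %>%            # median fallback for what is left
  head(3)
```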

It is possible to split data into groups before estimating the imputation model and predicting missing values. There are two ways. The first is to use the | operator to specify grouping variables.

# We first need to complete 'Species'. Here, we use sequential 
# hot deck after sorting by Petal.Length
dat %<>% impute_shd(Species ~ Petal.Length) 
# Now impute Sepal.Length by regressing on 
# Sepal.Width, computing a model for each Species.
dat %>% impute_lm(Sepal.Length ~ Sepal.Width | Species) %>% head(3)
##   Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## 1     5.067813         3.5          1.4         0.2  setosa
## 2     4.725677         3.0          1.4         0.2  setosa
## 3           NA          NA          1.3         0.2  setosa

The second way is to use the group_by command from dplyr:

dat %>% dplyr::group_by(Species) %>% 
    impute_lm(Sepal.Length ~ Sepal.Width) %>% 
    head(3)
## Source: local data frame [3 x 5]
## Groups: Species [1]
##   Sepal.Length Sepal.Width Petal.Length Petal.Width Species
##          <dbl>       <dbl>        <dbl>       <dbl>  <fctr>
## 1     5.067813         3.5          1.4         0.2  setosa
## 2     4.725677         3.0          1.4         0.2  setosa
## 3           NA          NA          1.3         0.2  setosa

Note: by using group_by, we also transformed the data.frame to a tibble, which not only sounds funny when you pronounce it (tibble, TIBBLE, tibble? tibbebbebbebble) but is also pretty useful.

Supported methods and how to specify them

Currently, the package supports the following methods:

  • Model based (optionally add [non-]parametric random residual)
    • linear regression
    • robust linear regression
    • CART models
    • Random forest
  • Donor imputation (including various donor pool specifications)
    • k-nearest neighbour (based on Gower's distance)
    • sequential hotdeck (LOCF, NOCB)
    • random hotdeck
    • Predictive mean matching
  • Other
    • (groupwise) median imputation (optional random residual)
    • Proxy imputation (copy from other variable)

Any call to one of the impute_ functions looks as follows:

impute_<method>(data, formula [, <method-specific options>])

and the formula always has the following form:

<imputed variables> ~ <model specification> [|<grouping variables>]

The parts in square brackets are optional.
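Filling in this template with a method from the list above, a short sketch (the choice of robust regression and these particular variables is mine, just for illustration):

```r
library(simputation)

# robust linear regression of Sepal.Length on Petal.Length,
# fitting a separate model for each Species
imputed <- impute_rlm(iris, Sepal.Length ~ Petal.Length | Species)
```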

Please see the package vignette for more examples and details, or ?simputation::impute_ for an overview of all imputation functions.

Happy imputing!


stringdist released

stringdist was accepted on CRAN at the end of last week.

This release just fixes a few bugs, mostly affecting the stringdistmatrix function when called with a single argument.

From the NEWS file:

  • bugfix in stringdistmatrix(a): value of p, for jw-distance was ignored (thanks to Max Fritsche)
  • bugfix in stringdistmatrix(a): Would segfault on q-gram w/input > ~7k strings and q>1 (thanks to Connor McKay)
  • bugfix in jaccard distance: distance not always correct when passing multiple strings (thanks to Robert Carlson)

Actually the last bug has not bitten anyone since it was masked by the second one 🙂 (it was reported and fixed a long time ago but popped up again after fixing the second bug -- hat tip to Hadley for testthat!). The second fix also ensures that stringdist's memory allocator for q-gram storage is called fewer times which yields a speed gain in computation of q-gram based distances.
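The p-parameter fix is easy to check in the single-argument form; a small sketch (inputs made up, outputs omitted):

```r
library(stringdist)

x <- c("foo", "boo", "bar")
# before the fix, p was silently ignored here; now the two results differ
d0 <- stringdistmatrix(x, method = "jw", p = 0)    # plain jaro
d1 <- stringdistmatrix(x, method = "jw", p = 0.1)  # jaro-winkler
```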


validate version 0.1.5 is out

A new version of the validate package for data validation was just accepted on CRAN and will be available on all mirrors in a few days.

The most important addition is that you can now reference the data set as a whole, using the "dot" syntax like so:

iris %>% check_that(
    nrow(.) > 100
  , "Sepal.Width" %in% names(.)) %>% summary()

  rule items passes fails nNA error warning                  expression
1   V1     1      1     0   0 FALSE   FALSE               nrow(.) > 100
2   V2     1      1     0   0 FALSE   FALSE "Sepal.Width" %in% names(.)

Also, it is now possible to force a logical result, even when a rule evaluates to NA, by passing the na.value option.

dat = data.frame(x=c(1,NA,-1))
v = validator(x > 0)
values(confront(dat, v))
[1,]  TRUE
[2,]    NA
[3,] FALSE
values(confront(dat, v, na.value=FALSE))
[1,]  TRUE
[2,] FALSE
[3,] FALSE

A complete list of changes and bugfixes can be found in the NEWS file. Below I also include the changes in 0.1.4, since I did not write about that release before.

I will be talking about this package at the upcoming useR!2016 event, so join me if you're interested!

version 0.1.5

  • The '.' is now used to reference the validated data set as whole.
  • Small change in output of 'compare' to match the table in van den Broek et al. (2013)

version 0.1.4

  • 'confront' now emits a warning when a variable name conflicts with the name of a reference data set
  • Deprecated 'validate_reset', in favour of the shorter 'reset' (use 'validate::reset' in case of ambiguity)
  • Deprecated 'validate_options' in favour of the shorter 'voptions'
  • New option na.value with default value NA, controlling the output when a rule evaluates to NA.
  • Added rules from the ESSnet on validation (deliverable 17) to automated tests.
  • added 'grepl' to allowed validation syntax (suggested by Dusan Sovic)
  • exported a few functions w/ keywords internal for extensibility
  • Bugfix: blocks sometimes reported wrong number of blocks (in case of a single connected block).
  • Bugfix: macro expansion failed when macros were reused in other macros.
  • Bugfix: certain nonlinear relations were recognized as linear
  • Bugfix: rules that use (anonymous) function definitions raised error when printed.

Easy data validation with the validate package

The validate package is our attempt to make checking data against domain knowledge as easy as possible. Here is an example.


library(validate)
library(magrittr)

iris %>% check_that(
  Sepal.Width > 0.5 * Sepal.Length
  , mean(Sepal.Width) > 0
  , if ( Sepal.Width > 0.5*Sepal.Length) Sepal.Length > 10
) %>% summary()

#  rule items passes fails nNA error warning                                              expression
# 1   V1   150     66    84   0 FALSE   FALSE                        Sepal.Width > 0.5 * Sepal.Length
# 2   V2     1      1     0   0 FALSE   FALSE                                   mean(Sepal.Width) > 0
# 3   V3   150     84    66   0 FALSE   FALSE !(Sepal.Width > 0.5 * Sepal.Length) | Sepal.Length > 10

The summary gives an overview of the number of items checked. For an aggregated test, such as the one where we test the mean of a variable, only one item is tested: the whole Sepal.Width column. The other rules are tested on each record in iris. Furthermore, the number of items that pass, fail, or could not be evaluated because of missingness is reported.

In validate, data validation rules are considered objects of computation that may be stored, read, manipulated and investigated. The validator object supports such activities so validation rules can be reused.

v <- validator(
  ratio = Sepal.Width > 0.5 * Sepal.Length
  , mean = mean(Sepal.Width) > 0
  , cnd = if ( Sepal.Width > 0.5*Sepal.Length) Sepal.Length > 10
)
v

# Object of class 'validator' with 3 elements:
#  ratio: Sepal.Width > 0.5 * Sepal.Length
#  mean : mean(Sepal.Width) > 0
#  cnd  : !(Sepal.Width > 0.5 * Sepal.Length) | Sepal.Length > 10

We can confront the iris data set with this validator. The results are stored in a validation object.

cf <- confront(iris, v)
cf

# Object of class 'validation'
# Call:
#     confront(x = iris, dat = v)
# Confrontations: 3
# With fails    : 2
# Warnings      : 0
# Errors        : 0
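The validation object can be inspected further. A short sketch using two accessor functions from the package, summary() and values():

```r
library(validate)

v  <- validator(ratio = Sepal.Width > 0.5 * Sepal.Length)
cf <- confront(iris, v)

summary(cf)  # per-rule counts of items, passes, fails and NAs
values(cf)   # the underlying logical results, record by record
```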

These are just the basics of what can be done with this package.

  • If this post got you interested, you can go through our introductory vignette
  • Some theory on data validation can be found here
  • We'd love to hear your suggestions, opinions, bug reports here
  • An introduction on how to retrieve and store rules from textfiles can be found in a second vignette
  • Github repo, CRAN page

settings 0.2.3

An updated version of the settings package has been accepted on CRAN.

The settings package provides alternative options management for R. It aims to allow for layered options management, where global options act as defaults that can easily be overruled locally (e.g. when calling a function, or by options that are part of an object).

New features:

  • Setting ranges or lists of allowed values.
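A rough sketch of how such restrictions can be declared. The inrange() and inlist() helpers and the .allowed argument used here are my reading of the package interface; check the vignette for the exact form:

```r
library(settings)

# an options manager where 'fontsize' must lie in [6, 72] and
# 'units' must be one of the listed strings
opt <- options_manager(
  fontsize = 11,
  units    = "cm",
  .allowed = list(
    fontsize = inrange(min = 6, max = 72),
    units    = inlist("cm", "inch", "pt")
  )
)
```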

See the vignette to get started; all code is on github.


stringdist 0.9.4 and 0.9.3: distances between integer sequences

A new release of stringdist has been accepted on CRAN.

stringdist offers a number of popular distance functions between sequences of integers or characters that are independent of character encoding.

version 0.9.4

  • bugfix: edge case for zero-size for lower tridiagonal dist matrices (caused UBSAN to fire, but gave correct results).
  • bugfix in jw distance: not symmetric for certain cases (thanks to github user gtumuluri)

Since 0.9.3, stringdist can compute distances between integer sequences, and you can use the hashr package to compute an integer representation of any (sequence of) R objects, based on the SuperFastHash algorithm by Paul Hsieh.

version 0.9.3

  • new functions computing distances between integer sequences: seq_dist, seq_distmatrix
  • new function for tokenizing integer sequences: seq_qgrams
  • new function for matching integer sequences: seq_amatch
  • q-gram based distances are now always 0 when q=0 (used to be Inf if at least one of the arguments was not the empty string)
  • stringdist, stringdistmatrix now emit warning when presented with list argument
  • small c-side code optimizations
  • bugfix in dl, lv, osa distance: weights were not taken into account properly (thanks to Zach Price)

All code on github.


Stringdist 0.9.2: dist objects, string similarities and some deprecated arguments

On 24-06-2015 stringdist 0.9.2 was accepted on CRAN. A summary of new features can be found in the NEWS file; here I discuss the changes with some examples.

Computing 'dist' objects with 'stringdistmatrix'

The R dist object is used as input for many clustering algorithms, such as stats::hclust. It stores the lower triangle of a matrix of distances between a vector of objects. The function stringdist::stringdistmatrix now takes a variable number of character arguments. If two vectors are given, it behaves the same as it used to.

> x <- c("fu","bar","baz","barb")
> stringdistmatrix(x,x,useNames="strings")
     fu bar baz barb
fu    0   3   3    4
bar   3   0   1    1
baz   3   1   0    2
barb  4   1   2    0

However, we're doing more work than necessary. Feeding stringdistmatrix just a single character argument yields the same information, but at half the computational and storage cost.

> stringdistmatrix(x,useNames="strings")
     fu bar baz
bar   3        
baz   3   1    
barb  4   1   2

The output is a dist object storing only the subdiagonal triangle. This makes it particularly easy to cluster texts using any algorithm that takes a dist object as its argument. Many such algorithms are available in R, for example:

d <- stringdistmatrix(x,useNames="strings")
h <- stats::hclust(d)


(by the way, parallelizing the calculation of a lower triangle of a matrix poses an interesting exercise in index calculation. For those interested, I wrote it down)

Better labeling of distance matrices

Distance matrices can be labeled with the input strings by setting the useNames argument in stringdistmatrix to TRUE or FALSE (the default). However, if you're computing distances between looooong strings, like complete texts, it is more convenient to use the names attribute of the input vector. So, the useNames argument now accepts three different values.

> x <- c(one="fu",two="bar",three="baz",four="barb")
> y <- c(a="foo",b="fuu")
> # the default:
> stringdistmatrix(x,y,useNames="none") 
     [,1] [,2]
[1,]    2    1
[2,]    3    3
[3,]    3    3
[4,]    4    4
> # like useNames=TRUE
> stringdistmatrix(x,y,useNames = "strings")
     foo fuu
fu     2   1
bar    3   3
baz    3   3
barb   4   4
> # use labels
> stringdistmatrix(x,y,useNames="names")
      a b
one   2 1
two   3 3
three 3 3
four  4 4

String similarities

Thanks to Jan van der Laan, a string similarity convenience function has been added. It computes the distance d between two strings and then rescales it as 1 - d/d_max, where the maximum possible distance d_max depends on the type of distance metric and (depending on the metric) the length of the strings.

# similarity based on the damerau-levenshtein distance
> stringsim(c("hello", "World"), c("Ola", "Mundo"),method="dl")
[1] 0.2 0.0
# similarity based on the jaro distance
> stringsim(c("hello", "World"), c("Ola", "Mundo"),method="jw")
[1] 0.5111111 0.4666667

Here a similarity of 0 means completely different and 1 means exactly the same (within the chosen metric).

Deprecated arguments

The stringdistmatrix function used to have the option of computing distances in parallel, based on facilities of the parallel package. However, as of stringdist 0.9.0, all distance calculations are multicored by default.

Therefore, I'm phasing out the following options in stringdistmatrix:

  • ncores (how many R sessions should be started by parallel to compute the matrix)
  • cluster (optionally, provide your own cluster, created by parallel::makeCluster())

These arguments are now ignored with a message, but they'll remain available until somewhere in 2016 so users have time to adapt their code. Please mail me if you have any trouble doing so.


stringdist 0.9: exercise all your cores

The latest release of the stringdist package for approximate text matching has two performance-enhancing novelties. First of all, encoding conversion got a lot faster since this is now done from C rather than from R.

Secondly, stringdist now employs multithreading based on the openmp protocol. This means that calculations are now parallelized on multicore machines running OS's that support openmp.

The stringdist package offers two main functions, both of which are now parallelized with openmp:

  • stringdist can compute a number of different string metrics between vectors of strings (see here)
  • amatch is an approximate text matching version of R's native match function.
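As an illustration of the latter, a small sketch of amatch (the lookup table and tolerance here are made up by me):

```r
library(stringdist)

# match 'leia' against a lookup table, allowing at most 2 edits;
# returns the index of the closest match within tolerance, or NA
amatch("leia", c("uhura", "leela", "spock"), maxDist = 2)
# with these made-up inputs this yields 2, the index of "leela"
```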

By default, the package now uses the following number of cores: if your machine has one or two cores, all of them are used. If your machine has three or more cores, n - 1 cores are used, where n is determined by a call to parallel::detectCores(). This way, you can still use your computer for other things while stringdist is doing its job. I chose this default since I noticed in some benchmarks that using all n cores in a computation is sometimes slower than using n - 1. This is probably because one of the cores is occupied with (for example) competing OS tasks, but I haven't thoroughly investigated that. You may still decrease or increase the maximum amount of resources consumed since both amatch and stringdist now have a nthread argument. You may also alter the global option


or change the environment variable OMP_THREAD_LIMIT prior to loading stringdist, but I'm digressing into details now.

A simple benchmark on my quadcore Linux machine (code at the end of the post) shows a near linear speedup as a function of the number of cores. The (default) distance computed here is the optimal string alignment distance. For this benchmark I sampled 10k strings of lengths between 5 and 31 characters. The first benchmark (left panel) shows the time it takes to compute 10k pairwise distances as a function of the number of cores used (nthread=1,2,3,4). The right panel shows how much time it takes to fuzzy-match 15 strings against a table of 10k strings as a function of the number of threads. The areas around the lines show the 1st and 3rd quartile interval of timings (thanks to the awesome microbenchmark package of Olaf Mersmann).


According to the Writing R extensions manual, certain commercially available operating systems have extra (fixed?) overhead when running openmp-based multithreading. However, for larger computations this shouldn't really matter.


library(stringdist)
library(microbenchmark)

# number of strings
N <- 10000

# Generate N random strings of length min_len to max_len
rand_str <- function(N, min_len=5, max_len=31){
  len <- sample(min_len:max_len, size=N, replace=TRUE)
  sapply(len, function(n) paste(sample(letters, n, replace=TRUE), collapse=""))
}

# plot results. bm: an object of class microbenchmark
bmplot <- function(bm, ...){
  s <- summary(bm)
  unit <- attr(s, "unit")
  med <- s$median
  uq <- s$uq
  lq <- s$lq
  cores <- seq_along(med)
  plot(cores, med, col='white'
    , xlab = "Cores used"
    , ylab = sprintf("Time (%s)", unit)
    , ...
  )
  # shaded band between 1st and 3rd quartile of timings
  polygon(c(cores, rev(cores)), c(lq, rev(uq))
    , col=adjustcolor('blue', alpha.f = 0.1)
    , border=NA)
  # median timing per number of cores
  lines(cores, med, col='blue')
}

x <- rand_str(N)
y <- rand_str(N)

bm_sd <- microbenchmark(times=100
  , stringdist(x, y, nthread=1)
  , stringdist(x, y, nthread=2)
  , stringdist(x, y, nthread=3)
  , stringdist(x, y, nthread=4)
)

n <- 15
x1 <- x[1:n]
bm_am <- microbenchmark(times=25
  , amatch(x1, y, nthread=1)
  , amatch(x1, y, nthread=2)
  , amatch(x1, y, nthread=3)
  , amatch(x1, y, nthread=4)
)

bmplot(bm_sd, main=sprintf("stringdist %d strings", N))
bmplot(bm_am, main=sprintf("amatch %dx%d strings", n, N))

Easy to use option settings management with the 'settings' package

Last week I released a new package called settings. It grew out of frustration that built up during several small projects where I generate heavily parameterized d3/js output. What I wanted was support to

  • define a whole bunch of option settings with default values;
  • be able to set them globally or locally within a function or object without explicitly re-assigning every setting;
  • reset (global) option settings to default with ease.

Turns out, the first and last wishes on my list are fulfilled by the futile.options package. I really wanted the inheritance features though, so I experimented a number of times with different implementations. Most of those were based on reference classes holding (global) option settings. In the end I chose a functional approach, inspired by futile.options. I feel this approach is both lightweight (the package's code basically fits readably on an A4 page) and elegant[1].

I'm going to give a quick tour of the package here, and refer to the package vignette for extensive examples.

You can define an options manager like this.

opt <- options_manager(foo=0,bar=1)

opt is a function that acts like R's default options function.

# get option settings:
> opt('foo')
[1] 0
# change option settings
> opt(bar=10,foo=6)
> opt()
$foo
[1] 6

$bar
[1] 10

The cool thing is that you can reset it to defaults like this.

> reset(opt)
> opt()
$foo
[1] 0

$bar
[1] 1

The second cool thing is that you can create a copy, where the copy has the same defaults but new current settings.

> loc_opt <- clone_and_merge(opt,foo=7)
> loc_opt()
$foo
[1] 7

$bar
[1] 1
# loc_opt can be reset locally:
> reset(loc_opt)
> loc_opt()
$foo
[1] 0

$bar
[1] 1

Resetting or otherwise altering loc_opt does not affect the global options set in opt. Of course, loc_opt can be cloned again and again.

This is useful when you write a function where you want to merge options passed through the dot-dot-dot argument with global options. For example:

# user may or may not want to add options like foo=10 when calling 'myfunc'
myfunc <- function(x, ...){
  # merge user-defined options with global options in globally defined 'opt'
  loc_opt <- clone_and_merge(opt, ...)
  # use local options
  loc_opt('foo') + loc_opt('bar') * x
}

For more examples, including on how to use this in S4 or reference classes, or how to use settings as an options manager in a package, please see the package vignette. As always, the code is available on github.

[1] Well, it hurts to say there's a bit of abstraction leakage here: there are two option names that cannot be used: .__defaults and .__reset, but the package provides methods to protect against that.
