Tutorial | Dataiku for R Users (Advanced)#

Dataiku is a collaborative data science and machine learning platform. Its visual tools enable collaboration with a wider pool of colleagues who may not be coders (or R coders for that matter). At the same time, code integrations for languages like Python and R retain the flexibility needed when greater customization or freedom is desired.

Over the course of this tutorial, you’ll recreate a simple Flow built with visual tools using the integrations designed for R users.


You can also find this tutorial as part of the Academy course on Dataiku for R Users, which is part of the Developer learning path.

Get started#

In this tutorial, you’ll learn how to:

  • Build an ML pipeline from R notebooks, recipes, and project variables.

  • Apply a custom R code environment to use CRAN packages not found in the base environment.

  • Use the Dataiku R API to edit Dataiku recipes and create ggplot insights from within RStudio.

  • Import R code from a git repository into a Dataiku project library.

  • Work with managed folders to handle file types such as “*.RData”.


To complete this tutorial, you’ll need access to a Dataiku instance with the R integration enabled.

Workflow overview#

When you have completed the tutorial, you will have built the bottom half of the Flow pictured below:

Completed Flow with parallel R and visual workflows for preparing and scoring an ML model

Create the project#

We’ll start with a project built entirely with visual tools:

  • The data pipeline consists of visual recipes.

  • The native Chart builder has been used for exploratory visualizations.

  • The machine learning model (shown in green) has been built with the visual ML interface.

Although using visual tools can amplify collaboration and understanding with colleagues who don’t code in R, re-creating this Flow in R opens the door to greater flexibility and customization at every stage.

  1. From the Dataiku homepage, click on +New Project > DSS tutorials > Developer > Dataiku for R Users.


    You can also download the starter project from this website and import it as a zip file.

  2. After creating the project, go to the Flow, and click Build all > Build from the Flow Actions menu in the bottom right corner.

    Un-built flow of visual recipes and ML components

Observe the visual Flow#

Even if you are primarily an R user, it is helpful to familiarize yourself with the available set of visual recipes and what they can achieve.

Although the mapping in the table below is far from one-to-one, it suggests a Dataiku recipe that performs a similar operation for some of the most common data preparation functions in base R or the tidyverse.

| R package | R function | Similar Dataiku recipe |
|---|---|---|
| base, dplyr | within(), mutate() | Formulas in Prepare recipe |
| base, dplyr | subset(), select() | Delete/Keep processor in Prepare recipe |
| base, dplyr | subset(), filter() | Filter processor in Prepare recipe; Sample/Filter recipe; Pre/post Filter step in many visual recipes |
| base, dplyr | order(), arrange() | Sort recipe |
| dplyr | group_by() %>% summarize() | Group recipe |
| dplyr | group_by() %>% mutate() | Window recipe |
| base, dplyr | merge(), inner_join(), etc. | Join recipe |
| base, dplyr | unique(), distinct() | Distinct recipe |
| tidyr | gather() / pivot_longer() | Fold multiple columns processor in Prepare recipe |
| tidyr | spread() / pivot_wider() | Pivot recipe; Pivot processor in Prepare recipe |
| base, dplyr | rbind(), bind_rows() | Stack recipe |
| base, dplyr | subset(), group_split() | Split recipe |
| base, dplyr | head/tail(), slice_min/max() | Top N recipe |
| base, stringr | grepl(), str_detect(), etc. | String processors in Prepare recipe |
| base, lubridate | as.Date(), ymd(), etc. | Date processors in Prepare recipe |
| fuzzyjoin | stringdist_join(), etc. | Fuzzy Join recipe |


As shown in the table, processors found in the Prepare recipe handle many data preparation functions. Moreover, many recipes and processors, although they present a visual interface, are SQL-compatible.

Create an R recipe#

If you look at the Prepare recipe that creates the churn_prepared dataset, you’ll see it contains only a few simple steps. For routine data preparation, a visual recipe is an excellent choice since a wider pool of colleagues can more easily understand the actions in the Flow.

That being said, an R recipe grants you the freedom to code as you wish.

  1. From the churn_copy dataset, add an R recipe from the Actions sidebar on the right.

  2. Name the output dataset churn_prepared_r.

  3. Click Create Dataset and Create Recipe.

Dialog of new R recipe with churn_copy as input and churn_prepared_r as output dataset

Let’s break down the default R code recipe.

Default R code recipe

The first line loads the dataiku R package, which includes functions for interacting with Dataiku objects, such as datasets and folders.

Two functions from this package are included in the default recipe: dkuReadDataset() and dkuWriteDataset(). These functions simplify the process for reading and writing datasets.

The churn_copy dataset, in this case, is a managed filesystem dataset, resulting from the original uploaded CSV file. However, if the Sync recipe were instead moving the CSV file to an SQL database or an HDFS cluster, the syntax in the R recipe would be exactly the same.

The line churn_prepared_r <- churn_copy assigns the input dataset as the output dataset. You’ll want to replace this with your own logic to define a new output dataset based on the input.
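
Putting that together, the default recipe body looks roughly like the sketch below (reconstructed from the description above; the sampling arguments generated on your instance may differ):

    library(dataiku)

    # Recipe inputs
    churn_copy <- dkuReadDataset("churn_copy", samplingMethod="head", nbRows=100000)

    # Compute recipe outputs from inputs
    # Replace this line with your own logic
    churn_prepared_r <- churn_copy

    # Recipe outputs
    dkuWriteDataset(churn_prepared_r, "churn_prepared_r")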


dkuReadDataset() is not the only way to read a dataset with R. dkuSQLQueryToData() makes it possible to execute SQL queries from R. This can be helpful when you want to pull in a specific query of records into Dataiku, rather than any of the standard sampling options.
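
A minimal sketch, assuming a SQL connection named "my_sql_connection" is configured on your instance (both the connection name and the query are hypothetical):

    library(dataiku)

    # Pull a specific set of records rather than a standard sample
    df <- dkuSQLQueryToData("my_sql_connection",
                            "SELECT * FROM churn WHERE CustServ_Calls > 3")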

Code in an R notebook#

The recipe editor does not allow you to interactively test your code. Native Jupyter notebooks serve that purpose.

  1. From the compute_churn_prepared_r recipe editor, click Edit in Notebook.

    The recipe now exists in one cell of a Jupyter notebook with an R kernel.

  2. For convenience, click Edit > Split Cell, or use the keyboard shortcut Ctrl+Shift+-, to divide the recipe into four cells as shown below.

  3. Run the first two cells so that churn_copy is an in-memory dataframe.

  4. Add a new cell underneath churn_copy’s local creation, and run commands like head(churn_copy) and class(churn_copy) to test this out.

R notebook exploring head and class of churn_copy dataset


To learn more about code notebooks, you may wish to register for the Academy course, Code in Dataiku.

The R code environment#

Notice that the kernel of this notebook in the upper right corner says “R”. This means that the notebook is using the default R code environment for this instance.

Top of R notebook with kernel in top right corner highlighted

Accordingly, in addition to base R commands like head() and class(), you can also load and use functions from packages included in this environment.

  1. Add a new cell, and run the command library()$results[,1] to see a list of installed packages in the current environment.

    You’ll find that dplyr is among the packages included in the base environment. Let’s use it in this recipe.

  2. Delete the existing recipe and any exploratory code.

  3. Copy and paste the code below to mimic some of the steps of the visual Prepare recipe.

    library(dataiku)
    library(dplyr)

    # Recipe inputs
    churn_copy <- dkuReadDataset("churn_copy", samplingMethod="head", nbRows=100000)

    # Data preparation
    churn_copy %>%
        rename(Churn = Churn.) %>%
        mutate(Churn = if_else(Churn == "True.", "True", "False"),
               Area_Code = as.character(Area_Code)) %>%
        select(-Phone) ->
    churn_prepared_r

    # Recipe outputs
    dkuWriteDataset(churn_prepared_r, "churn_prepared_r")
  4. Click Save Back to Recipe.

  5. Once in the recipe editor, click Run.


You’ll find a more conceptual look at what code environments achieve in the Code in Dataiku Academy course.

Change the R code environment#

In many cases, you’ll want to use R packages not found in the base environment of the instance. In these cases, you’ll need to use a code environment that includes these packages.

For this project, we’ll need the following R packages:

  • tidyr (for data preparation),

  • ggplot2 (for visualization), and

  • gbm and caret (for machine learning).


The reference documentation includes instructions for creating a new R code environment. Follow these instructions, and add “tidyr”, “ggplot2”, “gbm” and “caret” as the requested packages. If you don’t have the requisite permissions, you’ll need to contact your Dataiku administrator.

When you have a code environment on your instance with at least these four additional R packages, you can activate it in the current project.

  1. From the more options menu in the top navigation bar, choose Settings > Code env selection.

  2. Change the Mode to Select an environment, and choose the environment that includes these four packages, in this case, tidy-gbm-caret.

  3. Click Save.

Code environment selection page under settings showing a custom R code environment selected as the default environment

With this code environment set as the project default, new R notebooks and recipes will inherit this environment, and be able to load these additional packages.
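
To verify, you could open a new R notebook in the project and check package availability with the same library() trick used earlier:

    # Should return TRUE once the new environment is the project default
    all(c("tidyr", "ggplot2", "gbm", "caret") %in% library()$results[,1])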

Use the R API outside Dataiku#

You have seen how the Dataiku R API works in notebooks and recipes within Dataiku. However, you can also use the same R API outside of Dataiku, such as in an IDE like RStudio.

The instructions for downloading the dataiku package and setting up the connection with Dataiku are covered in the reference documentation. If you’d rather not set this up at this time, feel free to create a new R notebook within Dataiku for this section.

After configuring a connection, you can use the Dataiku R API to read datasets found in Dataiku projects and code freely, even sharing visualizations, for example, back to the Dataiku instance.

  1. In a new R script (or a new R notebook if staying within Dataiku), copy/paste and run the code below to save a ggplot object as a static insight.


    If working outside Dataiku, you’ll need to supply an API key. One way to find this is by going to Profile & Settings > API keys. Also, be sure to check that your project key is the same as given below.

    library(dataiku)
    library(dplyr)
    library(tidyr)
    library(ggplot2)

    # These lines are unnecessary if running within Dataiku
    dkuSetRemoteDSS("http(s)://DSS_HOST:DSS_PORT/", "Your API Key")
    dkuSetCurrentProjectKey("DKU_TUT_R_USERS") # Replace with your project key if different

    # Read the dataset as a R dataframe in memory
    df <- dkuReadDataset("churn_prepared_r", samplingMethod="head", nbRows=100000)

    # Create the visualization
    p <- df %>%
      select(-c(State, Area_Code, Intl_Plan, VMail_Plan)) %>%
      gather("metric", "value", -Churn) %>%
      ggplot(aes(x = value, color = Churn)) +
      geom_density() +
      facet_wrap(~ metric, scales = "free")

    # Save visualization above as a static insight (the insight name is arbitrary)
    dkuSaveGgplotInsight("churn-density-plots", p)
  2. After running the code above, return to Dataiku, and navigate to the Insights page (G+I) to confirm the insight has been added.

  3. If you wish, you can publish it to a dashboard like any other insight such as native charts or model reports.

The code above visualizes the distribution of each numeric variable in the dataset for churning and non-churning customers. While the distributions of many variables are quite similar across the two groups, a few, such as CustServ_Calls, Day_Charge, and Day_Mins, follow different patterns.

Grid of density plots for each numeric variable by churn status


In addition to ggplot2, the Dataiku R API has similar convenience functions for creating static insights with dygraphs, ggvis, and googleVis. You can also find more general information about static insights in the Visualization Academy course.

Edit recipes from RStudio#

Returning to the Flow, you can see that the Split recipe divides the prepared data into a training set (70%) and a test set (30%). Let’s achieve the same outcome with another R recipe, but demonstrate using the RStudio Desktop integration for editing and saving existing recipes.


In addition to the RStudio integration used here, some users may also prefer to write R code in the RStudio Server IDE through a Code Studio template.

  1. Select the churn_prepared_r dataset, and add a new R recipe.

  2. Add two output datasets, train_r and test_r, and click Create Recipe.

  3. In the recipe editor, click Save.

Now that you have created the recipe, let’s edit it in RStudio, and save the new version back to the Dataiku instance. If you followed the setup in the section above, there are no additional configuration steps needed. Alternatively, you can also skip this step, and directly edit the R recipe within Dataiku.

  1. Within RStudio, create a new R script.

  2. From the Addins menu, select Dataiku: download R recipe code.

  3. Choose the project key, DKU_TUT_R_USERS.

  4. Choose the Recipe ID, compute_train_r.

  5. Click Download.

    Dialog window from RStudio addin asking user which recipe from which project key to download

    The previously empty R script should now be filled with the same R code found on the Dataiku instance. Let’s edit it to mimic the action of the visual Split recipe.

  6. Replace the existing R script with the new code below.

    library(dataiku)
    library(dplyr)

    # Recipe inputs
    churn_prepared_r <- dkuReadDataset("churn_prepared_r", samplingMethod="head", nbRows=100000)

    # Data preparation
    churn_prepared_r %>%
        rowwise() %>%
        mutate(splitter = runif(1)) %>%
        ungroup() ->
    df_to_split

    # Compute recipe outputs
    train_r <- subset(df_to_split, df_to_split$splitter <= 0.7)
    test_r <- subset(df_to_split, df_to_split$splitter > 0.7)

    # Recipe outputs
    dkuWriteDataset(train_r, "train_r")
    dkuWriteDataset(test_r, "test_r")

Now, let’s save it back to the Dataiku instance.

  1. From the Addins menu of RStudio, select Dataiku: save R recipe code.

  2. After ensuring the correct project key and recipe ID are selected, click Send to DSS.

  3. Return to the Dataiku instance, and confirm that the new recipe has been updated after refreshing the page.

  4. From the recipe editor, click Run to build the train_r and test_r datasets.


One limitation of using the Dataiku R API outside Dataiku concerns writing datasets: you cannot write from RStudio to a Dataiku dataset, as explained in this work environment section.

Use project variables#

We now have train and test sets ready for modeling, but first let’s demonstrate how project variables can be useful in a data pipeline such as this.

In the modeling stage ahead, it will be convenient to have our target variable, numeric variables, and character variables stored as separate vectors. It could be helpful to save these vectors as project variables instead of copying and pasting them for the forthcoming training and scoring recipes.

Here we are setting these variables using the Dataiku R API, but in many cases it’s helpful to do so manually through the UI (More options menu > Variables).

  1. Return to the first R recipe that creates the churn_prepared_r dataset.

  2. Click Edit in Notebook.

  3. Add the code snippet below to the end of the recipe in new cells. Walk through it line by line to understand how this section gets and sets project variables using the functions dkuGetProjectVariables() and dkuSetProjectVariables() from the R API.

    # Empty any existing project variables
    var <- dkuGetProjectVariables()
    var$standard <- list(standard=NULL)
    dkuSetProjectVariables(var)

    # Define target, categoric, and numeric variables
    target_var <- "Churn"
    categoric_vars <- names(churn_prepared_r)[sapply(churn_prepared_r, is.character)]
    categoric_vars <- categoric_vars[!categoric_vars %in% c("Churn")]
    numeric_vars <- names(churn_prepared_r)[sapply(churn_prepared_r, is.numeric)]

    # Get and set project variables
    var <- dkuGetProjectVariables()
    var$standard$target_var <- target_var
    var$standard$categoric_vars <- categoric_vars
    var$standard$numeric_vars <- numeric_vars
    dkuSetProjectVariables(var)
  4. After saving back to the recipe and running it, navigate to the Variables page (More options menu > Variables) from the top navigation bar.

    You should see three global variables, meaning these variables are accessible anywhere in the project.

Project variables page with 3 variables


Try opening a new R notebook, and running vars <- dkuGetProjectVariables() to confirm how these variables are now accessible anywhere in the project as an R list.
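
For example, in that notebook, the values come back as elements of an R list (names as set in the recipe above):

    vars <- dkuGetProjectVariables()
    vars$standard$target_var              # "Churn"
    unlist(vars$standard$categoric_vars)  # character vector of categorical features
    unlist(vars$standard$numeric_vars)    # character vector of numeric features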

For a greater conceptual understanding of project variables, as well as examples in Python, you can refer to the Academy course on Variables for Coders.

Training machine learning models in R#

Now that we have prepared and split the dataset, we are ready to begin modeling. The green icons in the Flow demonstrate how the visual ML interface can be used to create a machine learning model from training data, and then apply it to testing data.

Image of the project's final flow with visual ML flow objects highlighted

The most common workflow to achieve the same in R is similar:

  1. Write an R recipe that trains a model and outputs it to a managed folder.

  2. Write another R recipe to score the testing data using that model.

To do this, you’ll need to be able to use the R API to interact with managed folders.

Use a managed folder#

Given that we have the necessary packages available in the project code environment, we are ready to create an R recipe that trains a model. Unlike previous recipes, however, the output of this recipe will be a managed folder instead of a dataset.

We can store any kind of file (supported or unsupported) in a managed folder, and use the Dataiku R API to interact with the files stored inside.
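
A few folder helpers from the dataiku package appear throughout the remaining steps; here is a quick preview (all calls are commented out because the model_r folder is only created in the steps below):

    library(dataiku)

    # Local folders only: returns a filesystem path you can read and write directly
    # model_r <- dkuManagedFolderPath("model_r")

    # Local or non-local folders: upload and download files by name
    # dkuManagedFolderUploadPath("model_r", "model.RData", connection)
    # data <- dkuManagedFolderDownloadPath("model_r", "model.RData")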

  1. Return to the Flow, and select the train_r dataset.

  2. Initiate a new R recipe.

  3. Click +Add > New Folder. Name it model_r.

  4. Click Create Folder and Create Recipe.

    We now have the default R recipe for a dataset input and a managed folder output. Instead of using the randomly generated folder reference in the code, you can also use the folder name.

  5. Replace the randomly-generated alphanumeric argument to dkuManagedFolderPath() with "model_r", the name of the output folder.

  6. Click Save.

Default R code recipe with a dataset input and folder output
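
After step 5, the recipe body should look something like this sketch (the sampling arguments may differ):

    library(dataiku)

    # Recipe inputs
    train_r <- dkuReadDataset("train_r", samplingMethod="head", nbRows=100000)

    # Recipe outputs: the folder referenced by name instead of its generated ID
    model_r <- dkuManagedFolderPath("model_r")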


For a more conceptual look at managed folders, as well as examples in Python, register for the Academy course on Managed Folders.

Reuse R code from a Git repository#

We now have the correct code environment, input, and output to build our model. Let’s start coding!

Imagine, however, that we want to reuse some code already developed outside of Dataiku. Perhaps we want to reuse the same parameters or hyperparameter settings found in models elsewhere.

Let’s import code from a git repository into our project library so it can be used in the current recipe.

  1. From the code menu of the top navigation bar, select Libraries, or use the shortcut G+L.

  2. Click Git > Import from Git.

  3. In the dialog window, supply the HTTPS link for the repository on GitHub (found by clicking the Code button and then the clipboard icon).

  4. Check out the main branch.

  5. Add /r-users/ as the path in the repository.

  6. Add /R/ as the target path of the project library.

  7. Uncheck Add to Python path.

  8. Click Save and Retrieve.

  9. Click OK to confirm the creation of the git reference has succeeded.


Let’s recap what this achieved:

  • The same file train_settings.R found in the GitHub repository is now also in the project library. It can be used in this project (or potentially in other Dataiku projects as well, by editing the importLibrariesFromProjects field of the external-libraries.json file).

  • The file external-libraries.json now holds the git reference; open it to take a look.

Project library menu showing train_settings.R file after importing it from GitHub


The reference documentation provides more details on reusing R code.

Once the git reference is created, we can import the contents of a file found in the project library into an R recipe, notebook, or webapp with the function dkuSourceLibR().

  1. Return to the R recipe that outputs the model_r folder.

  2. Replace the existing recipe with the code snippet below, taking note of the following:

    • The gbm and caret packages can be used because of the new code environment.

    • dkuSourceLibR() imports the objects “fit.control” and “gbm.grid” found in the “train_settings.R” file.

    • dkuGetProjectVariables() calls the name of the target variable, as well as the set of numeric and categorical features.

    library(dataiku)
    library(caret)
    library(gbm)

    # Import from project library (provides fit.control and gbm.grid)
    dkuSourceLibR("train_settings.R")

    # Recipe inputs
    df <- dkuReadDataset("train_r")

    # Call project variables
    vars <- dkuGetProjectVariables()
    target.variable <- vars$standard$target_var
    features.cat <- unlist(vars$standard$categoric_vars)
    features.num <- unlist(vars$standard$numeric_vars)

    # Preprocessing
    df[features.cat]    <- lapply(df[features.cat], as.factor)
    df[features.num]    <- lapply(df[features.num], as.double)
    df[target.variable] <- lapply(df[target.variable], as.factor)
    train.ml <- df[c(features.cat, features.num, target.variable)]

    # Training (fit.control and gbm.grid found in train_settings.R)
    gbm.fit <- train(
        Churn ~ .,
        data = train.ml,
        method = "gbm",
        trControl = fit.control,
        tuneGrid = gbm.grid,
        metric = "ROC",
        verbose = FALSE
    )

    # Recipe outputs (local folder only)
    model_r <- dkuManagedFolderPath("model_r")
    unlink(file.path(model_r, "*"))  # empty the folder, not the working directory
    path <- paste(model_r, 'model.RData', sep="/")
    save(gbm.fit, file = path)
  3. Once you understand this code, run the recipe, and observe the “model.RData” file found in the output folder.


For a conceptual look at sharing code in Dataiku, along with examples in Python, please register for the Academy course on Shared Code.

Local vs. non-local folders#

The code recipe above uses dkuManagedFolderPath() to retrieve the file path used to write the model to the folder output. However, this function works only for local folders.

If the data was hosted somewhere other than the local filesystem, or the code was not running on the Dataiku machine, this code would fail.

Let’s modify this recipe to work for a local or non-local folder using dkuManagedFolderUploadPath() instead of dkuManagedFolderPath().

  1. Open the recipe that produces the model_r folder.

  2. Replace the recipe outputs section with the code below.

  3. Run the recipe again.

# Recipe outputs (local or non-local folder)
save(gbm.fit, file= "model.RData")
connection <- file("model.RData", "rb")
dkuManagedFolderUploadPath("model_r", "model.RData", connection)
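
Once the upload completes, it’s good practice to close the connection; this is plain base R rather than part of the Dataiku API:

close(connection)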

Score the test data#

There’s one last step to complete this Flow!

Now that we have a trained model in a managed folder, we can use it to score the testing data with another R recipe.

  1. From the Flow, select the model_r folder.

  2. Initiate a new R recipe, and add the test_r dataset as a second input.

  3. Add a new output dataset test_scored_r, and click Create Recipe.

  4. Replace the default code with the snippet below, and then run the recipe.


In addition to standard R code, note how the code below uses the Dataiku R API:

  • dkuManagedFolderDownloadPath() interacts with the contents of a (local or non-local) managed folder. The strictly local alternative using dkuManagedFolderPath() is also provided for demonstration in comments.

  • dkuReadDataset() and dkuWriteDataset() handle reading and writing of dataset inputs and outputs.

  • dkuGetProjectVariables() retrieves the values of project variables.


library(dataiku)
library(caret)
library(gbm)

# Load R model (local or non-local folder)
data <- dkuManagedFolderDownloadPath("model_r", "model.RData")
# Assumption: the contents are returned as a raw vector, so they can be
# loaded through a raw connection
load(rawConnection(data))

# Load R model (local folder only)
# model_r <- dkuManagedFolderPath("model_r")
# path <- paste(model_r, 'model.RData', sep="/")
# load(path)

# Confirm model loaded
print(gbm.fit)

# Recipe inputs
df <- dkuReadDataset("test_r")

# Call project variables
vars <- dkuGetProjectVariables()
target.variable <- vars$standard$target_var
features.cat <- unlist(vars$standard$categoric_vars)
features.num <- unlist(vars$standard$numeric_vars)

# Preprocessing
df[features.cat]    <- lapply(df[features.cat], as.factor)
df[features.num]    <- lapply(df[features.num], as.double)
df[target.variable] <- lapply(df[target.variable], as.factor)
test.ml <- df[c(features.cat, features.num, target.variable)]

# Prediction
o <- cbind(df, predict(gbm.fit, test.ml,
                       type = "prob",
                       na.action = na.pass))

# Recipe outputs
dkuWriteDataset(o, "test_scored_r")

What’s next?#

Congratulations! You’ve built an ML pipeline in Dataiku entirely with R. You’ve also demonstrated how this could be done within Dataiku or from an external IDE such as RStudio.


If you have not already done so, register for the Academy course on Dataiku for R Users to validate your knowledge of this material.

Next, you might take this project further by sharing results in an R Markdown report, for which you can find a tutorial here.

You might also want to develop a Shiny webapp, for which you can find a tutorial in this article.

For general reference information about Dataiku and R, please see the reference documentation.