Hands-On Tutorial: Column-Based Partitioning

Let’s Get Started!

In this tutorial, we will modify a completed Flow dedicated to detecting credit card fraud. Our goal is to predict whether a transaction is fraudulent based on the transaction subsector (e.g., insurance, travel, or gas) and the purchase date.

Prerequisites

Your Dataiku DSS instance must already have a SQL connection defined in order to complete this tutorial. For an overview of which databases are supported by Dataiku DSS, see the SQL databases reference documentation.

Objectives

Our Flow should meet the following business objectives:

  • Be able to create time-based computations.

  • Train a partitioned model to test the assumption that fraudulent behavior is related to merchant subsector, which is discrete information.

To meet these objectives, we will work with both discrete and time-based partitioning. Specifically, by the end of the lesson, we will accomplish the following tasks:

  • Partition a dataset for the purpose of optimizing a Flow and creating targeted features.

  • Create the following partitions:

    • Time-based partitioning. We will partition by purchase date for the purpose of targeting a specific date.

    • Discrete partitioning. We will partition by subsector (e.g., internet, gas, or travel) for the purpose of targeting a specific subsector.

  • Propagate partitioning in a Flow.

  • Stop partitioning in a Flow by collecting the partitions.

Project Workflow Overview

The final pipeline in Dataiku DSS is shown below.

../../_images/column-based-final-flow.png

Create the Project

From the Dataiku homepage, click +New Project > DSS Tutorials > Advanced Designer > Advanced Partitioning: Column-Based (Tutorial).

Note

You can also download the starter project from this website and import it as a zip file.

../../_images/col-based-starter-flow.png

Explore the Flow

The Flow contains the following datasets:

  • cardholder_info contains information about the owner of the card used in the transaction process.

  • merchant_info contains information about the merchant receiving the transaction amount, including the merchant subsector (subsector_description).

  • transactions contains historical information about each transaction, including the purchase_date.

Change the Dataset Connections

To work with column-based partitioning, our datasets must have a SQL connection. In this step, we will change the connections of all our datasets from the local filesystem to the SQL connection defined on your Dataiku DSS instance.

  • Go to the Flow.

  • Select all datasets except for the initial input datasets.

  • Open the right panel, then click Change connection.

../../_images/initial-flow-change-connection.png

Dataiku DSS displays the change connection dialog box.

  • In New connection, select your SQL connection.

  • Select Drop data. We won’t need the filesystem datasets.

  • Click Save.

The Flow looks like this:

../../_images/flow-sql-connections.png

Build a Time-Based Partitioned Dataset

In this section, we will partition the dataset, transactions_copy, using the purchase_date dimension. The column, purchase_date, contains values representing the date of the credit card transaction. For the purposes of this hands-on lesson, the number of distinct purchase dates has been limited to 15 values.

../../_images/purchase-date-col-based.png

Create a Partitioned Output

In this section, we will create a partitioned dataset, transactions_partitioned_by_day, without duplicating the data. To do this, we will create a new logical pointer to the same SQL table.

Create transactions_partitioned_by_day

  • Run the Sync recipe that was used to create transactions_copy so that the dataset is built.

../../_images/build-transactions-copy.png
  • Open transactions_copy and view the Settings page.

  • Copy the contents of the Table and Schema fields, then save the information for later.

../../_images/transactions-copy-table-schema-fields.png
  • Return to the Flow.

  • In the upper-right-hand corner, click +Dataset.

  • From the SQL databases menu, select your SQL connection.

../../_images/sql-databases-menu.png

Dataiku DSS displays the configuration page for the SQL connection you selected.

  • Paste the information you saved into the Table and Schema fields.

  • Click Test Table.

Note

If the table is undefined, you can locate it by getting the list of tables.

../../_images/new-postgresql-dataset.png
  • Next to Partitioned, click Activate Partitioning to enable column-based partitioning for this SQL dataset.

  • Configure the dimension as follows:

    • Name the dimension, purchase_date.

    • Set the dimension type to Time range.

    • Set the range to Day.

Let’s create this dataset.

  • In the upper-right-hand corner, name the new dataset, transactions_partitioned_by_day.

  • Click Create.

../../_images/transactions-partitioned-by-day-create.png
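The key idea behind this step is that column-based partitioning adds no physical structure: both datasets point to the same SQL table, and reading one partition simply filters on the dimension column. Below is a minimal sketch of that idea using Python's built-in sqlite3 module. It is purely illustrative; Dataiku DSS generates dialect-specific SQL for your connection, and the table name here is simplified from the actual ${projectKey}-prefixed name.

```python
import sqlite3

# Toy model: one physical table, two logical "datasets".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions_copy (transaction_id INT, purchase_date TEXT)")
conn.executemany(
    "INSERT INTO transactions_copy VALUES (?, ?)",
    [(1, "2017-12-20"), (2, "2017-12-20"), (3, "2017-12-21")],
)

# transactions_copy: non-partitioned dataset -> a read scans every row.
all_rows = conn.execute("SELECT * FROM transactions_copy").fetchall()

# transactions_partitioned_by_day: same table, partitioned on purchase_date.
# Reading the "2017-12-20" partition is just a WHERE clause on the dimension.
partition = conn.execute(
    "SELECT * FROM transactions_copy WHERE purchase_date = ?", ("2017-12-20",)
).fetchall()

print(len(all_rows), len(partition))  # 3 2
```

No data is copied when the partitioned dataset is created; only the read pattern changes.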

Configure the Join Recipe

Configure the Join recipe to change the input dataset from transactions_copy to transactions_partitioned_by_day:

  • Run the Sync recipe that was used to create merchant_info_copy.

  • Run the Sync recipe that was used to create cardholder_info_copy.

  • Open the Join recipe that was used to create transactions_joined.

  • Click to replace the transactions_copy dataset.

  • Replace the transactions_copy dataset with transactions_partitioned_by_day.

../../_images/join-replace-input.png
  • Click Replace Dataset.

  • In the Input / Output step, set the partition dependency function type to “All available”.

../../_images/transactions-partitioned-by-day-all-avail.png
  • Save and run the recipe.

  • Return to the Flow.

Our Flow now looks like this:

../../_images/transactions-partitioned-by-day-flow2.png

Propagate the Time-Based Partition Dimension Across the Flow

Now that our dataset is partitioned, we can propagate the partitioning dimension across the Flow.

Partition the transactions_joined Dataset

Partition transactions_joined and identify the partition dependencies using the following steps:

  • Open the transactions_joined dataset.

  • View Settings > Connection.

  • Click Activate Partitioning.

  • Configure the dimension as follows:

    • Name the dimension, purchase_date.

    • Set the dimension type to Time range.

    • Set the range to Day.

  • Click Save and click Confirm to confirm that dependencies with this dataset will be changed.

../../_images/recipe-partition-dependencies-update-required.png

The datasets, transactions_partitioned_by_day and transactions_joined, are now both partitioned by the purchase_date dimension.

Identify the Partition Dependencies

Now we must identify which partition dependency function type (e.g., All available, Equals, Time Range, etc.) we want to map between the two datasets, transactions_partitioned_by_day and transactions_joined.

To do this, modify the Join recipe:

  • Open the Join recipe and view the Input / Output step.

By default, Dataiku DSS sets the partition mapping so that the output dataset is built, or partitioned, using all available partitions in the input dataset.

To optimize the Flow, we only want to target the specific partitions where we are asking the recipe to perform computations.

  • In the partitions mapping, choose “Equals” as the partition dependency function type.

../../_images/compute-transactions-joined-equals.png
  • View the Settings step.

  • Click the Recipe run options icon next to the Run button, and select Specify explicitly.

  • Type the value 2017-12-20,2017-12-21,2017-12-22,2017-12-23 and then click OK.

../../_images/col-based-join-recipe-specify-explicit.png
  • Without running the recipe, click Save and accept the schema change.

The recipe is now configured to compute the join only on the rows belonging to the dates between “2017-12-20” and “2017-12-23”.

  • Do not run the recipe yet.
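The behavior configured above can be sketched in a few lines of Python. The helper names below are hypothetical (DSS computes these mappings internally): an explicit partition spec is a comma-separated list of partition identifiers, and the "Equals" dependency function maps each requested output partition to the input partition with the same identifier, so only those partitions are read.

```python
# Hypothetical helpers modeling DSS partition dependency resolution.
def parse_partition_spec(spec: str) -> list[str]:
    """Split an explicit partition spec, as typed in the recipe run
    options, into individual partition identifiers."""
    return [p.strip() for p in spec.split(",") if p.strip()]

def equals_dependency(requested_output_partitions: list[str]) -> list[str]:
    """With the "Equals" function type, each output partition depends on
    exactly one input partition: the one with the same identifier."""
    return list(requested_output_partitions)

requested = parse_partition_spec("2017-12-20,2017-12-21,2017-12-22,2017-12-23")
print(equals_dependency(requested))
# ['2017-12-20', '2017-12-21', '2017-12-22', '2017-12-23']
```

Contrast this with "All available", where the input side would expand to every existing partition regardless of which output partitions were requested.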

Run a Partitioned Job

Let’s run a partitioned job.

  • Return to the Flow.

  • Right-click transactions_joined and select Build from the menu.

  • Choose a Recursive build.

  • Select Smart reconstruction.

  • Click Build Dataset.

../../_images/col-based-build-transactions-joined.png

View Job and Activity Log

Let’s view the job activity log to reveal how partitioned SQL jobs are managed.

  • View the most recent Job.

  • View the job Activities.

../../_images/col-based-build-transactions-joined-job.png

The Activity Log for the partition “2017-12-20” contains the following queries:

  • A query to delete rows from transactions_joined where the purchase date is “2017-12-20”.

  • A query to insert the rows belonging to the new partition into transactions_joined.
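The delete-then-insert pattern in the activity log is what makes rebuilding a partition idempotent: only the rows of the targeted partition are replaced, and other partitions are untouched. The sqlite3 sketch below models that pattern; the actual SQL DSS emits depends on your database dialect.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions_joined (transaction_id INT, purchase_date TEXT)")
# Pretend a previous build already wrote the "2017-12-20" and "2017-12-21" partitions.
conn.executemany(
    "INSERT INTO transactions_joined VALUES (?, ?)",
    [(1, "2017-12-20"), (2, "2017-12-20"), (3, "2017-12-21")],
)

def rebuild_partition(conn, partition_id, rows):
    """Toy model of how DSS overwrites one partition: delete the old rows
    for that partition, then insert the freshly computed ones."""
    conn.execute("DELETE FROM transactions_joined WHERE purchase_date = ?", (partition_id,))
    conn.executemany("INSERT INTO transactions_joined VALUES (?, ?)", rows)

rebuild_partition(conn, "2017-12-20", [(10, "2017-12-20")])

rows = conn.execute(
    "SELECT transaction_id FROM transactions_joined ORDER BY transaction_id"
).fetchall()
print(rows)  # [(3,), (10,)] -- the "2017-12-21" partition survived untouched
```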

View Partitions Count in the Flow

We can use a Flow view to check whether the job has completed.

  • Return to the Flow.

  • From the Flow view, select to visualize the Partitions count.

../../_images/col-based-flow-view-partitions-count2.png

The dataset, transactions_partitioned_by_day, shows zero partitions because the count has not yet been computed. To update this count:

  • Select transactions_partitioned_by_day.

  • In the right panel, click Update status (count of records, file size) to update the count.

../../_images/col-based-update-partitions-count1.png
  • Wait while Dataiku DSS updates the count of partitions, then refresh the page. The transactions_partitioned_by_day dataset contains 15 time-based partitions.

Close the Partitions count view to restore the default view.

Build a Non-Partitioned Output Dataset

We will now stop the partitioning of our purchase_date dimension and create a non-partitioned dataset. To do this, Dataiku DSS uses a “partition collection” mechanism during runtime. This mechanism is triggered by the partition dependency function type, “All available”.

This process is referred to as partition collecting. Partition collecting can be thought of as the reverse of partition redispatching.

  • Open the Prepare recipe that is used to create transactions_joined_prepared.

  • In the Input / Output step, ensure the partition dependency function type is set to “All available”.

../../_images/col-based-prepare-recipe-all-avail1.png
  • In the Script step, run the recipe.

“All available” means that all of the available input partitions will be processed when we run the recipe. The output dataset, transactions_joined_prepared, is not partitioned.

../../_images/col-based-non-partitioned-transactions-joined.png
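Conceptually, partition collecting is a flattening step: the recipe reads every available input partition and writes a single, unpartitioned output. A toy Python model of this, with the partitioned input represented as a mapping from partition identifier to rows (the names and sample data are illustrative, not DSS internals):

```python
# Toy model of partition collecting under "All available".
partitioned_input = {
    "2017-12-20": [{"transaction_id": 1}, {"transaction_id": 2}],
    "2017-12-21": [{"transaction_id": 3}],
    "2017-12-22": [{"transaction_id": 4}],
}

def collect_all_available(partitions: dict) -> list:
    """Flatten every available partition into one unpartitioned output."""
    return [row for pid in sorted(partitions) for row in partitions[pid]]

output = collect_all_available(partitioned_input)
print(len(output))  # 4
```

Partition redispatching would be the inverse: splitting one unpartitioned input into one output partition per dimension value.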

Build a Discrete-Based Partitioned Dataset

Create a Partitioned Output

Recall from the business objectives that we want to predict whether a transaction is fraudulent based on the transaction subsector (e.g., insurance, travel, or gas) and the purchase date. The subsector is discrete information.

To meet this objective, we need to partition the Flow with a discrete dimension. We want the input dataset, transactions_joined_prepared, of the Window recipe to be partitioned by merchant subsector.

../../_images/col-based-discrete-flow-before-partition1.png

Create transactions_joined_prepared_windows

The input dataset in our Window recipe, transactions_joined_prepared, is not partitioned. To partition this dataset, we can follow the same steps we followed when we created our time-based partitioned outputs. By creating a new logical pointer to the same table, we can define a new dataset without duplicating the data.

  • Open transactions_joined_prepared and view the Settings page.

  • Copy the contents of the Table and Schema fields and save the information for later.

../../_images/transactions-joined-prepared-table-field.png
  • Return to the Flow.

  • In the upper-right-hand corner, click +Dataset.

  • From the SQL databases menu, select your SQL connection.

  • Paste the information you saved from the Table and Schema fields.

  • Click Test Table.

  • Next to Partitioned, click Activate Partitioning to enable discrete-based partitioning for this SQL dataset.

  • Name the dimension, merchant_subsector_description.

../../_images/creating-transactions-partitioned-by-sector.png

Let’s create this dataset.

  • In the upper-right-hand corner, name the new dataset, transactions_partitioned_by_sector.

  • Click Create.

Our Flow now looks like this:

../../_images/newly-created-transactions-partitioned-by-sector.png
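Unlike a time range dimension, a discrete dimension takes its partition identifiers directly from the distinct values of the partitioning column. The sketch below models this with sample rows grouped by merchant_subsector_description; the data is illustrative, not taken from the actual tables.

```python
from collections import defaultdict

# Toy model of a discrete dimension: each distinct value of the
# partitioning column becomes one partition identifier.
rows = [
    {"transaction_id": 1, "merchant_subsector_description": "gas"},
    {"transaction_id": 2, "merchant_subsector_description": "travel"},
    {"transaction_id": 3, "merchant_subsector_description": "gas"},
    {"transaction_id": 4, "merchant_subsector_description": "insurance"},
]

partitions = defaultdict(list)
for row in rows:
    partitions[row["merchant_subsector_description"]].append(row)

print(sorted(partitions))      # ['gas', 'insurance', 'travel']
print(len(partitions["gas"]))  # 2
```

In the SQL table itself, reading the "gas" partition again reduces to a WHERE clause on the dimension column, just as with the time-based dimension.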

Configure the Window Recipe

Configure the Window recipe to change the input dataset from transactions_joined_prepared to transactions_partitioned_by_sector.

  • Open the Window recipe.

  • In the Input / Output step, change the transactions_joined_prepared dataset to transactions_partitioned_by_sector.

../../_images/col-based-change-window-recipe-input.png
  • Save and run the recipe.

  • Return to the Flow.

Our Flow now looks like this:

../../_images/col-based-window-recipe-updated.png

Recap

Let’s look at this in more detail.

  1. Data is written into the SQL table, ${projectKey}_transactions_copy. The dataset, transactions_copy, is linked to this table.

  2. A connection is created to the dataset, transactions_partitioned_by_day. This dataset is also linked to the table, ${projectKey}_transactions_copy, but it is partitioned with the “purchase_date” dimension. Therefore, downstream datasets can also be partitioned by purchase_date.

  3. Data is then written into the SQL table, ${projectKey}_transactions_joined_prepared. The dataset, transactions_joined_prepared, is linked to this table.

  4. Finally, a connection is created to the dataset, transactions_partitioned_by_sector. This dataset is also linked to the table ${projectKey}_transactions_joined_prepared, but it is partitioned by merchant subsector. Therefore, downstream datasets can also be partitioned by merchant subsector.

../../_images/col-based-recap.png

View Partitions Count in the Flow

  • Return to the Flow.

  • From the Flow view, select to visualize the Partitions count.

  • Select transactions_partitioned_by_sector and update the count of records.

../../_images/update-count-partitions-by-sector.png
  • Wait while Dataiku DSS updates the count of partitions, then refresh the page.

The transactions_partitioned_by_sector dataset contains 9 partitions.

  • Close the Flow view.

Propagate the Discrete Partitioning through the Flow

Now you can apply what you’ve learned to propagate the discrete partitioning through the Flow. To do this, follow the same process you used to propagate the time-based partitioning through the Flow.

  • Run the Split recipe.

  • Partition the following datasets by the merchant_subsector_description dimension:

    • Partition transactions_joined_prepared_windows.

    • Partition transactions_known.

    • Partition transactions_unknown.

    • Partition transactions_unknown_scored.

Congratulations! You have completed this hands-on lesson.

What’s Next?

Visit the course, Partitioned Models, to learn more about training a machine learning model on the partitions of a dataset.