Tutorial | Column-based partitioning#

Get started#

In this tutorial, we will modify a completed Flow dedicated to the detection of credit card fraud. Our goal is to be able to predict whether or not a transaction is fraudulent based on the transaction subsector (e.g., insurance, travel, or gas) and the purchase date.

Prerequisites#

  • Dataiku 12.0 or later.

  • An SQL connection. For an overview of which databases are supported by Dataiku, see SQL databases in the reference documentation.

Objectives#

Our Flow should meet the following business objectives:

  • Be able to create time-based computations.

  • Train a partitioned model to test the assumption that fraudulent behavior is related to merchant subsector, which is discrete information.

To meet these objectives, we will work with both discrete and time-based partitioning. Specifically, by the end of the lesson, we will accomplish the following tasks:

  • Partition a dataset to optimize the Flow and create targeted features.

  • Create the following partitions:

    • Time-based partitioning. We will partition by purchase date for the purpose of targeting a specific date.

    • Discrete partitioning. We will partition by subsector (e.g., internet, gas, or travel) for the purpose of targeting a specific subsector.

  • Propagate partitioning in a Flow.

  • Stop partitioning in a Flow by collecting the partitions.

Project workflow overview#

The final pipeline in Dataiku is shown below.

../../_images/column-based-final-flow.png

Create the project#

  1. From the Dataiku homepage, click +New Project > DSS tutorials > Advanced Designer > Partitioning: Column-Based.

Note

You can also download the starter project from this website and import it as a zip file.

../../_images/col-based-starter-flow.png

Explore the Flow#

The Flow contains the following datasets:

  • cardholder_info contains information about the owner of the card used in the transaction process.

  • merchant_info contains information about the merchant receiving the transaction amount, including the merchant subsector (subsector_description).

  • transactions contains historical information about each transaction, including the purchase_date.

Change the dataset connections#

To work with column-based partitioning, our datasets must have an SQL connection. In this step, we will change the connections of all our datasets from the local filesystem to the SQL connection defined on your Dataiku instance.

  1. Go to the Flow.

  2. Select all datasets except for the initial input datasets.

  3. Open the right panel, then click Change connection.

    ../../_images/initial-flow-change-connection.png

    Dataiku displays the change connection dialog box.

  4. In New connection, select your SQL connection.

  5. Select Drop data. We won’t need the filesystem datasets.

  6. Click Save.

The Flow looks like this:

../../_images/flow-sql-connections.png
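
If you want to double-check the switch from code, the following sketch uses the public Python client, dataikuapi, with placeholder host, API key, and project key, and assumes the connection name is stored under params in each dataset's raw settings (key names can vary by version):

    import dataikuapi

    # Placeholder values -- replace with your instance URL, an API key,
    # and your project key for this tutorial.
    client = dataikuapi.DSSClient("https://dss.example.com:11200", "YOUR_API_KEY")
    project = client.get_project("YOUR_PROJECT_KEY")

    # Print the connection each copied dataset now points to.
    for name in ["transactions_copy", "merchant_info_copy", "cardholder_info_copy"]:
        raw = project.get_dataset(name).get_settings().get_raw()
        print(name, raw.get("type"), raw.get("params", {}).get("connection"))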

Build a time-based partitioned dataset#

In this section, we will partition the dataset, transactions_copy, using the purchase_date dimension. The column, purchase_date, contains values representing the date of the credit card transaction. For the purposes of this tutorial, the number of distinct purchase dates has been limited to 15 values.

../../_images/purchase-date-col-based.png
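
If you want to verify this yourself, a minimal check from a notebook on the instance could look like the sketch below (it assumes the initial transactions dataset fits in memory and is readable with the dataiku package):

    import dataiku

    # Load the input transactions and count the distinct purchase dates.
    transactions = dataiku.Dataset("transactions").get_dataframe()
    print(transactions["purchase_date"].nunique())   # 15 distinct dates in this tutorial's data
    print(sorted(transactions["purchase_date"].astype(str).unique()))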

Create a partitioned output#

In this section, we will create a partitioned dataset, transactions_partitioned_by_day, without duplicating the data. To do this, we will create a new logical pointer to the same SQL table.

Create transactions_partitioned_by_day#

  1. Run the Sync recipe that was used to create transactions_copy so that the dataset is built.

    ../../_images/build-transactions-copy.png
  2. Open transactions_copy and view the Settings page.

  3. Copy the contents of the Table and Schema fields, then save the information for later.

    ../../_images/transactions-copy-table-schema-fields.png
  4. Return to the Flow.

  5. In the upper-right-hand corner, click +Dataset.

  6. From the SQL databases menu, select your SQL connection.

    ../../_images/sql-databases-menu.png

    Dataiku displays the configuration page for the SQL connection you selected.

  7. Paste the information you saved into the Table and Schema fields.

  8. Click Test Table.

    Note

    If the table is undefined, you can locate it by getting the list of tables.

    ../../_images/new-postgresql-dataset.png
  9. Next to Partitioned, click Activate Partitioning to enable column-based partitioning for this SQL dataset.

  10. Configure the dimension as follows:

    • Name the dimension, purchase_date.

    • Set the dimension type to Time range.

    • Set the range to Day.

Let’s create this dataset.

  1. In the upper-right-hand corner, name the new dataset, transactions_partitioned_by_day.

  2. Click Create.

../../_images/transactions-partitioned-by-day-create.png
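
The same pointer dataset can also be created programmatically. The sketch below is only an outline under several assumptions: it uses the public Python client with placeholder credentials, a PostgreSQL connection, and a partitioning block whose key names mirror what the UI writes (these may differ across versions).

    import dataikuapi

    client = dataikuapi.DSSClient("https://dss.example.com:11200", "YOUR_API_KEY")
    project = client.get_project("YOUR_PROJECT_KEY")

    # Create a second dataset on the same SQL table -- no data is duplicated.
    # The table and schema values are the ones copied from transactions_copy.
    dataset = project.create_dataset(
        "transactions_partitioned_by_day",
        "PostgreSQL",                               # assumption: a PostgreSQL connection
        params={
            "connection": "YOUR_SQL_CONNECTION",
            "mode": "table",
            "schema": "public",                     # value copied from transactions_copy
            "table": "${projectKey}_transactions_copy",
        },
    )

    # Activate column-based partitioning on purchase_date with a Day range.
    settings = dataset.get_settings()
    settings.get_raw()["partitioning"] = {
        "dimensions": [
            {"name": "purchase_date", "type": "time", "params": {"period": "DAY"}}
        ]
    }
    settings.save()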

Configure the Join recipe#

Configure the Join recipe to change the input dataset from transactions_copy to transactions_partitioned_by_day:

  1. Run the Sync recipe that was used to create merchant_info_copy.

  2. Run the Sync recipe that was used to create cardholder_info_copy.

  3. Open the Join recipe that was used to create transactions_joined.

  4. Click to replace the transactions_copy dataset.

  5. Replace the transactions_copy dataset with transactions_partitioned_by_day.

    ../../_images/join-replace-input.png
  6. Click Replace Dataset.

  7. In the Input / Output tab, set the partition dependency function type to All available.

    ../../_images/transactions-partitioned-by-day-all-avail.png
  8. Save and run the recipe.

  9. Return to the Flow.

Our Flow now looks like this:

../../_images/transactions-partitioned-by-day-flow2.png

Propagate the time-based partition dimension across the Flow#

Now that our dataset is partitioned, we can propagate the partitioning dimension across the Flow.

Partition the transactions_joined dataset#

Partition transactions_joined and identify the partition dependencies using the following steps:

  1. Open the transactions_joined dataset.

  2. View Settings > Connection.

  3. Click Activate Partitioning.

  4. Configure the dimension as follows:

    • Name the dimension, purchase_date.

    • Set the dimension type to Time range.

    • Set the range to Day.

  5. Click Save and click Confirm to confirm that dependencies with this dataset will be changed.

../../_images/recipe-partition-dependencies-update-required.png

The datasets transactions_partitioned_by_day and transactions_joined are now both partitioned by the purchase_date dimension.

Identify the partition dependencies#

Now we must identify which partition dependency function type (e.g., All available, Equals, or Time range) we want to use to map partitions between the two datasets, transactions_partitioned_by_day and transactions_joined.

To do this, modify the Join recipe:

  1. Open the Join recipe and view the Input / Output tab.

    By default, Dataiku sets the partition mapping so that the output dataset is built, or partitioned, using all available partitions in the input dataset.

    To optimize the Flow, we want to target only the specific partitions on which we are asking the recipe to perform computations (the difference between these dependency functions is sketched after these steps).

  2. In the partitions mapping, choose Equals as the partition dependency function type.

    ../../_images/compute-transactions-joined-equals.png
  3. View the Settings tab.

  4. Click the Recipe run options icon next to the Run button, and select Specify explicitly.

  5. Type the value, 2017-12-20,2017-12-21,2017-12-22,2017-12-23 then click OK.

    ../../_images/col-based-join-recipe-specify-explicit.png
  6. Without running the recipe, click Save and accept the schema change.

    The recipe is now configured to compute the Join only on the rows belonging to the dates between “2017-12-20” and “2017-12-23”.

  7. Do not run the recipe yet.
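
To picture the difference between the Equals and All available dependency functions, here is a purely conceptual sketch in plain Python (this is not the Dataiku API, just an illustration of the mapping):

    # Conceptual illustration: map one requested output partition to the
    # input partitions that the recipe will read.
    def input_partitions(requested, available, mode):
        if mode == "equals":
            # Build the output partition from the matching input partition only.
            return [p for p in available if p == requested]
        if mode == "all available":
            # Collect every input partition, whatever output partition is requested.
            return list(available)
        raise ValueError(f"unknown dependency function: {mode}")

    available = ["2017-12-20", "2017-12-21", "2017-12-22", "2017-12-23", "2017-12-24"]
    print(input_partitions("2017-12-20", available, "equals"))         # ['2017-12-20']
    print(input_partitions("2017-12-20", available, "all available"))  # all five dates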

Run a partitioned job#

Let’s run a partitioned job.

  1. Return to the Flow.

  2. Right-click transactions_joined and select Build from the menu.

  3. Choose a Recursive build.

  4. Select Smart reconstruction.

  5. Click Build Dataset.

../../_images/col-based-build-transactions-joined.png
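
The same partitioned build can be started from the public Python client. The sketch below uses placeholder credentials and assumes that the RECURSIVE_BUILD job type corresponds to the smart reconstruction chosen above:

    import dataikuapi

    client = dataikuapi.DSSClient("https://dss.example.com:11200", "YOUR_API_KEY")
    project = client.get_project("YOUR_PROJECT_KEY")

    # Recursively build transactions_joined for the four explicit partitions.
    job = project.get_dataset("transactions_joined").build(
        job_type="RECURSIVE_BUILD",
        partitions="2017-12-20,2017-12-21,2017-12-22,2017-12-23",
    )
    print(job.get_status())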

View job and activity log#

Let’s view the job activity log to reveal how partitioned SQL jobs are managed.

  1. View the most recent Job.

  2. View the job Activities.

../../_images/col-based-build-transactions-joined-job.png

The Activity Log for the partition “2017-12-20” contains the following queries:

  • A query to delete rows from transactions_joined where the purchase date is “2017-12-20”.

  • A query to insert the rows belonging to the new partition into transactions_joined.
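
If you prefer to pull the log from code, a rough sketch with the public client follows; the keys used to find the most recent job id are an assumption and may differ by version.

    import dataikuapi

    client = dataikuapi.DSSClient("https://dss.example.com:11200", "YOUR_API_KEY")
    project = client.get_project("YOUR_PROJECT_KEY")

    # Fetch the most recent job and print the end of its log, where the
    # per-partition DELETE and INSERT statements appear.
    jobs = project.list_jobs()                       # assumption: first entry is the most recent job
    last_job = project.get_job(jobs[0]["def"]["id"])
    print(last_job.get_log()[-2000:])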

View partitions count in the Flow#

We can use a Flow view to discover whether or not the job is completed.

  1. Return to the Flow.

  2. From the Flow view, select to visualize the Partitions count.

../../_images/col-based-flow-view-partitions-count2.png

The transactions_partitioned_by_day dataset shows zero partitions. To update this count:

  1. Select transactions_partitioned_by_day.

  2. In the right panel, click Update status (count of records, file size) to update the count.

    ../../_images/col-based-update-partitions-count1.png
  3. Wait while Dataiku updates the count of partitions, then refresh the page. The transactions_partitioned_by_day dataset contains 15 time-based partitions.

  4. Close the Partitions count view to restore the default view.
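
The same counts can be obtained from code; the sketch below uses the public client with placeholder credentials and assumes compute_metrics() refreshes the counts shown in the Flow view.

    import dataikuapi

    client = dataikuapi.DSSClient("https://dss.example.com:11200", "YOUR_API_KEY")
    project = client.get_project("YOUR_PROJECT_KEY")

    dataset = project.get_dataset("transactions_partitioned_by_day")

    # Refresh the dataset's metrics, then list its partitions.
    dataset.compute_metrics()
    partitions = dataset.list_partitions()
    print(len(partitions), "partitions:", partitions)   # expected: 15 dates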

Build a non-partitioned output dataset#

We will now stop partitioning on the purchase_date dimension and create a non-partitioned dataset. To do this, Dataiku uses a “partition collecting” mechanism at runtime, triggered by the partition dependency function type, “All available”.

Partition collecting can be thought of as the reverse of partition redispatching.

  1. Open the Prepare recipe that is used to create transactions_joined_prepared.

  2. In the Input / Output tab, ensure the partition dependency function type is set to All available.

    ../../_images/col-based-prepare-recipe-all-avail1.png
  3. In the Script step, run the recipe.

All available means that all of the available input partitions will be processed when we run the recipe. The output dataset, transactions_joined_prepared, is not partitioned.

../../_images/col-based-non-partitioned-transactions-joined.png
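
To make the collection mechanism concrete: if this step were written as a Python recipe rather than a Prepare recipe, an All available dependency would hand the code every input partition at once, and the recipe itself would read and write as usual. A minimal sketch with the dataiku package (it skips the actual preparation steps):

    import dataiku

    # With an "All available" dependency, get_dataframe() returns the rows of
    # every input partition, collected together.
    transactions = dataiku.Dataset("transactions_joined").get_dataframe()

    # The output dataset is not partitioned, so it is written as a single table.
    output = dataiku.Dataset("transactions_joined_prepared")
    output.write_with_schema(transactions)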

Build a discrete-based partitioned dataset#

Create a partitioned output#

Recall from the business objectives that we want to be able to predict whether or not a transaction is fraudulent based on the transaction subsector (e.g., insurance, travel, and gas) and the purchase date. This is discrete information.

To meet this objective, we need to partition the Flow with a discrete dimension. We want the Window recipe's input dataset, transactions_joined_prepared, to be partitioned by merchant subsector.

../../_images/col-based-discrete-flow-before-partition1.png

Create transactions_joined_prepared_windows#

The input dataset in our Window recipe, transactions_joined_prepared, is not partitioned. To partition this dataset, we can follow the same steps we followed when we created our time-based partitioned output. By creating a new logical pointer, we get a new dataset without duplicating the data.

  1. Open transactions_joined_prepared and view the Settings page.

  2. Copy the contents of the Table and Schema fields and save the information for later.

    ../../_images/transactions-joined-prepared-table-field.png
  3. Return to the Flow.

  4. In the upper-right-hand corner, click +Dataset.

  5. From the SQL databases menu, select your SQL connection.

  6. Paste the information you saved into the Table and Schema fields.

  7. Click Test Table.

  8. Next to Partitioned, click Activate Partitioning to enable discrete-based partitioning for this SQL dataset.

  9. Name the dimension, merchant_subsector_description.

../../_images/creating-transactions-partitioned-by-sector.png

Let’s create this dataset.

  1. In the upper-right-hand corner, name the new dataset, transactions_partitioned_by_sector.

  2. Click Create.

Our Flow now looks like this:

../../_images/newly-created-transactions-partitioned-by-sector.png

Configure the Window recipe#

Configure the Window recipe to change the input dataset from transactions_joined_prepared to transactions_partitioned_by_sector.

  1. Open the Window recipe.

  2. In the Input / Output tab, change the transactions_joined_prepared dataset to transactions_partitioned_by_sector.

    ../../_images/col-based-change-window-recipe-input.png
  3. Save and run the recipe.

  4. Return to the Flow.

Our Flow now looks like this:

../../_images/col-based-window-recipe-updated.png

Recap#

Let’s walk through what the Flow is doing in more detail.

  1. Data is written into the SQL table, ${projectKey}_transactions_copy. The dataset, transactions_copy, is linked to this table.

  2. A connection is created to the dataset, transactions_partitioned_by_day. This dataset is also linked to the table, ${projectKey}_transactions_copy, but it is partitioned with the “purchase_date” dimension. Therefore, downstream datasets can also be partitioned by purchase_date.

  3. Data is then written into the SQL table, ${projectKey}_transactions_joined_prepared. The dataset, transactions_joined_prepared, is linked to this table.

  4. Finally, a connection is created to the dataset, transactions_partitioned_by_sector. This dataset is also linked to the table ${projectKey}_transactions_joined_prepared, but it is partitioned by merchant subsector. Therefore, downstream datasets can also be partitioned by merchant subsector.

../../_images/col-based-recap.png
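
You can confirm the two pointer pairs from code. The sketch below uses the public client with placeholder credentials and assumes the table name is exposed under params in each dataset's raw settings:

    import dataikuapi

    client = dataikuapi.DSSClient("https://dss.example.com:11200", "YOUR_API_KEY")
    project = client.get_project("YOUR_PROJECT_KEY")

    # Each pair below should report the same underlying SQL table.
    for name in [
        "transactions_copy", "transactions_partitioned_by_day",
        "transactions_joined_prepared", "transactions_partitioned_by_sector",
    ]:
        params = project.get_dataset(name).get_settings().get_raw().get("params", {})
        print(name, "->", params.get("table"))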

View partitions count in the Flow#

  1. Return to the Flow.

  2. From the Flow view, select to visualize the Partitions count.

  3. Select transactions_partitioned_by_sector and update the count of records.

    ../../_images/update-count-partitions-by-sector.png
  4. Wait while Dataiku updates the count of partitions, then refresh the page.

    The transactions_partitioned_by_sector dataset contains 9 partitions.

  5. Close the Flow view.

Propagate the discrete partitioning through the Flow#

Now you can apply what you’ve learned to propagate the discrete partitioning through the Flow, following the same process you used to propagate the time-based partitioning. A scripted sketch of building a single discrete partition follows the steps below.

  1. Run the Split recipe.

  2. Partition the following datasets by the merchant_subsector_description dimension:

    • Partition transactions_joined_prepared_windows

    • Partition transactions_known

    • Partition transactions_unknown

    • Partition transactions_unknown_scored
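
As with the time-based dimension, any of these datasets can be built for a single discrete partition. A sketch with the public client (placeholder credentials; the subsector value internet is only an example):

    import dataikuapi

    client = dataikuapi.DSSClient("https://dss.example.com:11200", "YOUR_API_KEY")
    project = client.get_project("YOUR_PROJECT_KEY")

    # Build transactions_unknown_scored for a single merchant subsector.
    job = project.get_dataset("transactions_unknown_scored").build(
        job_type="RECURSIVE_BUILD",
        partitions="internet",        # example value of merchant_subsector_description
    )
    print(job.get_status())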

Congratulations! You have completed the tutorial.

What’s next?#

Visit the course, Partitioned Models, to learn more about training a machine learning model on the partitions of a dataset.