Solution | RFM Segmentation#
Overview#
Business Case#
Personalization is a major opportunity for Retail and CPG businesses, but seizing it requires identifying purchasing patterns among consumers so that decisions can be tailored to each consumer's purchase behavior. While several techniques can be used to do so, one tried-and-true approach is RFM segmentation, which identifies purchasing patterns by focusing on the Recency, Frequency, and Monetary value of consumer purchases.
In this plug-and-play solution, we assess every customer in a transactions dataset against these three criteria before grouping customers into homogeneous segments: from "hibernating" to "champions," every consumer belongs to one segment, which can evolve over time depending on the purchases they make. Brands are therefore able to push the right offer or product to the right consumer segment. Doing so fosters loyalty and increases consumer lifetime value for the brand, while consumers benefit from a better, more personalized purchase journey.
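To make the three criteria concrete, here is a minimal sketch (not part of the solution itself) that computes raw Recency, Frequency, and Monetary values from a toy transactions table with pandas; all column names and values are illustrative.

```python
import pandas as pd

# Toy transactions table; column names and values are illustrative only.
tx = pd.DataFrame({
    "customer_id": ["A", "A", "B", "C", "C", "C"],
    "date": pd.to_datetime(["2023-01-05", "2023-03-10", "2023-02-20",
                            "2023-01-15", "2023-02-01", "2023-03-25"]),
    "amount": [50.0, 30.0, 120.0, 20.0, 25.0, 40.0],
})

reference_date = tx["date"].max()

# Recency: days since the last purchase; Frequency: number of purchases;
# Monetary: total amount spent over the period.
rfm = tx.groupby("customer_id").agg(
    recency=("date", lambda d: (reference_date - d.max()).days),
    frequency=("date", "count"),
    monetary=("amount", "sum"),
)
print(rfm)
```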
Installation#
The process to install this solution differs depending on whether you are using Dataiku Cloud or a self-managed instance.
Dataiku Cloud users should follow the instructions for installing solutions on cloud.
The Cloud Launchpad will automatically meet the technical requirements listed below, and add the Solution to your Dataiku instance.
Once the Solution has been added to your space, move ahead to Data Requirements.
After meeting the technical requirements below, self-managed users can install the Solution in one of two ways:
On your Dataiku instance connected to the internet, click + New Project > Dataiku Solutions > Search for RFM Segmentation.
Alternatively, download the Solution’s .zip project file, and import it to your Dataiku instance as a new project.
Additional note for 12.1+ users
If you are using a Dataiku 12.1+ instance and are missing the technical requirements for this Solution, a popup will appear allowing admin users to install the requirements directly, and non-admin users to request the installation of code environments and/or plugins on their instance.
Admins can process these requests in the admin request center, after which non-admin users can re-trigger the installation of the Solution.
Technical Requirements#
To leverage this solution, you must meet the following requirements:
Have access to a Dataiku 12.5+ instance.
Dataiku’s Sankey Charts Plugin.
To benefit natively from the solution, your data should be stored in one of the following connections:
Snowflake
Google Cloud Platform: BigQuery + GCS (Both are required if you want to leverage BigQuery)
PostgreSQL
A Python 3.8 code environment named solution_rfm-segmentation with the following required packages:
scikit-learn==0.24.1
Flask==2.0.2
plotly==5.5.0
nbformat==4.2.0
matplotlib==3.3.4
Werkzeug==2.3.7
Data Requirements#
The Dataiku Flow was initially built using publicly available data. However, this project is meant to be used with your own data, which can be uploaded via the Dataiku Application. A historical transactional dataset is mandatory to run the project, and each row of the dataset should include the following fields (a minimal example follows this list):
A product (Product ID)
A related transaction (Transaction ID)
Number of products purchased in a transaction (Product Quantity)
The product purchase price (Product Price)
Transaction date (Date)
Customer who made the purchase (Customer ID)
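For reference, here is what a minimal conforming input could look like; the column names below are illustrative and can be mapped to your own columns in the Transactions preprocessing section of the Application.

```python
import pandas as pd

# Hypothetical input matching the required fields above.
transactions = pd.DataFrame({
    "product_id": ["P10", "P11", "P10"],
    "transaction_id": ["T1", "T1", "T2"],
    "product_quantity": [2, 1, 3],
    "product_price": [9.99, 24.50, 9.99],
    "date": ["2023-01-05", "2023-01-05", "2023-02-12"],
    "customer_id": ["C001", "C001", "C002"],
})
```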
Workflow Overview#
You can follow along with the sample project in the Dataiku gallery.
The project has the following high-level steps:
Connect your data as an input and select your analysis parameters via the Dataiku Application.
Ingest and pre-process the data to be available for RFM computation and propagation.
Identify segments and apply segmentation to our customer base.
Propagate RFM scoring beyond the defined period of dates.
Interactively visualize our RFM segments as well as their evolution.
Walkthrough#
Note
In addition to reading this document, we recommend reading the project wiki before beginning, to get a deeper technical understanding of how this Solution was created and more detailed explanations of Solution-specific vocabulary.
Plug and play with your own data and parameter choices#
To begin, you will need to create a new instance of the RFM Segmentation Dataiku Application. This can be done by selecting the Dataiku Application from your instance home, and clicking Create App Instance.
Once the new instance has been created, you can walk through the steps of the Application to add your data and select the analysis parameters to be run.
In the Inputs section of the Application, reconfigure the connection parameters of the Flow. By default, the solution works with datasets on a filesystem connection. To connect the solution to your own transaction data, ask your admin for the connection type and schema to enter into the Application parameters. Once completed, the RECONFIGURE button rebuilds the full Flow to work with your data. Following reconfiguration, you can refresh the webpage, then search for and test the transaction dataset.
Once your data has been uploaded, it needs to be preprocessed before RFM scores can be computed. Within the Transactions preprocessing section of the App, we define how the transactions dataset should be transformed. Specifically, this is where we map the schema of our input transaction dataset to the solution-defined schema (see the Data Requirements section above) and clarify how the dates are formatted.
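As an illustration of this mapping step, the sketch below renames hypothetical source columns to the solution-defined schema and parses the dates with an explicit format; the file name, source column names, and date format are assumptions, and in practice the mapping is declared in the Application rather than in code.

```python
import pandas as pd

# Hypothetical mapping from your source columns to the solution schema.
column_mapping = {
    "order_ref": "transaction_id",
    "sku": "product_id",
    "qty": "product_quantity",
    "unit_price": "product_price",
    "order_date": "date",
    "client": "customer_id",
}

raw = pd.read_csv("my_transactions.csv")  # placeholder file name
tx = raw.rename(columns=column_mapping)

# Declaring the date format explicitly avoids ambiguous parsing.
tx["date"] = pd.to_datetime(tx["date"], format="%d/%m/%Y")
```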
With our data filtered and formatted correctly, we move to the RFM section to define the period of time over which to filter our transaction history for RFM scoring. Additionally, we select the RFM score computation technique (KMeans vs. quantile) and the Monetary Value Policy (total basket amount vs. average basket amount). The appendix of the project wiki details the differences between these methods and policies.
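To illustrate the difference between the two techniques, here is a minimal sketch scoring a single Monetary feature both ways; the 1-to-5 scale, the synthetic data, and the cluster count are assumptions for illustration, not the solution's exact implementation.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

# Synthetic monetary values for 200 customers (illustrative only).
rng = np.random.default_rng(0)
monetary = pd.Series(rng.gamma(shape=2.0, scale=50.0, size=200))

# Quantile technique: 5 equally populated buckets, scored 1 (low) to 5 (high).
quantile_score = pd.qcut(monetary, q=5, labels=[1, 2, 3, 4, 5]).astype(int)

# KMeans technique: cluster the values, then rank clusters by their center
# so that a higher score always corresponds to a higher monetary value.
km = KMeans(n_clusters=5, n_init=10, random_state=0)
labels = km.fit_predict(monetary.to_numpy().reshape(-1, 1))
order = np.argsort(km.cluster_centers_.ravel())           # low -> high
label_to_score = {label: rank + 1 for rank, label in enumerate(order)}
kmeans_score = pd.Series(labels, index=monetary.index).map(label_to_score)
```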
Optionally, we can apply RFM propagation to our data by setting the parameters of the RFM Propagation section. (The overall project Flow will look different from what has been presented above if propagation is turned off.) Propagation applies our selected RFM scoring method over a larger date range that we define, and enables us to see how customers have transitioned between segments over time.
We offer two final sections to make the Dataiku App more production-ready. The Build all Flow at once section allows us to run all jobs needed to build the Flow using our set parameters from the Application. The Automation section activates pre-built scenarios in order to refresh the project with new data over time.
Once all elements of the Dataiku Application have been built, you can either continue to the Project View to explore the generated datasets, or go straight to the Dashboards and WebApp to visualize the data. If you're mainly interested in the visual components of this pre-packaged solution, feel free to skip over the next section.
Under the Hood: How do we compute RFM Scores and Segment Customers?#
The Dataiku Application is built on top of a Dataiku Flow that has been optimized to accept input datasets and respond to your selected parameters. Let's quickly walk through the different Flow zones to get an idea of how this was done. We will begin by focusing on the first branch of the Flow, which is specifically dedicated to computing RFM scores and segmenting customers based on those scores.
| Flow zone | Description |
| --- | --- |
| Inputs_zone | Contains the transactions_dataset, which is populated by ingesting the transactions table defined in the Inputs section of the Application. By default, it contains a publicly available dataset we have provided. Additionally, there is an editable dataset that can be used to tailor the segments applied to customers (name, recency and frequency values, and a color hex code used for visualizations). This editable dataset is later synced into a non-editable dataset via the segments_identification_sync Flow zone. |
| transactions_preprocessing | Renames columns, parses the date column if needed, extracts date elements, computes the total price per transaction, and retrieves the RFM reference date defined in the Application as a constant in the dataset. |
| rfm_preprocessing | Takes the prepared transactions dataset and aggregates the transaction data by customer to create the features necessary for RFM scoring. This zone also filters transactions based on the period of time defined in the Dataiku Application as the dates filtering strategy. |
| rfm_segmentation | Applies the selected RFM scoring method and segments customers based on this score using a single custom Python recipe (a simplified sketch follows this table). All customers are scored and segmented by the recipe, and the resulting RFM scoring information is stored in a managed folder for later use by the optional propagation. Once again, highly detailed explanations of how this project works, as well as reference materials, can be found in the project wiki. |
| webapp_zone | Receives the customer RFM segments and isolates the datasets required for the backend of this solution's webapps. |
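As a rough illustration of the segmentation logic in the rfm_segmentation zone, the sketch below maps recency and frequency scores to named segments. The grid used here is a common RFM convention; the actual segment names and boundaries used by the solution come from the editable segments dataset described above.

```python
import pandas as pd

# Illustrative RFM grid; the solution's real boundaries and names come
# from the editable segments dataset, not from this hard-coded logic.
def assign_segment(r_score: int, f_score: int) -> str:
    if r_score >= 4 and f_score >= 4:
        return "champions"
    if r_score >= 3 and f_score >= 3:
        return "loyal customers"
    if r_score >= 3:
        return "potential loyalists"
    if f_score >= 3:
        return "at risk"
    return "hibernating"

scores = pd.DataFrame({
    "customer_id": ["C001", "C002", "C003"],
    "recency_score": [5, 2, 4],
    "frequency_score": [4, 1, 3],
})
scores["segment"] = [
    assign_segment(r, f)
    for r, f in zip(scores["recency_score"], scores["frequency_score"])
]
```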
Note
If you chose not to propagate RFM scoring, you can skip the next section.
Under the Hood: How do we compute RFM scoring propagation?#
The secondary branch of our Flow, which applies our RFM scoring method to a pre-defined propagation period, appears more complicated than the first branch detailed above but is in practice quite straightforward.
| Flow zone | Description |
| --- | --- |
| rfm_propagation_preprocessing | Takes the prepared transactions dataset and aggregates the transaction data by customer per month. As a result, for each month in which a customer made a transaction, we get the features used to apply the selected RFM scoring method. Transactions are again filtered on a period of time, but this period is much larger than the one defined by the selected dates filtering strategy. |
| rfm_propagation | Leverages the RFM scoring information learned in the rfm_segmentation Flow zone and applies RFM segmentation to customers over this larger period of time. |
| rfm_propagation_last_dates | Filters customer RFM data to focus only on the most recent RFM segments and the customers present in them. From this subset, customers NOT present in the most recently identified segments can be flagged as inactive customers. |
| rfm_propagation_scores | Computes the average RFM score for each month of the propagation period, allowing month-over-month changes to be analyzed in the dashboard. |
| rfm_propagation_transitions | Computes the transitions between RFM features (Recency, Frequency, Monetary) as well as the transitions between RFM segments, before splitting the dataset into individual datasets per feature/segment so that all transitions can be analyzed across multiple axes (see the sketch after this table). |
| webapp_zone | Isolates the datasets required for the backend of this solution's webapps. |
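To sketch what a segment transition computation can look like, the example below pairs each customer's segment with the previous month's segment and builds a transition matrix; the data is synthetic, and the solution's own recipes are more elaborate.

```python
import pandas as pd

# Synthetic per-customer, per-month segment assignments (illustrative).
monthly = pd.DataFrame({
    "customer_id": ["C1", "C1", "C1", "C2", "C2"],
    "month": ["2023-01", "2023-02", "2023-03", "2023-02", "2023-03"],
    "segment": ["hibernating", "at risk", "champions",
                "champions", "at risk"],
}).sort_values(["customer_id", "month"])

# Previous observation's segment per customer (for simplicity this pairs
# consecutive observations rather than strictly consecutive months).
monthly["previous_segment"] = monthly.groupby("customer_id")["segment"].shift(1)

# Rows: segment last month; columns: segment this month.
transitions = pd.crosstab(monthly["previous_segment"], monthly["segment"])
print(transitions)
```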
A Short Note on Automation#
As mentioned in the Dataiku Application section of this article, the Flow of this solution can be automated to trigger on new data, at a specific time, etc. All of these trigger parameters can be tuned in the Scenarios menu of the project. Additionally, reporters can be created to send messages to Teams, Slack, email, etc., to keep the whole organization informed. These scenarios can also be run ad hoc as needed. Full details on the scenarios and project automation can be found in the wiki.
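For instance, scenarios can also be launched programmatically through Dataiku's public API client; the host, API key, project key, and scenario id below are placeholders you would replace with your own.

```python
import dataikuapi

# Placeholder host, API key, project key, and scenario id.
client = dataikuapi.DSSClient("https://dss.example.com", "YOUR_API_KEY")
project = client.get_project("RFM_SEGMENTATION")
scenario = project.get_scenario("build_flow")

# Launch the scenario and block until it finishes.
run = scenario.run_and_wait()
print(run.outcome)  # e.g. SUCCESS or FAILED
```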
Reproducing these Processes With Minimal Effort For Your Own Data#
The intent of this project is to give marketing teams a plug-and-play solution, built with Dataiku, for segmenting customers by their RFM scores. By creating a single solution that can benefit and influence the decisions of a variety of teams in an organization, smarter and more holistic strategies can be designed to optimize customer retention, identify at-risk customers, improve customer communication, and adapt marketing strategies to customer segment distributions.
We've provided several suggestions on how to use transaction data to segment your customers, but ultimately the "best" approach will depend on your specific needs and your data. If you're interested in adapting this project to the specific goals and needs of your organization, roll-out and customization services can be offered on demand.