Solution | Process Mining#

Overview#

Business Case#

Process optimization as a means of reducing costs and improving efficiency is a perennial priority for companies. During economic periods in which cost pressures are especially acute, a focus on process optimization becomes all the more critical to ensuring continued business success and resilience.

The ever-increasing use of technology systems to manage key processes provides previously inaccessible opportunities for process analysis and optimization. These systems generate timestamped workflow logs as a byproduct of their primary task (e.g., case management, process execution).

This in turn enables a shift from time-consuming and potentially erratic process evaluation techniques (e.g., spot checks, time-and-motion studies) to modern, comprehensive, rapid, and statistically driven analytics via process mining. By leveraging the timestamps captured at each stage of a process flow, process mining instantly creates a visual and statistical representation of any process, allowing teams to immediately undertake effective reviews of:

  • Process conformance

  • Root cause analysis

  • Target optimization

  • Bottleneck identification

A survey of CFOs revealed that 93% have not yet used process mining to map business processes. With the ever-increasing digitization of business models, the use of process mining to identify root causes, take effective action, and monitor impact opens a significant field of untapped opportunities for companies.

Installation#

The process to install this solution differs depending on whether you are using Dataiku Cloud or a self-managed instance.

Dataiku Cloud users should follow the instructions for installing solutions on cloud.

  1. The Cloud Launchpad will automatically meet the technical requirements listed below, and add the Solution to your Dataiku instance.

  2. Once the Solution has been added to your space, move ahead to Data Requirements.

Technical Requirements#

To leverage this solution, you must meet the following requirements:

  • Have access to a Dataiku 13.0+ instance.

  • A code environment named solution_process-mining running Python 3.8 or later, with pandas 1.3 and the following required packages:

    cairosvg==2.5.2
    dash==2.9.3
    dash_bootstrap_components>=1.0
    dash_daq>=0.5
    dash_interactive_graphviz>=0.3
    dash_table==5.0
    Flask-Session==0.4.1
    graphviz==0.17
    pydot==1.4.2
    
  • The open-source graph visualization software Graphviz must also be installed on the same system running Dataiku.
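
As a quick, purely illustrative sanity check of these requirements (the package names are taken from the list above; the check itself is not part of the solution), you can run something like the following inside the solution_process-mining code environment:

    # Illustrative check that the code environment and Graphviz system install
    # described above are in place; not part of the solution itself.
    import shutil

    import dash
    import graphviz

    # The graphviz Python package is only a wrapper around the Graphviz binaries,
    # so the 'dot' executable must be available on the system running Dataiku.
    if shutil.which("dot") is None:
        raise RuntimeError("Graphviz 'dot' binary not found on PATH")

    print("dash", dash.__version__, "| graphviz", graphviz.__version__)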

Data Requirements#

The Dataiku Flow was initially built using real-world loan application event logs contained in an XES file. The data was sourced from the BPIC Challenge 2012 and represents a loan application process within a Dutch financial institution. This data will come pre-loaded into every instantiation of the Dataiku Application packaged in the solution but can be overwritten with your own event logs. The solution accepts XES or CSV files for parsing. Logs that are taken as input for the process mining analysis must contain the following columns:

Mandatory column | Description
---------------- | -----------
Case Id          | Indicates the unique identifier of a trace.
Activity         | Indicates a step in the process. Depending on the use case, it can be interpreted as an action or a state, or a mix of both.
Timestamp        | Indicates the timestamp for the activity. It could be the timestamp of an event or the start time/end time of an activity.

Optional columns can be added to enrich the analysis, including:

Optional column       | Description
--------------------- | -----------
End Timestamp         | Indicates the end timestamp for an activity; it must be greater than or equal to the Timestamp column. The current implementation does not support concurrent executions.
Sorting               | Provides a sorting column for your activities. When multiple activities of the same case share a timestamp, it is not possible to determine their order. If a sorting column is not provided, such cases will be dropped.
Resource              | Indicates the person or cost center that executed the action. It could be defined at a case level or at an activity level.
Numerical Attribute   | Provides any external numerical information about the case. Examples of numerical attributes are the claim amount for a claim process, a loan amount for a loan application process, or an invoice amount for an accounts payable process.
Categorical Attribute | Provides any external categorical information about the case. Examples of categorical attributes are a type of claim for a claim process, a type of loan for a loan application process, or a type of supplier for an accounts payable process.
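
For illustration only, a minimal event log satisfying this data model might look like the toy example below (column names and values are made up; the mapping between your real column names and the mandatory columns is done later, during column identification):

    import pandas as pd

    # Toy event log with the three mandatory columns plus two optional ones
    # (a resource and a numerical attribute). Values are purely illustrative.
    events = pd.DataFrame(
        {
            "case_id":   ["A-001", "A-001", "A-001", "A-002", "A-002"],
            "activity":  ["Submit", "Review", "Approve", "Submit", "Reject"],
            "timestamp": pd.to_datetime(
                ["2012-01-02 09:00", "2012-01-03 14:30", "2012-01-05 11:15",
                 "2012-01-02 10:00", "2012-01-04 16:45"]
            ),
            "resource":    ["clerk_1", "clerk_2", "manager_1", "clerk_1", "manager_2"],
            "loan_amount": [15000, 15000, 15000, 8000, 8000],
        }
    )

    # Basic sanity check on the mandatory columns before handing logs to the solution.
    assert {"case_id", "activity", "timestamp"}.issubset(events.columns)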

Workflow Overview#

You can follow along with the solution in the Dataiku gallery.

Dataiku screenshot of the final project Flow showing all Flow zones.

The project has the following high-level steps:

  1. Input audit logs and parse the data.

  2. Analyze data and pre-compute statistics.

  3. Interactively explore processes and create insights with a visual pre-built tool.

  4. Run conformance checks and explore individual traces.

Walkthrough#

Note

In addition to reading this document, it is recommended to read the project wiki before beginning, to get a deeper technical understanding of how this Solution was created and more detailed explanations of Solution-specific vocabulary.

Input and Analyze our Audit Logs#

After installation, you will find the Process Mining Application under the Applications section of the home page of your Dataiku instance.

To begin, you will need to create a new instance of the Process Mining Application. The project is delivered with sample data that should be replaced with your own data, assuming it follows the data model described above. This can be done in one of two ways:

  1. Data can be uploaded directly from your filesystem in the first section of the Dataiku app.

  2. Data can be connected to your database of choice by selecting an existing connection.

Once uploaded, we click Reconfigure to trigger a scenario that will build the workflow_parsed dataset and (optionally) switch all datasets in the Flow to the selected connection. Once this scenario successfully completes, we should refresh the page to update the column names available for identification.

Dataiku screenshot of the Process Mining Dataiku Application.
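
For XES inputs, the parsing performed by this scenario amounts to flattening XML traces and their events into the tabular model described in Data Requirements. The sketch below is a simplified illustration of that idea (it is not the solution's actual recipe code, and it ignores XES namespaces and extensions; the file name is hypothetical):

    import xml.etree.ElementTree as ET

    import pandas as pd

    def xes_to_dataframe(path: str) -> pd.DataFrame:
        """Flatten an XES log into (case_id, activity, timestamp) rows.

        Simplified sketch only: real XES files may use namespaces, lifecycle
        attributes, or extensions that this does not handle.
        """
        rows = []
        for trace in ET.parse(path).getroot().iter():
            if not trace.tag.endswith("trace"):
                continue
            # The trace-level concept:name attribute is the case identifier.
            case_id = next(
                (a.get("value") for a in trace
                 if a.tag.endswith("string") and a.get("key") == "concept:name"),
                None,
            )
            for event in trace:
                if not event.tag.endswith("event"):
                    continue
                attrs = {a.get("key"): a.get("value") for a in event}
                rows.append(
                    {
                        "case_id": case_id,
                        "activity": attrs.get("concept:name"),
                        "timestamp": attrs.get("time:timestamp"),
                    }
                )
        df = pd.DataFrame(rows)
        df["timestamp"] = pd.to_datetime(df["timestamp"])
        return df

    # Hypothetical file name; the packaged sample data is the BPIC 2012 loan log.
    # events = xes_to_dataframe("loan_applications.xes")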

Column identification allows us to link the three mandatory columns to the appropriate columns in the input dataset. We then choose either to include all other columns as attributes or to select attribute columns manually. For a better user experience, select only the columns you expect to be most valuable; selecting a large number of columns here will clutter the visual interface. It is possible to select the columns used for case and activity as additional attributes, which will allow you to use them as filters later.

With columns correctly identified and parameters input, we can build our Flow, which runs all recipes in order to output the final datasets needed to start the webapp. Once the Flow is done building, we can access the Process Mining webapp by entering the dashboard directly from the Dataiku app. The Flow can always be rebuilt with new parameters from the Dataiku App.

Once a reference process has been defined, we can return to the Dataiku app to run a conformance check report that will appear in the dashboard. We just need to select a saved reference process from the dropdown and press Run.

From Logs to Process Discovery: Using the Webapp#

Process mining often starts with very large log files that can seem, at first glance, inextricably jumbled and too complex to parse. Process discovery is the task of making sense of this data. It is achieved using manual filters and selections to separate signal from noise, alongside algorithms that build a synthetic representation of a process.

The Process Discovery tab is the entry point to the webapp. It consists of a visualization screen where the process is displayed and a menu where filters and selectors can be used to configure the visualization. More details on each selector and filter can be found in the solution wiki. At a high level, the graph represents an aggregation of all the traces which match the set filters. Each node represents an activity with its name written inside, except two specific activities:

  • START: all nodes linked to this activity are starting activities.

  • END: all nodes linked to this activity are ending activities.

Activities are also color-coded according to their frequency in the process graph. A legend at the top of the graph explains the color-coding. In the top-left corner, additional information is available about the traces that are displayed: the first number is the number of traces actually displayed on screen, i.e., the traces remaining after filters (including the variants filter) have been applied; the second number is the total number of traces in the data, before any filtering.

Nodes are linked by directed arrows. Each arrow carries a number indicating either the number of transitions from the source node to the target node or the average time spent on the transition; this can be toggled between frequency and time in the top-right corner. Clicking on nodes or edges opens pop-up windows with details and statistics about them.

Dataiku screenshot of the process discovery available in the webapp.
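
Under the hood, a graph like this is essentially a directly-follows graph: every edge aggregates how often, and how quickly, one activity is immediately followed by another within the same case. Below is a minimal pandas sketch of that aggregation, using the illustrative column names from the toy event log shown earlier rather than the solution's internal datasets:

    import pandas as pd

    def directly_follows(events: pd.DataFrame) -> pd.DataFrame:
        """Aggregate consecutive-activity transitions per case into graph edges.

        Returns one row per (activity, next_activity) edge with its frequency
        and average transition time, i.e. the two numbers the arrows can display.
        """
        df = events.sort_values(["case_id", "timestamp"]).copy()
        df["next_activity"] = df.groupby("case_id")["activity"].shift(-1)
        df["next_timestamp"] = df.groupby("case_id")["timestamp"].shift(-1)
        edges = df.dropna(subset=["next_activity"])
        return (
            edges.assign(duration=edges["next_timestamp"] - edges["timestamp"])
            .groupby(["activity", "next_activity"])
            .agg(frequency=("case_id", "size"), avg_duration=("duration", "mean"))
            .reset_index()
        )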

Within the menu of the webapp, we have access to a variety of Selectors that always select full traces, never only parts of them. The webapp comes with default preset selectors for every analysis, but optional selectors can be included by selecting additional columns from the workflow dataset through the Dataiku Application.

We can also select the maximum number of variants to visualize. Note that plotting too many variants will fail, as the graph becomes too large.
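
A variant here is a distinct ordered sequence of activities shared by one or more traces; keeping only the most frequent variants is what keeps the graph readable. A rough sketch of how variants can be counted, again on the illustrative toy columns used earlier:

    import pandas as pd

    def top_variants(events: pd.DataFrame, max_variants: int = 10) -> pd.Series:
        """Count traces per variant (distinct activity sequence) and keep the top ones."""
        sequences = (
            events.sort_values(["case_id", "timestamp"])
            .groupby("case_id")["activity"]
            .agg(tuple)  # each case's ordered activity sequence
        )
        return sequences.value_counts().head(max_variants)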

The star icon on the top right lets you define a reference process. To export a visualization outside the webapp, you can save it using the top-right Save button.

Assessing Process Fit to Real-World Data: Conformance Checks#

Conformance checking is the task of comparing a set of traces with a predefined process. It creates insights into how well real-world data conforms to a user-defined process and can also continuously monitor how new traces fit the process. Conformance checking can take multiple forms, depending on the format of the saved process and business expectations.

Once a process has been saved in the Process Discovery tab, we can switch to the Conformance Checks tab where we’ll be able to click “Run Conformance Checks” and apply the conformance calculations to the existing traces.

Doing so will run our saved process through the Conformance Checks Flow zone, which computes the conformance checks on the traces via a Python recipe. Back in the webapp, once the Conformance Checks have completed, an aggregated conformance score will be plotted on the right side.

This shows the average conformance of the traces defined by the left-hand selectors relative to the reference process (as described above the graph). The user can adjust the time granularity of the graph (monthly, weekly, daily). In production, these checks could be run daily or in real time to monitor the conformity of incoming data to a set process.

Dataiku screenshot of the conformance checks in the webapp.
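
The conformance scores themselves are produced by the Python recipe in the Conformance Checks Flow zone. As a simplified illustration of the general idea (not the solution's actual algorithm), a trace could for instance be scored by the share of its directly-follows transitions that are allowed by the reference process:

    def transition_conformance(trace_activities, reference_edges):
        """Share of a trace's directly-follows transitions allowed by the reference.

        Simplified illustration only; the solution's recipe may use a different,
        more complete conformance measure.
        """
        transitions = list(zip(trace_activities, trace_activities[1:]))
        if not transitions:
            return 1.0
        return sum(t in reference_edges for t in transitions) / len(transitions)

    # Hypothetical reference process and traces to score.
    reference = {("Submit", "Review"), ("Review", "Approve"), ("Review", "Reject")}
    print(transition_conformance(["Submit", "Review", "Approve"], reference))  # 1.0
    print(transition_conformance(["Submit", "Approve"], reference))            # 0.0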

Exploring Individual Traces#

Within the CONFORMANCE view of the webapp, we can also explore individual traces (assuming conformance checks have been run). Selectors on the left-hand side will determine which traces will be displayed. The resulting table will show each trace’s Case ID, Conformance Score from the Conformance Checks, Start Time, and aggregated attributes for all other optional columns. The table can be sorted on each column, and when we click on a row from the table, a graph of the specific trace process will appear below with the same functionalities as the Process Discovery visualization.

Dataiku screenshot of the webapp view allowing for individual trace exploration.

Understand Processes with Visual Insights#

The solution is delivered with a dashboard that contains three tabs:

Tab             | Description
--------------- | -----------
Process Summary | Contains an initial descriptive analysis of our process logs/data.
Conformance     | Provides a conformance report per selected reference process from the Dataiku App.
Webapp          | Embeds the aforementioned webapp into a page of the dashboard to make sharing across the organization simple.

Dataiku screenshot of the process summary tab.

The Process Summary page depends on several datasets in the Flow to generate key metrics and charts which analyze our initial process data. Here we can find the number of traces in our data, the average time for process completion, descriptive analytics about our start and end activities, the frequency of activities, overall case performance, and more.
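
As a back-of-the-envelope illustration of what those metrics boil down to (the dashboard itself reads the Flow's precomputed datasets), similar quantities can be derived directly from an event log with the toy columns used earlier:

    import pandas as pd

    def summary_metrics(events: pd.DataFrame) -> dict:
        """Rough equivalents of some Process Summary metrics, for illustration only."""
        case_durations = events.groupby("case_id")["timestamp"].agg(lambda s: s.max() - s.min())
        first_activities = (
            events.sort_values("timestamp").groupby("case_id")["activity"].first()
        )
        return {
            "number_of_traces": events["case_id"].nunique(),
            "avg_completion_time": case_durations.mean(),
            "activity_frequency": events["activity"].value_counts(),
            "start_activity_frequency": first_activities.value_counts(),
        }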

Dataiku screenshot of the Conformance Report tab of the dashboard.

The Conformance page will be built by the final section of the Dataiku App and can generate a report for the reference process that was created and saved in the webapp. The report will contain our conformance graph showing the reference process, with arrows indicating the actual data flow. Metrics and charts are also generated to identify the number of cases that have been analyzed, the share of cases that conform to the reference process, the average conformance score, and the evolution of conformance metrics. This report can be regenerated each time new data comes in or the reference process changes.

Automation#

Four scenarios automate parts of the computation of the Flow and are triggered through either the Dataiku Application or the webapp. Reporters can be created to send messages to Teams, Slack, email, etc., to keep the full organization informed. These scenarios can also be run ad hoc as needed. Full detail on the scenarios can be found in the wiki.
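
For such ad-hoc runs, the scenarios can also be triggered programmatically through Dataiku’s Python API. The sketch below assumes it runs inside Dataiku (for example, in a notebook of the project); the project key and scenario id are placeholders to replace with the identifiers of your instantiated project:

    import dataiku

    client = dataiku.api_client()
    # Placeholder identifiers: use your instantiated project's key and the id of
    # the scenario you want to run (listed on the project's Scenarios page).
    project = client.get_project("PROCESS_MINING_INSTANCE")
    scenario = project.get_scenario("MY_SCENARIO_ID")
    scenario.run_and_wait()  # blocks until the scenario run completes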

Process mining workstreams often integrate additional data science techniques, such as automated anomaly detection or cluster analysis, to enrich process investigations and uncover deeper insights. Dataiku’s platform is ideally positioned to allow your team to pursue these additional paths and integrate the results into this process mining solution.

Reproducing these Processes With Minimal Effort For Your Own Data#

The intent of this project is to enable operations, strategy, and transformation teams to understand how Dataiku can be used to create a visual map of your processes based on readily available process logs.

By creating a single solution that can benefit and influence the decisions of a variety of teams across an organization, smarter and more holistic strategies can be designed to deep-dive into specific processes, analyze outliers, and apply powerful statistical techniques that enable remediation and optimization efforts.

We’ve provided several suggestions on how to use process logs to interactively create processes and industrialize them using conformance checks, but ultimately the “best” approach will depend on your specific needs and your data. Although we’ve focused on the financial services & insurance industry, process mining can be used in a variety of use cases across industries. If you’re interested in adapting this project to the specific goals and needs of your organization, roll-out and customization services can be offered on demand.