Solution | Insurance Claims Modeling#
Overview#
Business Case#
Generalized Linear Models (GLMs) are a common approach to consumer insurance claims modeling across the world, with a deep, rich, and proven track record. They are an industry standard, well-understood, and acceptable to stakeholders inside and outside the insurance firm.
Existing no- and low-code platforms for building and approving GLMs are often outdated and lack modern data science and analytic capabilities. They require complex and potentially unreliable nests of supporting systems to work effectively.
This solution acts as a template for how actuaries can use Dataiku to perform their work. With it, actuaries can train GLMs in a visual environment, conduct extensive Exploratory Data Analysis, and push their models to production through a simple API deployment interface.
Installation#
The process to install this solution differs depending on whether you are using Dataiku Cloud or a self-managed instance.
Dataiku Cloud users should follow the instructions for installing solutions on cloud.
The Cloud Launchpad will automatically meet the technical requirements listed below, and add the Solution to your Dataiku instance.
Once the Solution has been added to your space, move ahead to Data Requirements.
After meeting the technical requirements below, self-managed users can install the Solution with the following instructions:

1. From the Design homepage of a Dataiku instance connected to the internet, click + Dataiku Solutions (or click + New Project and then select Dataiku Solutions, depending on your instance).

2. Search for and select Insurance Claims Modeling.

3. Click Install, changing the project folder into which the solution will be installed if needed.
Note
Alternatively, download the Solution’s .zip project file, and import it to your Dataiku instance as a new project.
Technical Requirements#
To leverage this solution, you must meet the following requirements:
Have access to a Dataiku 12.0+ instance.
A Python 3.8 or 3.9 code environment named solution_claim-modeling with the following required packages:
Flask<2.3
glum==2.6.0
cloudpickle>=1.3,<1.6
lightgbm>=3.2,<3.3
scikit-learn>=1.0,<1.1
scikit-optimize>=0.7,<0.10
scipy>=1.5,<1.6
statsmodels==0.12.2
xgboost==0.82
dash==2.3.1
dash_bootstrap_components==1.0
Data Requirements#
The Dataiku Flow was initially built using publicly available insurance claims data from the French government and the CASdatasets R package.
The car insurance datasets are:
| Dataset | Description |
| --- | --- |
| claim_frequency | A 678,013-row dataset with one row per policyholder, containing information on the policyholder, their car, and the number of claims they made during the period. |
| claim_severity | A 26,639-row dataset consisting of an ID column linked to the claim_frequency dataset and a claim amount column that sums the total claim amount made by the policyholder during the period. |
These two datasets are joined on the ID column so that the claim amount is associated with each policyholder's claims, when any exist (as sketched below).
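As a minimal illustration of that join, here is a pandas sketch. The file and column names (IDpol, ClaimAmount) are assumptions based on the public source data; adapt them to your own datasets.

```python
import pandas as pd

# Hypothetical file and column names (IDpol, ClaimAmount); adapt to your own data.
frequency = pd.read_csv("claim_frequency.csv")
severity = pd.read_csv("claim_severity.csv")

# Aggregate claim amounts per policyholder (in case of several claim rows),
# then left-join onto the frequency table so policyholders without claims are kept.
severity_per_policy = severity.groupby("IDpol", as_index=False)["ClaimAmount"].sum()
claims = frequency.merge(severity_per_policy, on="IDpol", how="left")
```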
The geographic datasets consist of:
| Dataset | Description |
| --- | --- |
| regions_correspondance | Matches old French regions (pre-2016) with the current regions. |
| regions_polygons | Contains the polygons of each current region. |
Workflow Overview#
You can follow along with the solution in the Dataiku gallery.
The project has the following high-level steps:
Input Historical Data and perform feature processing.
Conduct Exploratory Data Analysis for a deeper understanding.
Train models for claims modeling and pricing.
Review model performance.
Deploy models to an API for real-time predictions.
Interactively explore our models’ predictions with a pre-built webapp and dashboard.
Walkthrough#
Note
In addition to reading this document, it is recommended to read the project wiki before beginning, for a deeper technical understanding of how this Solution was created and more detailed explanations of Solution-specific vocabulary.
Gather Input Data and Prepare for Training#
After the initial join of our historical claims data in the Input Flow zone, the claims data moves to the Feature Processing Flow zone to be prepared for training. A prepare recipe caps the Exposure and Claim Number values and fills missing claim amounts with 0. The decision to cap values is inspired by published research on Generalized Linear Models for Insurance Rating; more details on this research can be explored in the Project Wiki.

The output of this prepare recipe is then split into train and test sets. An additional prepare recipe is applied to the train set (leaving the test set untouched) to ensure the reliability of the test results. This final prepare recipe performs additional transformations used to analyze relationships between variables and applies some further value caps to focus on the most significant data; a minimal sketch of these steps follows below. Although our data is now ready for training, we first conduct Exploratory Data Analysis (EDA), which can reveal interesting patterns and insights in the historical data.
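As an illustration of the prepare and split steps, here is a minimal pandas/scikit-learn sketch. It assumes the column names ClaimNb, Exposure, and ClaimAmount from the public source data and uses illustrative cap values; the actual caps used in the Flow are documented in the project wiki.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

claims = pd.read_csv("claims_joined.csv")  # output of the join sketched above

# Fill missing claim amounts with 0 (policyholders without any claims).
claims["ClaimAmount"] = claims["ClaimAmount"].fillna(0)

# Cap Exposure and ClaimNb; the cap values here are illustrative only.
claims["Exposure"] = claims["Exposure"].clip(upper=1.0)
claims["ClaimNb"] = claims["ClaimNb"].clip(upper=4)

# Hold out a test set; further train-only preparation would follow.
claims_train, claims_test = train_test_split(claims, test_size=0.2, random_state=42)
```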
Extensive Exploratory Data Analysis for a Deeper Understanding#
Three Flow zones comprise our EDA in this solution:
| Flow zone | Description |
| --- | --- |
| Univariate Analysis | Analyzes each variable one by one to check its distribution. |
| Cross Variable | Identifies more complex dependencies in the data by looking at variables taken together and analyzing their joint distribution. |
| Geographic Analysis | Plots geographical variables on maps so that experienced analysts can visually confirm their intuition. |
Starting with the Univariate Analysis Flow zone, the claims train dataset is used as input and folded with a prepare step. Folding all the variables avoids having to use as many group recipes as there are variables. The resulting, much longer dataset is then grouped, computing the minimum claim number, the sum of claim amounts, and the sum of exposure. A final prepare recipe unfolds the data, yielding the Claim Frequency, Claim Severity, and Pure Premium contained in the train dataset (see the sketch below). These values are visualized in the dashboard tabs of the same names.
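As a simplified illustration of this fold-then-aggregate idea (using sums throughout and only a handful of variables), here is a pandas sketch. The column names ClaimNb, Exposure, ClaimAmount and the feature subset are assumptions based on the public source data; the actual recipes in the Flow handle all variables.

```python
import pandas as pd

train = pd.read_csv("claims_train.csv")
feature_cols = ["VehPower", "VehAge", "DrivAge"]  # illustrative subset of variables

# "Fold" the features into (variable, value) pairs so one group-by covers them all.
folded = train.melt(
    id_vars=["ClaimNb", "Exposure", "ClaimAmount"],
    value_vars=feature_cols,
    var_name="variable",
    value_name="value",
)

# Aggregate per variable/value, then derive the univariate statistics.
stats = folded.groupby(["variable", "value"]).agg(
    claim_nb=("ClaimNb", "sum"),
    exposure=("Exposure", "sum"),
    claim_amount=("ClaimAmount", "sum"),
).reset_index()
stats["claim_frequency"] = stats["claim_nb"] / stats["exposure"]
stats["claim_severity"] = stats["claim_amount"] / stats["claim_nb"]
stats["pure_premium"] = stats["claim_amount"] / stats["exposure"]
```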
Moving on to the Cross Variable Flow zone, we again use the claims train dataset as input and apply three different group by recipes. The first computes the minimum and maximum Density for each Area. This analysis reveals that the Density ranges of the different Areas do not overlap, so Area is redundant with Density, and a model using both would be prone to overfitting. The other group by recipes analyze the data by Vehicle Brand and Area, and by Vehicle Brand and Vehicle Power. There does not appear to be any correlation between Vehicle Brand and Area, although there is one between Brand and Power. These relationships are visualized in the final three charts of the Claim Frequency tab of the dashboard; a sketch of the Density/Area check follows below.
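For instance, the Density/Area check could be reproduced along the following lines, assuming the Area and Density column names from the public source data:

```python
import pandas as pd

train = pd.read_csv("claims_train.csv")

# Min and max Density per Area: if the per-Area ranges do not overlap,
# Area carries no information beyond Density and keeping both is redundant.
density_ranges = train.groupby("Area")["Density"].agg(["min", "max"]).sort_values("min")
print(density_ranges)
```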
Finally, the Geographic Analysis Flow zone uses the prepared claims train dataset, together with two of the original input datasets (regions_correspondance and regions_polygons), to associate a polygon with each region. The resulting geographical data is visualized with Dataiku’s geographic map-building capabilities in the Map View tab of the dashboard. The maps show how the data is distributed across regions, starting with the sum of exposure and the number of claims. The solution was built with data representing French regions but can easily be adapted to geographical data for other countries (a small sketch of the region aggregation follows below). Before moving on to the modeling part of this solution, it is worth spending some time exploring the aforementioned dashboard tabs to get a clear understanding of the underlying data used to build the predictive models.
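As a simplified illustration of the region-level aggregation feeding those maps, here is a pandas sketch. The Region column and the old_region/new_region columns of the correspondence table are hypothetical names; the polygons would be joined afterwards for plotting.

```python
import pandas as pd

train = pd.read_csv("claims_train.csv")
correspondance = pd.read_csv("regions_correspondance.csv")  # hypothetical columns: old_region, new_region

# Map pre-2016 regions to current regions, then aggregate exposure and claims per region.
train = train.merge(correspondance, left_on="Region", right_on="old_region", how="left")
per_region = train.groupby("new_region").agg(
    exposure=("Exposure", "sum"),
    claim_nb=("ClaimNb", "sum"),
).reset_index()
# per_region can then be joined to the regions_polygons geometries for mapping.
```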
Train models to predict claim frequency, severity, and amount#
Similar to the previous section, three Flow zones are involved in the model training process, all of which use the GLM plugin to train Generalized Linear Models within Dataiku’s Visual ML:
| Flow zone | Description |
| --- | --- |
| Claim Frequency Modeling | Trains a model to predict the number of claims made by a policyholder. |
| Claim Severity Modeling | Trains a model to predict the claim amount, conditional on the existence of a claim. |
| Pure Premium Modeling | Trains a model to predict the claim amount, unconditional on the existence of a claim. |
To train our Claim Frequency model, we feed the claims train dataset directly into a Visual ML recipe. The recipe applies some additional feature preprocessing and handling before training a Generalized Linear Model regression on the dataset. The previously created claims test set is used to evaluate the model on three metrics: Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), and deviance. Model results and Actual vs. Expected graphs can be explored in the GLM Summary View; a simplified, code-level analogue is sketched below.
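Outside Dataiku, this kind of frequency model can be approximated with a Poisson GLM. The sketch below uses scikit-learn (which ships in the solution’s code environment) with illustrative feature names and exposure as a sample weight; it is only a conceptual equivalent of what the GLM plugin trains in Visual ML, not the solution’s actual training script.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import PoissonRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

train = pd.read_csv("claims_train.csv")

categorical = ["VehBrand", "VehGas", "Region"]            # illustrative columns
numeric = ["VehPower", "VehAge", "DrivAge", "BonusMalus"]

preprocess = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ("num", StandardScaler(), numeric),
])

# Model the claim frequency (claims per unit of exposure) with exposure as weight.
frequency_model = make_pipeline(preprocess, PoissonRegressor(alpha=1e-4, max_iter=1000))
frequency_model.fit(
    train[categorical + numeric],
    train["ClaimNb"] / train["Exposure"],
    poissonregressor__sample_weight=train["Exposure"],
)
```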
Our Claim Severity model requires that we first filter the claims dataset to keep only observations with at least one claim (i.e. ClaimNb > 0). In addition, the claims test set is scored with the Claim Frequency model, since the ClaimNb prediction is reused downstream. The training script is the same as for Claim Frequency, with one obvious difference: here we predict the Claim Amount. As a result, the metric of interest for assessing the model’s performance is the gamma deviance.
Finally, we train our Pure Premium model on the claims train dataset, with Claim Amount as the prediction target. Unlike the Claim Severity model, however, we do not need to filter out rows where no claims exist. A Tweedie distribution is chosen to model the response, so model performance is analyzed with the Tweedie deviance. Both models are sketched below.
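The severity and pure premium models can be approximated in the same conceptual fashion. The sketch below reuses the train dataframe, the preprocess transformer, and the categorical/numeric column lists defined in the previous sketch, and again only mirrors what the GLM plugin does inside Visual ML.

```python
from sklearn.base import clone
from sklearn.linear_model import GammaRegressor, TweedieRegressor
from sklearn.pipeline import make_pipeline

# Severity: average cost per claim, fitted only on policies that had claims.
with_claims = train[train["ClaimNb"] > 0]
severity_model = make_pipeline(clone(preprocess), GammaRegressor(alpha=1e-4, max_iter=1000))
severity_model.fit(
    with_claims[categorical + numeric],
    with_claims["ClaimAmount"] / with_claims["ClaimNb"],
    gammaregressor__sample_weight=with_claims["ClaimNb"],
)

# Pure premium: expected claim amount per unit of exposure, on all policies,
# modeled with a Tweedie distribution (1 < power < 2).
pure_premium_model = make_pipeline(
    clone(preprocess), TweedieRegressor(power=1.9, alpha=1e-4, max_iter=1000)
)
pure_premium_model.fit(
    train[categorical + numeric],
    train["ClaimAmount"] / train["Exposure"],
    tweedieregressor__sample_weight=train["Exposure"],
)
```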
Evaluate the performance of our models#
We’ve walked through the differences between each trained model, but which one performs best? The Model Comparison Flow zone uses scored holdout datasets to compare the performance of our three models. All three scored datasets are taken from their respective Flow zones and joined on the id key they share. A prepare recipe then computes the Compound Model prediction as the product of the Claim Number and Claim Amount predictions, which can be compared with the Tweedie prediction. For our solution, both the parametric goodness-of-fit measures and the Lorenz curves indicate stronger performance by the Compound Model, although more work on feature handling may produce different results. Results can be explored visually in the Model Comparison tab of the dashboard; a Lorenz curve sketch follows below.
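As an illustration of the Lorenz curve comparison, a common approach ranks policies by predicted risk and accumulates observed claim amounts. The sketch below is an outside-Dataiku illustration with hypothetical column names for the scored dataset, and it uses matplotlib, which is not part of the solution’s code environment.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

def lorenz_curve(claim_amount, y_pred, exposure):
    """Cumulative share of observed claim amounts vs. cumulative share of exposure,
    with policies ordered from lowest to highest predicted risk."""
    order = np.argsort(np.asarray(y_pred))
    amounts = np.asarray(claim_amount)[order]
    expo = np.asarray(exposure)[order]
    return np.cumsum(expo) / expo.sum(), np.cumsum(amounts) / amounts.sum()

# Hypothetical joined dataset holding each model's prediction per policy.
scored = pd.read_csv("model_comparison_scored.csv")
scored["compound_prediction"] = scored["claim_nb_prediction"] * scored["claim_amount_prediction"]

for name, col in [("Compound", "compound_prediction"), ("Tweedie", "tweedie_prediction")]:
    x, y = lorenz_curve(scored["ClaimAmount"], scored[col], scored["Exposure"])
    plt.plot(x, y, label=name)
plt.plot([0, 1], [0, 1], "k--", label="Random baseline")
plt.xlabel("Cumulative share of exposure (ordered by predicted risk)")
plt.ylabel("Cumulative share of claim amounts")
plt.legend()
plt.show()
```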
Deploy our models to an API for real-time predictions#
All of the previously trained models are exposed through a deployed API service. The API Flow zone simply groups all models into a single Flow zone for organizational purposes. The deployed API service, named claim_risk, contains three endpoints, one per prediction model. No enrichment is needed before querying, as the model scripts were designed to contain the necessary feature processing. If you have an API node as part of your Dataiku subscription, this API service can be pushed to the API node so that real queries from your claims teams can be served (see the sketch below).
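Once the service is deployed on an API node, a query could look roughly like the following. This sketch assumes the standard Dataiku API node REST format, a hypothetical API node URL, endpoint name, and feature payload; the actual endpoint names and required features are defined in the claim_risk service.

```python
import requests

API_NODE_URL = "https://api-node.example.com"  # hypothetical API node URL
SERVICE_ID = "claim_risk"
ENDPOINT_ID = "claim_frequency"                # hypothetical endpoint name

# Illustrative feature payload; the real endpoint expects the features it was trained on.
features = {"VehPower": 6, "VehAge": 2, "DrivAge": 45, "BonusMalus": 50,
            "VehBrand": "B12", "VehGas": "Regular", "Region": "R82", "Density": 1200}

response = requests.post(
    f"{API_NODE_URL}/public/api/v1/{SERVICE_ID}/{ENDPOINT_ID}/predict",
    json={"features": features},
)
response.raise_for_status()
print(response.json())  # contains the model prediction for this policyholder
```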
Enable Claims Teams with Pre-Built Interactive Dashboards#
Note
For 11.4+ instances with UIF enabled, retraining of the model is necessary prior to starting the webapp for the first time to avoid a permission denied error.
In addition to the pre-defined model comparison analysis detailed above and visualized in the Model Comparison dashboard tab, this solution comes with a pre-built webapp for interactive model comparison. The Interactive Model Comparison dashboard tab lets users explore the models’ predictions, understand how each feature affects each model, and compare the predictions across models.
The input features can be modified with sliders or dropdown menus, which immediately triggers a call to the API that returns each model’s prediction. Because of the redundancy between Area and Density discovered during EDA, Area is not available as a feature in the webapp.
Note
It is possible to have the webapp use models directly deployed on the Flow, instead of the API service, by changing the use_api and api_node_url project variables.
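For reference, switching between the two modes in a webapp backend could be handled along the following lines. This is a hypothetical sketch, not the solution’s actual webapp code: it assumes the project variables are stored as JSON-style strings and uses a hypothetical saved model name for the Flow-based path.

```python
import json

import dataiku
import pandas as pd
import requests

# Project variables such as use_api and api_node_url are exposed to Python as strings.
variables = dataiku.get_custom_variables()
use_api = json.loads(variables.get("use_api", "false"))
api_node_url = variables.get("api_node_url", "")

def predict_frequency(features):
    """Score one record, either through the API node or a model deployed on the Flow."""
    if use_api:
        resp = requests.post(
            f"{api_node_url}/public/api/v1/claim_risk/claim_frequency/predict",
            json={"features": features},
        )
        resp.raise_for_status()
        return resp.json()
    # Fall back to a saved model on the Flow (hypothetical model name).
    predictor = dataiku.Model("claim_frequency_model").get_predictor()
    return predictor.predict(pd.DataFrame([features]))
```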
Reproducing these Processes With Minimal Effort For Your Own Data#
The intent of this project is to enable claims teams to understand how Dataiku, and the new GLM Plugin, can be used to create an insurance pricing model based on historical claim data. By creating a singular solution that can benefit and influence the decisions of a variety of teams in a single organization, smarter and more holistic strategies can be designed in order to leverage GLM pricing Solutions, establish effective governance, and centralize pricing workflows without sacrificing agility.
We’ve provided several suggestions on how to use historical claims data to train predictive models but ultimately the “best” approach will depend on your specific needs and your data. If you’re interested in adapting this project to the specific goals and needs of your organization, roll-out and customization services can be offered on demand.