Inspect a model’s results#

See a screencast covering this section’s steps

Once your models have finished training, let’s see how Dataiku did.

  1. While in the Result tab, click on the Random forest model in Session 1 on the left-hand side of the screen to open a detailed model report.

Dataiku screenshot of the Result tab for a prediction task.

Check model explainability#

One important aspect of a model is the ability to understand its predictions. The Explainability section of the report includes many tools for doing so.

  1. In the Explainability section, click Feature importance to see an estimate of each feature’s influence on the model’s predictions.

Dataiku screenshot of the feature importance chart for a model in the Lab.

Note

Due to the somewhat random nature of algorithms like random forest, you might not have exactly the same results throughout this modeling exercise. This is to be expected.
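If you’d like to see what sits behind a chart like this, here is an illustrative sketch using scikit-learn rather than Dataiku’s own API; the dataset and feature names are hypothetical stand-ins. A random forest exposes a per-feature importance score (mean decrease in impurity), which is the kind of estimate the Feature importance chart visualizes:

```python
# Sketch only: scikit-learn's feature importances for a random forest.
# The synthetic dataset and feature names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# One score per feature; the scores are normalized to sum to 1.
for i, imp in enumerate(clf.feature_importances_):
    print(f"feature_{i}: {imp:.3f}")
```

Note that fixing `random_state`, as above, makes a run reproducible; without it you would see slightly different importances each time, just as the note describes for Dataiku sessions.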

Check model performance#

You’ll also want to dive deeper into a model’s performance, starting with basic metrics for a classification problem like accuracy, precision, and recall.

  1. In the Performance section, click Confusion matrix to check how well the model did at classifying real and fake job postings.

Dataiku screenshot of the confusion matrix for a model in the Lab.
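To make the metrics behind this screen concrete, here is a minimal sketch (using scikit-learn, not Dataiku’s API) of how a confusion matrix relates to accuracy, precision, and recall. The labels are hypothetical, with 1 standing for a fake posting and 0 for a real one:

```python
# Sketch only: confusion matrix and the metrics derived from it.
# Hypothetical labels: 1 = fake posting, 0 = real posting.
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_score, recall_score)

y_true = [0, 0, 0, 1, 1, 1, 0, 1]  # actual classes
y_pred = [0, 0, 1, 1, 0, 1, 0, 1]  # model's predictions

cm = confusion_matrix(y_true, y_pred)  # rows: actual, columns: predicted
tn, fp, fn, tp = cm.ravel()

print(cm)
print("accuracy: ", accuracy_score(y_true, y_pred))   # (tp + tn) / total
print("precision:", precision_score(y_true, y_pred))  # tp / (tp + fp)
print("recall:   ", recall_score(y_true, y_pred))     # tp / (tp + fn)
```

In this toy example the model misses one fake posting (a false negative) and flags one real posting as fake (a false positive), which is exactly the kind of trade-off the confusion matrix screen lets you inspect.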

Check model information#

Alongside the results, you’ll also want to understand exactly how the model was trained.

  1. In the Model Information section, click Features to check which features were included in the model, which were rejected (such as the text features), and how they were handled.

  2. When finished, click on Models to return to the Result home.

Dataiku screenshot of the feature handling for a model in the Lab.
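The idea of feature handling, deciding how each input column is prepared for training and which columns are rejected, can be sketched outside Dataiku with scikit-learn. The column names and handling choices below are hypothetical, not taken from this project’s settings:

```python
# Sketch only: per-column "feature handling" with scikit-learn.
# Hypothetical columns: a numeric one is rescaled, a categorical one is
# dummy-encoded, and a free-text one is rejected (dropped).
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({
    "salary": [50000, 72000, 61000],
    "employment_type": ["Full-time", "Part-time", "Full-time"],
    "description": ["Great job...", "Apply now...", "Join us..."],
})

handler = ColumnTransformer(
    transformers=[
        ("num", StandardScaler(), ["salary"]),
        ("cat", OneHotEncoder(), ["employment_type"]),
    ],
    remainder="drop",  # any unlisted column (the text feature) is rejected
)
X = handler.fit_transform(df)
print(X.shape)  # 3 rows; 1 scaled numeric column + 2 one-hot columns
```

The Features screen in Dataiku records the same kind of decision for every column, so you can always audit what the model actually saw.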