Model features and prediction accuracy
Model Performance insights are available to anyone with an active Predict subscription and can be found in the App under the Predict tab.
Each model has a score based on its overall accuracy compared with models for organisations similar to your own. You can click on each model to see a snapshot of its top predictors, although these are only a small part of the overall data we use in these models.
The model summary shows how the model was created: the number of data points used, the number of cases where the predicted event actually happened, and the data points used for each donor when building the model.
Most importantly, this section includes Prediction Accuracy: a graph showing accuracy based on a prediction ‘snapshot’ made six months ago (these graphs update daily).
Here's some information on how the graphs are updated.
Every 7 days we produce a new set of predictions. We keep a database of every prediction we make, and each night we review these predictions as follows.
We look at a 6-month window after the prediction date (e.g. if we made the predictions on 1st January 2022, we observe the period 1st January 2022 to 30th June 2022; if we made them on 1st December 2022, we observe 1st December 2022 to today).
In that observation ‘window’ we work out whether each person has taken the target action (e.g. donated to the appeal for a DM Appeal prediction, or churned from the RG program for an RG churn prediction). We now have our prediction (a probability or rank) and the outcome (a true/false field).
From that we can calculate how ‘good’ the predictions were. The scores tend to improve as the prediction date recedes further into the past (there is more time to observe positive events), until we move outside the window and stop updating the calculations.
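The nightly review described above can be sketched in code. This is purely an illustrative sketch, not the actual Predict implementation: the function names, the 182-day window length, and the simple "top-half hit rate" accuracy metric are all assumptions made for the example.

```python
from datetime import date, timedelta

# Roughly six months; an assumption for this sketch.
WINDOW_DAYS = 182

def observation_window(prediction_date: date, today: date) -> tuple:
    """The window runs from the prediction date to six months later,
    capped at today if the window has not yet finished."""
    end = prediction_date + timedelta(days=WINDOW_DAYS)
    return prediction_date, min(end, today)

def score_predictions(predictions, outcomes, prediction_date, today):
    """predictions: {person_id: probability}.
    outcomes: set of person_ids who did the target action
    (donated, churned, ...) inside the observation window.
    Returns a simple accuracy figure: of the people ranked in the
    top half by predicted probability, what fraction actually did
    the target action? (Illustrative metric only.)"""
    start, end = observation_window(prediction_date, today)
    if start > today:
        return None  # nothing to observe yet
    ranked = sorted(predictions, key=predictions.get, reverse=True)
    top = ranked[: max(1, len(ranked) // 2)]
    hits = sum(1 for pid in top if pid in outcomes)
    return hits / len(top)

# Example: predictions made on 1st January 2022, reviewed at the
# end of the six-month window.
preds = {"a": 0.9, "b": 0.7, "c": 0.2, "d": 0.1}
done = {"a", "c"}
print(score_predictions(preds, done, date(2022, 1, 1), date(2022, 6, 30)))  # 0.5
```

As the window fills in (more nights of observation), `outcomes` grows and the score can only stay level or improve, which matches the behaviour described above: accuracy figures drift upward until the window closes.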
The data you see in the app are the current accuracy values for the set of predictions produced closest to six months ago.
Watch Tim’s video above to learn more!