Accurate forecasts are incredibly important to us – that’s what our business is built on! We assess and ‘sense-check’ our predictions in two ways:
- We validate our predictions against historical data using a process called time slicing. Essentially, this involves splitting your historical data into training and validation sets and backtesting our predictions against the portion that was withheld. This process of generating predictions and comparing them to the withheld data is repeated dozens of times, so that when we give you a prediction it is as reliable as we can make it based on the data. It also gives us a good idea of how effective the model is going to be in practice. Some events are inherently random, but machine learning allows you to predict and plan with a greater degree of confidence.
- The second way that we evaluate model performance is by running frequent experiments. An experiment involves a ‘control set’ that doesn’t receive any sort of campaign or intervention, and a ‘test set’ that does. This allows us to see how the predictions perform in a controlled, scientific experiment.
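The time-slicing idea in the first bullet can be sketched in a few lines of code. This is a simplified illustration, not our production pipeline: the daily sales figures and the naive ‘repeat the last value’ forecaster are placeholder assumptions standing in for real data and a real model.

```python
# Sketch of "time slicing" (rolling-origin backtesting): repeatedly train on
# an expanding window of history and score on the withheld slice that follows.

def time_slice_backtest(series, n_splits=3, horizon=2):
    """Average forecast error across several train/validation slices."""
    errors = []
    for i in range(n_splits):
        cut = len(series) - (n_splits - i) * horizon
        train = series[:cut]                    # data the model may see
        held_out = series[cut:cut + horizon]    # withheld validation slice
        forecast = [train[-1]] * horizon        # naive placeholder model
        errors.append(sum(abs(f - a) for f, a in zip(forecast, held_out)) / horizon)
    return sum(errors) / len(errors)            # mean absolute error across slices

sales = [10, 12, 11, 13, 14, 13, 15, 16, 15, 17]  # hypothetical daily sales
print(round(time_slice_backtest(sales), 2))
```

Because each validation slice sits strictly after its training window, the score reflects genuine forecasting rather than hindsight, which is what makes the backtest a fair preview of live performance.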
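The control/test comparison in the second bullet boils down to measuring the lift of the test group over the control group. The conversion counts below are made-up numbers for illustration, and a real readout would also include a significance test before drawing conclusions.

```python
# Toy readout of a control/test experiment: how much better did the group
# that received the intervention perform than the group that did not?

def uplift(control_conversions, control_size, test_conversions, test_size):
    """Relative lift of the test group's conversion rate over control."""
    control_rate = control_conversions / control_size
    test_rate = test_conversions / test_size
    return (test_rate - control_rate) / control_rate

# Hypothetical figures: 50/1000 conversions in control, 65/1000 in test.
print(f"{uplift(50, 1000, 65, 1000):.0%}")  # prints "30%"
```

Holding back a control group is what makes the result causal: any difference between the groups can be attributed to the intervention rather than to seasonality or chance.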