You will have noticed that some Dataro models come with two new fields in your CRM: the 'Score' field and the 'Rank' field. You might be wondering why.

**Scores**

Dataro Scores are numbers on a scale from 0.0 to 1.0 that approximate the probability of a person taking a particular action. The higher the score, the higher the probability of an individual taking the associated action. So if we were trying to predict direct mail giving, a person with a score of 0.76 would have roughly a 76 percent chance of giving in the upcoming DM appeal, according to our modelling.

**Ranks**

Ranks simply order your donors from most to least likely to take a particular action. So a person with Rank 1 is the most likely and rank 10,000 is the 10,000th most likely, etc.
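The relationship between Scores and Ranks can be sketched in a few lines of Python. This is an illustration only, not Dataro's implementation; the donor names and scores are made up.

```python
# Hypothetical calibrated scores for three donors (invented data).
scores = {"donor_a": 0.91, "donor_b": 0.34, "donor_c": 0.78}

# Sort donors by score, highest first; Rank 1 = most likely to act.
ordered = sorted(scores, key=scores.get, reverse=True)
ranks = {donor: position for position, donor in enumerate(ordered, start=1)}

print(ranks)  # {'donor_a': 1, 'donor_c': 2, 'donor_b': 3}
```

Because the sort is purely by score, two donors with identical scores would tie arbitrarily here; in practice a tie-breaking rule would be needed.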

**Why have ranks & scores?**

Ranks and Scores are the output of different models, as explained below. But basically they are different ways of looking at the same information.

It is possible to construct a campaign using either Ranks or Scores. However, we generally advise newer users to use the Ranks and our suggested campaign sizes (on the homepage of the app) to build their campaigns, simply because it is easier. Ranks allow you to quickly build a campaign of your desired size while targeting your top donors.

The Scores are useful for calculating expected values on a per-donor basis, and can therefore be used to calculate ROI curves and determine who should be included in a campaign. However, this is a significantly more complex method and we only advise it for more advanced users.
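To make the expected-value idea concrete, here is a minimal sketch of how calibrated scores could feed an ROI estimate for a mail campaign. The scores, average gift value, and mailing cost below are invented for illustration.

```python
# Calibrated probability of giving for four donors, sorted descending (invented).
scores = [0.40, 0.25, 0.10, 0.05]
avg_gift = 80.0        # assumed average gift value
cost_per_piece = 2.0   # assumed cost to mail one donor

# Cumulative expected ROI if we mail only the top-N donors.
for n in range(1, len(scores) + 1):
    expected_revenue = sum(scores[:n]) * avg_gift
    cost = n * cost_per_piece
    print(f"top {n}: expected ROI = {expected_revenue - cost:.2f}")
```

Plotting expected ROI against N traces out the ROI curve mentioned above; the campaign size where the curve flattens (here, the marginal donor's expected gift falls below the mailing cost) is a natural cutoff.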

Some models, such as the major giving and gift-in-will predictions, have Ranks but no Scores. This is simply because these are rarer events and there isn't enough data at any one organisation to produce actionable probability scores.

**How does Dataro produce scores?**

The raw output of a machine learning model is an 'uncalibrated' probability value, ranging from 0.0 to 1.0, which does not directly reflect the real-world likelihood of the event.

This is a complex issue, but the primary reason for this is that in order to learn correct weights (internal rules) for the model, the distribution of the 'training' data must be evenly split between positive and negative samples (i.e. people who do the action and people who don't do the action). In reality, the distribution for the events we are interested in is very lopsided, so we must manipulate the training data to attain this balance of samples (either by dropping out negative cases or repeating positive cases).
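The rebalancing step described above can be illustrated with a toy example. This is a simplified sketch, not Dataro's pipeline: it balances a heavily lopsided dataset by repeating the rare positive cases, one of the two approaches mentioned (the other being dropping negative cases).

```python
# Toy training set (invented): 95 non-givers, 5 givers.
negatives = [("donor", 0)] * 95
positives = [("donor", 1)] * 5

# Repeat the positive cases until the classes are roughly evenly split.
factor = len(negatives) // len(positives)
balanced = negatives + positives * factor

labels = [y for _, y in balanced]
print(sum(labels), len(labels))  # 95 positives out of 190 samples
```

A model trained on `balanced` sees positives far more often than they occur in reality, which is why its raw scores overstate the true probabilities and need calibrating afterwards.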

The uncalibrated model scores, therefore, do not reflect the true probability of the event, but they do contain the information we can use to order the donors (i.e. calculate the Rank). What we mean by this is that for the uncalibrated scores, a 0.9 does not necessarily reflect a 90% chance of the event occurring, but it is guaranteed to be more likely to happen than a score of 0.89 (according to the model).

While the ranks are useful for actually building a campaign, there are many reasons that we might want to estimate the probability of a donor responding to the ask, for example: calculating an optimal campaign size or estimating returns before the campaign runs.

In order to generate more accurate probability scores, we train a secondary model to 'calibrate' the scores. This model uses the unbalanced (i.e. true) data distribution to learn that, for instance, an uncalibrated score of 0.9 actually corresponds to a probability of 0.7 and so on.
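A very simplified sketch of what such a calibration step might look like is below. The real secondary model is likely more sophisticated; this version just bins historical uncalibrated scores and maps each bin to its observed event rate, using invented data.

```python
def calibrate(uncalibrated, outcomes, n_bins=5):
    """Learn a mapping from raw scores to empirical probabilities via binning."""
    bins = [[] for _ in range(n_bins)]
    for s, y in zip(uncalibrated, outcomes):
        idx = min(int(s * n_bins), n_bins - 1)
        bins[idx].append(y)
    # Observed event rate per bin; 0.0 for bins with no history.
    rates = [sum(b) / len(b) if b else 0.0 for b in bins]
    return lambda s: rates[min(int(s * n_bins), n_bins - 1)]

# Toy history: donors scored ~0.9 actually gave only ~60% of the time.
raw  = [0.92, 0.95, 0.91, 0.93, 0.94, 0.15, 0.12, 0.18, 0.11, 0.16]
gave = [1,    1,    0,    1,    0,    0,    0,    1,    0,    0]
cal = calibrate(raw, gave)
print(round(cal(0.9), 2))  # an uncalibrated 0.9 maps to the observed rate 0.6
```

This captures the idea in the paragraph above: the calibration model learns from the true (unbalanced) distribution that a raw 0.9 corresponds to a lower real-world probability.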

**How does Dataro produce ranks?**

The Rank is derived directly from the raw (uncalibrated) output of the machine learning model, and orders each donor from the most likely to the least likely. This is a very easy way to visualise the top priority donors for any given action. In most cases the uncalibrated model scores are unique, and the ranks are a true reflection of the ordering implied by the model.
