Why Should You Trust Our Incrementality Model?

Trust is critical to the adoption of new techniques and products. Here's how MetricWorks ensures that you can trust our incrementality model to support your decision-making process.

Below is a quick summary of how MetricWorks ensures that you can trust our incrementality model. Of course, we are happy to chat with you in great detail about any of this. Don’t be shy. Email us at demo@metric.works.

Experiments & Ground Truth

Trust begins with experiments to establish ground truth. You may have heard of incrementality testing or randomized controlled trials (RCTs), which require a large number of device IDs to be known before the experiment so that they can be randomly divided into a control group and a treatment group, with only the treatment group served ads. That approach is simply not possible on iOS now that ATT restricts access to device IDs.

MetricWorks Polaris takes a different approach: a privacy-compatible methodology that is robust, widely used in other fields, and impervious to platform changes, including ATT/SKAdNetwork on iOS 14. This technique is called the Synthetic Control Method. The synthetic control group is essentially a weighted average of countries other than the treatment country, chosen so that it closely matches the treatment country's historical behavior. The idea is to use the synthetic control to predict what would have happened had the treatment (usually a pause in ad spend) never occurred.
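To make the idea concrete, here is a minimal sketch of fitting synthetic control weights. This is illustrative only, not the Polaris implementation: the function name and the toy data are invented for the example, and the full Synthetic Control Method constrains weights to be non-negative and sum to one, which we approximate here by clipping and renormalizing an unconstrained least-squares fit.

```python
import numpy as np

def fit_synthetic_weights(treated_pre, donors_pre):
    """Fit donor-country weights so the weighted donor average tracks
    the treated country's pre-treatment metric series.

    Simplification: solve the unconstrained least-squares problem,
    then clip negatives and renormalize so the weights sum to 1.
    """
    w, *_ = np.linalg.lstsq(donors_pre, treated_pre, rcond=None)
    w = np.clip(w, 0.0, None)
    return w / w.sum()

# Toy data: the treated country behaves like 0.7 * donor A + 0.3 * donor B.
rng = np.random.default_rng(0)
donors = rng.uniform(50.0, 150.0, size=(60, 3))  # 60 days x 3 donor countries
treated = 0.7 * donors[:, 0] + 0.3 * donors[:, 1]

w = fit_synthetic_weights(treated, donors)
counterfactual = donors @ w  # prediction of "what would have happened"
```

After a treatment (say, pausing spend in the treated country), the gap between the actual post-treatment series and `counterfactual` is the estimated incremental effect.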

Polaris automatically designs, validates, and proposes experiments that affect only a single country (the treatment country). In our experiment design, each metric gets its own synthetic control. If a closely matching control cannot be identified for even one metric, the experiment is discarded and the treatment country is flagged as invalid for experiments.
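As a toy illustration of that gating rule (the function name, metric names, and the 5% error threshold are invented for the example; the source does not specify how "closely matching" is quantified):

```python
def validate_experiment(pre_fit_errors, threshold=0.05):
    """Gate an experiment on pre-treatment fit quality: every metric's
    synthetic control must reproduce that metric's pre-treatment series
    within `threshold` (here, mean absolute percentage error), or the
    candidate treatment country is rejected."""
    failing = {m: e for m, e in pre_fit_errors.items() if e > threshold}
    return len(failing) == 0, failing

# Example: the revenue control fits too loosely, so discard the experiment.
ok, failing = validate_experiment({"installs": 0.02, "revenue": 0.09})
```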

Model Calibration

Each potential experiment is further validated through placebo testing, which is like running a battery of simulated experiments on the dataset. These ground-truth findings are automatically used to calibrate the econometric model that outputs all incrementality metrics across the entire media mix, including traffic that was not treated in the experiment.
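A placebo test can be sketched as follows: pretend each untreated country was treated and measure the resulting "effect". The spread of these placebo gaps shows how large a real treatment effect must be to stand out from noise. Everything here (function names, country data, the unconstrained least-squares fit in place of a fully constrained synthetic control) is an assumption for illustration, not the Polaris pipeline.

```python
import numpy as np

def placebo_gaps(series_by_country, pre_len):
    """For each country, fit a synthetic control from the remaining
    countries on the first `pre_len` observations, then return the
    post-period gap between the actual series and the prediction."""
    names = list(series_by_country)
    gaps = {}
    for name in names:
        target = series_by_country[name]
        donors = np.column_stack(
            [series_by_country[n] for n in names if n != name])
        # Simplified fit: unconstrained least squares on the pre-period.
        w, *_ = np.linalg.lstsq(donors[:pre_len], target[:pre_len], rcond=None)
        gaps[name] = target[pre_len:] - donors[pre_len:] @ w
    return gaps

# Toy example: four countries whose daily installs share a common trend.
rng = np.random.default_rng(1)
base = rng.uniform(80.0, 120.0, size=90)
data = {c: base * f + rng.normal(0.0, 2.0, 90)
        for c, f in [("US", 1.0), ("DE", 0.5), ("FR", 0.4), ("BR", 0.8)]}
gaps = placebo_gaps(data, pre_len=60)
```

Since none of these countries was actually treated, every gap should hover near zero; a real experiment's gap is credible only if it clearly exceeds this placebo noise band.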

Therefore, experiments have a dual purpose:

1) to increase the accuracy of incrementality metrics for the treated traffic in a specific country to 100%, and

2) to substantially increase the accuracy of incrementality metrics across the rest of the model.

Risk Mitigation

It is worth noting that experiments are selected based on the desired balance between information gain and risk. MetricWorks coordinates each experiment with your relevant internal teams and analyzes the results together with them. To be clear, you are not required to commit to any drastic changes in your UA process.

Model Transparency

For teams that are data-science savvy, we share details of the incrementality models, such as model components, coefficients, confidence intervals, multicollinearity diagnostics, and backtesting error, so that they can fully understand the model.
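One of those diagnostics, multicollinearity, is commonly quantified with variance inflation factors (VIFs). The sketch below is a generic textbook computation, not MetricWorks code; the channel names and data are invented for the example.

```python
import numpy as np

def variance_inflation_factors(X):
    """VIF for each column of X: regress the column on all the others
    (plus an intercept) and report 1 / (1 - R^2). A common rule of
    thumb reads VIF > 10 as serious multicollinearity, i.e. channels
    whose spend moves together so tightly that their individual
    coefficients are hard to pin down."""
    n, k = X.shape
    vifs = []
    for j in range(k):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - resid.var() / y.var()
        vifs.append(1.0 / (1.0 - r2))
    return np.array(vifs)

# Toy media mix: 'social' spend almost mirrors 'tv' spend.
rng = np.random.default_rng(2)
tv = rng.normal(100.0, 10.0, 200)
search = rng.normal(50.0, 5.0, 200)
social = 0.9 * tv + rng.normal(0.0, 1.0, 200)
vifs = variance_inflation_factors(np.column_stack([tv, search, social]))
```

Here `vifs[0]` and `vifs[2]` come out large, flagging the tv/social overlap, while `vifs[1]` stays near 1, so the search coefficient can be trusted on its own.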

The Bar Is Usefulness, Not Perfection

Keep in mind that the bar is usefulness, not perfection! No model is perfect; models should be judged and adopted based on usefulness. Media mix models (MMMs) are immediately useful even when uncalibrated because they actually measure incrementality, unlike last-touch attribution, which isn't aligned with business value. Usefulness then increases over time as calibration occurs.

"From having led a data science team, there is an awful lot of important insights that can come from having algorithms just analyze data at massive scale. That's something that people are not physically capable of doing."

– Anthony Cross, Former VP of Data Science, UA & Product Management,
Big Fish Games

Ready To Get Started Today?
