Three approaches to measure granular LTV in an SKAdNetwork era
January 20, 2021 Simon Whittick

With Apple’s App Tracking Transparency (ATT) framework rolling out in early 2021, mobile user acquisition experts are preparing for many changes. One of the biggest is life without Apple’s Identifier for Advertisers (IDFA) and life with Apple’s SKAdNetwork (SKAN) attribution solution.

In this post we’ll look at specific solutions, covering:

  • A recap on the challenges SKAdNetwork presents when measuring LTV at a granular level.
  • Different approaches to measure LTV in an SKAdNetwork era:
    • Linear redistribution
    • Probabilistic redistribution
    • Top-down incrementality

SKAdNetwork challenges

SKAdNetwork presents a big challenge when calculating a user’s actual Lifetime Value (LTV) at a granular level across channels. This matters, because LTV is a fundamental metric for growing consumer mobile apps. So, what are the challenges?

  1. IDFAs are used to tie pre-install impressions and clicks across channels to post-install events, which gives user acquisition experts actual LTV by channel. Whilst SKAdNetwork will tie campaigns to an install, post-install behaviour and revenue are a little more murky.
  2. The first post-install challenge is that you won’t know exactly when an install happened: SKAN sends conversion data at a random time to maintain user anonymity. The second is the 24-hour conversion timer. The conversion value logic will largely be controlled by major channels like Facebook and Mobile Measurement Partners (MMPs) like AppsFlyer, who currently plan to set a 24-hour conversion window. To build a true LTV picture, a user would need to keep coming back every 24 hours, which makes it almost impossible to get an accurate view of LTV by advertising channel (illustrated in the sketch below).
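
To make that second challenge concrete, here’s a minimal Python sketch (simplified rules and hypothetical timestamps, not Apple’s implementation) of how the measurement window only extends while a user keeps returning within 24 hours of the last conversion-value update:

```python
from datetime import datetime, timedelta

CONVERSION_WINDOW = timedelta(hours=24)

def measurement_cutoff(install_time, session_times):
    """Rough illustration: each session within 24h of the previous update
    extends the window; the first 24h gap ends measurement and the postback
    is later sent at a randomised time."""
    last_update = install_time
    for session in sorted(session_times):
        if session - last_update <= CONVERSION_WINDOW:
            last_update = session  # the 24h timer resets on each update
        else:
            break                  # 24h of silence: nothing further is measured
    return last_update + CONVERSION_WINDOW

install = datetime(2021, 1, 20, 9, 0)
sessions = [install + timedelta(hours=h) for h in (5, 20, 70)]
print(measurement_cutoff(install, sessions))
# 2021-01-22 05:00: the session at +70h is never reflected in the conversion value
```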

Before going further it’s important to define what we mean by deterministic and probabilistic attribution:

  • Deterministic Attribution: Put simply, deterministic attribution is known to be accurate because it generally uses a user ID to track at a user level and connect an install to a campaign. Whilst SKAdNetwork is deterministic attribution, it has gaps because of the mechanics it uses to obfuscate user-level tracking. This is where the need for probabilistic attribution comes in.
  • Probabilistic Attribution: This is attribution which uses statistical models to connect revenue to a campaign. Unless a user opts in to share their IDFA you can’t say with 100% confidence that a campaign drove certain revenue, so models are used to assign revenue to a campaign probabilistically, with lower than 100% confidence.

We caught up with MetricWorks CEO Brian Krebs, Incrmntal CEO Maor Sadra and Algolift by Vungle GM Paul Bowen to explore probabilistic forms of attribution that fill gaps left by SKAdNetwork’s limited deterministic attribution. So let’s take a look at the three solutions we identified to measure LTV in an SKAdNetwork era.

1. Linear redistribution

This is the most simplistic form of LTV attribution. It’s something you could design yourself with minimal data science resources; however, it delivers directional data rather than a high degree of accuracy, which naturally hampers optimisation. It’s also last-touch, so it only gives credit to the last interaction before a conversion rather than distributing value across all touchpoints influencing a conversion.

How does it work?

It starts with two data sources:

  1. Actual revenue cohorted to install day via internal databases and product analytics.
  2. SKAdNetwork data with channel attribution and 24 hours of post-install data, cohorted to the day when you received it.

You can then group users into clusters based on geo data and behaviour in the first 24 hours post-install, then linearly assign revenue to each channel based on the number of installs in each cluster. The more clusters you have, the more accurate you can get:

Example 1: Single cluster

For example, you could have a simple single cluster and model revenue by the volume of installs.
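
Here’s a minimal sketch of that split in Python, using hypothetical SKAdNetwork install counts and cohorted revenue:

```python
# Hypothetical figures: one day's cohorted revenue redistributed across
# channels purely by each channel's share of SKAdNetwork installs.
skan_installs = {"Facebook": 600, "Apple Search Ads": 250, "Organic": 150}
cohort_revenue = 5_000.0  # actual revenue for this install-day cohort

total_installs = sum(skan_installs.values())
revenue_by_channel = {
    channel: cohort_revenue * installs / total_installs
    for channel, installs in skan_installs.items()
}
print(revenue_by_channel)
# {'Facebook': 3000.0, 'Apple Search Ads': 1250.0, 'Organic': 750.0}
```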

Obviously, this treats all installs as equal. It ignores the possibility that different channels and campaigns will deliver a different quality of install. 

Example 2: Two clusters

So you can use the 24 hours of conversion actions (e.g. made a purchase, invited a friend, tracked exercise) to break users into two clusters.

This starts to recognise that different channels are driving users of different quality. However, not all conversion actions are created equal.

Example 3: Multiple clusters

So you could break users down into more granular clusters by the value of the actions they’ve completed.
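
A hedged sketch of that more granular version follows, with cluster definitions, value weights and install counts that are all hypothetical; each channel’s share of a cluster is weighted by the assumed value of the actions in that cluster (the two-cluster case above is just this with two entries):

```python
# Hypothetical clusters built from 24h post-install actions. Each cluster has
# an assumed relative value weight plus per-channel SKAdNetwork install counts.
clusters = {
    "made_purchase":  {"weight": 10.0, "installs": {"Facebook": 40,  "Organic": 10}},
    "invited_friend": {"weight": 3.0,  "installs": {"Facebook": 100, "Organic": 60}},
    "no_action":      {"weight": 1.0,  "installs": {"Facebook": 460, "Organic": 330}},
}
cohort_revenue = 5_000.0

# Total value-weighted installs across all clusters.
total_weight = sum(
    c["weight"] * sum(c["installs"].values()) for c in clusters.values()
)

revenue_by_channel = {}
for c in clusters.values():
    for channel, installs in c["installs"].items():
        share = c["weight"] * installs / total_weight
        revenue_by_channel[channel] = revenue_by_channel.get(channel, 0.0) + cohort_revenue * share

print(revenue_by_channel)  # Facebook gets a larger share than install counts alone would give it
```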

Naturally, the more you start to segment the more accurate LTV distribution is likely to be when you compare it to historical deterministic data at the channel or campaign level.

One thing you might find at this point when you validate against historical data is that you’re over-reporting organics. You could apply a consistent percentage adjustment to fix this and move more value to paid channels across the board. This will likely get you closer to 90% accuracy.
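
A minimal sketch of that kind of adjustment (the 20% shift below is purely illustrative, not a recommended figure): move a fixed share of the modelled organic revenue onto paid channels in proportion to their modelled revenue, then re-validate against historical deterministic splits:

```python
def rebalance_organics(revenue_by_channel, organic_key="Organic", shift_pct=0.20):
    """Move a fixed percentage of modelled organic revenue to paid channels,
    proportional to each paid channel's modelled revenue."""
    adjusted = dict(revenue_by_channel)
    moved = adjusted[organic_key] * shift_pct
    adjusted[organic_key] -= moved
    paid_total = sum(v for k, v in adjusted.items() if k != organic_key)
    for channel in adjusted:
        if channel != organic_key:
            adjusted[channel] += moved * adjusted[channel] / paid_total
    return adjusted

print(rebalance_organics({"Facebook": 3000.0, "Apple Search Ads": 1250.0, "Organic": 750.0}))
# Organic drops to 600; Facebook and Apple Search Ads pick up the difference pro rata
```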

The other issue is that SKAdNetwork randomises the install timestamp within a 24-hour window, so the install data could be out by a day and impact the redistribution above when looking at daily figures. The longer the date range used when doing redistribution, the more accurate it will become. If you look at it across 7 days, you’ll likely get to 80%+ accuracy.

Pros:

  • This is fairly easy to set up and doesn’t require much in the way of data science resources.
  • It also gives you some directional data if you just need something good enough to start with or as the basis of building a more sophisticated model.

Cons:

  • It is less accurate than today’s deterministic data and some of the more advanced solutions we’ll go on to look at, which hampers optimisation.
  • You need to look at longer windows of data to increase accuracy, which could slow optimisation.

Best for …

Teams who have limited data science resources and want directional data that they can implement quickly and easily. Also, a useful starting point as a basis for building your own model.

2. Probabilistic redistribution

Probabilistic redistribution builds on linear redistribution, increasing accuracy by using more input signals and Machine Learning (ML) algorithms. Algolift have been fast out of the gate with a probabilistic last-touch attribution solution for the SKAdNetwork era, and we caught up with their GM Paul Bowen to understand a bit more.

How does it work?

Probabilistic redistribution works similarly to linear redistribution, taking SKAdNetwork data, network-reported metrics and internal user-level data. The key differences are:

  1. Deterministic attribution from MMPs: This data is used to deterministically assign installs from users who opt in via the App Tracking Transparency (ATT) framework.
  2. Advanced clustering using algorithmic modelling: This enables the creation of up to 32 clusters.

A key component here is SKAdNetwork’s 6-bit ConversionValue. The folks at Algolift are working with customers to build an optimal use of the ConversionValue as a proxy for LTV and there’s a lot of detail on that methodology here. However, at a top level, Paul shares:

“For example, a game would likely use ConversionValue as a day 1 revenue contribution. You would use 1 bit to define day 0 or day 1 and then the other 5 bits would be used to create 32 buckets of day 1 revenue e.g. $0, $0.01-$2, $2-$5 etc … This means that for every user who installed yesterday you have them in a day 1 revenue cluster. We then use that value to match users from the anonymised user-level behavioural data with SKAdNetwork data with a probability that they came from a specific campaign.”
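
To illustrate the shape of that encoding, here’s a hedged sketch in Python (the bucket boundaries are made up and this is not Algolift’s actual scheme): one bit carries the day-0/day-1 flag and the remaining five bits carry the revenue bucket index:

```python
# Hypothetical lower bounds of day 1 revenue buckets in USD; five bits allow
# up to 32 buckets, so this list could be extended to 32 boundaries.
REVENUE_BUCKETS = [0.0, 0.01, 2.0, 5.0, 10.0, 20.0, 50.0]

def encode_conversion_value(is_day1: bool, revenue: float) -> int:
    """Pack a day-0/day-1 flag and a revenue bucket index into a 6-bit value (0-63)."""
    bucket = 0
    for i, lower_bound in enumerate(REVENUE_BUCKETS):
        if revenue >= lower_bound:
            bucket = i
    return (int(is_day1) << 5) | (bucket & 0b11111)

def decode_conversion_value(value: int):
    """Recover the day flag and the revenue bucket index from a 6-bit value."""
    return bool(value >> 5), value & 0b11111

cv = encode_conversion_value(is_day1=True, revenue=3.49)
print(cv, decode_conversion_value(cv))  # 34 (True, 2) -> day 1, the $2-$5 bucket
```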

In that matching step, opted-in deterministic data from MMPs can be used by assigning those users a 100% probability of belonging to the campaign they’re attributed to. For SKAdNetwork users, a percentage of the revenue each user generates is assigned to a campaign based on the probability of them belonging to that campaign. The output is then a predicted day-X Return on Ad Spend (pROAS) by campaign and channel at an aggregated level. The key thing here is that this is at an aggregate level, because using probabilities of a user belonging to a campaign means the data wouldn’t be accurate at a user level. However, when aggregated at a campaign and channel level, accuracy increases, likely beyond that of linear redistribution.
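
A minimal sketch of that aggregation step (the probabilities, revenue and spend figures are all made up, and the expected revenue here stands in for the model’s day-X prediction): each user’s revenue is split across campaigns in proportion to the modelled probability of belonging to them, with ATT opt-in users fixed at a probability of 1.0, and results only reported at the campaign level:

```python
from collections import defaultdict

# Hypothetical users: modelled P(campaign | user) plus observed revenue.
# Opted-in users (deterministic MMP attribution) carry a probability of 1.0.
users = [
    {"revenue": 12.0, "campaign_probs": {"fb_campaign_a": 1.0}},  # ATT opt-in
    {"revenue": 4.5,  "campaign_probs": {"fb_campaign_a": 0.7, "asa_brand": 0.3}},
    {"revenue": 0.0,  "campaign_probs": {"asa_brand": 0.6, "organic": 0.4}},
    {"revenue": 9.0,  "campaign_probs": {"organic": 0.8, "fb_campaign_a": 0.2}},
]
spend = {"fb_campaign_a": 20.0, "asa_brand": 5.0}  # hypothetical campaign spend

expected_revenue = defaultdict(float)
for user in users:
    for campaign, prob in user["campaign_probs"].items():
        expected_revenue[campaign] += user["revenue"] * prob

# Aggregate-level predicted ROAS per paid campaign.
proas = {campaign: expected_revenue[campaign] / cost for campaign, cost in spend.items()}
print(dict(expected_revenue))  # roughly: fb_campaign_a 16.95, asa_brand 1.35, organic 7.2
print(proas)                   # roughly: fb_campaign_a 0.85, asa_brand 0.27
```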


This raw data output can then feed into your existing BI infrastructure or a tool like Appsumer.


It’s also worth noting that whilst Algolift is grounded in last-touch probabilistic attribution, they also offer top-down incrementality. We’ll go on to speak more about this methodology, but they use it to look at things like the uplift which organic receives from paid user acquisition.

Pros

  • Compared to linear redistribution of revenue this approach will deliver a higher degree of accuracy with more signals and advanced algorithms to probabilistically assign revenue.
  • This approach is anchored in and validated against deterministic data which is what user acquisition experts are familiar with.
  • With an off-the-shelf solution like Algolift, implementation isn’t too resource-intensive, given their expertise with ConversionValue and the fact that the models are already trained.

Cons

  • It is still based on limited SKAdNetwork data from the first 24 hours. Whilst it’s more accurate than linear redistribution due to more inputs and in-depth statistical models, actual long-term LTV distribution is still likely to deviate somewhat from predictions.
  • Setting this up yourself will require some data science and engineering resources with plenty of time available.

Best for…

Teams spending upwards of $5k per day in the gaming or subscription app verticals who want to continue optimising to ROAS at a campaign level. If you’re going to try and build this yourself, you will also likely need some data science and engineering resources to implement it.

3. Top-down incrementality

The big difference here is that it’s not looking at last-touch distribution. As deterministic data disappears, incrementality – which is widely used outside digital advertising – is a completely new way of looking at things. It spreads revenue credit across channels that have a causal impact on revenue. The models are very sophisticated and require significant data science expertise and time to master. MetricWorks is the vendor that has focused on using purely top-down incrementality to deliver LTV by channel in an SKAdNetwork era, and Brian shared some insights on how this works.

How does it work?

Simply put, it takes three major inputs:

  1. Aggregated cost data by channel from tools like Appsumer
  2. App event data 
  3. Real revenue data from product analytics and internal revenue sources

Econometric models are then applied to attribute revenue incrementally by channel and at more granular levels. Let’s just pause there to unpack econometric models quickly. 

Econometric models are a statistical technique used in economics to help describe economic relationships. For example, an economist might want to understand how average wages are affected by economic growth. They could use econometrics to help describe that relationship, showing how much average wages change when the economy grows. And they could then use that to forecast changes in wages, based on how they expect the economy to grow. The same techniques can be used in mobile advertising, for example to show the relationship between spend on video ad networks and changes in revenue via Apple Search Ads or organic.
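
As a toy illustration of the idea (synthetic numbers, and far simpler than a production econometric model), you can regress daily revenue on daily spend per channel to estimate how much revenue moves with each extra unit of spend:

```python
import numpy as np

# Synthetic daily observations: spend on two channels and total revenue.
spend_video = np.array([100, 150, 120, 200, 180, 90, 160], dtype=float)
spend_asa   = np.array([50, 40, 70, 60, 90, 45, 55], dtype=float)
revenue     = np.array([475, 605, 565, 795, 785, 435, 660], dtype=float)

# Design matrix with an intercept term acting as the organic baseline.
X = np.column_stack([np.ones_like(revenue), spend_video, spend_asa])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
baseline, beta_video, beta_asa = coef

print(f"baseline revenue per day: {baseline:.1f}")
print(f"revenue associated with each $ of video spend: {beta_video:.2f}")
print(f"revenue associated with each $ of Apple Search Ads spend: {beta_asa:.2f}")
```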

MetricWorks’ econometric model has learnt across multiple advertisers to improve accuracy. However, unlike last-touch solutions, it’s not possible to validate against historical deterministic data, as that data is all based on last-touch attribution. To validate accuracy for an advertiser, ground truth tests are run via Interrupted Time Series (ITS) experiments. Brian explains ITS experiments in their simplest form as:

You don’t mess with any variables, e.g. bids, budgets, in-app flows or promotions. During a test period of, say, three days you turn off some traffic, e.g. a campaign, a geo or, in an extreme test, an entire network. You then compare what actually happened during the test against the baseline of what would have happened if you hadn’t paused the traffic, which is called the “counterfactual”. Through causal inference you take the difference and that’s your ground truth.

This ground truth is then compared to the models to validate them. The other advantage of ITS experiments is that they enrich the econometric models, improving accuracy. MetricWorks doesn’t require ITS tests to deliver value, given they involve switching off some revenue; however, the tests do validate and enrich MetricWorks’ models to give you more trust in the data.
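
In its simplest form, the ground-truth comparison Brian describes looks something like this sketch (synthetic revenue figures, with the counterfactual taken as the pre-test average rather than a properly modelled forecast):

```python
# Daily revenue before and during a hypothetical three-day pause of one campaign.
pre_test_revenue = [1000, 980, 1030, 1010, 990, 1005, 995]  # campaign running
test_revenue     = [820, 790, 805]                          # campaign paused

# Counterfactual: what we expect revenue would have been without the pause.
# Here it's simply the pre-test daily average; a real model would forecast it.
counterfactual_daily = sum(pre_test_revenue) / len(pre_test_revenue)

incremental_revenue = sum(counterfactual_daily - actual for actual in test_revenue)
print(f"ground-truth incremental revenue over the test: {incremental_revenue:.0f}")
```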

In terms of trust, MetricWorks is also transparent about how the models are working and gives you statistical significance at different levels of granularity, from channel down to creative level. As you would expect, statistical significance falls as you go more granular with sparse datasets.

From a data engineering perspective, everything is ingested via APIs or, for custom sources, delivered to a data warehouse. Once the models have done their work, the data is delivered as dashboards or can be pulled into your data warehouse or tools like Appsumer via an API.

Pros:

  • You are looking at the incremental impact of investment rather than a winner-takes-all approach to attribution. This should increase budget efficiency if the models are accurate.
  • With traditional incrementality measurement such as Facebook’s Conversion Lift tool disappearing with IDFAs, it’s beneficial to retain incrementality measurement via automated ITS experiments in a tool like MetricWorks.
  • With an off-the-shelf solution like MetricWorks you can actually start seeing value quite quickly given their models have already been trained.

Cons:

  • Whilst there are benefits to running automated ITS experiments, there is also an opportunity cost in the short-term loss of revenue, so it’s important to strike the right balance.
  • Like other approaches, at more granular levels accuracy of the model weakens due to data sparsity.
  • Building this in-house would require a very significant data science and engineering team and a lot of time to build and train the models.

Best for…

Gaming brands who aren’t satisfied with last-touch allocation. You would also need either a very significant data science team to build models or budget to invest in an out-of-the-box solution.

Concluding thoughts

There’s been a lot of noise around solutions for revenue attribution in the SKAdNetwork era. Hopefully, we’ve managed to cut through some of that noise and highlight where different approaches are applicable, with a taster of how they work. It’s also worth noting that no solution will deliver the same accuracy as deterministic data given the probabilistic element, so it’s important to get comfortable with that from the get-go when evaluating solutions.

One solution we haven’t spoken about much is Incrmntal. That’s because they aren’t directly solving the specific problem of attributing revenue to channels or campaigns. As the name suggests, they are using incrementality, but their focus is on using ML models and causal inference to make optimisation recommendations based on incrementality findings. These recommendations are delivered via a Trello-like interface. We’ll cover them in more detail in an upcoming post on incrementality.

The reason we care about informing you of these solutions is that they’re all integration partners, and we’re working on ingesting their modelled data, or your own, into our data pipeline. This is so that we can deliver richer findings in Appsumer’s dashboards and reports as part of our new SKAdNetwork reporting and modelling solution, which you can read more about here. Hopefully this has helped, and we look forward to working with all these fine folks in the coming months to deliver rich SKAdNetwork insights.