Deciding the Optimal Conversion Value and Window Setup for SKAdNetwork (SKAN)

We are a few weeks from the first anniversary of the launch of Apple's AppTrackingTransparency (ATT), and Google is now beginning to share its mobile privacy plans for Android. It seems like a good time to take stock of ATT's impact on performance advertising so far.

When it comes to measurement, Apple has made progress on SKAN; however, there are still some challenges that advertisers are struggling to overcome. You may not think there are challenges, but that's likely because your measurement solutions are falling back on fingerprinting, meaning you can ignore SKAN (at least for the moment).

This is like sticking a band-aid over a serious wound, because it's not a case of "if" Apple will clamp down on fingerprinting, but "when." When that happens, if you haven't overcome these challenges, you will be left very exposed. So, what are these challenges, and how do you overcome them?

Key SKAN challenges

Whilst Apple has improved SKAN over the last year, performance advertisers are still grappling with the optimal way to implement it. The challenge breaks down into a few key areas that largely revolve around the Conversion Value element of the SKAN postback.

Conversion Value

To recap, the Conversion Value is a 6-bit code (six 0s or 1s) included in the SKAN postback, giving you up to 64 possible values for encoding post-install conversion activity. It is designed to signal the value of an install by identifying conversion events that happen after it, such as a purchase, ad impression, or trial signup.
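
To make that concrete, here's a minimal Python sketch of one way to pack events into those six bits. The event names and bit assignments are our own illustrative assumptions, not a standard mapping:

```python
# Minimal sketch (assumed event names and bit assignments): one way to
# pack post-install events into the six bits of a Conversion Value.

EVENT_BITS = {
    "trial_signup": 0,
    "tutorial_done": 1,
    "level_10": 2,
    "purchase": 3,
    "invited_friend": 4,
    "att_opt_in": 5,   # useful for deduplication, covered later in this post
}

def encode(events: set) -> int:
    """Pack completed events into a 6-bit Conversion Value (0-63)."""
    value = 0
    for event in events:
        value |= 1 << EVENT_BITS[event]
    return value

def decode(value: int) -> set:
    """Recover the set of completed events from a postback's value."""
    return {e for e, bit in EVENT_BITS.items() if value & (1 << bit)}

assert encode({"trial_signup", "purchase"}) == 9   # 0b001001
assert decode(9) == {"trial_signup", "purchase"}
```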

There are many approaches to Conversion Values, based on an app's monetization model and what your mobile measurement partner (MMP) offers. You can see a summary of different approaches that we put together prior to ATT launching here. However, our thinking on this topic has moved forward a bit since then, and we want to update it based on how we've seen the most successful advertisers operate.

Generally, we've seen that the biggest challenge has been advertisers experimenting with their Conversion Value setup too frequently. This creates an inconsistent dataset, making it hard to compare performance apples to apples over time.

Additionally, if your app doesn't monetize quickly after an install, or lacks the stickiness to keep users coming back every day after the initial install, then the Conversion Value on its own doesn't give you much indication of the value of an install. So the ultimate question is: what is the optimal Conversion Value setup to move ahead with?

Conversion Window

In addition, there's the Conversion Window. This is the period you, as an advertiser, decide to leave between the install and the SKAN postback being sent to you. You only get one postback per install, and the minimum window length is 24 hours. Each update to the Conversion Value resets a 24-hour timer; once the value goes 24 hours without changing, the postback is automatically returned.

Initially, Facebook dictated that advertisers on its platform set this window to 24 hours. Since then, however, Facebook has stepped back from influencing Conversion Window setup, leaving advertisers free to test longer Conversion Windows.

The newness of SKAN, along with this greater freedom for advertisers to test their setup, has created a challenge. With longer Conversion Windows, postbacks for installs from the same day arrive at unpredictable times, making it hard to cohort them back to a specific install day of spend. So the ultimate question is: how long should I set the Conversion Window for my app?
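
To illustrate the cohorting problem, here's a simplified sketch of the bounds you can put on an install date from a postback's arrival time. It assumes a fixed window plus Apple's random delay of up to roughly 24 hours, and it glosses over the timer-reset nuances:

```python
# Illustrative, simplified sketch of why longer windows blur cohorts.
# After the Conversion Value timer expires, Apple adds a random delay of
# up to roughly 24 hours before sending the postback, so a postback's
# arrival time only bounds the install date rather than pinning it down.
from datetime import datetime, timedelta

def possible_install_dates(received_at: datetime, window_days: int):
    """Return the (earliest, latest) install dates a postback could imply."""
    latest = received_at - timedelta(days=window_days)    # zero random delay
    earliest = latest - timedelta(hours=24)               # maximum random delay
    return earliest.date(), latest.date()

# With a 3-day window, this postback could belong to either of two
# install days, so daily spend can't be cleanly cohorted to installs.
print(possible_install_dates(datetime(2022, 3, 10, 9, 0), window_days=3))
```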

Deduplication

Another challenge is duplicate installs across different attribution sources. On iOS 14.5+, after an install happens, SKAdNetwork immediately starts tracking and attributing it. Simultaneously, an ATT prompt will be shown during the onboarding flow, and if the user opts in to share their IDFA, the install will also be attributed separately by the MMP.

In your reporting, this means you now have duplicate installs across your MMP data and your SKAN postbacks, with no way of identifying which installs appear in both. So the ultimate question is: how can I deduplicate installs across my MMP and SKAN attribution data? This matters here because part of the answer may lie in your Conversion Value setup.

What is the Optimal Conversion Value and Conversion Window Setup?

As always, tough questions tend to come with an answer of "it depends." What we want to do here, however, is break down what those dependencies are and how they broadly influence the answer.

To start, an important point to emphasize about Conversion Value and Conversion Window setup is not to change it too often. That ensures you have consistent datasets to compare over time.

When we initially looked at the Conversion Value setup in MMPs in this post, we reviewed some of the different Conversion Value models or schemas and how easy they were to set up in different MMPs. A Conversion Value model or schema is essentially how you use the six bits or digits to communicate the value of an install in your SKAN postback. 

Since then, we have largely seen that the base model or schema used falls into one of the following three, which may each be called something different in your MMP of choice:

  • Revenue model: Used to measure the revenue generated by an install. Each increase in the Conversion Value represents a revenue increment, e.g. $1 or $10, giving 64 combinations including the install itself.
  • Conversion event model: Similar, except each increase in the Conversion Value represents a conversion event completed by the user, which isn't necessarily a revenue event, e.g. completed level X, invited a friend, registered, etc.
  • Event flag model: Initially the most popular model; it simply tracks six conversion events and whether or not each happened.
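
As a concrete illustration of the first (revenue) model, here's a minimal sketch that maps cumulative revenue onto the 64 available values. The $1 increment is an assumption; you'd pick increments to match your price points:

```python
# Minimal sketch of the revenue model. The $1 increment is an assumption;
# choose increments that match your app's price points.
REVENUE_INCREMENT = 1.0  # dollars per Conversion Value step

def revenue_to_conversion_value(revenue: float) -> int:
    """0 = install with no revenue; 63 caps the measurable range."""
    return min(int(revenue // REVENUE_INCREMENT), 63)

assert revenue_to_conversion_value(0.0) == 0
assert revenue_to_conversion_value(9.99) == 9
assert revenue_to_conversion_value(250.0) == 63  # everything above the cap looks identical
```

Note the trade-off this makes explicit: a small increment gives fine resolution at the low end but caps the measurable revenue range at 63 increments.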

So how do you decide the optimal Conversion Value model and Conversion Window length?

When it comes to this decision, there are three key factors to consider using product analytics:

  • Speed of monetization: Ultimately, ask yourself how many days it takes to monetize the majority of users. This will dictate the type of Conversion Value setup that will be optimal and give you a sense of how long your Conversion Window should be. Mainly, you want to identify whether the majority of monetization happens in the first seven days, and if so, by which day.
  • Product stickiness: What percentage of users use the app every day in the first seven days after install? Your Conversion Value will post back once it goes 24 hours without changing. If users are not coming back on day two, or every day in the first seven days, you likely want to set the Conversion Window to one day, or to however many consecutive days the majority of users return.
  • Linearity of onboarding funnel: How predictable or linear is your onboarding funnel? In the first few days after install, do users go through a consistent set of steps that are predictable and give a good indication of their monetization likelihood? (The sketch after this list shows how you might compute the first two factors from raw event data.)
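
By way of illustration, here's a hypothetical sketch of how you might answer the first two questions from raw product-analytics data. The DataFrame columns (user_id, days_since_install, revenue) are assumed names, not any particular analytics tool's schema:

```python
# Hypothetical sketch of answering the first two questions, assuming a
# DataFrame of user events with columns user_id, days_since_install and
# revenue (all names are illustrative, not a specific analytics schema).
import pandas as pd

def monetization_speed(events: pd.DataFrame) -> pd.Series:
    """Cumulative share of day 0-7 revenue captured by each day since install."""
    week_one = events[events["days_since_install"] <= 7]
    daily_revenue = week_one.groupby("days_since_install")["revenue"].sum()
    return daily_revenue.cumsum() / daily_revenue.sum()

def product_stickiness(events: pd.DataFrame) -> pd.Series:
    """Share of all installed users who are active on each of days 1-7."""
    total_users = events["user_id"].nunique()
    week_one = events[events["days_since_install"].between(1, 7)]
    return week_one.groupby("days_since_install")["user_id"].nunique() / total_users
```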

Once you have analyzed your product analytics data to understand the answers to these questions, you can essentially follow this decision tree to define the optimal starting Conversion Value and Window setup for your app.

You could also create a hybrid custom model where you use a couple of bits/digits to track key conversion events and a couple of bits/digits to track revenue generated (one possible layout is sketched below). Most advertisers use these base models/schemas as a starting point, though.
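
For illustration, here's one hypothetical way such a hybrid could be laid out, with three bits for events and three for a revenue bucket. All event names, bit assignments, and bucket sizes are assumptions:

```python
# Hypothetical hybrid schema: low three bits flag key conversion events,
# high three bits bucket revenue into eight $1 ranges. All event names,
# bit assignments and bucket sizes are assumptions for illustration.
def hybrid_conversion_value(events: set, revenue: float) -> int:
    event_bits = {"registered": 0, "tutorial_done": 1, "trial_signup": 2}
    value = 0
    for event in events:
        value |= 1 << event_bits[event]   # bits 0-2: conversion events
    value |= min(int(revenue), 7) << 3    # bits 3-5: revenue bucket ($0-$7+)
    return value

assert hybrid_conversion_value({"registered"}, 5.50) == 41   # 0b101001
```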

The key to getting more valuable data, from what we've seen, is aligning product and user acquisition (UA) teams. The more your product and UA teams can do to accelerate monetization, increase product stickiness, and make your onboarding flow more linear, the better the value data you will get. The challenge is balancing this against user experience and any negative impact on overall monetization.

Also, in most scenarios, the further right you go on the decision tree, the more you're capturing data to be used for modeling revenue, which brings us to our next section.

Modeling and a Mindset Shift

Unless you have a rapid speed of monetization and a very sticky product in the days post-install, most of your revenue/lifetime value (LTV) data will need to be modeled. Essentially, this involves taking early monetization signals/conversion events, plus revenue data from internal systems, to predict revenue at the campaign level.

There are really three approaches to modeling, which we cover in more detail in this post:

  • Linear redistribution: In this simplistic model you start with two data sources: 1) actual revenue cohorted to install day via internal databases and product analytics, and 2) SKAdNetwork data with channel attribution and post-install data, cohorted to the day you received it or an assumed install day. You then assign users into clusters based on post-install conversion events and linearly assign revenue to each channel based on the number of installs in each cluster (sketched after this list).
  • Probabilistic redistribution: This works similarly to linear redistribution, taking SKAdNetwork data, network-reported metrics, and internal user-level data. The key differences are adding deterministic attribution from MMPs, to assign installs that opted in via the ATT framework, and advanced clustering using algorithmic modeling, which enables the creation of many more clusters.
  • Top-down incrementality: This is a more advanced approach that takes three major inputs: 1) aggregated cost data by channel from tools like Appsumer, 2) app event data, and 3) real revenue data from product analytics and internal revenue sources. Econometric models are then applied to attribute revenue incrementally by channel and at more granular levels.
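
To make the first approach concrete, here's a simplified sketch of linear redistribution. The input shapes and names are our own assumptions, not a specific MMP's output format:

```python
# Simplified sketch of linear redistribution under assumed inputs:
# cluster_revenue maps each conversion-event cluster to its actual revenue
# (from internal data), and skan_installs maps (channel, cluster) pairs to
# install counts from SKAN postbacks. Names and shapes are illustrative.
def linear_redistribution(cluster_revenue: dict, skan_installs: dict) -> dict:
    """Split each cluster's actual revenue across channels by install share."""
    revenue_by_channel = {}
    for cluster, revenue in cluster_revenue.items():
        installs = {ch: n for (ch, cl), n in skan_installs.items() if cl == cluster}
        if not installs:
            continue  # no SKAN installs observed for this cluster
        total = sum(installs.values())
        for channel, count in installs.items():
            share = revenue * count / total
            revenue_by_channel[channel] = revenue_by_channel.get(channel, 0.0) + share
    return revenue_by_channel

# Example: a "purchasers" cluster earned $1,000; channel A drove 30 of its
# 40 SKAN installs, so it is assigned $750 of that cluster's revenue.
print(linear_redistribution(
    {"purchasers": 1000.0},
    {("channel_a", "purchasers"): 30, ("channel_b", "purchasers"): 10},
))
```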

However, you'll need to adjust to the fact that you can't deterministically measure everything to the nth degree. As user acquisition experts, we have a mindset of seeking the most accurate degree of measurement, with user identifiers providing deterministic data, and of not trusting anything else.

This world is gone. We should stop trying to cling to it on iOS.

In this new world, we need to get comfortable with modeling and extrapolating data to understand campaign performance. The alternative is not having any data to justify iOS spend and, ultimately, the existence of team members.

Your focus now needs to be on gathering the richest data possible through SKAN, and building the models and infrastructure to get data as accurate as possible. It won’t be perfect, but it’s better than nothing when building business cases and optimizing campaigns.

Using 1 Bit of your Conversion Value for Deduplication

To overcome the duplicate install problem, an increasingly popular approach, championed by AppsFlyer, is to use one of the six bits in the Conversion Value to identify whether or not an install (i.e. the person who just installed the app from an ad) has opted in via the ATT prompt, e.g. "0" = did not opt in, "1" = opted in.

Then, on the back end, you can remove those who opted in via ATT from your SKAN data, overcoming the issue of duplicate installs when unifying the two datasets.
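
A minimal sketch of how this could look in practice, assuming bit 5 carries the opt-in flag (any spare bit in your schema works):

```python
# Minimal sketch of the one-bit deduplication approach. Which bit to use
# is an assumption (bit 5 here); any bit your schema can spare works.
ATT_OPT_IN_BIT = 5

def with_att_flag(conversion_value: int, opted_in: bool) -> int:
    """Set the ATT opt-in bit on the Conversion Value before posting back."""
    return conversion_value | (1 << ATT_OPT_IN_BIT) if opted_in else conversion_value

def deduplicated_skan_installs(postbacks: list) -> list:
    """Keep only postbacks for installs the MMP has not already attributed."""
    return [p for p in postbacks
            if not (p["conversion_value"] >> ATT_OPT_IN_BIT) & 1]
```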

Concluding thoughts

The impact of ATT is now clearer. While there will be smaller adjustments in the coming months and years (particularly around fingerprinting), it’s now easier to define and plan your new measurement approach for the privacy era in mobile performance advertising.