Apple’s App Tracking Transparency (ATT) implementation deadline is approaching like a high-speed train, and mobile user acquisition experts are focusing on the finer details of their SKAdNetwork implementations. A key part of SKAdNetwork is ConversionValue, and in this post we’re going to explore best practices for it, covering:
- Recap of ConversionValue and limitations
- Industry ConversionValue best practices worth exploring
- How key MMPs (Adjust, Appsflyer, Branch and Kochava) are handling ConversionValue logic
- Broad recommendations based on all of this information
Let’s jump in with a quick bit of background on ConversionValue.
A recap of ConversionValue in SKAdNetwork and limitations
ConversionValue has two core uses:
- Give an early indication of in-app revenue, engagement, or retention to judge early campaign performance
- Generate a user value that ad networks use to power campaign optimisation
It is a 6-bit value sent from Apple to the ad network that sourced the user for the app. To unpack what a “6-bit value” is: it’s six binary digits, each either 0 or 1, ranging from the lowest value 000000 (0) to the highest value 111111 (63). This means there are 64 possible combinations of 0s and 1s. You can assign meaning to each of those combinations based on in-app events to begin understanding what the value of an install is.
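To make the notation concrete, here is a small sketch (illustrative helpers, not part of any SDK) converting between the bit-string notation used above and the integer 0-63 that is actually transmitted:

```python
# Illustrative helpers: a 6-bit ConversionValue is just an integer 0-63.

def bits_to_value(bits: str) -> int:
    """'000000' -> 0, '111111' -> 63."""
    return int(bits, 2)

def value_to_bits(value: int) -> str:
    """0 -> '000000', 63 -> '111111'."""
    if not 0 <= value <= 63:
        raise ValueError("ConversionValue must be 0-63")
    return format(value, "06b")  # zero-padded, 6 binary digits

print(bits_to_value("111111"))  # 63
print(value_to_bits(42))        # 101010
```

Every scheme discussed in this post is ultimately just a convention for what each of these 64 integers means.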
There are a few important things to understand here about the mechanics:
- When a user installs your app, ConversionValue is set to 000000 and a 24-hour timer starts.
- When this timer expires, the data is sent back to the ad network and ultimately to you. No data is passed back before the timer expires or after the postback fires; it is a one-time postback.
- However, every time the ConversionValue increases, a new 24-hour timer starts. There are three ways this timer is stopped and the value then sent back to the ad network:
- The ConversionValue does not change for 24 hours after the timer is reset
- The ConversionValue is lower than the previously recorded one
- A predefined measurement window that you’ve set expires
- After the timer expires the data is not immediately sent back to the ad network. Apple delays it by a random period of 0-24 hours to obscure when the install occurred and so protect user privacy.
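The rules above can be sketched as a toy model (our own illustration, not Apple’s API): the value starts at 0, any increase restarts the 24-hour timer, and a lower value or 24 hours of inactivity locks the window, after which Apple adds a random 0-24 hour delay before the postback.

```python
import random

class ConversionWindow:
    """Toy model of the timer rules described above (not Apple's API)."""

    def __init__(self):
        self.value = 0       # starts at 000000
        self.locked = False  # once locked, the postback is scheduled

    def update(self, new_value: int) -> None:
        if self.locked:
            return  # no further updates once the postback is scheduled
        if new_value > self.value:
            self.value = new_value  # higher value: timer restarts for 24h
        else:
            self.locked = True      # lower value: timer stops, value is sent

    def expire(self) -> float:
        """24 hours pass with no change: lock and add Apple's random delay."""
        self.locked = True
        return random.uniform(0, 24)  # extra 0-24h before the postback lands

w = ConversionWindow()
w.update(5)
w.update(9)
w.update(3)               # lower than 9, so the window locks at value 9
print(w.value, w.locked)  # 9 True
```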
Having understood the mechanics of ConversionValue, you can start to understand its limitations.
A single, delayed postback
With only a single postback, you have to balance waiting for richer data against getting data quickly, both for ad networks to optimise campaigns and for you to analyse. Even if you wait only the minimum 24 hours, you still have to wait for Apple’s random 0-24 hour timer, and if you’re using an MMP to manage ConversionValue logic they will likely need time to process the data. So in reality you’re going to wait 48+ hours post-install before you can start analysing data.
Day of install
With Apple’s random timer you will struggle to cohort behaviour back to an exact install day: you can’t tell whether a user installed the app on the day you received the postback or the day before. This makes it hard to cohort installs back to a day of ad spend, which is a real challenge given how important cohorts are for understanding the impact of changes in mobile advertising.
Apple’s privacy threshold
Another big limitation of ConversionValue, which hasn’t been spoken about much, is the privacy threshold Apple says it will implement. There isn’t much information about this threshold, but most suspect that the volume of installs needs to reach a certain level for each of your 64 conversion values before Apple will pass data back on them. This is a step by Apple to protect user privacy; however, Apple hasn’t said what the threshold is, which makes its impact hard to estimate.
This is similar to how Google doesn’t expose all search query data in search query reports: a query needs to reach a certain volume of searches before it can be shown to advertisers, to protect user privacy. Seer Interactive analysed how much search query data was lost when this was implemented and found it was 28%. A 20-30% data loss therefore seems a reasonable assumption for the impact here, although Apple may hold itself to a higher privacy bar than Google.
The key point here is that if you have a low volume of installs and a high number of conversion values, you could lose significantly more than 20-30% of data. Equally, if you have a high volume of installs and low number of conversion values your data loss could be lower than 20%.
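To make that trade-off concrete, here is a deliberately simplified back-of-envelope sketch. The threshold value, the even spread of installs across conversion values, and the all-or-nothing loss are all assumptions for illustration; Apple has published none of these numbers.

```python
# Hypothetical illustration only: Apple's real threshold is unknown, and
# installs will not spread evenly across conversion values in practice.

def estimated_data_loss(daily_installs: int, num_values: int,
                        assumed_threshold: int) -> float:
    """Fraction of data lost if each value must clear the threshold,
    assuming installs are spread evenly across all values in use."""
    installs_per_value = daily_installs / num_values
    return 0.0 if installs_per_value >= assumed_threshold else 1.0

# 500 installs/day over 64 values (~7.8 per value) misses an assumed
# threshold of 10 in every bucket, so everything is suppressed:
print(estimated_data_loss(500, 64, 10))  # 1.0
# The same volume over 6 values (~83 per value) clears it comfortably:
print(estimated_data_loss(500, 6, 10))   # 0.0
```

The simplification exaggerates the cliff edge, but the direction holds: fewer conversion values per install volume means less suppressed data.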
Facebook’s 24-hour limit
Facebook’s algorithms will assume that only 24 hours of ConversionValue data has been captured when optimising campaigns. Google and other ad networks also appear keen to enforce similar limits to accelerate campaign optimisation and, ultimately, spend. Capturing more than 24 hours of data could therefore have a negative impact on campaign performance.
Having understood ConversionValue limitations, let’s take a look at what best practices have already been put out there in the industry.
ConversionValue best practices shared in the industry
There have been a number of ideas coming out from industry thought leaders. Two really interesting approaches we’ve enjoyed are from Paul at Algolift and David at DataSeat.
The key consideration is the tough balance between extracting as much value as possible from the ConversionValue logic and not letting too much time pass, so that optimisation can happen quickly and accurately. For iOS apps this may mean optimising onboarding content to get a better signal of user value early in the lifecycle, e.g. day 1 or day 3. It’s important that monetisation and acquisition teams work hand-in-hand to design an onboarding experience that surfaces user value through the events executed without adding friction to monetisation.
Cohorted pLTV Bins
One of the first sets of best practices was put out there by our friends at Algolift. Their suggestion in this article is to use the first 2 bits to define days elapsed since install and to keep the timer alive for three days. This would be:
- Day 1 = 01
- Day 2 = 10
- Day 3 = 11
The remaining 4 bits would then be used to put the user into one of 16 predicted Lifetime Value (pLTV) buckets.
So, the returned 6-bit ConversionValue would look something like the below:
You could scale this to seven days of data by using the first 3 bits to send back days elapsed since install, leaving the remaining 3 bits for up to eight pLTV buckets.
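Packing and unpacking such a value could be sketched as below. The bit ordering (days in the high 2 bits, pLTV bucket in the low 4 bits) is our assumption for illustration, not Algolift’s specification.

```python
# Sketch of an Algolift-style layout: 2 bits for days since install (1-3),
# 4 bits for a pLTV bucket (0-15). Bit positions are our assumption.

def encode(days_since_install: int, pltv_bucket: int) -> int:
    if not 1 <= days_since_install <= 3:
        raise ValueError("days must be 1-3")
    if not 0 <= pltv_bucket <= 15:
        raise ValueError("bucket must be 0-15")
    return (days_since_install << 4) | pltv_bucket

def decode(value: int) -> tuple:
    return value >> 4, value & 0b1111  # (days, pltv_bucket)

v = encode(2, 5)         # day 2, pLTV bucket 5
print(format(v, "06b"))  # 100101
print(decode(v))         # (2, 5)
```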
The only challenge here is that it assumes you’re capturing more than 24 hours of data, which Facebook and other ad networks assume you’re not. If you capture three days of data, campaign optimisation on channels like Facebook could be sub-optimal.
Install day pLTV Bins
The other challenge that can be overcome using some of the bits is identifying an install day to cohort data back to. David Philippson, the CEO of DataSeat, shared an approach to solve this on Mobile Dev Memo. It uses the last 3 bits to define the day of install and the first 3 bits to define eight pLTV buckets:
This approach has two advantages worth highlighting:
- Accurate cohort data: Not being able to assign an install day is painful for understanding the impact of changes via cohort reports. This is overcome using this technique.
- Faster data: With this approach you limit the conversion window timer to 24 hours post-install, which means ad networks can optimise faster and you can get data to analyse quicker.
The big challenge here is that Apple is deliberately applying a random timer when returning data precisely to anonymise install time and day. This leaves a big open question: will Apple allow ConversionValue to be used in this way to record the day of install?
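Mechanically, the DataSeat-style split could be sketched as follows. The exact bit assignment (pLTV bucket high, day-of-install index low) is our assumption for illustration:

```python
# Sketch of a DataSeat-style layout: top 3 bits = pLTV bucket (0-7),
# bottom 3 bits = day-of-install index (e.g. day of week). Assumed packing.

def encode(pltv_bucket: int, install_day: int) -> int:
    if not (0 <= pltv_bucket <= 7 and 0 <= install_day <= 7):
        raise ValueError("both fields must fit in 3 bits (0-7)")
    return (pltv_bucket << 3) | install_day

v = encode(6, 2)          # bucket 6, installed on day index 2
print(format(v, "06b"))   # 110010
print(v >> 3, v & 0b111)  # 6 2
```

When the postback arrives, the low 3 bits identify the spend day to cohort against, regardless of Apple’s random delay.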
Ultimately with best practices, no matter which approach you use, the challenge is identifying how to use early events to group users into 6-64 buckets of pLTV. This will require strong data science analysis of historical event data from the first 24 hours and how it correlates with longer-term LTV. It’s also worth noting that if you’re working with smaller install volumes, you’re best working at the 6 rather than the 64 end of this range to ensure you don’t fall foul of Apple’s mysterious privacy threshold.
Ideal vs Reality – How the MMPs are going to handle ConversionValue Logic
Whilst these best practice recommendations are strong, 90% of the customers we spoke to are planning to use their Mobile Measurement Partner (MMP) to handle their ConversionValue logic. This introduces further constraints based on your MMP’s flexibility, and it means a lot of these approaches fall short when it comes to actual implementation, creating more confusion and frustration in the industry.
We’ve looked at how the major MMPs will handle ConversionValue logic to understand these limitations. The MMPs we have looked at are Adjust, Appsflyer, Branch and Kochava as these are the MMPs most used by our customers. We’ve taken information from publicly available resources and live implementation with customers.
Adjust
Adjust’s out-the-box approach is fairly straightforward, and you can read the details here. Using Adjust, you can select just six conversion value events, despite the 64 combinations you have to play with. These are then translated into what the event is and reported in Adjust. This doesn’t count the number of times an event occurred or the revenue associated with it; it simply tells you whether it happened or not. You can either select existing events or add new events using this guide.
Whilst we don’t know the exact setup, we would assume each of the 6 bits represents an event, with 0 = event did not occur and 1 = event did occur, e.g.
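As a sketch of that assumed setup, with six hypothetical event names (these are ours, not Adjust’s), the bitmask could be built like this:

```python
# Assumed reading of the setup: each of the 6 bits flags whether one of six
# chosen events occurred at least once. Event names are hypothetical.

EVENTS = ["tutorial_done", "level_5", "ad_watched",
          "purchase", "subscribe", "invite_sent"]  # bit 5 down to bit 0

def to_conversion_value(completed: set) -> int:
    value = 0
    for i, event in enumerate(EVENTS):
        if event in completed:
            value |= 1 << (5 - i)  # first listed event = highest bit
    return value

v = to_conversion_value({"tutorial_done", "purchase"})
print(format(v, "06b"))  # 100100
```

Note the bitmask records occurrence only; a user who purchases ten times looks identical to one who purchases once.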
You also need to set up a measurement window, which is the amount of time you want to allow to elapse before the conversion value data is sent back to the ad network. The minimum is one hour.
The other alternative is to manage the SKAdNetwork setup yourself. The benefit is greater flexibility with the use of all 64 bits to track events at a more granular level. If you choose to manage your SKAdNetwork settings yourself, it’s important to note the following points.
- Your app developers need to call the SKAdNetwork registerAppForAdNetworkAttribution() method at app open
- Developers should define what each conversion value means within the app
- Adjust only reports the conversion value as it is received from the ad network. This will be a value between 0 and 63, as found in the network payload.
Appsflyer
Within Appsflyer there are three types of “measurement modes” and you can only use one. You can read more about the details of SKAdNetwork and ConversionValue setup in Appsflyer here. You can also set a measurement window countdown timer; the default is 24 hours. It’s also worth noting that you can set up your own ConversionValue logic and upload the schema in CSV format to Appsflyer; you can read more about how to do that here. To focus on the out-the-box measurement modes, they are:
- Revenue (Default setting) – Total revenue generated by the user
- Conversion – Records the unique in-app events (1-6) a user performs. This is essentially the same as the Adjust setup.
- Engagement – The number of times (0-63) a user performs a specific (single) in-app event (e.g. watching an ad, opening the app)
Let’s take a look at each measurement mode in a little more detail.
Revenue (Default setting)
In this scenario each of the 64 combinations of the 6 bits is a revenue value. You can set each combination, or unit, to be worth $0.01, $1.00 or $10.00. So for each setting, the examples would look like this for the first three conversion values:
This can also handle multiple currencies: foreign currency amounts are translated to USD, and fractions of a conversion value unit are rounded up to complete units. For example, EUR 10 translated at the current exchange rate is USD 11.25. If the conversion value unit rate selected is $1 = 1 unit, that’s 11.25 units, which is rounded up to 12 conversion value units.
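The conversion arithmetic above can be sketched in a few lines; the 1.125 USD/EUR rate is back-derived from the example, and real rates vary:

```python
import math

# Sketch of the rounding described above: convert to USD at some exchange
# rate, divide by the chosen unit size, and round up to whole units.

def to_units(amount: float, usd_rate: float, unit_usd: float) -> int:
    usd = amount * usd_rate
    return math.ceil(usd / unit_usd)  # fractions round up to complete units

# EUR 10 at an assumed 1.125 USD/EUR = USD 11.25; with $1 units that is 12:
print(to_units(10, 1.125, 1.00))  # 12
```

Note the rounding always goes up, so reported revenue will slightly overstate actual revenue at small unit sizes.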
Everyone wants to measure revenue, and the default measurement window on Appsflyer is set to 24 hours. The challenge here is that if D1 revenue doesn’t provide an indication of long-term LTV then this solution likely isn’t best for you. However, if there is a strong correlation between D1 revenue and longer-term LTV you could use this measurement mode to then forecast longer-term LTV.
If early revenue doesn’t have a strong correlation with longer-term LTV, then one of the next two options might be better for you.
Conversion
This solution is much the same as Adjust’s setup: each of the 6 bits represents an in-app event, and the bit is 0 if the user didn’t perform it and 1 if they did. As with Adjust, this won’t count how many times the action was taken; it just tells you whether the user completed it at least once.
As with Adjust, the key here is identifying which 6 events have a strong correlation with longer-term LTV buckets.
Engagement
For many apps, the level of engagement in the first 24 hours will be the best indicator of longer-term LTV. If that’s the case, this solution might be best. You pick a single event (e.g. watched an ad, opened the app) and use the 64 combinations to count how many times a user executes it, so 000001 would mean they watched one ad and 111111 that they watched 63 ads.
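A minimal sketch of this counting approach, capping the count at 63 so it fits in 6 bits:

```python
# Sketch of an Engagement-style mode: count one chosen event, clamped
# to the 0-63 range a 6-bit value can carry.

def engagement_value(event_count: int) -> int:
    return min(max(event_count, 0), 63)

print(format(engagement_value(1), "06b"))   # 000001  (watched one ad)
print(format(engagement_value(90), "06b"))  # 111111  (capped at 63)
```

The cap means heavy users above 63 events all look identical, which is worth remembering for ad-funded apps with very engaged audiences.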
Branch
The structure of how the 6 bits are used in Branch is quite straightforward, and you can read more detail here. It’s worth noting that the default measurement window is currently set to 72 hours; however, Branch is going to change this to 24 hours in the next SDK update based on requests from Facebook and Google.
Within Branch each of the 64 combinations is an event. The idea is that you set these events in a sequential format based on their value to you in terms of long-term LTV. With 000000 being an install then 000001 being the first action a user is likely to take and 111111 being the most valuable event in terms of long-term monetisation.
The challenge here is that only the highest-value action a user takes will be returned to you; you won’t know what other actions they took on the path to it. Branch has produced some useful advice in this blog to help customers with different goals define the most effective execution. It’s also worth noting that you don’t have to use all 64 values, particularly given you won’t get any data back for a conversion value whose user volume doesn’t meet Apple’s mysterious privacy threshold. The more conversion values in play, the higher the chance of limited data being returned, particularly if you don’t have huge install volumes.
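A sketch of this highest-event idea, with hypothetical event names and ranks (the real mapping is whatever you configure in Branch):

```python
# Sketch of a "highest event" scheme: each event gets a rank from least to
# most valuable, and only the highest rank reached is reported.
# Event names and rank values here are hypothetical.

EVENT_RANKS = {"install": 0, "signup": 1, "trial_start": 20, "subscribe": 63}

def conversion_value(events_fired: list) -> int:
    return max((EVENT_RANKS[e] for e in events_fired), default=0)

# A user who signed up and started a trial reports only the trial:
print(conversion_value(["install", "signup", "trial_start"]))  # 20
```

This is why the ordering matters so much: every event below the highest one is invisible in the postback.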
Kochava
The Kochava model is slightly different from the other MMP defaults, as it is the only one to introduce an element of time into the use of the 6 bits. You can read the details here. The first 3 bits can be used to track time elapsed since install, up to seven days:
- Day 0 = 000
- Day 1 = 001
- Day 2 = 010
- Day 3 = 011
- Day 4 = 100
- Day 5 = 101
- Day 6 = 110
- Day 7 = 111
This will keep the timer active even if the user doesn’t execute any actions on a given day in the first seven days after installing the app.
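Assuming the 3 time bits sit in the high positions (our assumption; Kochava’s actual packing may differ), the split could look like:

```python
# Sketch of a Kochava-style split: 3 bits for days elapsed since install
# (0-7), 3 bits for whatever the chosen conversion model encodes.
# The packing order is our assumption for illustration.

def encode(days_elapsed: int, model_value: int) -> int:
    if not (0 <= days_elapsed <= 7 and 0 <= model_value <= 7):
        raise ValueError("each field must fit in 3 bits (0-7)")
    return (days_elapsed << 3) | model_value

v = encode(3, 5)         # day 3, conversion-model payload 5
print(format(v, "06b"))  # 011101
```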
There are also four different “conversion models” you can choose from, similar to Appsflyer’s three “measurement modes”.
It’s worth noting here that you can choose to track only one day of data, meaning only 1 bit is used to track time elapsed since install and the other 5 bits are used to track events.
The differences between Appsflyer’s “measurement modes” and Kochava’s “conversion models” are:
- For the “Revenue” model in Kochava you can define your own revenue increment, e.g. $1, $5 etc., for each conversion value, whereas in Appsflyer you are limited to $0.01, $1.00 or $10.00. What’s unclear with Kochava is how its revenue model will handle multi-currency exchange rates.
- Whilst Appsflyer has a “Conversion” setup that works like Adjust’s, where you can track six events and see whether they happened or not, Kochava offers two different options to track conversion events. The “Highest Event Completed” model works similarly to Branch’s and returns only the highest-value event completed. The second option is “User Journey”, which allows you to combine three events in up to 31 user journeys depending on how long your measurement window is. How this works is a little unclear, but we’d assume each user journey is given a value, with 001 being the lowest-value journey and 111 the highest, and only the highest-value journey completed being returned.
To summarise on MMPs, below we’ve broken down some of the key functionality around ConversionValue and whether each MMP will offer it.
Now that we’ve explored limitations, existing industry best practices and MMP setup it’s worth exploring some very broad recommendations. Before doing that, it’s worth highlighting a couple of unknowns that are frustrating and have caused quite a bit of diversity in how MMPs are handling this. Those unknowns are:
- What is the privacy threshold going to be from Apple? If you knew this it would be easier to calculate based on install volumes how many conversion values you should really be using to ensure you get maximum data returned by SKAdNetwork data.
- Will Apple allow you to use the first 3 bits to track the day of install? It seems unlikely, but if they did it would be useful for reporting cohorts.
However, let’s make these assumptions:
- We won’t know the threshold from Apple, but if you have low install volumes there’s no point using all 64 available values. It’s best to stick to 6-8 if you have low install volumes. For most, starting with 6 and scaling up with testing will likely be the best approach.
- Apple won’t allow you to use 3-bits to track day of install given the effort they’ve gone through to obfuscate this. If they allow it, then it’s definitely worthwhile.
With that in mind let’s look at some broad recommendations. As always, broad recommendations are hard because implementation will vary significantly based on your app monetisation model and MMP of choice. However, here’s a few directional recommendations to get you started:
- 24-hour measurement window: It’s likely best to set the measurement window to 24 hours, particularly if Facebook is a major channel for you. This should ensure ad network algorithms are optimising your campaigns effectively. It also enables faster analysis and frees up all bits to be used for tracking user actions or revenue.
- If D1 revenue has a strong correlation with long-term LTV: Likely a lot less than 10% of apps fall into this category. However, if you are in the lucky few who do then measuring revenue in the first 24 hours will be the best approach. In Appsflyer and Kochava you can use the “Revenue” model to do this easily. In Adjust and Branch it will likely require either a custom setup or new events to create purchase buckets.
- You rely heavily on ad revenue: If your app is heavily ad-funded and users watch ads in the first 24 hours, then measuring how many ads they watch in that window could be a good approach. In Appsflyer and Kochava you can use the “Engagement” model to do this. In Adjust and Branch it would again require a custom setup or setting up new events.
- You have a linear user journey on D1: If you have a very linear user journey on D1 (e.g. subscription apps) then the “Highest Event” model might be best. You can understand the highest event executed in the onboarding funnel. This is Branch’s approach and in Kochava you could either use the “Highest Event” or “User Journey” model. For Adjust and Appsflyer (“Conversion” model) you could track 6 events in the onboarding funnel to understand which have been executed.
- Everyone else: From what we’ve seen with customer implementations, most apps fall into this “everyone else” bucket. The most flexible approach is to identify which six day-1 events have a strong correlation with longer-term LTV and start by tracking whether each behaviour occurred. Through testing you can then refine and expand this approach. In Adjust and Appsflyer (“Conversion” model) this is straightforward. In Kochava you could likely make the “User Journey” model work for this, or just get the highest-value event returned. In Branch you would either need a custom setup or get the highest-value event returned. An important part of this will be probabilistically assigning an LTV value based on the data you have. We explore this topic in detail in this post, which looks at “Three approaches to measure granular LTV in a SKAdNetwork era”. We worked with our friends at Algolift, Incrmntal and Metric Works to outline solutions including linear redistribution, probabilistic redistribution and top-down incrementality.
Let us know if you have additional recommendations, questions or thoughts @Appsumer on Twitter.
P.S. We’re planning to run a webinar looking at ConversionValue best practices and probabilistic LTV distribution. If you want to learn more you can get notified when the webinar is scheduled by signing up here.