
A week after the enforcement of Apple’s App Tracking Transparency (ATT) framework and the increased usage of its SKAdNetwork (SKAN) attribution solution, it’s been hard to keep up with everything that’s happened. We thought we’d round up the interesting things we’ve seen and the lessons we’ve learnt from live SKAN implementations so far.
We’ll also be running a “SKAN & ATT: Lessons learnt & early best practices” webinar with industry experts in the coming weeks, once more lessons have accumulated, to evaluate emerging best practices and how we should adapt as mobile user acquisition experts. You can get notified when this webinar goes live by signing up here.
Realities of SKAdNetwork (SKAN) data
Having spent the last week building performance views of SKAdNetwork data for clients, we’ve learnt a few things about the realities. The biggest lesson has been the importance of understanding the source of a metric when you’re analysing performance, because SKAdNetwork data is dressed up by different parties before it reaches you as the advertiser:
- Apple’s SKAdNetwork sends the raw postback, containing the Campaign ID, Source App ID and raw ConversionValue, to the ad network after a delay and only once privacy thresholds have been met (more on this later). A rough sketch of these fields follows this list.
- When a Self-Attributing Network (SAN) receives this data they may model it. For example, Facebook will model the data at the ad level. Many other SANs are modelling data at different levels too.
- Your Mobile Measurement Partner (MMP) receives the data, translates the ConversionValue and may apply further modelling of its own.
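For reference, here’s a rough sketch of what that raw postback contains, based on our understanding of the SKAdNetwork 2.x JSON keys. Exact fields vary by SKAN version, so treat the shape as illustrative rather than definitive:

```swift
import Foundation

// Rough sketch of an SKAdNetwork 2.x install postback as Apple sends it
// to the ad network. Key names follow Apple's documented JSON keys at the
// time of writing; treat the exact shape as illustrative.
struct SKANPostback: Codable {
    let version: String              // e.g. "2.1"
    let adNetworkId: String          // the ad network's registered ID
    let campaignId: Int              // ad network's campaign ID (only ~100 distinct values allowed)
    let transactionId: String        // unique per install, used for de-duplication
    let appId: Int                   // App Store ID of the advertised app
    let attributionSignature: String // Apple's cryptographic signature over the payload
    let redownload: Bool             // true if the user re-downloaded the app
    // Only included when Apple's privacy threshold is met:
    let sourceAppId: Int?            // App Store ID of the publisher app that showed the ad
    let conversionValue: Int?        // the 0-63 value set by the advertised app

    enum CodingKeys: String, CodingKey {
        case version, redownload
        case adNetworkId = "ad-network-id"
        case campaignId = "campaign-id"
        case transactionId = "transaction-id"
        case appId = "app-id"
        case attributionSignature = "attribution-signature"
        case sourceAppId = "source-app-id"
        case conversionValue = "conversion-value"
    }
}
```

Note that the two fields subject to the privacy threshold are exactly the ones advertisers care most about for source-level and LTV analysis.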
You’re going to see discrepancies between the SAN and MMP interfaces, so it’s important to understand the source of each metric when you view it in your own reporting or BI solution. Ultimately, you need to decide what your single source of truth is going to be.
Having a single source of truth becomes harder given something else we’ve noted this week. Whilst SANs like Facebook made it clear they would pass SKAN install and conversion data back to MMPs, and MMPs appeared ready, we’re seeing a lot of MMP SKAN reporting from SANs with no installs or conversions. On Facebook’s end this appears to be due to the slow release of documentation for Facebook’s Insights API. Whilst many MMPs are receiving SKAN cost, click and impression data via their SAN integrations, they don’t appear to have built the API integrations to receive SKAN conversion data just yet.
They are working on it, but right now many MMPs’ SKAN reporting shows some SANs with zero conversions, and it’s unclear which source those conversions are being attributed to instead. It’s also unclear whether the conversion data, once integrated, will be modelled data from SANs or raw SKAN postbacks. For now we’re integrating modelled SKAN conversion data directly from the SAN APIs, given our breadth of out-of-the-box integrations. If you’re struggling to get a view of this data, feel free to get in touch.
Key lessons:
- It’s important to understand what dressing has happened to SKAN metrics before they reach your BI or reporting tool, given the discrepancies across interfaces caused by SAN and MMP modelling.
- You’ll then need to decide whether you want your source of truth for SKAN to be your MMP attribution or modelled SAN conversions. The challenge with SAN conversions is you’re allowing the student to mark their own homework.
- If your MMP’s SKAN attribution is reporting zero installs and conversions from certain SANs, you’ll need to understand which source those conversions are being attributed to instead. It’s also worth understanding timelines for SKAN conversion integrations and whether they’ll be based on raw postbacks or modelled SAN data.
The impact of Apple’s privacy threshold on SKAdNetwork (SKAN) data
We’ve talked a lot about Apple’s mysterious privacy threshold and the impact it might have on how much data actually comes back via SKAN. Very little data has been shared on this so far. However, buried in this article from Paul Bowen at Algolift by Vungle is the following:
“Advertisers are quickly learning that the privacy threshold implemented by Apple as part of SKAdNetwork is limiting the amount of data they’re expecting to collect. This means they’re missing a large percentage of both source app ID and ConversionValue data to understand respectively where a given install occurred and to get a rough estimation of the LTV of an acquired user after installing. Some advertisers have reported as high as 80% of ConversionValue data being missing—data they expected to be reported back to their respective MMP.”
We had suggested in this article that 20-30% of data could be lost, based on Google’s previous privacy thresholds on search data. So 80% is a massive jump! However, a few important caveats:
- The claim is that “some advertisers have reported as high as 80%”. It isn’t aggregated data across multiple advertisers, and “as high as 80%” is an upper bound. If an advertiser has a low volume of installs and a high number of unique ConversionValues, it’s feasible that 80% could be lost. It will be interesting to see broader data on this.
- It could also be that the privacy threshold applies to initial traffic, e.g. you need to acquire at least 200 installs from a specific media partner before SKAdNetwork starts returning more data. If install volumes are low, that data might not be visible yet, but SKAN attributions would increase over time.
- It’s also possible that data is being lost because your MMP hasn’t yet integrated SKAN conversion data from SANs, as we’ve already discussed. Once these integrations are in place, data loss may reduce and the actual impact of the privacy threshold will become clearer.
We’ll be keeping an eye on privacy thresholds across implementations and will give a better idea of averages once things settle down and MMPs are fully integrated with SANs’ SKAN conversion data.
Key lessons:
- Keep a very close eye on SKAN data volumes versus what you’d expect. To make use of SKAN you need to lose as little data as possible to the privacy threshold, so any significant loss will need fixing quickly.
- If you believe you’re losing a high percentage of attributed data to the privacy threshold, check that your MMP is ingesting SKAN conversion data from all SANs. If it is, look at reducing the number of unique ConversionValues you’re using on SKAdNetwork; one way to coarsen them is sketched below.
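To illustrate, here’s a minimal Swift sketch of a coarsened ConversionValue scheme built on Apple’s SKAdNetwork.updateConversionValue(_:). The bucket definitions and revenue thresholds are hypothetical assumptions, not a recommendation; the point is simply that collapsing the 64 possible values into a handful of meaningful buckets gives each one a better chance of clearing the privacy threshold.

```swift
import StoreKit

// Hypothetical, coarse conversion buckets. The events and revenue cut-offs
// are illustrative assumptions; the only real constraint is the 0-63 range.
enum ConversionBucket: Int {
    case installOnly = 0   // no meaningful action yet
    case registered  = 1   // completed onboarding
    case lowSpender  = 2   // first-day revenue below $5
    case highSpender = 3   // first-day revenue of $5 or more
}

func reportConversion(_ bucket: ConversionBucket) {
    if #available(iOS 14.0, *) {
        // Apple accepts any value from 0 to 63, but fewer distinct values
        // means more installs share each value, which should help more
        // postbacks clear the privacy threshold.
        SKAdNetwork.updateConversionValue(bucket.rawValue)
    }
}
```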
ATT opt-in rates
Tim Koscella from Kayzen, the in-house bidder for app developers and mobile advertisers, shared some interesting data on opt-in rates seven days after ATT came into force. Kayzen looked at “Authorized” vs “Denied” ATT status flags in ad requests after the rollout and found that roughly 15-20% of users allowed tracking vs. 80-85% who didn’t.
[Chart from Kayzen: share of ad requests by ATT status (“denied”, “authorized” and “not determined”)]
Flurry also released some opt-in data, which paints a picture of an even lower opt-in rate:
- Worldwide opt-in rate at 13%
- US opt-in rate at just 5%
- Worldwide, 5% of users have opted out of even being asked for permission
- In the US, 2% of users have opted out of being asked for permission
Different methodologies and samples are likely behind the variance, and 10-20% was roughly the default expectation. It will be interesting to see this data segmented further as sample sizes grow, looking at opt-in rates by geo, app category and scale.
Key lesson: Don’t assume an opt-in rate any higher than 20%. It appears you’ll be doing well to get even that.
ATT opt-in pre-prompts and alerts
There’s lots of great material around on ATT alerts and pre-prompts. Adam Lovallo from Thesis, a growth agency, surfaced this guidance from Apple on pre-ATT prompts. The key points highlighted were:
- You can’t imitate the ATT alert with your pre-ATT prompt. In particular, don’t create a button title that uses “Allow” or similar terms, because users don’t allow anything in a pre-alert screen.
- You can’t show an image of the standard ATT alert in your pre-ATT prompt, or modify that image in any way.
- You can’t add a visual cue that draws people’s attention to the ATT alert’s “Allow” button.
With this in mind, Sylvain Gauchet of GrowthGems.co and Babbel has created this useful gallery of ATT pre-prompts and alerts out in the wild. The trends so far:
- About 80% are not using the pre-prompt.
- Most are triggering the ATT alert early in the user journey, predominantly on app launch. Perhaps building trust before asking would enable more opt-ins; best practices around timing will no doubt develop quickly in the coming weeks and months.
- Most of the sub-text in the ATT alert focused on some form of “We’ll use this data to serve more personalised ads and experiences”. Interestingly, very few mentioned that opting in helps keep the service free or fund improvements to the product, as Facebook has. The gallery shows a couple of different approaches to this text.
Massive props to Sylvain for this great resource. If you see examples in the wild, you can submit them here.
Also, major props to Božo Janković at GameBiz Consulting, who analysed how the top 100 grossing gaming apps are showing the ATT prompt. The key findings at the time of writing were:
- Only 42 of them showed the ATT alert at all.
- All who showed it did so right at app open, no delays.
- Only 8 showed a pre-ATT prompt.
- If an app was ad-funded, it was nearly twice as likely to show the ATT alert and slightly more likely to show the pre-prompt.
- Similarly, casual games were slightly more likely to show the ATT alert than core games.
What’s most interesting is that fewer than half of the top 100 grossing games are showing the ATT alert at all; many are clearly taking a wait-and-see approach. Very few were using the alert sub-text to show the benefits of opting in, e.g. keeping the app free or funding product development.
Key lessons:
- It might be worth experimenting with showing the prompt later in the user journey. However, the SKAdNetwork timer may limit how long you can wait.
- It’s also surprising how few are using the pre-prompt, which is a great chance to communicate more than you can in the alert sub-text. This analysis isn’t scientific, so it’s possible some are A/B testing opt-ins with and without the pre-prompt. That seems like a sensible early approach, but a thoughtful pre-prompt is definitely worth testing (see the sketch after this list).
- It might also be worth experimenting with more positive sub-text on the ATT alert, showing benefits of opting in beyond personalised ad and product experiences, as Facebook has done.
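As a reference point, here’s a minimal Swift sketch of gating the system ATT alert behind a simple pre-prompt using Apple’s ATTrackingManager. The pre-prompt copy, button titles and trigger point are illustrative assumptions (and any real pre-prompt should respect Apple’s guidance above); the system alert itself can’t be customised beyond the usage-description string in your Info.plist.

```swift
import AppTrackingTransparency
import UIKit

// Show a custom pre-prompt first, then trigger the system ATT alert.
// Copy and timing are assumptions; the only Apple API used is ATTrackingManager.
func showTrackingFlow(from viewController: UIViewController) {
    guard #available(iOS 14, *) else { return }
    guard ATTrackingManager.trackingAuthorizationStatus == .notDetermined else { return }

    let prePrompt = UIAlertController(
        title: "Help keep the app free",
        message: "On the next screen, allowing tracking lets us fund the app with more relevant ads.",
        preferredStyle: .alert
    )
    prePrompt.addAction(UIAlertAction(title: "Continue", style: .default) { _ in
        // The system alert; its title and buttons cannot be changed, only the
        // sub-text set via NSUserTrackingUsageDescription in Info.plist.
        ATTrackingManager.requestTrackingAuthorization { status in
            // status is .authorized, .denied, .restricted or .notDetermined
            print("ATT status: \(status)")
        }
    })
    viewController.present(prePrompt, animated: true)
}
```

Triggering the prompt later in the user journey (after onboarding, or after a first valuable session) is just a matter of calling this from a different point in the app, bearing in mind the SKAdNetwork timer mentioned above.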
Wrapping up
We’re still in a period of flux after only a week living in this new world, and things will settle somewhat over the coming weeks; we’ll keep sharing what we’re learning each week. For now, some important early lessons are:
- Understand what dressing SANs and MMPs have applied to SKAN metrics by the time you’re doing performance analysis in your reporting or BI solution, and choose a single source of truth.
- It’s also important to confirm whether your MMP has integrated SKAN conversion data from all SANs, whether that data comes from raw postbacks or modelled SAN data, and what modelling happens on the MMP’s end.
- Keep an eye on the impact of Apple’s privacy threshold, and if you’re losing significant volumes to it, reduce the number of unique ConversionValues you’re using.
- Work on the basis that your opt-in rate won’t be any greater than 20% right now if you’re using the ATT alert.
- Test using a pre-prompt, and test more positive, benefit-led messaging in the alert sub-text rather than the default.
Finally, don’t stress! We’ll be back with more lessons next week, and don’t forget to sign up here to get notified about our webinar, where we’ll explore early best practices for SKAdNetwork and ATT alongside industry experts.