Tracking & Attribution
May 11, 2026

The ultimate guide to mobile app tracking & attribution

by Ana Siu

Running mobile app campaigns at scale? Try Bïrch for free.

Mobile app tracking and attribution have never been simple. But for a long time, they were predictable. With device IDs and deterministic matching, marketers could reasonably connect installs and in-app events back to specific campaigns.

That’s no longer the case.

Today, much of that signal is either restricted, delayed, or modeled. On iOS, user-level tracking has largely disappeared. On Android, it’s evolving. And across platforms, the data you rely on is increasingly fragmented.

This creates a fundamental problem: you can’t attribute performance without reliable tracking, and you can’t trust attribution without understanding how that data is collected—and where the signal weakens.

This guide breaks down both sides of that system: how mobile tracking works and where signal loss happens, how attribution is applied on top of that data, and how teams make decisions when the picture is incomplete.

What is mobile app tracking and attribution?

Mobile tracking is the process of capturing user interactions within an app, such as installs, in-app events, purchases, subscriptions, and re-engagement actions. This data is then passed to ad platforms, analytics tools, or mobile measurement partners (MMPs).

It differs from web tracking, which relies heavily on cookies and browser-based signals. Mobile tracking doesn’t have that option. It depends on device identifiers (IDFA on iOS, GAID on Android), SDKs embedded inside apps, and the intermediary layer of the app stores, which adds friction that the web doesn’t have.

The SDK is the core of the mobile tracking stack. It lives inside the app and is responsible for capturing events and sending them to the right destinations: your MMP, your ad platform, your internal BI system.

If something is missing, duplicated, or inconsistent in your data, it usually starts here.

How tracking data gets passed to ad platforms and MMPs

When a user sees an ad and installs an app, the ad platform records an impression or click. The MMP SDK, integrated into the app, then triggers an install event, which is sent to the MMP.

The MMP uses this install event, along with device-level data, to send attribution postbacks (including identifiers such as the device ID) to connected ad networks—essentially asking, "Whose installation was this?"

Each network checks its own data and responds if it can claim the install. If multiple sources are involved, the MMP compares their responses and attributes the install according to the industry-standard model, typically awarding credit to the last click.
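As a sketch of that claim-resolution step, here is a minimal last-click resolver in Python. The claim fields and the 7-day window default are illustrative assumptions, not any real MMP's schema:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of how an MMP might resolve competing install
# claims under a last-click model. Field names are illustrative.

def resolve_last_click(install_time, claims, click_window_days=7):
    """Return the claim with the most recent click inside the window,
    or None if no network can claim the install."""
    window = timedelta(days=click_window_days)
    eligible = [
        c for c in claims
        if c["click_time"] <= install_time
        and install_time - c["click_time"] <= window
    ]
    if not eligible:
        return None  # no valid claim: treated as organic
    return max(eligible, key=lambda c: c["click_time"])

install = datetime(2026, 5, 11, 12, 0)
claims = [
    {"network": "network_a", "click_time": datetime(2026, 5, 10, 9, 0)},
    {"network": "network_b", "click_time": datetime(2026, 5, 11, 11, 30)},
    {"network": "network_c", "click_time": datetime(2026, 5, 1, 8, 0)},  # outside window
]
winner = resolve_last_click(install, claims)
print(winner["network"])  # network_b: latest click inside the 7-day window
```

Every real resolver layers more onto this (view-through claims, priority rules, fraud checks), but the core is this comparison.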

That matching process isn’t straightforward. It depends on timing, attribution windows, device ID availability, and modeling assumptions that vary across platforms and tools.

The trouble is, there isn’t a single way to match installs to ads.

Deterministic matching, which relies on device identifiers such as IDFA or GAID, used to be the most reliable method. But on iOS, this now applies to a much smaller share of traffic.

As deterministic matching declined, probabilistic methods like fingerprinting were also heavily restricted—especially under Apple’s policies.

What remains is a mix of approaches:

  • Click-based matching (stronger signal)
  • Impression-based matching (weaker and prone to over-attribution)
  • SKAdNetwork postbacks (provide delayed, aggregated campaign data)

We also need to consider that most attribution no longer happens in real time, especially on iOS. Since device identifiers aren’t available by default, the MMP can access them only with explicit user consent within the app. As a result, postbacks may be delayed, limited, or modeled before they reach the MMP.

So, we are left with a layered system of partial signals with very different levels of reliability.

How MMPs decide which ad gets credit

Once a signal exists, the MMP still has to decide which ad gets credit. That depends on which attribution model is being applied. MMPs—AppsFlyer, Adjust, Branch, Singular—sit between your app and your ad platforms, deduplicating events and providing a more neutral source of cross-channel measurement than any single platform's self-reported data.

  • Last-click is still the default for many platforms, despite its well-documented tendency to over-credit the final touchpoint.
  • Multi-touch models attempt a more distributed view, though they require more data to be meaningful.
  • View-through attribution assigns credit when a user sees an ad but does not click, and later installs or converts within a defined attribution window. It is commonly supported by both ad platforms and MMPs, but is usually treated separately from click-through attribution and often given lower priority in deduplication logic, since click signals are generally considered stronger evidence of causation. Even so, view-through attribution remains important for impression-led channels such as video, display, and some gaming ad inventory, where ads can influence installs without generating a direct click. 
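The click-over-view deduplication priority described above can be sketched in a few lines. The window lengths and touchpoint fields are assumptions for illustration, not a specific MMP's defaults:

```python
CLICK_WINDOW_H = 7 * 24  # assumed 7-day click-through window, in hours
VIEW_WINDOW_H = 24       # assumed 1-day view-through window

def pick_touchpoint(touchpoints):
    """touchpoints: dicts with 'type' ('click' or 'view') and
    'hours_before_install'. Clicks beat views; ties go to recency."""
    clicks = [t for t in touchpoints
              if t["type"] == "click"
              and t["hours_before_install"] <= CLICK_WINDOW_H]
    views = [t for t in touchpoints
             if t["type"] == "view"
             and t["hours_before_install"] <= VIEW_WINDOW_H]
    pool = clicks or views  # clicks take priority in deduplication
    if not pool:
        return None
    return min(pool, key=lambda t: t["hours_before_install"])

touchpoints = [
    {"type": "view", "hours_before_install": 2},
    {"type": "click", "hours_before_install": 100},
]
print(pick_touchpoint(touchpoints)["type"])  # click: beats the more recent view
```

Note the design choice this encodes: a days-old click outranks an hours-old impression, because the click is treated as stronger evidence of causation.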

This is one of the main reasons for discrepancies between platform-reported numbers and your MMP's numbers.

Installs are also just the starting point. Events like account creation, first purchase, day-7 retention, and subscription renewal are often more meaningful for optimization than the install itself, and they're subject to all the same signal loss and matching uncertainty.

The attribution window adds another variable. It defines how long after a click or view a conversion can still be credited to an ad, whether that's an install or a post-install event like a purchase or subscription. 

A 7-day click window means any install within 7 days of a click is attributed to that campaign. A 1-day view window means an install within 24 hours of seeing an ad can be attributed to that impression. Platforms don't all use the same defaults, so if your MMP and your ad platform are running different windows, the gaps will look like tracking errors but are actually just mismatches. Aligning windows across your stack before drawing conclusions is one of the most impactful setup steps, and one of the most commonly skipped.
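A toy example makes this concrete: the same click log produces very different install counts under a 7-day window versus a 1-day window, with no tracking error involved. The numbers here are made up:

```python
# Days between ad click and install for six hypothetical users.
days_between_click_and_install = [0.2, 0.8, 1.5, 3.0, 6.5, 9.0]

def attributed(delays, window_days):
    """Count installs that land inside the attribution window."""
    return sum(1 for d in delays if d <= window_days)

platform = attributed(days_between_click_and_install, 7)  # e.g. ad platform default
mmp = attributed(days_between_click_and_install, 1)       # e.g. a stricter MMP setting
print(platform, mmp)  # 5 2 -- a large "discrepancy" from windows alone
```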

Why mobile tracking and attribution have gotten harder

Apple’s App Tracking Transparency (ATT), introduced with iOS 14.5, required apps to ask users for permission before accessing their IDFA. Most users declined. Since then, opt-in rates have remained low across the industry, meaning a significant portion of iOS traffic is now unattributed at the individual level.

The knock-on effects were significant.

MMP-reported installs began to diverge from platform numbers across many campaigns. Meta limited the number of trackable post-install events to eight per app. Attribution windows were compressed. Deterministic matching, the gold standard of mobile attribution, gave way to probabilistic modeling across much of iOS.

Richie David, CEO of Totally Home Furniture, who has overseen enterprise-level attribution shifts at scale, described what happened: “Campaigns that looked flat on aggregate reporting were getting more traction than the raw numbers suggested once holdout tests pulled out organic gains separately. The problem was that teams were still reading raw install counts like it was 2019, wondering why their CPIs looked great, but revenue wasn’t following.”

SKAdNetwork: what it gives you and what it doesn’t

Apple’s SKAdNetwork (SKAN) replaced user-level tracking on iOS with something more privacy-preserving. But it’s considerably less precise. You get aggregated install data, a 24–72-hour reporting delay, and no user-level detail. Every number is a weighted estimate.

What you can actually measure depends heavily on how you configure SKAdNetwork’s conversion values—a limited set of signals mapped to in-app events. Poor configuration permanently limits what your campaigns can optimize for.
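For context, SKAdNetwork's fine-grained conversion value is a single integer from 0 to 63, so one common pattern is to treat it as six binary flags. The event-to-bit mapping below is a hypothetical schema for illustration, not Apple's or any MMP's recommendation:

```python
# Hypothetical 6-bit conversion-value schema: each bit encodes one
# binary in-app signal. The resulting value always fits in 0..63.
EVENT_BITS = {
    "registration": 0,
    "trial_start": 1,
    "first_purchase": 2,
    "day1_retained": 3,
    "day3_retained": 4,
    "subscription": 5,
}

def conversion_value(events):
    value = 0
    for e in events:
        value |= 1 << EVENT_BITS[e]
    return value

cv = conversion_value({"registration", "trial_start", "day1_retained"})
print(cv)  # 11 (binary 001011)
```

The point of designing this up front: once you have spent all six bits, there is no room left for a signal you forgot, which is why poor configuration permanently caps what campaigns can optimize for.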

Richie’s framing captures it well: “SKAdNetwork gave everyone aggregate data and labeled it measurement. The practical shift is to base your decisions on ranges rather than precise values—and get used to that ambiguity. Chasing precision that the data structurally cannot deliver is how you waste weeks optimizing against noise.”

Android, by contrast, has remained more deterministic. The Google Advertising ID (GAID) is still broadly accessible, making attribution more precise—for now.

Even so, attribution is never fully consistent. Each platform—Meta, Google, TikTok, Apple—reports attribution using its own models, windows, and view-through assumptions. Discrepancies between your ad platform and your MMP aren’t usually a sign that something is broken. They’re structural. Platforms have an incentive to claim as many conversions as possible, while your MMP is designed to be comparatively neutral.

Richie gave a useful benchmark: “A gap of less than 15% is normal attribution overlap—there’s nothing to act on there. Past 20%, check whether both sides are using the same attribution window before drawing conclusions. Most of the time, one side is counting engaged views and the other is only counting clicks.”
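That benchmark is easy to operationalize. The relative-gap formula below (difference over the platform count) is one reasonable convention, not a standard, and the thresholds are the ones quoted above:

```python
def gap_pct(platform_count, mmp_count):
    """Relative gap between platform- and MMP-reported conversions."""
    return abs(platform_count - mmp_count) / platform_count * 100

def classify(platform_count, mmp_count):
    gap = gap_pct(platform_count, mmp_count)
    if gap < 15:
        return "normal overlap"           # nothing to act on
    if gap > 20:
        return "check windows and event definitions"
    return "monitor"

print(classify(1000, 900))  # 10% gap: normal overlap
print(classify(1000, 750))  # 25% gap: check windows and event definitions
```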

How experienced mobile teams navigate tracking and attribution today

The teams doing this well aren’t looking for a single clean number. They are triangulating from multiple imperfect sources—MMP data, SKAdNetwork signals, platform reporting, and in-app event data from their own backend. And when granular attribution breaks down, they fall back on broader efficiency metrics like MER (total revenue divided by total ad spend) to keep decisions grounded.
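MER, as defined above, is simple arithmetic; a one-liner with made-up numbers:

```python
def mer(total_revenue, total_spend):
    """Marketing efficiency ratio: blended revenue over blended ad spend."""
    if total_spend == 0:
        raise ValueError("no spend recorded")
    return total_revenue / total_spend

# Illustrative figures: $120k revenue against $40k total spend.
print(round(mer(120_000, 40_000), 2))  # 3.0
```

Because it uses totals rather than attributed conversions, MER is immune to matching uncertainty, at the cost of saying nothing about which channel drove the result.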

None of these sources tells the full story on its own. That’s why cross-referencing your MMP against your own backend revenue data matters more than most teams expect.

The install, meanwhile, has become a lagging indicator. Early in-app events—account creation, first session depth, first purchase within 72 hours, day-7 retention—tend to be more predictive of long-term value than installs, fire earlier than late-funnel revenue events, and hold up better under privacy restrictions.

Waiting for the install volume to make optimization decisions often means you have already spent a significant amount of budget in the wrong direction.

What good tracking and attribution hygiene looks like

The execution doesn’t need to be complex—but it does need to be deliberate. The teams that navigate fragmented measurement best tend to share a few basic habits: they set things up thoughtfully before launch, keep their data consistent across tools, and treat their own backend data as the final word on performance.

To get this right:

  • Define your event taxonomy before you build your tracking setup, not after. Decide which events matter for optimization (not just reporting) and instrument those first.
  • Use consistent event naming across platforms and your MMP. Naming mismatches are a surprisingly common source of reporting discrepancies.
  • Audit your attribution windows regularly and align them across your MMP and ad platforms.
  • Deduplicate carefully, especially if you’re passing events from both your SDK and server-side tracking.
  • Cross-reference MMP data against your backend revenue data. Platform numbers and MMP numbers rarely match, but your backend data reflects what actually happened—real subscriptions, purchases, and revenue.
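As one sketch of the deduplication habit above, the snippet below drops duplicate events that arrive from both the SDK and server side. It assumes both paths share an event ID (a common convention, e.g. an order ID), which is not guaranteed in every setup:

```python
def deduplicate(events):
    """Keep the first occurrence of each (event_id, name) pair,
    regardless of whether it arrived via SDK or server."""
    seen = set()
    unique = []
    for e in events:
        key = (e["event_id"], e["name"])
        if key in seen:
            continue  # drop the duplicate
        seen.add(key)
        unique.append(e)
    return unique

events = [
    {"event_id": "ord_1", "name": "purchase", "source": "sdk"},
    {"event_id": "ord_1", "name": "purchase", "source": "server"},  # duplicate
    {"event_id": "ord_2", "name": "purchase", "source": "server"},
]
print(len(deduplicate(events)))  # 2
```

Without a shared ID, the same purchase counts twice, and every downstream ROAS number is inflated.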

As Yevhenii Tymoshenko, CMO at Skylum, puts it: “We never rely on one source alone. The MMP is our source of truth for cross-channel deduplication, but we always cross-reference it with our backend data for actual subscriptions and revenue.”

The goal is to understand what directionally makes sense—and what actually drives revenue. That shift changes how performance is measured. Installs become less useful on their own. Early in-app events matter more because they show up sooner and hold up better.

At Skylum, that means tracking time-to-first-meaningful-result, trial-to-paid conversion, and early retention.

Doing this at scale requires automation. Teams like AdQuantum turn these signals into action—quickly and systematically. Running hundreds of campaigns daily, they combine automated optimization with manual oversight, using Bïrch alongside MMP data to manage volume without losing control.

That balance—automation for speed, judgment for direction—is what makes scale sustainable.

Setting up your mobile tracking and attribution stack

Choosing an MMP comes down to a few factors: SDK coverage for the platforms you run on, integration depth with your ad platforms, privacy compliance for the markets you operate in, and the quality of reporting and access to raw data.

The main players—AppsFlyer, Adjust, Branch, and Singular—all cover the core use cases. The differences tend to show up in pricing, support quality, and how they handle specific use cases like SKAdNetwork configuration, deep linking, or cohort analysis. 

It’s worth evaluating a few before committing, particularly if you’re running across multiple platforms or markets with different privacy requirements.

Before you integrate anything, define your event taxonomy. Work backward from the business outcomes that actually matter—subscriptions, LTV, retention milestones—and instrument the in-app events that predict those outcomes. 

How you collect and structure customer data upstream will shape what’s actually trackable, and if you’re also running web campaigns alongside mobile, getting conversion tracking right on that side matters just as much.

Common setup mistakes that cause tracking gaps or attribution errors

Most tracking problems tend to come from the same handful of decisions made early and rarely revisited. 

Misaligned attribution windows between your MMP and ad platforms are the most common culprit behind unexplained discrepancies, usually because no one checked whether both sides were using the same defaults. 

On iOS, missing or misconfigured SKAdNetwork conversion value schemas quietly limit the signal quality available for optimization without throwing any obvious errors.

Event duplication is another mistake that surfaces late—typically when teams add server-side tracking on top of an existing SDK setup without built-in deduplication logic.

And then there’s the most avoidable mistake: tracking installs only, with no downstream events instrumented, or delaying event setup until after launch. 

Either way, your first week’s spend generates no usable signal at exactly the moment you need it most.

From tracking data to campaign action

Tracking and attribution data are only valuable if you act on them. That might sound obvious, but it’s where a lot of teams lose momentum. They have the data, but the process for turning it into campaign decisions is slow, manual, or inconsistent.

The signals that matter most for optimization—early in-app events, retention patterns, efficiency metrics like MER—need to be connected to your campaign management workflow in a way that’s fast enough to be useful. Waiting a week to review performance and make adjustments manually works at a small scale, but it doesn’t hold as you grow.

Mariia Golitsyna, who has spent 15+ years in enterprise revenue, framed this clearly: “When attribution data conflicts, I don’t try to find the correct number—I look for consistency across signals and prioritize what correlates with downstream revenue, not just installs.”

“The biggest mistake is over-optimizing for install metrics instead of tracking activation quality, retention, and monetization patterns.”

At scale, automation becomes necessary. Teams running large campaign volumes can’t review every ad set manually or make every bid adjustment by hand. 

Rules-based automation—pausing underperformers when specific in-app event thresholds aren’t met, scaling campaigns that hit retention or ROAS targets, duplicating winning ad sets into new audiences—lets you stay responsive to performance data without building an unsustainable manual workflow.
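A minimal sketch of one such rule, with illustrative thresholds and field names (real tools expose this as configurable rules rather than code):

```python
def decide(ad_set, min_events=10, target_roas=1.5):
    """Toy rule: pause ad sets that miss an in-app event threshold,
    flag for scaling those that hit a ROAS target, otherwise hold.
    Thresholds are assumptions for illustration."""
    if ad_set["spend"] > 0 and ad_set["events"] < min_events:
        return "pause"
    if ad_set["spend"] > 0 and ad_set["revenue"] / ad_set["spend"] >= target_roas:
        return "scale"
    return "hold"

print(decide({"spend": 500, "events": 3, "revenue": 200}))    # pause
print(decide({"spend": 500, "events": 40, "revenue": 1000}))  # scale
print(decide({"spend": 500, "events": 40, "revenue": 400}))   # hold
```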

This is what Bïrch is built for: connecting your performance signals to automated campaign actions. AdQuantum, for example, uses Bïrch’s automated rules for campaign duplication, data-driven relaunching, and reducing discrepancies between BI systems and ad platforms. The team runs hundreds of campaigns daily in a way that would be operationally impossible to manage without automation.

But automation isn’t a substitute for strategy. As Anton Kuzmin, Head of User Acquisition at AdQuantum, puts it: “You can’t automate your way out of creative or strategic gaps. Human insight, being honest with the audience, and product quality are still critical.” 

Automation handles the volume. Judgment still comes from the team.

Working with the constraints, not against them

Mobile tracking and attribution have become more complex over time. Privacy frameworks are evolving, platform measurement remains fragmented, and the gap between what you can observe and what you need to know won’t close anytime soon.

But teams that understand how the two layers connect—tracking feeding attribution, attribution feeding optimization—tend to make better decisions with incomplete data. They instrument the right events early, align their measurement setup across their stack, and build workflows that let them act on signals quickly rather than waiting for certainty that isn’t coming.

Knowing where the data is reliable and where it’s estimated is half the work. The other half is building systems that let you act on that data quickly—without waiting for a level of precision that no longer exists.

Mobile attribution is about making the right decisions with incomplete information—and doing it consistently. If you’re running mobile app campaigns and want to see how Bïrch fits into that workflow, start a free trial today.

FAQs

What is mobile app tracking?
Mobile app tracking is the process of capturing user interactions within an app—such as installs, in-app events, purchases, and subscriptions—and passing that data to ad platforms, analytics tools, or mobile measurement partners (MMPs). Unlike web tracking, it relies on device identifiers and SDKs rather than cookies.

What is mobile app attribution?
Mobile app attribution is the process of linking a tracked user event to the ad or channel that drove it. MMPs like AppsFlyer, Adjust, Branch, and Singular are the standard tools for managing this across channels.

What’s the difference between tracking and attribution?
Tracking is the data collection layer, capturing what users do inside your app. Attribution is the interpretation layer, determining which ad or channel drove those actions. The two are interdependent: attribution is only as reliable as the tracking data that feeds it.

What is an MMP, and do I need one?

A mobile measurement partner, or MMP, is a third-party tool that sits between your app and your ad platforms. It deduplicates events and provides cross-channel attribution. If you’re running campaigns across multiple platforms (Meta, Google, TikTok, Apple Search Ads), an MMP is effectively required for any reliable cross-channel view.

How has Apple’s ATT affected mobile tracking and attribution?

ATT requires apps to ask users for permission before accessing their IDFA. Most users decline. This has significantly reduced iOS tracking fidelity, shifted attribution from deterministic to probabilistic modeling, and introduced SKAdNetwork as the primary privacy-preserving framework for iOS measurement.

What is SKAdNetwork?

SKAdNetwork (SKAN) is Apple’s privacy-preserving attribution framework. It provides aggregated install data without user-level tracking, with a 24–72-hour reporting delay and no individual user details. Every number from SKAN is a weighted estimate, making it useful for directional decisions, not precise optimization.

How do I reduce discrepancies between ad platforms and my MMP?

Start by aligning attribution windows—this is the most common source of apparent discrepancies. Then check whether both sides are counting the same event types (e.g., click-through only vs. click-through plus view-through). A gap of up to 15% is generally within the normal range. Gaps above 20% typically indicate a definitional or configuration mismatch that warrants investigation.

Ana Siu

is a content marketing expert and writer specializing in marketing, technology, and social change. She is a contributor to the Bïrch Blog and has a background in advertising, journalism, and SaaS.

Get started with Bïrch: 14-day free trial, cancel anytime, no credit card required.