In the fast-paced world of performance marketing, making decisions based on bad data isn't just an inconvenience; it's incredibly expensive. To get to the bottom of where teams are losing money without realizing it, I met with Artsiom Kazimirchyk, the CEO and Co-founder of Campaignswell.
Campaignswell is a platform built to help user acquisition teams and media buyers analyze and estimate their marketing efforts. By pulling data from ad networks, revenue systems, and attribution models, they build predictive lifetime value (LTV) models to help teams decide whether to scale or cut campaigns in a fraction of the usual time.
Given his deep visibility into the analytics of high-spending performance teams, Artsiom is perfectly positioned to break down the five most expensive data mistakes marketers make.
Please note, this interview has been edited for clarity and length.
Key takeaways
- “Clean” revenue reporting requires deductions: When calculating LTV, a major mistake is failing to properly deduct extra costs from the revenue calculation. Behind the sticker price of a product, there are payment system fees, refunds, chargebacks, dispute fees, and country taxes. You have to factor in these costs to avoid phantom revenue reporting.
- Cohort analysis beats the calendar view trap: The calendar view trap happens when you compare the money you spent in a specific month with the total revenue generated in that same month. This is flawed because that month's revenue includes income from users acquired in previous months. To more accurately analyze performance marketing, you must estimate the efficiency of individual cohorts, not the calendar month.
- Attribution requires alignment and technical stitching: To avoid duplicate reporting across channels, teams have to discuss and agree on how to count or split revenue. Furthermore, to successfully stitch mobile and web data, you may need to consider a dedicated data partner.

Mistake #1: Miscalculated lifetime value (LTV)
Scott: Let’s start with the most frequent ways teams miscalculate their customers' lifetime value. What usually causes these errors, and how long do they typically go unnoticed?
Artsiom: This is probably the most popular mistake we see. When businesses rely on subscription or one-time payment models, they use payment system providers. Behind the sticker price of a product, let's say it's $10, there are payment system fees, refunds, chargebacks, dispute fees, and country taxes. You might only actually receive $6 from that $10.
The mistake is failing to properly deduct these extra costs from the revenue calculation. A marketing team might see a customer acquisition cost (CAC) of $10 and an LTV of $12 or $13, leading them to believe the campaign is profitable. However, once you deduct refunds, fees, and taxes, that LTV might drop to $9. Suddenly, you are losing money on every user acquired.
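The arithmetic Artsiom describes can be sketched in a few lines. This is an illustrative toy, not Campaignswell's model: the deduction rates below are invented for the example, and in practice you would pull them from your payment provider's reports.

```python
# Hypothetical illustration of "clean" LTV: deduct payment fees, refunds,
# chargebacks, and taxes from the gross figure a dashboard might show.
GROSS_LTV = 12.00  # reported LTV per user
CAC = 10.00        # customer acquisition cost

def net_ltv(gross, fee_rate=0.05, refund_rate=0.10,
            chargeback_rate=0.02, tax_rate=0.08):
    """Deduction rates are made-up placeholders; use your own payment data."""
    deductions = gross * (fee_rate + refund_rate + chargeback_rate + tax_rate)
    return gross - deductions

clean = net_ltv(GROSS_LTV)
print(f"Gross LTV ${GROSS_LTV:.2f} -> net LTV ${clean:.2f}")
print("Profitable vs CAC?", clean > CAC)
```

With these example rates, the $12 "profitable" LTV nets out to $9, below the $10 CAC, which is exactly the reversal described above.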
We had a client come to us who had been making this exact miscalculation for six to eight months. The company thought everything was great and spent millions on marketing with a misled KPI, only to realize later that they had lost a significant amount of money.
Scott: It sounds like a core issue is that marketers aren’t scrutinizing these calculations enough?
Artsiom: Exactly. Marketers often receive metrics calculated by another team. They see “LTV” and accept it as a universal truth, but you really have to double-check. You need to ask what the calculation consists of, what deductions have been excluded, and exactly whose LTV it represents. For instance, if you optimize campaigns for subscribers, you must look at subscriber LTV: the specific LTV for the audience segment you’re analyzing.

Mistake #2: Phantom revenue reporting
Scott: This leads perfectly into mistake number two, which is phantom revenue reporting. Aside from standard refunds, what other missing deductions create the biggest gap between reported revenue and real revenue?
Artsiom: A major one right now involves web products and web funnels, which are very popular. When you use payment systems like Stripe, PayPal, or SolidGate, you have to closely monitor your dispute rate and dispute activity metrics.
To keep that dispute percentage low, companies often integrate external solutions like Rapid Dispute Resolution (RDR) alerts. These alerts cost money, and you have to factor that into your unit economics. A campaign might look profitable until you include these alert deductions. For some of our clients, these alerts can cost $50,000 to $100,000 per month, so it is a highly visible chunk of money that cannot be ignored in revenue reporting.
Whenever you are handed a revenue metric, you have to ask what clean actually means in that context and what has already been excluded from the gross amount.

Mistake #3: The calendar view trap
Scott: Let’s talk about the calendar view trap. What is this trap, and why is it so dangerous?
Artsiom: This is a classic mistake. Imagine your business revenue is a cake, specifically, a Napoleon cake with many distinct layers. Every layer represents a different cohort of users. A cohort is simply a group of users you acquired via paid ads during the same period.
The calendar view trap happens when you try to estimate the efficiency of your marketing by cutting a vertical slice of that cake for a specific month, like July. You compare the money you spent in July with the total revenue generated in July. The problem is that the revenue generated in July includes revenue from users acquired in previous months.

Scott: So you are matching today's spend against past success.
Artsiom: Exactly. It’s like comparing apples and oranges. To properly analyze performance marketing, you must estimate the efficiency of the individual layers (the cohorts), not the calendar month. If you acquire 100 users for $10 each today, you need to track the LTV of those specific users to see if that layer is profitable.
Building proper cohort analytics is much more difficult than calendar analytics. It usually requires hiring data engineers to join data across different systems or using a specialized business intelligence platform to process it for you.
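The difference between the two views can be shown with toy numbers. Everything here is invented for illustration: the point is only that the same purchase data yields opposite conclusions depending on whether you group revenue by the month it landed or by the cohort that generated it.

```python
# Each purchase records the month the user was ACQUIRED (cohort)
# and the month the REVENUE actually landed. Figures are made up.
purchases = [
    {"cohort": "2024-06", "month": "2024-07", "revenue": 400},  # old cohort paying in July
    {"cohort": "2024-07", "month": "2024-07", "revenue": 150},  # July cohort, July revenue
    {"cohort": "2024-07", "month": "2024-08", "revenue": 200},  # July cohort paying later
]
spend = {"2024-07": 500}  # ad spend in July

# Calendar view: July spend vs ALL revenue booked in July (the trap).
calendar_rev = sum(p["revenue"] for p in purchases if p["month"] == "2024-07")
print("calendar ROAS:", calendar_rev / spend["2024-07"])  # 550/500 = 1.1, looks profitable

# Cohort view: July spend vs revenue from users ACQUIRED in July.
cohort_rev = sum(p["revenue"] for p in purchases if p["cohort"] == "2024-07")
print("cohort ROAS:", cohort_rev / spend["2024-07"])  # 350/500 = 0.7, actually losing
```

The calendar slice looks profitable only because $400 of July's revenue came from a June layer of the cake; the July cohort itself is underwater.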

Mistake #4: Broken or biased attribution
Scott: That complexity brings us to mistake number four: broken or biased attribution and stitching user journeys. Is it even possible to overcome broken attribution as user journeys get more complex?
Artsiom: First of all, there is no single “right” attribution model. It depends entirely on how your company builds its processes.
A common tricky scenario involves acquiring a user’s email address via Meta Ads. Weeks later, you send them an email and they convert. Does that revenue belong to Meta or to email marketing? If you aren't careful, both channels will claim the user, and you will accidentally duplicate your revenue in reporting. Teams have to discuss and agree on how to count or split that revenue to avoid duplicate reporting.
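One simple way to encode the "agree on a split" idea is a dedup rule that divides a conversion's revenue across every channel that claims it, so the total is counted exactly once. This is a hypothetical sketch with an invented even-split policy; your team's agreed policy (first-touch, last-touch, weighted) would replace it.

```python
# Each conversion lists every channel that claims it. Without a split
# policy, summing per-channel reports double-counts the revenue.
conversions = [
    {"user": "u1", "revenue": 30.0, "claims": ["meta_ads", "email"]},
    {"user": "u2", "revenue": 50.0, "claims": ["email"]},
]

def attribute(conversions):
    """Even-split policy: divide each conversion across all claiming channels."""
    totals = {}
    for c in conversions:
        share = c["revenue"] / len(c["claims"])
        for channel in c["claims"]:
            totals[channel] = totals.get(channel, 0.0) + share
    return totals

print(attribute(conversions))  # channel totals now sum to $80, not $110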
Scott: Where else does measurement usually break?
Artsiom: The connection between web and app journeys is very complex. A user might be acquired via a web campaign, make a payment on the web, but then download the app and make further in-app purchases. You need to pass the web attribution through to mobile so you can attribute that in-app revenue back to the original web campaign that captured them. If you do nothing, you will just see the web revenue as web campaign success, and the mobile revenue will look like organic app growth.
To fix this, you need direct connections with raw, user-level data sources using MMPs (mobile measurement partners) for mobile and tools like Google Analytics, Amplitude, or Mixpanel for web, and then utilize attribution software to join those users via different IDs under the hood.
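Conceptually, the stitching step is a join on a shared key. This sketch assumes both systems expose some common identifier (a hashed email or internal user ID) in their raw exports; the field names and events below are invented, not real export schemas from any of the tools mentioned.

```python
# Hypothetical raw exports: web analytics events carry the campaign,
# MMP app purchases arrive with no campaign and would look "organic".
web_events = [
    {"user_id": "u42", "campaign": "web_summer_promo", "revenue": 10.0},
]
app_purchases = [
    {"user_id": "u42", "revenue": 25.0},
]

# Index each user's original web campaign, then credit in-app revenue to it.
first_touch = {e["user_id"]: e["campaign"] for e in web_events}
campaign_revenue = {}
for e in web_events + app_purchases:
    campaign = first_touch.get(e["user_id"], "organic")
    campaign_revenue[campaign] = campaign_revenue.get(campaign, 0.0) + e["revenue"]

print(campaign_revenue)  # all $35 credited to the acquiring web campaign
```

Without the join, the same data would report $10 of web success and $25 of phantom organic app growth.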

Mistake #5: The lagging insight illusion
Scott: Our final mistake is the lagging insight illusion. When teams are scaling spend, what are some false confidence signals, and how can they spot them early?
Artsiom: This ties directly back to how LTV is calculated. The traditional approach is taking historical data, calculating average coefficients, and applying them to your current cohorts. While this works the majority of the time, we live in a dynamic world where marketing teams run constant tests.
If you just calculate averages from the past, you assume your new audience will behave the exact same way. When a campaign brings in a radically different audience, that static coefficient becomes a false confidence signal.

Scott: How do you overcome this so you aren't waiting months to realize a campaign is failing?
Artsiom: You have to look at user behavior metrics, not just static historical coefficients. Machine learning algorithms can look at early signals, like how fast a user refunds, renews, or churns in the first few days, and predict a highly accurate LTV.
For example, we had a client with an LTV of around $60 for several months. Suddenly, our prediction model flagged that the new users coming in only had an LTV of $45. Because we caught this signal in just a few days rather than months, the client was able to immediately decrease marketing spend and investigate. They realized a completely new audience was emerging. We repackaged their product advertising for this audience, and successfully started growing again with optimized margins.
LTV should be viewed as the heart rate of your user acquisition. When it significantly drops or increases during early checkpoints, it is a trigger that something has changed with your audience, and you need to act immediately.
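The "heart rate" check reduces to comparing an early predicted cohort LTV against the historical baseline and alerting on large deviations. The numbers mirror the $60-to-$45 example above; the 15% threshold and function name are invented, and real prediction models are far more involved than this monitor.

```python
# Minimal early-warning sketch: flag cohorts whose predicted LTV
# deviates sharply from the historical baseline.
BASELINE_LTV = 60.0      # historical LTV from the interview example
ALERT_THRESHOLD = 0.15   # made-up: flag moves larger than 15%

def ltv_alert(predicted, baseline=BASELINE_LTV, threshold=ALERT_THRESHOLD):
    """Return (flagged, relative_change) for a cohort's predicted LTV."""
    change = (predicted - baseline) / baseline
    return abs(change) > threshold, change

flagged, change = ltv_alert(45.0)
print(f"flagged={flagged}, change={change:.0%}")  # flagged=True, change=-25%
```

A drop to $45 is a 25% move and trips the alert days into the cohort, long before the calendar numbers would reveal the problem.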
Scott: Artsiom, thank you so much for breaking down these data complexities. Where can people go to learn more about you and Campaignswell?
Artsiom: You can find me on LinkedIn, and anyone can book a demo call with me. I am always happy to talk with people who are deeply involved in user acquisition about how we can help them navigate these challenges.
