Stop guessing which beauty creatives work. Test them at scale with Bïrch Launcher.
Beauty is one of the most competitive verticals in paid social and one of the fastest-moving.
Globally, beauty sales grew by 10% last year. Online is outpacing offline (18% vs. 3%), and indie brands are still growing faster than large conglomerates.
What’s driving performance differences between brands isn’t usually their budget. It’s how they approach creative, structure campaigns, and reduce manual input.
This article looks at what’s working right now in paid marketing for beauty brands based on what we’re seeing across accounts and conversations with teams in the space, as well as input from Selfnamed, a product development and manufacturing partner that works with emerging and scaling beauty brands.
Key takeaways
- The gap between growing and stagnating beauty brands is increasingly operational. Creative testing, campaign structure, and automation matter more than budget alone.
- Creative remains the primary performance driver. Simple, product-focused, real-use formats consistently outperform highly produced or overly abstract ads.
- Testing works best as an ongoing system. Running multiple creative angles continuously leads to faster feedback and more stable performance.
- Not all customers are equal. Optimizing toward higher-value users, not just cheaper conversions, tends to improve long-term performance.
- Automation becomes essential at scale. The more decisions run through rules, the less performance depends on manual input. This is where Bïrch helps keep campaigns running efficiently.
- Most drop-off happens between interest and checkout. Small changes at this stage—especially around the timing of promotions—can have a disproportionate impact.
What’s changed for beauty marketers
A bit of context before getting into the specifics.
Clean, vegan, cruelty-free… A few years ago, these were differentiators. Now, they’re expected. In 2026, what’s influencing purchase decisions most are ingredient specificity and clinical proof. NielsenIQ found that 50% of consumers are willing to pay more for ingredient and supply chain transparency, and 49% are willing to pay more for elevated formulations.

This shift is also reflected in how skincare itself is evolving. Surface-level results are becoming less important to consumers than supporting the skin barrier and long-term skin health. People are paying less attention to how products are positioned and prioritizing those that actually work by supporting prevention, longevity, and deeper skin function.
This is also reflected in how products are developed. As Anete Vabule, CEO and co-founder of Selfnamed, explains: “The focus is on creating products that are more compatible with the skin, more transparent, and safer, while continuously improving performance.”
Brand loyalty has also softened. Where people used to stick with one brand and keep buying from it, they’re now more willing to switch based on what gets them the best results—and dupe culture has made that easier.
The purchase journey has also become less linear. For instance, discovery might happen on TikTok and then be followed by a quick search and a purchase on Amazon or a DTC site, sometimes within a few days.

As more of that journey happens within platforms, social commerce is becoming a real revenue channel, not just a discovery layer. 22% of global consumers have purchased directly through TikTok Shop, with rapid growth in beauty and personal care.
Meanwhile, online continues to outpace offline while paid channels are becoming more saturated and expensive. Brands are feeling pressure to get more out of what they already spend, rather than relying on scale alone.
Why creative makes or breaks beauty campaigns
If there’s one layer where beauty brands consistently win or lose, it’s creative. Campaign structure matters, but it won’t compensate for ads that don’t connect.
NielsenIQ’s neuroscience-based research helps explain why certain formats work. Watching someone use a product activates the mirror neuron network—people process it as if they’re using the product themselves. That’s part of why simple, realistic, close-up content tends to outperform polished studio work.
As the team at Selfnamed observes, “Today, especially on platforms like TikTok, brands are prioritizing user-generated content, as well as organic and sponsored video content.” These formats make it easier to show texture, application, and results in a way that feels more immediate and believable.
What tends to hold beauty creatives back is overproduction—too many cuts, overly artistic execution, or messaging that leans on negative qualifiers (like “no parabens” or “without harsh chemicals”). Competitor comparisons can have a similar effect.

In most cases, content that feels real—a person, a product, a clear result—drives stronger purchase intent than more stylized content.
Creative testing is non-negotiable in beauty
In their State of Beauty 2025 report, McKinsey highlights how beauty is driven by constant demand for newness. With TikTok and social commerce accelerating that cycle, creative fatigue sets in faster than it used to. A quarterly refresh often isn’t enough.
Brands that improve performance treat testing as part of the fabric of campaigns. And speed is a big part of this.
Selfnamed has noticed how brands that scale tend to move quickly when testing new ideas—responding to trends, launching variations, and validating what works in-market rather than relying on long development cycles.
Scentbird is a good example of what that looks like in practice. The team was spending a significant portion of their budget on ads that weren’t converting, so they needed a way to test more systematically without increasing manual workload.

Using Bïrch, they built a setup where different creative angles could be continuously tested and optimized—grouping variations, applying rules, and letting performance determine what to scale or pause.
As a result, they ran 1.8x more tests, reduced spend on zero-conversion ads by 76%, and cut manual optimization time by 30%.
What changed was the speed and consistency of the feedback loop.
Building a creative testing system that scales
Tools are making it easier to produce and test creative at scale.
Meta’s GenAI tools are an example, offering background generation, image expansion, text variations, and AI-assisted video. Using these makes it faster to produce and test multiple creative angles without rebuilding assets from scratch.
While production is becoming less of a bottleneck, the bigger constraint is still direction: what you choose to test and how you interpret what’s working.
In many accounts, this shows up as a loop: once a creative concept starts working, teams quickly generate variations—different edits, formats, or touch-ups—test them with audiences, and then roll the strongest ones into ongoing campaigns. Over time, this starts to reveal patterns around which styles or treatments resonate with different segments.
Bïrch Launcher builds on this by helping you turn those variations into structured, scalable tests. Multiple creative angles can run in parallel, rotate automatically, and be evaluated based on performance without constant manual input.

Instead of managing creatives one by one, you can see what’s performing, what’s fatiguing, and where to focus next.
One pattern we see a lot: these tools can speed up testing and decision-making, but they amplify whatever signal is already in the creative. If ads lean heavily on discounts, the system will find more discount-driven customers. The format doesn’t override the message.
Spending smarter on paid channels
Not all beauty customers offer brands the same long-term value.
A repeat skincare buyer behaves very differently from someone who purchases a discounted gift set once and doesn’t come back. Standard conversion optimization treats all conversions the same, while Meta’s value optimization prioritizes conversions from users who are more likely to generate higher revenue over time.
With value optimization, purchase value is sent via the Meta Pixel and Conversions API. The system then learns which users are more likely to drive long-term value. Meta reports an average ROAS that is around 12% higher than standard conversion optimization.
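To make this concrete, here is a minimal Python sketch of the event payload that carries purchase value to Meta's Conversions API. The email, value, and pixel details are placeholders; in production the payload is POSTed to `https://graph.facebook.com/<version>/<PIXEL_ID>/events` with an access token, and user identifiers must be normalized and SHA-256 hashed before sending.

```python
import hashlib
import json
import time

def hash_email(email: str) -> str:
    """Meta requires user identifiers to be normalized (lowercased,
    trimmed) and SHA-256 hashed before they are sent."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

def build_purchase_event(email: str, value: float, currency: str = "USD") -> dict:
    """Build one Purchase event carrying purchase value in custom_data.
    This is the part that lets value optimization learn which users
    drive higher revenue, not just more conversions."""
    return {
        "event_name": "Purchase",
        "event_time": int(time.time()),
        "action_source": "website",
        "user_data": {"em": [hash_email(email)]},
        "custom_data": {"currency": currency, "value": round(value, 2)},
    }

# Placeholder order: a $64 purchase from a hypothetical customer.
payload = {"data": [build_purchase_event("jane@example.com", 64.0)]}
print(json.dumps(payload, indent=2))
```

The key detail is `custom_data.value`: without it, the platform can only optimize for conversion counts, not revenue.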
Where this shows up most clearly is in how brands apply it. In some accounts, stronger repurchase rates in specific markets lead to more aggressive bidding there. In others, conversion patterns differ by placement—Reels, Stories, and Feed don’t always perform the same, and bidding reflects that.

In some cases, a specific audience segment consistently drives repeat purchases, and optimization shifts toward that group instead of the cheapest conversions.
As the team at Selfnamed points out, brands that scale usually have a clearly defined target customer and a strong understanding of their pain points. Without that clarity, optimization tends to default toward volume rather than value.
The creative point still matters here. If ads are built around discounts, they will likely attract lower-value customers, even with value optimization in place.
Automating campaign management at scale
As brands scale, there’s a point when manual optimization starts to break down. Beauty is especially sensitive to this. Seasonality, multi-market complexity, and fast-moving trends mean performance can shift faster than teams can realistically respond to manually.

Cocosolis ran into this as they expanded across more than 25 markets. Managing 20+ ad accounts manually made it difficult to maintain consistency. Performance varied, and decisions couldn’t keep up with the volume of changes.
The system they built with Bïrch distributed decision-making across different levels—campaign, ad set, and ad—so adjustments weren’t applied too broadly.
Timing was also localized. Activity patterns differed by market, so the rules reflected when users were actually active in each region.
Budgets and bids were adjusted dynamically based on performance, scaling what was working and pulling back where needed. There was also a recovery mechanism in place: paused ads could restart if performance improved once attribution data had caught up.
The result was consistency at scale: up to 102 hours saved monthly, up to 29,000 automation triggers per month, and more stable ROAS across markets.

What Cocosolis built reflects how automation is increasingly structured across beauty accounts.
For most teams, automation starts with creative testing. Performance is evaluated early using engagement and conversion signals, not just purchases. Promising creatives get budget increases, while weaker ones are reduced or paused. After a few days, clear winners are moved into scaling campaigns.
From there, automation focuses on maintaining stability. Performance is evaluated over longer windows, often with different targets by market to reflect variations in customer value. Systems also account for delayed attribution, allowing ads to recover if performance improves once data catches up.
At a higher level, automation acts as a safeguard—monitoring for anomalies such as sudden spikes in CPM, CPC, or spend and pausing activity when needed.
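The rule structure described above can be sketched as a simple decision function. The thresholds, metric names, and rule order here are illustrative assumptions, not Bïrch's actual rule syntax, but they show the shape: anomaly safeguards first, then pause, scale, or pull back based on performance against a target, with a separate recovery check for paused ads.

```python
from dataclasses import dataclass

@dataclass
class AdStats:
    name: str
    spend: float
    purchases: int
    roas: float
    cpm: float

def decide(ad: AdStats, target_roas: float = 2.0, cpm_ceiling: float = 25.0) -> str:
    """Illustrative rule order: safeguard first, then scale/reduce."""
    if ad.cpm > cpm_ceiling:
        return "pause"                # anomaly guard: sudden CPM spike
    if ad.spend > 50 and ad.purchases == 0:
        return "pause"                # no conversions after meaningful spend
    if ad.roas >= target_roas * 1.2:
        return "increase_budget"      # clear winner, scale it
    if ad.roas < target_roas * 0.8:
        return "decrease_budget"      # underperforming, pull back
    return "keep"

def maybe_restart(ad: AdStats, target_roas: float = 2.0) -> bool:
    """Recovery: a paused ad can restart once delayed attribution
    lifts its ROAS back above target."""
    return ad.roas >= target_roas

print(decide(AdStats("ugc_v3", spend=120, purchases=9, roas=3.1, cpm=12.0)))
```

In practice each rule would also carry an evaluation window (shorter for testing, longer for scaling), which is what lets the same structure serve both phases described above.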
Most issues with automation come down to how it’s set up.
Many teams rely too heavily on platform-reported data to power automation. If results are inflated, this can lead to over-scaling. Others optimize only for purchases and cut potential winners too early, ignoring early signals like add-to-cart or checkout activity.
There are also structural issues. Inconsistent naming conventions make automation harder to scale, and applying the same CPA or ROAS targets across markets often leads to inefficient spend.
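One way to avoid both problems at once is to make the naming convention machine-readable, so automation can derive a per-market target from the ad name instead of applying one global number. The convention and the target values below are hypothetical, purely to illustrate the pattern.

```python
# Hypothetical convention: "<market>_<funnel>_<format>_<version>"
# Target ROAS values are illustrative, not benchmarks.
MARKET_ROAS_TARGETS = {"DE": 2.5, "US": 2.0, "FR": 2.2}

def market_target(ad_name: str, default: float = 2.0) -> float:
    """Parse the market code from a convention-following ad name
    and return that market's ROAS target."""
    market = ad_name.split("_", 1)[0].upper()
    return MARKET_ROAS_TARGETS.get(market, default)

print(market_target("de_prospecting_ugc_v3"))  # 2.5
```

Once names encode the market, the same rule can run everywhere while still holding each region to its own economics.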
So the difference isn’t whether or not automation is used, but how well it’s structured. The same tools can amplify either performance or inefficiencies.
Closing the gap between interest and purchase
Beauty tends to generate a lot of interest. The drop-off usually happens somewhere between intent and checkout. Where that drop-off occurs matters because teams need to respond differently at each stage.
In practice, a large part of that drop-off comes from uncertainty at the decision stage—choosing between products, understanding how they work together, or figuring out what’s right for a specific need.
Selfnamed has seen that customers often look for guidance on which products to combine or how to build a routine, which is why bundles and curated sets tend to perform well. By reducing decision fatigue and clarifying the next step, brands can convert existing intent more effectively.
Once that uncertainty is reduced, the remaining friction often centers on timing and incentives.

Pricing and promotions are still among the more effective levers in digital beauty, but only at the right moment. Meta’s “Highlight your promotions” feature is one example. It surfaces promo codes directly in the in-app browser at checkout, so the offer appears when someone is deciding whether to complete the purchase.
Internal Meta testing showed a median 9% reduction in cost per purchase and a 10% lift in conversion rate. A randomized experiment across 675,000 ads found an average 4.6% increase in conversions among small and medium advertisers.

Beautyblender is one example of how this plays out in practice. The brand saw a 51% decrease in CPA, a 137% increase in click-through rate, and a 22% increase in ROAS without changing creative or audiences.
This tends to work best for seasonal campaigns, loyalty audiences, and cart abandoners—situations where the user is already close to converting.
One pattern we see is that heavy discount messaging in the ad itself tends to attract lower-value customers. A more effective approach is to lead with product and value in the creative, and surface the offer closer to checkout. Both can work together—just not doing the same job at the same time.
The compounding advantage of getting beauty marketing right
The brands pulling ahead in beauty tend to have a few things in common. They build trust around product efficacy, position themselves as accessibly premium, and stay close to how people actually discover and buy.
What sets the winners apart is how these pieces of the puzzle fit together to create an ongoing system. Creative is tested continuously, campaigns are optimized toward customer value, automation handles day-to-day decisions, and offers are introduced at the moments most likely to convert.
If you’re looking to build that kind of system without adding more manual workload, you can try Bïrch for free and see how it fits into your current campaigns.
FAQs
What paid marketing strategies are working best for beauty brands right now?
The most effective strategies tend to be operational rather than purely strategic. Brands that are growing are usually testing creative continuously, structuring campaigns around performance signals, and using automation to manage budgets, bids, and creative rotation.
Why do creatives matter so much in beauty advertising?
Creatives play a bigger role in beauty than in most other categories because the product experience is visual. People want to see texture, application, and results. Formats that show real usage—close-ups, demos, before-and-after—tend to perform better because they help people imagine using the product themselves. More polished or abstract creatives often lead to fewer conversions.
How often should beauty brands refresh their ad creatives?
In most cases, testing needs to be continuous rather than periodic. Creative fatigue sets in quickly, especially on platforms like TikTok and Instagram. Instead of refreshing every few months, brands that maintain performance usually run multiple creative variations simultaneously and replace underperforming ones on an ongoing basis.
How does Meta’s value optimization work?
Meta’s value optimization is designed to prioritize higher-value customers rather than just targeting more conversions. Instead of optimizing for the cheapest purchase, it uses purchase-value data (via the Meta Pixel and Conversions API) to identify users more likely to generate higher revenue over time. The system then shifts delivery toward those users.
Should beauty ads lead with discounts?
In most cases, leading with discounts in the ad itself tends to attract lower-value customers. A more effective approach is to focus the creative on the product and its benefits, and introduce promotions closer to the point of purchase. This way, discounts act as a conversion trigger rather than the main reason someone clicks.