January 7, 2026

AI won’t save your strategy: why you need to be “AI ready” before you’re “AI first”

Before you try to build complex AI agents, ensure you’ve got these leadership fundamentals covered. Here’s a strategic framework for integrating AI without losing your human edge.

by Scott Colenutt

I’m going to start with a hard truth I recently shared on LinkedIn:

If you try to build AI agents or workflows to compensate for a lack of strategic leadership skills, you’re on a fast track to tears.

We are currently living through a gold rush of efficiency. Every day, a new tool promises to automate monotony and scale our output. But in my work as an AI transformation consultant, and from my many years in marketing, I’ve noticed a dangerous pattern.

Marketers and founders are trying to use AI and automation to patch holes in their fundamental business and marketing strategies. They are attempting to build complex technical solutions to circumvent dealing with human leadership problems.

If you are looking for a list of "10 Prompts to Write Viral Ads," this isn’t that article. But if you want to ensure your marketing team survives the transition to the AI era without losing its soul (or its budget), we need to talk about strategy first.

Here is my philosophy on how to approach AI, not as a magic wand, but as a strategic amplifier.

Key takeaways

  • AI scales processes, it doesn’t fix them. Attempting to use automation to bypass fundamental leadership gaps or messy data strategies will only accelerate failure.
  • Prioritize being "AI ready" over "AI first." Success relies on mastering the "boring" fundamentals like data hygiene and allocating time for learning. Without clean inputs, your AI will produce hallucinations, not insights.
  • Treat AI as a challenger, not an “intern”. Instead of using AI as an intern to confirm your existing biases, flip the script and use it as a strategic mentor to critique your logic and uncover your blind spots.

The “fast track to tears”

It is tempting to view AI as a workaround for the things we don't want to deal with.

I recently worked with the CEO of a large gaming equipment manufacturer. He is enthusiastic about AI and its potential to improve business efficiency, but he also happens to be fairly conflict-averse. His 100+ person company has fallen behind on basic digital adoption, and it lacks governance around business processes and data hygiene.

Instead of addressing these gaps, for which senior company leaders are accountable, he’s been attempting to micromanage his way out of them using AI.

On one recent occasion, he asked me to build a complex agent that would crawl YouTube daily, identify reviews of his products, transcribe them, assess sentiment, and cross-reference the results against competitors (using a similar process). When I challenged him on why he wanted this, he eventually told me that his goal was to determine whether his PR team were doing a good job.
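To give a sense of the scale of what he was asking for, a skeleton of that pipeline might look something like the sketch below. Every function name and the canned data here are hypothetical placeholders of my own; a real build would call the YouTube Data API, a transcription service, and a proper sentiment model, and each of those pieces needs maintenance, validation and budget.

```python
# Hypothetical skeleton of the requested review-monitoring agent.
# All functions are placeholders standing in for real services.

def fetch_review_transcripts(product: str) -> list[str]:
    # Placeholder: would search YouTube for review videos of `product`
    # and transcribe them via a speech-to-text service.
    return ["Great build quality, but the software is frustrating."]

def score_sentiment(transcript: str) -> int:
    # Placeholder keyword scorer; a real system would use an ML model.
    positive = {"great", "excellent", "love"}
    negative = {"frustrating", "poor", "broken"}
    words = transcript.lower().replace(",", "").replace(".", "").split()
    return sum((w in positive) - (w in negative) for w in words)

def monitor(product: str) -> float:
    # Average sentiment across all retrieved review transcripts.
    transcripts = fetch_review_transcripts(product)
    return sum(score_sentiment(t) for t in transcripts) / len(transcripts)

print(monitor("gaming keyboard"))
```

Even this toy version exposes the real question: a sentiment number is meaningless until someone has defined what score counts as the PR team "doing a good job".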

He was trying to engineer an automated way to measure success because he hadn’t done the hard human work of sitting down with his team to define what "success" actually looks like.

The lesson: AI scales bad processes just as efficiently as it scales good ones. If your data is messy and your team alignment is broken, AI will just give you inaccurate insights faster than ever before. You cannot automate accountability.

I’ll now forever describe this as ROBOT TEARS. Nobody wants to see robot tears.

Focus on being “AI ready” before you are “AI first”

There is a lot of pressure to be "AI first." I prefer to ask clients if they are "AI ready."

Being AI ready isn't about having the most expensive technology solutions. It isn’t about how you communicate your use of (or thoughts on) AI externally. It is about whether you successfully navigated the last digital transformation era.

The 2010s were the era of cloud adoption, asynchronous work, and data hygiene. If your company still struggles with these, if your data is fragmented and your team can’t work across cloud-based asynchronous systems, you aren’t ready for AI.

To get ready, you need to focus on two boring but critical aspects of your business:

  • Data hygiene: AI is a "garbage in, garbage out" machine. If you haven't mastered the data skills required to keep your inputs clean, your AI outputs will be hallucinations, not insights.
  • The time commitment: You cannot expect your team to learn AI on top of a 100% utilization rate. You must agree on a policy (and philosophy) for self-development, and ensure your team understands it. Are you planning for the necessary time required to learn new AI skills and build AI solutions and initiatives? Do you already have a robust system in place for project management so you can plan realistically?

If you don’t make space for it, it won’t happen.

The economics of AI adoption: you have to spend to save

There is a misconception, particularly among mid-sized companies (100-500 people), that using AI will immediately slash operational costs.

The reality is more nuanced, and it depends entirely on your size:

  • Small businesses have the agility to learn and adapt to this era of AI. They can adopt systems fairly quickly because they can pivot fast, sometimes overnight.
  • Large corporations have the capital. They can make big bets on AI and absorb the massive costs of restructuring and innovation projects while running their business as usual.

Mid-sized companies are caught in a trap.

They want the benefits of AI and automation, but they battle the challenges of increased training costs and growing bureaucracy. They don't have the infinite budget of a corporation or the agility of a startup.

The hardest reality for any leader in a mid-sized company to accept is that you have to invest money before you save it.

You will face increased training costs. You will face a dip in productivity while your team learns to adapt to and trust new processes, tools and an AI mindset. You will need time not only to build AI processes and tools, but to validate them. If you adopt AI expecting day-one savings, you will likely disappoint yourself and seed AI scepticism among your peers.

The validation trap: are you seeking truth or bias?

One of the simultaneously fascinating and terrifying observations I’ve made recently is that people are using AI assistants to validate their biases rather than challenge them.

In the past, when we had disagreements in life and work, we had to rely on law, policy, or ideally, conversation to find clarity. Now, people run to AI seeking definitive "proof" that they are right.

This is dangerous for leadership for two reasons:

  1. It kills emotional intelligence: The immediacy of an AI response encourages us to skip the human work of understanding someone else's perspective. We stop listening to understand triggers and start looking for data points to win arguments. This moves us toward a society where we see colleagues as transactional figures rather than emotionally intelligent equals.
  2. Prompt bias is real: AI responses are influenced by the information we feed them. If you are using AI to try and win an argument, you are likely asking it leading questions. You are essentially engineering a confirmation bias machine.

If you aren't familiar with confirmation bias, learning about it should be a foundational skill before you start deploying AI in your decision-making.

Flip the script: AI as the challenger, not the intern

So, how do you avoid the validation trap? You have to flip the script.

I see a lot of recommendations to use AI as an “intern” or an “assistant”. The real strategic value comes when you treat AI as a mentor or a critic.

When I start projects with new clients, I set up Custom GPTs or projects in the AI tools I use. In the instructions for these workspaces, I tell the AI exactly who I am: my background, my strengths and, crucially, my blind spots.

I include specific instructions: "Challenge my thinking. Highlight my biases. Tell me what I’m missing."

I ask it to critique my logic. This requires a mindset shift. You have to be secure enough to let a machine tell you that your argument is weak, that you’re factually incorrect, or that your ideas need refinement.

This turns AI from a tool of convenience into a tool to aid critical thinking and shine a light on your bias.
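As a minimal sketch of how this looks in practice, challenger-style instructions can be wired into the standard system/user message shape that most chat-completion APIs accept. The wording and function names below are my own illustration, not a required format; adapt the background and blind spots to yourself.

```python
# A minimal sketch of "challenger, not intern" instructions.
# The wording is illustrative; personalise it to your own role,
# strengths and blind spots.
CHALLENGER_INSTRUCTIONS = (
    "You are a strategic mentor, not an assistant. "
    "My background: marketing leadership. My blind spot: financial modelling. "
    "Challenge my thinking. Highlight my biases. Tell me what I'm missing. "
    "Do not agree with me by default."
)

def build_messages(question: str) -> list[dict]:
    # Standard chat-message shape used by most chat-completion APIs.
    return [
        {"role": "system", "content": CHALLENGER_INSTRUCTIONS},
        {"role": "user", "content": question},
    ]

messages = build_messages("Should we double our ad spend next quarter?")
print(messages[0]["content"])
```

The point is that the challenge lives in the system instructions, so every question you ask gets pushback by default rather than validation.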

The ecosystem effect

Finally, we need to be strategic about the tools we choose.

There is a lot of debate about which platform or model is "best" and it’s easy to get distracted by shiny object syndrome. But with respect to efficiency and AI adoption, you should be considering the ecosystem you’re invested in.

I see teams fragmented across tools: marketing is on ChatGPT, devs are on Claude, but the whole company runs on Microsoft 365. They often don't realize that Microsoft Copilot is sitting right there, integrated with their existing data and offering security and compliance that a rogue ChatGPT account can’t match. These oversights only happen when you lose yourself in the noise surrounding AI. Your decisions become heavily influenced by external news and trends rather than being anchored in your philosophies and business processes.

Data gravity is real.

  • If you are a Google Workspace shop (Chromebooks, Pixels, Drive), Gemini is going to offer you the most friction-free workflow.
  • If you are deep in the Apple ecosystem, Apple Intelligence will likely become your default.

For startups and performance teams, this matters. You don't want to fragment your processes across multiple AI platforms. You want to build a RAG-ready (Retrieval-Augmented Generation) system where your AI can be grounded in your internal data.

That only works if you have your eyes open and you’re able to look at the bigger picture of your business and how it operates.
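To make "grounded in your internal data" concrete, here is a toy sketch of the retrieval step in a RAG setup. Real systems use learned embeddings and a vector store; the word-overlap scoring here is purely illustrative, and the sample documents are invented.

```python
# Toy sketch of RAG's retrieval step: find the internal document most
# relevant to a query, then build a prompt grounded in that document.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Bag-of-words stand-in for a real embedding model.
    return Counter(re.findall(r"[a-z0-9%]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def grounded_prompt(query: str, docs: list[str]) -> str:
    # Ground the model in retrieved internal data, not its training set.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "The Q3 marketing budget was increased by 12%.",
]
print(grounded_prompt("What is our refund policy?", docs))
```

Whichever ecosystem you pick, the pattern is the same: retrieval pulls your own data into the prompt, which is why consolidating that data in one place matters so much.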

Remember, to navigate AI, know your destination

AI is liberating. It allows us to bring ideas to life that would have taken weeks to execute in the past, but don’t use it to try and circumvent the important strategic work that anchors your company. Don't use it to reinforce your biases or win arguments. Use it to challenge you, to speed you up, and to help you scale, but only after you’ve done the hard work of defining where you actually want to go.

Scott Colenutt
Scott is an experienced marketer, copywriter, podcaster and consultant with expertise spanning everything from performance marketing and UX to data analytics and AI strategy. He loves exploring the intricacies of marketing and turning complex, technical subjects into user friendly content.
