
Achim’s Razor

Positioning, Messaging, and Branding for B2B tech companies. Keep it simple. Keep it real.

Strategy

AI Adoption Barriers: How Leaders Can Drive Success

AI adoption fails without leadership. Learn how clear policies, pilots, and visible sponsorship remove barriers and accelerate organizational adoption.
August 19, 2025 | 5 min read

By Gerard Pietrykiewicz and Achim Klor

Achim is a fractional CMO who helps B2B GTM teams with brand-building and AI adoption. Gerard is a seasoned project manager and executive coach helping teams deliver software that actually works.

AI adoption gets stalled by leadership gaps: confusing policies, employee fear, and leaders who say “go” but don’t show how. If this feels a bit like Groundhog Day, you’re not alone. We’ve seen similar adoption challenges with desktop publishing, the Internet and World Wide Web, and blockchain. The technology is ready, but organizations stumble on the people side. This article looks at what leaders can do right now to remove those barriers and make adoption a little less stressful.

Takeaways

  • Simplify policies. If your AI rules are complex, no one will use them.
  • Start small pilots. Let early adopters show tangible wins for their peers.
  • Lead visibly. If leaders don’t use the tools, teams won’t either.
  • Tell the story outside. Package internal AI wins into external proof points.

Jim Collins, in How the Mighty Fall, describes how once-great companies decline: hubris born of success, denial of risk, and grasping for silver bullets instead of facing reality. AI adoption sits at a similar crossroads. Companies that wait and assume their past success buys them time risk sliding down the same path.

Monday Morning Standup Plan (30 minutes)

Reset (5 min): 

  • Why we’re doing this: safe guardrails, speed to act.

Decisions today (10 min):

  • Name one exec sponsor, one legal contact, and one AI champion per team.
  • Pick two pilot workflows per team for this month.

Guardrails (10 min):

  • Draft a v0.1 “Allowed / Restricted / Ask Legal” one-pager this week (use NIST AI RMF as a frame).

Metrics (5 min):

  • By Friday: pilots chosen, owners named, draft policy ready.
  • Next week: % of teams with a named exec sponsor and AI champion.

Barrier 1: Complex Policies

Employees avoid tools they don’t understand. If your AI usage rules read like a legal brief, adoption will stall. It’s hard for any company to write a policy loose enough to allow easy adoption and experimentation, yet restrictive enough to prevent critical data leakage.

Large corporations often have the budget, legal teams, and even their own data centers to set up AI policies and infrastructure. That gives them speed at scale. 

Smaller companies are technically more nimble, but without sufficient resources, they often default to over-restriction, sometimes banning AI entirely out of fear of risk. That means lost productivity and missed learning opportunities.

Opportunity: Make policies visual, clear, and quick to navigate. The goal isn’t control. It’s confidence. Guidance like the NIST AI Risk Management Framework shows how clarity enables trustworthy, scalable use (NIST). 

Barrier 2: Fear 

Employees fear what they don’t understand. And one-size-fits-all training doesn’t help.

When people see AI applied to their specific role (automating a report, simplifying customer emails), that fear turns into enthusiasm.

Pilot programs work. Early adopters can demonstrate real use cases, and their wins spread fast inside the org.

Opportunity: Treat those early adopters as internal champions. Prosci’s research shows “change agents” accelerate adoption (Prosci). Then turn those wins into short internal stories and customer-facing examples. That’s how adoption builds brand credibility, not just productivity.

Barrier 3: Leadership Hesitation

When executives hesitate, teams hesitate. The reverse is also true: when leaders use AI themselves, adoption accelerates.

Research on organizational change is clear: active, visible sponsorship is a top success factor (Prosci). It signals that experimentation is safe and expected.

And there’s an external benefit too. Leaders who show their own AI use give customers and partners confidence. It’s a market signal.

Opportunity: Leaders can’t delegate this. They need to be participants, not just sponsors.

Final Thoughts

To make AI adoption successful, leaders must create an environment where experimentation feels safe and useful.

The parallels to earlier waves of tech adoption are uncanny: the companies that figured this out first didn’t just get more efficient. They defined the category because they were more effective at adopting the tech.

The risk of waiting isn’t just lost productivity. It’s losing the perception battle before you even start. Credible stories and visible leadership shape buying decisions and long-term trust (Edelman–LinkedIn).

Leaders: simplify, experiment, participate, and share your wins. Your teams and your customers will thank you.


If you like this co-authored content, here are some more ways we can help:

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!

Insight

What the 80/20 Rule Taught Me About B2B GTM Strategy

I tried the 80/20 prompt on B2B GTM and uncovered real insights. Here’s what top teams do differently, and why most still struggle to prove what works.
August 5, 2025 | 5 min read

I read an interesting article on TechRadar by John-Anthony Disotto. He used the 80/20 rule to create a prompt that “makes learning skills incredibly easy.” It’s not mathematically deep or overly technical. So for fun, I gave it a go using B2B GTM. It worked, but not the way I expected.

Takeaways

  • The real GTM split is 60/25/15, not 80/20 or 60/40
  • Most teams know what to do, but don’t do it consistently
  • AI, brand, and content only work when integrated
  • Top teams measure what matters, not what’s easy
  • GTM success is about behavior, not tools

A Simple Prompt

The prompt is dead simple:

“Give me the 80/20 of X.”

Want to learn how to kickflip, cook sugo, or serve a tennis ball? Ask your favorite GenAI tool for the 80/20 and it will spit back a decent analysis based on Pareto’s famous principle.

That said, as with all “absolutes”, take it with a grain of salt and pull out the nuggets you need. 

But it got me thinking: 

Could this actually work for something as messy and misunderstood as B2B tech GTM? 

I gave it a shot.

ChatGPT’s First Take

I first asked ChatGPT this prompt:

“What’s the 80/20 of B2B tech GTM in 2025?”

The answer came back fast using typical AI phrasing. 

80% of teams are stuck in the past:

  • Chasing MQLs, form fills, and attribution hacks
  • Reporting vanity metrics that don’t tie to revenue
  • Confusing activity with progress
  • Marketing is still seen as a support function, not a multiplier
  • Sales, Marketing, and Product all run separate playbooks
  • The CFO doesn’t trust GTM data—and for good reason

20% are evolving:

  • They’re shifting from funnels to buying signals
  • Treating brand as a reputation asset, not a logo
  • Aligning around buyer context, not lead scoring
  • Using Causal AI to explain what actually drives revenue
  • Marketing drives strategy, not just campaigns
  • GTM is one integrated motion: Brand > Demand > Revenue

The gist wasn’t wrong, but a few things were headscratchers, like “Using Causal AI” and “Marketing drives strategy.”

It sounded too binary. Too neat.

Fact-Checking the AI

So I then asked ChatGPT:

“Rigorously fact-check your 80/20 assessment for accuracy and credibility.”

The revised output was a bit more grounded.

20% of teams are winning because they:

  • Use AI tools to scale personalization and content workflows, not just auto-generate flimsy copy
  • Build brand traction via thought leadership, distinct voice, and clear measurement—even imperfect
  • Own SEO shift: optimize for AI-driven search discovery, not just keywords
  • Serve self-directed, digital-first buyers by being top-of-mind before sales ever starts
  • Stay lean, automate smartly, and avoid manual bloat

80% are plateaued because they:

  • Under-invest in brand awareness and measurement—half don't track brand at all
  • Rely mostly on outdated content models or inconsistent execution
  • Can’t prove brand impact, so they funnel more budget into performance channels only
  • Lack strategic tool adoption—AI used for surface-level drafts, not integrated into workflows

OK, better. But I still wasn’t convinced.

The Peer Review

Still skeptical, I sent the same prompt to Claude.

Claude came back with a softer read. It said it’s more like a 60/40 split. Most teams are adapting, but slowly and inconsistently.

Claude made some good points:

  • Just because companies use AI doesn’t mean they’ve integrated it.
  • Most aren’t ignoring brand, they just don’t measure it well.
  • The gap between winners and laggards is real, but not as extreme as 80/20.

I went back and forth with Claude and ChatGPT. Both made valid calls and pointed out where they aligned and where they differed. 

ChatGPT pushed back on Claude’s assessment:

“Claude’s response is solid—and mostly accurate. But Claude’s framing underplays the issues.”

After more back-and-forth, we landed somewhere in the middle.

The Real Split

60/25/15 is a more accurate split.

[Image: Side-by-side pie charts comparing three B2B GTM models: ChatGPT’s 80/20 split (80% stuck, 20% evolving), Claude’s 60/40 split (60% adapting, 40% ahead or behind), and the agreed 60/25/15 real-world split (60% in no man’s land, 25% doing the work, 15% behind).]

60% are stuck in no man’s land:

  • Using ChatGPT for email drafts but still manually scoring leads and building reports in spreadsheets
  • Publishing weekly blogs and case studies but can’t trace which content actually drives pipeline
  • Talking about “self-directed buyers” in meetings but still gating everything and cold-calling from purchased lists

25% of B2B teams are doing the work:

  • AI triggers follow-up sequences based on engagement patterns and generates persona-specific outbound variants
  • They track “How did you hear about us?” in CRM and can show which content correlates with deal velocity
  • Marketing sits in Sales deal reviews; Sales inputs on content calendar; CS shares churn signals with both

15% are way behind:

  • Gating basic industry reports, counting form fills as qualified leads while conversion rates stay flat
  • Brand discussions focus on font choices and logo placement with zero budget for thought leadership
  • Weekly reports highlight email open rates and social impressions while Sales complains about lead quality

Which one sounds like your Monday morning standup?

If you’re in the 60%, pick one:

  • AI Integration: Replace one manual weekly task with an AI workflow this month
  • Brand Reputation: Add three questions to your CRM intake form that connect brand touchpoints to pipeline
  • GTM Alignment: Have Marketing sit in on Sales deal reviews, have Sales input on content calendar

What Actually Separates the Winners

Every modern B2B GTM team should be asking:

  • Can we point to three manual tasks AI eliminated this quarter, or are we just using it to write blog posts faster?
  • Does our brand help buyers remember us when they’re ready or just impress peers on launch day?
  • Do we know what signals indicate buying intent, or are we still confusing clicks for interest?
  • If I asked Sales, Marketing, Product, and CS separately to map our buyer journey, would I get the same answer?

AI, brand, content, buyer insight—none of it works in isolation.

What separates the top 25% isn’t access. It’s consistency. They’ve operationalized what the rest are still experimenting with.

Final Thoughts

The “80/20” prompt worked, but not how I expected.

It won’t give you a perfect framework. That’s OK. It doesn’t need to be perfect.

A yardstick that validates what you already know and uncovers some new truths is more than good enough. Whether the actual number is 65.7% instead of 60% doesn’t really matter.

The point is, most GTM teams in 2025 know what to do. They’re just not doing it strategically or consistently. And they’re not proving it works.

That’s the gap.

The teams pulling ahead aren’t chasing leads or trends. They’re going back to the basics, putting insight and strategy ahead of tactics. And they’re doing it better, faster, and with accountability.

They treat brand as a signal, not decoration. AI as infrastructure, not novelty. GTM as shared responsibility, not departmental silos.

And they measure what matters, not what’s easy.

They do the hard part first.

Where does your GTM sit?

If you like this content, here are some more ways I can help:

  • Follow me on LinkedIn for bite-sized tips and freebies throughout the week.
  • Work with me. Schedule a call to see if we’re a fit. No obligation. No pressure.
  • Subscribe for ongoing insights and strategies (enter your email below).

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!

Sources

Primary Article Referenced:

  • Disotto, John-Anthony. “I tried this simple ChatGPT prompt that makes learning skills incredibly easy.” TechRadar.

Research Sources:

  • “The State of AI in B2B Marketing.” ON24.
  • “The State of B2B SaaS Brand Marketing.” Wynter.
  • “What’s working right now: B2B marketing trends and tactics in 2025.” Wynter.
  • “B2B Buying Behavior in 2025: 40 Stats and Five Hard Truths That Sales Can’t Ignore.” Corporate Visions.
  • “Why millennials continue to reshape B2B ecommerce.” Digital Commerce 360.
  • “In 2025, B2B Sales Has Changed—Have You?” 180 Operations.

Execution

Causal CMO #6: How to Run a Skunkworks GTM Project (Quietly)

CMOs use quiet pilots and causal AI to prove GTM impact and drive deal velocity before selling the story internally. Here’s how to execute one effectively.
July 29, 2025 | 5 min read

CMOs can make a big difference with Causal AI. It starts with what Mark Stouse calls a “quiet pilot.” Not a pitch. Not a deck. In Part 6 of The Causal CMO, Mark explains how GTM leaders can use causal modeling to run skunkworks projects in the background without wasting months seeking buy-in or permission. We used “deal velocity” as our project, but you can apply it to any core outcome: CAC, LTV, funnel integrity, partner yield, brand equity, even recruiting.

Takeaways

  • A quiet pilot isn’t sneaky, it’s smart
  • Causal pilots model outcomes, actions, and externalities
  • Most GTM teams are drowning in noise, not signal
  • The grief curve is real (and necessary)
  • Done right, you won’t need to sell it because others will eventually ask

Skunkworks Projects Are Not Sketchy 

As we touched on in Part 4 and Part 5, the goal of a “quiet pilot” is to protect signal integrity. It’s not about secrecy.

If you announce you’re piloting Causal AI before you’ve proven anything, internal pressure will spike. Opinions will fly. Fear will kick in. And you’ll waste all your energy managing reactions instead of learning.

“There’s nothing unethical about doing a quiet pilot. In fact, it’s probably the most ethical way to test something that matters.”

You want to reduce friction, not accountability. To observe, adjust, and verify results before you invite others in.

What You Actually Need to Model

Causal modeling isn’t magic. It’s just math applied to three distinct domains:

  1. Outcomes: Revenue, margin, cash flow, deal flow, LTV
  2. Internal levers: Marketing, Sales, Product, CS activities
  3. Externalities: Market forces, buyer psychology, macro risk

Most teams obsess over 1 and 2. But externalities drive 70–80% of performance. You’re not here to brute-force your way through them. You’re here to surf them.

“You can have the best mix in the world and still fail—if you ignore externalities.”

The Case: Deal Velocity Under Pressure

We used a B2B SaaS scenario:

  • CAC is stable. 
  • Cash flow is tight. 
  • The board’s impatient. 
  • The CMO needs to increase deal velocity, and fast.

Here’s how Mark broke it down:

  • CAC is a form of debt. If deals slip, you can’t pay it off.
  • Deal slippage usually signals buyer fear, not GTM failure.
  • Buyers need decision insurance to move forward. 
  • Without decision insurance, they delay and that delay kills cash flow.

Mark’s personal experience at Honeywell Aerospace demonstrates the effectiveness of running a quiet pilot:

“We improved deal velocity by almost 5%. That’s $11–12 billion of revenue moving faster into the company. The cash flow impact was extraordinary. The CFO became a fan.”

How to Run the Pilot

You don’t need perfection to start. You need clarity.

Step 1: Pick a business-critical outcome

Start with a question like this one:

“Out of everything we’re doing, what’s really driving deal velocity?”

Step 2: Model the 3 domains

  • Outcomes and internal levers will require your data (clean or messy, it doesn’t matter).
  • Externalities (macro, industry, buyer conditions) can be pulled from public sources like the SEC, Fed, or academic datasets.
  • Use synthetic data to simulate missing patterns. If your internal data is sparse, GenAI tools can generate approximations to bridge gaps and help simulate realistic data patterns.

“Even synthetic models become templates for real ones later.”
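Step 2 can be sketched with a toy model. Everything below is invented for illustration (the lever, the externality, the coefficients, and the numbers); the point is only the shape of the exercise: simulate the three domains, fit a simple least-squares model, and recover each domain’s contribution to the outcome.

```python
import random

random.seed(7)

# --- Synthetic monthly data (all invented, per the synthetic-data tip above) ---
months = 36
marketing = [100 + 5 * m + random.gauss(0, 10) for m in range(months)]  # internal lever
macro = [50 + (20 if 12 <= m < 24 else 0) + random.gauss(0, 5)          # externality: a mid-period tailwind
         for m in range(months)]
# "True" process generating the outcome: externality dominates, lever helps, plus noise
velocity = [0.02 * mk + 0.10 * mc + random.gauss(0, 1)
            for mk, mc in zip(marketing, macro)]

def ols(X, y):
    """Ordinary least squares via the normal equations (X includes an intercept column)."""
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]  # X'X
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]              # X'y
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

X = [[1.0, mk, mc] for mk, mc in zip(marketing, macro)]
intercept, b_mkt, b_macro = ols(X, velocity)
print(f"lever coefficient:       {b_mkt:.3f}")
print(f"externality coefficient: {b_macro:.3f}")
```

If the recovered externality coefficient dwarfs the lever’s, that is the earlier point about externalities in action: map the headwinds and tailwinds first, then judge your own mix against them.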

Step 3: Let the system run

Watch the forecast. Compare it to actuals. Adjust your mix. Then track again.

You’ll feel the change before you explain it. So will others.

“People will pass you in the hallway and say, ‘Something feels different.’ That’s your moment.”

What to Expect Emotionally

You’re going to get humbled. So be prepared.

“You’ll realize most of what you’ve been tracking is noise. You’ll grieve. You’ll deny. You’ll get angry. Then you’ll change.”

It’s normal to go through disbelief, regret, frustration, and even grief as you uncover how much of your GTM effort was based on correlation or gut feel.

But that’s the cost of clarity.

And the reward?

A system that actually tells you what’s working and how to make it better.

When to Go Public

Not too soon. Let the model mature.

Here’s a rough timeline:

  • Month 0: Quiet start. Build models. Simulate where needed.
  • Month 3: Adjust the GTM mix based on signals.
  • Month 6: Teams feel the shift, even if they don’t know why.
  • Month 9–12: Go public. Brief the CFO and CEO. Build the case.

That’s when you gain credibility and the CEO and CFO lean in. 

That’s when the board invites you to present. 

That’s when peers start asking:

“If I gave you more budget, what could you do with it?”

Your story isn’t based on aspiration. It’s built on change.

This Isn’t Just for Marketing

Mark was very clear about the cross-functional application of Causal AI.

You can apply causal models across the entire business:

  • HR: Recruiting, retention, employer brand
  • IR: Investor onboarding, perception shift
  • Product: Roadmap clarity, user friction reduction

If your team owns outcomes, causal modeling can help you prove what drives them, even outside GTM.

Final Thought

If you’re a CMO, CRO, or GTM lead hoping to “earn your seat at the table,” this is how you do it. Not with big claims or flashy decks. With evidence.

“This isn’t a threat. It’s a lifeboat. Everything else is the risk.”

You don’t need better math. You need better questions and the courage to ask them before you sell the answer.

Give it a go

  • Start with one question.
  • Model what matters.
  • Don’t sell the vision. Build it quietly.
  • Then let the results speak.

If you want to see what this looks like in practice, Mark has demo videos and 1:1 sessions available. Reach out to him directly on LinkedIn or email him at mark.stouse@proofanalytics.ai 

Missed the LinkedIn Live session? Rewatch Part 6.

If you like this content, here are some more ways I can help:

  • Follow me on LinkedIn for bite-sized tips and freebies throughout the week.
  • Work with me. Schedule a call to see if we’re a fit. No obligation. No pressure.
  • Subscribe for ongoing insights and strategies (enter your email below).

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!

Strategy

Causal CMO #5: How to Prove GTM Value in the Boardroom

Causal AI is redefining how boards evaluate GTM value. Learn how CMOs can prove impact, defend spend, and lead with data under rising fiduciary pressure.
July 22, 2025 | 5 min read

Marketing teams that obsess over MTA, MQLs, and CTR/CPL are under more pressure than ever. BS detectors are on high alert in the boardroom. In Part 5 of The Causal CMO, Mark Stouse outlines what the C-suite already expects: GTM leaders who are fluent in proof, spend, and business acumen.

Takeaways

  • Fiduciary duty now applies to all officers, not just CEOs
  • Boards want to see real business impact, not activity metrics
  • GTM teams need to learn finance, speak cross-functionally, and test quietly
  • CAC and LTV are often fiction
  • The real blocker to change is courage

Proof Is Now Required 

As Mark and I already discussed, the rules changed in 2023.

The Delaware Chancery Court’s 2023 ruling expanded fiduciary duty from CEOs and boards to all corporate officers, including CMOs, CROs, CDAOs, and other GTM leaders.

That means we’re now individually accountable for risk oversight and due diligence. Not just our intent. Our judgment too.

“This is changing the definition of the way business decisions are evaluated… What did you do to test that? What did you do to identify risk and remediate the risk?”

Boardroom expectations have shifted. They want marketing accountability, not activity metrics. If your GTM budget is still defended with correlation math, you’re going to lose the room.

Causal AI gives you something different: Proof. 

It tests what causes performance and why, how much it contributed, and what to do next.

It operates a lot like a GPS, recalculating your position in real time, suggesting alternate routes, and showing what could happen under different conditions.

What Boards Actually Want

Boards don’t want us to show them more dashboards. Think of it like the bridge of a ship: the Captain and First Officer don’t stare at every gauge; they watch the course.

They want decision clarity:

  • What happened?
  • Why did it happen?
  • What should we do next?
  • What are our options?

Causal AI models cause and effect based on live conditions, not lagging indicators. It runs continuously. It adjusts to change. It simulates outcomes with real or synthetic data.

“GPS says, ‘I know where you are.’ You say where you want to go. It gives you a route. Then, if something changes—an accident, traffic—it reroutes. Causal AI works the same way.”

Mark shared a great story from one of his clients. During COVID, the finance team at Johnson Controls planned to cut marketing by 40%. But causal modeling showed how that would destroy revenue 1, 2, and 3 years out.

“The negative effects… were terrible. Awful. Like, profoundly wretched.”

Finance still made cuts, but only by 15%, not 40%. Because the data made the risk real.

Most CAC Models Are Fiction

A lot of B2B companies still treat CAC and LTV as truth. Mark didn’t mince words:

“CAC is a pro rata of some larger number. That pro rata is not real.”

And LTV?

“In the vast majority of cases, it’s completely made up.”

The bigger issue: CAC isn’t just a cost. It’s a form of debt. If you spend $250K chasing an RFP and don’t keep the client long enough to pay that back, you’re in the red. Period.
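The arithmetic behind CAC-as-debt is easy to sketch. The numbers below are hypothetical, but the shape is the point: churn before the payback month and the acquisition spend is never recovered.

```python
# Hypothetical deal, per the example above: $250K spent to win an account
# (the "debt"), paid back out of the monthly gross profit it contributes.
cac = 250_000                  # acquisition cost (hypothetical)
monthly_gross_profit = 12_500  # margin the account contributes each month (hypothetical)

payback_months = cac / monthly_gross_profit  # months before the deal breaks even

def net_position(months_retained):
    """Cumulative gross profit minus CAC after a given retention period."""
    return months_retained * monthly_gross_profit - cac

print(f"Payback period: {payback_months:.0f} months")
print(f"Churn at 12 months: {net_position(12):+,.0f}")  # -100,000: still in the red
print(f"Keep for 36 months: {net_position(36):+,.0f}")  # +200,000: debt repaid, then profit
```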

This mindset shift matters most for CMOs trying to earn budget.

“You’ve got to understand unit economics of improvement… how much money it takes to drive real causality in the market. That’s true CAC. Not the BS a lot of teams have been selling.”

What does that look like?

The chart below is a simulated example of a typical flat MTA pro rata model compared to a variable causal model.

[Image: Bar chart comparing fake pro-rata CAC vs. true causal CAC across three fictional B2B customers.]

To be fair, it’s not always deception. It’s often desperation. Most teams are never given the tools to calculate real causality.

Marketing Must Step Up

CMOs say they want a seat at the table. But most still operate like support teams.

If you want credibility in the boardroom, act like it’s your business and your money.

“I became very good at interpreting marketing into the language of whoever I was talking to, like HR, Legal, Finance, the CEO. No marketing jargon. Just business terms.”

Boards fund systems that scale. That means reframing GTM as a system, not a series of tactics. It’s a mindset that requires critical thinking and letting go of outdated playbooks. 

“This is the difference between being seen as a business contributor and being a craft shop.”

Start learning finance. Take courses. Do sales training. Train your team. Speak the language of the business. That’s how you earn respect and influence decisions.

Build Proof. Then Tell the Story.

Part 4 covered how to start a skunkworks project:

  • small budget
  • small team
  • no fanfare

In Part 5, Mark explains why the silence matters.

“You want to assemble your story of change. You won’t have that if you declare you’re doing this up front.”

The goal here is to earn sequential trust over time, not to be secretive. When people feel the improvement first, they’re far more likely to believe the explanation later.

“If they already believe it, they’ll accept the facts. If they hear the facts first, they’ll resist.”

So don’t lead with a deck. Don’t sell a vision. Build causal models behind the scenes. Learn what’s working. Adjust what’s not. Let the results speak. The key is to keep learning. 

Then, when the timing’s right, you can confidently walk into the boardroom with a better story and the data to back it up.

The Real Bottleneck Is Courage

The hardest part of this shift isn’t modeling. It’s having the guts to do the right thing instead of always doing things right.

“The biggest issue we all face is courage. The courage to act.”

Too many marketing leaders stay stuck because it’s safer. Even if nothing changes.

“If you’ve tried the old approach your whole career and it hasn’t worked… then you’ve got to change.”

And according to Mark, to be that change, you have to stop waiting for permission, stop hiding behind bad math, and start proving your worth quietly, confidently, and causally.

Final Thoughts

Navigating a business is kind of like flying a plane. Causal AI gives GTM teams the instruments they need to fly safely through volatility. It does more than “keep you in the air.” It helps you choose better paths when visibility disappears.

Implementing Causal AI into GTM requires a mindset shift. Marketing leaders will need to let go of legacy systems like MTA because the change is coming fast.

Here’s where to begin:

  1. Learn the language of finance
  2. Translate GTM into business outcomes
  3. Start a skunkworks project and prove it works

If you want a seat at the table, you need to earn it and prove it. 

Missed the LinkedIn Live session? Rewatch Part 5.

If you like this content, here are some more ways I can help:

  • Follow me on LinkedIn for bite-sized tips and freebies throughout the week.
  • Work with me. Schedule a call to see if we’re a fit. No obligation. No pressure.
  • Subscribe for ongoing insights and strategies (enter your email below).

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!

Execution

Causal CMO #4: How to Operationalize Causal AI in GTM

Learn how to operationalize Causal AI across your GTM strategy. Shift from attribution to effectiveness, and build decision clarity before 2028.
July 15, 2025 | 5 min read

A lot of GTM teams are overloaded. New tech. New tools. New hype. All promising transformation, but rarely delivering clarity. In this recap of Part 4 of The Causal CMO, Mark Stouse explains why operationalizing Causal AI isn’t just about buying another tool. It first requires a mindset shift. A hard reset on how GTM teams define risk, read signals, and move forward.

Takeaways

  • Causal AI won’t become self-serve until teams are ready to use it
  • Dashboards tell you what happened, Causal AI tells you why it happened
  • Externalities like time lag and volatility are rarely accounted for in GTM
  • You can’t get to efficiency until you prove effectiveness
  • Counterfactuals are the gateway to clarity without the data fight

Mindset Shift Before Metrics

Before you operationalize anything, you need to think differently. 

The first step isn’t modeling or tooling. It’s dropping the need to be right. 

Causal AI is only effective if you’re willing to look at what’s actually happening, not what you hoped would happen. 

So do you want to be right? Or do you want to be effective?

This is a key distinction, especially for Go-To-Market teams. Instead of constantly trying to prove they’re right, they should ask what they need to do next.

That shift is already starting to happen.

Proof Analytics, for example, no longer looks like a traditional SaaS product. Most of Proof’s clients now rely on software-enabled services because self-serve just doesn’t work when teams are overwhelmed. 

It’s not a tech problem. It’s a saturation problem.

“Teams today are saturated like ground that’s been rained on for too long. They can’t absorb anything new. The water just runs off.”

Too many GTM teams are stuck on this treadmill. They’re still chasing efficiency because it’s easier to cut cost than drive growth. But efficiency without proven effectiveness is meaningless.

And that’s where Causal AI comes in.

The Problem With Attribution Isn’t Just the Model

As we discussed in Part 1, Multitouch Attribution (MTA) assumes linearity and doesn’t account for time lag. It only focuses on the dots, not the lines in between.

Dashboards typically treat data like a mirror. But as Mark pointed out, data only reflects the past, and past is not prologue. Like crude oil, it’s useless until refined.

“There is no intrinsic value in data. Only in what it gets refined into.”

Mark shared a story from a meeting where a CIO showed how easy it was to manipulate attribution weights. Then he had various leaders at the table do the same. Same data, four outcomes. Each one reflected a different bias.

Guess who had the least credibility?

Yup. Marketing. The CIO said to them:

“Of everyone in the room, you arguably have the most bias. Your outcome is dead last in terms of credibility.”

Causal AI mitigates gaming the system. It tests patterns for causal relevance and recalibrates in real time. If the forecast starts to degrade, it tells you what to do next.

It doesn’t care if the news is good or bad. It just tells the (inconvenient) truth. 

Market Conditions Come First

Before the model even begins, Mark’s team maps the external environment. They model the headwinds and tailwinds first. Only then do they plug in what the company is doing.

This is where most teams fall short.

Too often GTM is treated like an isolated system. But it’s not. It’s subject to risk, time lag, and external forces that marketers rarely model. And it shows.

According to Mark, the effectiveness of B2B GTM spend has dropped from 75% to just above 50% since 2018. That’s not a tactics problem. It’s a market awareness problem.

“The average B2B team is frozen in their perspective. They’re not thinking about the externalities unless it gets so bad they can’t ignore it.”

The result? Poor decisions, reactive guidance, missed opportunities. And an inability to plan for value because no one knows where in the calendar to look for it.

Counterfactuals Make It Real

One of the most powerful features of Causal AI is the ability to model counterfactuals. What would happen if we made a different decision?

Until recently, this required expensive synthetic data. Now, GenAI tools make it accessible. With a detailed enough scenario prompt, teams can simulate outcomes, measure impact, and prioritize programs before spending a dime.

It’s like an A/B test for strategy. No need to touch real data. No risk of tripping legal wires. Just clarity.

“Most stealth efforts start here. The counterfactual model shows what’s probably happening. Then you go get the real data to prove it.”

It’s also the easiest way to build internal buy-in. Teams can explore alternatives without asking other departments for access or permission.
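To make the idea concrete, here is a toy Python sketch of a counterfactual comparison. This is not Mark's model or any vendor's tool; every number (pipeline size, win rate, deal size) is invented purely for illustration. The point is the shape of the exercise: simulate the baseline scenario, simulate the alternative decision, and compare, before touching real data or real budget.

```python
import random

def simulate_quarter(pipeline, win_rate, avg_deal, n_runs=10_000, seed=42):
    """Monte Carlo estimate of quarterly revenue under one scenario."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_runs):
        # Each opportunity in the pipeline closes with probability win_rate.
        deals_won = sum(1 for _ in range(pipeline) if rng.random() < win_rate)
        totals.append(deals_won * avg_deal)
    return sum(totals) / n_runs

# Baseline: what we're doing today (all parameters are made up).
baseline = simulate_quarter(pipeline=200, win_rate=0.18, avg_deal=40_000)

# Counterfactual: "what if we traded some volume for a higher win rate?"
counterfactual = simulate_quarter(pipeline=170, win_rate=0.25, avg_deal=45_000)

print(f"Baseline:       ${baseline:,.0f}")
print(f"Counterfactual: ${counterfactual:,.0f}")
print(f"Estimated lift: {counterfactual / baseline - 1:.0%}")
```

Even a sketch this crude does what the article describes: it lets a team argue about assumptions and priorities on screen, then go collect the real data to confirm or kill the idea.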

Start With Skunk Works

You don’t need a top-down mandate to operationalize Causal AI. In fact, Mark recommends the opposite.

Start small. Keep it quiet. Don’t even call it a transformation.

“Carve off a small budget and a couple of people. Model and learn for 9 to 12 months. Don’t say anything. Just execute.”

As the team starts learning and adjusting, people will notice. They’ll feel the shift before they understand it. Then, when the time is right, you explain how it happened.

“If people feel the improvement first, they’ll accept the facts. If they hear the facts first, they’ll resist.”

There’s no manipulation in this. It’s psychology. Let results speak before you tell the story. It’s like asking for forgiveness instead of permission. 

Causal AI Is Not Departmental

Causal AI is not a marketing or revenue tool. It’s a business system. 

Mark says the best clients are already thinking in terms of enterprise models. Finance teams often lead the adoption. They use causal modeling transparently across departments to see what’s working for the business as a whole.

And leadgen is not the goal. The board doesn’t care about leads. They care about cash flow, growth, and risk. In other words, bigger deals, faster deals, and more of them. Causal AI connects those dots and the lines in between.

“You can’t get to efficiency unless you know if it’s effective.”

Final Thoughts

Effectiveness is not a tactic. It’s a lens.

Causal AI doesn’t ask if you were right. It helps you get better at seeing possibilities and becoming more effective. 

We’ll explore that further in Part 5 as we dig into investment decisions and boardroom conversations.

Until then, ask yourself:

Are you willing to be wrong long enough to get it right?

Missed the LinkedIn Live session? Rewatch Part 4.

If you like this content, here are some more ways I can help:

  • Follow me on LinkedIn for bite-sized tips and freebies throughout the week.
  • Work with me. Schedule a call to see if we’re a fit. No obligation. No pressure.
  • Subscribe for ongoing insights and strategies (enter your email below).

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!

Insight

Why AI Adoption Still Frustrates Most Managers (And What Helps)

Tried automating with AI? We did too. Here’s what broke, what worked, and why most tools still miss the mark for real-world business teams.
July 8, 2025
|
5 min read

By Gerard Pietrykiewicz and Achim Klor

Achim is a fractional CMO helping B2B GTM teams with AI adoption. Gerard is a seasoned project manager and executive coach helping teams deliver software that actually works.

Most AI automation tools look easy in demo videos. But when we tried building a simple system to summarize calls and send reports, reality hit hard: clunky UIs, unexpected limitations, and lots of wasted time. Still, when used right (especially for early prototyping), AI can be a breakthrough in team alignment. This article shares what worked, what didn’t, and where we go from here.

Takeaways

  • AI agents today are still scripts, not true automation. The promise of autonomous workflows is overstated.
  • Tools like make.com and ChatGPT require technical fluency. Setup is far more complex than the marketing suggests.
  • Security and data access are non-trivial blockers. Granting full email or Drive access raises concerns many don’t consider.
  • AI is great for prototyping and alignment. “Vibe-coding” can quickly resolve communication gaps.
  • The next frontier is usability. AI won’t go mainstream until the interfaces catch up to user needs.

AI Hype vs. Reality

So, we recently jumped into the whole AI automation thing. The goal was simple: use make.com to build something that would summarize weekly Google Meet calls and email a neat report. 

Easy, right? 

All those flashy Instagram and LinkedIn videos had led us to believe it would be. We’d worked with Zapier before, so we assumed this would be a similarly straightforward experience.

Not so fast.

First hurdle: Google Meet doesn’t just hand over transcripts with an API. Nope. They’re stuck in Google Docs in Drive. You have to give make.com access to a specific folder. Then came a “simple” filter for recent documents. Simple, unless you don’t know the exact code. The “intuitive” interface felt more like a maze when what was sorely needed was real control.

Then, to send the summary via Gmail, you have to link your entire account to make.com. That can make anyone uneasy, to say the least. Finally, setting up ChatGPT with API keys and managing credits wasn’t hard on its own, but put it all together, and it became a bigger headache than expected.

The make.com AI assistant, supposedly there to help, burned through free credits like kindling while trying to resolve a basic filter issue. The frustration wasn’t with the idea; it was with how hard it was to “make” it work. After an hour wrestling with the interface, it was clear that our time was better spent elsewhere.
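For context, the "recent documents" filter we fought with boils down to a few lines of date logic. The sketch below is plain Python, not make.com's actual filter syntax, and the file metadata is mocked up (the `modifiedTime` field name follows the Google Drive API convention; the values are invented). It shows how simple the underlying idea is, which is exactly why the no-code maze was so frustrating.

```python
from datetime import datetime, timedelta, timezone

# Mocked metadata for files in the transcripts folder. Field names follow
# the Drive API's "modifiedTime" convention; values are invented.
files = [
    {"name": "Weekly sync - 2025-07-01", "modifiedTime": "2025-07-01T10:02:00Z"},
    {"name": "Weekly sync - 2025-06-10", "modifiedTime": "2025-06-10T10:05:00Z"},
    {"name": "Weekly sync - 2025-07-07", "modifiedTime": "2025-07-07T09:58:00Z"},
]

def recent_docs(files, days=7, now=None):
    """Keep only documents modified within the last `days` days."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    return [
        f for f in files
        if datetime.fromisoformat(f["modifiedTime"].replace("Z", "+00:00")) >= cutoff
    ]

# Pin "today" so the example is deterministic.
today = datetime(2025, 7, 8, tzinfo=timezone.utc)
for doc in recent_docs(files, days=7, now=today):
    print(doc["name"])
```

Ten readable lines of code versus an hour of clicking through an "intuitive" interface. That gap is the whole story of this article.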

[Figure: AI Automation Workflow Breakdown. The intended setup, step by step from Google Meet to Gmail, with callouts for API limitations, make.com setup hurdles, and privacy concerns.]

Stephen Klein, CEO of Curiouser.AI, hits the nail on the head here in this LinkedIn post. He argues that most of the “agentic AI” buzz is just that. Buzz. 

Today’s AI “agents” are often just scripts, not independent thinkers. We’re years away from true autonomous AI. 

Klein is right. Businesses risk chasing inflated promises, throwing money at “Hype-as-a-Service” instead of real solutions.

A Glimmer of Hope

Despite the roadblocks, there is reason to be optimistic. A recent “vibe-coding” experiment (quickly mocking up a concept using AI tools without overengineering) is a good example. 

If you’re a non-technical manager leading a software team, you can use it to quickly prototype a basic idea. We tossed the code, sure, but it completely changed how the team communicated. It cut down on all the detailed upfront planning we usually do. 

Could we build a full, production-ready solution with vibe-coding today? Probably not. But the immediate wins (clear talks, faster decisions, smoother development) were huge.

One time, we were stuck on a feature. Everyone had a different idea of what “simple” meant. We spent hours in meetings, just talking in circles. With vibe-coding, I cobbled together a rough version in an hour. We put it on the screen, and suddenly, everyone saw the same thing. The room went from confused murmurs to “Oh, I get it!” in seconds. It was a game-changer for clarity.
 
Gerard Pietrykiewicz

Before vibe-coding → After vibe-coding:

  • Confusion over what “simple” meant → Clear, shared understanding
  • Long meetings, vague ideas → Fast alignment via prototype
  • No one on the same page → Instant “Oh, I get it” moment

This experience shows one clear truth: AI tools are harder to use than the marketing videos suggest. And no, we’re not ready to fire all our developers. But those who stick with it, who push past the early bumps and use these tools wisely, will find a real edge. 

New tech always has growing pains. Think about GenAI itself: it went from complex APIs to something almost anyone could use, practically overnight.

Moving Forward with Clarity, Not Chaos

Yes, Stephen Klein is right to warn us about blindly following the hype. But his warnings shouldn’t stop us from trying things out. They should guide us to explore with care and common sense. 

As leaders, our challenge is to bridge limits by pushing for simpler, more intuitive solutions. Maybe AI itself should design user interfaces that actually make sense for managers, not just developers.

It reminds me of the early days of the internet. Back in the 1980s, it was powerful, but only for those who understood complex commands. Then along came the web browser (anyone remember Netscape?), a simple interface that opened the World Wide Web to everyone. AI needs its browser moment.
 
Achim Klor

Final Thoughts

Like any new tech, AI tools will continue to trip us up. But every experiment makes us better. The more we test, the more likely we are to build the future we want, not just buy into the one being sold.


Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!