Achim’s Razor

Positioning, Messaging, and Branding for B2B tech companies. Keep it simple. Keep it real.

Insight

Causal CMO #3: Causal AI is the GPS for Your GTM, Get Ready for 2028

Causal AI isn’t just tech. It shows what really drives GTM results. Learn how to move beyond attribution and start building your causal advantage before 2028.
June 25, 2025
|
5 min read

Many of us are still in the early stages of AI adoption, experimenting, testing, and trying to make sense of how it fits. But the pressure to move beyond pattern-matching is building. In Part 3 of The Causal CMO, Mark Stouse explains that Causal AI isn’t just a tech upgrade—it’s a new layer of accountability. It recalibrates forecasts, reveals the impact of GTM decisions, and removes the guesswork from budget conversations. This article outlines what GTM teams need to prepare for as Causal AI becomes mainstream. 

Takeaways

  • Causal AI shows you what actually caused your business outcomes.
  • CEOs already know that data from attribution models can be gamed.
  • You don’t need perfect data to get started.
  • AI doesn’t kill creativity or purpose. It makes both stronger.
  • By 2028, causal tools will make it easier to call your bluff.

Learn What to Do Next

One of Mark’s biggest points was to embrace being wrong. Be effective instead of being right. If GTM teams want to get ahead, they need to let go of trying to control everything. 

We’re already seeing this pressure hit big consulting firms. Demand for their AI services is off the charts, but clients aren’t paying what they used to. It’s forcing layoffs because staff aren’t fluent in AI. This is what the 2028 reckoning looks like in real time: not a tech crisis, but a credibility one. 

Causal AI doesn’t care about any of this. It calls things as they are. It adjusts automatically based on what’s going on around you. 

Mark calls it a GPS for your business. 

“If things start to degrade in the forecast, it tells you what to do to get back on track. That’s why we modeled Proof Analytics after GPS.” 

Unlike forecasting the weather, which looks at past patterns, Causal AI tools like Proof Analytics measure cause and effect in real time, based on current conditions and the actual levers you can pull. 

Proof Analytics user interface showing four causal what-if scenarios for online sales forecasts across multiple inputs and external factors.

Not All AI Is the Same

Mark outlined four categories of AI: generative, analytical, causal, and autonomous.

The leap from correlation to causality is the break point. GTM teams stuck in attribution are falling behind. Those preparing for Causal AI will be ready when it becomes standard.

“We have about three years to cross the river. If you don’t, it’s going to be very hard after 2028.”

Unlike attribution modeling, which relies on correlation and weighting, Causal AI directly isolates impact.

The Myth of Control

Causal AI forces a choice: keep pretending we’re in control, or start navigating with truth.

Mark compared what we control across different grading scales as illustrated below.

Perceived Control Across Different Grading Scales

Bar chart comparing average control across academic, baseball, and surfing performance scales. Academic shows 90% in control; baseball, 35%; surfing, 6%.

In each grading scale, what counts as success depends on how much is actually in our control.

In school, we control most of our grade because it’s based on the work we hand in. In baseball, a .350 hitter fails to get a hit 2 out of 3 times but makes the Hall of Fame. In surfing, a world-class pro may wipe out 94% of the time. In war or pandemics, almost nothing is in your control.

“Business is more like baseball. If we start grading it that way, we end up with a lot more truth.”

Same goes for marketing. As much as 70% of GTM performance is driven by external factors.

“If you don’t know what the currents are, you won’t know how to steer the ship.”

That’s why your job isn’t to be perfect. It’s to reroute. 

Causal AI detects shifts and tells you what to do. It zooms from big picture to ground level, depending on what the decision calls for.

Progress. Not perfection.

Tech Doesn’t Kill Purpose. It Reveals It.

There’s a quiet fear around AI, especially in creative and strategic roles. The idea that if a machine can see something you can’t, your work might not matter. That’s just not true. 

If the tools are properly learned and configured, they amplify creativity. 

Take GenAI, for example. If you dive into ChatGPT without training it on facts or setting expectations, it’s like hiring an intern and never giving them a clear job description. 

The problem isn’t the tech.

“If you pick up the tool, you have purpose. That’s not loss. That’s awareness.”

Mark also shared how his son, a private chef, uses GenAI in the background while cooking. It helps plan menus, tailor preferences, and provide real-time input. It doesn’t replace his job. It makes him better at it.

It’s like that for marketers too. 

“Marketing is a multiplier. It doesn’t need shared revenue credit. It needs causal proof of lift.”

Since before the first Industrial Revolution, tech has been a multiplier. It has expanded human capability, freeing people up to focus on innovating and creating. If you’re still defending multi-touch attribution (MTA), it’s not your data that’s outdated. It’s your mindset.

What Happens When AI Calls Your Bluff?

You can’t hide behind attribution dashboards anymore.

During interviews with Fortune 2000 firms, Mark uncovered a common thread: C-suites aren’t frustrated by skill gaps. They’re frustrated by teams who can’t explain impact.

“They come up with total BS programs to justify spend. Do they not know that we know this is stupid?”
 
Fortune 2000 CEO

Attribution models are weighted and easy to manipulate. Change the weights, change the story. Everyone knows this. That's why MTA charts get ignored in the boardroom.
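The weighting problem is easy to demonstrate. The sketch below is entirely hypothetical (the channel names and touch counts are made up): the same touch data produces two different “winners” just by moving the weights.

```python
# Hypothetical multi-touch attribution: same touches, different weights,
# different story. All channels and counts are made-up examples.
touches = {"paid_search": 4, "webinar": 1, "email": 3}

def credit(touches, weights):
    # Weighted share of credit per channel, normalized to sum to 1.
    total = sum(touches[ch] * weights[ch] for ch in touches)
    return {ch: round(touches[ch] * weights[ch] / total, 2) for ch in touches}

# Weighting A: paid search looks like the hero.
print(credit(touches, {"paid_search": 1.0, "webinar": 0.5, "email": 0.5}))

# Weighting B: nudge the sliders and suddenly the webinar "drove" the deal.
print(credit(touches, {"paid_search": 0.2, "webinar": 3.0, "email": 0.3}))
```

Nothing about the buyer changed between the two runs. Only the sliders did, which is exactly why boardrooms discount these charts.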

Causal models run continuously. They adjust to change. They reroute you when conditions shift. Causal AI works like a GPS. It gives teams the heads-up they need to adapt.

The Treadmill Is Breaking

A lot of GTM teams are stuck on a productivity treadmill. Budgets are cut. Expectations stay high. Nobody knows what’s actually working.

AI will expose that. Early on, it will cut 30-40% of activities. Not because it’s ruthless, but because that activity wasn't creating impact in the first place.

“We’ve just always been doing it. With AI, everybody will know.”

In other words, if you’re not using AI with a causal lens, you’re optimizing noise.

Final Thoughts

My conversation with Mark made a few things very clear:

  • AI isn’t replacing you. But if you ignore it, someone else will outperform you.
  • Attribution logic can’t handle lag, volatility, or context. Causal AI can.
  • GTM teams have until 2028 to make the shift or risk falling behind.

In Part 4, we’ll talk about what it looks like to operationalize this shift.

Stay tuned.

Missed the LinkedIn Live session? Rewatch Part 3.

If you like this content, here are some more ways I can help:

  • Follow me on LinkedIn for bite-sized tips and freebies throughout the week.
  • Work with me. Schedule a call to see if we’re a fit. No obligation. No pressure.
  • Subscribe for ongoing insights and strategies (enter your email below).

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!

Insight

Causal CMO #2: How to Move From Patterns to Proof

Pattern-based models like attribution help predict behavior, but they don’t explain cause. This article shows how causal AI reframes GTM, risk, and performance.
June 18, 2025
|
5 min read

Models like Bayesian and NBD-Dirichlet are powerful ways to predict human behavior. But they don’t explain why things happen. They can spot patterns. They don’t prove cause. In this recap of Part 2 of The Causal CMO, Mark Stouse explains the shift from pattern-based models to causal inference and why that shift matters now more than ever for GTM teams.

Takeaways

  • Bayesian models show patterns, not causes.
  • NBD-Dirichlet assumes behavior won’t change.
  • Most GTM teams optimize for efficiency, not effectiveness.
  • Marketing only multiplies sales when sales is working.
  • Attribution hides risk. Causal models expose it.

“Why” is Not a Nice-To-Have

One of the things Mark started with is an age-old question we’ve always tried to answer in business:

“Why things happen is the number one thing that the scientific method seeks to understand.”

Bayesian models were a step in that direction. But Bayes’ Theorem is 300 years old, and Bayesian models are predictive. They can’t tell us what caused what, or why.

What they are very good at is telling us the probability that something is happening.

“If you see smoke, a Bayesian model helps update the probability there’s a fire. But it won’t tell you what caused the smoke.”
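The smoke-and-fire update above is just Bayes’ rule. Here is a minimal sketch with illustrative numbers (the probabilities are assumptions for the example, not data from the article):

```python
# Bayes' rule: P(fire | smoke) = P(smoke | fire) * P(fire) / P(smoke)
# All numbers below are illustrative assumptions.
p_fire = 0.01                 # prior: base rate of fire
p_smoke_given_fire = 0.90     # likelihood: fire usually produces smoke
p_smoke_given_no_fire = 0.05  # false alarms: BBQs, fog machines, etc.

p_smoke = (p_smoke_given_fire * p_fire
           + p_smoke_given_no_fire * (1 - p_fire))
p_fire_given_smoke = p_smoke_given_fire * p_fire / p_smoke

print(round(p_fire_given_smoke, 3))  # probability jumps from 1% to ~15%
```

The model updates the probability of fire, and does it well. But notice what it never does: explain where the smoke came from.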

Same goes for NBD-Dirichlet. It’s great for describing past behavior. It can even predict short-term purchasing patterns.

You can see this in action on many e-commerce platforms. Amazon uses NBD-Dirichlet to model the probability of repeat purchases. If you buy a certain product, for example, you almost always see a “You May Also Like” CTA. 

This kind of modeling assumes that buyer behavior doesn’t change much over time. 

But human beings are irrational. We choose A today and B tomorrow, depending on how we feel. And in B2B tech, with its layers of procurement bureaucracy, stakeholders, and decision time lag… well, you see where I’m going with this. 

The Shift from Correlation to Causation

Most GTM systems in place today are designed to support correlation-based marketing, not causal decision-making.

They’re pattern matchers. Attribution tools. Regression models.

None of them explain cause and effect. They were built to scale performance efficiency, not to understand impact or cost-effectiveness.

Marketing is just as much to blame as anyone. But the rot began with deterministic thinking at the leadership level. 

“Most of the bad info came from founders and VCs. They wanted deterministic systems. The idea was simple: put a quarter in, get a lead out.”

That kind of worked for a while, when interest rates were low, uncertainty was minimal, and pipelines moved fast.

But once volatility hits (time lag, market noise, internal complexity), those patterns fall apart.

Causal inference doesn’t just show you a pattern. It tests whether that pattern causes anything meaningful to happen.

And it does so continuously, in real time.

That’s the power behind causal AI.

Flowchart showing the evolution of analytics models: Descriptive, Predictive, Correlation-Based, Causal Inference, and Causal AI, each linked to a guiding question.

Why Marketing Effectiveness Has Collapsed

One of the biggest reasons GTM performance has tanked in the last two decades is that most GTM teams still operate like the environment hasn’t changed.

“70% of the world is stuff you don’t control. And most marketers don’t even account for it.”

The effectiveness of GTM teams has dropped off a cliff, not because marketers suddenly got worse, but because the headwinds got stronger and faster.

  • Deals are taking longer.
  • Budgets are getting slashed.
  • CFOs are pulling back anything they can’t defend.

It’s a full-blown marketing effectiveness collapse, clearly visible in 2025. 

And yet, everyone still keeps trying to optimize for efficiency. Perhaps it’s because we blindly believe we are already effective. 

“You can’t be efficient until you’re effective.”

This is an important reminder: Marketing is a non-linear multiplier. Sales is a linear value creator. 

For the past 25 years, we’ve been trying to force marketing to abide by Sales’ linear outcomes. That’s no different from forcing a square peg through a round hole. 

In a causal model, you can calculate the lift marketing creates while Sales is executing.

If Sales underperforms, Marketing’s lift is zero. If Sales is kicking butt, marketing can multiply Sales efficiency by 5x and Sales effectiveness by 8x. 

That’s not for debate. It’s proven math.
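The multiplier-versus-additive distinction can be sketched in a couple of lines. This is an illustrative toy, not Proof’s actual model; the function name and numbers are assumptions, with the 5x figure borrowed from the quote above.

```python
# Toy illustration: marketing modeled as a multiplier on what Sales creates,
# not as an independent, additive source of revenue credit.
def gtm_output(sales_value, marketing_multiplier):
    return sales_value * marketing_multiplier

print(gtm_output(0, 5))   # Sales produces nothing -> marketing lift is 0
print(gtm_output(10, 5))  # Sales working -> marketing multiplies it to 50
```

That first line is the whole point: a multiplier applied to zero is zero, which is why marketing’s lift disappears when sales execution fails.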

Visual showing the multiplier effect of marketing on sales: 8x more effective and 5x more efficient, explaining that GTM investments are multiplicative, not additive.

Sadly, that multiplier logic doesn’t show up in attribution because attribution is correlation. 

It’s only visible in causal inference.

4 Types of AI

Mark explained the four types of AI in play today. Only one explains “why.”

  1. Generative AI is what most people are experimenting with
  2. Analytical AI is correlation-based pattern matching
  3. Causal AI models cause and effect
  4. Autonomous AI is not real yet (mostly hype)

Most GTM teams are stuck between the first two. 

They’re still optimizing a traditional sales and marketing funnel with pattern-matching tools that can’t distinguish signal from noise. In other words, transactional tactics. 

Worse still, they’re getting excited about autonomous agents that “do work for you,” without asking if the work being done is even useful.

“Agentic AI without causal logic is just automation with lipstick.”

Causal AI and the CFO

One of the most overlooked shifts in GTM accountability is that causal models are increasingly being used by finance.

“FP&A and ops teams are going to be the ones evaluating GTM performance. Not marketing itself.”

This shift is already underway. It’s part of a larger response to Delaware’s expanded fiduciary rules.

“All officers now carry personal liability if shareholder risk isn’t managed responsibly.”

Which means random budget cuts, especially in marketing, are going to get harder to justify.

Causal AI gives CFOs the scalpel they’ve needed for years. It helps them decide what to cut, and more importantly, what not to cut.

Causal AI is like a CRM for cause and effect that updates continuously and informs real decisions in real time.

GPS Can’t Help If You’re Blindfolded

One of the analogies Mark uses when explaining Causal AI is that it’s like a GPS. It doesn’t promise precision. But it helps you avoid collisions and reroute when the road changes.

“The route that worked last quarter might not work today. The conditions changed. The terrain shifted. And if you’re not paying attention, you’re going to hit something.”

And it’s not just rerouting. 

These systems simulate what’s likely to happen next based on shifting inputs, so you can course-correct before problems hit.

So if you’re running models designed for deterministic systems, you’re essentially driving blind.

Final Thoughts

The hype around AI isn’t going away. Neither is the pressure to “cut costs,” especially in GTM.

A lot of budget cuts these days are based on correlation, on what appears to be contributing. But contribution isn’t the same as causation. Just because a channel shows up in the report doesn’t mean it drove the outcome.

And if you slash the wrong input, you could kill something that was actually working. That’s the long-term damage most teams don’t see coming (or keep missing). 

“AI is going to become a lie detector. It will show where the correlation falls apart.”

That’s the shift GTM leaders need to prepare for.

Fast.

Missed Part 2? Rewatch it on LinkedIn.


Insight

Causal CMO #1: Attribution still dominates GTM. That’s a problem.

GTM teams still rely on attribution. That’s a problem. Learn how causality models reduce risk, reveal drivers, and rebuild CMO credibility.
June 10, 2025
|
5 min read

Many GTM teams continue to rely on correlation to justify decisions. It’s an ongoing problem. In this kickoff to our Causal CMO Series, Mark Stouse and I cover why marketing attribution continues to fail, how internal data keeps getting “engineered”, and how new legal rulings will put GTM leaders directly in the line of fire.

Takeaways

  • Correlation can’t explain real outcomes. Causality can.
  • Internal GTM data is often manipulated under pressure. Not malicious, just human.
  • Delaware rulings make all officers, including CMOs, accountable for data quality.
  • Attribution logic like first, last, and multi-touch is correlation, not cause.
  • Causal models start from business outcomes and force teams to reverse-engineer what actually moved the needle.

Welcome to the Causal CMO

Mark Stouse and I kicked off the first session with a direct conversation about how marketing attribution models still hold GTM teams back from reporting real-world buyer behavior. Too many are still stuck in correlation, and it’s costing them. 

Correlation feels safe. It’s anything but.

Marketers look for patterns. It’s what we've been trained to do. But in complex, time-lagged buying cycles like in B2B tech, correlation doesn’t tell you anything useful, like what actually caused the deal to close (or not).

Tom Fishburne's cartoon showing a business meeting where a team misinterprets correlation between sales and shaved heads, humorously illustrating flawed logic in marketing data analysis.
Source: Marketoonist, by Tom Fishburne

Mark made a great point: correlation is binary. It either exists or it doesn’t. That makes it easy for humans to understand, which is why we gravitate toward it. But it’s not how markets work. It’s not how buyers behave. It’s not what happens in the real world. 

Causality tells you what actually led to the outcome. It’s multidimensional. It accounts for context, time lag, headwinds and tailwinds, and everything else correlation ignores. That’s why it’s harder to fake and why it’s so valuable.

Many GTM teams still lean on correlation because it’s faster and easier to defend, especially under pressure. As Mark pointed out, correlation-based patterns are easy to manipulate. If the data doesn’t match the story you need to tell, you can tweak the inputs until it does. That’s why attribution charts and dashboards persist: they give the illusion of precision without exposing the actual drivers. It looks clean. It feels controllable. But it’s a shortcut that hides the real story.

An intuitive decision is either really right or really wrong. No leadership team can afford that kind of spread.

Your data is lying to you.

Mark shared a case where a fraud detection tool scoured over 14 years of CRM records. More than two-thirds of the data was found to be “engineered.”

It wasn’t one bad actor. It was many people over many years, under tremendous pressure to prove their seat at the table, while slowly reshaping the story to fit what they needed to show.

People use data to defend themselves, not to learn.

This kind of manipulation is hard to catch with correlative tools. Causal systems, on the other hand, make it obvious when something doesn’t add up. It’s unavoidable. 

Delaware changed the rules. CMOs are now on the hook.

Unlike Sarbanes-Oxley, which took years to pass and gave leadership time to prepare, the Delaware rulings came quickly and without warning. Mark called it one of the biggest blind spots in recent corporate memory. 

The 2023 McDonald’s case expanded fiduciary oversight beyond the boardroom. Now every officer (CMOs included) is legally accountable for the quality and integrity of business data.

The writing’s on the wall. If you’re not governing your data, you’re exposed.

Data quality is now the number one trigger for shareholder lawsuits. CRM systems are full of data manipulation and missing governance. Lawyers know it’s an easy audit. If your GTM data can’t hold up under causal scrutiny, you’re exposed.

Attribution isn’t just flawed. It’s obsolete.

First-touch. Last-touch. MTA. Even Bayesian models. They’re all correlative. They’re all easy to manipulate. And they all fall apart under scrutiny.

Mark told the story of a meeting where a CIO changed the MTA weightings mid-call, then had someone else change them again. Marketing freaked out, but had no causal rationale to defend their weightings. The numbers were all made up.

If your model changes when you tweak the sliders, it’s not a model. It’s a toy.

Jon Miller, founder of Marketo and Engagio, recently said it himself: attribution is BS.

Respect to Jon for saying it out loud. That’s a bit like the first step in a 12-step program: admitting you have a problem. And that’s where every CMO still holding onto attribution logic has to start. 

Mark followed up with his own take on why causality is quickly becoming the standard.

Both are worth reading. 

Marketing is probabilistic, not deterministic.

Causal models don’t promise certainty. They help you understand what likely led to an outcome, accounting for what you can and cannot control. 

Mark compared it to throwing a rock off a building. Gravity ensures it will hit the ground every time, but where it lands and what it hits are a different question, especially once you consider things like time of day and weather.

Two-panel black-and-white cartoon showing a rock falling from a building—during the day toward a crowd of pedestrians, and at night onto an empty street—highlighting how context changes the outcome of the same action.
Gravity is the constant. Everything else is a variable. (Image generated by AI)

It’s the same with marketing. You know your efforts will have an effect. What you need to model is the direction, magnitude, and time lag.

Start with outcomes. Work backwards.

The shift to causality has nothing to do with better math. As Mark said, if a vendor’s pitch is built on “new math,” you should run. The math already exists, and it works. 

What matters is asking the right questions. Don’t start with your data. Start with the outcome you care about.

  • What moved deal velocity? 
  • What increased the average contract value? 
  • What pulled new buyers into your sales process?
Start with the outcome. Work backwards. See if the data supports it.

That shift exposes where the real drivers are. And it resets expectations for performance. 

Mark compared it to baseball. If you hit .350 in the majors, you’re a Hall of Famer, even though you failed two-thirds of the time. Baseball is full of external variables players can’t control. 

Side-by-side comparison of Babe Ruth’s .342 batting average with a failing test score, showing how success in high-variance environments like baseball differs from academic grading.
Marketing is more like baseball than academia.

In academia, it’s the opposite. Most of your GPA is in your control. 

GTM should be judged like baseball, but it’s not. Markets are messy. Nothing is certain. Causal modeling reflects that uncertainty by showing you what data you’re missing or misreading. 

But traditional marketing metrics like attribution expect certainty. Marketers are held to an academic standard, which makes zero sense given how uncertain markets are. 

And therein lies the problem. We’re trying to make sense of uncertainty using tools that assume predictability. The classic square peg through a round hole.

Attribution tools weren’t designed for complexity or context. They were built to assign credit. They don’t help GTM teams. They polarize them.

Final Thoughts

Mark explained a few key things every GTM team needs to plan for, including how correlation fails, why causality matters, what legal risk CMOs now face, and how to move beyond attribution logic in B2B GTM.

In Part 2, we’ll get into the mechanics. What causal models actually look like. How to map time lag and external variables. And how to build something your CFO will trust.

Missed the session? Watch it on LinkedIn here.

In the meantime, here’s a quick FAQ to clarify the core ideas, especially if you're new to the conversation or want to share this with your team.

  • What’s the difference between correlation and causality in marketing?
    Correlation shows when two things happen together. Causality shows what actually drove the result, taking time lag, context, and other variables into account.
  • Why are attribution models unreliable?
    Because they’re based on correlation. They’re easy to manipulate, and they rarely reflect what actually influenced the outcome.
  • What’s the legal risk for CMOs in 2025?
    Delaware rulings now hold all corporate officers, including CMOs, legally accountable for data quality. Shareholder lawsuits are already targeting flawed CRM data.
  • What’s a better alternative to MTA or last-touch models?
    Causal modeling. It starts from outcomes and works backwards to isolate what actually moved the needle.
  • Do I need better data to start?
    Not necessarily. You need better questions. Causal models help you figure out what data matters and where the gaps are.
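If you want to see the correlation-versus-causation trap concretely, here is a tiny self-contained simulation. It is entirely hypothetical: a hidden “demand” variable drives both a marketing channel and revenue, while the channel itself has zero causal effect on revenue.

```python
import random
random.seed(0)

# Hypothetical data: "demand" is a confounder that drives both
# channel activity and revenue. The channel has ZERO causal effect.
n = 500
demand  = [random.gauss(0, 1) for _ in range(n)]
channel = [d + random.gauss(0, 0.5) for d in demand]      # follows demand
revenue = [2 * d + random.gauss(0, 0.5) for d in demand]  # also follows demand

def corr(xs, ys):
    # Pearson correlation coefficient, computed from scratch.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

print(round(corr(channel, revenue), 2))  # strong correlation, zero causation
```

A correlational or attribution view would credit the channel; a causal model that controls for demand would show its true effect here is zero. Cut the channel and nothing happens. Cut demand generation and everything does.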

If your dashboard still runs on attribution logic, this is your chance to change it.


Insight

Better At vs. Better Than: How GTM Teams Get Out of Their Own Way

Learn how GTM teams stop tripping over each other by shifting from ego to impact. Better At vs. Better Than—what mindset drives growth?
June 3, 2025
|
5 min read

GTM teams trip over each other because of culture, not strategy. “Better At” is a mindset that can shift your team from competing for credit to actually getting better at working together. Be “Better At” curiosity, not control; showing up to contribute, not one-upping. If you’re leading a B2B team and you’re tired of the same old drama, this one’s for you.

Takeaways

  • GTM teams break down when people fight for credit instead of solving real problems together.
  • Strained relationships across GTM teams are a trust issue.
  • Being curious, generous, and clear beats being clever, loud, or right.
  • Marketing teams that ask better questions create more value.
  • Better At means improving the team while you improve yourself.

Better At, Not Better Than

We don’t need more clever acronyms or prompts, another playbook, or another dashboard.

We need braver marketers. People who care about showing up and improving their tribe.

That’s what Tracy Borreson and I got into during her Crazy Stupid Marketing podcast.

We started with marketing. But the conversation kept pulling us deeper into mindset, culture, and how GTM teams can stop tripping over each other.

Our discussion built on what I shared earlier in this LinkedIn Pulse article and led to a thoughtful question:

What happens when we stop trying to be better THAN each other… and start getting better AT helping each other?

How “Better At” Started

This idea started in a personal place. A strained conversation. A moment that reminded me we’re all just doing our best with what we’ve got.

And a quote from Epictetus.

“These reasonings do not cohere: I am richer than you, therefore I am better than you; I am more eloquent than you, therefore I am better than you. On the contrary these rather cohere: I am richer than you, therefore my possessions are greater than yours; I am more eloquent than you, therefore my speech is superior to yours. But you are neither possession nor speech.”
 
Epictetus, Enchiridion, Chapter 44

That stuck with me.

Being better at something doesn’t make you better than someone.

And if you’re better at something, what if you helped someone else become better at it too? What if they got better at it and showed someone else?

That’s the heart of it.

Better At means thinking about how we can improve others while improving ourselves.

We grow. We pay it forward. We do the work and learn together.

Why Marketing Needs This Shift Now

There’s an old joke that goes something like this:

How many marketers does it take to screw in a lightbulb? Just one. But all the others think they can do it better.

Every aspect of an organization is full of this kind of “better than” behavior, not just marketing.

We chase credit, one-up each other, and cover our asses by throwing each other under the bus. 

We can’t help it. It’s systemic and it starts at an early age.

It kills effectiveness and alignment, no matter how efficient we think our silos are.

0 Effectiveness × 5 Efficiency = 0

If cross-functional teams can’t co-create value, no amount of leadgen and demandgen will save them.

Better than creates friction. Better at creates connection. 

Creating a culture of curiosity starts by changing what you reward. 

Stop Competing. Start Contributing.

Healthy competition is good. Sports is a good example.

But you can’t win the hearts and minds of your teammates by always competing with them or looking for the “easy button” to make yourself bigger than you are.

AI can help you be better at marketing. But only if it sharpens your thinking, not replaces it.

If your team’s output feels generic, the problem isn’t the tool. It’s the fear behind how it’s being used.

Be generous, empathetic, and useful. Ask better questions. You have to be the change you want to see. That’s how you get better at making better contributions.

Colonel Chris Hadfield quote: Things aren’t scary. People are scared. Every single person you meet is struggling.

Build a “Better At” Culture

If you lead a B2B tech company and this resonates, here are 3 things to consider:

  1. Audit your language. Are your teams talking about credit or contribution? Check your Slack threads, meeting notes, and handoff docs.
  2. Ask the multiplier question. What are we X-times great at, but getting zero return from because we’re not aligned in our thinking?
  3. Run a “Better At” session. Bring in your GTM leaders and ask: Where are we trying to be better than each other, when we could be better at something together?

You don’t need a re-org. You just need a shift in thinking.

When GTM teams work together, the impact shows up in shorter sales cycles, better conversion rates, and less wasted spend.

A Simple Ask

You don’t need to overhaul your GTM strategy overnight. But what if you started asking different questions?

Take these into your next leadership meeting:

  • Are we competing internally, or contributing collectively?
  • What would it look like to be better at partnerships, handoffs, and feedback?
  • Where are we rewarding performance over progress?
  • How are we creating a marketing culture of curiosity, not compliance?
  • And where are we still acting like attribution is more important than alignment?

Better At isn’t a tactic, a course, a playbook. It’s a mindset.

So… where are you trying to be better than, when you could be better at?

Let’s talk about it. 


Strategy

Bayesian ≠ Causal: Why GTM Metrics Still Miss the Mark

Most B2B GTM teams confuse correlation with causation. Learn how Bayesian and causal models differ and how to measure what actually drives results.
May 27, 2025
|
5 min read

Many GTM teams still track activity instead of impact. Bayesian models estimate what likely played a role. But they don’t explain what caused the result, or what might have happened if something changed. Causal models test and measure what contributed and why.

Takeaways

  • Attribution shows what happened. It doesn’t explain what contributed and why.
  • Bayesian models estimate probability, not cause.
  • Causal models test what would’ve happened if you did something different.
  • Causal AI applies the logic behind causal modeling at scale.

Precision and Truth Are Not the Same

As mentioned in last week’s Razor, Bayesian models can help marketers get out of the last-touch attribution trap. They give us a way to estimate how likely it is that something contributed to a particular result.

But that’s not the same as knowing what caused the result and why.

Too many GTM teams still confuse probability with proof; correlation with causation. But more precision does not mean more truth. 

Causal models answer a different question: what would have happened if we had done something else? That's the question your CFO wants answered. And it's the one your current model can't touch.

We need to ask better questions instead of defending bad math.

Much of this discussion was sparked by Mark Stouse on LinkedIn. He clarified a common misconception: that Bayesian modeling is the same as causal inference. It’s not. And that distinction is what we’re getting into.

The past is not prologue.
 
Mark Stouse, CEO, Proof Causal Advisory

What Most GTM Teams Still Get Wrong

Most attribution models are shortcuts, not models.

Rule-based. Last-touch. “Influenced” revenue. They’re easy to run. Easy to explain. But disconnected from real buying behavior.

Attribution measures who gets credit, not contribution.

Bayesian modeling doesn’t rely on a single touchpoint or fixed credit rule. It estimates how likely it is that something played a role, like a channel, sequence, or delay.

Bayesian models give you a better approximation of influence than rule-based methods. But they stop short of answering the causal question: What made this happen and why?

Most attribution models never get past basic association (correlation).

As Jacqueline Dooley explains in this MarTech article, rule-based methods don’t reflect how buying actually works. They measure what happened, not why it happened.

In other words, most GTM teams are still stuck in Level 1 of the causal ladder.

Pearl’s Ladder of Causation adapted for GTM measurement stages.

What Bayesian Models Are Good At

Bayesian models help you estimate whether something played a role. Not how much credit to assign.

That’s why they help measure things like brand recall, ad decay, and media saturation. They estimate influence, but they don’t explain the cause.

Bayesian vs. Causal Models: What They Can and Can’t Tell You

| | Bayesian | Causal |
| --- | --- | --- |
| Output | How likely something contributed | What would have happened if something changed |
| Based on | Observed behavior | Structured interventions and counterfactuals |
| Strengths | Estimates influence, handles uncertainty | Simulates alternate outcomes, proves effect |
| Limitations | Doesn’t explain why | Requires strong assumptions and structure |
| Used for | Brand recall, decay, media saturation | Forecasting, lift tests, strategy simulation |

Mark isn’t the only one pushing for clarity here.

Quentin Gallea wrote an excellent article on Medium that details how machine learning models are built to predict outcomes, not explain them. They’re correlation engines. And when teams mistake those outputs for causal insight, bad decisions follow.

If your model only shows what happened under existing conditions, it can’t tell you what would’ve happened if something changed. That’s the whole point of causal reasoning.
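To see why that matters, here’s a minimal toy simulation (the structural model and its coefficients are invented for illustration, not drawn from any cited source). A confounder such as seasonal demand drives both ad spend and sales, so the observed correlation overstates the true ad effect, while simulating the intervention do(ad_spend = x) recovers it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical structural model: a confounder (seasonal demand)
# drives both ad spend and sales.
season = rng.normal(size=n)
ad_spend = 2.0 * season + rng.normal(size=n)                 # spend rises with demand
sales = 1.0 * ad_spend + 3.0 * season + rng.normal(size=n)   # true ad effect = 1.0

# Level 1 (association): regress sales on spend as observed.
observed_slope = np.polyfit(ad_spend, sales, 1)[0]           # biased upward by season

# Level 2 (intervention): simulate do(ad_spend = x), which cuts the
# season -> spend arrow, and measure the effect directly.
def do_ad_spend(x):
    s = rng.normal(size=n)
    return (1.0 * x + 3.0 * s + rng.normal(size=n)).mean()

causal_effect = do_ad_spend(2.0) - do_ad_spend(1.0)          # recovers ~1.0

print(f"observed slope: {observed_slope:.2f}")  # ≈ 2.2, inflated by the confounder
print(f"causal effect:  {causal_effect:.2f}")   # ≈ 1.0, the true effect
```

Both numbers come from the same data-generating process; only the question changes. That gap between 2.2 and 1.0 is the difference between Level 1 and Level 2 of the ladder.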

Causal AI tools like Proof Analytics (shown below) help teams run “what if” scenarios at scale. They combine machine learning, which handles the messiness of real-world data, with causal logic that explains what actually makes an impact.

Causal AI tools like Proof Analytics help teams run “what if” scenarios at scale.

What Causal Models Tell Us That Bayesian Models Can’t

Causal modeling shows what might have happened if you changed something, like timing, budget, message.

That’s the question your CFO is already asking.

As Mark pointed out, Bayesian models can’t answer that. Unless you impose a causal structure, they just update likelihoods based on what already occurred.

If you’re only predicting what’s likely under existing conditions, you’re stuck in correlation. 

Click Path vs. Causal Chain

| What we track in attribution (Click Path) | What causal models simulate (Causal Chain) |
| --- | --- |
| Ad → Webinar → Demo → Closed Won | Ad removed → Webinar → Demo → ? |
| Ad → No Demo → No Sale | Ad replaced → Event → Closed Lost |
| Ad → Case Study → Sales Call → Closed Won | Same sequence, different budget → ? |

What to Measure 

As mentioned, GTM dashboards show you what happened, like clicks. They don’t tell you what contributed to those clicks and why. 

Bayesian models help you spot patterns. 

  • How often something showed up. 
  • How long it stuck. 
  • How likely it played a role.

That’s useful. But it’s not enough.

Why? Because even though Bayesian models are probabilistic, they don’t model counterfactuals unless a causal structure is added. They estimate likelihoods, not outcomes under different conditions.

If you want to know whether something made a difference (or what would’ve happened if you did it differently) you need a model that can test it.

So instead of more data, focus on the data you already have.

| Move Beyond | Focus More On |
| --- | --- |
| Rule-based attribution | Patterns of exposure over time |
| Clicks and form fills | Contribution across all channels |
| Volume of MQLs | Influence on decision-making |
| Campaigns measured in isolation | Cumulative brand and media impact |
| Basic activity metrics | Probable cause, not just correlation |

By the way, if you’ve never looked at buyer behavior through a statistical lens, Dale Harrison’s 10-part LinkedIn series on the NBD-Dirichlet model is worth bookmarking. This series will help you understand how buyers typically behave in a category:

  • how often most people buy (70% of all purchases are made by light buyers, not heavy ones)
  • how rarely they buy from the same brand twice
  • why brand growth depends more on reaching more buyers than retaining the same ones

Final Thoughts

Rule-based attribution like first/last-touch only tracks what happened. It doesn’t explain what mattered.

Bayesian modeling gets you closer by helping you see patterns. But it doesn’t explain cause.

Causal models let you test what could make an impact, what may not, and why.

And as Mark Stouse pointed out, this only works if you’re using a proper causal framework. Bayesian models can’t tell you what caused something unless that structure is built in.


Strategy

How Bayesian Models Measure Brand Impact Before Buyers Click

Bayesian models show what moved buyers before they clicked. Learn how to prove brand impact and fix last-touch bias in your B2B attribution strategy.
May 19, 2025
|
5 min read

Traditional attribution models do not help marketers. Last-touch attribution reduces marketing to click-based metrics that rarely hold up when the CEO or CFO asks, “Where’s the revenue?” Bayesian models offer a better way to measure what’s actually impacting the bottom line, building the brand, and influencing pre-funnel activity. This article shows you how to measure brand impact using Bayesian attribution models, especially for B2B teams tired of broken funnels.

Takeaways

  • Last-touch attribution is a marketing-sourced metric trap. It over-credits the final click and underestimates the impact of brand-building.
  • Bayesian models help us account for when and how a touchpoint influences conversion, not just if it does.
  • Ad fatigue happens when too many impressions decrease conversions.
  • Familiar brands benefit from within-channel synergy; unfamiliar brands need cross-channel reinforcement.
  • Bayesian models can also help predict pre-funnel influence, including non-converting journeys and offline media.

What Is Bayesian Modeling?

A Bayesian model helps you set and update expectations based on new evidence. Unlike traditional attribution, Bayesian methods can surface marketing impact pre-funnel.

Think weather forecasts: You start with what you know (like the season), then adjust your expectations based on new clues (like thunder). 

In marketing, Bayesian modeling weighs each channel’s influence based on how often it contributes to a sale, how recently it was seen, and how it interacts with other touchpoints.
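The weather analogy is just Bayes’ rule. Here’s a minimal sketch with made-up probabilities (the numbers are illustrative, not from any real dataset):

```python
# Minimal Bayesian update, using the weather analogy above.
prior = 0.30                      # P(rain) going into the day (the "season")
p_thunder_given_rain = 0.70       # P(thunder | rain)
p_thunder_given_dry = 0.05        # P(thunder | no rain)

# Bayes' rule: P(rain | thunder) = P(thunder | rain) * P(rain) / P(thunder)
evidence = p_thunder_given_rain * prior + p_thunder_given_dry * (1 - prior)
posterior = p_thunder_given_rain * prior / evidence

print(f"P(rain | thunder) = {posterior:.2f}")  # prints 0.86
```

Swap “rain” for “this channel contributed” and “thunder” for “the deal closed after exposure,” and you have the core move Bayesian attribution makes over and over: updating a prior belief with each new piece of evidence.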

Bayesian and causal models can overlap, but they’re not the same. Bayesian models estimate probability, like how likely something is based on data and prior beliefs. Causal models estimate what happens when something changes. The strongest marketing analytics use both: probabilistic thinking to handle uncertainty, and causal structure to guide decisions.

If you want to geek out a little more, Niall Oulton at PyMC Labs wrote an excellent piece on Medium about Bayesian Marketing Mix Modeling.

It’s a great place to dive deeper.

Why Bayesian Attribution Beats Last-Touch for B2B Marketing

Instead of guessing or oversimplifying, Bayesian modeling uses probability and real-world behavior to show what actually contributed to a sale, and how much.

Elena Jasper provides a good explanation using a research paper published in 2022, Bayesian Modeling of Marketing Attribution. It shows how impressions from multiple ads shape purchase probability over time. In a nutshell, too many impressions (especially from display or search) can actually reduce the chance of conversion.

Even more insightful, the model gives proper credit to owned and offline channels that traditional attribution ignores. 

Check out Elena’s Bayesian Attribution podcast episode.

Bayesian Models Show Influence

This is where things get interesting for brand builders.

Another study from 2015, The Impact of Brand Familiarity on Online and Offline Media Effectiveness, used a Bayesian Vector Autoregressive (BVAR) model to track synergy between media types. 

Here’s what they found:

  • Familiar brands get more value from “within-online synergy” (owned and paid media working together)
  • Unfamiliar brands benefit more from “cross-channel synergy” (digital and offline working together)

In other words, the value of your brand influences how effective your media is. So if you’re only looking at last-touch clicks, you’re missing the bigger picture. 

Bar chart showing stronger online synergy for familiar brands and stronger cross-channel synergy for unfamiliar brands.

This chart compares how different types of media synergy play out based on brand familiarity. Familiar brands benefit more from reinforcing messages within the same (online) channel. Unfamiliar brands get a bigger boost from cross-channel combinations, especially from pairing digital with offline.

  • Within-Online Synergy: How well paid and owned digital channels reinforce each other.
  • Cross-Channel Synergy: How well digital and traditional/offline channels combine.
  • Synergy Score: A regression-based measure of how much more effective two channels are together than separately.

SOURCE: The Impact of Brand Familiarity on Online and Offline Media Effectiveness (Pauwels et al., 2015), See Section 4.4, Table 3

Yes, It Helps You See Pre-Funnel Impact Too

Bayesian models can also account for non-converting paths. That means they help you see how early exposures from media like TV, radio, podcasts, branded search, and earned media changed the likelihood of a purchase, even if the customer didn’t buy right away.

The ability to prove that your brand is being remembered is the holy grail of brand marketing.

Bar chart comparing credit given to last-touch vs. early exposures under different attribution models.

This chart compares how two attribution models assign credit for a conversion. Bayesian models are better suited for evaluating pre-funnel impact. They account for influence, not just transactions. 

These models don’t deliver hard proof. They provide probabilistic estimates of how likely each channel or impression was to influence a conversion, even when no one clicks. It’s not deterministic, but it’s a far better approximation of real buyer behavior.

In a nutshell, memory and exposure matter, even when they don’t lead directly to a form fill.

When you start combining that with media decay rates and interaction effects, you finally have a way to show how long your brand-building efforts stick and when they fade.

SOURCE: Bayesian Modeling of Marketing Attribution (Sinha et al., 2022), See Section 4.2.3: “Interaction Effects”

Exponential decay curve showing how ad influence fades with time.

This chart shows how quickly a single ad loses its persuasive power. Influence fades exponentially, especially for short-lived channels like display or search. This is important for building brand reputation because a memorable first impression doesn’t last forever. Brand building isn’t one and done. 

This supports what the Sinha Bayesian attribution paper modeled: ad influence is not equal, and timing matters.

SOURCE: Bayesian Modeling of Marketing Attribution (Sinha et al., 2022). See Section 4.2.2: “Direct Effect”; Figure 5: Posterior distribution of ad decay parameters
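To make the decay idea concrete, here’s a tiny back-of-the-envelope sketch. The half-lives are invented for illustration, not parameters from the Sinha et al. paper:

```python
import math

# Illustrative ad-decay sketch: influence(t) = e^(-lambda * t), with
# lambda derived from an assumed half-life per channel.
def influence(days_since_exposure, half_life_days):
    decay_rate = math.log(2) / half_life_days
    return math.exp(-decay_rate * days_since_exposure)

# Short-lived channels fade fast; longer-lived ones retain influence.
for channel, half_life in [("display", 2), ("search", 3), ("podcast", 14)]:
    print(f"{channel:8s} influence left after 7 days: {influence(7, half_life):.0%}")
```

Run it and display retains only a sliver of its day-one influence after a week, while a longer-half-life channel still holds most of it. That’s the practical takeaway: the cadence of your media plan has to match the decay rate of each channel.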

Chart showing how conversion probability flattens after repeated ad exposures for SaaS vs. enterprise.

This chart shows saturation: how conversion probability builds with more ad impressions, then flattens out. Most SaaS GTM motions (self-serve, freemium, free trial) ramp up fast but fatigue soon after, peaking around 12 impressions. Enterprise GTM builds more slowly but needs more impressions to hit its ceiling (closer to 25).

Regardless of the model, impressions lose influence over time. That’s ad decay in action. But the number of impressions it takes to move the needle? That’s where most SaaS solutions and enterprise solutions part ways.

SOURCE: Bayesian Modeling of Marketing Attribution (Sinha et al., 2022), See Section 4.2.3: “Interaction Effects”; Figure 7: Negative interaction from high ad frequency. Real-world ad-to-pipeline benchmarks from WordStream, Databox, and SaaStr.
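The saturation pattern can be sketched the same way. The ceilings and ramp rates below are invented for illustration, not benchmarks from the cited sources:

```python
import math

# Illustrative saturation sketch: conversion probability approaches a
# ceiling as impressions accumulate, with diminishing returns.
def conversion_prob(impressions, ceiling, ramp):
    return ceiling * (1 - math.exp(-impressions / ramp))

# A fast-ramping "SaaS-like" curve vs. a slower "enterprise-like" one.
saas = conversion_prob(12, ceiling=0.06, ramp=4)         # near its ceiling by 12
enterprise = conversion_prob(12, ceiling=0.10, ramp=10)  # still climbing at 12
```

The shapes differ, but the lesson is the same: past a certain point, each additional impression buys less. Knowing where that point sits for your motion is what keeps frequency from tipping into fatigue.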

How to Get Started Without Boiling the Ocean

Most brands aren’t ready to run full Bayesian models. That’s OK.

It’s better to tackle the low-hanging fruit and build from there:

  • Track both converting and non-converting paths
  • Look for signal decay (how quickly clicks or views stop driving action)
  • Identify how owned, earned, and offline channels might contribute earlier than you think
  • Ask your data team or vendor if they support probabilistic models (some do; many fake it)

So if you’re only measuring what’s easy to measure, you’ll keep spending money in the wrong places and frustrating your exec team.

| Measure This | Not Just This |
| --- | --- |
| Decay of ad influence over time | Last-click or last-touch only |
| Non-converting journeys | Only converting paths |
| Cross-channel synergy | Single-channel views |
| Confidence intervals in attribution | Fixed attribution weights |
| Owned and offline media impact | Only digital paid media |

Final Thoughts

Like causal models, Bayesian models are essential to B2B marketing analytics. Relying on click-based attribution hides where budget is wasted and where your brand building is pulling its weight.

Causal and Bayesian models aren’t mutually exclusive. Bayesian structural time series models, for example, blend both, helping you estimate impact while accounting for timing, media decay, and external variables.

These models and tools help us make smarter marketing decisions.
