Keep it simple. Keep it real.

Achim’s Razor

Weekly insights about GTM effectiveness, building brand reputation, and AI adoption.

Strategy

Why You Shouldn’t Trust a Forecast That Starts With the Past

Most B2B forecasts fail upstream. Mark Stouse explains why weak models, time lag, and bad assumptions break GTM planning and revenue trust.
March 24, 2026
|
5 min read

Most forecasts fail before they’re even built. 

Not because of bad data. Because of bad thinking… upstream.

That’s what Mark Stouse, CEO of Proof Causal Advisory, and I were chatting about even before we got into our latest Causal CMO chat.

Before models, cadence, or tooling, GTM teams need to deal with something harder: the thinking that makes a forecast worth defending.

Takeaways

  • Chasing every market move keeps you behind the problem. Pick durable plays instead.
  • “Making the quarter from what marketing does in that same quarter” was always a fantasy (and still is).
  • Strategy is not the how. It’s the what. Most companies don’t actually have one.
  • A trustworthy forecast isn’t rooted in the past. It updates when reality changes.
  • If you can’t explain the variance, you’re not ready to share it yet.

Stop chasing the graph

Most GTM teams don’t want to hear this:

“The temptation is to try and follow every move that the graph makes with some sort of counter to it. The problem is that if you do that, you’re always behind your externalities. You’re never going to get ahead of them.”

In other words, stop reacting to every signal. 

Pick investments that hold up across a wide range of conditions. In B2B right now, especially tech, that means brand and reputation. Not because it’s comfortable. Because it’s what’s working.

The catch is time lag. 

Mark put it at 9 to 24 months before brand investment shows up in real results. I’ve seen similar timeframes (18 to 36 months). That’s why so many leaders cut it. They couldn’t connect the cause to the effect. So they called it worthless. The lag was hiding the impact.

Chart showing B2B marketing time lag: recognized revenue reaches 50% around month 6, while realized revenue reaches 50% around month 18, highlighting where leadership gets impatient.
How to stop chasing the graph… using a graph (haha)

We’ve covered time lag before. It's a core part of my End of MQLs series.

Right now, buyers aren’t shopping for the best pitch. They’re managing downside risk. They gravitate toward names they already trust, especially in times of crisis when headwinds are strongest. If you haven’t been building up your brand reputation before the pressure hits, you’re starting from zero when it matters most.

This connects to what we covered in our previous Causal CMO chat on the 95:5 rule. Only 5% of your market is in-market at any given time. The other 95% are forming impressions with or without you. 

Brand is what stays in the room when your sales team isn’t.

Demand gen was a cheap-money phenomenon

“For 20 years in B2B, marketers and salespeople agreed on very few things. But one was that demand gen was real and you could somehow make somebody want to buy your product. The main reason why that appeared to work for so long is that money was cheap.”

When capital was abundant and risk tolerance was high, activity looked like causation. You pushed, things happened, you took credit. 

But when capital tightened, the model broke. The activity stayed. The results didn’t follow.

“This whole idea of making the quarter based on what marketing is doing in that same quarter was always a fantasy. Always. The time lags are too long for that to be true.”

As Rohan Light shared, “cheap money equals lazy thinking”. The constraint we’re in now is forcing a more honest accounting.

These systems weren’t built by idiots. They were built for different conditions. Those conditions are gone. 

We covered similar threads in The GTM Reality Gap and The Illusion of Control.

Mindset before model

To be clear, this isn’t a tooling problem.

“This is a mindset issue. It’s not a technology issue. One of the things you have to confront is just how much we think in patterns. When we do that, we assume the past is prologue. It’s not.”

Patterns help… until the market changes and you’re still using the old map. Stack enough of them and you get a house of cards. Either very right or very wrong, with no middle ground. 

Right now, reality is moving fast enough that the patterns expire before most teams notice.

Strategy is not the “How”

“Your strategy is the most important thing. Do you have a durable strategy? Can it survive a lot of different changes, a lot of volatility? The planning, the ops, the tactics: that can easily change and should change and will change all the time. Strategy should not.”

Strategy is the what. Not the how.

Most companies don’t have a business strategy. If your org runs a marketing strategy, a sales strategy, a data strategy, and an IT strategy in parallel, that’s a sure giveaway there’s nothing unified sitting above them.

A recent HBR study of 500 organizations found that firms with stronger foresight capabilities (built around continuous signal detection and updating, not one-off planning exercises) report a meaningful performance edge. That finding matches what Mark said: durable strategy isn’t a static document. It’s a capability.

Without a durable strategy, you’re not forecasting. You’re decorating a guess.

What makes a forecast trustworthy?

Not one rooted in the past. Conditions change. A model that can’t update isn’t a model.

“The definition of a trustworthy model is the one that gets closest to reality. You have to have a good model that represents reality, and then you have to be able to update it with whatever cadence is right for your business.”

The GPS analogy Mark often uses is the right frame. It doesn’t refuse to work when there’s an accident ahead. It recalculates. And it tells you why you’re going to be late. 

That’s the test: not just accuracy, but explainability.

“If you have a gap between your projection and reality but can explain what caused it, you’re there. If you can’t explain the variance, you’re not done yet.”
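Here’s a minimal sketch of that test in practice. Every number and driver name below is hypothetical, and in a real setup the driver impacts would come from your causal model, not from whoever tells the best story:

```python
# Toy version of the "explainable variance" test. All figures are hypothetical.

forecast = 100.0   # quarterly revenue projection ($K)
actual = 82.0      # what actually landed ($K)

# Named drivers with estimated impacts ($K), ideally from a causal model.
drivers = {
    "deals slipped to next quarter": -10.0,
    "champion turnover at two key accounts": -5.0,
    "price concessions": -2.0,
}

gap = actual - forecast            # -18.0
explained = sum(drivers.values())  # -17.0
unexplained = gap - explained      # -1.0

print(f"gap {gap:+.1f} | explained {explained:+.1f} | unexplained {unexplained:+.1f}")

# Rough threshold: if most of the gap traces to named causes, share it.
if abs(unexplained) <= 0.2 * abs(gap):
    print("Variance is explainable. Ready to share.")
else:
    print("Too much unexplained variance. Not done yet.")
```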

One more thing: real-time data sounds valuable but often works against you. People slow down when flooded with signals. What you need is the right data at the cadence your decisions actually require, not a dashboard that keeps everyone busy and nobody clear.

Final thoughts

The standard is not prettier, louder dashboards. Not more pipeline theater. Not false certainty dressed up as confidence.

It’s a model that updates. A strategy that holds. A variance you can explain.

That’s a forecast worth taking to your leadership team.

That’s a forecast everyone can trust.

Missed the session? Watch it here.


If you like this content, here are some more ways I can help:

  • Follow me on LinkedIn for bite-sized tips and freebies throughout the week.
  • Work with me. Schedule a call to see if we’re a fit. No obligation. No pressure.
  • Subscribe for ongoing insights and strategies (enter your email below).

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!

Strategy

How Not to Hire with AI and Fix Broken Recruiting Processes

AI-written JDs attract AI-written CVs. Here’s how overloaded teams create hiring noise and how to use AI to cut admin without handing off judgment.
March 17, 2026
|
5 min read

Co-authored by Gerard Pietrykiewicz and Achim Klor

Recently, Gerard received a recruiter pitch on LinkedIn.

It looked like AI wrote it and nobody bothered to fix it.

The role wanted a Business Analyst, Project Manager, AI expert, cybersecurity expert, and governance translator between business and technical teams. 

It also wanted 8+ years of AI experience. Really?

LinkedIn recruiter message showing an AI-generated job pitch with unrealistic requirements, including 8+ years of AI experience for a hybrid BA, PM, cybersecurity, and governance role.

This is where a lot of AI adoption goes off the rails.

Not because the tool failed.

Pressed to fill roles, teams don’t stop to define the actual work. So AI gets handed the job. It produces a bloated, catch-all mess no serious candidate wants to read. It gets blasted out at scale. Candidates fire back AI-generated applications and CVs at scale. HR gets buried in a sea of synthetic sameness.

That’s not a hiring process.

That’s an AI-to-AI death spiral.

HR is just one example. Similar spirals show up across the organization. AI creates the noise, then more AI gets used to clean it up.

The spiral in plain terms

  1. A hiring manager, pressed for time, asks AI to write a job posting.
  2. The posting goes out with inflated requirements nobody stopped to sanity-check.
  3. A recruiter feeds it into an outreach tool. Thousands of messages go out.
  4. Hundreds of AI-assisted applications come back within hours.
  5. The company deploys more AI to filter the noise it just created.

Nobody saved time. Nobody hired better. The team ran a faster version of a broken process and called it progress.

SHRM reports that average cost-per-hire and time-to-hire have both risen over the last three years, even as AI use in HR has climbed. More automation did not automatically produce better hiring.

HR is drowning in its own output

HR teams are not doing this because they are careless. They are buried.

Requisitions pile up. Inboxes fill. Scheduling, screening, follow-ups, compliance documentation. It is a lot of admin for a function that is supposed to be about people and judgment.

So AI looks like relief. Write the JD faster. Screen faster. Reach more candidates faster. SHRM reports that nearly 9 in 10 HR professionals in organizations using AI for recruiting say it saves time or increases efficiency. It is also being used heavily for tasks like writing job descriptions and screening resumes.

The problem is that speed without clarity makes it worse. A badly written job description used to waste a few days. Now it creates a flood of waste.

One generic posting goes out. Hundreds of AI-assisted CVs come back. HR saves time on the front end, then loses it all cleaning up a mess they helped create.

Delegation is not abdication. AI handles the admin. It does not replace judgment.

The JD problem is a thinking problem

A marketing hire is not a product hire. A sales hire is not a customer success hire.

A CS leader might need process discipline, de-escalation skill, and commercial awareness. A product marketer might need positioning, customer insight, and message testing. An AE needs discovery skill, objection handling, and the ability to translate business pain into urgency.

These are not the same jobs.

Too many AI-generated JDs make them sound like they were copied from the same template with a few nouns swapped. That’s not speed. And it doesn’t hide the fact that nobody defined the actual work.

Good candidates can tell. They know when a posting was written by someone who understands the role. They also know when it was stitched together from generic prompts and wishful thinking.

Use AI to clear admin, not to avoid thinking

AI should help hiring teams get more deliberate. Not more automatic.

Use it for scheduling, interview summaries, scorecard drafts, candidate FAQ replies, debrief notes. That’s the repetitive work that eats recruiter time and produces nothing that requires a human.

Clear that work first. Then use the time to do what AI cannot:

  • define the role
  • write a JD that reflects real work
  • design an evaluation that tests for the right things

Canada’s Public Service Commission is clear on this point. Hiring managers remain accountable for decisions in the hiring process, and they must validate AI-generated ideas and suggestions to make sure the content is accurate, relevant, and adapted to actual hiring needs.

What to do this week

Before you open the next requisition, answer three questions:

  1. What work is not getting done right now?
  2. What part of the recruiting process can AI take off your team’s plate?
  3. What judgment still needs a human?

Write the JD after you answer those. Not before.

Final thoughts

If your hiring process creates noise faster than your team can think, AI will not help you. It'll just amplify your confusion.

The tool is not the problem. 

Delegating your thinking to it is.

If you like this co-authored content, here are some more ways we can help:

Cheers!

Achim is a fractional CMO who helps B2B GTM teams with brand-building and AI adoption. Gerard is a seasoned project manager and executive coach helping teams deliver software that actually works.

This article is AC-A and published on LinkedIn. Join the conversation!

Insight

The Causal Bridge: A Common Language for Your CEO, CFO, and GTM Team

GTM is failing because CEOs, CFOs, and GTM teams work from different models of reality. Here's what the bridge looks like and why attribution was never the answer.
March 11, 2026
|
5 min read

Your GTM playbook worked... two years ago.

That’s the uncomfortable (and inconvenient) truth that stood out after my recent Causal CMO chat with Mark Stouse, CEO of Proof Causal Advisory.

We didn’t debate whether old methods are broken. That’s already been settled. We got into what it actually takes to fix the problem: a shared model of reality that CEOs, CFOs, and GTM teams can all work from.

Right now, most companies do not have that. They have siloed dashboards, attribution debates, and quarterly budget battles. The CEO wants confidence. The CFO wants proof. The GTM team wants room to do the work. 

Nobody is singing from the same song sheet. 

And that, my friends, is the Causal Bridge the entire organization must build together. 

Here’s the recap.

Takeaways

  • Multi-touch attribution measures assumptions, not reality. It never worked.
  • 70-80% of GTM outcomes are driven by external forces you do not control.
  • Finance now treats GTM spend as a loan. It expects payback.
  • Cut marketing today. Watch sales fall off the cliff in nine to twelve months.

You can be right and still be wrong

Mark didn’t mince words about the truth of this common mindset:

“You can be right two years ago or a year ago and be dead wrong today. And that’s just circumstances outside of your control.”

Most GTM banter is insular and plays the blame game. Wrong strategy. Wrong hires. Wrong execution. But Mark’s research points to a different culprit: externalities.

They are the headwinds, tailwinds, and crosswinds in the market that make up 70-80% of what drives outcomes. Things no one on your team controls.

He estimates that in the last 10-15 years, over 95% of GTM ineffectiveness traces back to teams failing to factor in those external forces. We keep running the same plays into a market that has already moved way past us.

“You’ve got your framework, and you’re turning the crank over and over again. But you’re doing it into the teeth of a hurricane.”

If that sounds familiar, the problem is not your team. It is the model you are using to read the market.

 

Finance changed the rules

Marketing budgets have flatlined. Gartner’s 2025 CMO spend survey put them at 7.7% of company revenue, flat year over year, while scrutiny from Finance keeps rising. That pressure has a specific shape.

Finance is no longer treating GTM spend as opex. It’s now considered a loan of shareholder capital. Revenue acquisition, profit acquisition, cash flow acquisition. And loans require payback. 

When CAC spikes while deal volume falls and velocity slows, the payback period stretches until the math stops working.

“A CFO once said to me that if go-to-market was its own business, it would be bankrupt.”

The three outcomes Finance is actually watching: 

  1. more deals
  2. bigger deals
  3. faster close

Each maps to revenue, margin, and cash flow. If your reporting cannot connect activity to those three things, you are speaking a language the CFO cannot act on.

Right now, faster time to close is the single cash flow move most likely to get a CFO off your back. That is how tight the pressure is.

The measurement system was broken from the start

Here is the part that makes the CFO’s skepticism harder to argue with.

The attribution tools GTM teams have relied on for 10-15 years were not measuring reality. They were measuring assumptions. 

Every model (first-touch, last-touch, multi-touch) was built on arbitrary weights and the fiction that B2B buying is a linear, deterministic, and trackable sequence of cause and effect. It is not. It never has been.

Multi-touch attribution in particular looked at desired reality, not actual reality. Change the weightings, change the results. That is not measurement. That is a fairytale generator.
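A tiny sketch makes the point concrete. The touchpoints and weights below are hypothetical, but the mechanics are exactly how weighted attribution works:

```python
# Same journey, three weighting schemes, three different "truths".
touches = ["webinar", "ebook", "demo_request"]  # one deal's journey, in order

def attribute(touches, weights):
    """Assign each touchpoint its share of the revenue credit."""
    return dict(zip(touches, weights))

print(attribute(touches, [1.0, 0.0, 0.0]))  # first-touch: the webinar "drove" the deal
print(attribute(touches, [0.0, 0.0, 1.0]))  # last-touch: no, the demo request did
print(attribute(touches, [0.4, 0.2, 0.4]))  # U-shaped MTA: pick the story you need
```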

“MTA didn’t look at reality. It looked at your desired reality. It looked at your beliefs.”

What makes this harder to dismiss: Jon Miller, co-founder of Marketo and one of the architects of modern demand generation, admitted “Attribution is BS”. 

His argument: buyers are already two-thirds through their decision process before they engage with vendors. The interactions that actually shaped the deal (anonymous content consumption, word of mouth, prior brand exposure) happened long before any tracking pixel fired. The “credit” arbitrarily went to whatever campaign was running when someone finally filled out a form.

Miller’s conclusion: teams have been grading marketing with broken math.

The result: brand investment starved, short-termism rewarded, sales-marketing alignment broken, and growth stalled. That damage is real, and it accumulated over more than a decade.

Example from Jon Miller, co-founder of Marketo, showing why attribution models fail: flawed assumptions, invisible buyer journeys, and damage from over-attributing to demand
Source: Jon Miller, www.jonmiller.com

This is why Finance lost trust in GTM reporting. It is not just that the numbers were sometimes wrong. It is that the system was designed in a way that let teams produce whatever numbers they needed to cover their asses. CFOs figured that out. That is a significant part of why the loan framing now lands the way it does.

The cliff shows up nine months late

True story: A CFO wanted to eliminate marketing entirely. Just to see what happened.

Well, there’s actually a model for that:

“We can show it with marketing. We can show the date that you kill all funding for marketing. We can show how long it takes for the results to degrade your performance, and then it’ll show the cliff that you fall off of. It ranges from nine to twelve months. It’s as sure as death and taxes.”

Time lag is what fools people. Cut marketing, ride the wave from past investment, coast for, say, 9-12 months, then watch sales nosedive. By the time it registers, you’re nine months behind on rebuilding and have killed all your momentum.
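Here’s a toy decay model of that pattern. The carryover rate is a made-up illustration, not Mark’s actual model, but it shows why the slide is invisible at first:

```python
# Toy carryover ("adstock") model: kill all marketing spend at month 0
# and watch the accumulated effect decay. Parameters are invented.

carryover = 0.92  # share of last month's effect that survives each month
effect = 100.0    # index of marketing-driven sales support at the cut

for month in range(1, 13):
    effect *= carryover  # no new spend, so the stock only decays
    print(f"Month {month:>2}: {effect:5.1f}% of pre-cut effect remains")

# Around month 9 you're near 47%; by month 12, near 37%. The slide feels
# slow, then sales "suddenly" fall off the cliff.
```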

We covered this pattern during our Illusion of Control chat. 

Brand, for example, is not a soft metric. It is a multiplier of sales productivity that requires more time than we are willing to give it. And because time lag makes the impact of building brand reputation invisible in the short term, impatient CEOs and CFOs don’t invest in it. 

De-risking decisions is the real aha moment

When leaders see a causal model for the first time, the first readout shows waste. That almost always makes everyone at the table squeamish. But there is good news:

“The real aha moment is when it dawns on them that going forward, they have just de-risked their spend and their decisions dramatically.”

That is the shift from CYA and defending past spend to making better calls about future spend. From fighting over revenue credit to understanding that sales gets the credit for driving revenue and marketing multiplies what sales can do (marketing does NOT create revenue!).

With correlation-based tools, we change the weightings and we change the results. We can manufacture any narrative. With Causal AI, we cannot. The model does not care about our assumptions. It reflects reality. 

“There’s great alignment between Causal AI and reality. In fact, I say a lot that causation equals reality.”

That is exactly why Causal AI builds trust with Finance, and exactly why most GTM leaders have been slow to adopt it.

Start small. Do it quietly.

We already talked about how to do a skunkworks GTM project. Mark brought it up again because it works, and because it sidesteps the politics that kill most internal change efforts before they start.

If you’re not ready to make this a company-wide initiative, start by picking one part of the business, run the model quietly for nine months, and make decisions based on what it shows.

“People around you will not understand that you’ve done something, but they will feel it. You’ll start getting comments in the hallway: there’s a new energy coming out of marketing.”

When results are visible, tell the story. Your peers already believe it. You are just giving them the explanation for what they felt.

Final thoughts

Good teams are getting judged like bad teams right now. Not because they stopped performing. Because the market moved and the model did not.

The CEO, CFO, and GTM team are not failing to communicate because they are stupid or in different departments. They are working from different models of reality. 

Until that changes, every quarter continues to be the same fight. More pressure. More reporting. Same broken loop.

The market moved. The model did not. 

That is the problem. 

And now you know what the fix looks like.

Missed the session? Watch it here. Mark’s full 5-part research is on his Substack.

If you like this content, here are some more ways I can help:

  • Follow me on LinkedIn for bite-sized tips and freebies throughout the week.
  • Work with me. Schedule a call to see if we’re a fit. No obligation. No pressure.
  • Subscribe for ongoing insights and strategies (enter your email below).

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!

Insight

More Output, Less Thinking. That’s the Real AI Problem.

71% of B2B firms use AI mainly to produce content. Most aren't thinking better. They're just moving faster in the wrong direction.
March 3, 2026
|
5 min read

By Gerard Pietrykiewicz and Achim Klor

Achim is a fractional CMO who helps B2B GTM teams with brand-building and AI adoption. Gerard is a seasoned project manager and executive coach helping teams deliver software that actually works.

The White House recently posted a TikTok with AI-edited media that put fake words in Brady Tkachuk’s mouth. Tkachuk called out “the fake news” publicly.

And although the video disclosed it contained AI-generated media, the damage was done. It stayed up and reached over 11 million views as of this publication.

That’s what happens when you prioritize output over judgment.

The label doesn’t protect you. The audience still holds someone accountable. Someone has to do damage control. In the case of the White House TikTok, that someone is Brady Tkachuk.

The point is that B2B GTM teams are doing a version of this too. Here’s why.

Takeaways

  • AI doesn’t fix a bad brief. It executes one at scale.
  • Most B2B teams use AI to produce more content, not better content.
  • Effectiveness first. Efficiency second. Flip that order and you just fail faster.
  • If your team can’t show you the thinking behind the work, you have a leadership problem, not a tool problem.

The pressure came first

Before AI, GTM teams were already stretched. Headcount flat. Targets up. Leadership asking for more content, more campaigns, more touchpoints, more MQLs.

Along comes AI, which looks like the answer. Faster copy. Faster decks. Faster everything.

Nobody stopped to ask: more of what, exactly? For whom? Why would anyone care?

Skip those questions and you don’t get growth. You get more crap. 

And chaos. 

Here’s what we keep seeing

Content goes out without critical thinking, fact-checking, or proofreading. The assumption is that someone else reviewed it. Often nobody did. 

Teams publish without knowing who it’s for or why anyone would care. 

Remember the Queensland Symphony Orchestra AI-generated Facebook ad?

Queensland Symphony Orchestra Facebook ad showing an AI-generated image of a couple in formal wear seated in an ornate concert hall, with violin players visible in the audience around them.

That was in 2024 when AI was nowhere near as good as it is today. And it was still approved!

Even their own musicians took exception. According to Slipped Disc, when musicians raised concerns with the marketing director, they were told to “stay in their lane.” Boooo!

The Guardian covered the broader fallout, including the arts union calling it unprofessional and disrespectful. The post was eventually removed. 

For an organization whose entire value is human performance and craft, the audience saw the contradiction immediately.

When the focus stays on tools instead of the broken model underneath, this is what you get. 

8 x 0 is still 0

If the thinking is weak, publishing faster doesn’t help. You just reach the wrong audience (or no audience) with the wrong message faster.

AI doesn’t fix a bad execution. It amplifies it.

Effectiveness first. Then efficiency. Flip that order and you don’t get more done. You do the wrong things faster, with more anxiety, and less to show for it.

A demo tape still needs a band

The vibecoding article we wrote in December is worth repeating here. The demo tape captures an idea so the band can react and interact. Then you go to the studio with real instruments and real people and lay down the actual track.

The demo was never the album. It was a faster way to ideate.

AI is like the demo tape phase. Use it to make you better at your job. Research faster, stress-test messaging, get that first draft done so your team has something to go on. In other words, “demo” your creative ideas.

That’s the right way to use the tool.

Use AI to help you do a better job, not do the job for you.

The final output still needs a human who understands the customer and can tell the difference between content that connects and content that just takes up space.

YouTube comments on a video post. Three viewers criticize AI-generated footage.

If you “ship the demo”, buyers can tell, as the comments above from a popular YouTube channel show.

This is a leadership problem

A 2026 survey of 277 B2B marketing leaders in the UK and Ireland found that 71% of companies use AI primarily for content creation, and 56% see its main value in tactical execution. Most teams aren’t using AI to think better. They’re using it to ship faster in whatever direction they were already going.

If that direction is wrong (or outdated), you’re just headed in the wrong direction faster.

The tools didn’t create this. The pressure to work old models did. And pressure is a leadership variable. 

The tool is not the problem. The thinking (or lack thereof) behind the tool is.

If your team can’t tell you who a piece of content is for, what decision it supports, and what proof backs the claims, that’s not an AI problem. 

Fewer campaigns with a real point of view beat more campaigns with none. In this case, less is more.

Final thoughts

Before anything goes out, ask your team to show you two things: 

  1. the thinking behind it
  2. the prompt they used

If they can show you both, you’re good. 

If all they can show you is the prompt, that’s your “come to Jesus” conversation.

And that conversation isn’t about the tool.

If you like this co-authored content, here are some more ways we can help:

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!

Insight

Your Stack Is Not the Fix. Your Model Is.

Your martech stack does exactly what you told it to do. If GTM results are sliding, the problem isn't the tools. It's the model underneath them. Here's what to fix.
February 24, 2026
|
5 min read

Two weeks ago, a CFO told Mark Stouse, “My go-to-market operation is bankrupt.”

Not struggling. Not underperforming. 

Bankrupt.

This is where a lot of B2B tech companies quietly are right now. CAC keeps climbing. Deal size is down. Cycles are longer. And the default response is to replace tools, rebuild the dashboard, or layer AI on top of it all.

During our recent Causal CMO chat, Mark laid out a simple but uncomfortable truth: the tools aren’t the problem. The GTM model underneath them is. 

And that’s a huge reason why B2B GTM remains stuck.

Takeaways

  • Your martech stack reflects your assumptions. If results are sliding, audit your model before you buy more software.
  • Being wrong isn’t the problem. Staying wrong is.
  • AI on top of bad logic scales the problem.
  • GTM teams that can’t explain why things stopped working are creating a career-level risk.

Software is codified belief

All software, including your entire martech stack, is codified logic. It embeds assumptions about how buying works, how leads behave, and what predicts revenue.

“All software is codified information. It represents the way we learn and process information. And that means it embodied the logic that went into B2B marketing and go-to-market, starting in about 2000-ish.”

That’s the deterministic gumball machine. You put a quarter in. A gumball comes out.

Gemini generated retro pop-art illustration of a gumball machine filled with MQL-labeled gumballs, with a hand inserting a 25-cent coin and the word CLINK — representing the deterministic B2B lead generation model.

Apply that to B2B revenue and you get: fill the top of the funnel, closed deals come out the bottom.

The problem is that buying is human behavior. Not thermodynamics. Treating it as deterministic was always wrong. It “appeared to work” long enough that none of us had to face it. Now we do.

The reality is that marketing has always been probabilistic. It has never been a linear, deterministic process.

The wrong model, at scale

Here’s what happens when you build technology on top of flawed assumptions like “gumball” logic:

“Technology is a point of leverage for human activity. If you’re wrong in your logic, automating it for scale just means you create more crap.” 

That’s why a lot of teams feel worse after modernizing their stack. More automation. More sequences. More scoring. More dashboards. More “confidence”. Less truth.

The tools worked exactly as designed. The design was the problem.

That design wasn't invented by marketers. It came from VC boards and investors who wanted predictability and a narrative they could control. 

“The idea of a deterministic go-to-market machine originated with VCs.” 

A lot of what GTM teams are being blamed for now was baked in from the top. The problem is conditions changed. For example, “no decision” now kills more deals than competitors do.

Old logic is exposed.

The consequences of being wrong

This is the part most GTM conversations skip.

B2B marketing hasn’t grappled with what it actually means to have been wrong. Not just tactically. Foundationally. And the proof is how AI is being deployed right now: layered on top of the same frameworks that already stopped working before GenAI became a thing.

“No one likes to hear this. I don’t like to hear it. But if we have the wrong tool, it’s because it has the wrong logic sequence. It’s embodying our logic sequence, the one that we told it to have.” 

Here’s why this is important to understand if GTM teams want to fix the model:

“Accumulated knowledge and experience goes straight to the heart of our self-concept. As soon as you tell me that 20% of my knowledge is obsolete, I take that rather personally. When I realized that the price of learning is being wrong about what I thought I knew before, I became much more okay with it. Even if your response is, I’m just not going to learn anymore, you’re still wrong. You’re just wrong and frozen.”

Let that sink in for a moment.

Wrong and frozen is not a neutral position. It’s a career-ending one for GTM leaders who can’t explain to their CFO why results keep sliding.

And if you keep hitting a wall with your leadership team because they don’t want to hear the truth, it may be time to update your CV.

What to keep, what to kill

A causal model is what Mark calls “a digital twin of known reality”. It surfaces what’s actually driving outcomes, net of everything you don’t control. And it produces something most dashboards never give you: a stack rank of what’s working.

“You see things change places in that stack rank. If you’re in the bottom third, you need to kill it or figure out why.”

It also tells you time to value. Every tactic has a different lag to results. If you don’t know when something is supposed to pay off, you’ll either kill it too early or keep it too long.
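To picture the output, here’s a sketch of what a stack rank with time-to-value attached might look like. The tactics, contributions, and lags are invented; a real causal model would estimate them:

```python
# Hypothetical tactic stack rank with time-to-value attached.
tactics = [
    # (name, estimated share of outcome, months until payoff)
    ("analyst relations",  0.22, 14),
    ("brand campaigns",    0.18, 18),
    ("field events",       0.09,  4),
    ("paid search",        0.05,  1),
    ("outbound sequences", 0.02,  2),
]

for name, share, lag in sorted(tactics, key=lambda t: -t[1]):
    print(f"{name:<20} contribution={share:.0%}  time-to-value={lag}mo")

# Bottom third of the rank: kill it, or figure out why it's still there.
# Time-to-value tells you when each line item is even allowed to be judged.
```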

This is where GTM leaders need to step up and call a spade a spade:

“Time lag allows you to set expectations accurately with your executive team. Let’s say, looking in advance, this is going to create a lot of value, but it’s going to take 16 months. If you come to me and complain at month 12, or month 10, I’ll point back and say, You agreed. Here’s your signature.”

That’s not a forecast. That’s a defensible commitment. There’s a difference.

GPS, not dashboards

A causal model doesn’t report on last quarter. It tells you where you are now, what's changed, and what route gives you the best chance of hitting your destination. Judea Pearl calls this causal engineering: not just what happened, but why, and what to do differently.

Mark equates this causal mindset to a GPS:

“It will start to say: you were on a really good route. But things have changed. This is not a good route anymore. In fact, you may need to switch cars.” 

If the map is wrong, every route looks optimized. You’re still lost.

This shift in mindset is what GTM teams need to consider to get their models unstuck.

Final thoughts

If your GTM is stalling, ask yourself these questions before you approve any new tool or campaign:

  • Can you explain in plain language why you win deals and why you lose them?
  • Do you know which tactics are actually driving revenue, net of market conditions?
  • Are you tracking “no decision” as a first-class outcome?
  • When did you last audit the assumptions your stack is built on?
  • If you doubled activity next quarter, would the underlying logic hold?

If the answers are unclear, you don’t have an execution problem. You have a model problem.

Start there. Write down your current GTM assumptions. All of them. Then ask which ones you’ve actually tested and which ones you inherited. 

That’s the first step. It costs nothing but honesty.

More won’t fix it. Faster won’t fix it.

Fixing the logic fixes it.

Missed the session? Watch it here.

Mark’s full research is on his Substack.

If you like this content, here are some more ways I can help:

  • Follow me on LinkedIn for bite-sized tips and freebies throughout the week.
  • Work with me. Schedule a call to see if we’re a fit. No obligation. No pressure.
  • Subscribe for ongoing insights and strategies (enter your email below).

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!

Execution

AI OKR Cadence: Run AI for 12 Months and Prove the Uplift

Set an AI OKR, run a 12-month cadence, and measure whether AI caused the uplift. Includes guardrails, outsourced workflows, and no-cheating metrics.
February 17, 2026
|
5 min read

By Gerard Pietrykiewicz and Achim Klor

Achim is a fractional CMO who helps B2B GTM teams with brand-building and AI adoption. Gerard is a seasoned project manager and executive coach helping teams deliver software that actually works.

If your AI plan does not start with a real business problem, it’s a hobby.

Write one page that says: 

  • I have a problem. 
  • I have an obstacle. 
  • AI can help me outsource.

Then turn it into an OKR, so it survives meetings, churn, and Q2 priorities.

Key takeaways

  • Start with the obstacle, not the tool.
  • AI helps people solve problems. It does not own the decision.
  • Pick one use case. Measure it like you mean it.
  • Guardrails speed adoption because people stop guessing what’s allowed.
  • If it’s not on a calendar, it won’t stick.

Sound familiar?

Someone says, “We need an AI strategy.”

Two weeks later you have:

  • a tool short-list
  • a slide deck full of swimlanes
  • ten opinions
  • zero change in how work gets done

That’s a common pattern.

This article is a follow-up to One-Page AI Strategy Template: Replace Roadmaps With Clarity, where we argued that your AI strategy should fit on one page.

This article shows you how to write it, then turn it into an OKR, then run it for 12 months without it turning into another forgotten “playbook” in your drawer.

The One-Page AI Strategy

Open a doc and answer the following four questions.

1) Diagnosis: where are we now?

A strategy has to earn its keep.

Prompt:

  • What is our single most important goal for the next 12–18 months?
  • What obstacle is blocking this goal right now?
  • Where can AI take work from 0 to 80% in minutes so our people can finish the last 20% and ship?

Example:

  • Goal: Improve retention.
  • Obstacle: Support is stuck answering repetitive tickets all day. Response time slips. Burnout rises.
  • How AI helps: Deflect the repetitive work so humans can handle complex cases.

If your diagnosis starts with “We want to use AI,” you’re already off course. AI is not a replacement plan. It’s an outsourcing plan for the first draft.

2) Guiding policy: how will we use AI?

This section stops chaos.

You need two things:

  • Primary focus: Make it one use case. One sentence. For example: “Our primary AI focus is to reduce support load by resolving common inquiries with approved AI tools, so agents can spend time on complex cases.” This is your spine.
  • Guardrails: They are the lanes that let people move faster because they stop second-guessing. Stay in your lane. Not sure? See the categories below.

  • Permitted (low-risk, pre-approved): summaries of internal docs, first drafts, meeting notes, unit test scaffolding, outlines
  • Restricted (needs review): anything using customer data, automating client-facing messages, connecting tools to production systems
  • Forbidden (too risky): PII in public tools, sensitive financial data in prompts, unapproved tools connected to company systems

AI helps you solve the problem. It does not solve it for you. It does not own the decision.

If you want a practical risk anchor, OWASP’s LLM Top 10 maps well to what breaks in real deployments (prompt injection, insecure integrations, unsafe output handling).

And if leadership wants a governance reference, NIST AI RMF and the GenAI profile give you a credible backbone without turning this into a policy manual.

3) Target: What does success look like (and did AI matter)?

Pick one metric that proves the policy worked.

Target:

  • Business KR: the outcome we want.
  • AI Lift KR: the delta we only get because of AI.
  • No-cheating KR: the metric that catches gaming or quality collapse.

Example: 

  • Business KR: “Resolve 50% of incoming support tickets by end of Q3 with 90% CSAT.”
  • AI Lift KR: “AI increases ticket resolution rate by +15 points versus the same process without AI (holdout test).”

Now add no-cheating metrics:

  • Reopen rate
  • Escalation rate
  • Time-to-resolution for escalations

If those get worse, your “wins” are fake.
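Here’s a minimal sketch of how that holdout comparison might be scored. The numbers are invented, and in practice you’d pull them from your ticketing system:

```python
# Scoring the AI Lift KR against a holdout. All numbers are hypothetical.
# Half of tickets route through the AI-assisted flow, half through the old one.

ai_group      = {"resolved": 560, "total": 800, "reopened": 45, "escalated": 60}
holdout_group = {"resolved": 280, "total": 800, "reopened": 40, "escalated": 58}

def rate(group: dict, key: str) -> float:
    return group[key] / group["total"]

lift = rate(ai_group, "resolved") - rate(holdout_group, "resolved")
print(f"AI lift: {lift:+.0%} resolution rate")  # the delta only AI explains

# No-cheating check: the lift doesn't count if quality collapsed.
for metric in ("reopened", "escalated"):
    delta = rate(ai_group, metric) - rate(holdout_group, metric)
    print(f"{metric}: {delta:+.1%} vs holdout")
```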

4) Coherent actions: what are the first steps?

List the next 2–3 actions for the next 90 days.

Example:

  1. Pick one approved pilot tool and one measurement method.
  2. Categorize the top 20 ticket types and define the escalation path.
  3. Publish the Allowed/Restricted/Not allowed policy and train managers on it.

If your first step is “create a committee,” you’re writing a plan to feel safe, not to get results.

Turn the one-page strategy into an OKR

This is where it becomes operational.

Google’s OKR guidance keeps it simple: 

  • Objectives set direction.
  • Key Results stay measurable and easy to grade.

Objective

Your Primary focus becomes the Objective. Human. Clear. Directional. For example: “Use AI to reduce support load so our agents can solve customers’ hardest problems.”

Key Results

Your Target becomes the KR. Numeric. Time-bound. Auditable. For example: “Resolve 50% of incoming support tickets via approved AI by end of Q3, with 90% CSAT.”

Now each team can set supporting KRs that fit their world, but everyone works against the same top-level definition of success.

12-Month AI Cadence

Here’s a 12-month timeline with five swimlanes. This cadence keeps your AI outsourcing plan alive after kickoff.

12-month AI OKR cadence diagram with swimlanes for OKR review, guardrails, outsourced workflows, enablement, and measurement.

Final thoughts

AI adoption fails because of a lack of planning and discipline.

People can’t connect it to a problem they actually own.

As we already covered in the previous article, one page fixes that.

An OKR keeps it alive.

A calendar makes it stick.

AI does not replace accountability. It exposes whether you have any.

If you like this co-authored content, here are some more ways we can help:

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!