Blog | Keep it simple. Keep it real.

Achim’s Razor

Weekly insights about GTM effectiveness, building brand reputation, and AI adoption.

Strategy

Getting Started with AI Is Easy. Making It Matter Is Hard.

Only 6% of companies get real value from AI. The rest push tools without guardrails or trust. What the PocketOS failure and 2026 data reveal.
April 29, 2026
|
5 min read

Co-authored by Gerard Pietrykiewicz and Achim Klor

AI rollouts often follow the same script: Leadership announces an initiative, a team lead books a session, another demos a prompt that turns a paragraph into bullet points. People nod and maybe try it once or twice.

Then Monday hits.

Deadlines pile up, Slack is noisy, and a quiet question sits in the background: Am I supposed to use this, or should I stay out of trouble?

That is where most adoption dies. Not because people don’t get it. Because nobody told them where the line is, and they have watched enough colleagues get shown the door under the banner of “AI efficiency” to know the cost of guessing wrong.

Takeaways

  • Only 6% of companies capture meaningful enterprise value from AI. The gap is organizational, not technical.
  • Most AI failures are governance failures. Click-and-hope is how production databases get deleted in 9 seconds.
  • A monthly seminar won’t change how people use AI. Clear policies, scoped credentials, safe sandboxes, and visible leadership use will.

The fear is not irrational

Layoffs are real, and so is the framing executives use to justify them.

It usually gets spun as “cost cutting” or “restructuring.” More often than not, that language is covering for poor judgment and weak management. AI just gives bad decisions a more fashionable label. 

Writer’s 2026 enterprise survey of 2,400 executives and employees found that 60% of companies plan to lay off workers who will not adopt AI, and 64% of CEOs fear losing their own jobs if they fail to lead the transition. The same survey found 55% of execs describe their AI rollout as “a chaotic free-for-all,” and 54% say AI is “tearing their company apart.” Stanford’s 2026 AI Index puts a third of organizations on track for AI-driven workforce reductions in the next year.

When leadership talks about AI mostly as a cost-cutting lever, asking the same workforce to enthusiastically adopt it is asking them to hand over the knife.

People aren’t dumb. They notice.

Some experiment in private using personal accounts. UpGuard’s 2025 research found more than 80% of workers, including nearly 90% of security professionals, use unapproved AI tools at work.

And it’s not a training problem. It is a trust and governance problem. It doesn’t get solved with a monthly all-hands or a 50-page policy nobody reads.

What just happened at PocketOS

On Friday, April 25, 2026, an AI coding agent deleted the production database and all volume-level backups at PocketOS.

It took nine seconds.

The agent was Cursor running Claude Opus 4.6, widely considered one of the most capable coding models available.

According to founder Jer Crane, the agent hit a credential mismatch in staging, found a Railway API token sitting in an unrelated file, and decided “entirely on its own initiative” to fix the problem by deleting the volume. No confirmation prompt. No human in the loop.

Here is the part that should keep CIOs and CFOs up at night. The token had been created for managing domains. But Railway’s system gave it full permissions across every operation in the account, including destructive ones.

In other words, a key meant for the front door opened the vault. Yikes!
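The missing control here is a deny-by-default scope check. As a hedged sketch only (the `Token` class, operation names, and `authorize` function below are invented for illustration, not Railway's actual API), least-privilege authorization looks roughly like this:

```python
# Hypothetical sketch of deny-by-default token scoping.
# Operation names and the Token/authorize API are illustrative, not any vendor's.

DESTRUCTIVE_OPS = {"volume.delete", "database.drop", "backup.delete"}

class Token:
    def __init__(self, name, scopes):
        self.name = name
        self.scopes = set(scopes)  # only the operations this token was minted for

def authorize(token, operation):
    """Deny by default: a token may only perform operations it was scoped to,
    and destructive operations always require a separate confirmation step."""
    if operation not in token.scopes:
        raise PermissionError(f"{token.name} is not scoped for {operation}")
    if operation in DESTRUCTIVE_OPS:
        raise PermissionError(f"{operation} requires a human confirmation step")

# A key minted for the front door should not open the vault.
domains_token = Token("domains-admin", {"domain.create", "domain.update"})

authorize(domains_token, "domain.update")  # allowed: in scope, non-destructive
try:
    authorize(domains_token, "volume.delete")
except PermissionError as e:
    print(e)  # the deletion that took nine seconds never starts
```

Under a rule like this, the domains token in the PocketOS incident would have failed at the first check, before any confirmation prompt was even needed.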

When asked to explain itself, the agent produced a written confession that started with “I violated every principle I was given” and listed each safety rule it had violated. Crane called it a “systemic failure” that made the incident “not only possible but inevitable.”

Railway’s CEO restored the data using internal disaster backups, but PocketOS still lost more than 30 hours of customer-facing operations and had to fall back to a three-month-old backup for some records. Customers showed up at car rental counters only to find no booking records under their names.

Systemic. That is the right word.

The AI did not malfunction. It did exactly what an autonomous agent does when nobody scopes its access or defines what it can and cannot touch.

This is the next phase of the problem. AI is no longer just drafting copy or summarizing meetings. It is taking action against production systems. The cost of getting it wrong is no longer a bland paragraph.

Delegation is not abdication

We saw a similar version of this in How Not To Hire With AI. Recruiters ran candidates through AI screeners, accepted the rankings, and moved on. The bias and the bad calls came out later.

AI just makes that pattern faster and more expensive.

Delegation means you define the task, set the boundaries, and own the outcome. Abdication means you click run and hope.

Too many teams think they’re delegating when they’re not.

The two failure modes

  1. “The tool will handle it.” People treat AI like a vending machine. Prompt in, answer out, ship it. The output sounds fine, which is exactly the problem. It sounds right just long enough to pass through the next person’s review, who is also moving fast.
  2. “I will use it once it is perfect.” Someone tries it once. It hallucinates a citation or breaks a formula. They go back to manual work and wait for the tool to mature instead of learning to work with it. So nothing changes.

One group moves too fast without thinking. The other group never moves.

Both miss the point: AI is not a replacement for judgment. It is a tool that demands more of it.

What separates the companies getting real value

McKinsey’s 2025 State of AI survey of nearly 2,000 organizations found that only about 6% are capturing meaningful enterprise-level value from AI. That’s an organizational gap, not a technical gap.

The high performers do two things differently.

  1. They are roughly three times more likely to fundamentally redesign workflows around AI rather than bolt it onto existing processes.
  2. They are far more likely to have defined human validation rules: 65% versus 23% for everyone else.

Translation: The companies winning with AI have decided in advance which outputs and actions need a human to check the work. Everyone else is improvising.

The same survey found that 51% of organizations reported at least one negative AI incident in the past year. PocketOS is not an outlier. It’s the visible end of a much wider pattern.

What leaders are actually failing to build

Here is what we see most often:

Leadership wants more output and faster execution. They push it down. They expect their teams to figure out AI on their own. When something breaks, the person closest to the keyboard gets blamed.

That’s anything but adoption.

If you want AI to work in your organization, put the scaffolding in place first:

  • Clear policies on what AI can do unattended, what needs review, and what is off-limits. A short list people can hold in their head, not a doc nobody reads.
  • Real governance for agentic tools that take action. Production access, write permissions, and deletion rights need scoped credentials, approvals, logging, and rollback by default. The PocketOS incident was not just a credential problem. It was an autonomous agent with broad reach, finding a key it should never have had access to, and using it without a confirmation step. Railway has already changed its API in response.
  • A safe environment to learn. Sandboxes, defined low-stakes use cases, and permission to try, fail, and report what broke without fear of being walked out the door.
  • Training as scaffolding, not theater. Ongoing, role-specific, tied to actual workflows. Champions inside teams who translate the abstract into the practical Monday-morning version.
  • Visible leadership use. If executives never show their own messy prompts, mistakes, and corrections, nobody else will either.

Training is in there. It’s just not the whole answer, or even the first one. The first is creating a safe place to use the tool, like a skunkworks.
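To make the governance bullet concrete, here is a minimal sketch of scaffolding for agentic tools: log every action, and hold anything on a review list until a human signs off. The action names and the `approve` hook are assumptions for illustration, not any specific framework's API.

```python
# Illustrative human-in-the-loop gate for agent actions.
# Action names and the approve() callback are invented, not a real tool's API.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guard")

REVIEW_REQUIRED = {"deploy", "delete", "send_external_email"}

def run_action(action, payload, approve=lambda a, p: False):
    """Log every requested action; block review-listed ones until a human approves."""
    log.info("agent requested %s: %r", action, payload)
    if action in REVIEW_REQUIRED and not approve(action, payload):
        log.warning("blocked %s pending human approval", action)
        return {"status": "blocked", "action": action}
    return {"status": "executed", "action": action}

# Unattended, low-stakes work passes; destructive work is held for review.
print(run_action("summarize", {"doc": "notes.txt"}))
print(run_action("delete", {"path": "/prod/db"}))
```

The point of the sketch is the default: an action is blocked unless someone explicitly said otherwise, and there is a log either way. That is the short list people can hold in their head.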

Final thoughts

AI does not create accountability problems. It reveals them. It amplifies judgment, or the lack of it.

If your people think clearly and have room to work, it helps. If they are scared, under-equipped, and waiting to be blamed, it scales the problem.

So the question is not “how do we train people on AI.”

The question is where in your organization you are still demanding results without giving people the systems, guardrails, and safety to do the work.

Fix that first.


If you like this co-authored content, here are some more ways we can help:

Cheers!

Achim is a fractional CMO who helps B2B GTM teams with brand-building and AI adoption. Gerard is a seasoned project manager and executive coach helping teams deliver software that actually works.

This article is AC-A and published on LinkedIn. Join the conversation!

Insight

GTM Math Is Grading a Market That No Longer Exists

Most GTM teams judge today's market with yesterday's math. Why correlation breaks, where the missing 70-80% lives, and what to tell the board.
April 22, 2026
|
5 min read

Boardroom question: “How do we know what’s working and what’s not?”

That was the stress test in my latest Causal CMO chat with Mark Stouse, CEO at Proof Causal Advisory.

The problem isn’t the question. It’s the math most teams are using to answer it.

If you’re struggling with GTM effectiveness, it’s not because you have a weak dashboard. 

You’re struggling because your model is reading from an outdated map. 

It’s the reason why leadership hears one story while reality delivers another.

Takeaways

  • Correlation grades patterns. That only works when the world holds still. 
  • 70 to 80% of what drives GTM outcomes is external. Most teams skip that part.
  • A lot of your data is older than the market it’s supposed to describe.
  • Cutting spend into a headwind usually accelerates the decline, not the recovery.

GTM teams still use correlation to grade the past

Correlation worked for decades because the world was stable enough. Extrapolate the last four quarters, get a reasonable next quarter. Econometric models, actuarial tables, sales forecasts, marketing mix models. All leaned on the same bet: past was prologue.

Past is no longer prologue. (Was it ever?)

Insurance is the cleanest tell. Several carriers have pulled out of entire states. Not because they’re bad at underwriting, but because their models can no longer price the risk with confidence. They’d rather walk away from the revenue than write policies they can’t defend.

Swiss Re’s latest catastrophe data shows why that pressure is building. Wildfires, storms, and floods accounted for a record 92% of global insured natural-catastrophe losses in 2025. The underlying conditions shifted. The models are still calibrated to the old ones.

The same pattern is showing up in GTM. In a recent MarTech analysis, Mark reported B2B GTM effectiveness fell from 78% in 2018 to 47% in 2025 across 478 companies. That is not a rounding error. That is a model that no longer fits its market.

“What has been promised is not what’s actually happening. And that’s the dead giveaway.”

The “missing middle” every GTM plan skips

Most plans describe the actions on the left and the outcomes on the right. They leave out the middle.

The middle is everything you don’t control: tariffs, rates, inflation, war, competitor moves, buyer budget pressure, category fatigue, shifts in buying committees. The middle accounts for 70 to 80% of what actually drives the outcome.

If you don’t measure it, you can’t explain why the plan half-worked. And you definitely can’t tell the CFO.

Most of that data is available. Financial institutions publish it. Governments publish it. Competitor movement is tractable. You just have to actually include it in the model. Google’s Meridian documentation makes the same point in plainer terms: causal inference estimates effect under real conditions, not correlation in historical data.
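As a toy illustration of why leaving externalities out misattributes results (all numbers below are synthetic and invented): suppose an interest-rate headwind both depresses revenue and drives spend cuts. A model that looks only inside the four walls over-credits spend for what the external condition was doing.

```python
# Synthetic demonstration of omitted-variable bias in a GTM model.
# All coefficients and data are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 200

rates = rng.normal(5, 1, n)                   # external: rate headwind
spend = 100 - 3 * rates + rng.normal(0, 5, n) # internal: spend gets cut in headwinds
revenue = 2.0 * spend - 8.0 * rates + rng.normal(0, 1, n)  # true process

def fit(X, y):
    """Ordinary least squares with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

inside_only = fit(spend.reshape(-1, 1), revenue)            # ignores the middle
with_external = fit(np.column_stack([spend, rates]), revenue)

print("spend coefficient, internal-only model:", round(inside_only[1], 2))
print("spend coefficient, with externality:  ", round(with_external[1], 2))
```

The internal-only model inflates the spend coefficient because spend is entangled with the headwind it never measured. The true effect (2.0 in this fabrication) only shows up once the external covariate is in the model, which is the plainer-terms version of the Meridian point above.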

Mark put the gap between correlation-guided and causation-guided decisions at 90 to 100 degrees off on a compass rose. 

Not off by a few points. Pointed in the wrong direction.

The scuba analogy worth stealing

Drift diving: You drop into a current and go neutrally buoyant. The current carries you, and it feels great. Then you try to turn around.

What was a tailwind is now a headwind. You’re burning oxygen fast and going nowhere.

GTM spend into a headwind works the same way. The same activity costs more to produce the same result. If you’re not documenting the headwind, it just looks like the team underperformed.

This is why cutting GTM spend during turbulence is usually the wrong reflex. Not the tired “your competitors went quiet, be loud” story. The real reason: staying even in a headwind already costs more. Cutting into that accelerates the decline. If leadership can’t quantify the headwind, they’ll blame execution for a market condition.

Why leaders stay inside the four walls

Looking inside the four walls of the company instead of outside is an all-too-common bad habit. Pipeline, velocity, rep activity, campaign throughput. All internal. All controllable. All missing the 70 to 80%.

Teams stay inside because that’s where control lives. It’s comfortable. It’s defensible. You can put it in a deck.

But none of it is reality.

As of 2025, more than half of B2B GTM spend is now ineffective, and it’s not because teams suddenly took stupid pills. They just stopped looking outside. The externalities got louder while the dashboards stayed the same. 

The deeper resistance is different. If a causal model shows the old playbook didn’t work, what does that do to my credibility? 

When the environment has shifted this much, retrospective blame is a waste of time. Nobody called this environment cleanly with a correlation model. The question is not whether the old playbook was right. The question is whether the current one is.

“Causal AI is not something to be afraid of. Causal AI is reality. Its whole goal is to show you a model, a digital twin of reality, so that you can navigate it more successfully.”

A GPS doesn’t grade your past driving. It reroutes when conditions change. That’s the point.

A pressure-test worth trying

Use GenAI to generate high-fidelity synthetic data for a strategy you haven’t run yet, then pressure-test it through a causal model.

Use case: Your big agency walks in with a hot proposal to change the game for your business. Deeply insightful. Expensive. Before you put a dime behind it, upload the proposal to a causal model and ask: What is the likelihood this actually works? What would have to be true in the market? Three-year play or twenty-year play? 

This kind of tooling leans heavily on one real strength of pattern matching: it’s more reliable at telling you something is a bad idea than a great one.

Worth having before you sign the SOW.

Two questions that do the work

Reality is not a matter of opinion. It’s gravity.

“Reality is what you run into when you’ve made a mistake.”

You can get the signal early by modeling externalities, or you can get it late from a missed quarter. One costs less than the other.

These are the two questions worth writing down:

  1. For us to be successful, what else in the marketplace has to be true?
  2. And what would really hammer us if it was true?

Two questions. No technology required. They will force the conversation outside the four walls.

You can do the same gut-check on your own buying behavior. Same muscle. Different mirror.

Headstart

Write down your top three GTM assumptions for the next two quarters. List the external conditions that must hold true for each. Flag the ones that aren’t holding now.

Check the date range on your forecasting data. If it reaches back more than three years, the model is averaging a world that’s gone.

Before the next board update, add a slide on headwinds and tailwinds with a number attached. If you can’t quantify it yet, say so and commit to a date.

Missed the session? Watch it here.


If you like this content, here are some more ways I can help:

  • Follow me on LinkedIn for bite-sized tips and freebies throughout the week.
  • Work with me. Schedule a call to see if we’re a fit. No obligation. No pressure.
  • Subscribe for ongoing insights and strategies (enter your email below).

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!

Strategy

Questions Boards Should Ask GTM Teams

Most boards are still asking yesterday's questions. Mark Stouse on CAC debt, time lag, fiduciary risk, and the five questions every board needs to ask now.
April 8, 2026
|
5 min read

Boards and C-suites are becoming more aware of this fact: Past is not prologue.

It’s also something I have covered with Mark Stouse, CEO of Proof Causal Advisory, on many occasions, and it was the key pillar in our last Causal CMO conversation.

Most board conversations about go-to-market still treat historical data as a guide to future probability. That model no longer works. And according to Mark’s 5-part GTM Effectiveness Report, the numbers show it: B2B GTM effectiveness fell from 78% in 2018 to 47% in 2025. 

The gap between what boards are asking and what they actually need to know is getting expensive.

In this recap, Mark explains why and how to prepare.

Takeaways

  • Historical GTM data is decaying in relevance. Old data is not representative of current reality nor future probability.
  • GTM is the canary in the coalmine. It catches external market problems before the rest of the business does.
  • CAC is a loan of shareholder capital. Right now, the market isn’t paying it back.
  • Awareness, confidence, and trust are the most durable GTM assets. Time lag is one reason boards cut them too soon.
  • Boards that aren’t asking the right questions face more than a performance problem. They face a governance one.

The canary in the coal mine

Here’s something boards consistently get wrong.

“There’s been a default idea in boards and C-suites that if go-to-market is failing in some way, it’s their fault. As opposed to saying, ‘Huh, I wonder what’s going on out there in the marketplace that’s causing go-to-market to catch it first?’ Think of it as a virus. It’s go-to-market that’s gonna catch it before the rest of the business catches it. Go-to-market is sort of a canary in the coal mine.”

The data right now backs that up. CAC is climbing. Deal volume is down. Average deal size is down. Deal velocity is slowing. And around 73% of B2B tech deals are closing without a decision being made.

That’s a 13% increase in three years over the research from The Jolt Effect.

A CFO Mark spoke to recently put it bluntly: “By that standard, my go-to-market effort is bankrupt.”

Mark’s response: 

“That’s actually a really powerful way of putting it. If all you have is exploding costs and no countervailing exploding revenue, it’s only a matter of time you’re going to be out of business.”

The payback period on CAC becomes incalculable when deals die in indecision. Mark and I already covered the mechanics. You can check it out here.

The time lag trap

Not to beat a dead horse, but the resistance to the reality of time lag is still an ongoing GTM debate. I’m not kidding. 

People are motivated by risk. In volatile markets, awareness, confidence, and trust (aka ACT), the three pillars of brand and reputation, matter more, not less. 

The 95:5 rule from LinkedIn’s B2B Institute and Ehrenberg-Bass research puts a number on it: most of your future buyers are out of market right now. Brand is how you reach them before the window opens. Which is also why cutting it feels safe. Until it isn’t.

For example, early in COVID, Airbnb’s CEO publicly cut nearly all marketing. There was no immediate revenue impact. But reality hit hard 12 months later.

“You’re gonna have that experience for approximately the next year. And then you’re going to suddenly come out from under the overhang of all that accumulated marketing, and all of a sudden your revenue is going to fall like a rock.”

Boards keep missing the same pattern. 

CMOs get replaced just as the investment they made starts to compound. The next leader rides the wave, credits their own work, and then watches demand fall off a cliff (again!) when the accumulated effect runs out.

Two questions follow from this:

  • What are we funding today that won’t show up for 9 to 18 months?
  • What are we crediting today that was actually built 9 to 18 months ago?

Skip those and you’ll misread momentum and punish the wrong people.

Do we have a fully loaded CAC?

This is one of the first questions boards should be asking.

“CAC is usually a reflection of marketing costs of customer acquisition. But it is so much more than that. It’s almost like a mini P&L. You’ve got sales CAC, product CAC, customer success CAC. Anything that you do anywhere in your business that touches a customer in a way that makes them either want to buy or not buy, is CAC.”

Most companies don’t have that full picture. Without it, the return calculation is wrong.

And the problem compounds. CAC isn’t just a cost. It’s a loan of shareholder (or ownership) capital. When deal volume shrinks, deal values drop, and velocity slows, the GTM Debt accumulating inside the company grows while the ability to repay it deteriorates. Most boards have never seen that number laid out plainly.

The follow-on question isn’t just “what is CAC?” It’s: what is CAC buying us, how long is the payback period, and how much of that spend is dying in no-decision outcomes?
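As a hedged back-of-the-envelope (every figure below is invented for illustration), the “mini P&L” view of CAC and the payback question it raises look like this:

```python
# Illustrative fully loaded CAC and payback-period arithmetic.
# All dollar figures and counts are made up; substitute your own.

cac_components = {
    "marketing": 450_000,        # campaigns, content, brand
    "sales": 600_000,            # comp, tooling, travel
    "product": 120_000,          # trials, onboarding engineering
    "customer_success": 80_000,  # pre-sale solution work
}
new_customers = 25

fully_loaded_cac = sum(cac_components.values()) / new_customers
print(f"Fully loaded CAC: ${fully_loaded_cac:,.0f}")  # vs. marketing-only: $18,000

monthly_gross_margin_per_customer = 3_000
payback_months = fully_loaded_cac / monthly_gross_margin_per_customer
print(f"Payback period: {payback_months:.1f} months")  # 16.7 months
```

In this fabricated example, the marketing-only number understates CAC by almost two-thirds, and the payback period stretches past a year before a single no-decision outcome is counted. That is the loan of shareholder capital laid out plainly.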

The governance question

“Increasingly, if boards are not asking these questions and demanding the right answers, they’re in breach of their fiduciary duty if they’re talking about a Delaware corporation. If otherwise, based on new regulations around decision governance, that’s a lack of decision governance, which really is almost an identical idea to fiduciary duty. They’re increasingly going to be scrutinized very uncomfortably.”

The legal foundation is real. In January 2023, the Delaware Chancery Court ruled in the McDonald’s case that fiduciary duty of oversight extends to all corporate officers, not just the board and CEO. If an officer can’t outline a system by which their part of the business is managed on a risk-adjusted basis, they’re exposed. 

The same pressure is moving through SEC priorities and decision governance expectations. Mark wrote an excellent piece about this here: The Key GTM Governance Questions.

If you don’t have a system to evaluate decisions and spend on a risk-adjusted basis, you are liable. 

What boards should ask now

Every C-suite and GTM team should be prepared to answer these questions if and when they come up:

  1. Do we have a fully loaded CAC, and do we understand the GTM Debt accumulating inside the company?
  2. What external forces are affecting our GTM performance right now, and is our team actually tracking them?
  3. What are we investing in today that won’t show up for 6 to 18 months, and has the board reviewed our time-lag model?
  4. Can our GTM team demonstrate causal, not correlational, attribution between spend and revenue outcomes?
  5. If we removed a major GTM spend category tomorrow, could we tell in advance what the revenue impact would be and when?

Most teams can’t answer them cleanly. That is the point.

For a full list of governance questions, Mark has assembled 12 of them here: The key GTM governance questions every company must move to address in 2026.

“What is the reality right now? What is the reality, and what is the reality likely to be if we model it a year from now, two years from now, three years from now?”

If your board isn’t asking that question, someone else will ask it for them.

Missed the session? Watch it here.


If you like this content, here are some more ways I can help:

  • Follow me on LinkedIn for bite-sized tips and freebies throughout the week.
  • Work with me. Schedule a call to see if we’re a fit. No obligation. No pressure.
  • Subscribe for ongoing insights and strategies (enter your email below).

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!

Strategy

Why You Shouldn’t Trust a Forecast That Starts With the Past

Most B2B forecasts fail upstream. Mark Stouse explains why weak models, time lag, and bad assumptions break GTM planning and revenue trust.
March 24, 2026
|
5 min read

Most forecasts fail before they’re even built. 

Not because of bad data. Because of bad thinking… upstream.

That’s what Mark Stouse, CEO of Proof Causal Advisory, and I were chatting about even before we got into our latest Causal CMO chat.

Before models, cadence, or tooling, GTM teams need to deal with something harder: the thinking that makes a forecast worth defending.

Takeaways

  • Chasing every market move keeps you behind the problem. Pick durable plays instead.
  • “Making the quarter from what marketing does in that same quarter” was always a fantasy (and still is).
  • Strategy is not the how. It’s the what. Most companies don’t actually have one.
  • A trustworthy forecast isn’t rooted in the past. It updates when reality changes.
  • If you can’t explain the variance, you’re not ready to share it yet.

Stop chasing the graph

Most GTM teams don’t want to hear this:

“The temptation is to try and follow every move that the graph makes with some sort of counter to it. The problem is that if you do that, you’re always behind your externalities. You’re never going to get ahead of them.”

In other words, stop reacting to every signal. 

Pick investments that hold up across a wide range of conditions. In B2B right now, especially tech, that means brand and reputation. Not because it’s comfortable. Because it’s what’s working.

The catch is time lag. 

Mark put it at 9 to 24 months before brand investment shows up in real results. I’ve seen similar timeframes (18 to 36 months). That’s why so many leaders cut it. They couldn’t connect the cause to the effect. So they called it worthless. The lag was hiding the impact.

[Chart: B2B marketing time lag. Recognized revenue reaches 50% around month 6, while realized revenue reaches 50% around month 18, the gap where leadership gets impatient.]
How to stop chasing the graph… using a graph (haha)

We’ve covered time lag before. It's a core part of my End of MQLs series.

Right now, buyers aren’t shopping for the best pitch. They’re managing downside risk. They gravitate toward names they already trust, especially in times of crisis when headwinds are strongest. If you haven’t been building up your brand reputation before the pressure hits, you’re starting from zero when it matters most.

This connects to what we covered in our previous Causal CMO chat on the 95:5 rule. Only 5% of your market is in-market at any given time. The other 95% are forming impressions with or without you. 

Brand is what stays in the room when your sales team isn’t.

DemandGen was a cheap money phenomenon

“For 20 years in B2B, marketers and salespeople agreed on very few things. But one was that demand gen was real and you could somehow make somebody want to buy your product. The main reason why that appeared to work for so long is that money was cheap.”

When capital was abundant and risk tolerance was high, activity looked like causation. You pushed, things happened, you took credit. 

But when capital tightened, the model broke. The activity stayed. The results didn’t follow.

“This whole idea of making the quarter based on what marketing is doing in that same quarter was always a fantasy. Always. The time lags are too long for that to be true.”

As Rohan Light shared, “cheap money equals lazy thinking”. The constraint we’re in now is forcing a more honest accounting.

These systems weren’t built by idiots. They were built for different conditions. Those conditions are gone. 

We covered similar threads in The GTM Reality Gap and The Illusion of Control.

Mindset before model

To be clear, this isn’t a tooling problem.

“This is a mindset issue. It’s not a technology issue. One of the things you have to confront is just how much we think in patterns. When we do that, we assume the past is prologue. It’s not.”

Patterns help… until the market changes and you’re still using the old map. Stack enough of them and you get a house of cards. Either very right or very wrong, with no middle ground. 

Right now, reality is moving fast enough that the patterns expire before most teams notice.

Strategy is not the “How”

“Your strategy is the most important thing. Do you have a durable strategy? Can it survive a lot of different changes, a lot of volatility? The planning, the ops, the tactics: that can easily change and should change and will change all the time. Strategy should not.”

Strategy is the what. Not the how.

Most companies don’t have a business strategy. If your org runs a marketing strategy, a sales strategy, a data strategy, and an IT strategy in parallel, that’s usually a sure giveaway that nothing unified sits above them.

A recent HBR study of 500 organizations found that firms with stronger foresight capabilities — built around continuous signal detection and updating, not one-off planning exercises — report a meaningful performance edge. That finding matches what Mark said: durable strategy isn’t a static document. It’s a capability.

Without a durable strategy, you’re not forecasting. You’re decorating a guess.

What makes a forecast trustworthy?

Not the one rooted in the past. Conditions change. A model that can’t update isn’t a model.

“The definition of a trustworthy model is the one that gets closest to reality. You have to have a good model that represents reality, and then you have to be able to update it with whatever cadence is right for your business.”

The GPS analogy Mark often uses is the right frame. It doesn’t refuse to work when there’s an accident ahead. It recalculates. And it tells you why you’re going to be late. 

That’s the test: not just accuracy, but explainability.

“If you have a gap between your projection and reality but can explain what caused it, you’re there. If you can’t explain the variance, you’re not done yet.”

One more thing: real-time data sounds valuable but often works against you. People slow down when flooded with signals. What you need is the right data at the cadence your decisions actually require, not a dashboard that keeps everyone busy and nobody clear.

Final thoughts

The standard is not prettier, louder dashboards. Not more pipeline theater. Not false certainty dressed up as confidence.

It’s a model that updates. A strategy that holds. A variance you can explain.

That’s a forecast worth taking to your leadership team.

That’s a forecast everyone can trust.

Missed the session? Watch it here.


If you like this content, here are some more ways I can help:

  • Follow me on LinkedIn for bite-sized tips and freebies throughout the week.
  • Work with me. Schedule a call to see if we’re a fit. No obligation. No pressure.
  • Subscribe for ongoing insights and strategies (enter your email below).

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!

Strategy

How Not to Hire with AI and Fix Broken Recruiting Processes

AI-written JDs attract AI-written CVs. Here’s how overloaded teams create hiring noise and how to use AI to cut admin without handing off judgment.
March 17, 2026
|
5 min read

Co-authored by Gerard Pietrykiewicz and Achim Klor

Recently, Gerard received a recruiter pitch on LinkedIn.

It looked like AI wrote it and nobody bothered to fix it.

The role wanted a Business Analyst, Project Manager, AI expert, cybersecurity expert, and governance translator between business and technical teams. 

It also wanted 8+ years of AI experience. Really?

[Image: LinkedIn recruiter message showing an AI-generated job pitch with unrealistic requirements, including 8+ years of AI experience for a hybrid BA, PM, cybersecurity, and governance role.]

This is where a lot of AI adoption goes off the rails.

Not because the tool failed.

Pressed to fill roles, nobody stops to define the actual work. So AI gets handed the job. It produces a bloated, catch-all mess no serious candidate wants to read. It gets blasted out at scale. Candidates fire back AI-generated applications and CVs at scale. HR gets buried in a sea of synthetic sameness.

That’s not a hiring process.

That’s an AI-to-AI death spiral.

HR is just one example. Similar spirals show up across the organization. AI creates the noise, then more AI gets used to clean it up.

The spiral in plain terms

  1. A hiring manager, pressed for time, asks AI to write a job posting.
  2. The posting goes out with inflated requirements nobody stopped to sanity-check.
  3. A recruiter feeds it into an outreach tool. Thousands of messages go out.
  4. Hundreds of AI-assisted applications come back within hours.
  5. The company deploys more AI to filter the noise it just created.

Nobody saved time. Nobody hired better. The team ran a faster version of a broken process and called it progress.

SHRM reports that average cost-per-hire and time-to-hire have both risen over the last three years, even as AI use in HR has climbed. More automation did not automatically produce better hiring.

HR is drowning in its own output

HR teams are not doing this because they are careless. They are buried.

Requisitions pile up. Inboxes fill. Scheduling, screening, follow-ups, compliance documentation. It is a lot of admin for a function that is supposed to be about people and judgment.

So AI looks like relief. Write the JD faster. Screen faster. Reach more candidates faster. SHRM reports that nearly 9 in 10 HR professionals in organizations using AI for recruiting say it saves time or increases efficiency. It is also being used heavily for tasks like writing job descriptions and screening resumes.

The problem is that speed without clarity makes it worse. A badly written job description used to waste a few days. Now it creates a flood of waste.

One generic posting goes out. Hundreds of AI-assisted CVs come back. HR saves time on the front end, then loses it all cleaning up a mess they helped create.

Delegation is not abdication. AI handles the admin. It does not replace judgment.

The JD problem is a thinking problem

A marketing hire is not a product hire. A sales hire is not a customer success hire.

A CS leader might need process discipline, de-escalation skill, and commercial awareness. A product marketer might need positioning, customer insight, and message testing. An AE needs discovery skill, objection handling, and the ability to translate business pain into urgency.

These are not the same jobs.

Too many AI-generated JDs make them sound like they were copied from the same template with a few nouns swapped. That’s not speed, and it doesn’t hide the fact that nobody defined the actual work.

Good candidates can tell. They know when a posting was written by someone who understands the role. They also know when it was stitched together from generic prompts and wishful thinking.

Use AI to clear admin, not to avoid thinking

AI should help hiring teams get more deliberate. Not more automatic.

Use it for scheduling, interview summaries, scorecard drafts, candidate FAQ replies, debrief notes. That’s the repetitive work that eats recruiter time and produces nothing that requires a human.

Clear that work first. Then use the time to do what AI cannot:

  • define the role
  • write a JD that reflects real work
  • design an evaluation that tests for the right things

Canada’s Public Service Commission is clear on this point. Hiring managers remain accountable for decisions in the hiring process, and they must validate AI-generated ideas and suggestions to make sure the content is accurate, relevant, and adapted to actual hiring needs.

What to do this week

Before you open the next requisition, answer three questions:

  1. What work is not getting done right now?
  2. What part of the recruiting process can AI take off your team’s plate?
  3. What judgment still needs a human?

Write the JD after you answer those. Not before.

Final thoughts

If your hiring process creates noise faster than your team can think, AI will not help you. It'll just amplify your confusion.

The tool is not the problem. 

Delegating your thinking to it is.

If you like this co-authored content, here are some more ways we can help:

Cheers!

Achim is a fractional CMO who helps B2B GTM teams with brand-building and AI adoption. Gerard is a seasoned project manager and executive coach helping teams deliver software that actually works.

This article is AC-A and published on LinkedIn. Join the conversation!

Insight

The Causal Bridge: A Common Language for Your CEO, CFO, and GTM Team

GTM is failing because CEOs, CFOs, and GTM teams work from different models of reality. Here's what the bridge looks like and why attribution was never the answer.
March 11, 2026
|
5 min read

Your GTM playbook worked... two years ago.

That’s the uncomfortable (and inconvenient) truth that stood out after my recent Causal CMO chat with Mark Stouse, the CEO at Proof Causal Advisory.

We didn’t debate whether old methods are broken. That’s already been settled. We got into what it actually takes to fix the problem: a shared model of reality that CEOs, CFOs, and GTM teams can all work from.

Right now, most companies do not have that. They have siloed dashboards, attribution debates, and quarterly budget battles. The CEO wants confidence. The CFO wants proof. The GTM team wants room to do the work. 

Nobody is singing from the same song sheet. 

And that, my friends, is the Causal Bridge the entire organization must build together. 

Here’s the recap.

Takeaways

  • Multi-touch attribution measures assumptions, not reality. It never worked.
  • 70-80% of GTM outcomes are driven by external forces you do not control.
  • Finance now treats GTM spend as a loan. It expects payback.
  • Cut marketing today. Watch sales fall off the cliff in nine to twelve months.

You can be right and still be wrong

Mark didn’t mince words about the truth of this common mindset:

“You can be right two years ago or a year ago and be dead wrong today. And that’s just circumstances outside of your control.”

Most GTM banter is insular and plays the blame game. Wrong strategy. Wrong hires. Wrong execution. But Mark’s research points to a different culprit: externalities.

They are the headwinds, tailwinds, and crosswinds in the market that make up 70-80% of what drives outcomes. Things no one on your team controls.

He estimates that in the last 10-15 years, over 95% of GTM ineffectiveness traces back to teams failing to factor in those external forces. We keep running the same plays into a market that has already moved way past us.

“You’ve got your framework, and you’re turning the crank over and over again. But you’re doing it into the teeth of a hurricane.”

If that sounds familiar, the problem is not your team. It is the model you are using to read the market.


Finance changed the rules

Marketing budgets have flatlined. Gartner’s 2025 CMO spend survey put them at 7.7% of company revenue, flat year over year, while scrutiny from Finance keeps rising. That pressure has a specific shape.

Finance is no longer treating GTM spend as opex. It’s now considered a loan of shareholder capital. Revenue acquisition, profit acquisition, cash flow acquisition. And loans require payback. 

When CAC spikes while deal volume falls and velocity slows, the payback period stretches until the math stops working.
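A back-of-the-envelope sketch makes the squeeze concrete. The numbers below are hypothetical, chosen only to illustrate how the payback period stretches when acquisition costs rise while per-customer economics soften:

```python
# Illustrative only: hypothetical numbers, not benchmarks.
def payback_months(cac, monthly_gross_profit_per_customer):
    """Months of gross profit needed to repay the cost of acquiring one customer."""
    return cac / monthly_gross_profit_per_customer

# A healthy quarter: $12k CAC, $1k/month gross profit per customer.
print(payback_months(12_000, 1_000))   # 12.0 months

# CAC spikes 50% while deal economics soften by 25%.
print(payback_months(18_000, 750))     # 24.0 months
```

Double the payback period and the "loan" of shareholder capital takes twice as long to repay. That is the math the CFO is staring at.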

“A CFO once said to me that if go-to-market was its own business, it would be bankrupt.”

The three outcomes Finance is actually watching: 

  1. more deals
  2. bigger deals
  3. faster close

Each maps to revenue, margin, and cash flow. If your reporting cannot connect activity to those three things, you are speaking a language the CFO cannot act on.

Right now, faster time to close is the single cash flow move most likely to get a CFO off your back. That is how tight the pressure is.

The measurement system was broken from the start

Here is the part that makes the CFO’s skepticism harder to argue with.

The attribution tools GTM teams have relied on for 10-15 years were not measuring reality. They were measuring assumptions. 

Every model (first-touch, last-touch, multi-touch) was built on arbitrary weights and on the fiction that B2B buying is a linear, deterministic, trackable sequence of cause and effect. It is not. It never has been.

Multi-touch attribution in particular looked at desired reality, not actual reality. Change the weightings, change the results. That is not measurement. That is a fairytale generator.

“MTA didn’t look at reality. It looked at your desired reality. It looked at your beliefs.”
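A toy example shows how little "measurement" is going on. The touchpoints and weighting schemes below are hypothetical, but the mechanic is exactly the one attribution tools use: the credit a touch receives is whatever the chosen weights say it receives.

```python
# Hypothetical touchpoint sequence for one closed-won deal.
touches = ["webinar", "whitepaper", "demo_request", "sales_call"]

def attribute(touches, weights):
    """Distribute deal credit across touches using a chosen weighting scheme."""
    return dict(zip(touches, weights))

# First-touch: all credit to the webinar.
first = attribute(touches, [1.0, 0.0, 0.0, 0.0])

# "U-shaped" multi-touch: 40/10/10/40, an arbitrary but common choice.
u_shape = attribute(touches, [0.4, 0.1, 0.1, 0.4])

# Same deal, same touches, two different stories about what "worked".
print(first["webinar"], u_shape["webinar"])   # 1.0 0.4
```

Nothing about the deal changed between the two runs. Only the beliefs encoded in the weights did.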

What makes this harder to dismiss: Jon Miller, co-founder of Marketo and one of the architects of modern demand generation, admitted “Attribution is BS.”

His argument: buyers are already two-thirds through their decision process before they engage with vendors. The interactions that actually shaped the deal (anonymous content consumption, word of mouth, prior brand exposure) happened long before any tracking pixel fired. The “credit” arbitrarily went to whatever campaign was running when someone finally filled out a form.

Miller’s conclusion: teams have been grading marketing with broken math.

The result: brand investment starved, short-termism rewarded, sales-marketing alignment broken, and stalled growth. That damage is real and it accumulated over more than a decade.

[Image: Example from Jon Miller, co-founder of Marketo, showing why attribution models fail: flawed assumptions, invisible buyer journeys, and damage from over-attributing to demand.]
Source: Jon Miller, www.jonmiller.com

This is why Finance lost trust in GTM reporting. It is not just that the numbers were sometimes wrong. It is that the system was designed in a way that let teams produce whatever numbers they needed to cover their asses. CFOs figured that out. That is a significant part of why the loan framing now lands the way it does.

The cliff shows up nine months late

True story: A CFO wanted to eliminate marketing entirely. Just to see what happened.

Well, there’s actually a model for that:

“We can show it with marketing. We can show the date that you kill all funding for marketing. We can show how long it takes for the results to degrade your performance, and then it’ll show the cliff that you fall off of. It ranges from nine to twelve months. It’s as sure as death and taxes.”

Time lag is what fools people. Cut marketing, ride the wave from past investment, coast for, say, nine to twelve months, then watch sales nosedive. By the time it registers, you are nine months behind on rebuilding and have killed all your momentum.
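The dynamic is easy to simulate. The sketch below is a toy illustration, not Mark’s actual model: it assumes sales in any month draw on a trailing nine-month backlog of marketing investment (the window length and spend figures are made up). Cut spend and the backlog keeps results afloat for a while, then falls off a cliff once it empties:

```python
# Toy illustration with an assumed 9-month lag between marketing work
# and its sales impact. Spend is steady, then cut at month 9.
WINDOW = 9
spend = [100] * WINDOW + [0] * 15

for month in range(WINDOW, len(spend)):
    # Pipeline support this month = trailing 9 months of investment.
    backlog = sum(spend[month - WINDOW:month])
    print(f"month {month - WINDOW:2d} after cut: pipeline support = {backlog}")
```

Support drains linearly from 900 to zero over nine months, then stays at zero: the coast, the cliff, and the long climb back.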

We covered this pattern during our Illusion of Control chat. 

Brand, for example, is not a soft metric. It is a multiplier of sales productivity that requires more time than we are willing to give it. And because time lag makes the impact of building brand reputation invisible in the short term, impatient CEOs and CFOs don’t invest in it. 

De-risking decisions is the real aha moment

When leaders see a causal model for the first time, the first readout shows waste. That almost always makes everyone at the table squeamish. But there is good news:

“The real aha moment is when it dawns on them that going forward, they have just de-risked their spend and their decisions dramatically.”

That is the shift from CYA and defending past spend to making better calls about future spend. From fighting over revenue credit to understanding that sales gets the credit for driving revenue and marketing multiplies what sales can do (marketing does NOT create revenue!).

With correlation-based tools, we change the weightings and we change the results. We can manufacture any narrative. With Causal AI, we cannot. The model does not care about our assumptions. It reflects reality. 

“There’s great alignment between Causal AI and reality. In fact, I say a lot that causation equals reality.”

That is exactly why Causal AI builds trust with Finance, and exactly why most GTM leaders have been slow to adopt it.

Start small. Do it quietly.

We already talked about how to do a skunkworks GTM project. Mark brought it up again because it works, and because it sidesteps the politics that kill most internal change efforts before they start.

If you’re not ready to make this a company-wide initiative, start by picking one part of the business, run the model quietly for nine months, and make decisions based on what it shows.

“People around you will not understand that you’ve done something, but they will feel it. You’ll start getting comments in the hallway: there’s a new energy coming out of marketing.”

When results are visible, tell the story. Your peers already believe it. You are just giving them the explanation for what they felt.

Final thoughts

Good teams are getting judged like bad teams right now. Not because they stopped performing. Because the market moved and the model did not.

The CEO, CFO, and GTM team are not failing to communicate because they are stupid or in different departments. They are working from different models of reality. 

Until that changes, every quarter continues to be the same fight. More pressure. More reporting. Same broken loop.

The market moved. The model did not. 

That is the problem. 

And now you know what the fix looks like.

Missed the session? Watch it here. Mark’s full 5-part research is on his Substack.

If you like this content, here are some more ways I can help:

  • Follow me on LinkedIn for bite-sized tips and freebies throughout the week.
  • Work with me. Schedule a call to see if we’re a fit. No obligation. No pressure.
  • Subscribe for ongoing insights and strategies (enter your email below).

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!