Positioning, Messaging, and Branding for B2B tech companies. Keep it simple. Keep it real.
By Gerard Pietrykiewicz and Achim Klor
Achim is a fractional CMO who helps B2B GTM teams with brand-building and AI adoption. Gerard is a seasoned project manager and executive coach helping teams deliver software that actually works.
AI adoption gets stalled by leadership gaps: confusing policies, employee fear, and leaders who say “go” but don’t show how. If this feels a bit like Groundhog Day, you’re not alone. We’ve seen similar adoption challenges with desktop publishing, the Internet and World Wide Web, and blockchain. The technology is ready, but organizations stumble on the people side. This article looks at what leaders can do right now to remove those barriers and make adoption a little less stressful.
Jim Collins, in How the Mighty Fall, describes how once-great companies decline: hubris born of success, denial of risk, and grasping for silver bullets instead of facing reality. AI adoption sits at a similar crossroads. Companies that wait, assuming their past success buys them time, risk sliding down the same path.
Reset (5 min):
Decisions today (10 min):
Guardrails (10 min):
Metrics (5 min):
Employees avoid tools they don’t understand. If your AI usage rules look like a legal brief, adoption will stall. The hard part for any company is striking the balance: a policy loose enough to allow easy adoption and experimentation, yet restrictive enough to prevent critical data leakage.
Large corporations often have the budget, legal teams, and even their own data centers to set up AI policies and infrastructure. That gives them speed at scale.
Smaller companies are more nimble in theory, but without sufficient resources they often default to over-restriction, sometimes banning AI entirely out of fear of risk. That means lost productivity and missed learning opportunities.
Opportunity: Make policies visual, clear, and quick to navigate. The goal isn’t control. It’s confidence. Guidance like the NIST AI Risk Management Framework shows how clarity enables trustworthy, scalable use (NIST).
Employees fear what they don’t understand. And one-size-fits-all training doesn’t help.
When people see AI applied to their specific role (automating a report, simplifying customer emails), that fear turns into enthusiasm.
Pilot programs work. Early adopters can demonstrate real use cases, and their wins spread fast inside the org.
Opportunity: Treat those early adopters as internal champions. Prosci’s research shows “change agents” accelerate adoption (Prosci). Then turn those wins into short internal stories and customer-facing examples. That’s how adoption builds brand credibility, not just productivity.
When executives hesitate, teams hesitate. The reverse is also true: when leaders use AI themselves, adoption accelerates.
Research on organizational change is clear: active, visible sponsorship is a top success factor (Prosci). It signals that experimentation is safe and expected.
And there’s an external benefit too. Leaders who show their own AI use give customers and partners confidence. It’s a market signal.
Opportunity: Leaders can’t delegate this. They need to be participants, not just sponsors.
To make AI adoption successful, leaders must create an environment where experimentation feels safe and useful.
The parallels to earlier waves of tech adoption are uncanny: the companies that figured this out first didn’t just get more efficient. They were remembered as the ones who defined the category, because they were more effective at adopting the tech.
The risk of waiting isn’t just lost productivity. It’s losing the perception battle before you even start. Credible stories and visible leadership shape buying decisions and long-term trust (Edelman–LinkedIn).
Leaders: simplify, experiment, participate, and share your wins. Your teams and your customers will thank you.
If you like this co-authored content, here are some more ways we can help:
Cheers!
This article is AC-A and published on LinkedIn. Join the conversation!
I read an interesting article on TechRadar by John-Anthony Disotto. He used the 80/20 rule to create a prompt that “makes learning skills incredibly easy.” It’s not mathematically deep or overly technical. So for fun, I gave it a go using B2B GTM. It worked, but not the way I expected.
The prompt is dead simple:
“Give me the 80/20 of X.”
Want to learn how to kickflip, cook sugo, or serve a tennis ball? Ask your favorite GenAI tool for the 80/20 and it will spit back a decent analysis based on Pareto’s famous principle.
That said, as with all “absolutes”, take it with a grain of salt and pull out the nuggets you need.
But it got me thinking:
Could this actually work for something as messy and misunderstood as B2B tech GTM?
I gave it a shot.
I first asked ChatGPT this prompt:
“What’s the 80/20 of B2B tech GTM in 2025?”
The answer came back fast, in typical AI phrasing.
80% of teams are stuck in the past:
20% are evolving:
The gist wasn’t wrong, but a few things were head-scratchers, like “Using Causal AI” and “Marketing drives strategy.”
It sounded too binary. Too neat.
So I then asked ChatGPT:
“Rigorously fact-check your 80/20 assessment for accuracy and credibility.”
The revised output was a bit more grounded.
20% of teams are winning because they:
80% are plateaued by:
OK, better. But I still wasn’t convinced.
So I sent the same prompt to Claude.
Claude came back with a softer read. It said it’s more like a 60/40 split. Most teams are adapting, but slowly and inconsistently.
Claude made some good points:
I went back and forth with Claude and ChatGPT. Both made valid calls and pointed out where they aligned and where they differed.
ChatGPT pushed back on Claude’s assessment:
“Claude’s response is solid—and mostly accurate. But Claude’s framing underplays the issues.”
After more back-and-forth, we landed somewhere in the middle.
60/25/15 is a more accurate split.
60% are stuck in no man’s land:
25% of B2B teams are doing the work:
15% are way behind:
Which one sounds like your Monday morning standup?
If you’re in the 60%, pick one:
Every modern B2B GTM team should be asking:
AI, brand, content, buyer insight—none of it works in isolation.
What separates the top 25% isn’t access. It’s consistency. They’ve operationalized what the rest are still experimenting with.
The “80/20” prompt worked, but not how I expected.
It won’t give you a perfect framework. That’s OK. It doesn’t need to be perfect.
A yardstick that validates what you already know and uncovers some new truths is more than good enough. Whether the actual number is 65.7% instead of 60% doesn’t really matter.
The point is, most GTM teams in 2025 know what to do. They’re just not doing it strategically or consistently. And they’re not proving it works.
That’s the gap.
The teams pulling ahead aren’t chasing leads or trends. They’re going back to the basics, putting insight and strategy ahead of tactics. And they’re doing it better, faster, and with accountability.
They treat brand as a signal, not decoration. AI as infrastructure, not novelty. GTM as shared responsibility, not departmental silos.
And they measure what matters, not what’s easy.
They do the hard part first.
Where does your GTM sit?
If you like this content, here are some more ways I can help:
Cheers!
This article is AC-A and published on LinkedIn. Join the conversation!
CMOs can make a big difference with Causal AI. It starts with what Mark Stouse calls a “quiet pilot.” Not a pitch. Not a deck. In Part 6 of The Causal CMO, Mark explains how GTM leaders can use causal modeling to run skunkworks projects in the background without wasting months seeking buy-in or permission. We used “deal velocity” as our project, but you can apply it to any core outcome: CAC, LTV, funnel integrity, partner yield, brand equity, even recruiting.
As we touched on in Part 4 and Part 5, a “quiet pilot” is meant to protect signal integrity. It’s not about secrecy.
If you announce you’re piloting CausalAI before you’ve proven anything, internal pressure will spike. Opinions will fly. Fear will kick in. And you’ll waste all your energy managing reactions instead of learning.
“There’s nothing unethical about doing a quiet pilot. In fact, it’s probably the most ethical way to test something that matters.”
You want to reduce friction, not accountability. To observe, adjust, and verify results before you invite others in.
Causal modeling isn’t magic. It’s just math applied to three distinct domains:
Most teams obsess over the first two. But externalities drive 70–80% of performance. You’re not here to brute-force your way through them. You’re here to surf them.
“You can have the best mix in the world and still fail—if you ignore externalities.”
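To make that 70–80% figure concrete, here is a toy decomposition in Python. Every number in it is invented; we lump the first two domains into a single “internal” term, because the only point is that the external term dominates.

```python
import numpy as np

# Toy decomposition: how much of a GTM outcome each term explains.
# All weights are invented; the point is that externalities dominate.
rng = np.random.default_rng(42)
n = 1_000
internal = rng.normal(size=n)   # the levers you control (domains 1 and 2)
external = rng.normal(size=n)   # the externalities you can only surf

outcome = 0.55 * internal + 1.0 * external

var_total = outcome.var()
print(f"internal share: {(0.55 ** 2) * internal.var() / var_total:.0%}")
print(f"external share: {(1.0 ** 2) * external.var() / var_total:.0%}")
# -> roughly 23% vs. 77%, in line with the 70-80% claim above
```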
We used a B2B SaaS scenario:
Here’s how Mark broke it down:
Mark’s personal experience at Honeywell Aerospace demonstrates the effectiveness of running a quiet pilot:
“We improved deal velocity by almost 5%. That’s $11–12 billion of revenue moving faster into the company. The cash flow impact was extraordinary. The CFO became a fan.”
You don’t need perfection to start. You need clarity.
Start with a question like this one:
“Out of everything we’re doing, what’s really driving deal velocity?”
“Even synthetic models become templates for real ones later.”
Watch the forecast. Compare it to actuals. Adjust your mix. Then track again.
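In practice, that loop can start as a simple script. Here is a minimal sketch; the deal-velocity numbers and the 5% drift threshold are ours, purely illustrative.

```python
# Sketch of the track-and-adjust loop: compare the model's forecast to
# actuals and flag when it drifts enough to need recalibration.
forecast = [4.2, 4.1, 4.0, 3.9]   # modeled deal velocity (months to close)
actuals  = [4.3, 4.0, 3.8, 3.5]   # what really happened

for month, (f, a) in enumerate(zip(forecast, actuals), start=1):
    drift = abs(a - f) / f
    status = "recalibrate" if drift > 0.05 else "on track"
    print(f"month {month}: forecast {f:.1f} vs actual {a:.1f} -> {status}")
```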
You’ll feel the change before you explain it. So will others.
“People will pass you in the hallway and say, ‘Something feels different.’ That’s your moment.”
You’re going to get humbled. So be prepared.
“You’ll realize most of what you’ve been tracking is noise. You’ll grieve. You’ll deny. You’ll get angry. Then you’ll change.”
It’s normal to go through disbelief, regret, frustration, and even grief as you uncover how much of your GTM effort was based on correlation or gut feel.
But that’s the cost of clarity.
And the reward?
A system that actually tells you what’s working and how to make it better.
So when do you tell everyone? Not too soon. Let the model mature.
Here’s a rough timeline:
That’s when you gain credibility and the CEO and CFO lean in.
That’s when the board invites you to present.
That’s when peers start asking:
“If I gave you more budget, what could you do with it?”
Your story isn’t based on aspiration. It’s built on change.
Mark was very clear about the cross-functional application of Causal AI.
You can apply causal models across the entire business:
If your team owns outcomes, causal modeling can help you prove what drives them, even outside GTM.
If you’re a CMO, CRO, or GTM lead hoping to “earn your seat at the table,” this is how you do it. Not with big claims or flashy decks. With evidence.
“This isn’t a threat. It’s a lifeboat. Everything else is the risk.”
You don’t need better math. You need better questions and the courage to ask them before you sell the answer.
If you want to see what this looks like in practice, Mark has demo videos and 1:1 sessions available. Reach out to him directly on LinkedIn or email him at mark.stouse@proofanalytics.ai
Missed the LinkedIn Live session? Rewatch Part 6.
If you like this content, here are some more ways I can help:
Cheers!
This article is AC-A and published on LinkedIn. Join the conversation!
Marketing teams that obsess over MTA, MQLs, and CTR/CPL are under more pressure than ever. BS detectors are on high alert in the boardroom. In Part 5 of The Causal CMO, Mark Stouse outlines what the C-suite already expects: GTM leaders who are fluent in proof, spend, and business acumen.
As Mark and I already discussed, the rules changed in 2023.
The Delaware Chancery Court’s 2023 ruling expanded fiduciary duty from CEOs and boards to all corporate officers, including CMOs, CROs, CDAOs, and other GTM leaders.
That means we’re now individually accountable for risk oversight and due diligence. Not just our intent. Our judgment too.
“This is changing the definition of the way business decisions are evaluated… What did you do to test that? What did you do to identify risk and remediate the risk?”
Boardroom expectations have shifted. They want marketing accountability, not activity metrics. If your GTM budget is still defended with correlation math, you’re going to lose the room.
Causal AI gives you something different: Proof.
It tests what causes performance and why, how much each factor contributed, and what to do next.
It operates a lot like a GPS, recalculating your position in real time, suggesting alternate routes, and showing what could happen under different conditions.
Boards don’t want us to show them more dashboards. Think of it like the bridge of a ship: the Captain and First Officer don’t stare at every gauge. They act on the few readings that matter.
They want decision clarity:
Causal AI models cause and effect based on live conditions, not lagging indicators. It runs continuously. It adjusts to change. It simulates outcomes with real or synthetic data.
“GPS says, ‘I know where you are.’ You say where you want to go. It gives you a route. Then, if something changes—an accident, traffic—it reroutes. Causal AI works the same way.”
Mark shared a great story from one of his clients. During COVID, the finance team at Johnson Controls planned to cut marketing by 40%. But causal modeling showed how that would destroy revenue 1, 2, and 3 years out.
“The negative effects… were terrible. Awful. Like, profoundly wretched.”
Finance still made cuts, but only by 15%, not 40%. Because the data made the risk real.
A lot of B2B companies still treat CAC and LTV as truth. Mark didn’t mince words:
“CAC is a pro rata of some larger number. That pro rata is not real.”
And LTV?
“In the vast majority of cases, it’s completely made up.”
The bigger issue: CAC isn’t just a cost. It’s a form of debt. If you spend $250K chasing an RFP and don’t keep the client long enough to pay that back, you’re in the red. Period.
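A quick back-of-the-envelope version of that math. The $250K comes straight from the example above; the margin and retention figures below are ours, purely illustrative.

```python
# CAC treated as debt: you're in the red until gross margin repays it.
cac = 250_000             # spend to chase and win the RFP
monthly_margin = 10_000   # gross margin the account contributes per month
retention_months = 18     # how long the client actually stays

payback_months = cac / monthly_margin          # months to break even
net_at_churn = retention_months * monthly_margin - cac

print(f"Payback needs {payback_months:.0f} months; "
      f"client left after {retention_months}, net ${net_at_churn:,.0f}")
# -> Payback needs 25 months; client left after 18, net $-70,000
```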
This mindset shift matters most for CMOs trying to earn budget.
“You’ve got to understand unit economics of improvement… how much money it takes to drive real causality in the market. That’s true CAC. Not the BS a lot of teams have been selling.”
What does that look like?
The chart below is a simulated example of a typical flat MTA pro rata model compared to a variable causal model.
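If you want to recreate something like that chart, here is a minimal sketch with invented numbers. The flat model hands a channel the same share of revenue every month; the causal model lets the contribution move with conditions.

```python
import numpy as np

rng = np.random.default_rng(7)
months = 12
revenue = rng.normal(1_000_000, 150_000, months)  # monthly revenue

# Flat MTA pro rata: the channel gets the same fixed share every month.
mta_credit = 0.18 * revenue

# Variable causal contribution: the share moves with conditions
# (seasonality, externalities), faked here with a sine wave.
causal_share = 0.18 + 0.10 * np.sin(np.linspace(0, 2 * np.pi, months))
causal_credit = causal_share * revenue

for m in range(months):
    print(f"month {m + 1:2d}: MTA ${mta_credit[m]:>9,.0f}  "
          f"causal ${causal_credit[m]:>9,.0f}")
```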
To be fair, it’s not always deception. It’s often desperation. Most teams are never given the tools to calculate real causality.
CMOs say they want a seat at the table. But most still operate like support teams.
If you want credibility in the boardroom, act like it’s your business and your money.
“I became very good at interpreting marketing into the language of whoever I was talking to, like HR, Legal, Finance, the CEO. No marketing jargon. Just business terms.”
Boards fund systems that scale. That means reframing GTM as a system, not a series of tactics. It’s a mindset that requires critical thinking and letting go of outdated playbooks.
“This is the difference between being seen as a business contributor and being a craft shop.”
Start learning finance. Take courses. Do sales training. Train your team. Speak the language of the business. That’s how you earn respect and influence decisions.
Part 4 covered how to start a skunkworks project:
In Part 5, Mark explains why the silence matters.
“You want to assemble your story of change. You won’t have that if you declare you’re doing this up front.”
The goal here is to earn sequential trust over time, not to be secretive. When people feel the improvement first, they’re far more likely to believe the explanation later.
“If they already believe it, they’ll accept the facts. If they hear the facts first, they’ll resist.”
So don’t lead with a deck. Don’t sell a vision. Build causal models behind the scenes. Learn what’s working. Adjust what’s not. Let the results speak. The key is to keep learning.
Then, when the timing’s right, you can confidently walk into the boardroom with a better story and the data to back it up.
The hardest part of this shift isn’t modeling. It’s having the guts to do the right thing instead of always doing things right.
“The biggest issue we all face is courage. The courage to act.”
Too many marketing leaders stay stuck because it’s safer. Even if nothing changes.
“If you’ve tried the old approach your whole career and it hasn’t worked… then you’ve got to change.”
And according to Mark, to be that change, you have to stop waiting for permission, stop hiding behind bad math, and start proving your worth quietly, confidently, and causally.
Navigating a business is kind of like flying a plane. Causal AI gives GTM teams the instruments they need to fly safely through volatility. It does more than “keep you in the air.” It helps you choose better paths when visibility disappears.
Implementing Causal AI into GTM requires a mindset shift. Marketing leaders will need to let go of legacy systems like MTA because the change is coming fast.
Here’s where to begin:
If you want a seat at the table, you need to earn it and prove it.
Missed the LinkedIn Live session? Rewatch Part 5.
If you like this content, here are some more ways I can help:
Cheers!
This article is AC-A and published on LinkedIn. Join the conversation!
A lot of GTM teams are overloaded. New tech. New tools. New hype. All promising transformation, but rarely delivering clarity. In this recap of Part 4 of The Causal CMO, Mark Stouse explains why operationalizing Causal AI isn’t just about buying another tool. It first requires a mindset shift. A hard reset on how GTM teams define risk, read signals, and move forward.
Before you operationalize anything, you need to think differently.
The first step isn’t modeling or tooling. It’s dropping the need to be right.
Causal AI is only effective if you’re willing to look at what’s actually happening, not what you hoped would happen.
So do you want to be right? Or do you want to be effective?
This is a key distinction, especially for Go-To-Market teams. Instead of constantly trying to prove they’re right, they should ask what they need to do next.
That shift is already starting to happen.
Proof Analytics, for example, no longer looks like a traditional SaaS product. Most of Proof’s clients now rely on software-enabled services because self-serve just doesn’t work when teams are overwhelmed.
It’s not a tech problem. It’s a saturation problem.
“Teams today are saturated like ground that’s been rained on for too long. They can’t absorb anything new. The water just runs off.”
Too many GTM teams are stuck on this treadmill. They’re still chasing efficiency because it’s easier to cut cost than drive growth. But efficiency without proven effectiveness is meaningless.
And that’s where Causal AI comes in.
As we discussed in Part 1, Multitouch Attribution (MTA) assumes linearity and doesn’t account for time lag. It only focuses on the dots, not the lines in between.
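Here is a tiny synthetic illustration of the time-lag point. In this sketch the spend’s true effect arrives three months late, so a same-month correlation (the MTA-style view) sees almost nothing.

```python
import numpy as np

rng = np.random.default_rng(0)
spend = rng.uniform(50, 150, 36)        # 36 months of channel spend
revenue = 500 + rng.normal(0, 10, 36)   # baseline revenue plus noise
revenue[3:] += 4.0 * spend[:-3]         # the lift lands 3 months later

lag0 = np.corrcoef(spend, revenue)[0, 1]           # what same-month math sees
lag3 = np.corrcoef(spend[:-3], revenue[3:])[0, 1]  # the real relationship
print(f"same-month correlation: {lag0:.2f}, 3-month lag: {lag3:.2f}")
```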
Dashboards typically treat data like a mirror. But as Mark pointed out, data only reflects the past, and past is not prologue. Like crude oil, it’s useless until refined.
“There is no intrinsic value in data. Only in what it gets refined into.”
Mark shared a story from a meeting where a CIO showed how easy it was to manipulate attribution weights. Then he had various leaders at the table do the same. Same data, four outcomes. Each one reflected a different bias.
Guess who had the least credibility?
Yup. Marketing. The CIO said to them:
“Of everyone in the room, you arguably have the most bias. Your outcome is dead last in terms of credibility.”
Causal AI mitigates that kind of gaming. It tests patterns for causal relevance and recalibrates in real time. If the forecast starts to degrade, it tells you what to do next.
It doesn’t care if the news is good or bad. It just tells the (inconvenient) truth.
Before the model even begins, Mark’s team maps the external environment. They model the headwinds and tailwinds first. Only then do they plug in what the company is doing.
This is where most teams fall short.
Too often GTM is treated like an isolated system. But it’s not. It’s subject to risk, time lag, and external forces that marketers rarely model. And it shows.
According to Mark, the effectiveness of B2B GTM spend has dropped from 75% to just above 50% since 2018. That’s not a tactics problem. It’s a market awareness problem.
“The average B2B team is frozen in their perspective. They’re not thinking about the externalities unless it gets so bad they can’t ignore it.”
The result? Poor decisions, reactive guidance, missed opportunities. And an inability to plan for value because no one knows where in the calendar to look for it.
One of the most powerful features of Causal AI is the ability to model counterfactuals. What would happen if we made a different decision?
Until recently, this required expensive synthetic data. Now, GenAI tools make it accessible. With a detailed enough scenario prompt, teams can simulate outcomes, measure impact, and prioritize programs before spending a dime.
It’s like an A/B test for strategy. No need to touch real data. No risk of tripping legal wires. Just clarity.
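For example, a scenario prompt for this kind of counterfactual might look something like this (our illustrative wording, not Mark’s):

“We’re a B2B SaaS company with a 9-month sales cycle and a $50K average deal size. Interest rates are rising and two competitors just cut prices. Simulate the likely effect on deal velocity over the next four quarters if we shift 20% of our paid media budget into partner marketing.”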
“Most stealth efforts start here. The counterfactual model shows what’s probably happening. Then you go get the real data to prove it.”
It’s also the easiest way to build internal buy-in. Teams can explore alternatives without asking other departments for access or permission.
You don’t need a top-down mandate to operationalize Causal AI. In fact, Mark recommends the opposite.
Start small. Keep it quiet. Don’t even call it a transformation.
“Carve off a small budget and a couple of people. Model and learn for 9 to 12 months. Don’t say anything. Just execute.”
As the team starts learning and adjusting, people will notice. They’ll feel the shift before they understand it. Then, when the time is right, you explain how it happened.
“If people feel the improvement first, they’ll accept the facts. If they hear the facts first, they’ll resist.”
There’s no manipulation in this. It’s psychology. Let results speak before you tell the story. It’s like asking for forgiveness instead of permission.
Causal AI is not a marketing or revenue tool. It’s a business system.
Mark says the best clients are already thinking in terms of enterprise models. Finance teams often lead the adoption. They use causal modeling transparently across departments to see what’s working for the business as a whole.
And leadgen is not the goal. The board doesn’t care about leads. They care about cash flow, growth, and risk. In other words, bigger deals, faster deals, and more of them. Causal AI connects those dots and the lines in between.
“You can’t get to efficiency unless you know if it’s effective.”
Effectiveness is not a tactic. It’s a lens.
Causal AI doesn’t ask if you were right. It helps you get better at seeing possibilities and becoming more effective.
We’ll explore that further in Part 5 as we dig into investment decisions and boardroom conversations.
Until then, ask yourself:
Are you willing to be wrong long enough to get it right?
Missed the LinkedIn Live session? Rewatch Part 4.
If you like this content, here are some more ways I can help:
Cheers!
This article is AC-A and published on LinkedIn. Join the conversation!
By Gerard Pietrykiewicz and Achim Klor
Achim is a fractional CMO helping B2B GTM teams with AI adoption. Gerard is a seasoned project manager and executive coach helping teams deliver software that actually works.
Most AI automation tools look easy in demo videos. But when we tried building a simple system to summarize calls and send reports, reality hit hard: clunky UIs, unexpected limitations, and lots of wasted time. Still, when used right (especially for early prototyping), AI can be a breakthrough in team alignment. This article shares what worked, what didn’t, and where we go from here.
So, we recently jumped into the whole AI automation thing. The goal was simple: use make.com to build something that would summarize weekly Google Meet calls and email a neat report.
Easy, right?
All those flashy Instagram and LinkedIn videos had led us to believe it would be. We’ve worked with Zapier before, so we assumed this would be a similar, straightforward experience.
Not so fast.
First hurdle: Google Meet doesn’t just hand over transcripts with an API. Nope. They’re stuck in Google Docs in Drive. You have to give make.com access to a specific folder. Then came a “simple” filter for recent documents. Simple, unless you don’t know the exact code. The “intuitive” interface felt more like a maze when what was sorely needed was real control.
Then, to send the summary via Gmail, you have to link your entire account to make.com. That can make anyone uneasy, to say the least. Finally, setting up ChatGPT with API keys and managing credits wasn’t hard on its own, but put it all together, and it became a bigger headache than expected.
The make.com AI assistant, supposedly there to help, burned through free credits like kindling while trying to resolve a basic filter issue. The frustration wasn’t with the idea; it was with how hard it was to “make” it work. After an hour wrestling with the interface, it was clear that our time was better spent elsewhere.
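For contrast, here is roughly what that same pipeline looks like in plain Python. It’s a sketch, not production code: it assumes a Google service account with read access to the transcript folder, an OPENAI_API_KEY in the environment, and a Gmail app password. The folder ID and addresses are placeholders.

```python
import smtplib
from email.message import EmailMessage

from google.oauth2 import service_account
from googleapiclient.discovery import build
from openai import OpenAI

FOLDER_ID = "YOUR_DRIVE_FOLDER_ID"  # folder holding the Meet transcripts

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/drive.readonly"],
)
drive = build("drive", "v3", credentials=creds)

# Grab the most recently modified Google Doc (Meet transcript) in the folder.
resp = drive.files().list(
    q=f"'{FOLDER_ID}' in parents and "
      "mimeType='application/vnd.google-apps.document'",
    orderBy="modifiedTime desc",
    pageSize=1,
    fields="files(id, name)",
).execute()
doc = resp["files"][0]
transcript = drive.files().export(
    fileId=doc["id"], mimeType="text/plain"
).execute().decode("utf-8")

# Summarize the transcript with ChatGPT.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
summary = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": f"Summarize this meeting transcript:\n\n{transcript}"}],
).choices[0].message.content

# Email the summary via Gmail SMTP (an app password, not full account access).
msg = EmailMessage()
msg["Subject"] = f"Weekly call summary: {doc['name']}"
msg["From"] = "you@example.com"
msg["To"] = "team@example.com"
msg.set_content(summary)
with smtplib.SMTP_SSL("smtp.gmail.com", 465) as smtp:
    smtp.login("you@example.com", "YOUR_APP_PASSWORD")
    smtp.send_message(msg)
```

Even a script this short makes the tradeoff clear: more control and narrower permissions, in exchange for writing and running the glue yourself.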
Stephen Klein, CEO of Curiouser.AI, hits the nail on the head in this LinkedIn post. He argues that most of the “agentic AI” buzz is just that. Buzz.
Today’s AI “agents” are often just scripts, not independent thinkers. We’re years away from truly autonomous AI.
Klein is right. Businesses risk chasing inflated promises, throwing money at “Hype-as-a-Service” instead of real solutions.
Despite the roadblocks, there is reason to be optimistic. A recent “vibe-coding” experiment (quickly mocking up a concept using AI tools without overengineering) is a good example.
If you’re a non-technical manager leading a software team, you can use it to quickly prototype a basic idea. We tossed the code, sure, but the exercise completely changed how the team communicated. It cut down on all the detailed upfront planning we usually do.
Could we build a full, production-ready solution with vibe-coding today? Probably not. But the immediate wins (clear talks, faster decisions, smoother development) were huge.
One time, we were stuck on a feature. Everyone had a different idea of what “simple” meant. We spent hours in meetings, just talking in circles. With vibe-coding, I cobbled together a rough version in an hour. We put it on the screen, and suddenly, everyone saw the same thing. The room went from confused murmurs to “Oh, I get it!” in seconds. It was a game-changer for clarity.
Gerard Pietrykiewicz
This experience shows one clear truth: AI tools are harder to use than the marketing videos suggest. And no, we’re not ready to fire all our developers. But those who stick with it, who push past the early bumps and use these tools wisely, will find a real edge.
New tech always has growing pains. Think about how easy GenAI has made things. It went from complex APIs to something almost anyone could use overnight.
Yes, Stephen Klein is right to warn us about blindly following the hype. But his warnings shouldn’t stop us from trying things out. They should guide us to explore with care and common sense.
As leaders, our challenge is to bridge these gaps by pushing for simpler, more intuitive solutions. Maybe AI itself should design user interfaces that actually make sense for managers, not just developers.
It reminds me of the early days of the internet. Back in the 1980s, it was powerful, but only for those who understood complex commands. Then along came the web browser (anyone remember Netscape?), a simple interface that opened the World Wide Web to everyone. AI needs its browser moment.
Achim Klor
Like any new tech, AI tools will continue to trip us up. But every experiment makes us better. The more we test, the more likely we are to build the future we want, not just buy into the one being sold.
If you like this co-authored content, here are some more ways we can help:
Cheers!
This article is AC-A and published on LinkedIn. Join the conversation!