Cave Bits by Mouseflow: Uncovering Website Analytics

AI in CRO: What Experts Really Think (and How It’s Actually Being Used)

Mouseflow

In this episode, we explore the reality of AI in CRO and debunk the myth that AI will replace CRO professionals. Tune in to learn why human judgment still wins in strategic decision-making and how the future involves Humans + AI working smarter.

Speaker:

Welcome back to the deep dive. Okay, I want to get straight into it today. Let's do it. Because the topic we're covering is, well, it's pretty much the only thing anyone in our industry is talking about right now.

Speaker 1:

It really is. It feels like the ground is shifting under our feet.

Speaker:

Yeah, and I don't mean that in a, hey, check out this cool new tool kind of way. I mean it in a uh, is my career going to exist in five years kind of way.

Speaker 1:

That's the real question, isn't it? The anxiety is palpable.

Speaker:

Absolutely. We are looking at artificial intelligence, but you know, specifically how it's crashing into the world of conversion rate optimization. CRO.

Speaker 1:

Yeah.

Speaker:

And I have to be honest, when I first picked up this report, it's called AI in CRO, What Experts Really Think, I was bracing myself for some bad news.

Speaker 2:

I can imagine.

Speaker:

Because if you just look at CRO on paper, what is it? It's data analysis, it's A/B testing, pattern recognition. I mean, that sounds exactly like a job description for a super powerful algorithm.

Speaker 1:

It seems like the perfect candidate for automation, at least on the surface. And you're right, the report opens right up with that sentiment. It says something like, AI will replace CRO.

Speaker:

Right.

Speaker 1:

It's not a fringe theory, it's like the dominant anxiety.

Speaker:

So let's not beat around the bush. Is this a eulogy for the human optimizer? I mean, I see tools now that claim to auto-generate copy, auto-design variants, auto-allocate traffic. What is left for us to do?

Speaker 1:

Well, maybe put the resignation letter away for a minute. Okay. Because once you actually talk to the experts, the people deploying these tools at a huge scale, the whole narrative, it just changes completely.

Speaker:

Oh, how so?

Speaker 1:

The consensus in this deep dive is surprisingly firm. AI isn't taking over optimization.

Speaker:

Okay. I have to push back on that immediately. That sounds a little like wishful thinking.

Speaker 1:

I get it.

Speaker:

We're talking about machines that can process petabytes of data in seconds. I can't do that. You can't do that. So why aren't we just admitting the machine is better at the math?

Speaker 1:

Because CRO isn't math. That's the entire misconception. If CRO were just make the number go up, then yes, the AI would win hands down. But the experts here, they argue that CRO is actually about understanding real people.

Speaker:

Understanding real people.

Speaker 1:

And that is a very, very different data set.

Speaker:

Is it though? I mean, everything we do online, it leaves a data trail. Clicks, scrolls, heat maps. Isn't understanding people just a fancy way of saying aggregating enough behavioral data?

Speaker 1:

Ah, but that's the trap. That's thinking of people as just, you know, wallets with cursors attached.

Speaker:

Right.

Speaker 1:

The report makes a crucial distinction here. AI is phenomenal at telling you what happened. It can look at your dashboard and say conversions dropped 15% on mobile between 2 and 4 p.m.

Speaker:

Which is useful.

Speaker 1:

Incredibly useful, but it cannot tell you why. And the why usually lives completely outside the data set.

Speaker:

Give me a concrete example because I feel like AI is getting better at the why every single day. You know, sentiment analysis and all that.

Speaker 1:

Okay, sure. Let's imagine a B2B SaaS company, high-value enterprise pricing page. All of a sudden, conversion rates just tank. The data, the AI's whole world, shows that time on page is the same, scroll depth is the same, but nobody is clicking Book a Demo.

Speaker:

So the AI would suggest what? Changing the button color, shortening the headline.

Speaker 1:

Exactly. It starts iterating on the page elements. It tries to optimize the local funnel, but the human optimizer, who understands context, realizes something else. Which is that the company just announced a shift in their service level agreement that kind of screws over small businesses. And the whole market is reacting to a press release that isn't even on your website.

Speaker:

Wow. Okay, so the AI is optimizing a burning building.

Speaker 1:

Precisely. The source material calls this the context blind spot. AI can't fully understand user context because user context is the entire human experience. It's not just the session data.

Speaker:

So it hits this hard ceiling, it can optimize locally on the page, but it fails globally because it just doesn't know what's happening in the real world.

Speaker 1:

It lacks cultural and emotional peripheral vision. It doesn't know it's a holiday, or that a competitor just launched a free version of your product.

Speaker:

Or that the phrase you're using in your headline just became a political slur on Twitter yesterday.

Speaker 1:

Yes, exactly. That's a perfect example.

Speaker:

Okay, that makes sense. But let's play devil's advocate for a second. We know LLMs are getting smarter, they have access to the news, social sentiment. Isn't this blind spot just a temporary glitch? Won't, like, GPT-6 or 7 eventually connect those dots?

Speaker 1:

It might connect the data points, but can it weigh them? That's the real issue. It's about discerning signal from noise. If you feed an AI everything, news, weather, stock prices, it starts finding these correlations that aren't causal at all. It might tell you conversions dropped because it's raining in Seattle.

Speaker:

And a human knows. That's just ridiculous.

Speaker 1:

A human knows it's irrelevant. The report really emphasizes that context is king. And for right now and for the foreseeable future, we're the only ones who can carry that crown. We understand motivation. AI just understands action.

Speaker:

I like that distinction, motivation versus action. But this brings us to a weird paradox that the report mentions. If AI is this powerful tool, but it's kind of dumb about the why, there's a real danger there.

Speaker 1:

A huge danger.

Speaker:

They call it blindly trusting AI answers.

Speaker 1:

This is the danger zone. And honestly, this is where I see a lot of smart professionals getting burned.

Speaker:

But isn't the whole sales pitch of these tools that we can trust them? You know, let our AI algorithm optimize your ad spend. Let our AI write your winning copy.

Speaker 1:

Of course.

Speaker:

We buy the tools specifically so we can be passive. Are you saying that whole sales pitch is a lie?

Speaker 1:

I'm saying the sales pitch is dangerous if you swallow it whole. The report warns against accepting the output without asking the right questions. The failure mode here is just passivity.

Speaker:

It's the set it and forget it mentality.

Speaker 1:

Right. And think about the speed we all operate at. You're under pressure to launch a campaign. You ask the AI for five headline variations, it spits them out, they look fine.

Speaker:

They look fine.

Speaker 1:

Grammatically correct, they contain the keywords, so you ship them.

Speaker:

And I just saved myself two hours of brainstorming.

Speaker 1:

You did. But you stopped thinking. You didn't ask, is this tone actually aligned with our brand voice, or does it just sound like a generic sales bot? You didn't ask, is this making a promise our product can't actually keep? The AI doesn't have a conscience, and it certainly doesn't care about your brand equity. It just predicts the next likely word.

Speaker:

So the real danger isn't that the AI goes rogue and I don't know, destroys the website. The danger is that we get lazy and let mediocrity slide because it was generated so efficiently.

Speaker 1:

Efficient mediocrity. That is a great way to put it. The experts in the report, they stress that using AI actually requires more critical thinking from us, not less.

Speaker:

Which is so ironic, right? I want the tool to reduce my cognitive load, but you're telling me I need to ramp up my skepticism.

Speaker 1:

Your role just shifts. You stop being the generator and you start being the editor.

Speaker:

And being a good editor is hard work.

Speaker 1:

It's very hard work. You have to verify, you have to challenge, and you have to have the confidence to say, no, this is wrong, even when the machine is telling you it's statistically probable.

Speaker:

Okay, so we've established that AI is basically a sociopath with no context, and we shouldn't trust it blindly. Which sounds pretty bleak.

Speaker 1:

It does a bit, yeah.

Speaker:

But the report isn't anti-AI, is it?

Speaker 1:

Not at all. In fact, once you get past all the warnings, the experts are incredibly bullish on it. But they are bullish on very specific use cases. Okay. The report outlines what it calls a sweet spot, three key areas where AI doesn't just help, it absolutely dominates.

Speaker:

Let's break these down because this is the practical stuff people need. The first one is velocity.

Speaker 1:

Or move faster.

Speaker:

Okay, but I want to be careful here. We just talked about how speed can be dangerous, shipping bad headlines fast. So how is velocity a benefit?

Speaker 1:

Think about the blank page problem. How much time do you personally spend staring at a blinking cursor, just trying to come up with the first hypothesis for a test?

Speaker:

Way too much. And by the time I come up with like three ideas, I'm already exhausted.

Speaker 1:

Exactly. AI removes the friction of initiation. It can generate 50 hypotheses in 10 seconds.

Speaker:

50.

Speaker 1:

50. Now, 40 of them might be total trash, but 10 of them might be gold. A human might never have even reached those 10 gold ideas because they burned out on idea number four.

Speaker:

So it's about increasing the volume of experimentation.

Speaker 1:

It's about accelerating the build phase of that build, measure, learn loop. If you can test 10 times as many variations, you just statistically increase your chances of finding a winner.
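
The arithmetic behind that claim is worth making concrete. A quick sketch, where the 5% per-variation win probability is an assumed, illustrative number, not a figure from the report:

```python
# Assumed, illustrative probability that any single variation is a real winner
p = 0.05

# Chance that at least one of n variations wins: 1 - (1 - p)^n
for n in (1, 10, 50):
    at_least_one = 1 - (1 - p) ** n
    print(n, round(at_least_one, 2))  # 1 -> 0.05, 10 -> 0.4, 50 -> 0.92
```

The odds don't grow linearly, but testing ten times as many variations still dramatically improves the chance of finding at least one winner.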

Speaker:

Provided you are filtering them with human judgment.

Speaker 1:

That's the key. Always. And there's a technical side of velocity too, right? Like multivariate testing.

Speaker:

Oh, for sure. Trying to test four headlines against three hero images and two button colors.

Speaker 1:

Doing that setup manually is a total nightmare. AI just automates the permuting. It lets you run these complex experiments that, you know, just wouldn't be feasible for a human team to manage on their own.
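
For a sense of why that setup explodes so quickly, here's a minimal Python sketch of the permutation step; the element lists are made up for illustration:

```python
from itertools import product

# Hypothetical test elements: 4 headlines x 3 hero images x 2 button colors
headlines = ["Save time", "Cut costs", "Grow faster", "Start free"]
hero_images = ["team.jpg", "dashboard.jpg", "customer.jpg"]
button_colors = ["green", "orange"]

# Every combination becomes one variant in the multivariate test
variants = list(product(headlines, hero_images, button_colors))
print(len(variants))  # 4 * 3 * 2 = 24 variants to build and track
```

Twenty-four variants from just three page elements, which is exactly the kind of bookkeeping no one wants to do by hand.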

Speaker:

Okay, so velocity is about removing the bottleneck of creation and setup. What about the second area? Vision.

Speaker 1:

This one is my favorite. Spot patterns. This is where AI just leaves humans in the dust.

Speaker:

How so? I mean, we're pretty good at spotting patterns. I could look at a graph and see a downward trend.

Speaker 1:

Sure, you can see a trend in two dimensions, maybe three. You can see traffic versus time or conversion versus device, but can you see the correlation between time of day, referral source, browser version, and cart value all at the same time?

Speaker:

My brain hurts just trying to visualize that chart.

Speaker 1:

Exactly. We have cognitive limits. AI operates in like n-dimensional space. It can look at a data set and say, hey, do you notice that users on iOS 17 coming from LinkedIn are bouncing at a 90% rate? But only if they land on the pricing page.

Speaker:

That is a needle in a haystack.

Speaker 1:

It's a needle in a stack of other needles. A human analyst might spend a week pivoting tables in Excel to find that. Or they might never find it because they didn't even think to look for that specific combination.
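
The mechanics of that kind of segment sweep can be sketched in a few lines; the session log and the 90% threshold here are invented for illustration:

```python
from collections import defaultdict

# Toy session log; a real analytics export would have thousands of rows
sessions = [
    {"os": "iOS 17",  "source": "LinkedIn", "page": "/pricing", "bounced": True},
    {"os": "iOS 17",  "source": "LinkedIn", "page": "/pricing", "bounced": True},
    {"os": "iOS 17",  "source": "Google",   "page": "/pricing", "bounced": False},
    {"os": "Android", "source": "LinkedIn", "page": "/home",    "bounced": False},
]

# Tally bounces per (os, source, page) combination
stats = defaultdict(lambda: [0, 0])  # segment -> [bounces, total sessions]
for s in sessions:
    key = (s["os"], s["source"], s["page"])
    stats[key][0] += s["bounced"]
    stats[key][1] += 1

# Flag segments whose bounce rate crosses the anomaly threshold
flagged = [seg for seg, (b, t) in stats.items() if b / t >= 0.9]
print(flagged)  # [('iOS 17', 'LinkedIn', '/pricing')]
```

An AI anomaly detector does this across every combination of every dimension at once, which is the part no analyst can brute-force in Excel.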

Speaker:

But the AI doesn't need to think to look.

Speaker 1:

It just processes everything and flags the anomalies.

Speaker:

So in this case, the AI is the detective, and it's finding clues that are literally invisible to the naked eye.

Speaker 1:

It creates vision where we are blind. It surfaces the questions that we should be asking.

Speaker:

Okay, that covers vision. The third one is synthesis, to summarize insights.

Speaker 1:

This is all about the unstructured data problem.

Speaker:

Unstructured data. You mean like customer reviews and things like that?

Speaker 1:

Reviews, support tickets, chat logs, open-ended survey responses, all the qualitative stuff.

Speaker:

Right.

Speaker 1:

In the old days, and by old days I mean like two years ago, if you wanted to analyze qualitative data, you had to read it. All of it.

Speaker:

And let's be real, nobody reads all of it. You read the first 50 reviews, you get bored, and you assume you know what everyone is saying.

Speaker 1:

Or you just cherry pick the reviews that confirm what you already believe. Look, this guy loves the new feature while ignoring the 200 people who hate it.

Speaker:

Guilty as charged.

Speaker 1:

AI just changes the physics of this. You can feed it 5,000 support tickets from the last month and just ask, what are the top three friction points for users trying to upgrade?

Speaker:

And it doesn't get bored.

Speaker 1:

It doesn't get bored. And it has no confirmation bias. It just gives you a synthesized view. Forty percent of users are confused by the tier naming convention. Now that is an actionable insight that might have been buried forever.
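
As a toy illustration of that synthesis idea, here's a crude keyword-frequency stand-in for what an LLM actually does with free text; the tickets and theme keywords are invented for the example:

```python
from collections import Counter

# Toy ticket excerpts; real input would be thousands of free-text tickets
tickets = [
    "I don't understand the difference between Pro and Plus tiers",
    "Upgrade button is greyed out on my plan page",
    "Which tier includes the API? The naming is confusing",
    "Payment failed when I tried to upgrade",
]

# Hypothetical friction themes and the keywords that signal them
themes = {
    "tier naming confusion": ["tier", "naming", "difference"],
    "upgrade flow breakage": ["upgrade", "greyed", "payment failed"],
}

# Count how many tickets touch each theme (a ticket can hit several themes)
counts = Counter()
for t in tickets:
    text = t.lower()
    for theme, keywords in themes.items():
        if any(k in text for k in keywords):
            counts[theme] += 1

for theme, n in counts.most_common():
    print(f"{theme}: {n} of {len(tickets)} tickets")
```

An LLM goes far beyond keyword matching, of course, but the output shape is the same: thousands of raw messages collapsed into a handful of ranked friction points.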

Speaker:

So synthesis is really about listening at scale.

Speaker 1:

That's it. It turns all that noise into a clear signal.

Speaker:

You know, I'm looking at these three benefits: velocity, vision, and synthesis. And what strikes me is that none of them are strategy. Velocity is execution, vision is observation, synthesis is organization, but not one of them is actual decision making.

Speaker 1:

And that brings us to the verdict of the entire report. If there is one sentence I want people to like tattoo on their forearms, it's this one. It can't make decisions for you.

Speaker:

Yeah, the report is very firm on this. Human judgment still wins.

Speaker 1:

Let's really unpack that word judgment, because I think people sometimes confuse judgment with prediction.

Speaker:

Right. If I ask ChatGPT which of these two headlines is better, and it picks one, isn't that a judgment?

Speaker 1:

No. That is a probabilistic prediction based on its training data. It's just guessing which one is statistically more likely to align with patterns of good headlines it has seen in the past.

Speaker:

So what is judgment then?

Speaker 1:

Judgment is weighing consequences. Judgment is ethics, judgment is strategy. Let's go back to that SaaS pricing page example. The AI might predict if you make it impossible to cancel the subscription online and force people to call a phone number, your retention rates will go up.

Speaker:

And statistically, that's probably true. Dark patterns work.

Speaker 1:

Exactly. The math checks out. But a human using judgment says, yes, retention will go up for three months, but our brand reputation will be destroyed, our Trustpilot score will tank, and we'll probably get sued.

Speaker:

So the AI optimizes for the metric, but the human optimizes for the business.

Speaker 1:

That's it perfectly. The AI creates the map. It says this is the fastest route. The human driver says, yeah, but that route goes through a swamp and I'm driving a sedan. That is judgment.

Speaker:

It's funny because for years we've been told that to survive, we need to be more data-driven, more like computers. But this report is suggesting the complete opposite. It's saying to survive the AI era, we need to be more human.

Speaker 1:

It's the ultimate pivot. The value of the person listening to this, your value, isn't in your ability to build a pivot table anymore. The machine can do that. Your value is in your ability to look at that table and ask, so what?

Speaker:

The so what question.

Speaker 1:

That's the money question. The AI says mobile traffic is up. You ask, so what? The AI says bounce rate is down. You ask, so what? Does that mean they're engaging, or does that mean they just can't find the logout button? Meaning is the exclusive domain of the human.

Speaker:

This reframes the entire career path, doesn't it? I feel like a lot of people in CRO and marketing, they've built their identities around being the technical person.

Speaker 1:

For sure, the one who knows how to set up the tags and run the scripts.

Speaker:

Yeah.

Speaker 1:

And look, those technical skills are still useful, but they are becoming commodities. The skills that are gaining value are the so-called soft skills, empathy, curiosity, strategic thinking, the ability to argue with the data.

Speaker:

Arguing with the data, I love that.

Speaker 1:

You have to. You have to be the skeptic in the room. The future isn't AI versus humans. The report sums it up perfectly. It's humans plus AI working smarter.

Speaker:

It's a cyborg model.

Speaker 1:

It really is. You let the AI handle all the grunt work, the pattern spotting, the initial drafting, the massive data crunching, and that frees you up to do the job you were actually hired to do in the first place.

Speaker:

Which is to understand the customer and grow the business.

Speaker 1:

Exactly. Nobody got into marketing because they loved manually tagging 5,000 support tickets. God knows. So let the robot do the robot work. You do the human work.

Speaker:

It sounds like we're all getting a promotion. We're being promoted from data processor to head of strategy.

Speaker 1:

Or head of judgment. I think that's the title we should all be aiming for.

Speaker:

Head of judgment. I like the sound of that. It commands a certain salary, I would think.

Speaker 1:

It certainly should.

Speaker:

So we've gone from this fear of being replaced to realizing we're actually being augmented. We know AI gives us velocity, vision, and synthesis. But we also know it has this massive blind spot with context, and it can be dangerous if we trust it passively.

Speaker 1:

Right. And ultimately, the guardrail is you, the human in the loop.

Speaker:

Okay, before we wrap up, I want to leave everyone with a bit of a challenge here.

Speaker 1:

Ooh, I like it.

Speaker:

We've been talking about how AI takes away the grunt work so we can focus on the high-level thinking.

Speaker 2:

Right.

Speaker:

But that's actually a really scary proposition for some people. Because the grunt work is safe, it's busy work. You can hide behind a spreadsheet for eight hours and feel productive.

Speaker 1:

That is very true.

Speaker:

So the question to you is if the AI takes away all the busy work, are you ready to step up? When you strip away the tasks that the machine can do, is your empathy, is your strategy, is your judgment actually sharp enough to carry the full load?

Speaker 2:

That is the provocative question. There is nowhere to hide anymore. If you're not bringing insight, you're not bringing value.

Speaker:

The machine is doing its part. Now it's time for us to do ours. Think about that next time you open your dashboard. Thanks for listening to the deep dive. We'll see you next time.

Speaker 2:

Stay curious.