Finally Standing on (Statistically) Solid Ground: Mapping GEO

An Interview with Nick Haigler


Nick Haigler, R&D Lead, AI & Innovation at Seer Interactive

(Letting us all know there’s nothing to worry about… sure, Nick. Sure.)

After my interview with Jono Alderson, I spent a few days feeling like someone had yanked the floorboards out from under my career. Websites might become museums. Journeys might disappear. Machines might curate reality itself (in fact, they might already be doing so…). You know, light stuff.

So when it was time for the next interview, I was craving something familiar. Something steady. Something I could measure without needing three philosophy degrees and a therapist.

Enter: numbers. 💜

Nick Haigler didn’t wax poetic about the metaphysics of discovery. He brought conversion rates, testing methodologies, and charts that practically asked me how my day was going. Within minutes, we were discussing CTRs, sample sizes, and user intent patterns. I felt my heart rate return to a normal pace for the first time in weeks.

It turns out that GEO looks a lot less intimidating when someone hands you the data.

Nick approaches this entire shift like an analyst: you observe, you test, you compare, you learn. 

For the first time on this Mapping GEO adventure, I felt like I was standing on statistically solid ground.

Lower CTR, Higher CVR

One of the things I love most about Nick is how calmly he delivers information that should, in theory, terrify me. Case in point: we are entering a world where sites get fewer clicks. Far fewer. AI Overviews are already shrinking organic traffic because users increasingly get full answers without ever leaving the AI’s ecosystem [4].

But here’s the twist that made my little optimizer heart skip a beat.

The clicks we do get are better.

“AI Overviews reduce clicks,” Nick told me, “but the folks who click are much more down-funnel and more likely to engage or convert” [7].

Translation: lower CTR, higher CVR.

Or, in language I find deeply comforting: fewer visitors, less noise, more meaningful behavior.
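To see why that math comforts me, here’s a tiny back-of-the-envelope sketch. The numbers are invented for illustration (they are not Nick’s data), but they show how clicks can crater while conversions hold steady:

```python
# Hypothetical before/after arithmetic for "lower CTR, higher CVR".
# All numbers are invented for illustration; they are not from Nick's studies.

impressions = 10_000

# Before AI Overviews: more clicks, but many are casual, low-intent visitors.
ctr_before, cvr_before = 0.05, 0.02              # 5% CTR, 2% CVR
clicks_before = impressions * ctr_before          # 500 visits
conversions_before = clicks_before * cvr_before   # 10 conversions

# After AI Overviews: the AI answers the "basic stuff", so only
# high-intent users click through.
ctr_after, cvr_after = 0.02, 0.06                 # 2% CTR, 6% CVR
clicks_after = impressions * ctr_after            # 200 visits
conversions_after = clicks_after * cvr_after      # 12 conversions

print(f"Visits: {clicks_before:.0f} -> {clicks_after:.0f}")                  # 500 -> 200
print(f"Conversions: {conversions_before:.0f} -> {conversions_after:.0f}")   # 10 -> 12
```

In this made-up scenario, traffic drops 60% while conversions actually tick up. That is the whole trade.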

Nick explained that AI introduces friction by answering the “basic stuff” before a user ever reaches you. The only people who click through are the ones who actually want something. They’ve already consumed context. They already understand the category. They already know what they want.

It’s like your website suddenly became a speakeasy. Only the bees' knees, cat’s pajamas, and real McCoys get through the door.

But Survivorship Bias

And this part I understood immediately. It’s pure survivorship bias.

Traffic is down, but conversion is up? If your speakeasy door is no longer on your homepage but somewhere “out there” — deep in an AI-curated conversation — then, yes, the folks who get through that door are more qualified. Great.

But what about the people who never even saw the door?

This is where things get uncomfortable.

AI could be telling perfectly interested, ready-to-buy users that your product is “out of stock,” or “not recommended,” or “rated poorly,” based on stale, partial, or just plain incorrect data. Those users — who would have been good fits — never make it to your site at all.

This is what survivorship bias looks like in the agentic age. It’s the gap between the clicks you see and the opportunities you don’t.

Or as the hospitality world calls it: unconstrained demand — the true demand that exists beyond what your data currently reveals [30].
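The hotel version of the math is simple, and it translates directly to GEO. A one-liner sketch with invented numbers:

```python
# Unconstrained demand, translated to GEO. All numbers are invented.
in_market_users = 1_000   # people who asked an AI assistant about your category
observed_visits = 150     # the "survivors" who actually clicked through to you

unseen_demand = in_market_users - observed_visits
print(f"Analytics sees {observed_visits}; another {unseen_demand} never found the door.")
```

Your dashboards only ever show the 150. The other 850 are the unconstrained demand you have to estimate, because no report will hand it to you.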

Nick’s data reinforces this, too. In one of his cases, AI-driven visitors converted at significantly higher rates than traditional organic traffic — not because the website changed, but because the users did: people arriving with stronger intent [29].

When AI systems take over the top-of-funnel educational work, the traffic that reaches your site is more qualified, more ready, and more likely to convert. And as an experimentation nerd, I couldn’t help thinking:

This is the kind of shift that fundamentally changes what “good traffic” even means.


When Your Brand Has an Evil AI Twin

Somewhere around the middle of our conversation, Nick said a sentence so quietly and casually that it took my brain a full three seconds to register the severity.

He said, “Sometimes the AI just gets you wrong.”

Donald Glover bringing pizza into a room filled with fire from Community

That was it. No fanfare. No foreboding music. Just that. Meanwhile, inside my head, a tiny version of me began thinking, “This is the darkest timeline.”

Wrong how?
Wrong like… a typo?
Wrong like… “not indexed”?
Wrong like… “this brand no longer exists and also maybe committed tax fraud in 2014”?

Upstream Distortion

Nick explained that AI systems do not rely on what your brand says. They rely on the entire information environment that surrounds you: old reviews, outdated descriptions, neglected blog posts, Reddit rants, and the occasional conspiracy theorist who’s convinced your customer support team is staffed by lizard people.

And because these models blend all of that together, they sometimes surface versions of your brand that feel… off.

Or at least slightly outdated.

This is upstream distortion. LLMs form a “mental” model of your brand before a user ever expresses intent. If that model is incorrect, you may never even be part of the user's consideration set.

Nick has an entire study on this problem. He refers to it as the amplification of brand misconceptions [28]. The model isn’t malfunctioning. It’s doing exactly what it was trained to do: synthesize a picture of you from whatever data it can find. If that picture is distorted, that is not an AI failure. That is a brand governance failure.

He summed it up perfectly, “If AI has the wrong idea about your brand, the model is just reflecting the information environment you’re leaving behind.”

Translation for my fellow experimenters: If the data is messy, the machine will be messy. (Junk in → junk out).

And the scary part is how subtle these distortions can be. It might not be “Your product is terrible,” or “Your service no longer exists.” It might be a single outdated product detail that influences a recommendation, or a misinterpreted feature. Or it might be one old review that keeps resurfacing because it happens to be the most structured data the model can find.

Models are incredibly sensitive to gaps, and when you don’t fill those gaps, someone else will. 

The good news is that the solution is not mystical. It is tactical:

  • Brand monitoring through LLMs (see the sketch after this list).

  • Fresh, authoritative, consistent content. 

  • Modernized descriptions.

  • Clear product and service definitions.

  • Direct answers to the questions people ask.
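Since “brand monitoring through LLMs” can sound abstract, here is a minimal sketch of what it might look like, using the OpenAI Python SDK. Everything brand-specific (the model name, the questions, the expected facts) is a placeholder; the idea is simply to ask the model what it believes about you on a schedule and flag drift:

```python
# Minimal brand-monitoring sketch. Assumptions: OpenAI Python SDK >= 1.0,
# OPENAI_API_KEY set in the environment; model, questions, and facts are placeholders.
from openai import OpenAI

client = OpenAI()

# Questions a prospective customer might ask an assistant about your brand,
# paired with facts a correct answer should contain. All hypothetical.
CHECKS = {
    "What does Acme Analytics do?": ["experimentation", "analytics"],
    "Is Acme Analytics' free tier still available?": ["free tier"],
}

def audit_brand(model: str = "gpt-4o-mini") -> None:
    for question, expected_facts in CHECKS.items():
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        )
        answer = response.choices[0].message.content.lower()
        # Flag answers that are missing facts we expect the model to know.
        missing = [fact for fact in expected_facts if fact not in answer]
        status = "OK" if not missing else f"DRIFT, missing: {missing}"
        print(f"{question}\n  -> {status}")

if __name__ == "__main__":
    audit_brand()
```

Run something like this weekly (and against more than one model) and you have a crude early-warning system for upstream distortion, long before it shows up in your analytics.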

Over time, LLMs learn to prefer your version of the story because it is the most coherent one they have. Jono talked a lot about this in our previous Mapping GEO conversation.

This section of the conversation didn’t scare me as much as it grounded me. Machines do not misrepresent brands maliciously. They misrepresent brands because brands become repositories for all signals (some of which may not match reality).

Upstream distortion is preventable. You just have to notice it.

When LLMs Send Your Customers Into a Maze… and the Velociraptors Are Loose

Right after we finished discussing upstream distortion (all the ways an LLM can misunderstand your brand), Nick dropped another quiet bomb into the conversation:

Velociraptor opening a door

“AI is taking over the top of the funnel.”

He said it casually.

I heard it like the moment when someone realizes the velociraptors have learned how to open doors.


Downstream Distortion

Because here’s what he meant:

AI systems are now educating our customers about our products… not us.

They summarize, filter, and stitch context together. They decide what matters. Users arrive not wondering whether your product is the right fit, but already halfway convinced. (Or fully unconvinced.) And that conviction didn’t come from you or your site. It came from the velociraptor.

Credit: Giphy, Jurassic Park.

How many others have the velociraptors scared away? (Or… eaten?)

How Velociraptors Provide the Educational Brief

Nick studies this constantly. His datasets show the same pattern: AI Overviews shrink clicks, but the remaining clicks are far more qualified [7, 29].

That part is great.

But the missing part is where my optimizer brain whispered uh-oh: the people who were routed elsewhere and never made it to your site, because the LLM misread something or surfaced outdated information.

This is survivorship bias in the age of GEO. 

Traffic is down, conversion is up? Yay! Yes. But also… maybe slow down on the celebratory cupcakes. Because if your “speakeasy door” is no longer on your homepage but out in some AI-curated conversation, you’re only seeing the survivors. The ones who made it past the raptors.

Everyone else? They’re not in your analytics. You don’t know where they are. You only vaguely know that they are. The unconstrained demand, the delta between the customers you see and the customers you should have had (if the raptors hadn’t scared them off) [30], well, that’s something we haven’t really begun to study.

That’s why Nick kept emphasizing how AI systems compress what used to be an entire educational journey into a few synthesized lines. You used to guide the user. Now, the velociraptor provides the briefing.

And if your brand isn’t feeding the LLM enough fresh, accurate, well-structured information [28], well…

How Not to Be Prey

But if you do feed it the right material?

  • Clear content

  • Consistent signal

  • Authoritative updates

  • Structured, current information

Now, your site features open doors, short queues, and the exact products your user wants. You might as well get out of your Jeep and admire the AI that brought in your best customers! Might even play it a sweeping John Williams theme. But the one you should be thanking is yourself. You did the work. 

Nick kept circling this point because it really is the structural shift: AI isn’t just changing user journeys. It is taking ownership of the consideration cycle.

Your job has shifted from guiding users through a funnel to making sure the AI has the right material to guide them to you effectively.

Not stories, but signals. Not arcs, but context fragments. Not pages, but meaning.

And if you don’t keep the LLM well fed, you don’t lose a customer. You risk losing the entire unseen demand curve [30].

Redefining “Good Traffic”

Nick warned me early in our conversation that “every KPI we’ve trusted for the last 20 years is about to look wrong.”

Reader, he was not kidding.

If the beginning of our conversation was where I perked up (“yay, numbers I understand”), this is where those numbers started behaving like gremlins doused in Red Bull. Teams everywhere are reporting the same baffling patterns:

  • High impressions… low clicks.

  • Lower scroll depth… higher conversion.

  • Shorter sessions… better revenue per visitor.

  • More brand queries… less organic traffic.

  • Engagement metrics collapsing… while sales go up.

At first glance, it appears that your analytics dashboard is experiencing an existential crisis.

But Nick explained the real issue: your traffic is fundamentally different.

How “How” Works Now

AI Overviews and LLM-driven discovery have fundamentally reshaped how people arrive at your site. Here, how isn’t just the manner by which they arrive but also the condition in which they arrive. Users aren’t arriving curious anymore. They’re arriving pre-briefed. The LLM already explained who you are, what you offer, how you compare to competitors, and what trade-offs matter for the purchase decision.

As Kevin Indig put it in the UX study on AI Overviews, users “skip the exploration phase entirely.” They don’t wander. They don’t browse. They no longer behave like traditional organic traffic [5].

Nick sees this too. In his AIO CTR dataset, he found that impressions remained relatively stable, but CTR dropped sharply, while the conversion rate climbed [7]. And in the ChatGPT conversion case study, AI-referred users converted at dramatically higher rates than standard organic traffic. Not because the site changed, but because the person arriving on the site had already done the homework somewhere else [29].

When that happens, all the pretty behavior metrics melt into nonsense:

  • Scroll depth collapses because users land, confirm, and act.

  • Time on site shrinks because the LLM already handled the education.

  • Bounce rate improves because only the most qualified visitors show up.

  • Conversion rate rises because these visitors are farther along the funnel.

I joked that it’s like users are showing up with the cheat sheet already filled out. 

The Red Herring

But the metric shift has a darker side. If all your KPIs point to “less engagement, more conversion,” it’s easy to misdiagnose what’s happening. You might think: “Oh! We improved our site experience.”

But if you look closer, the real story is this:

AI is handling the top-of-funnel work you once did. Your analytics only measure what happens after the handoff.

This is why Wil Reynolds argues that KPIs from 2020 simply don’t work in a GenAI environment [8], and why Tracy McDonald’s September 2025 AIO update shows CTR continuing to slide as AI Overviews become more robust [27].

It isn’t that your traffic is “low quality” or “getting worse.” It’s that your traffic is becoming a self-selected subset of highly primed buyers, and all your metrics are reflecting that change.

The Real Challenge

Which leads to the big reframe Nick kept emphasizing: 

You don’t have a traffic problem. You have a visibility interpretation problem.

If you measure performance only by what happens after the AI-curated handoff, you’re only measuring survivors. The people who made it past:

  • Upstream brand interpretation

  • Midstream product evaluation

  • Downstream availability checks

  • Velociraptors

The ones who didn’t? They never appear in your analytics. They’re invisible.

Credit: Giphy. Clueless cheetah? Or invisible data?

That doesn’t mean you’re winning. It means you have no idea whether you’re winning.

And that, perhaps more than anything, is why Nick’s work matters so much. He isn’t just identifying behavioral anomalies. He’s mapping the new relationship between visibility, demand, and conversion.

In the GEO era, meaningful measurement isn’t about “traffic.” It’s about understanding what the traffic you do see represents… and what the traffic you don’t see might be trying to tell you (in dance?).

Credit: Giphy, La La Land, dancing in LA traffic, as one does

The New Visibility Risk: Internal Inconsistency

Or If Your Teams Can’t Agree, the Model Won’t Either

If AI systems can misinterpret brands before the click and misdirect users after the click, then the real question becomes unavoidable:

So… how do we keep models from getting everything wrong in the first place?

Nick didn’t flinch. He didn’t get dramatic. He didn’t act like this required esoteric technical magic. He just said the part most teams never want to hear out loud:

“Most of the time, the model gets things wrong because the brand hasn’t given it anything better to work with.”

The research backs him up.

His article on LLM-amplified brand misconceptions shows this exact pattern [28]. If the ecosystem is full of outdated, inconsistent, shallow, or contradictory information, the LLM doesn’t “figure out the truth.” It averages the noise. Your brand becomes the composite of whatever scraps the model can gather.

That is not a reputation strategy. That is chaos in a trench coat.

What the Model Needs From You

Nick’s guidance was refreshingly tactical:

  • Publish fresh content (recency, y’all!).

  • Make sure it is authoritative.

  • Answer the real questions users actually ask (relevance!).

  • Address misconceptions directly, rather than hoping they fade.

  • Keep your information environment clean, aligned, and current.

None of this is flashy. None of it wins awards. (Although awards might help with the authoritative thing!) All of it is hard to do consistently.

This is the part where most experimentation, SEO, UX, and content teams realize they have been treating brand knowledge as a “we’ll fix it later” chore. But in the agentic era, “later” is the gap the model fills for you, using whatever it found from three years ago.

And unlike traditional search, LLMs do not rely on a single source of truth. They synthesize across the entire ecosystem. So when a long-forgotten blog post, an old product description, or a stale PDF contradicts your fresh, shiny homepage…

The model doesn’t pick a side. It blends the story.

If that blend is wrong, your users only see the wrong version. 

You only see the effect: strange behavior, weird traffic drops, confusing patterns, missing demand. 

Governance Matters More Than Churning Out Content

Nick said something else I found really compelling: “LLMs reward clarity and consistency. Not volume.”

This lines up with what Alisa and Jono have found, too. Both point toward entity clarity, meaning, and stable signals as the real drivers of visibility [2, 3, 14, 16]. Not how many posts you publish. Not how many pages you rank for. Not how loud you shout.

Meaning over motion.

The New Job Description

What Nick is really saying is this:

Your job is no longer to persuade users. Your job is to teach the model how to persuade users.

And that requires a level of cross-functional governance most teams have never built. Not the glamorous kind. Not the enterprise framework buried in a shared folder called “FINAL_v7_APPROVED_USE_THIS_ONE” that lives in a Confluence page last updated during the Obama administration. (Are you laughing or crying right now? Both? Yeah. Admit it. We all have this document.)

Credit: Giphy, Floppy disk

I mean real governance: shared definitions, current content, structured updates, and, hardest of all, getting everyone to agree on what the brand knows about itself and how it communicates that knowledge. 

Because if you can’t even agree internally on what the brand is saying, what chance does an LLM have? That’s how your speakeasy door ends up in an alley no one can find. And when inconsistency grows, you don’t just confuse teams — you lose visibility long before your analytics show a single sign something’s wrong.

CRO Meets GEO

Or Why My Sample Size Calculator Just Started Crying

By this point in the interview, my optimizer brain wasn’t buzzing in the usual “yay, sample size calculations!” way. It was buzzing in the “wait… how do you run experiments when an LLM has already pre-baked the outcome?” way.

Because here’s the part no one warns experimenters about: We’re no longer testing whether users understand our carefully curated site journey... We’re testing whether models understand our brand.

And that is a… shift.

Nick was calm about all of this (as usual). I, meanwhile, was mentally mapping escape routes like someone who just heard a velociraptor click a door handle in the next room.

Jurassic Park, Clever Girl
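Before we get to the shifts, a quick aside on why my sample size calculator was crying. Both inputs to test duration just moved: AI-filtered traffic means fewer visitors per day, but the higher baseline CVR shrinks the sample you need to detect the same relative lift. A back-of-the-envelope sketch with invented numbers, using the standard two-proportion approximation:

```python
# Back-of-the-envelope test-duration math under AI-filtered traffic.
# Traffic numbers are hypothetical; the formula is the standard
# two-proportion sample-size approximation (alpha = 0.05, power = 0.8).
from statistics import NormalDist

z_alpha = NormalDist().inv_cdf(0.975)  # two-sided alpha = 0.05
z_beta = NormalDist().inv_cdf(0.80)    # power = 0.8

def n_per_arm(p1: float, relative_lift: float) -> float:
    """Visitors needed per variant to detect a relative lift on baseline CVR p1."""
    p2 = p1 * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2

# Before: lots of traffic, low-intent visitors, 2% baseline CVR.
# After: AI filters the funnel, 60% less traffic, 6% baseline CVR.
for label, daily_visitors, baseline_cvr in [("before", 500, 0.02), ("after", 200, 0.06)]:
    n = n_per_arm(baseline_cvr, relative_lift=0.10)
    days = 2 * n / daily_visitors  # two variants split the daily traffic
    print(f"{label}: ~{n:,.0f} visitors per arm, ~{days:,.0f} days to finish")
```

In this made-up scenario, the higher baseline roughly offsets the lost traffic, but either way the calculator has a point: onsite tests on AI-filtered traffic run long, which is one more reason the interesting experiments are moving upstream.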

The First Big Shift: Testing Upstream, Not Just Onsite

In traditional CRO, you design variant B, you measure the delta against control A, and life is good.

In the GEO era? LLMs are the new middlemen.

Before a user ever lands on your meticulously structured A or your perfectly planned B, an AI system has:

  • interpreted your brand

  • summarized your offering

  • compared you to competitors

  • filtered your availability

  • considered the user’s past preferences and personalization

  • pulled your information into a synthesized recommendation

…and only then handed the user over to your site.

It’s Jono’s machine-immune-systems idea brought to life [17]: algorithms constantly filter, adjust, defend, and “clean up” the information environment before users ever interact with your UX.

So now, experiments must account for two layers:

  1. What you change onsite, and

  2. How AI systems interpret those changes upstream.

This is where Nick’s research becomes essential. His datasets show how even small shifts in content clarity or entity wording can influence whether AI systems surface the right information or quietly swap in a competitor instead [28].

Your “test” might succeed… but only if the model understood what you changed.
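One way to make that upstream layer measurable is to track an inclusion rate: across a fixed basket of buying-intent prompts, how often does the model’s answer mention you at all? A sketch (not Nick’s methodology; the prompts, brand name, and model are placeholders):

```python
# Inclusion-rate sketch: how often does the model surface our brand across a
# fixed basket of buying-intent prompts? Run before and after a content change
# and compare. Prompts, brand name, and model are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()

PROMPTS = [
    "What's the best A/B testing platform for a mid-size ecommerce team?",
    "Recommend an experimentation tool that integrates with GA4.",
]
OUR_BRAND = "acme analytics"

def inclusion_rate(model: str = "gpt-4o-mini", runs_per_prompt: int = 5) -> float:
    hits, total = 0, 0
    for prompt in PROMPTS:
        for _ in range(runs_per_prompt):  # answers vary, so sample repeatedly
            response = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            answer = response.choices[0].message.content.lower()
            hits += OUR_BRAND in answer
            total += 1
    return hits / total

print(f"Inclusion rate: {inclusion_rate():.0%}")
```

Measure it before and after a change ships: the pre/post delta in that rate is the upstream half of your experiment, and your onsite A/B delta is the other half.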

The Second Shift: Your KPIs Must Evolve

Nick and I spent a long time on this one.

If AI-curated visitors behave differently (already briefed, already confident, already halfway through the decision), then your old KPIs stop telling the whole story.

Metrics like:

  • bounce rate

  • scroll depth

  • time on site

  • entry page mix

  • micro-conversions

…all start behaving like caffeinated raccoons.

Juliana Jackson also discussed this in her talk on fragmented discovery [23]: users don’t “journey” anymore; they jump. AI Overviews accelerate that fragmentation by providing users with the entire educational arc before they ever enter your domain.

So the useful questions shift from:

“Did the variant win?” to “Did this change improve how models understand us?”

“Did this convert more users onsite?” to “Did this increase the number of qualified users who made it past the AI filters?”

Meaningful uplift isn’t just the delta on your site; it’s the delta in how many people the LLMs are willing to send you.
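And if “how many people the LLMs are willing to send you” sounds unmeasurable, a first approximation already lives in your analytics export. A minimal pandas sketch (the file, column names, and referrer list are assumptions; real referrer strings vary by platform and setup):

```python
# Segmenting sessions by AI referrer and comparing conversion rates.
# The file, column names ("referrer", "converted"), and referrer domains
# are assumptions; adjust to your own analytics export.
import pandas as pd

AI_REFERRERS = ("chatgpt.com", "chat.openai.com", "perplexity.ai",
                "gemini.google.com", "copilot.microsoft.com")

sessions = pd.read_csv("sessions.csv")  # hypothetical export: one row per session

def classify(referrer: object) -> str:
    referrer = str(referrer).lower()
    return "ai" if any(domain in referrer for domain in AI_REFERRERS) else "other"

sessions["segment"] = sessions["referrer"].apply(classify)

summary = sessions.groupby("segment").agg(
    sessions=("converted", "size"),  # how many visits each segment sent
    cvr=("converted", "mean"),       # how well each segment converts
)
print(summary)
```

Trend the AI segment’s session count week over week: that, roughly, is how willing the models are to send you people.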

The Third Shift: Content Is Now Part of Your Experimentation Stack

This was one of my biggest aha moments.

Nick pointed out that AI systems heavily weight structured, up-to-date, unambiguous content. When a brand updates:

  • a definition

  • a specification

  • a misconception

  • a product fact

  • a well-formed question

…the LLM’s answers update quickly.

Which means:

Updating content is an experiment. And measuring model response is a result.

The loop now looks like:

Content → Model Interpretation → Visibility Shift → Traffic Changes → Onsite Behavior → Revenue

Experimenters must get comfortable testing second-order effects. Update a PDP today, and your onsite CVR might not budge tomorrow. But your traffic mix might shift next week because the model’s confidence score improved.

Welcome to experimentation’s new butterfly effect.
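Testing those second-order effects mostly means logging the loop’s stages on one timeline and looking for lagged movement. A toy sketch (weekly snapshots; the file and column names are invented):

```python
# Toy second-order-effect check: does this week's LLM inclusion rate predict
# AI-referred sessions a few weeks later? File and column names are invented.
import pandas as pd

weekly = pd.read_csv("weekly_snapshots.csv", parse_dates=["week"])
# expected (invented) columns: week, inclusion_rate, ai_sessions

for lag in range(5):  # weeks between a model-visibility shift and a traffic shift
    corr = weekly["inclusion_rate"].shift(lag).corr(weekly["ai_sessions"])
    print(f"lag {lag} weeks: corr = {corr:+.2f}")
```

Correlation at a lag isn’t causation, but a stable lead-lag pattern tells you roughly how long to wait before judging a content experiment.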

The Fourth Shift: Experimentation Becomes Cross-Functional, or It Doesn’t Work at All

Nick and I kept returning to this. Experimentation in the agentic era isn’t just the job of any one team anymore.

You need:

  • content teams

  • SEO/visibility teams

  • UX researchers

  • data scientists

  • brand/PR

  • product

  • and experimentation

…all aligned around the same definitions and the same information ecosystem.

If your content says one thing, your metadata says another, your support docs are from 2018, and your CEO says something else on a podcast?

The LLM has no idea what’s true. Your results become noise.

The Mindset Shift Nick Taught Me

Nick didn’t phrase it quite like this, but it was the subtext running through everything he shared:

Experimentation is no longer about changing user behavior. It’s about changing model behavior so that users reach you in the first place.

Your experiments still matter. Your insights still matter. Your analytics still matter.

But now, every test occurs within a giant, invisible system that shapes, filters, rewrites, and narrates the experience before the user even arrives.

We used to test journeys. Now we test discoverability, clarity, consistency, and the integrity of the information ecosystem around the brand.

It’s still experimentation…just not the kind we were doing in our comfortable little website and mobile app silos. Honestly? I think it’s more exciting.

More complex, yes.
More chaotic, definitely.
More full of velociraptors, absolutely.

But exciting.

Credit: Giphy, Jurassic Park, baby dinosaurs

Rapid Fire with Nick Haigler

I wrapped our conversation the same way I wrapped Alisa’s and Jono’s — with five quick questions meant to be fun, fast, and probably revealing. Nick, of course, answered them with the calm precision of someone who keeps spreadsheets for recreation. Meanwhile, I kept interrupting myself, thinking, “Ooh, more numbers.”

Q. What is the biggest change in how people find things today versus five years ago?

Nick: “The amount of research that they can do within a minute now that they couldn't do in the past, whether that's open up multiple different tabs, or clicking through on some different links, there are so many fewer clicks that are needed now than five years ago.”

Great point. Especially the TABS!! So Many TABS!! (Yes, I realize I can close them, but why would I????)

Q: What's the single most important thing a brand should do differently to stay visible in the age of AI?

Nick: “Dig into your own data. There are so many general studies out there.... We need to be very specific to the vertical that we're in. So first, I would say, let's look into what traffic and what type of performance they're already getting from AI. Let's benchmark it, and then let's see what we can do to influence it or shift that in the future.”

Benchmark against your SELF first, and then your own vertical. This is such great advice. Stop comparing yourself to the Amazons, Bookings, and Netflixes of the world. Unless that’s your vertical!

Q: What's one thing experimenters should stop doing right now, because it no longer works?

Nick: “I would say for an experimenter, the llms.txt files… We've tested a couple of different times. They're indexed in Google, but as far as driving any AI references, we're not seeing those impact at all.”

This one was over my head. Fortunately, we have smart people like Nick to teach us. llms.txt is a proposed file for websites that gives large language models a guide for understanding and accessing site content, similar to how a sitemap.xml file helps search engines. It’s a markdown file that maps out a site’s structure, key content, and important pages to help AI tools like ChatGPT, Perplexity, and Claude navigate and process the information for queries. Nick has tested it, and it doesn’t seem to have an impact. 🤷 (For the curious, a minimal example of the format is below.)
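Here is roughly what one looks like, per the llms.txt proposal at llmstxt.org (the site, pages, and URLs below are made up):

```
# Example Store

> Example Store sells hypothetical widgets. This file points LLMs at our key pages.

## Products

- [Widget catalog](https://example.com/products.md): Current lineup, specs, and pricing
- [FAQ](https://example.com/faq.md): Direct answers to common pre-sales questions

## Company

- [About](https://example.com/about.md): Who we are and what we actually do
```

Cheap to add, easy to maintain, and (per Nick’s testing) not yet moving AI references, so treat it as an experiment, not a lever.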

Q: What's the most misunderstood thing about AI & search?

Nick: “I mean, it's a hot topic, but I'm going to stand firm and say that they are separate things, and it's just because they're different, not very different, but there are different outputs that we're looking to get, and different KPIs that we're looking to get from each of these.” 

This one is interesting to me, because it’s very different from what Jono said. Jono said the most misunderstood thing was that prompts matter. Nick seems to be suggesting that AI and search (or maybe AI Search and Organic Search, GEO and SEO?) ARE very different things. Maybe worth a follow-up.

Q: If you could recommend one resource for this AI age, what (or who) would it be?

Nick: “I would honestly say Kevin Indig. I've been reading a lot of his stuff on his Growth Memo lately, so check him out. He's very active on LinkedIn.” 

Closing Thoughts

Talking with Nick felt like someone calmly handing me a flashlight in a cave I didn’t realize I was already inside. Every time I thought I understood the terrain — fewer clicks, higher intent, survivorship bias, the brand’s evil AI twin, the velociraptor maze — Nick would quietly point out another path we now have to pay attention to.

But here’s the thing I walked away feeling:

We are not powerless.

This new world isn’t chaos. It’s just… different. And experimenters? We’re built for “different.” This is what we do. We notice patterns. We adapt frameworks. We figure out which levers matter when the old ones stop working.

Nick’s work doesn’t just diagnose what’s changing. It gives us new places to look, new signals to measure, and new ways to interpret the customer paths that LLMs now mediate on our behalf.

If GEO is the new terrain, Nick handed us a map.

Explorer’s Log

Here’s our running reading list so far. It’s a cumulative trail of everything we’ve discovered (and occasionally tripped over) along the way. It includes articles referenced in our first Mapping GEO piece, this one, and a few that we’ll likely cover in future installments. I’m keeping the numbering consistent for those joining the journey mid-map, because honestly, I’m learning right alongside you.

Credit: Giphy, Safari time

  1. How Search Works – Google Search Central Blog: Google’s official guide to crawling, indexing, and ranking. Helpful for understanding what they say drives discovery (and what probably doesn’t). https://developers.google.com/search/docs/fundamentals/how-search-works.

  2. SEO for AI Search Engines: An Early POV – Alisa Scharf (Seer Interactive): Alisa breaks down what optimization means when the “search engine” is an AI model instead of a crawler. Context and entity understanding now outrank keywords. https://www.seerinteractive.com/insights/seo-for-ai-search-engines-an-early-pov.

  3. Study: The AI Search Landscape Beyond the SEO vs GEO Hype – Alisa Scharf & Marketa Williams (Seer Interactive): A comprehensive industry study on how AI-driven discovery systems interpret and deliver information and where traditional SEO falls short. https://www.seerinteractive.com/insights/study-the-ai-search-landscape-beyond-the-seo-vs-geo-hype.

  4. 2024 Zero-Click Study – Rand Fishkin (SparkToro): Quantifies how often users find what they need directly in search results, explaining why “visibility” no longer means “traffic.” https://sparktoro.com/blog/2024-zero-click-search-study-for-every-1000-us-google-searches-only-374-clicks-go-to-the-open-web-in-the-eu-its-360/.

  5. The First-Ever UX Study of Google’s AI Overviews – Kevin Indig (Growth Memo): Early research showing how AI Overviews reshape user behavior, trust, and attention, and why “top ranking” might not matter anymore. https://www.growth-memo.com/p/the-first-ever-ux-study-of-googles

  6. The Impact of AI Overviews on SEO – Kevin Indig: Quantifies the ripple effect of AI Overviews on traffic and click-through rates. A practical must-read for visibility watchers. https://www.growth-memo.com/p/the-impact-of-ai-overviews-on-seo

  7. How AI Overviews Are Impacting CTR: 5 Initial Takeaways – Nick Haigler (Seer Interactive): How Google’s AI Overviews shift clicks and impressions, revealing why CTR benchmarks are no longer reliable indicators. https://www.seerinteractive.com/insights/how-ai-overviews-are-impacting-ctr-5-initial-takeaways.

  8. Why 2020’s SEO KPIs Won’t Work in 2024 in a GenAI & Data-Scarce World – Wil Reynolds (Seer Interactive): Argues that success in the GenAI era can’t be measured in clicks. He makes a convincing case for measuring authority and usefulness instead. https://www.seerinteractive.com/insights/why-2020s-seo-kpis-wont-work-in-2024-in-a-genai-data-scarce-world.

  9. The Next Big Thing: AI Browsers – Alisa Scharf, John Lovett & Jordan Strauss (Seer Interactive): Explores how AI browsers summarize and filter web content, potentially replacing traditional search altogether. https://www.seerinteractive.com/insights/the-next-big-thing-ai-browsers-what-marketing-leaders-need-to-know-now.

  10. Claude’s Economic Index Reports – Anthropic: Regular updates analyzing how AI tools are used globally by whom, for what, and how often. A macro view of GEO’s evolving audience. https://www.anthropic.com/research/anthropic-economic-index-september-2025-report.

  11. AI Memory Features Will Transform Search and Marketing – Christian J. Ward: Why AI “memory” makes discovery personal, creating continuity between queries and transforming how people find (and re-find) information. https://www.bedatable.com/p/ai-memory-features-will-transform-search-and-marketing.

  12. The Growth Plateau: Why Investing in Brand Awareness Is Your Next Strategic Move – Brittani Hunsaker (Seer Interactive): Argues that when performance plateaus, strong brand signals become the differentiator and the foundation of long-term growth. https://www.seerinteractive.com/insights/the-growth-plateau-why-investing-in-brand-awareness-is-your-next-strategic-move.

  13. Personas Are Critical for AI Search – Kevin Indig & Amanda Johnson (Growth Memo):  Explains why personas are still essential in the age of AI search. They help feed systems the right context. https://www.growth-memo.com/p/personas-are-critical-for-ai-search.

  14. The URL-Shaped Web – Jono Alderson: Explores how the web is evolving from pages and links to entities and meaning and how that changes “ownership.” https://www.jonoalderson.com/conjecture/url-shaped-web.

  15. The Sea of Sameness Problem in Content Marketing & SEO – Wil Reynolds: Reminds us that ranking isn’t the goal; helping people is. Real value lies in content that’s original, trustworthy, and shareable. https://www.seerinteractive.com/insights/the-sea-of-sameness-problem-in-content-marketing-seo.

  16. On Propaganda, Perception & Reputation Hacking – Jono Alderson: A provocative look at how algorithms manipulate what audiences see and believe and why reputation is now a survival skill. https://www.jonoalderson.com/conjecture/propaganda-perception-reputation-hacking.

  17. Marketing Against the Machine Immune System – Jono Alderson: Explains how algorithms evolve to defend their own definitions of “truth,” filtering the digital environment like living immune systems. https://www.jonoalderson.com/conjecture/machine-immune-systems.

  18. Brand Authenticity and Consumer Trust in the Digital Age – Aditi Mehta: Explores how genuine behavior (not performative authenticity) builds long-term consumer trust. https://management.eurekajournals.com/index.php/IJTOMM/article/view/992.

  19. Digital Ethnography: Principles and Practice – Sarah Pink, Heather Horst, John Postill, Larissa Hjorth, Tania Lewis & Jo Tacchi: A foundational work in studying online culture and behavior through anthropology. Key to understanding brand perception in digital spaces. https://www.christian-cohen.de/wp-content/uploads/2019/09/Pink-et-al-Digital-Ethnography_-Principles-and-Practice-Sage-2016-compressed.pdf.

  20. The Optimizer’s Playbook: Expert Strategies for Digital and Real-Life Success – No Hacks Podcast, Sani Manić: Candid interviews with optimization pros on where strategy meets psychology (and humility). https://www.nohackspod.com/episodes.

  21. When Humans and AI Work Best Together — and When Each Is Better Alone – MIT Sloan Management Review: Explores how humans and AI complement each other in decision-making. Useful for understanding collaboration in GEO systems. https://mitsloan.mit.edu/ideas-made-to-matter/when-humans-and-ai-work-best-together-and-when-each-better-alone

  22. As AI Meets the Reputation Economy, We’re All Being Silently Judged – Harvard Business Review: Explains how AI-driven reputation systems continuously evaluate people and brands (often invisibly), shaping access, trust, and opportunity. https://hbr.org/2018/01/as-ai-meets-the-reputation-economy-were-all-being-silently-judged.

  23. “Aligning Insights, Intent, and Impact” – TLC Talk, Juliana Jackson: Dismantles the linear “customer journey” and reframes discovery as fragmented; argues for content-market fit over funnels. https://youtu.be/iCuVLTSdGD8.

  24. The Zero-Effort Lie: How AI is Accelerating the Death of the Internet: Exposes how AI’s promise of effortless creation is eroding creativity, meaning, and quality (and why human effort still matters.) https://nohacks.substack.com/p/the-zero-effort-lie-how-ai-is-accelerating.

  25. LLM Conversion Rates – Nick Haigler (Wix Studio AI Search Lab): Why large language models often drive higher conversion rates and how marketers can tap into them. https://www.wix.com/studio/ai-search-lab/llm-conversion-rates.

  26. Study: How Do Stadium Sponsorships Impact Localized AI Visibility? – Nick Haigler & Katie Perkins (Seer Interactive):  Banks with stadium sponsorships appear 3× more often in local AI search and 3.7× more in their home markets. https://www.seerinteractive.com/insights/study-stadium-sponsorships-impact-ai-visibility.

  27. AIO Impact on Google CTR: September 2025 Update – Tracy McDonald (Seer Interactive): The latest data on how AI Overviews continue to reshape click-through rates across Google results. https://www.seerinteractive.com/insights/aio-impact-on-google-ctr-september-2025-update.

  28. How LLMs Amplify Brand Misconceptions & How to Address Them With GEO – Nick Haigler (Seer Interactive): If AI tools surface misconceptions about your brand, publish authoritative, updated content to overwrite them over time. https://www.seerinteractive.com/insights/using-geo-to-address-brand-misconceptions.

  29. Case Study: 6 Learnings, 1 Site – How Traffic from ChatGPT Converts – Nick Haigler & Garman Chan (Seer Interactive): Shows that AI-driven visitors convert at much higher rates due to stronger intent. https://www.seerinteractive.com/insights/case-study-6-learnings-about-how-traffic-from-chatgpt-converts.

  30. The Hotelier’s Survival Guide: “Survivorship Bias” and How to Analyse the Unconstrained Demand for Your Hotel (Part 2) – Marta Romero (Mirai): A clear explanation of how visible performance can mask hidden demand, and why measuring only the visitors you see leads to dangerous misreads in the AI-filtered era. https://www.mirai.com/blog/the-hoteliers-survival-guide-survivorship-bias-and-how-to-analyse-the-unconstrained-demand-for-your-hotel-part-2/.

Next: Mapping GEO: Reputation, Reality, and the End (?) of the Website