Who Owns GEO in Your Org? CMO or CTO?

GEO defaults to marketing in most companies. The leverage sits in engineering. The ownership matrix, the operational test, and the case for moving it.

Org chart with GEO ownership moving from marketing to engineering

Key Takeaways

  • Where the leverage actually sits — Schema.org coverage, render mode, robots policy, llms.txt, page-level entity linking, and observability of citation rate are engineering surfaces. None of them are content decisions.
  • What marketing should still own — The editorial layer: voice, topic priorities, the canonical answer block, and brand co-occurrence strategy. That is real ownership of a real surface. It is not the whole thing.
  • The reporting line that breaks — When GEO reports into the CMO, schema and rendering changes get filed as "SEO tickets" and queue behind every other backlog item. Citation rate stays flat for two quarters and the budget gets cut.
  • The working ownership pattern — CTO accountable for the system. CMO accountable for the editorial inputs. A small joint working group, weekly, with a single dashboard. Anything else collapses into finger-pointing within a quarter.

The question every operating committee will face in 2026

A growth lead walks into a Tuesday operating committee with a slide about generative engine optimisation. The slide claims citations from ChatGPT and Perplexity are converting at meaningfully higher rates than normal organic traffic. Early pilot data points at up to a three-times uplift on certain query classes. Attribution is still a developing field, but the directional signal is real. The CFO asks what the budget ask is. The growth lead says headcount and tooling. The COO asks who owns it. The room goes quiet.

This is the moment most companies get the reporting line wrong. The default answer is marketing, because the word "search" is in the room. The CMO inherits the workstream, opens a Notion page called AI Search Strategy 2026, and the work stalls for the rest of the year.

The reason it stalls is structural. The CMO has authority over the editorial inputs, but lacks the technical sovereignty to ship the changes that decide whether an AI engine cites your pages. Engineering has that sovereignty. When the work reports into the function that does not hold the levers, every change becomes a cross-functional ticket. Cross-functional tickets queue behind product roadmap items. Two quarters in, citation rate risks staying flat and the budget gets cut at the next planning cycle.

Marketing owns the intent. Engineering owns the delivery interface. Inverting that is the same as putting database performance under the head of product.

The eight workstreams that decide whether you get cited

To answer the ownership question you have to look at what actually moves the metric. The literature is still thin, but the levers are clear from any honest experiment in the category. CTAIO Labs ran one earlier this year and the per-engine deltas are public, so you can audit the list against real numbers rather than vendor pitch decks.

In our analysis of the eight primary levers for AI citation, the ownership split looks like this. The leverage ranking is approximate and category-dependent. The functional split is not.

  1. Server-side rendering. Modern crawlers are gaining JavaScript-rendering capability, but most LLM indexers prioritise compute efficiency and bias toward the initial HTML payload. Content hydrated after the first response pays a visibility tax that varies by engine and by week. This is an engineering decision about render mode and framework configuration. Marketing carries the goal; engineering carries the execution.
  2. Schema.org coverage. Agents read JSON-LD even when Google does not surface rich results. Article, FAQPage, Product, HowTo, Organization, with proper author and dateModified fields. In many orgs the SEO team writes the schema spec inside the CMS. The point is not who drafts the values, it is who owns the validation pipeline and the regression tests. Engineering owns the system that keeps schema correct across thousands of pages.
  3. llms.txt. The spec is young and most teams have not shipped one. When they do, who writes it matters less than who deploys it, versions it, and keeps it in sync with the actual top-value pages. That is platform engineering. CTAIO Labs ran a thirty-day citation experiment on this specific intervention. The methodology and the per-engine deltas are documented.
  4. Robots and AI-agent policy. GPTBot, OAI-SearchBot, PerplexityBot, ClaudeBot, Google-Extended. Each one is a separate decision about whether to allow training, allow inference-time access, or both. This is a high-stakes infrastructure decision with IP, legal, and security inputs (see the New York Times-OpenAI litigation for the upper bound on what gets contested). Marketing should be consulted on the visibility trade-offs. Engineering must own the execution.
  5. Entity linking. When your content mentions a person, company, product, or concept, link it to a canonical reference. Wikipedia, Wikidata, the official domain. Agents use these links to disambiguate entities and to build trust. The decision about which entities to canonicalise is editorial. The infrastructure to do it at scale is engineering. In B2B, much of the entity work points at proprietary product APIs and schemas, and that part is purely engineering.
  6. Citation-rate observability. If you cannot see your citation rate per engine, per query class, per page, you are not operating a programme. You are operating without an observability framework, and any conclusions you draw from the data will be noise. The tooling layer is real. Our experiment on the visibility-tools category covers ten vendors with the numbers and the variance disclosures. Whoever owns observability for the main product should own this dashboard.
  7. The canonical answer block. Two to four sentences near the top of every page, in the exact language a user would phrase the question. Agents extract this verbatim. This is editorial work. The content team owns it. Engineering owns the template that exposes it consistently across thousands of pages.
  8. Editorial voice and brand co-occurrence. The strategic decisions about which topics to own, which authorities to align with, and which co-occurrence patterns to cultivate. This is the part of the surface that genuinely belongs to marketing.

Six of the eight are engineering. One is shared. One is marketing.
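The validation-pipeline point in item 2 can be made concrete. A minimal sketch of a schema regression check, assuming pages are available as raw HTML strings; the required-field list and the helper names are illustrative, not a real library API:

```python
import json
import re

# Fields an AI indexer commonly relies on, per schema type (illustrative list).
REQUIRED_FIELDS = {
    "Article": {"headline", "author", "dateModified"},
    "FAQPage": {"mainEntity"},
}

def extract_jsonld(html: str) -> list[dict]:
    """Pull every JSON-LD block out of a page's raw HTML."""
    pattern = r'<script type="application/ld\+json">(.*?)</script>'
    return [json.loads(m) for m in re.findall(pattern, html, re.DOTALL)]

def schema_errors(html: str) -> list[str]:
    """Return missing required fields across all JSON-LD blocks on the page."""
    errors = []
    for block in extract_jsonld(html):
        required = REQUIRED_FIELDS.get(block.get("@type"), set())
        for field in required - block.keys():
            errors.append(f"{block.get('@type')}: missing {field}")
    return errors

page = '''<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Article",
 "headline": "Who Owns GEO?", "author": {"@type": "Person", "name": "A. Writer"}}
</script>'''
print(schema_errors(page))  # → ['Article: missing dateModified']
```

Run this in CI across every template and a schema regression never reaches production silently, which is the whole point of engineering owning the pipeline rather than the field values.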

A working ownership matrix you can paste into the next operating review

| Workstream | Engineering | Content / Marketing |
| --- | --- | --- |
| Schema.org coverage and JSON-LD correctness | Owned | Reviewed |
| Server-side rendering and initial-HTML payload | Owned | |
| robots.txt and AI-agent allowlist policy | Owned | Consulted |
| llms.txt structure and per-section descriptions | Owned | Reviewed |
| Entity linking and Wikidata reconciliation | Owned | Reviewed |
| Citation-rate measurement and dashboarding | Owned | Consulted |
| Canonical answer block on each page | Consulted | Owned |
| Editorial voice, topic priorities, brand co-occurrence | | Owned |
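Two of the matrix rows are plain-text files that engineering deploys. A hypothetical robots.txt policy might look like the fragment below; the agent names are the ones the engines publish at the time of writing, and the allow/deny choices are purely illustrative, not a recommendation. The legal sign-off described in the caveats section applies before any version of this ships.

```
# robots.txt — per-agent AI policy (illustrative)
User-agent: GPTBot           # OpenAI training crawler
Disallow: /

User-agent: OAI-SearchBot    # OpenAI inference-time search
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended  # Gemini training signal
Disallow: /
```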

Frame it correctly and the pattern is conventional. Engineering owns the system. Marketing owns the editorial inputs to the system. A small joint working group meets weekly with a single dashboard in front of them. It is the same operating pattern you use for product velocity, conversion-rate optimisation, or any other metric that touches both functions.

Treat LLM citation rate the way you treat p95 latency

The real case for engineering ownership is not political. It is operational, and it is what gets you compounding returns instead of campaign-shaped ones.

A marketing function tends to treat citation rate as a campaign metric. It goes up after a content sprint, plateaus, then drifts down. The team runs another sprint. The metric goes up again. Over twelve months, the editorial work compounds, but the underlying system does not. That is the pattern this argument is trying to avoid.

An engineering function will treat citation rate the way it treats any production metric. Service-level objectives. A dashboard with thresholds. Alerts when the rate drops outside the expected band on a specific engine. Postmortems when a regression lands. The work compounds, because the system improves underneath the editorial layer.

The analogy has one important limit. Latency is an internal metric. You control the system that produces it. Citation rate is an external metric produced by a third-party engine whose algorithm you do not control. A Perplexity model update can move your number by 50% overnight, and the postmortem will conclude, correctly, that nothing in your system broke. The right framing is observability of an external interface. The SLOs you set should be on system readiness (is the schema valid, is the SSR healthy, did the llms.txt deploy cleanly) and on drift detection per engine. Not on the engine algorithm itself.
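That readiness framing reduces to a set of boolean checks that gate a deploy or feed a dashboard. A minimal sketch; the check names and inputs are placeholders for whatever monitoring you actually run, not a real API:

```python
from dataclasses import dataclass

@dataclass
class ReadinessCheck:
    name: str
    healthy: bool
    detail: str = ""

def system_readiness(schema_valid: bool, ssr_serving_html: bool,
                     llms_txt_deployed: bool) -> list[ReadinessCheck]:
    """SLOs on what you control: the system, not the engine's algorithm."""
    return [
        ReadinessCheck("schema", schema_valid, "JSON-LD passes regression suite"),
        ReadinessCheck("ssr", ssr_serving_html, "content present in initial HTML"),
        ReadinessCheck("llms_txt", llms_txt_deployed, "file live and in sync"),
    ]

checks = system_readiness(schema_valid=True, ssr_serving_html=True,
                          llms_txt_deployed=False)
failing = [c.name for c in checks if not c.healthy]
print(failing)  # → ['llms_txt']
```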

The operational test

Ask the function currently considering ownership a single question. What is your alerting threshold for a 20% drop in Perplexity citation rate week-over-week, and who gets paged? If the answer involves a Slack channel and an end-of-week review, the function is operating it as marketing. If the answer is a PagerDuty rotation tied to a specific dashboard, the function is operating it as infrastructure. One of those produces a programme. The other produces a slide deck.
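The operational test translates directly into code. A minimal sketch of the alert condition; the 20% threshold comes from the question above, and the paging decision is a placeholder for whatever PagerDuty-style integration your team runs:

```python
def citation_drop(this_week: float, last_week: float) -> float:
    """Fractional week-over-week change in citation rate (negative = drop)."""
    if last_week == 0:
        return 0.0
    return (this_week - last_week) / last_week

DROP_THRESHOLD = -0.20  # page on a 20% week-over-week drop

def should_page(this_week: float, last_week: float) -> bool:
    return citation_drop(this_week, last_week) <= DROP_THRESHOLD

# Perplexity citation rate fell from 12% of tracked queries to 9%: a 25% drop.
print(should_page(this_week=0.09, last_week=0.12))  # → True
```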

The counter-argument, and why it does not survive contact

The argument for marketing ownership goes like this. Marketing already owns SEO. SEO and GEO are adjacent disciplines. The handoff cost of moving one without the other is real. The content team has institutional knowledge about voice, topic priorities, and brand. Engineering does not.

The first two points are correct. The third point is the one that fails. Voice and topic priorities are the editorial layer, and that layer should stay with the content team in either ownership model. The question is who owns the surface underneath. Keeping the surface with marketing because the editorial layer is there inverts the product-engineering split that every operating committee already runs on for the main product. Nobody defends that inversion out loud once you name it plainly.

The cleaner pattern is the one operating committees already understand. The CTO is accountable for the system. The CMO is accountable for the inputs to the system. Both report into the CEO. Neither reports into the other.

Four caveats senior CTOs will reasonably raise

Four legitimate objections to the argument above, with the working response to each.

  1. Attribution is broken at the engine level. Most LLM products strip referrer headers and UTM parameters before any traffic reaches your site. You cannot get clean GA4 channel reporting on AI-mediated sessions the way you can on Google referrals. The working response is to combine three weaker signals: log analysis of bot user-agents on the page side, branded query lift in GSC, and direct citation tracking via vendor or in-house tooling. None of the three is clean. Together they are a defensible baseline.
  2. Legal and IP exposure on robots and llms.txt. Allowing GPTBot or ClaudeBot has real implications for training-data consent, content licensing, and the increasingly active litigation around publisher-to-LLM relationships. This is not a marketing decision, and it is not purely engineering either. The robots and llms.txt policy belongs in a brief joint decision document signed by the CTO, the General Counsel, and the CMO. Engineering executes it. The fact that this conversation has to happen at all reinforces the case for CTO-led ownership.
  3. B2B vs B2C dynamics differ. In B2C, citation rate maps closely to consumer-facing AI products, and the work resembles SEO with a different surface. In B2B, the citations that matter are inside Perplexity Enterprise, ChatGPT Team, and internal RAG deployments. These are surfaces where the buyer is already evaluating you. B2B GEO sits closer to developer relations than to content marketing, which strengthens the engineering-ownership argument further.
  4. Small companies do not have a platform engineering function. Below roughly fifty people the question collapses. Whoever can ship a schema change and an llms.txt update this week without escalating owns the work. In practice that is the founder or the lead engineer. The matrix above is for companies where the CTO and CMO are separate people with separate teams.
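The three-signal combination in caveat 1 starts with the cheapest signal: log analysis of bot user-agents. A sketch against common-log-style access lines; the user-agent substrings are the ones the engines publish, but your log format may differ:

```python
from collections import Counter

# User-agent substrings published by the major AI crawlers.
AI_AGENTS = ["GPTBot", "OAI-SearchBot", "PerplexityBot", "ClaudeBot",
             "Google-Extended"]

def count_ai_hits(log_lines: list[str]) -> Counter:
    """Tally access-log lines per AI agent, matched on user-agent substring."""
    hits = Counter()
    for line in log_lines:
        for agent in AI_AGENTS:
            if agent in line:
                hits[agent] += 1
    return hits

sample = [
    '1.2.3.4 - - [10/Mar/2026] "GET /pricing HTTP/1.1" 200 "Mozilla/5.0 GPTBot/1.0"',
    '5.6.7.8 - - [10/Mar/2026] "GET /docs HTTP/1.1" 200 "PerplexityBot/1.0"',
    '9.9.9.9 - - [10/Mar/2026] "GET /docs HTTP/1.1" 200 "Mozilla/5.0"',
]
print(count_ai_hits(sample))  # → Counter({'GPTBot': 1, 'PerplexityBot': 1})
```

On its own this measures crawl interest, not citations. Combined with branded query lift and direct citation tracking, it is one leg of the defensible baseline.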

What to do this quarter if you are the CTO

Three concrete steps. None of them require headcount.

  1. Run a one-page audit of the eight workstreams above against your current stack. Score each one red, amber, green. The exercise takes a senior engineer four hours. The output is the agenda for the next operating committee.
  2. Take accountability for the system in writing. Send an email to the CMO and the CEO that says you are taking ownership of the technical surface for AI search and proposing a weekly joint working group with content. This is the political move that prevents the workstream from drifting into the marketing budget and disappearing.
  3. Stand up the dashboard before the tooling. Before you buy a $499 a month platform, instrument what you can with a Python script that hits the public APIs of the major engines once a week against a fixed query set. The dashboard is the artefact that gives the working group something concrete to look at. Tooling decisions become easier once the baseline exists.
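Step 3 can start as something this small. A sketch of the weekly baseline script; `ask_engine` is a deliberate placeholder to wire up to whichever engine APIs or vendor tools you actually have access to, along with a real scoring rule for "cited":

```python
import csv
from datetime import date

# Fixed query set: same questions every week, so the trend is comparable.
QUERY_SET = [
    "best llms.txt generator",
    "who owns generative engine optimisation",
    "schema.org for AI search",
]

def ask_engine(engine: str, query: str) -> bool:
    """Placeholder: return True if your domain is cited in the answer.
    Replace with a real call to the engine or a visibility vendor."""
    raise NotImplementedError

def weekly_baseline(engines: list[str], asker=ask_engine) -> list[dict]:
    """One citation-rate row per engine for this week's run."""
    rows = []
    for engine in engines:
        cited = sum(asker(engine, q) for q in QUERY_SET)
        rows.append({"date": date.today().isoformat(), "engine": engine,
                     "citation_rate": cited / len(QUERY_SET)})
    return rows

def append_to_csv(rows: list[dict], path: str = "citation_baseline.csv") -> None:
    """Append this week's rows; the CSV is the first dashboard artefact."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["date", "engine", "citation_rate"])
        if f.tell() == 0:
            writer.writeheader()
        writer.writerows(rows)
```

A cron job, a CSV, and a fixed query set are enough to give the working group a trend line before any tooling money is spent.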

What to do this quarter if you are the CMO

The mirror version. Three steps, none of them headcount.

  1. Formally delegate the technical surface to engineering, and retain authority over the brand knowledge graph. Going first costs nothing and prevents a year of low-leverage work on tickets that the function cannot actually close. The CMO who does this first looks decisive and ends up with a better result on both surfaces.
  2. Keep accountability for the editorial layer and the brand co-occurrence strategy. That is the surface marketing genuinely owns. Define what good looks like for canonical answer blocks across the top hundred pages. Brief the content team on the brand co-occurrence pattern you want to cultivate.
  3. Be the function that frames the dashboard. Engineering will build the citation-rate dashboard. Marketing should be the function that decides which query classes matter, which engines matter, and what the target rate is. That framing decision is more valuable than the implementation.
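The "define what good looks like" brief in step 2 can be enforced mechanically across the top hundred pages. A sketch that checks an answer block against the two-to-four-sentence rule from the workstream list; the sentence splitter is deliberately crude and good enough only for a pass/fail audit:

```python
import re

def sentence_count(text: str) -> int:
    """Crude sentence splitter: sufficient for a pass/fail content audit."""
    return len([s for s in re.split(r"[.!?]+\s*", text.strip()) if s])

def answer_block_ok(block: str, min_sentences: int = 2,
                    max_sentences: int = 4) -> bool:
    return min_sentences <= sentence_count(block) <= max_sentences

block = ("GEO ownership belongs with engineering because six of the eight levers "
         "are technical. Marketing keeps the editorial layer. A weekly joint "
         "working group reviews one shared dashboard.")
print(answer_block_ok(block))  # → True
```

Marketing writes the rule and the exemplars; engineering runs the check in the template pipeline. That is the split the whole article argues for, in miniature.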

Where this goes over the next eighteen months

The reporting-line question gets settled industry-wide by the end of 2027. The companies that move first end up with a measurable lead, because the right structure lets engineering work on the actual problem instead of writing cross-functional tickets that go nowhere. The companies that get it wrong spend a year discovering that the workstream is stuck and then quietly move it.

The cleaner version of the argument is the one that gets written into role definitions. A new senior individual contributor title sits on the platform team. Call it AI Search Engineer, Platform Engineer for Discovery, or whatever the local convention prefers. A peer on the content side keeps the editorial layer. The pair has a weekly review with shared metrics. The function reports into the CTO with a dotted line to the CMO.

A handful of companies are already operating this way. The model works. Copy it before your competitors decide to.

Frequently asked questions

Why does GEO ownership matter as a discrete question?

Because the surface is new and reporting lines are still being drawn. Most companies put it under marketing by default, because the words "search" and "visibility" appear in the description. Once the reporting line is set, the budget, the headcount, and the prioritisation follow it. If the line is wrong, the work stays stuck. This is the moment to set it correctly.

What is the actual difference between SEO ownership and GEO ownership?

SEO has a thirty-year history of being a content team problem because the levers were mostly editorial: keyword targeting, internal linking, content quality. The technical SEO surface stayed small. GEO inverts that ratio. The editorial layer is still real, but six of the eight workstreams that move citation rate are engineering decisions: schema coverage, structured data correctness, server-side rendering, llms.txt, robots policy, and the instrumentation that tells you any of it is working.

Should the CTO take this on if the CMO already owns SEO?

Yes, and the framing matters. This is not a coup over the existing SEO function. The content team keeps the editorial layer and stays accountable for what gets said. The CTO takes accountability for the system the content runs on. The split is the same one engineering and product already operate under for the main product: PM decides what gets shipped, engineering decides how it runs in production.

What if we are a small company with no CMO or CTO?

In a small company the question collapses into something simpler: which person on the team has the access and the judgment to ship a schema.org change and an llms.txt update this week without escalating? That person owns GEO. In practice it is almost always the founder or the lead engineer, not the marketing hire.

How long until this question is settled industry-wide?

Twelve to eighteen months. By the end of 2027 the role definitions for Head of AI Search, AI Search Engineer, and the rest of the stack will exist as recognised LinkedIn titles. The reporting line will have converged. The companies that get it right earlier than that will have a measurable lead on citation rate, because the right reporting line lets engineering work on the actual problem instead of writing tickets for marketing.

How does this relate to GEO, AEO, and LLM SEO as separate disciplines?

The labels matter less than the ownership question. Whatever the vendor pitch deck calls it, the underlying work is the same: make your pages machine-readable, expose the right structured data, render on the server, allow the agents, instrument the result. The category-level explainer for the four terms lives in the AI Search hub on We The Flywheel.
