
AI monetization for brands.

AI audiences don’t show up in Google Analytics. Here’s how brand teams are reaching them anyway.

Why brand teams are confused about AI audiences.

A brand team that looks at its own analytics dashboard today will see a smaller share of customer research happening on the brand’s own properties than it saw two years ago. The drop is real and the cause is known. Users are moving the research phase of their journey into AI apps. They ask an agent what to buy, what to try, what to avoid. The agent answers. The user comes to the brand’s website only at the bottom of the funnel, if at all.

Standard web analytics do not show this migration because the AI apps do not send referring traffic the way search engines did. When a user asks a chatbot about running shoes and then buys a pair, the analytics attribution lands on whatever the last click was — brand direct, organic search, paid search, or nothing if the purchase happened in a physical store. The AI-layer influence step is invisible to the tools brand teams have depended on for a decade.

The second source of confusion is vocabulary. The AI ecosystem does not yet use the category names brand teams use. A brand team looking for “display advertising in ChatGPT” will find scattered product announcements. A brand team looking for “AI app ad network” will find a handful of vendors with incompatible definitions. The category exists but the naming has not settled. For the shared terminology we use across this site, see the glossary.

The third source of confusion is the measurement stack. Brand teams have been asking their agencies for an AI-layer reach metric for two years and receiving either nothing or a mix-and-match of metrics that do not sum. Share of Placement is the metric we introduced to solve the summing problem. See the Share of Placement page for how it works and why it is the AI-layer analogue of share of voice.
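A rough sketch of why a share metric sums cleanly where mix-and-match metrics do not: assuming Share of Placement is computed the way share of voice is — a brand's fraction of labeled placements observed across the declared competitive set — the shares sum to one by construction. The function name, brand names, and data below are illustrative, not Surfacedd's published definition.

```python
from collections import Counter

def share_of_placement(placements, competitive_set):
    """Hypothetical Share of Placement: each brand's fraction of all
    labeled placements observed across the declared competitive set."""
    counts = Counter(p for p in placements if p in competitive_set)
    total = sum(counts.values())
    if total == 0:
        return {}
    return {brand: counts[brand] / total for brand in competitive_set}

# Placements observed across sampled AI-app answers (illustrative data)
observed = ["AcmeShoes", "RivalRun", "AcmeShoes", "TrailCo", "AcmeShoes", "RivalRun"]
shares = share_of_placement(observed, {"AcmeShoes", "RivalRun", "TrailCo"})
# Shares sum to 1.0 across the competitive set, so the metric "sums" by design.
```

Because every brand in the competitive set gets a fraction of the same denominator, the report totals reconcile — which is the property the mix-and-match metrics lack.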

The four-format playbook.

Surfacedd ships four disclosed Surface types across its network of third-party AI apps. Each Surface type calls for a different creative fit. The playbook for a brand team is to map existing creative assets to the Surface types and to run the formats where the fit is native rather than retrofitted.

Text Surfaces for direct response and branded discovery.

The text Surface is a labeled sponsor unit next to a chatbot answer. Short headline, sponsor line, destination URL. Copy-forward brands with clear calls to action fit the format directly. Retailers, D2C brands, subscription services, and local businesses typically start here because the creative transfers from paid search with minor rewrites.

Image Surfaces for consumer goods and lifestyle brands.

The image Surface is a labeled product placement inside an AI-generated image. The generator composites the product into a scene with a visible sponsor credit in the frame. Categories already running product-in-scene creative — home goods, fashion, food, beverages, travel — fit the format without a rebrief. Categories that do not usually render as a clean product-in-scene need a creative pass before going live.

Voice Surfaces for branded audio.

The voice Surface is a 5 to 10 second disclosed audio segment inside a voice assistant reply. Brands with existing audio assets fit the format directly. Brands without audio use Surfacedd’s production support, which covers voicing, editing, and disclosure mastering to the published spec. Voice Surfaces carry scarcer inventory than text or image and are best used as a complement rather than the sole channel.

Code Surfaces for developer-facing brands.

The code Surface is a labeled sponsor line in the comment layer of a code completion. Developer tools, cloud services, APIs, and infrastructure brands fit the format because their audience writes code for a living. Consumer brands generally skip the code Surface; the audience does not match.

Measurement without cookies.

Brand teams running on Surfacedd get aggregate reporting only. No third-party cookies, no cross-site IDs, no per-user attribution graphs. The reporting stops at aggregate because the architecture stops at aggregate. The tradeoff is deliberate.

The dashboard reports impressions, clicks, CTR, CPC, CPM, completion rate for voice, and Share of Placement against the declared competitive set. Breakdowns run by Surface, by app, by context bucket, and by geography. Reporting refreshes hourly during live campaigns. CSV export and API access are standard.
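Because the CSV export is aggregate rather than event-level, downstream analysis is a matter of grouping and dividing. A minimal sketch of rolling the export up by Surface — the column names and figures here are assumed for illustration, not the actual export schema:

```python
import csv
import io

# Illustrative export shape; the real column names are an assumption.
csv_text = """surface,app,impressions,clicks
text,chat_app_a,10000,150
text,chat_app_b,5000,60
image,image_app_a,4000,20
"""

# Roll aggregate rows up by Surface type.
totals = {}
for row in csv.DictReader(io.StringIO(csv_text)):
    s = totals.setdefault(row["surface"], {"impressions": 0, "clicks": 0})
    s["impressions"] += int(row["impressions"])
    s["clicks"] += int(row["clicks"])

# CTR per Surface computed from the aggregates, no per-user data required.
ctr = {k: v["clicks"] / v["impressions"] for k, v in totals.items()}
```

The same pattern extends to the other breakdowns (by app, context bucket, geography): aggregate in, aggregate out.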

Brand lift sits on a separate track. Standard panel-based survey measurement runs across context buckets and geographies. Pre- and post-campaign panels measure awareness, consideration, and purchase intent against a matched control. The methodology is coarser than the identity-linked brand lift studies of the display era, and more durable under the direction privacy law is heading. We support several measurement partners through the API.

Conversion attribution runs through marketing mix modeling and through last-click attribution on the brand’s own side. Surfacedd exposure becomes a variable in the marketing mix model alongside TV, digital, and retail media. The model assigns a lift coefficient to the AI layer across quarters. Brand teams new to MMM typically bring the Surfacedd exposure series into the existing model rather than building a new one.
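Mechanically, bringing the exposure series into an existing marketing mix model means adding one regressor. A minimal sketch on synthetic weekly data — the series names and coefficients are invented for illustration, and a production MMM would add adstock, saturation, and seasonality terms:

```python
import numpy as np

# Synthetic weekly series: sales driven by TV spend and AI-layer exposure.
rng = np.random.default_rng(0)
weeks = 52
tv = rng.uniform(50, 150, weeks)          # TV spend index
ai_exposure = rng.uniform(0, 40, weeks)   # Surfacedd exposure series (assumed)
sales = 200 + 1.2 * tv + 0.8 * ai_exposure + rng.normal(0, 5, weeks)

# Add the exposure series as one more regressor in the existing model.
X = np.column_stack([np.ones(weeks), tv, ai_exposure])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
intercept, tv_lift, ai_lift = coef  # ai_lift is the AI-layer lift coefficient
```

The point of the sketch is the shape of the workflow: the AI layer enters as a column alongside the channels already in the model, and the fitted coefficient is the lift estimate the quarterly review reads off.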

How to brief a creative for AI placements.

A brief for AI placements shares most of its structure with a brief for digital display. Four differences matter enough to be called out.

Audience definition runs on context rather than persona. A persona-based brief that describes a 34-year-old suburban homeowner does not map to AI Surfaces cleanly because the targeting does not select users by demographic graph. The brief should describe the prompts the target audience is likely to enter and the Surfaces that audience is likely to see. Instead of a persona, write the top five to ten prompt intents the campaign wants to appear against.

Creative assets run per Surface type, not per screen size. A single visual design does not span text, image, voice, and code. The brief should include one asset family per Surface type with matching disclosure allowances. Image Surfaces need a clean product still the generator can composite into a scene. Voice Surfaces need audio mastered to loudness, length, and silence-pad specs. Code Surfaces need a one-line sponsor copy. Text Surfaces need three to five headline variants for A/B.

Messaging runs conversationally. AI users arrive at a Surface after asking a question in plain language. Creative that reads like a conversational continuation performs better than creative that reads like a banner ad dropped into a chat window. Headlines should match the cadence of the surrounding agent output without claiming to be the agent.

Disclosure is structural, not optional. The brief should assume every Surface carries a sponsor label and a disclosure container. Creative teams working on AI placements for the first time sometimes try to design around the label or to minimize it; doing so fails creative review. The brief should frame the disclosure as a constant and design within it.

Budget allocation.

Brand teams new to the AI layer typically allocate between three and seven percent of the working media budget to Surfacedd in the first year. The range reflects category variance more than brand variance. Categories with heavy AI app audience overlap sit at the higher end; categories with lighter overlap sit at the lower end. For the audience sizing by category, see the 2026 report.

Within the AI layer allocation, the default split across formats tracks inventory availability rather than audience preference. Text Surfaces take the majority of early spend because the pool is widest and the creative transfers fastest. Image Surfaces take the next slice. Voice and code sit last because inventory, creative fit, or both are narrower. Brand teams running across all four Surfaces typically land near a 50 / 25 / 15 / 10 split in the first quarter.
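Worked through on concrete numbers, the first-quarter split above is simple arithmetic. The budget figure is invented for illustration:

```python
# Illustrative first-quarter split across the four Surface types.
split = {"text": 0.50, "image": 0.25, "voice": 0.15, "code": 0.10}
ai_layer_budget = 500_000  # e.g. 5% of a $10M working media budget (assumed)

allocation = {surface: round(ai_layer_budget * share)
              for surface, share in split.items()}
# {'text': 250000, 'image': 125000, 'voice': 75000, 'code': 50000}
```

A brand at the three-percent or seven-percent end of the first-year range would scale the same proportions up or down.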

The second-year allocation depends on the first-year reporting. Brand teams that see a clean lift signal in the marketing mix model usually increase allocation in year two. Brand teams that do not see the signal often reduce rather than remove; removal is rare because the non-participation cost becomes visible in Share of Placement reports against the competitive set.

For brand teams buying alongside ChatGPT Ads, the allocation split between first-party and third-party networks sits close to 50 / 50 at reporting maturity. First-year splits skew to first-party because the onboarding path is simpler. Second-year splits usually balance out as the third-party side catches up on reporting and reach reports roll up across both. See reach AI users for the position on how the two layers fit together.

Frequently asked questions.

Our media plan runs on GRPs and CPMs. How does this fit?
Surfacedd reports CPM, CTR, and a network-specific reach metric called Share of Placement. GRPs do not translate cleanly because AI outputs do not carry a fixed inventory grid the way TV dayparts do. Brand teams typically bring Share of Placement into the media plan as a separate line for the AI layer, alongside CPM-based reach for linear and digital. Quarterly reviews combine the series.
What part of the marketing budget should this come from?
Most brand teams fund initial AI layer spend from the digital performance budget for direct response Surfaces and from the brand building budget for image and voice Surfaces. A few teams fund it from search because the intent signal structurally resembles search. The correct split depends on the internal P&L owner of AI audiences; we can walk through the allocation during onboarding.
Can we run our existing brand guidelines on AI Surfaces?
Brand guidelines for logo use, color, and tone transfer directly. What does not transfer is the assumption that creative sits on a fixed canvas. Image Surfaces composite the product into a scene the generator produces, so the product still needs to be clean and isolable. Voice Surfaces need audio mastered to a specific loudness and duration. Code Surfaces need a one-line sponsor copy. Guidelines update, not replace.
How do we measure brand lift without a third-party ID?
Brand lift runs through panel-based survey measurement, not identifier-linked exposure matching. Advertisers typically run a pre and post panel using a measurement partner we support through the API. The measurement sits on top of aggregate exposure data by context bucket and geography. The methodology is less granular than cookie-based lift was and more durable under privacy law.
How long does a typical pilot run?
Most brand pilots run 8 to 12 weeks across one or two Surface types. That window captures enough impression volume to report Share of Placement against the competitive set and to run a brand lift panel. Pilots under 4 weeks rarely generate stable reporting because context buckets have natural seasonality the short window does not span. Longer pilots run across all four Surfaces.
What changes for 2026 specifically?
Third-party AI app usage grew past the threshold where a brand team can treat it as a test budget. The audience is large enough that non-participation is a reach gap, not a timing choice. Our 2026 report on AI app advertising breaks down audience size, category adoption, and spending patterns by vertical. Most brand teams are using that report to set the baseline for their first full-year plan.
FOR ADVERTISERS

Reach users inside the AI tools they already use.

CPC from $0.50. CPM from $5. Text, image, voice, and code placements across independent AI apps.