REPORT

The State of AI App Advertising 2026.

Data from 120+ AI apps in Surfacedd’s network. The first public benchmark report for this category.

What this report will cover.

AI app advertising became a real line item in 2026. OpenAI shipped ads inside ChatGPT for free-tier users. Google placed AdSense units inside third-party chatbot conversations. A handful of independent networks went live with surface-native inventory. Developers started publishing payout numbers. Advertisers started budgeting against the layer. What none of those moves produced was a shared benchmark.

The State of AI App Advertising 2026 is the first public benchmark report for the category. It draws on first-party data from 120+ AI apps integrated with Surfacedd. It reports CPMs and CTRs by Surface type. It reports revenue share figures and developer payout curves by app size. It names the integration patterns that are working and the ones that are not. The scope is the ad layer that sits inside AI outputs, not display networks attached to AI domains.

The promise is specific. Every number in the report is tied to a network impression record, not a survey response or a panel estimate. Per-app numbers are anonymized, but aggregate numbers are auditable. Methodology is printed in the report itself, so any analyst or academic can evaluate the approach before citing the figures.

The audience is three groups: advertisers and media buyers who need benchmark CPMs to build plans; AI app developers who need to know whether their revenue is in line with category norms; and analysts, journalists, and academics who need a citable reference point for the size and shape of the category in 2026.

Methodology.

The report uses network-wide first-party data from AI apps integrated with Surfacedd. Every impression and every click counted in the report was served and logged by Surfacedd’s ad server. No panel data, no extrapolation from a sample. If a number is in the report, it came from a record on our side of the network.

Participation is opt-in. Each app in the network chooses whether to let its aggregated metrics be included in public benchmark reports. Apps that opt out are excluded from the dataset entirely. Apps that opt in retain the right to withdraw before publication. That means the benchmark population is a self-selected subset of the network, and we say so in the report itself.

Per-app numbers are anonymized. No individual app is named, and traffic figures are reported in tiers rather than exact counts. That protects commercial sensitivity while still letting the aggregate numbers land. Readers see the distribution; they do not see which app sits where on it.

Aggregates are weighted by traffic tier rather than by raw impression count. This matters because a small number of very large apps would otherwise dominate the averages. Tier weighting produces a number that better reflects the typical app experience in the category. The report prints both the raw and tier-weighted views so readers can see the delta.
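The difference between the two views can be sketched with a few lines of Python. The tier labels, impression counts, and CPMs below are hypothetical illustrations, not figures from the report; the point is only that one large app dominates an impression-weighted average, while tier weighting averages within each tier first.

```python
# Hypothetical records: (traffic_tier, monthly_impressions, cpm_usd).
records = [
    ("small", 100_000, 5.0),
    ("small", 120_000, 6.0),
    ("large", 50_000_000, 12.0),  # one very large app dominates raw totals
]

def cpm_of(rows):
    """Impression-weighted CPM over a set of rows."""
    revenue = sum(imps / 1000 * cpm for _, imps, cpm in rows)
    imps = sum(i for _, i, _ in rows)
    return revenue * 1000 / imps

# Raw view: pooled across all apps, so the large app swamps the figure.
raw = cpm_of(records)  # ~11.97, barely distinguishable from the big app alone

# Tier-weighted view: compute CPM within each tier, then average the
# tier-level figures, so no single app can dominate the benchmark.
by_tier = {}
for row in records:
    by_tier.setdefault(row[0], []).append(row)
tier_weighted = sum(cpm_of(rows) for rows in by_tier.values()) / len(by_tier)
# ~8.77, closer to what a typical app in the category experiences
```

Printing both figures, as the report does, lets a reader see exactly how far the concentration at the top of the network pulls the raw average.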

The report period is the 2026 calendar year, rolled up quarterly for time-series views. Historic comparisons to 2025 are included where equivalent data exists on our side. Where it does not, we say so.

What to expect when it ships.

The report ships as a downloadable PDF with a companion data appendix. Expect ten sections, in this order.

  1. Executive summary. The top ten findings in a page, written for a reader who will not get past the first spread. Each finding is tied to a specific chart later in the report.
  2. Methodology. A full write-up of the dataset, opt-in rules, anonymization, tier weighting, and time windows. This section is the same one summarized above, in long form.
  3. Market shape. How many AI apps are running ads, in which categories, at what scale. Category density scores for the most active verticals.
  4. CPM and CTR by Surface type. Benchmarks for text, image, voice, and code Surfaces. Distributions, not single averages. Per-Surface notes on pricing volatility during the year.
  5. Revenue benchmarks by app size. Monthly revenue curves plotted against DAU tiers. What a 10K-DAU app is actually earning versus a 1M-DAU app, net of revenue share.
  6. Integration patterns. Which SDK integration shapes correlate with higher revenue per mille. Placement density, disclosure styling, and retry behavior are all called out.
  7. Disclosure and user trust. Survey data from end users on how they perceive sponsored AI Surfaces, paired with opt-out rates from apps that expose the control.
  8. What’s broken. The category failures we observed during the year, written bluntly. Fraud patterns, agent-loop click inflation, disclosure workarounds, and broken payout models.
  9. Predictions. A short forward view for 2027, grounded in the trends visible in the 2026 dataset.
  10. Downloads and citations. The dataset appendix, the preferred citation format, and the rights granted to readers.

The data is still being compiled. This page is the placeholder for the finished report, which will replace this content when ready. The methodology, scope, and section outline above are locked. The numbers are what we are working on.

Get notified when it ships.

Leave your email and we will send the report the day it publishes. No interim mailing list. One email, one link, one PDF.


Citation guidance.

Once the report is published, the preferred citation format is: Surfacedd, The State of AI App Advertising 2026, published [date]. Use the actual publication date, not the report year. The report year and the publication date are not the same: the report covers calendar 2026 and will publish in early 2027.

For press. Journalists may quote any chart or figure with attribution to Surfacedd and a link to the report page. Screenshots of charts are permitted under the same terms. We ask that reporters use the chart title as printed in the report, since the titles are written to be quoted.

For academic work. The report carries a DOI, issued at publication, which is the preferred identifier for citation. The dataset appendix is released under a license printed in the terms page, and the methodology section is written to be evaluable without access to the raw logs. Academics who need underlying data for replication can request it through the contact page; we grant access under an NDA that permits publication of findings.

For marketing use. Brands and agencies may reference figures from the report in their own materials with attribution. We ask that the figure be printed in full, not rounded or restated, and that the report title and year appear next to it. Paraphrasing a benchmark into a stat that reads better but is no longer the number we published is not permitted use.

What not to do. Do not present Surfacedd numbers as independent industry averages; they are first-party benchmarks from one network. Do not extrapolate per-app numbers from aggregates; the methodology explicitly forbids that inference. Do not remove the methodology context when quoting a headline figure.

While the report is in production, these pages cover adjacent ground. The AI ad network, defined sets out the category and the properties an AI-native network has to meet. Advertising for AI agents zooms out to the agent layer and the mechanics of reaching users who delegate decisions to AI. Share of Placement is the brand metric we are proposing for the AI ad era, and it is the framework several charts in the report are built against. Chatbot advertising in 2026 covers the year’s moves from OpenAI, Google, Koah, and Anthropic in long form, with context for anyone coming to the category cold.

SURFACEDD

Advertising for AI agents, built to be disclosed.

Join the waitlist. We are onboarding developers and advertisers in the order they sign up.