How to reach ChatGPT users, beyond ChatGPT.
ChatGPT has 800M weekly users. Third-party AI apps reach a different, overlapping audience. Both matter.
The overlap and the gap.
ChatGPT sits at roughly 800 million weekly active users. The third-party AI app category sits at hundreds of millions more, split across chatbots, writing assistants, image generators, voice-first assistants, and coding tools. The two audiences overlap. The overlap is neither negligible nor complete. The gap is where the reach conversation actually lives.
Panel measurement across the two segments points to an overlap of roughly 60 to 70 percent by weekly active users. A ChatGPT user has a roughly two-in-three chance of also using a third-party AI app during the same week. A third-party AI app user has a similar chance of using ChatGPT. The tails are where the story gets interesting. The 30 to 40 percent of third-party app users who do not touch ChatGPT during the week are often the specialist audiences: designers living in image tools, developers living in coding tools, ops professionals living in automation tools, knowledge workers living in research tools.
Those specialist tails are small in percentage and large in dollar value. A designer who spends 20 hours a week in an image generator is worth more to a furniture brand than a casual ChatGPT user the same brand might reach with a display banner. A developer who spends 30 hours a week in a coding tool is worth more to a cloud services brand than any broad impression buy. The tails carry the intent. Running only in ChatGPT misses them because they do not sit in ChatGPT for those workflows.
The gap matters whether the brand is a generalist or a specialist. A generalist loses frequency against users who spend their focused hours in third-party apps. A specialist loses the audience entirely. Either way, a single-network buy is a partial buy. For the wider network argument, see reach AI users.
Where ChatGPT’s audience also spends time.
ChatGPT users are mobile across the AI layer. The workflows that split off to specialist apps follow a predictable pattern.
Image generation and design.
Image-first workflows are the largest splinter. Users who want to generate a specific style, edit a product photo, or run a concept board typically leave ChatGPT for a dedicated image tool. The dedicated tool has better controls, better style fidelity, and a design-focused UI. The user is still the same user; the surface changed. Image Surfaces on Surfacedd reach this audience inside the generated output.
Code completion and development.
Developers ask ChatGPT general questions and switch to IDE-integrated completion tools for the actual writing of code. The IDE integration has lower latency, context from the current file, and a tighter loop than a chat window. The audience sits in the IDE for hours a day. Code Surfaces reach this audience inside the comment layer of completions.
Voice-first contexts.
In-home assistants, in-car assistants, and phone-based assistants capture the moments ChatGPT does not. A user cooking dinner, driving, or walking through a store often cannot look at a screen. The voice context is a different surface. Voice Surfaces reach this audience inside the assistant reply.
Category-specific copilots.
Legal copilots, medical copilots, developer copilots, and customer support copilots have their own user bases. A lawyer using a legal research copilot is not using ChatGPT for that work. The copilot has domain-tuned retrieval and workflow integration ChatGPT does not try to match. Text Surfaces reach these audiences inside the relevant category copilots.
Across the four categories, the common pattern is that the specialist tool captures the focused hours and ChatGPT captures the casual hours. A complete reach plan addresses both.
Running Surfacedd alongside ChatGPT Ads.
The two networks sit alongside each other cleanly at the account and campaign level. One creative brief can drive both, with separate creative production tracks per network. One reporting cadence can cover both, with manual reconciliation at the dashboard level. The operational load is closer to running one network than to running two.
At onboarding, brand teams usually set up the Surfacedd account within one or two weeks of setting up ChatGPT Ads. The sequencing matters less than the parallelism. Running ChatGPT Ads for three months before turning on Surfacedd leaves three months of third-party reach on the table; the overlap is known in advance and the sequential plan does not solve for it.
At campaign setup, most brand teams run the same core offer across both networks with two creative adaptations. The ChatGPT Ads campaign runs the creative the ChatGPT format accepts. The Surfacedd campaign runs the creative adapted for text Surfaces and the additional assets for image, voice, and code Surfaces where the fit exists. Targeting on Surfacedd runs on context buckets and app inclusion lists; targeting on ChatGPT Ads runs on whatever ChatGPT’s first-party signal set supports.
At reporting time, each network produces its own impressions, clicks, CTR, CPC, and CPM series. Surfacedd additionally produces Share of Placement across the third-party network. Combining the series uses a panel-based overlap estimate; the methodology is published and updated quarterly. For the how-to detail on ChatGPT Ads specifically, see how to advertise on ChatGPT.
How to split budget.
First-year budget splits between ChatGPT Ads and Surfacedd vary by category, by creative mix, and by the brand team’s comfort with a new measurement stack. A few patterns hold across most campaigns we see.
First quarter. Start at 70 / 30 first-party to third-party. The higher first-party share reflects the simpler onboarding path rather than a strategic decision. Running both from day one matters more than the exact split; a 60 / 40 or 80 / 20 start is fine so long as the third-party side is not zero.
Second and third quarters. Move toward 55 / 45 as the third-party reporting matures and Share of Placement reports surface reach gaps against the competitive set. Brand teams that see a strong lift signal from Surfacedd in the marketing mix model move faster. Brand teams with weaker signals stay closer to the first-quarter split through year end.
Year two. Most campaigns land near 50 / 50. A few settle at 60 / 40 one way or the other depending on category. Developer-facing brands often tilt more heavily toward Surfacedd because the code Surface audience concentrates there. Consumer goods brands often tilt more heavily toward ChatGPT Ads because the reach scale is higher and the creative fit is easier.
Allocation inside the Surfacedd share typically starts at 50 / 25 / 15 / 10 across text, image, voice, and code and adjusts based on creative performance. Image performance often outgrows its first-quarter share once the creative pipeline is set up. Voice performance is more variable by category. Text performance is the steadiest and most predictable across brands.
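The starting split above can be sketched as a simple allocation. A minimal illustration, assuming a hypothetical quarterly Surfacedd budget of $100,000 (the split fractions are from the pattern above; the dollar figure is invented):

```python
# Starting allocation inside the Surfacedd share: 50 / 25 / 15 / 10
# across text, image, voice, and code Surfaces.
SURFACE_SPLIT = {"text": 0.50, "image": 0.25, "voice": 0.15, "code": 0.10}

def allocate(budget: float, split: dict) -> dict:
    """Split a budget across Surfaces by the given fractions."""
    assert abs(sum(split.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return {surface: round(budget * share, 2) for surface, share in split.items()}

# Hypothetical $100,000 quarterly Surfacedd budget.
print(allocate(100_000, SURFACE_SPLIT))
# {'text': 50000.0, 'image': 25000.0, 'voice': 15000.0, 'code': 10000.0}
```

Adjusting the split over time, per the pattern above, is a matter of changing the fractions as creative performance data comes in.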
Reporting: combining signals.
Combining reporting across two networks is a manual task today. The shape of the task is familiar to any team that has combined TV reach with digital reach in a marketing mix model. The inputs differ; the method is known.
Step one. Pull the ChatGPT Ads reach, impression, click, and CPM series for the period. The ChatGPT dashboard exports the series as CSV and an API pull is available for enterprise accounts. Note the granularity supported; some breakdowns are weekly only.
Step two. Pull the Surfacedd reach, impression, click, CPC, CPM, and Share of Placement series for the same period. Surfacedd exports CSV and supports an API. Granularity is hourly by default and rolls up cleanly to the ChatGPT weekly series.
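The hourly-to-weekly rollup in step two can be done with a standard ISO-week bucketing. A minimal sketch, assuming an exported hourly series of (timestamp, impressions) rows; the timestamps and counts here are hypothetical:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical hourly impression rows from a Surfacedd CSV export.
hourly = [
    ("2025-03-03T09:00", 1200),  # Monday, ISO week 10
    ("2025-03-03T10:00", 1450),
    ("2025-03-10T09:00", 1600),  # Monday, ISO week 11
]

# Bucket hourly rows into ISO (year, week) keys so the series lines up
# with a weekly first-party export.
weekly = defaultdict(int)
for stamp, impressions in hourly:
    year, week, _ = datetime.fromisoformat(stamp).isocalendar()
    weekly[(year, week)] += impressions

print(dict(weekly))  # {(2025, 10): 2650, (2025, 11): 1600}
```

The same bucketing applies to clicks and spend; CPC and CPM should be recomputed from the weekly sums rather than averaged from hourly rates.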
Step three. Apply an overlap estimate. Panel-based measurement partners we support produce an overlap factor for the audience segments the campaign targeted. The factor typically sits between 0.6 and 0.7 for broad consumer audiences and lower for specialist audiences. Deduplicated reach equals first-party reach plus third-party reach minus overlap.
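The deduplication arithmetic in step three is straightforward once the overlap factor is in hand. A minimal sketch, assuming for illustration that the factor is applied to the smaller of the two reach figures to estimate the shared audience; the published methodology may define the factor differently, and the reach numbers below are hypothetical:

```python
def deduplicated_reach(first_party: int, third_party: int, overlap_factor: float) -> int:
    """Deduplicated reach = first-party reach + third-party reach - overlap.

    Assumption for illustration: the panel overlap factor is applied to
    the smaller of the two reach figures to estimate the shared audience.
    """
    overlap = overlap_factor * min(first_party, third_party)
    return round(first_party + third_party - overlap)

# Hypothetical weekly reach figures with a 0.65 broad-consumer factor.
print(deduplicated_reach(5_000_000, 2_000_000, 0.65))  # 5700000
```

Specialist audiences take a lower factor, per the 0.6 to 0.7 range above, so their deduplicated reach sits closer to the simple sum of the two networks.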
Step four. Feed the combined series into the marketing mix model. The model assigns a lift coefficient to each network independently and to the combined AI layer as a variable. Most brand teams report the combined AI layer variable to their CMO review alongside TV, digital, and retail media. The methodology is published and refined quarterly. For the companion detail on brand placement specifically, see AI brand placement.
Frequently asked questions.
If my audience is on ChatGPT, why do I need a second network?
How big is the overlap between ChatGPT and third-party AI apps?
Does running on both networks cause double-counting in reporting?
What budget split usually works in year one?
Can I run the same creative in both networks?
How do I combine reporting from both networks?
Reach users inside the AI tools they already use.
CPC from $0.50. CPM from $5. Text, image, voice, and code placements across independent AI apps.