The Great Targeting Takeback: Why PMax and Advantage+ Are Brilliant Strategy
Firstly, MERRY CHRISTMAS and happy Christmas Eve. You can tell this time of year gets me so bored I write essays about digital advertising and its impact.
For most of the modern era of digital advertising, we marketers have told ourselves a comforting story.
We told ourselves that the advertiser was the strategist. We chose the audience. We chose the levers. We chose the placements. We chose the constraints. The platform was a high-tech delivery truck that moved our intent into the world, then sent us a spreadsheet so we could feel in control.
That story is ending.
Meta and Google are not “removing targeting” because they suddenly believe control is overrated. They are removing it because targeting, as a product interface, was an economic mistake. It handed the most valuable part of the system, the right to decide what an impression is worth, to the buyer and to a sprawling ecosystem of third parties that sat between buyer and publisher.
Performance Max and Advantage+ are the correction. They are the platforms reclaiming the decision function, then productizing it back to us as “automation”.
If you follow the incentives, it’s hard not to admire how clean the strategy is. And if you care about incrementality, it’s even more consequential than most people realize.
This piece lays out the long arc: a history of targeting, what each stage actually was, and the recurring pattern most people miss. Then I’ll explain how PMax and Advantage+ work, and why the structural shift is likely to be good for Meta and Google’s long-run economics. Then I’ll get into the uncomfortable part: why this “automation” makes the entire advertiser base a better competitor, and why that tends to raise the clearing price of ads over time.
Finally, I’ll pull a thread that deserves to be front and centre: Byron Sharp has been telling the industry for years that most “targeting” is a weak lever for real growth. That lens matters here because it changes what we should be optimizing for in the first place (and shows why optimizers may be good for taking budget but not chasing growth).
And it leads to the punchline: in a world where platforms run the allocator, third-party incrementality measurement becomes a structural advantage. Not just for boardroom budgeting, but because it gives you unique training data to feed the platform beasts. In that world, marketing mix models stop being “reporting”. They become competitive infrastructure.
A short history of targeting (and the recurring role of the third party)
When marketers talk about targeting, we tend to talk about it like it is a modern invention. It isn’t. The impulse is ancient: you want the right message in front of the right person at the right time. What changes across eras is not the desire. It is the mechanism, and more importantly, who gets to define “right person”.
Across every stage, one question keeps showing up:
Who sets the targeting parameters: the publisher, the buyer, or a third party sitting between them?
Stage 0: Targeting did not exist (mass broadcast)
For most of advertising history, targeting was blunt because media itself was blunt.
You bought reach. You bought repetition. You bought the cultural campfire.
Early radio and TV were not “targeted”. They were pooled attention. You didn’t pick individuals. You picked a time slot and a show and hoped the households you cared about were sitting there. Even the logic of the media plan was fundamentally probabilistic.
But there’s an important nuance. Targeting was still present, it was just implicit. The buyer chose which shelf to rent. The publisher built the shelf.
The parameter was still set outside the publisher.
Stage 1: Contextual and program-based targeting (the content is the proxy)
Then targeting became deliberate: context as proxy.
Print always had this. A fishing magazine is a filter. A finance section is a filter. An industry trade title is a filter. Early broadcast did the same thing through programs. Even soap operas were not just drama, they were household buyer delivery systems.
This is where targeting becomes “select the environment”.
The control point still sits with the buyer. The publisher sells the environment. The buyer decides which environment is valuable.
Stage 2: Demographics becomes a currency (measurement arrives)
As measurement improved, targeting became tradable.
Panels, surveys, ratings, readership studies. The industry built a shared language for who was actually watching or reading. Age and gender became dominant not because they were philosophically precise, but because they were measurable and scalable as a trading currency.
This is the first era where targeting gets standardized enough to become an actual market unit.
But again, look at where the decision lives. The buyer still picks the demo. The agency still “owns” the plan. The publisher sells inventory described in that currency.
The targeting parameter is still external.
Stage 3: Title and channel specialization (fragmentation creates shelves)
As media fragmented, targeting got sharper without necessarily needing better data.
Cable is targeting by channel. Niche magazines are targeting by title. Radio is targeting by station identity. Outdoor has a crude version through placement and commuter flow.
The mechanic is simple: create a media product that attracts a type of person, then sell access to that product.
This is not platform targeting. It is audience self-sorting. And once again, the buyer defines targeting by deciding which shelf to rent.
Stage 4: Database marketing and CRM (the customer file becomes the target)
Before we even get to cookies, there’s a stage many digital natives forget: database marketing.
Direct mail, loyalty programs, membership lists, point-of-sale data. Brands started building customer files, then using those files to target offers and communications.
This is a big step because the “target” becomes something the advertiser owns. It is not just the publisher’s audience. It is the advertiser’s relationship graph.
But notice the theme: the publisher is still not defining the parameter. The buyer is. The advertiser is. The third party is often the CRM system or the data vendor helping segment the list.
Stage 5: The digital footprint arrives (the web makes behavior legible)
Then the internet changed the physics.
For the first time, behaviour became legible at scale. Pages were events. Clicks were signals. Searches were intent expressed in plain text.
Two major targeting systems emerged:
Search, where the “targeting” is the keyword.
Display, where targeting became behavioral and contextual via cookies, ad networks, and later real-time bidding.
This is also where the targeting ecosystem explodes. Data brokers, DMPs, retargeting vendors, audience segment sellers, attribution vendors. The value chain becomes crowded.
And crucially, many of these actors are not the publisher. They are third parties defining audiences, stitching identity, and selling segments that publishers then deliver.
The third party becomes the nervous system of targeting.
Stage 6: Programmatic (targeting becomes liquid)
Programmatic made targeting liquid.
Once you have auctions, you can price impressions in real time. Once you have cookies, you can label users. Once you have DMPs, you can bundle those users into segments that feel tradable.
This is the era of “sports car dashboards” in ad managers. Hundreds of levers. Thousands of segments. Endless micro-optimizations.
The buyer sets the constraints. The third-party ecosystem sells the segments. The publisher supplies the impressions.
Targeting is still mostly defined outside the publisher.
Stage 7: Mobile identity and platform-scale modeling (walled gardens consolidate power)
Mobile shifted the centre of gravity again.
The most valuable behavioural data stopped living on the open web. It lived inside apps. It lived behind logins. It lived inside walled environments.
This is where Meta and Google’s advantage became structural. They did not just have inventory, they had first-party signal density at a scale the open web could not replicate.
Marketers still had targeting levers, but the data advantage had already moved inside the platforms.
And then the spine broke.
Stage 8: Privacy shocks (the third-party spine fractures)
The old targeting world relied on cross-site and cross-app tracking. It relied on third-party cookies. It relied on mobile identifiers and permissive data sharing.
That infrastructure has been under attack for years from regulation and platform policy. Apple’s App Tracking Transparency made tracking permission explicit at the user level, and developers must request permission to track across other companies’ apps and websites (see Apple’s overview of ATT here and the framework documentation here).
Google’s cookie story has been more complex and politically fraught, but the direction has still been toward reducing cross-site tracking and pushing privacy-preserving alternatives, with ongoing updates through the Privacy Sandbox program (for example, Google’s July 2024 update emphasizing a user-choice approach here).
Whether you view these moves as privacy progress or competitive warfare, the economic consequence is the same: the third-party targeting ecosystem becomes weaker and less reliable.
Which sets up the final stage.
Stage 9: The platform reclaims targeting as an internal optimization problem
This is where PMax and Advantage+ sit.
Instead of letting advertisers define audiences as hard constraints, the platforms increasingly treat advertiser inputs as hints, then run the allocator themselves.
Google is explicit about this in its Performance Max documentation: PMax uses Google AI to optimize bids and placements for your conversion goals, while advertisers provide inputs like creative assets, audience signals and conversion values (Performance Max overview). And Google is also explicit that audience signals are suggestions, not fences: PMax can show ads outside your signals if those users are likely to convert (audience signals explanation).
Meta’s framing is similar. Advantage+ shopping campaigns are positioned as AI-powered automation that reduces manual setup and optimizes delivery and creative to drive sales (Meta Advantage+ shopping campaigns page). When Meta introduced Advantage+ Shopping, the company explicitly described it as eliminating manual steps and automating large creative combinations at scale (Meta Newsroom announcement).
At a systems level, the industry just moved from buyer-defined targeting to publisher-defined optimization.
That is the shift. Everything else is interface.
How PMax and Advantage+ work (and what they quietly take away)
To understand why this is strategic, you have to get specific about the structural difference.
The old world was constraint-led:
The advertiser defines the audience (often with third-party data), defines placements and budgets, and the platform optimizes delivery inside those walls.
The new world is objective-led:
The advertiser defines the outcome goal (conversions, value, ROAS target), supplies creative assets and conversion signals, and the platform decides who, where, and when, across its inventory, to hit the objective.
This is not “less targeting”. It is a transfer of targeting authority.
Performance Max in plain English
Performance Max is Google’s attempt to collapse campaign complexity into a single outcome engine.
Instead of building separate Search, Display, YouTube, Discover and Shopping strategies, you create one campaign that can run across Google inventory. Google describes it as using AI to find potential customers and serve the most appropriate ad with an optimal bid to maximize performance, guided by your goals and inputs (PMax help page).
It is also worth remembering PMax is not just a new campaign type. It has been positioned as the successor to older automated shopping formats. Google’s 2022 upgrade narrative is direct: Smart Shopping and Local campaigns were upgraded to PMax to unlock broader inventory across YouTube, Search text ads and Discover (Google product blog).
Mechanically, what matters is:
You give Google a goal and constraints like budget and (often) target ROAS. You provide creative assets. You provide conversion tracking and values. You can provide “audience signals”, but they are starting points, not restrictions (audience signals).
Then Google runs the allocator.
Advantage+ in plain English
Advantage+ is Meta’s equivalent move, especially for commerce.
Meta positions Advantage+ shopping campaigns as automation that reduces manual setup and uses AI to optimize delivery and creative combinations, explicitly describing automation of large sets of creative combinations (Meta Newsroom announcement) and describing the product as automated shopping ads designed to optimize delivery (Meta Advantage+ page).
Just like PMax, the structural role is the same:
You define the objective. You supply creative and tracking. Meta decides where the budget goes inside its ecosystem to hit the goal.
It is a portfolio allocator running inside a walled garden.
Why this is smart strategy for Meta and Google (even if marketers complain)
These products are not primarily designed to make advertisers’ lives easier. That is the sales pitch. The deeper point is that they reshape who owns advantage inside the ad market.
There are four strategic wins baked into this shift.
1) They internalize the decision function
In the old world, you could argue that agencies and sophisticated advertisers had a moat. They knew how to segment, how to suppress overlap, how to build exclusions, how to exploit mispriced audiences.
Platforms tolerated this because they still made money. But it was leaky. The platform’s data advantage could be partially arbitraged by external decision-makers.
Automation changes this. If the platform decides the audience and placement inside a goal-based system, then the platform owns the alpha.
Advertisers still provide inputs. But inputs are not control.
2) They disintermediate the third-party ecosystem
Across targeting history, third parties constantly tried to sit between buyer and publisher.
PMax and Advantage+ weaken that layer.
If the platform is doing audience discovery, you need fewer purchased segments. If it is doing placement allocation, you need fewer bid-management tools. If it is optimizing across its own properties, it can reduce reliance on third-party identity.
This is not just about privacy compliance. It is about margin and power.
A simpler stack is a stickier stack.
3) They make it easier to monetize new inventory
Here is a subtle one.
When a platform launches a new surface, advertisers are cautious. They want benchmarks. They want control. They want reporting.
Automation solves adoption friction. If the platform can route spend into new inventory while keeping headline CPA stable, adoption scales faster.
Automated allocation is also a distribution channel for the platform’s roadmap.
4) They build learning flywheels that compound
Meta and Google are machine learning businesses. Ad performance is not just an auction, it is a prediction problem.
The more spend they run through automated systems, the more conversion data they gather, the better their models get. Better models improve advertiser outcomes. Better outcomes increase spend.
PMax and Advantage+ tighten this flywheel because they increase default adoption and reduce friction.
They make the “best way” also the easiest way.
The Byron Sharp lens: targeting is often the wrong hill to die on
There’s an awkward irony in watching platforms “take targeting away” at the exact moment a large body of evidence-based marketing has been telling everyone that targeting was oversold as a growth strategy in the first place.
Byron Sharp and the Ehrenberg-Bass Institute have spent years arguing that brands mainly grow by increasing penetration. Not by obsessively refining who you reach, but by reaching more category buyers and building memory structures so you come to mind when people buy. If you want a straightforward summary of that core view, the Marketing Science site has a plain-English explainer on what Sharp means by mental and physical availability and why penetration matters (How do you measure How Brands Grow?).
What matters here is not a caricature like “Sharp says targeting is useless”. That isn’t the point. The point is that most of what marketers call “targeting” (hyper-narrowing to “the best customers” or “people most likely to buy”) tends to be a mirage for incremental growth.
Optimizers drift toward high-propensity people because that’s what an optimizer does when the reward function is “get me a conversion at the lowest cost”. It finds the easiest wins, the people already near the edge of purchasing, and it makes the dashboard look great. The trap is that high propensity is often the opposite of incremental. You win the attribution, not the customer.
Sharp has been particularly blunt about the hype cycle around programmatic and the inflated claims attached to “precision” and “zero wastage” narratives. His argument is not that relevance is bad. It’s that marketing’s job is not to eliminate all “waste” if that “waste” is the cost of building future demand and reaching light buyers. If you want his tone unfiltered, his post “Programmatic – don’t believe the hype” is instructive (Byron Sharp blog).
This lens matters for PMax and Advantage+ because it reframes what we should want from these systems.
If you let advertisers over-specify narrow audiences, many will shrink reach, self-sabotage long-term demand, and then complain performance “gets worse” when they exhaust the pool. If you nudge them into broad, goal-based optimization, you keep spend flowing and keep the system learning.
Are Meta and Google doing this because they are Ehrenberg-Bass disciples? Of course not.
But the product move rhymes with the research: broad reach and smarter allocation is often healthier for growth than obsessive micro-targeting.
And there is a second alignment that is more cynical and more economically important:
Broad reach does not just help brands grow. It thickens auctions.
When constraints loosen, more advertisers compete over more impressions, and the market clears at higher prices.
Why automation makes advertisers better competitors (and why that tends to push CPMs up)
Here is the part most marketers will feel, whether they articulate it or not.
When targeting is manual, some advertisers are simply bad at it. They waste money. They bid into the wrong places. They choose narrow segments that collapse. They mismatch creative to context. They leave value on the table.
In that world, competent advertisers are not only competing against other competent advertisers. They are also competing against incompetence.
Incompetence is cheap.
Automation removes incompetence as a moat.
That sounds advertiser-friendly, but it’s economically brutal. Because if everyone gets better at extracting conversion value from the same inventory, the same inventory becomes more valuable.
And in an auction, higher value tends to translate into higher price.
This is why I think the long-run CPM story is directionally clear even if the short-run is noisy and cyclical.
A bottom-up model for why CPMs rise
In any performance auction, advertisers have an internal ceiling for what they can pay. That ceiling is math.
A simplified version looks like this:
You have a target CPA (say you can afford $50 per purchase). You have a conversion rate per impression (how many purchases you get per impression). From that you can derive a max CPM you can pay.
If your conversion rate per impression is r, then at a CPM of c, your expected cost per conversion is:
CPA ≈ c / (1000 × r)
Rearrange it and your max CPM is:
c_max ≈ CPA_target × 1000 × r
Key point: your bid ceiling is proportional to conversion rate.
So what happens if the platform’s automation improves conversion rates across the advertiser base?
Bid ceilings go up.
When lots of bidders’ ceilings go up at once, the clearing CPM tends to go up too.
A concrete example
Imagine a slice of inventory.
An advertiser targets a $50 CPA.
Today, they get 4 purchases per 100,000 impressions.
That conversion rate per impression is 0.00004.
Their max CPM is:
c_max = 50 × 1000 × 0.00004 = $2
Now suppose automation improves match quality and creative selection and they get 5 purchases per 100,000 impressions instead of 4.
New conversion rate is 0.00005.
New max CPM is:
c_max = 50 × 1000 × 0.00005 = $2.50
That is a 25% increase in bid ceiling without the advertiser changing their CPA target.
If you do that across thousands of advertisers, you don’t need a conspiracy theory about platforms “raising prices”. The auction raises prices for them.
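The arithmetic above fits in a few lines. This is a toy illustration of the ceiling formula, not any platform’s actual bidding logic:

```python
def max_cpm(target_cpa: float, conv_rate_per_impression: float) -> float:
    """Bid ceiling implied by a CPA target: c_max = CPA_target * 1000 * r."""
    return target_cpa * 1000 * conv_rate_per_impression

# Before: 4 purchases per 100,000 impressions at a $50 CPA target
before = max_cpm(50, 4 / 100_000)   # ≈ $2.00 CPM
# After automation lifts it to 5 purchases per 100,000 impressions
after = max_cpm(50, 5 / 100_000)    # ≈ $2.50 CPM

print(f"{before:.2f} -> {after:.2f} ({(after - before) / before:.0%} higher ceiling)")
```

Same CPA target, same advertiser, a quarter more headroom to bid.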
Why PMax and Advantage+ push this dynamic
They push it in three ways.
First, better matching. The platform can use first-party signals to find pockets of conversion propensity a human would not explicitly select.
Second, more creative variation. Meta explicitly sells Advantage+ shopping campaigns as automating large creative combinations to learn faster (Meta Newsroom announcement). Creative variance improves match quality, which improves conversion rates.
Third, cross-surface allocation. Google sells PMax as a single campaign type that can access multiple Google channels and inventory (Performance Max overview). That lets the system hunt for cheaper marginal conversions across inventory, then scale.
All three tend to increase realized conversion rate per impression for a large portion of the advertiser base, especially the advertisers who were previously less sophisticated.
That raises bid ceilings.
And then the market clears higher.
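That clearing dynamic can be shown with a toy single-slot, second-price auction, assuming each bidder simply bids its ceiling. Real ad auctions are far more complex, so treat this as the mechanism only:

```python
def clearing_cpm_cents(bid_ceilings):
    """Single-slot second-price logic: the winner pays the runner-up's ceiling."""
    top_two = sorted(bid_ceilings, reverse=True)[:2]
    return top_two[1]

ceilings = [200, 180, 150, 120]          # four bidders' ceilings, in cents of CPM
lifted = [c * 5 // 4 for c in ceilings]  # automation lifts every ceiling by 25%

print(clearing_cpm_cents(ceilings))  # → 180 ($1.80)
print(clearing_cpm_cents(lifted))    # → 225 ($2.25)
```

No bidder changed its CPA target. The clearing price rose because every ceiling did.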
The second-order effect: less friction means more demand
Automation does not just improve performance. It reduces operating friction.
Friction is a tax on spend. The harder it is to run campaigns, the fewer advertisers scale aggressively and the more budgets stay offline.
PMax and Advantage+ reduce friction and make scaling simpler. More advertisers show up. More budgets become always-on.
Even if each advertiser only increases spend slightly, the aggregate demand pressure can be large because these platforms operate at enormous scale.
Again, this is directional, not deterministic. Markets have macro cycles. Supply shifts. New formats change the mix. But the mechanism is clean: if automation raises conversion yield and reduces friction, CPM inflation is a normal clearing outcome.
The measurement problem gets worse (and that is exactly why measurement becomes the new advantage)
Now we hit the part everyone tries to avoid because it forces honesty.
If platforms run the allocator, the platform’s reporting becomes less neutral.
Not because anyone is “lying”, but because the platform’s default metrics are optimized for operating its own system. They are designed to encourage spend and to provide tractable feedback loops for bidding models.
That’s why you see platforms increasingly push incrementality language and lift testing. Google positions conversion lift as an incrementality tool to measure conversions directly driven by people viewing your ad (Google conversion lift). Meta positions Conversion Lift as measuring the incremental effect of ads by comparing exposed and control groups (Meta Conversion Lift help and Meta’s measurement page).
This is good. It is progress. It is also not enough.
Because lift studies tend to be channel-contained. They tell you the incremental effect of ads within that platform under a given design. They do not automatically solve the cross-channel allocation problem. They also do not automatically measure longer-term brand effects or halo effects that are not captured in the immediate conversion window.
So what happens in a world where:
Platforms want you to hand them more budget control via automation.
Platforms also want to define what “good” looks like via platform-contained metrics.
The advertiser still needs to decide how much to spend, across channels, over time, with real business outcomes.
This is where third-party incrementality measurement becomes structural advantage.
And this is the part marketers are not yet fully pricing into their future operating model.
Why third-party incrementality measurement becomes a compounding edge
In an automated world, the scarce resource is not targeting settings. It is truth.
If your competitor has a more accurate view of what is truly incremental, they will allocate budgets better over time. They will also feed better signals into the platform’s optimizer, which makes the platform work harder for them.
That second part is the key. Measurement is no longer just governance. It becomes training data.
Google’s Smart Bidding explicitly uses your conversion tracking data to optimize auctions (Smart Bidding overview). Google also pushes features designed to improve conversion measurement and “unlock more powerful bidding”, like enhanced conversions, which send hashed first-party conversion data to improve measurement accuracy (enhanced conversions). Offline conversion imports exist precisely because many valuable outcomes happen outside the immediate online click path, and importing them helps you measure and optimize toward them (offline conversion imports and offline conversion import FAQs). Google has also made value-based bidding explicit: you can optimize to maximize conversion value and use conversion value rules to better reflect what you know about your business value (maximize conversion value bidding and conversion value rules).
Read those pages like an economist, not a marketer.
Google is telling you: the way you “talk” to the bidding algorithm is through the conversion signal and the conversion value you feed it.
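To make that concrete: enhanced conversions, for instance, involve sending normalized, SHA-256-hashed first-party identifiers. Here is a minimal sketch of that normalize-then-hash step; Google’s docs specify the exact normalization rules per identifier type, so this shows only the general shape:

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    # Trim whitespace and lowercase before hashing, then SHA-256 hex digest.
    # This is the general shape Google describes; consult the official docs
    # for the exact normalization rules for each identifier type.
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The same person yields the same hash regardless of formatting noise
assert normalize_and_hash(" Jane.Doe@Example.com ") == normalize_and_hash("jane.doe@example.com")
```

The point is not the hashing itself. It is that the conversion signal, not the audience picker, is now the interface.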
Meta is functionally similar. If you want the algorithm to optimize toward higher-quality outcomes, you have to give it cleaner signals and better definitions of value.
That is the bridge to MMM.
Why MMM becomes the private dataset that feeds the platform beasts
A marketing mix model is not just a report. In the modern world, it’s a way to infer incrementality using aggregate data, across channels, including channels and effects that platform-contained reporting struggles to capture cleanly.
And critically, it can produce something extremely valuable in an automated auction world:
A better estimate of which outcomes are actually incremental and how their value changes at the margin.
That is what platforms cannot fully give you, because they do not see your full business system. They see your in-platform outcomes. They see your observed conversion events. They do not see the counterfactual across your whole market in a neutral way.
When you have third-party incrementality measurement, you can do two things your competitors often cannot.
First, you can allocate budgets across channels with a cleaner view of incremental return.
Second, you can translate that truth back into the platform’s optimization loop by redefining what the algorithm should chase.
In practice, that often looks like:
You stop optimizing to “leads” and optimize to qualified leads or downstream revenue via offline conversion imports.
You push value-based bidding instead of pure volume.
You adjust conversion values so the algorithm learns that some conversions are worth dramatically more than others.
You take what MMM tells you about long-run value and you encode it into what the platform can actually optimize for.
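As a sketch, with entirely hypothetical segment names, values and multipliers, “encoding MMM truth” can be as simple as rescaling the conversion values you send back to the platform:

```python
# Hypothetical inputs: naive revenue per conversion type, and an
# MMM-derived estimate of how incremental each conversion type really is.
BASE_VALUE = {"qualified_demo": 500.0, "newsletter_signup": 20.0, "trial_start": 200.0}
INCREMENTALITY = {"qualified_demo": 1.4, "newsletter_signup": 0.3, "trial_start": 1.0}

def conversion_value(segment: str) -> float:
    """The value the optimizer should chase: naive value scaled by incrementality."""
    return BASE_VALUE[segment] * INCREMENTALITY[segment]

for seg in BASE_VALUE:
    print(seg, round(conversion_value(seg), 2))
```

Feed those adjusted values back through offline conversion imports or value rules, and the allocator starts hunting the outcomes you actually believe are incremental, not just the cheapest recorded events.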
This is why I think MMM will become less optional over time, particularly for sophisticated advertisers spending meaningful budgets. Not because MMM is fashionable, but because it is one of the only scalable ways to create a third-party incrementality truth set that can then be used to steer automated allocators.
And here’s the tell that this is not just a niche view from measurement nerds: Google itself has launched an open-source MMM called Meridian and is publicly positioning it as “the future of marketing mix modelling” for modern measurement needs (Google Think article and Meridian developer site).
When the platform building the auction is also publishing an MMM framework, you should read that as a signal about where the industry is going.
It’s not altruism. It’s ecosystem design.
If advertisers can measure incrementality better in a privacy-durable way, they will trust the system more and keep spending. And if advertisers feed better value signals into automated bidding, the allocator becomes more efficient, auctions become more competitive, and the platform captures more value.
This is the flywheel again, but now the input is measurement.
The ETF parallel: automation turns targeting into passive investing
Now we can connect the story to ETFs, because the analogy is more than cute.
Active managers sold a story for decades: we can pick winners, time markets, generate alpha.
Then index funds and ETFs took over not because they were glamorous, but because they were good enough, cheap, and structurally hard to beat.
As index adoption rose:
More capital flowed into the same baskets.
Alpha got harder.
Fees compressed.
Advantage migrated away from “picking” and toward cost structure, distribution, and factor exposures.
PMax and Advantage+ feel like the ETF-ification of targeting.
They take what used to be a craft skill (audience building, segmentation, placement strategy) and collapse it into an index-like product:
Put money in. Set your objective. Provide your assets. Let the system allocate.
Most advertisers are not trying to be Renaissance media traders. They want outcomes. So they buy the index.
And when everyone buys the index, two things happen.
The average advertiser gets better results than they used to, because the system removes obvious mistakes.
The auctions become more efficient and more competitive, because the dumb money becomes less dumb.
That second point is the hidden tax.
ETF investing made markets harder for stock pickers not because it made everyone stupid, but because it made “good enough” accessible.
Automation does the same for targeting.
So does targeting become a creative talent again, or does advantage move elsewhere?
If everyone’s targeting is broadly the same, where does advantage come from now?
I think the answer is uncomfortable for the industry because it shifts advantage away from the traditional media buyer identity. In a PMax and Advantage+ world, “targeting” becomes less about audience definitions and more about four things.
First, the objective function you feed the machine. If you tell the system to optimize to last-click purchases, it will behave like a last-click machine. If you feed it higher-quality conversion signals and value-based goals, you are redefining the system’s map of value.
Second, first-party data plumbing. Enhanced conversions, offline conversion imports, and clean event hierarchies are not implementation details. They are how you communicate truth to the allocator (enhanced conversions, offline conversion imports).
Third, creative becomes the new targeting. If the platform chooses the audience, creative is how you select for response. It is the self-selection mechanism. It is also how you build memory structures over time, which matters if you take the Sharp view seriously.
Fourth, and this is the one most teams try to dodge, product and offer become auction weapons. In an auction, the advertiser with the higher conversion rate can afford a higher CPM. If your site converts better and your offer is sharper, you can bid more. Automation makes this more true, not less, because it improves the system’s ability to find users who are on the edge of converting.
And then there is the layer that will separate serious advertisers from everyone else.
Third-party incrementality measurement.
Because if targeting has been indexed, then the scarce input is not who you chose. It’s what you know that the index does not.
If you have a robust third-party view of incrementality, you can do two things: allocate budgets better and feed better value signals back into the platforms’ optimization loops.
That is a structural advantage. It compounds. And it is exactly why I believe MMMs, and adjacent incrementality infrastructure, are going to become far more common as the automation era matures. Not because everyone loves measurement, but because everyone will be forced to compete in auctions where the allocator is the same and the only durable advantage is the quality of the truth you feed it.
So here’s the question I’ll leave hanging:
In a world where PMax and Advantage+ have made targeting passive, do we end up in a marketing economy where the real competitive edge is creative and incrementality, not audiences? And if that is true, which brands are building that muscle now, while everyone else is still arguing about interest stacks?

The ETF analogy is spot on. When I first started managing paid campaigns, micro-targeting felt like the entire skill set, but watching conversion rates across brands shows that broad reach with better conversion signal quality usually beats hyper-narrow audience stacks. The part about MMM becoming training data instead of just reporting infrastructure is the shift most teams haven't priced in yet, and that gap gets wider as automation takes over allocation.