In the first part of our interview with TeqBlaze CEO Niki Bansal, we discussed the concept of traffic curation and its growing importance in the industry. Continuing that conversation, we take a deeper look at how curated deals operate as a technology within SSP architecture, examining their pipeline logic and economic impact.
To explore how traffic curation changes the interplay between publishers and buyers, let's turn to Niki.
***
Grigoriy: Many in the market still equate curated deals with “premium PMPs.” How does a properly architected, curated deal differ from that?
Niki: Most PMPs are still built around where you buy, not who you reach. They price inventory on publisher brand, scale, and placement — The New York Times will cost more because of its reach and GEO mix, plus things like top-of-page vs bottom-of-page placements. It's "premium" in a surface-level, inventory-first kind of way.
Curated deals invert that model. Architecturally, a curated deal cannot operate without a normalized request structure — meaning consistent fields, aligned signal taxonomies, and a unified schema across all supply sources. They start from the audience and context: who the user is, what they're doing, and why they're there. Inventory is packaged with targeting signals across multiple sources, not just one "premium" domain.
PMP deals are inventory-first, domain-based, and placement-tiered.
Curated deals, by contrast, are signal-first and taxonomy-aligned. Curated packages are not tied to placements because they contain multi-source audience inventory. Here is an example: if a buyer is promoting a crypto exchange, we prioritize environments with strong finance and trading intent over general political or crime news, and we target an audience specifically interested in business and finance.
As you see, the domain is secondary in curation; the audience and context come to the fore.
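To make the contrast concrete, here is a minimal sketch of a signal-first eligibility check over a normalized request. It assumes an OpenRTB-style request that has already been unified across supply sources; the field names, taxonomy labels, and segments are illustrative, not TeqBlaze's actual schema.

```python
from dataclasses import dataclass

# Illustrative taxonomy labels; in a real deployment every supply source
# would be mapped onto one shared taxonomy before this check ever runs.
FINANCE_CONTEXTS = {"finance", "investing", "crypto", "trading"}
TARGET_SEGMENTS = {"business", "finance"}

@dataclass
class NormalizedRequest:
    """A bid request after schema unification across supply sources."""
    domain: str                    # still present, but secondary in curation
    content_categories: set[str]   # aligned to one shared taxonomy
    audience_segments: set[str]    # publisher first-party segments
    placement: str                 # e.g. "top", "mid", "bottom"

def qualifies_for_package(req: NormalizedRequest) -> bool:
    """Signal-first eligibility: context and audience decide, not the domain."""
    context_match = bool(req.content_categories & FINANCE_CONTEXTS)
    audience_match = bool(req.audience_segments & TARGET_SEGMENTS)
    return context_match and audience_match

# A trading page on a small domain qualifies for the crypto-exchange package...
print(qualifies_for_package(NormalizedRequest(
    domain="small-trading-blog.example",
    content_categories={"crypto", "trading"},
    audience_segments={"finance"},
    placement="mid")))  # True

# ...while a premium domain's crime-news page does not.
print(qualifies_for_package(NormalizedRequest(
    domain="bigpublisher.example",
    content_categories={"crime", "news"},
    audience_segments={"sports"},
    placement="top")))  # False
```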
So while PMPs are still useful, curated deals fundamentally change the buyer-seller flow: from selling expensive pages to assembling the most effective, contextually aligned supply for a specific business outcome.
Last time, you mentioned that true curation shifts power back toward the sell side. How does the SSP architecture ensure that publishers retain data control while still enabling meaningful results for buyers?
Publishers and SSPs take ownership of segmentation, packaging, and audience definition — regaining something the industry quietly took from them over time: control. And with control comes leverage.
Our role at TeqBlaze is to provide the infrastructure that makes this possible — not to curate on behalf of publishers, but to give them the tools, governance controls, and data pathways to do it independently and safely.
Publishers sit on the most valuable asset in the chain: first-party data. They know who their users are, why they’re engaging, and what they’re signaling through behavior and content. When they apply that data to build their own packages, they become decision-makers in how their inventory is positioned and monetized.
We already see this in practice. One major U.S. publisher on our white-label SSP uses a hybrid approach: part of the inventory is segmented and packaged internally using platform tools, while external partners package the rest.
The result is consistent across internal and external package types: the more structured and unified a publisher's data becomes, the more revenue paths open up for the publisher, and the more control remains on the sell side.
Let's take a closer look at publishers who directly curate inventory using the platform's infrastructure. Do they always win compared to open market trading?
The industry still lacks reliable benchmarking, so we focus on what we can measure with confidence — bid response patterns, impression growth, and platform-level revenue uplift. Curated pathways now account for over 80% of industry trading, and there’s a clear reason for that.
What we consistently see is that publishers generate the strongest uplift when they control the packaging process in-house rather than outsourcing it. And this uplift isn’t random — it correlates directly with the maturity of their data structure.
When a platform has a unified taxonomy, stable segmentation rules, and consistent metadata across its supply, buyers understand the inventory better and bid more aggressively.
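As a rough illustration of the kind of unification Niki describes, here is a minimal sketch of mapping per-source category labels onto one shared taxonomy. The mapping tables and label names are hypothetical, not TeqBlaze's production logic.

```python
# Hypothetical per-source category labels mapped onto one shared taxonomy.
# In practice, owning and maintaining this mapping is the governance layer
# that keeps packaging consistent across all supply.
SOURCE_TAXONOMY_MAP = {
    "pub_a": {"biz": "business", "fin-news": "finance", "mkts": "finance"},
    "pub_b": {"money": "finance", "companies": "business"},
}

def normalize_categories(source: str, raw_categories: list[str]) -> set[str]:
    """Translate a supply source's own labels into the unified taxonomy.

    Unknown labels are dropped rather than guessed, so packages are built
    only from signals that mean the same thing everywhere.
    """
    mapping = SOURCE_TAXONOMY_MAP.get(source, {})
    return {mapping[c] for c in raw_categories if c in mapping}

print(normalize_categories("pub_a", ["biz", "mkts"]))      # {'business', 'finance'}
print(normalize_categories("pub_b", ["money", "gossip"]))  # {'finance'}
```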
That's why we continue to prioritize strengthening the curation architecture on our side. The more structured and coherent the publisher's data environment becomes, the more value curated deals can unlock. Ultimately, curation is an architectural layer that requires ownership, consistency, and an investment in data governance to deliver meaningful results.

What is the realistic limit of automation when it comes to building and optimizing curated deals? Are we already at the stage where machine learning can dynamically build and optimize curated packages based on real-time performance?
Automation always hits the same bottleneck — and it has nothing to do with technology. Architecturally, SSPs like ours are already capable of automating large parts of curation. Systems such as the Agent Communication Protocol (ACP) show that AI agents can collaborate, repackage supply, and optimize trading autonomously.
The real obstacle is transparency. Full automation requires normalized, consistent signals coming back from the DSP — and the market doesn't provide them today. Missing standards don't cause fragmentation; business decisions cause it. Too many players still treat opacity as a competitive advantage, even when it constrains the total value the ecosystem could generate.
If the industry ever reaches a point where value, logic, and performance signals are openly shared across the chain, automated curation could scale quickly and become a foundational trading model. Until then, automation will remain partial.
The SSP architecture is ready — the market isn’t.
How do curated deals interact with optimization tools for traffic shaping or metric-based performance improvement, both built-in and third-party?
When we talk about curated deals, it's important to remember that they already introduce a strong layer of filtering before any optimization even begins. If we're curating properly, we've already answered key questions: Is this the right user? Does the request come from the right placement, context, and taxonomy?
So we're not sending everything downstream and hoping the algorithms fix it — a lot of the noise is already gone.
But curation doesn't eliminate the need for optimization. In many real-world environments, curated supply still results in high QPS volumes — especially when multiple publishers contribute inventory that shares the same audience or contextual logic. At that scale, traffic shaping, intelligent routing, and performance-driven bidding still have room to increase ROI.
That's where built-in optimization tools — or third-party algorithms — come in.
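To show where such a layer can sit, here is a minimal sketch of two-stage shaping on already-curated supply: first shed the placements buyers rarely bid on, then cap the forwarding rate. The token-bucket throttle and the bid-rate threshold are generic techniques with assumed numbers, not a description of TeqBlaze's built-in optimizer.

```python
import time

class TokenBucket:
    """Generic QPS throttle: forwards at most `rate` requests per second."""
    def __init__(self, rate: float):
        self.rate = rate
        self.tokens = rate
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

def shape(requests, bucket: TokenBucket, bid_rate_by_placement: dict,
          min_bid_rate: float = 0.2):
    """Two-stage shaping: shed low-bid-rate placements, then cap QPS."""
    for req in requests:
        if bid_rate_by_placement.get(req["placement"], 0.0) < min_bid_rate:
            continue                 # low-value request: shed it early
        if bucket.allow():           # within QPS budget: forward downstream
            yield req

# Usage: historical bid rates tell the shaper which requests to protect.
reqs = [{"placement": "top"}, {"placement": "bottom"}, {"placement": "top"}]
shaped = list(shape(reqs, TokenBucket(rate=100.0), {"top": 0.8, "bottom": 0.1}))
print(len(shaped))  # 2: both "top" requests forwarded, "bottom" shed
```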
But there's another side to this. Some curated deals are highly targeted and fully pre-negotiated — small audiences, clear business logic, strict economic terms. In those cases, everything is agreed up front. Every bid request is already something the buyer is prepared to evaluate and potentially monetize.
So, additional shaping may not add value because the deal is already precise and tightly controlled.
That's why I always say: there is no universal rule here. Some curated packages benefit from a second optimization layer; others don't need it at all. And we don't force one approach. As a technology provider, we offer development options and comprehensive support.
Automated package optimization should always be a deliberate, case-by-case decision; it is certainly not the default.
What is the biggest traffic curation problem currently facing the industry? How does TeqBlaze help companies overcome these obstacles?
One thing we consistently hear across the industry is that curated trading creates value — just not for the publisher. Everyone participates in the uplift, but the economics rarely flow back to the supply side. A major reason for this is structural: publishers have surrendered too much of the value chain to intermediaries, and when curation happens externally, the margin naturally settles elsewhere. Intermediaries often control the taxonomy, signal mapping, and packaging logic — which means publishers lose authority over how their own supply is interpreted.
There’s also an infrastructure problem. Most publishers operate with fragmented taxonomies and inconsistent data schemas. Without unified signals, they lose control over how their inventory is packaged, interpreted, and ultimately sold. And once that control slips, so does their share of the economics.
This is exactly why we focus on giving publishers the ability to curate on their own terms. One practical method we see working extremely well is simple: run a curated deal through your own platform and compare the uplift to that of external curation marketplaces. When publishers do this, the delta is often eye-opening.
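A minimal sketch of that comparison, assuming you can export per-path revenue and impression totals for the same package; the numbers and field names below are invented purely for illustration.

```python
def uplift(test_rows, baseline_rows):
    """Revenue-per-thousand-impressions uplift of one path over another."""
    def rpm(rows):
        revenue = sum(r["revenue"] for r in rows)
        imps = sum(r["impressions"] for r in rows)
        return 1000 * revenue / imps if imps else 0.0
    base = rpm(baseline_rows)
    return (rpm(test_rows) - base) / base if base else 0.0

# Hypothetical exports: the same curated package run in-house vs. through
# an external curation marketplace, measured against an open-market baseline.
open_market = [{"revenue": 120.0, "impressions": 100_000}]
in_house    = [{"revenue": 210.0, "impressions": 100_000}]
external    = [{"revenue": 160.0, "impressions": 100_000}]

print(f"in-house uplift:  {uplift(in_house, open_market):+.0%}")  # +75%
print(f"external uplift:  {uplift(external, open_market):+.0%}")  # +33%
```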
Technological autonomy matters here. When the sell side can build deals, enforce a unified taxonomy, test strategies, iterate, and see results in real time, all within one system, publishers and their monetization platforms are no longer passive participants. They’re shaping the packaging logic themselves. And that shift is where the power and economics of monetization start to rebalance.
Which metrics best show that curation works as intended? Beyond CPM, what KPIs indicate that curated architecture improves performance for both sides?
The strongest validation of curation always comes from the buyer side: if campaigns perform better, it works.
Click-through rate, conversions, engagement, and reach: when these metrics rise, it means the inventory was packaged with the right signals and delivered into the right environment. That is the core job of a curated deal.
But campaign-level results alone don’t give publishers and SSPs the full picture, because most of that data sits with the DSP. So we also look at the signals the market sends back in real time: Bid Rate and Impression Delivery Growth. On the sell side, we also track signal completeness, taxonomy coherence, and bid consistency, three indicators that show whether the curated architecture is functioning as intended.
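To illustrate, not prescribe, how those three sell-side indicators might be computed, here is a sketch over a normalized request log; the exact definitions below are assumptions, since they aren't spelled out in the interview.

```python
from statistics import mean, pstdev

REQUIRED_SIGNALS = {"content_categories", "audience_segments", "placement"}
UNIFIED_TAXONOMY = {"business", "finance", "crypto", "news", "sports"}

def signal_completeness(requests):
    """Share of requests carrying every required targeting signal."""
    ok = sum(1 for r in requests
             if REQUIRED_SIGNALS <= set(r) and all(r[k] for k in REQUIRED_SIGNALS))
    return ok / len(requests)

def taxonomy_coherence(requests):
    """Share of category labels that resolve to the unified taxonomy."""
    labels = [c for r in requests for c in r.get("content_categories", [])]
    return sum(1 for c in labels if c in UNIFIED_TAXONOMY) / len(labels)

def bid_consistency(daily_bid_rates):
    """Low day-to-day variation in bid rate suggests stable, well-understood supply."""
    return 1 - (pstdev(daily_bid_rates) / mean(daily_bid_rates))

requests = [
    {"content_categories": ["finance"], "audience_segments": ["business"], "placement": "top"},
    {"content_categories": ["fin-news"], "audience_segments": [], "placement": "mid"},
]
print(signal_completeness(requests))        # 0.5 (second request lacks segments)
print(taxonomy_coherence(requests))         # 0.5 ("fin-news" is unmapped)
print(bid_consistency([0.42, 0.40, 0.44]))  # close to 1.0 = consistent demand
```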
This is the point where curation proves itself economically: buyers see better results, and supply sees more consistent demand and higher monetization without “selling more inventory.”
So yes, CPM matters, but the fundamental success indicator is value created on both sides of the trade. When buyers achieve stronger outcomes and publishers see increased bidding pressure and fill, the curation logic is doing exactly what it was designed to do.

And the last one: where do you see curated architecture evolving in the next two to three years? Will it eventually replace traditional open market trading entirely?
The answer depends less on technology and more on people. Technologically, curated systems could evolve further. If all participants openly shared logic, data, and value, we could imagine something close to autonomous deal construction — where platforms communicate, evaluate performance, and reconfigure packages in real time, almost like the ACP model I mentioned.
But that level of automation requires complete transparency and fair value exchange across the chain. And that's the most significant obstacle today.
If that changes — if publishers, SSPs, DSPs, and curators openly exchange logic and share the incremental value of every deal — then curated trading can evolve into a default model. In that world, open auctions become one execution channel among many, not the backbone of programmatic.
In other words, the potential is there, but the industry needs to decide whether it wants to unlock it.
***
In the end, the future of curation isn't just about technology, but about incentives and transparency.
When publishers and their monetization platforms together control segmentation, packaging, and deal construction — instead of outsourcing it to intermediaries — the economics change. When traffic value is accurately measured, business logic becomes transparent, and every party in the chain understands what they're buying and why.
TeqBlaze's role is to provide the infrastructure that enables this autonomy. The architecture of our white-label supply-side platform provides publishers with all the tools in one place. They can build curated packages, test strategies, govern data, and enforce consistent logic.
That is what allows curation to become a revenue engine.
So, Niki's insights make the real question for the sell side clear: take ownership of the curation logic, or remain constrained by fragmented models? With TeqBlaze, you can choose the former, and do it at scale.

Grigoriy Misilyuk
