by John Potter, Op-Ed Contributor, September 9, 2016
Internet publishers face a programmatic advertising market that is still incredibly fragmented on the demand side. Given the continued move of advertising dollars to programmatic, this fragmentation is costing publishers a lot of money.
Over the last couple of years, publishers have responded by embracing header bidding. Before header bidding, publishers who used DoubleClick for Publishers (DFP) as their ad server had to rely on the integrated Google Ad Exchange, and to work independently with other demand sources — passing inventory to each, one by one, to try to maximize yield.
At every step of this waterfall approach, publishers would lose inventory and revenue. Moreover, there was no true competition with Google Ad Exchange.
Header bidding improved on this by allowing publishers to gather bids from other demand sources and pass them into their ad server, where they could compete with Ad Exchange, raising yield without the loss of inventory.
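The yield difference between the two approaches comes down to auction structure. The sketch below illustrates it with hypothetical demand sources and made-up CPM bids (the partner names, floor price, and bid values are all invented for illustration): in a waterfall, the first source in the priority chain that clears the floor wins, so a higher bid lower in the chain never competes; in a unified header-bidding auction, every bid competes at once.

```python
# Hypothetical CPM bids from three demand sources for one impression.
bids = {"exchange_a": 1.80, "exchange_b": 2.40, "exchange_c": 2.10}

def waterfall(bids, priority, floor=2.00):
    """Offer the impression to each source in priority order; the first
    bid at or above the floor wins, so later (possibly higher) bids
    never get a chance to compete."""
    for source in priority:
        if bids[source] >= floor:
            return source, bids[source]
    return None, 0.0  # impression goes unsold (lost inventory)

def header_bidding(bids):
    """Collect every bid up front and let the highest one win."""
    source = max(bids, key=bids.get)
    return source, bids[source]

print(waterfall(bids, ["exchange_c", "exchange_a", "exchange_b"]))
# -> ('exchange_c', 2.1): first source to clear the floor wins
print(header_bidding(bids))
# -> ('exchange_b', 2.4): the true highest bid wins
```

With these illustrative numbers, the waterfall leaves $0.30 CPM on the table for the same impression, which is the gap header bidding closes.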
While header bidding improved yield, it was difficult for publishers to manage the different tags to maximize both performance and yield.
In response to these difficulties, over the last year, innovative publishers began adopting header-bidding wrappers. Wrappers are the newest technology being touted to simplify the complex integration of adding and managing multiple potential buyers and tags, while minimizing latency — the main complaint about header bidding.
The mainstreaming of header bidding wrappers was seen in the recent announcement by Time Inc. that it would be the first major publisher to adopt one.
The truth is, though, while wrappers are a step forward, they are not the best solution. Instead, the real opportunity lies in server-to-server integrations.
The fundamental problem with wrappers is that while they may be faster than individual header bidding tags, they are still running on the client side, and can still affect site speed and performance. As such, the publisher’s ability to add more bidders and increase yield will remain limited. This is why a publisher’s ultimate goal should be server-to-server integrations. Integration at the server level means that bids can be collected in the background without impacting the user’s experience, and the bid responses can occur at lightning-fast, data-center connection speeds.
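The server-side advantage described above can be sketched in a few lines. In this illustrative mock-up (the partner names, latencies, and bid values are all invented, and real integrations would be HTTP bid requests governed by a spec such as OpenRTB), the publisher's server fans out requests to demand partners in parallel and keeps whatever returns within a timeout — work that never touches the user's browser:

```python
import concurrent.futures
import random
import time

# Hypothetical demand partners; in a real server-to-server setup each
# would be a bid request sent from the publisher's own infrastructure.
PARTNERS = ["partner_a", "partner_b", "partner_c", "partner_d"]

def request_bid(partner):
    """Simulate a fast data-center round trip returning a random CPM bid."""
    time.sleep(0.01)  # stand-in for network latency
    return partner, round(random.uniform(1.00, 3.00), 2)

def collect_bids(partners, timeout=0.5):
    """Fan bid requests out in parallel; keep only responses that
    arrive before the timeout, so one slow partner can't stall the auction."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(partners)) as pool:
        futures = [pool.submit(request_bid, p) for p in partners]
        done, _ = concurrent.futures.wait(futures, timeout=timeout)
        return dict(f.result() for f in done)

bids = collect_bids(PARTNERS)
winner = max(bids, key=bids.get)
print(winner, bids[winner])
```

Because the fan-out happens server-side, adding another partner costs one more parallel request at data-center speeds rather than another script competing for the browser's resources, which is why the ceiling on the number of bidders is so much higher.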
Google’s announced response to header bidding, known as Exchange Bidding in Dynamic Allocation (EBDA), is a server-to-server product that is directly integrated into DFP. EBDA promises publishers the ability to integrate bidders other than Ad Exchange and allow them to bid on inventory. However, it comes with significant limits: All bidders must be approved, and Google will restrict the auction information available to publishers.
These are significant drawbacks. Publishers need full access to all bid data, so they can understand the true market value of their inventory. What’s needed is what AppNexus, in a recent whitepaper, called open dynamic allocation: multiple server-to-server integrations with demand sources and full access to all bid data.
Header bidding has already shown the potential of competition to increase publisher yield, and header bidding wrappers have made managing header bidding tags a lot easier.
However, all client-side bidding solutions will come with issues around latency and performance regardless of how well they are managed. That’s why smart publishers need to move to server-to-server integrations. While using a technology like EBDA will be easier, and will certainly help with latency and yield, it will not give publishers ownership of the valuable data that comes with a truly open system.
Since data is the most valuable asset publishers have, the most innovative will invest in the technology and talent to build or integrate a customized server-to-server platform. And the payoff will be worth it, as they can use this data not only for auction optimization — but for content optimization, and to better understand users, among a host of other possibilities.