Category Archives: measurement

Agency Trading Desk Myths & Memes Debunked (Part I)

Fear, uncertainty and doubt have worked well for many incumbents in the technology and online media game over the years. Why should sell-siders be any different when it comes to Agency Trading Desks (ATDs)? Don’t worry…they’re not.


With buy-siders generally tight-lipped about the subject of ATDs, the resulting vacuum is being filled by constant industry sniping and chatter. Since the advent of the ATD, aspersions have been cast on them, perpetuating FUD. That the industry trade media and blogs are the only places a consistently negative view of ATDs can be found should come as no surprise. Yet, the recent spate of chicken-little articles, posts and heated comments represents what is apparently a truly threatened sell-side point of view.


While agencies are notoriously silent about their own and their clients’ businesses (as they should be), two anti-ATD blog posts (The Trouble With Agency Trading Desks and Thumb on the Scale) warranted a response from a different point-of-view…the advertiser’s. The spin and rhetoric have reached epic proportions, so a debunking of popular myths and memes follows below:



Double-dipping. From the “best defense is a good offense” school. It is as if the basic math around billable hours and the service layer around managing Demand Side Platforms (DSPs) have no value. Data-driven media buying is very different from traditional demo-driven, index-based methods and takes a lot of time as clients and agency partners get up to speed. Moreover, the measurement planning, analytics, technical and media accounting and media reconciliation required to manage these campaigns are also very different.

The notion of “double-dipping” betrays a basic misunderstanding of the process: DSPs are not exactly push-button. An ATD is nothing like an in-house production studio: the work is strategic, not simple production. Leveraging the expertise associated with the intricate technical aspects of tags and data sources alone is a significant effort. As a reminder, digital advertising used to be commissionable or marked up like traditional media. Meanwhile, where is the outrage at ad networks double-dipping with advertiser data?


Profit-margins. The implication that agencies shouldn’t seek profitable service offerings is simply outrageous. In the end, it is a service business composed of talented specialists who care about client business. With small-scale site retargeting making up a lot of the business today, the ad volume and associated fees that ATDs are charging suggest that they may be running somewhat in the red, at least until the business scales up or broadens enough to warrant the resource investment.

Advertisers squeezing too hard here run the risk of running the people (not machines) doing the work into the ground – not good either. ATDs are not charitable organizations, so it is not clear why they should be expected not to earn fees or why they have to justify them ad nauseam. That said, it is in ATDs’ best interest to be very transparent with clients about the fees they charge.


Agency Technology Investment. Holding-company ATDs, for the most part, are not building their own buying technologies in-house; the spin-out of Adnetik is an exception, and its success remains to be seen. Instead, they are licensing or white-labeling DSP tools and applying their tech-savvy marketing teams to enable a platform for the benefit of clients. While their marketing often uses the term platform, they mean the technology and the service layer to support it – not literally hardware and software.


In some cases, agencies may be using their business intelligence (BI) tools to support ATD reporting – that makes sense and is nothing new. Agency analytics teams have been using home-grown BI for years. Advertisers really just need to ask their agency questions if they don’t understand how all of it works…which recalls a famous Chinese proverb: he who asks a question is a fool for five minutes; he who does not ask a question remains a fool forever.


Data-hoarding ATDs. Really? The sell-sider rhetoric on this point is very misleading. Most ATDs are essentially service providers: consultants armed with a DSP SLA (service level agreement) and the expertise to use it. While agency BI tools attempt to provide handy storage of performance data (with debatable proficiency), historical benchmarks and campaign reporting data are not the same as actionable behavioral user-level data, i.e. cookies. No, I’m afraid that data is sitting inside ad networks, ad servers (which, by the way, sometimes turn out to use the same cookie as the ad exchange) or in a Data Management Platform.


Now that said, there is simply no excuse for an ATD or agency to clandestinely re-purpose so-called 4th-party data from ad campaigns for later use. That is a major ethical lapse that sell-siders (publishers) should not tolerate. Ironically, at the same time, far too many major ad networks are happy to re-purpose advertiser and publisher campaign performance data when it can maximize their revenue.



Mandate. Just what are sell-siders so afraid of? Perhaps their advertiser clients getting the most experienced and savvy teams working on their behalf, and more transparency. That is a huge benefit for client-side marketers, who remarkably often have few senior digital media natives in-house. As a result, there is a huge learning curve, and time means money in a service business.


The flip-side is that an agency holding company not consolidating its technical and negotiating expertise on one team raises management competency questions. With the level of technology change today, a centralized team is exactly what holding companies should be building to effectively manage their resources. A better question, especially for site retargeting, is why ad networks are still being considered at all. If old-school planner-buyers are concerned, then they ought to put in for a transfer to the ATD.


Conflict of Interest. Wow – look at who is talking. Most advertisers would probably prefer a dedicated, separate team within their ATD (usually closely directed by their agency-of-record) to what naked and supposedly independent sell-siders and technology vendors have to offer to protect their interests, i.e. nothing. It seems no different than a client buying directly from a media vendor, where that really new “big idea” has actually been shopped to several other advertisers (so probably not all that new). Plus, if an advertiser decides to pass, there is a 100% probability that “idea” will be offered up to the advertiser’s competitor. Hey, clients can certainly pay a premium for category exclusivity – that option is always available.


On the other hand, AORs by definition get the concept of category exclusivity. With ATDs, there is a semblance of brand stewardship and a competitive firewall. Moreover, an ATD’s media planning agency partners are very unlikely to put any one client account at risk. That’s because in game theory terms, branding is a zero-sum game, i.e. it is about brand X winning, which means that brands A, B and C lose. As such, the ad strategies that are successful cannot be shared, nor can the ones that failed. The problem is that sell-siders and technology vendors often have the opposite: industry specialists who work across an entire category.


The Machine Knows Better. Of course it makes sense to leverage automatic optimization and novel algorithmic approaches to improve results. However, far too many of the sell-siders and arms vendors out there purport that an ATD just can’t keep up. That may or may not be true, but consider the source. How many sellers are transparent enough to report on the performance of their supposed machine-learning technologies? Some will do it, but only when asked.

In any case, marketers will always need to explain and justify their actions. The client-side CFO does not want to hear about magic or black boxes; they want to understand how to allocate cash to generate ROAS and ROI in a predictable way. People can be held accountable in a way machines cannot. The simple fact is that advertisers need expert brains, managing campaigns on their behalf, to adjust to the changing marketplace and resources.


Early-in-session User Performance. One of the more clever rhetorical devices that pops up when sellers realize they are about to get disintermediated. It questions the competition’s inventory quality, suggesting either directly or indirectly that only they have access to the special ad inventory. That’s right: through first dibs or exclusive relationships, the seller’s inventory “performs better” and is therefore more valuable than the other’s. It is possible, but it depends on the seller’s definition of perform – for their bottom line or for their client’s? BTW, after multiple requests, still waiting for the data or performance reports that back this up. Ironically, most of these sellers are also getting a portion of their inventory from the same exchange sources as the ATD; the real question is just how much.


Simplistic Wall Street Metaphors. This is an oldie…first of all, day-trading is a very one-dimensional way of viewing media consumption. Media is not the same as a financial asset that has intrinsic value (stocks, bonds, options)…though it does make for nefarious and ominous metaphors, what with the recent financial crisis and all. Digging past the hackneyed writing, RTB by definition doesn’t allow positions to be taken in the same way as financial trading. These are real-time transactions, i.e. a spot market where ATDs aren’t owning inventory or taking a position. There seems to be a fundamental misunderstanding of financial arbitrage.

The technology required to squeeze out any kind of profit by exploiting information inefficiencies across many RTB decisions is more likely to come from a DSP that can hedge across multiple advertisers. ATDs just don’t have the financial structure, engineering or research staff to pull this off. In practice, this is nothing more than another red herring. Any ATD that could save clients big money would want that to be known.


Kick-backs. One of the more outrageous charges about kickbacks was refuted in public, so the matter should be closed. Yet the meme continues to proliferate. It may also depend on the definition of a kick-back. Is free user training or better support a kick-back? How about box seats to the Cubs game and fancy meals? Without knowing the internal accounting between DSPs, exchanges and ATDs, it may never be known for certain. Clients can always ask for audit rights, but like all memes this one can be difficult to prove or disprove.


Did I miss any or do you have any others to add? Feel free to submit a comment below!

History of Web Analytics…Annotated.

Most histories of Web analytics routinely cover Analog by Dr. Steve Turner, a rather simplistic log file analyzer. Others mention i/Pro, netGenesis, Interse, NetCount and WebTrends.

What they often miss is Lilypad, a marketing- and advertising-oriented Web-based analytics application. Lilypad was developed by Streams Online Media Development and announced in the Fall of 1995. Unlike most of the other technical log file analysis tools, Lilypad was original in that it focused on promotional measurement. Importantly, Lilypad utilized its own database of activity and was coded in Perl, leveraging server-side includes, the predecessor to JavaScript page tags.

Lilypad was programmed by James Allenspach under my direction during downtime in-between client projects. Dave Skwarzek and I worked to brand and promote the product in a way that marketers could appreciate. A seminal offering by a scrappy Web boutique start-up to be sure, Lilypad was influential as an early site metrics tracking application.

If you are doing research on the history of site analytics, digital media tracking or online media measurement, you can learn more about Lilypad here.

Response to Who Will Rid Us of this Meddlesome Click?

Great post by Gian Fulgoni of ComScore and very timely!

After over a decade in the digital advertising industry, I don’t find the click emphasis to be a fascination as much as a perpetual crutch. Continuing to nibble at our heels, clickthrough has really just been the path of least resistance. I’ve heard this from many colleagues:

  • Ad sales execs seeking to close business tread uneasily and rarely bring up alternate success measures that require too much thought or set-up
  • Agencies attempt to educate clients about such matters, but that doesn’t always work (relationship/credibility/pick your battles/technical complexity)
  • Client-side marketers are all too often organizationally overwhelmed and only now starting to internalize meaningful digital measurement processes
  • Technology vendors, who often have the most expertise, ironically tend to be perceived as the most unreliable sources, with their marketing often ahead of actual capabilities

For some reason, easy-to-measure clickthroughs are meticulously collected, analyzed and reported. It is as if, through a mass willing suspension of disbelief, the display clicks might conceivably convert or are implicitly valuable. The ongoing independent reports from ComScore suggest otherwise to anyone listening. While this post rightfully educates all of us about the troublesome reliance on clickthrough, it also presents an opportunity to raise awareness about passive incremental response.

Instead of skipping down the funnel past post-view response and straight on to purchase, consider the in-between, i.e. non-clicker viewthrough. Even in Gian’s post, viewthrough is not mentioned explicitly; post-view response is only suggested in the context of conversion, and this is part of the problem. Yet, in practice we all know that very few clickers will convert (super promo-oriented messaging aside).

This measurement incongruity routinely happens whenever post-view activity is mentioned. In so doing, the more executionally challenged, but likely valuable viewthrough response remains unmeasured and therefore invisible.

It is analogous to calculating auto mileage by looking at RPMs (engine speed) and not MPH (land speed).

The reality is that implementing an alternative quantitative measurement like viewthrough pushes many marketers to the edge, requiring patience, precise technical set-up and methodical execution. To Gian’s point, clickthroughs are fast, cheap and easy to measure; they’re also potentially misleading for all the reasons ComScore and others have researched. What’s more, viewthrough impact accrues over time, which flies in the face of the commonplace action bias to optimize campaigns on something.

So while probably not that final Canterburian cleric’s foot-on-the-neck of clickthrough, viewthrough might at least be one of the assassin knights.

An industry appeal to standardize or define viewthrough can be found here: http://goo.gl/LA8Iv

New for 2011! Standardizing the Definition of View-through

It has been over 17 years since the advent of the Netscape Web browser in 1993 and almost as many years since the first AT&T banner ads were served on HotWired.com. Back then, the Internet was heralded as the most accountable medium ever.

Fast-forward to 2010: the digital advertising industry has gone mainstream and will likely generate more than $25 billion (US only). At the same time, a subject that concerns far too many people is declining or flat click-through rates. Last week’s gushing “news” from a rich media vendor that clickthrough rates have supposedly leveled off after years of decline is a good example.

Definition of Insanity
In a business that obsesses over such meaningless metrics, the digital advertising industry simply cannot continue worrying about click-through rates. Although most will recognize that the novelty of clicking banner ads has largely worn off, this measure, which still provides almost no insight into the effectiveness of most campaigns, just won’t go away – regardless of marketing objectives.

In the no man’s land somewhere between the ad server and site tracking is an analytics oddity called viewthrough: a useful albeit tortured metric. Testament to this is that not one of the major industry trade groups recognizes viewthrough by including it in its standards glossary: IAB, OPA, ARF or the WAA.

Despite early work by DoubleClick, plenty of practitioner interest and ongoing research by ComScore, the digital advertising industry still somehow lacks an official definition of viewthrough. At times, it seems we are all too often measuring what is easy or expedient. And clearly that is not working for the industry.

Here is a sampling of viewthrough articles over the last 10 years:

  • Lilypad White Paper Response Assessment in the Web Site Promotional Mix (2/1997) A very early attempt by the author to describe the phenomenon in the context of measuring ad response; see the diagram, Online Awareness Model of Banner Advertising Promotional Models. At that time there was not yet a way to measure such passive behavior.
  • Conversion Measurement: Definitions and Discussions (9/2003) An early article that focused on “people that ultimately convert but did not click”. Technically, viewthroughs are not people and are probably better described as visits. Also, depending on the campaign objective, a conversion event is optional.
  • Neglecting Non-Click Conversions (11/2003) A pretty thorough piece on the subject, although the term “viewthrough” is not used and there is again an emphasis on conversion.
  • Lies, Damn Lies and Viewthroughs (8/2005) Again, the focus is exclusively on viewthrough conversion, which is clearly a trend. However, it is misleading as it misses all the non-converter traffic.
  • The Most Measurable Medium? We Still Have A Lot To Do! (9/2007) David Smith actually made a literal plea for the industry trade groups to define viewthrough. A great idea; unfortunately, it fell on deaf ears and several years later not much has changed.
  • Why view-through is a key component of campaign ROI (9/2010) provides a more balanced look at what viewthrough is but still brings up conversion. Also, the acronym “VTR” is confusing, as most people would take it to mean viewthrough rate, just as “CTR” means clickthrough rate.
  • Different Views of View-Through Tracking (10/2010) More of the same, although this article actually quotes Wikipedia (scary) and further convolutes the matter by referencing a Google Display Network definition that focuses on viewthrough conversion. Consistent with the theme, the term VTR is used to mean viewthrough conversion rate, not view-through rate – two very different measures. On the upside, its take on the potential of viewthrough for media planning and optimization was right on.

Curiously absent from the ongoing discussion is what viewthrough inherently represents: measurable incremental value from an affirmative, self-directed post-exposure response. With just syndicated panels and qualitative market research to divine results, traditional electronic media could never quantify this.

At the same time, the advertising industry now has over 10 years of similar “directional” qualitative research focused on the familiar yet ephemeral measurements of post-exposure attitudes and intentions (notoriously unreliable). Many see these brand lift studies as rife with data collection challenges and ultimately of dubious value. Just this year, Professor Paul Lavrakas on behalf of the IAB released a critical assessment of the rampant practice.

Parsing The Metrics
It is bizarre that many digital marketers insist on defining viewthrough rates in conversion terms while clickthrough rates are always measured separately from subsequent conversion rates. Mixing metrics has confused the matter and effectively held viewthrough to the higher standard of conversion. Ironic, since very few clickthroughs (in volume and rate terms) even result in conversion.

While “clickthrough rate” is always understood to be relative to impressions (# of clicks / # of impressions), “viewthrough rate” seems to have skipped the middle response step and gone all the way to conversion. That doesn’t make sense when there are so many other factors that influence the purchase decision after arriving on a Web site.

To be very specific, viewthrough rate (VTR) should be calculated the same way, i.e. # of (logical) viewthroughs / # of impressions. “Logical” means that the viewthrough is observed where a branded post-exposure visit is most likely to happen, analogous to the target landing page of a click-through; usually this means the brand.com home page.
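To make the parallel concrete, here is a minimal sketch in Python of both calculations; the counts are hypothetical, purely for illustration:

```python
# Hypothetical campaign counts, purely for illustration.
impressions = 2_500_000
clicks = 2_000            # ad clicks recorded by the ad server
viewthroughs = 11_250     # logical post-exposure visits to the target page

ctr = clicks / impressions        # clickthrough rate: # of clicks / # of impressions
vtr = viewthroughs / impressions  # viewthrough rate, calculated the same way

print(f"CTR: {ctr:.3%}")  # 0.080%
print(f"VTR: {vtr:.3%}")  # 0.450%
```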

Measurement Details
The real problem underlying the apparent confusion is that viewthrough measurement invokes several raging, simultaneous, inter-related and often technical debates: branding vs. response, optimization, cookie-deletion, cookie-stuffing, panel recruitment bias, correlation vs. causation and last-click attribution. Any one of these arguments can cause a fight.

Nonetheless, in defining what viewthrough actually means, it would be helpful to overview the two basic ways of measuring view-through (a worked example of the lift arithmetic follows the list):

  1. Cookie-based: This is a browser-server technique that relies on cookie synchronization between the ad server and the target brand site. When the user receives the ad, a cookie is set on their browser; it is later recognized upon a visit to the target site, which is then matched via special page tags back to the associated campaign. There are several ways this can be done, e.g. DART for Agencies (DFA)/Atlas/Mediaplex page tags, ad server integrations (Omniture) or ad unit ridealong pixel tracking (Coremetrics). Optionally, PSA campaigns can be run alongside a campaign for a simultaneous test-control comparison of “true” viewthrough lift; essentially, you can measure a baseline amount of viewthrough traffic that would end up at the site anyway. Downside: subject to browser cookie limitations.
  2. Panel-based: Alternately, a standing Internet behavioral panel can be utilized, e.g. ComScore and Compete. In this approach, two comparable groups are observed: an exposed test group and an unexposed control group that represents the baseline viewthrough. The difference between the rates at which the test group (exposed to the ad campaign) and the control group (served PSAs or others’ ads) subsequently visit the target site reveals the lift that is explained by the presence of display advertising. This method may also include ad or page tracking, but does not require cookies. Downside: subject to panel bias.
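Whichever method is used, the arithmetic of the test-control comparison is the same. Here is a minimal sketch of the lift calculation, with made-up group sizes:

```python
# Hypothetical test/control counts, purely for illustration.
exposed_visitors, exposed_size = 4_200, 500_000   # group that saw the live campaign
control_visitors, control_size = 1_500, 500_000   # group that saw PSAs or others' ads

exposed_rate = exposed_visitors / exposed_size    # 0.84% later visited the target site
baseline_rate = control_visitors / control_size   # 0.30% would have visited anyway

# Lift attributable to the display campaign, net of the baseline.
absolute_lift = exposed_rate - baseline_rate      # 0.54 percentage points
relative_lift = exposed_rate / baseline_rate - 1  # +180%

print(f"Absolute lift: {absolute_lift:.2%}, relative lift: {relative_lift:.0%}")
```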

The Impact of Time
Next, an additional layer to viewthrough measurement worth mentioning is time, i.e. delayed response. Like traditional advertising media, display ads exhibit an asynchronous response curve where the effect of the advertising decreases over time. In our real-time data collection world, the common sense realities of human behavior are often overlooked.

Many factors can impact the viewthrough response curve, including messaging, frequency, share of voice and creative execution, to start. And one size does not fit all: a considered purchase could reasonably have a longer shopping cycle than a CPG. Depending on the method of measuring view-through, 30 days or 4 weeks is typically used as an initial “lookback window.”
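To illustrate how a lookback window gates what gets counted, here is a minimal sketch; the 30-day rule is just the common default mentioned above, and the function name is an assumption:

```python
from datetime import datetime, timedelta

# Common default mentioned above; not an industry standard.
LOOKBACK = timedelta(days=30)

def is_creditable_viewthrough(last_exposure: datetime, visit: datetime) -> bool:
    """True if the post-exposure site visit falls inside the lookback window."""
    return timedelta(0) <= (visit - last_exposure) <= LOOKBACK

print(is_creditable_viewthrough(datetime(2011, 1, 1), datetime(2011, 1, 20)))  # True (19 days out)
print(is_creditable_viewthrough(datetime(2011, 1, 1), datetime(2011, 2, 15)))  # False (45 days out)
```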

Et Cui Bono?
Although that was fairly straightforward, as soon as viewthrough is connected to a site conversion (through deeper page tracking), the thorny issue of attribution arises (and cookie-based measurement is implied). Viewthrough measurement often goes off on a tangent at this point because there are two layers to attribution.

  • Channel attribution is, simply put: which digital channel is assigned credit for the conversion event? Measuring display advertising happens to be more complex, and most site metrics tools punted on tracking this capability. That means simpler response channels like paid search, natural search, affiliates, CSEs and email receive last credit by default. For many marketers, measuring conversion attribution or participation gets complex, and often political, very quickly.
  • Media attribution gets really contentious, especially for lead generation and ecommerce-oriented marketers. Performance ad networks often insist on having their own special page tag in place where the conversion event occurs; in this way they can independently measure conversions and potentially optimize their ad delivery. The problem is that there are usually multiple ad network vendor tags on the conversion event page, and all of them will count the page load as a conversion (see the sketch after this list). Worse, this is an easy way for the ad network to shoehorn in a retargeting cookie pool. Unchallenged, media vendors may claim credit for everything, such that marketers end up overpaying for the same conversion. Alternately, some very Byzantine schemes have arisen to guesstimate credit.
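The double-counting in that last point is easy to demonstrate. In this hypothetical sketch (vendor names and event IDs are made up), every vendor tag on the conversion page fires for the same page load, so summing vendor-reported totals overstates the truth unless the counts are deduplicated by conversion event:

```python
# Hypothetical conversion-page tag fires; vendor names and IDs are made up.
tag_fires = [
    {"vendor": "network_a", "conversion_id": "c-1001"},
    {"vendor": "network_b", "conversion_id": "c-1001"},  # same conversion, second tag
    {"vendor": "network_c", "conversion_id": "c-1001"},  # same conversion, third tag
    {"vendor": "network_a", "conversion_id": "c-1002"},
]

naive_total = len(tag_fires)                                  # 4 vendor-claimed conversions
deduped = len({fire["conversion_id"] for fire in tag_fires})  # 2 actual conversion events

print(f"Vendor-reported: {naive_total}, deduplicated: {deduped}")
```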

Despite all of the above, here is a working definition of a viewthrough for 2011:

Definition of View-through
Viewthrough is a measure of the passive but self-directed impact from a particular display ad unit (banner, rich media, video or audio). The viewthrough event follows one or more ad exposures and, when the ad unit is clickable, can be post-click (initial click visit timed out) or post-impression (no click at all). Importantly, a viewthrough may or may not be associated with a purchase conversion event, but it must be associated with a target page load or other high-value action. VTR, or viewthrough rate, is calculated as # of viewthroughs / # of impressions.

Viewthroughs decay over time from ad exposure. In-flight viewthroughs are observed during the live ad campaign, while the post-flight “vapor-trail” begins immediately after the associated ad is served.
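To operationalize the working definition, here is a minimal sketch that classifies a tracked response per the taxonomy above. The flags are assumptions and presume the ad server has already matched the exposure to the site visit:

```python
def classify_response(clicked: bool, click_visit_timed_out: bool,
                      target_action: bool) -> str:
    """Classify a post-exposure response per the working definition above.

    Flag names are hypothetical; matching exposure to visit is assumed done.
    """
    if not target_action:
        return "no response"                   # no target page load or high-value action
    if not clicked:
        return "post-impression viewthrough"   # exposure, no click, later visit
    if click_visit_timed_out:
        return "post-click viewthrough"        # initial click visit timed out, later return
    return "clickthrough"                      # direct click-to-visit

print(classify_response(clicked=False, click_visit_timed_out=False, target_action=True))
# -> post-impression viewthrough
```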

Don’t like this definition? Come up with a better one or edit the above…and, the sooner the better or the industry might get stuck with this sketchy Wikipedia entry.

CIMA May 2010 Panel on Analytics

Measuring and Monetizing Digital Media ROI

– How has successful marketing ROI been measured?

– What are the standard measurements of ROI and what tools are available to support the standards?

– Should there be different ROI standards for digital media and offline media?

– Branding campaigns and DR campaigns?

– Differences in metrics associated with display and search vs. video, mobile & social.

– Examples of the best emerging tools for measurement.

– Upcoming changes throughout the next 5 years.

Pictured from Left to Right: Moderator Cary Goss, Brett Mowry of Digitas, Perianne Grignon of x+1, Domenico Tassone and Andy Stein, both of Sears Holdings – 5/19/10.

Photo by Dan Merlo

Early take on Post-impression Viewthrough: Lilypad White Paper

An oldie but goodie:

A white paper about Streams’ Lilypad analytics tool that I wrote back in 1996. Oddly, the industry has become search-obsessed and has barely advanced with respect to post-impression and viewthrough tracking since then. DoubleClick for Agencies (DFA), Atlas and MediaPlex all measure viewthroughs; ComScore routinely provides research about this passive asynchronous behavior.

Web Analytics: Online Marketing Measurement

Good news: I’ll be returning to the classroom this Spring for the first time since “Marketing with Digital Media” at Columbia College way back in the mid-90s, this time teaching a new course that I’ve developed at my alma mater.

Here are the main areas that I’m planning to cover in Web Analytics: Online Marketing Measurement:

  • Develop a plan to measure online marketing based on solid best-practices
  • Know the essential technology of Web analytics and industry terminology
  • Learn how to use industry-standard tools like Google Analytics and Quantcast
  • Prepare meaningful actionable reports that communicate essential findings
  • Make fact-based recommendations and identify opportunities for optimization

Analytics has so many definitions – it depends on who you ask. What do people want to see?