Category Archives: viewthrough

Calls for Viewthrough, Proper Display Attribution Growing

As the digital media and marketing business continues to evolve, interest in passive response from digital display (banners, video and non-search mobile) is growing. Although the banner is almost 20 years old now (born c. 1995), the legacy obsession with click-based response measurement and simplistic attribution is only now beginning to collapse thanks to new tools and better methodologies.

Bravo…it’s about time.

Dax Hamman, who heads up the search-targeted display ad network Chango, recently put together an elaborate white paper, Viewthrough Attribution Exposed: What Last Touch Isn’t Telling You. I had the opportunity to work with Chango in an ecommerce setting over the past year and found their interest in and approach to meaningful analytics for their near (but not quite) bottom-of-the-funnel ad media refreshing and collaborative.

Recently, Dax asked for my POV on viewthrough measurement, a snippet of which you’ll find in the white paper; this also led to a recent post here at the Tip of the Spear Blog: Viewthrough Incrementality Testing. More broadly, we’re planning to collaborate with other digital advertising leaders to further viewthrough measurement…it won’t be easy, but it needs to happen.

Coalition
If you or your company has an interest in viewthrough measurement and attribution, please get in touch to learn more about collaborating on the effort. As part of this effort, we’re looking to work with leading measurement technologies, digital marketers and especially the DAA/IAB to standardize the definition of viewthrough.


Viewthrough & Incrementality Testing

A common question in digital marketing measurement is “what was the lift?” or “what is the incremental benefit?” from a particular promotional campaign. Most of us would agree that relying on display ad clicks alone for display response measurement is just wrong. When measuring display media, incrementality testing is essential to properly gauge the baseline viewthrough effect.


To answer this question, a test and control methodology (or control and exposed) should be used, i.e. basic experimental design, where a control group is held out that is otherwise identical to the group being tested with the marketing message. This is even more important when marketing “up the funnel,” where a last-click or even last-touch measurement from a site analytics platform will mask impact.

Email marketers, with their heritage in direct marketing techniques, have been doing this for years. It is often fairly straightforward: the marketer knows the entire audience or segment population and holds back a statistically meaningful group, which enables a general assertion about the campaign’s actual lift or incremental benefit. Control and exposed can also be done with display media if the campaign is configured properly to split audiences, eliminate overlap and show the control group a placebo ad. Often PSAs (public service ads) are used, which can be sourced via the Ad Council.
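The audience split described above can be sketched with a deterministic hash bucket. This is a hypothetical helper, not any vendor’s actual implementation; the point is that the same user always lands in the same group, so control and exposed never overlap.

```python
import hashlib

def assign_group(user_id, control_pct=0.10):
    """Deterministically bucket a user into 'control' or 'exposed'.

    Hashing the ID gives a stable, non-overlapping split: the same
    user always maps to the same group, so the control group never
    sees the test creative (they would be served a PSA placebo).
    """
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    return "control" if bucket < control_pct * 100 else "exposed"

# The same ID always maps to the same group.
assert assign_group("user-42") == assign_group("user-42")
```

In practice the split is usually done in the ad server’s targeting configuration, but the principle is the same: a stable assignment with no audience overlap.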

This technique is routinely used for qualitative research, i.e. brand lift study services like Vizu, InsightExpress, Dynamic Logic and Dimestore. It is the best way to isolate the impact of the advertising; read more about the challenges of this kind of audience research in Prof. Paul Lavrakas’s study for the IAB.

Calculating Lift and Incrementality

Dax Hamman from Chango and Chris Brinkworth from TagMan were recently kicking around some numbers to illustrate how viewthrough can be measured; some of this TOTSB covered a while back in Standardizing the Definition of Viewthrough. For the purposes of this example, clickthrough-derived revenue will be analyzed separately and fractional attribution will not be addressed. Here, both control and exposed groups are the same size, though an equal-size control can be expensive and is usually unnecessary with statistical sampling.

  • Lift is the percentage improvement = (Exposed – Control)/Control
  • Incrementality is the beneficial impact = (Exposed – Control)/Exposed

In addition to revealing the lift in rates like viewthrough visit rate, conversion rate and yield per impression, articulating the incrementality rate also reveals the baseline percentage – it is simply the complement of incrementality (100% = incrementality % + baseline %). Incrementality, or incremental benefit, can be used to calibrate the viewthrough response of other similar campaigns – “similar” being the operative word.
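In code, the two formulas and the baseline share they imply look like this (the viewthrough visit rates are made-up numbers, purely for illustration):

```python
def lift(exposed, control):
    """Percentage improvement: (Exposed - Control) / Control."""
    return (exposed - control) / control

def incrementality(exposed, control):
    """Beneficial impact: (Exposed - Control) / Exposed."""
    return (exposed - control) / exposed

# Hypothetical viewthrough visit rates for exposed and control groups.
exposed_rate, control_rate = 0.012, 0.008

campaign_lift = lift(exposed_rate, control_rate)        # 0.5, i.e. 50% lift
inc = incrementality(exposed_rate, control_rate)        # ~0.333
baseline = 1 - inc  # 100% = incrementality % + baseline %, so ~0.667
```

Note the two metrics use the same numerator but different denominators, which is exactly why they answer different questions: lift is relative to what would have happened anyway, incrementality is the share of the observed response the campaign can claim.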

Executing an Incrementality Test

PSA studies are simple in concept but often hard to run. Some advocate a virtual control, which is better than no control but not recommended. This method does provide a homogeneous group from an offline standpoint: if, all else being equal, TV and circulars are running, then it is safe to assume both test and control are exposed to the rest of the media mix equally. ComScore even came up with a clever zero-control predictive methodology for their SmartControl service.

Most digital media agencies have experience designing tests and setting up ad servers with the exact same audience targeting criteria across test and control. Better ad networks out there encourage incrementality testing and will embrace the opportunity to understand their impact beyond click tracking.

Was this helpful? If so, let me know and I’ll share more.

Response to Who Will Rid Us of this Meddlesome Click?

Great post by Gian Fulgoni of ComScore and very timely!

After over a decade in the digital advertising industry, I don’t find the click emphasis to be a fascination so much as a perpetual crutch. Always nipping at our heels, clickthrough has really just been the path of least resistance. I’ve heard this from many colleagues:

  • Ad sales execs seeking to close business tread uneasily and rarely bring up alternate success measures that require too much thought or set-up
  • Agencies attempt to educate clients about such matters, but that doesn’t always work (relationship/credibility/pick your battles/technical complexity)
  • Client-side marketers are all too often organizationally overwhelmed and only now starting to internalize meaningful digital measurement processes
  • Technology vendors, who often have the most expertise, are ironically perceived as the least reliable sources, their marketing often running ahead of actual capabilities

For some reason, easy-to-measure clickthroughs are meticulously collected, analyzed and reported. It is as if, through a mass willing suspension of disbelief, display clicks might conceivably convert or are implicitly valuable. The ongoing independent reports from ComScore suggest otherwise to anyone listening. While this post rightfully educates all of us about the troublesome reliance on clickthrough, it also presents an opportunity to raise awareness about passive incremental response.

Instead of skipping down the funnel past post-view response and straight on to purchase, consider the in-between, i.e. non-clicker viewthrough. Even in Gian’s post, viewthrough is not mentioned explicitly; post-view response is only suggested in the context of conversion – this is part of the problem. Yet in practice we all know that very few clickers will convert (super promo-oriented messaging aside).

This measurement incongruity routinely happens whenever post-view activity is mentioned. As a result, the more executionally challenging, but likely valuable, viewthrough response remains unmeasured and therefore invisible.

It is analogous to calculating auto mileage by looking at RPMs (engine speed) rather than MPH (land speed).

The reality is that implementing an alternative quantitative measurement like viewthrough pushes many marketers to the edge, requiring patience, precise technical set-up and methodical execution. To Gian’s point, clickthroughs are fast, cheap and easy to measure; they’re also potentially misleading for all the reasons ComScore and others have researched. What’s more, viewthrough impact accrues over time, which flies in the face of the commonplace action bias to optimize campaigns on something.

So while probably not that final Canterburian cleric’s foot-on-the-neck of clickthrough, viewthrough might at least be one of the assassinating knights.

An industry appeal to standardize and define viewthrough can be found here.

New for 2011! Standardizing the Definition of View-through

It has been over 17 years since the advent of the Netscape Web browser in 1994 and almost as many years since the first AT&T banner ads were served on HotWired. Back then, the Internet was heralded as the most accountable medium ever.

Fast-forward to 2010: the digital advertising industry has gone mainstream and will likely generate more than $25 billion (US only). At the same time, a subject that concerns far too many people is declining or flat click-through rates. Last week’s gushing “news” from a rich media vendor that clickthrough rates have supposedly leveled off after years of decline is a good example.

Definition of Insanity
In a business that obsesses over such meaningless metrics, the digital advertising industry simply cannot continue worrying about click-through rates. Although most will recognize that the novelty of clicking banner ads has largely worn off, this measure, which still provides almost no insight into the effectiveness of most campaigns, just won’t go away, regardless of marketing objectives.

In the no man’s land somewhere between the ad server and site tracking is an analytics oddity called viewthrough: a useful albeit tortured metric. Testament to this is that not one of the major industry trade groups recognizes viewthrough by including it in their standards glossaries: IAB, OPA, ARF or the WAA.

Despite early work by DoubleClick, plenty of practitioner interest and ongoing research by ComScore, the digital advertising industry still somehow lacks an official definition of viewthrough. At times, it seems we are all too often measuring what is easy or expedient. And clearly, that is not working for the industry.

Here is a sampling of viewthrough articles over the last 10 years:

  • Lilypad White Paper Response Assessment in the Web Site Promotional Mix (2/1997) A very early attempt by the author to describe the phenomenon in the context of measuring ad response; see the diagram “Online Awareness Model of Banner Advertising Promotional Models.” At that time there was not yet a way to measure such passive behavior.
  • Conversion Measurement: Definitions and Discussions (9/2003) An early article that focused on “people that ultimately convert but did not click”. Technically, viewthroughs are not people and are probably better described as visits. Also, depending on the campaign objective, a conversion event is optional.
  • Neglecting Non-Click Conversions (11/2003) A pretty thorough piece on the subject, although the term “viewthrough” is not used and there is again an emphasis on conversion.
  • Lies, Damn Lies and Viewthroughs (8/2005) Again, the focus is exclusively on viewthrough conversion, which is clearly a trend. However, it is misleading as it misses all the non-converter traffic.
  • The Most Measurable Medium? We Still Have A Lot To Do! (9/2007) David Smith actually made a literal plea for the industry trade groups to define viewthrough. A great idea; unfortunately, it fell on deaf ears, and several years later not much has changed.
  • Why view-through is a key component of campaign ROI (9/2010) provides a more balanced look at what viewthrough is but still brings up conversion. Also, the acronym “VTR” is confusing, as that is what most people would consider a viewthrough rate, similar to how “CTR” means clickthrough rate.
  • Different Views of View-Through Tracking (10/2010) More of the same, although this article actually quotes Wikipedia (scary) and further convolutes the matter by referencing a Google Display Network definition that focuses on viewthrough conversion. Consistent with the theme, the term VTR is used to mean viewthrough conversion rate, not view-through rate – two very different measures. On the upside, its take on the potential of viewthrough for media planning and optimization was right on.

Curiously absent from the ongoing discussion is what viewthrough inherently represents: measurable incremental value from an affirmative, self-directed post-exposure response. With just syndicated panels and qualitative market research to divine results, traditional electronic media could never quantify this.

At the same time, the advertising industry now has over 10 years of similar “directional” qualitative research focused on the familiar yet ephemeral measurements of post-exposure attitudes and intentions (notoriously unreliable). Many see these brand lift studies as rife with data collection challenges and ultimately of dubious value. Just this year, Professor Paul Lavrakas on behalf of the IAB released a critical assessment of the rampant practice.

Parsing The Metrics
It is bizarre that many digital marketers insist on defining viewthrough rates in conversion terms while clickthrough rates are always measured separately from subsequent conversion rates. Mixing metrics has confused the matter and effectively held viewthrough to the higher standard of conversion. Ironic, since very few clickthroughs (in volume and rate terms) even result in conversion.

While “clickthrough rate” is always understood to be relative to impressions (# of clicks / # of impressions), “viewthrough rate” seems to have skipped the middle response step and gone all the way to conversion. That doesn’t make sense when there are so many other factors that influence the purchase decision after arriving on a Web site.

To be very specific, viewthrough rate (VTR) should be similarly calculated, i.e. # of (logical) viewthroughs / # of impressions. “Logical” means that the viewthrough is observed where a branded post-exposure visit is most likely to happen, analogous to the target landing page of a click-through; usually this means the home page.
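Under this definition, VTR parallels CTR: both are response rates relative to impressions, with conversion kept as a separate downstream rate. A minimal sketch with made-up counts:

```python
def clickthrough_rate(clicks, impressions):
    """CTR: clicks relative to impressions."""
    return clicks / impressions

def viewthrough_rate(viewthroughs, impressions):
    """VTR, calculated the same way: logical viewthroughs relative
    to impressions. Conversion is a separate, later step and is not
    baked into this rate."""
    return viewthroughs / impressions

# Hypothetical campaign counts, purely illustrative.
impressions = 1_000_000
assert clickthrough_rate(800, impressions) == 0.0008     # 0.08% CTR
assert viewthrough_rate(4_500, impressions) == 0.0045    # 0.45% VTR
```

Keeping the two response rates in the same denominator terms makes them directly comparable on a media plan, which is the whole argument of this section.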

Measurement Details
The real problem underlying the apparent confusion is that viewthrough measurement invokes several simultaneous, inter-related and often technical debates: branding vs. response, optimization, cookie deletion, cookie stuffing, panel recruitment bias, correlation vs. causation and last-click attribution. Any one of these arguments can cause a fight.

Nonetheless, in defining what viewthrough actually means, it is helpful to review the two basic ways of measuring view-through:

  1. Cookie-based: This is a browser-server technique that relies on cookie synchronization between the ad server and the target brand site. When the user receives the ad, a cookie is set on their browser that is later recognized upon a visit to the target site, which is then matched via special page tags back to the associated campaign. There are several ways this can be done, e.g. DART for Agencies (DFA)/Atlas/Mediaplex page tags, ad server integrations (Omniture) or ad unit ridealong pixel tracking (Coremetrics). Optionally, PSA campaigns can be run alongside a campaign for a simultaneous test-control comparison of “true” viewthrough lift; essentially you can measure the baseline amount of viewthrough traffic that would end up at the site anyway. Downside: subject to browser cookie limitations.
  2. Panel-based: Alternately, a standing Internet behavioral panel can be utilized, e.g. ComScore and Compete. In this approach, two comparable groups are observed: an exposed test group and an unexposed control group that represents the baseline viewthrough. The difference between the rate by which the test group (exposed to ad campaign) and the control group (received PSA or other’s ads) subsequently visit the target site reveals the lift that is explained by the presence of display advertising. This method may also include ad or page tracking, but does not require cookies. Downside: subject to panel bias.

The Impact of Time
Next, an additional layer to viewthrough measurement worth mentioning is time, i.e. delayed response. Like traditional advertising media, display ads exhibit an asynchronous response curve where the effect of the advertising decreases over time. In our real-time data collection world, the common-sense realities of human behavior are often overlooked.

Many factors can impact the viewthrough response curve, including messaging, frequency, share of voice and creative execution, to start. And one size does not fit all: a considered purchase could reasonably have a longer shopping cycle than a CPG. Depending on the method of measuring view-through, 30 days or 4 weeks is commonly used as an initial “lookback window.”
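A lookback window is just a time filter on the exposure-to-visit gap. A minimal sketch, with the 30-day window chosen purely for illustration and tunable per purchase category:

```python
from datetime import datetime, timedelta

def within_lookback(exposed_at, visited_at, window=timedelta(days=30)):
    """Credit a viewthrough only if the visit falls inside the
    lookback window after the ad exposure. Visits before the
    exposure never count."""
    return timedelta(0) <= visited_at - exposed_at <= window

exposure = datetime(2011, 1, 1)
assert within_lookback(exposure, datetime(2011, 1, 15))      # 14 days later: counts
assert not within_lookback(exposure, datetime(2011, 2, 15))  # 45 days later: expired
```

A longer window captures more of the delayed-response “tail” but also admits more baseline traffic, which is why the window choice should be validated against the campaign’s own response curve rather than copied across categories.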

Et Cui Bono?
Although that was fairly straightforward, as soon as viewthrough is connected to a site conversion (through deeper page tracking), the thorny issue of attribution arises (and cookie-based measurement is implied). Viewthrough measurement often goes off on a tangent at this point because there are two layers to attribution.

  • Channel attribution is, simply put: which digital channel is assigned credit for the conversion event? Measuring display advertising happens to be more complex, and most site metrics tools have punted on this capability. That means simpler response channels like paid search, natural search, affiliates, CSEs and email tend to receive last credit by default. For many marketers, measuring conversion attribution or participation gets complex, and often political, very quickly.
  • Media attribution gets really contentious, especially for lead generation and ecommerce-oriented marketers. Performance ad networks often insist on having their own special page tag in place where the conversion event occurs; in this way they can independently measure conversions and potentially optimize their ad delivery. The problem is that there are usually multiple ad network vendor tags on the conversion event page, and all of them will count the page load as a conversion. Worse, this is an easy way for the ad network to shoehorn in a retargeting cookie pool. Unchallenged, media vendors may claim credit for everything, such that marketers end up overpaying for the same conversion. Alternately, some very Byzantine schemes have arisen to guesstimate credit.
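The double-counting problem under media attribution can be made concrete with a toy de-duplication pass. The data is hypothetical, and last-touch is chosen here purely for illustration, not as an endorsement of that attribution rule:

```python
def dedupe_claims(claims):
    """Every vendor tag on the conversion page reports the same order
    as its own conversion. Collapsing claims per order ID ensures each
    conversion is credited (and paid for) at most once; in this sketch
    the latest touch wins."""
    credited = {}
    for order_id, vendor, touched_at in sorted(claims, key=lambda c: c[2]):
        credited[order_id] = vendor  # a later touch overwrites earlier ones
    return credited

# Two networks both claim order-1; without de-duplication the
# marketer would pay for that conversion twice.
claims = [
    ("order-1", "network-A", 1),
    ("order-1", "network-B", 2),
    ("order-2", "network-A", 5),
]
assert dedupe_claims(claims) == {"order-1": "network-B", "order-2": "network-A"}
```

Fractional or weighted schemes replace the overwrite with a credit split, but the essential step is the same: reconcile all vendor claims against a single order ID before anyone gets paid.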

Despite all of the above, here is a working definition of a viewthrough for 2011:

Definition of View-through
Viewthrough is a measure of the passive but self-directed impact from a particular display ad unit (banner, rich media, video or audio). The viewthrough event follows one or more ad exposures and, when the ad unit is clickable, can be post-click (initial click visit timed out) or post-impression (with no click). Importantly, a viewthrough may or may not be associated with a purchase conversion event but must be associated with a target page load or other high-value action. VTR, or viewthrough rate, is calculated as # of viewthroughs / # of impressions.

Viewthroughs decay over time from ad exposure. In-flight viewthroughs are observed during the live ad campaign, while the post-flight “vapor trail” begins immediately after the associated ad is served.

Don’t like this definition? Come up with a better one or edit the above…and the sooner the better, or the industry might get stuck with this sketchy Wikipedia entry.

Early take on Post-impression Viewthrough: Lilypad White Paper

An oldie but goodie:

White paper about the Streams Lilypad analytics tool that I wrote back in 1996. Oddly, the industry has become search-obsessed and has barely advanced with respect to post-impression and viewthrough tracking since then. DoubleClick for Agencies (DFA), Atlas and Mediaplex all measure viewthroughs; ComScore routinely provides research about this passive, asynchronous behavior.

Response to Commenter Jaffer Ali’s "Driving In The Rear-View Mirror"

Jaffer Ali, the semi-retired CEO of EVTV1 and a jovial industry colleague, is usually good for some creative commentary and periodic fire-starting.

Today’s piece, “Driving In The Rear-View Mirror,” published in MediaPost, required a response, as I completely disagreed with it.

This notion is nothing new. In fact, I know it has been on the radar since the Web 1.0 days; check out the original 1997 Lilypad white paper discussing time-shifted response behavior and measurement:

(See Figure #C.)