Category Archives: analytics

A Fool and Their Data are Soon Parted

In the post Fear & Loathing in the Ad Technology Stack (3/8/11), TOTSB opined about the latent dangers of having a tag management platform provided by the same vendor as the site analytics solution. Since then, IBM Coremetrics joined the fray with their Digital Data Exchange solution. Earlier this week, the other, much bigger shoe dropped as Google announced its new, free Tag Manager.

With this latest development, it seemed like a good time to take a look at digital marketers' often-foolish handling of their customers' behavioral data. These days such foolishness is like leaving the safe open with money in plain view. Now, let's take a closer look at what is being offered by Google.

How Does It Work?

The appeal of Google Tag Manager is understandable: "Google Tag Manager is easy, free, and reliable. It gives marketers greater flexibility, and it lets webmasters relax and focus on other important tasks." Signing up is easy enough and, like many other Google tools, takes just a few minutes. Digital marketers can even "opt out" of anonymously sharing their data for benchmarking purposes. However, that opt-out is a faux bone thrown by Google, as a subsequent screen reveals.

Later, users learn that they are actually agreeing to share their data with DoubleClick, Google's advertising business, and signing up for AdWords, too. It is odd that users must explicitly agree to this to use a Tag Management System. On the final screen you can then add some 3rd-party tags. Conveniently, this screen is pre-populated with Google's AdWords, DoubleClick Floodlight and Google Analytics tags. Supposedly other tracking tags will be coming soon with such drag-and-drop simplicity. Until then you can add custom code.

Google Tag Manager is:

  • Asynchronous itself, and it calls 3rd-party tags asynchronously, which means that slow-loading tags (including GTM itself) won't slow down page load time.
  • Not server-to-server…at least, that is not yet clear. That means tags are literally firing in the browser on every request, which is a worse-engineered solution when simultaneously using other Google products and services. When GTM does go S2S, it will certainly be positioned as a speed benefit…just ignore the looming centralized, consolidated Google master cookie.
  • Using a Data Layer. Handy, as it means that there is a way to manage standardized data elements from user behavior on a page or other integrated systems (see the sketch below).
  • No SLA. That is what free means; as a result, GTM is less appropriate for enterprise-sized clients. Perhaps an SLA will be included in Google Analytics Premium.
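For the curious, here is a minimal sketch of the data layer pattern in TypeScript; the attribute and event names are illustrative, not a required GTM schema:

```typescript
// Minimal sketch of the GTM data layer pattern. The page declares
// window.dataLayer before the container snippet loads, then pushes
// standardized attributes and events that tag-firing rules can key off.
type DataLayerEvent = Record<string, unknown>;

const w = window as unknown as { dataLayer: DataLayerEvent[] };
w.dataLayer = w.dataLayer || [];

// Page-level attributes (names are illustrative, not a required schema).
w.dataLayer.push({
  pageCategory: "product-detail",
  visitorType: "returning",
});

// A user-behavior event pushed later in the session.
w.dataLayer.push({ event: "addToCart", productId: "SKU-1234" });
```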


The Trojan Horse
Now for the rub. Considering the success of Google's model of free analytics, this move should not be a big surprise. If you weren't already sharing your data with the Google data-mining machine, now there is one more way for Google to broaden its data capture.


Combined with its search, free email, social and display media businesses, Google continues to steadily touch more and more of the entire digital stack. That means Google also has maximum user depth, i.e. the full end-to-end view of cause and effect. It is this rich, vast global data set that Google's engineers have trained their sights on analyzing. The reality is that most digital marketers aren't technically savvy enough to realize the free Google stack is a digital data Trojan Horse, much less do anything about it. When you are used to getting the milk for free, why would you want to pay for the cow? Let's face it – it is a brilliant strategy.

Even if digital marketers decide to forgo Google Analytics and upgrade to a pure-play enterprise analytics solution (not a fake one like Google Analytics Premium), they still have a hole in the data bucket…now thanks to Google Tag Manager. Let’s just call it Google’s little data collection hedge.
At the same time, for most Tag Management System vendors this is going to be a really big problem. Google will now commence to eat many TMS vendors' lunch by putting tremendous price pressure on the market…kind of like dumping. Many digital marketers have already invested in what we can refer to as TMS 1.0, where it's all about putting tags in containers, albeit through non-server-to-server solutions. Interestingly, many of them are using their paid TMS to deliver their free Google Analytics. Arguably, these clients are the most at risk to Google's freebies.

Think about it: these TMS 1.0 providers cannot compete with what will soon be a cloud-based (S2S) architecture. It will be difficult, expensive and risky for them to change their platforms with many clients and very tedious implementations already behind them. Expect to see more consolidation as a result.

The High Cost of "Free"
Most digital marketers have been blissfully unaware of the actual game that they have been playing with Google for years – all under the auspices of free and easy-to-use. Perpetuated by self-appointed experts, there is a popular notion that analytics technology should be cheap and that it is more valuable to have well-funded, well-paid analytics people…not an expensive tool. That meme is so Google. It is self-serving and self-reinforcing; it works especially well for the cottage industry of certified implementers and analysts. Unfortunately, it usually also means weak display media measurement, gaping holes in data security/intellectual property control and potentially deep privacy concerns. More tangibly, it could also mean inadvertently feeding your competition through a de facto data co-op while Google makes a buck.

The layers of Google’s conflicts of interest are deep and include:

  • Google Remarketing – conveniently baked into Google Analytics these days; the Google advertising cookie and the Google Analytics site cookie have been one and the same for some time now
  • Google Analytics – known to overstate Paid Search performance
  • Google Search – recently changed how referral data is passed to landing pages, thus obfuscating search performance
  • Google Analytics Premium – a thrown bone on fractional attribution, now also via DoubleClick Analytics, yet Google's credibility as an independent arbiter of its own performance is rarely questioned

On Being Ethically Challenged About Others’ Intellectual Property
Google's history is riddled with questionable attitudes towards ownership of others' data. If your IP attorneys are not paying attention to this, you might need new ones.

Digital Marketers' Faustian Bargain
The fact of the matter is that Google is really an advertising company, not a technology company. The big question for today's digital marketers considering Tag Manager has not changed. It is the same as the Google Analytics question, i.e. is your company's most valuable asset (customers' behavioral data) worth more than the cost of not sharing it with the best data-mining conglomerate in the world? For many smaller companies the answer could be no, but for many larger advertisers the answer should be – thanks, but no thanks.

Google's latest self-serving, 3-for-me-and-1-for-you offering should really motivate digital marketers to start thinking differently about the value of their data, how much they trust others with it and what they need to do next to securely and exclusively control it. Smart advertisers need to really start paying attention to how much data they are sharing with a company that Sir Martin Sorrell best described as a "frenemy"…and that was way back in 2007. So much for do no evil.

The good news is that it doesn’t have to be this way.

How to Remove Google from your Ad Stack
Others are also noticing Google's move and that digital marketers do have alternatives…they are just not free. It is back to using common sense and ROI/TCO analysis to justify technology investments…or risk sharing your data with Google and the competition.

Here are some thought-starters:

  • Tag Management. Best choice at this point: BrightTag. Yes, I am an advisor. However, the reason I am is because only BrightTag has looked beyond tags on pages to the underlying problem of the data transport layer. Unlike the other TMS 1.0 platforms, BT is already a few years into developing a powerful TMS 2.0 tool; it is based on a highly scalable cloud-based infrastructure that offers digital marketers a real alternative to Google's encroaching data glom. The good news is that most everyone that matters is already server-server integrated with BT…except of course (wait for it…)…Google's products (Google Analytics, Floodlights, AdWords).
  • Analytics: Adobe, ComScore's Digital Analytix and, if you must, IBM Coremetrics
  • Ad Server: MediaMind, PointRoll, Mediaplex and, if you must, Atlas…not Google Analytics
  • Search Management: Kenshoo, Adobe, Marin…anything but DART Search
  • Attribution: Adometry and Visual IQ have better methodologies…also C3, Convertro, ClearSaleing…not Google Analytics or DFA.
  • Demand Side Platform: MediaMath, Turn, DataXu…not Bid Manager (formerly Invite Media).

The truth of the matter should be getting clearer to savvy digital marketers. If not, bring in independent viewpoints that are not invested in this madness. Good luck!

For publishers this is a much more complex proposition and the subject of a future post.


Viewthrough & Incrementality Testing

A common question in digital marketing measurement is "what was the lift?" or "what is the incremental benefit?" from a particular promotional campaign. Most of us would agree that relying on display ad clicks alone for display response measurement is just wrong. When measuring display media, incrementality testing is absolutely essential to properly gauge the baseline viewthrough effect.


In order to answer this question, a test-and-control methodology (or control and exposed) should be used, i.e. basic experimental design where a control group is held out that is otherwise identical to the group being tested with the marketing message. This is even more important when marketing "up the funnel," where a last-click or even last-touch measurement from a site analytics platform will mask impact.

Email marketers, with their heritage in direct marketing techniques, have been doing this for years. It is often pretty straightforward, as the marketer knows the entire audience or segment population and holds back a statistically meaningful group; this enables them to make a general assertion about the campaign's actual lift or incremental benefit. Control and exposed can also be done with display media if the campaign is configured properly to split audiences, eliminate overlap and show the control group a placebo ad. Often PSAs (public service ads) are used, which can be found via the Ad Council.

This technique is routinely used for qualitative research, i.e. brand lift study services like Vizu, InsightExpress, Dynamic Logic and Dimestore. It is the best way to isolate the impact of the advertising; read more about the challenges of this kind of audience research in Prof. Paul Lavrakas's study for the IAB.

Calculating Lift and Incrementality

Dax Hamman from Chango and Chris Brinkworth from TagMan were recently kicking around some numbers to illustrate how viewthrough can be measured; some of that TOTSB covered a while back in Standardizing the Definition of Viewthrough. For the purposes of this example, clickthrough-derived revenue will be analyzed separately and fractional attribution will not be addressed. In this example, both control and exposed groups are the same size, though this can be expensive and is usually unnecessary using statistical sampling.

  • Lift is the percentage improvement = (Exposed – Control)/Control
  • Incrementality is the beneficial impact = (Exposed – Control)/Exposed

In addition to understanding the lift in rates like viewthrough visit rate, conversion rate and yield per impression, articulating the incrementality rate also reveals the baseline percentage – it is just the complement of incrementality (100% = incrementality % + baseline %). Incrementality, or incremental benefit, can be used to calibrate other similar campaigns' viewthrough response – "similar" being the operative word. A worked example follows below.
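Here is a minimal sketch of that math in TypeScript, using hypothetical counts for equal-sized exposed and control groups:

```typescript
// Hypothetical results from equal-sized exposed and control groups.
const exposed = { visitors: 100_000, responders: 900 }; // 0.90% response rate
const control = { visitors: 100_000, responders: 600 }; // 0.60% response rate

const exposedRate = exposed.responders / exposed.visitors;
const controlRate = control.responders / control.visitors;

const lift = (exposedRate - controlRate) / controlRate;           // 0.50 -> 50% lift
const incrementality = (exposedRate - controlRate) / exposedRate; // ~0.333 -> 33.3%
const baseline = 1 - incrementality;                              // ~0.667 -> 66.7% baseline

console.log({ lift, incrementality, baseline });
```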

Executing an Incrementality Test

PSA studies are simple in concept but often hard to run. Some out there advocate a virtual control, which is better than no control but not recommended. This method does provide a homogeneous group from an offline standpoint, so if, all things being equal, TV and circular are running, then it is safe to assume both test and control are exposed to the rest of the media mix equally. ComScore even came up with a clever zero-control predicted methodology for their SmartControl service.

Most digital media agencies have experience designing tests and setting up ad servers with the exact same audience targeting criteria across test and control. Better ad networks out there encourage incrementality testing and will embrace the opportunity to understand their impact beyond click tracking.

Was this helpful? If so, let me know and I’ll share more.

Fascinating Stats on City of Chicago Employee Pay

Ever wonder where all the money goes that the City of Chicago takes for running our lovely municipality? Recently elected Mayor Rahm Emanuel took a bold step in transparency by posting this data!


The reason for the skew towards $77k is that this is about the pay for most of the employees, who are police officers and firefighters. This appears to be a union-set wage, which doesn't make sense: a new police officer or firefighter makes just about as much (not much percentage difference in pay) as one that has been on the job for many years…and like all collective bargaining schemes, there is no room for management to differentiate pay on performance, let alone between the competent and the incompetent.

The reason for the sharp drop off at the bottom-right is that this includes part-time employees, interns and those that receive salaries for being foster parents, police cadets and others.



  • mean: $73,829
  • median: $77,238
  • mode: $77,238
  • standard deviation: $77,238
  • min: $1
  • max: $260,004
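As an aside, here is a minimal sketch of how such summary statistics can be computed, using a tiny hypothetical salary sample rather than the actual city data:

```typescript
// Tiny hypothetical salary sample (not the actual city payroll data).
const salaries = [77_238, 77_238, 77_238, 42_000, 95_000, 260_004, 1];

const mean = salaries.reduce((sum, s) => sum + s, 0) / salaries.length;

// Median: middle value of the sorted sample.
const sorted = [...salaries].sort((a, b) => a - b);
const mid = Math.floor(sorted.length / 2);
const median = sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;

// Mode: the most frequent value (why a union-set wage dominates).
const counts = new Map<number, number>();
for (const s of salaries) counts.set(s, (counts.get(s) ?? 0) + 1);
const mode = [...counts.entries()].sort((a, b) => b[1] - a[1])[0][0];

// Population standard deviation.
const variance = salaries.reduce((sum, s) => sum + (s - mean) ** 2, 0) / salaries.length;
const stdDev = Math.sqrt(variance);

console.log({ mean, median, mode, stdDev });
```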


POINT OF REFERENCE
  • Estimated (mean) per capita income in 2009: $27,138
  • Estimated median household income in 2009: $45,734 (it was $38,625 in 2000)


It would really be interesting to see the Board of Education and teachers (not necessarily by name)!

Agency Trading Desk Myths & Memes Debunked (Part I)

Fear, uncertainty and doubt have worked well for many incumbents in the technology and online media game over the years. Why should sell-siders be any different when it comes to Agency Trading Desks (ATDs)? Don't worry…they're not.


With buy-siders generally tight-lipped about the subject of ATDs, the resulting vacuum is being filled by constant industry sniping and chatter. Since the advent of the ATD, aspersions have notoriously been cast on them – perpetuating FUD. That the industry trade media and blogs are the only places where a consistently negative view of ATDs can be found should come as no surprise. Yet, the recent spate of chicken-little articles, posts and heated comments represents what is apparently a really threatened sell-side point of view.


While agencies are notoriously silent about their and their clients' businesses (as they should be), two anti-ATD blog posts (The Trouble With Agency Trading Desks and Thumb on the Scale) warranted a response from a different point of view…the advertiser's. The spin and rhetoric have reached epic proportions, so a debunking of popular myths and memes follows below:



Double-dipping. From the best-defense-is-a-good-offense school. It is as if the basic math around billable hours and the service layer around managing Demand Side Platforms have no value. Data-driven media buying is very different from traditional demo-driven, index-based methods and takes a lot of time as clients and agency partners get up to speed. Moreover, the measurement planning, analytics, technical and media accounting and media reconciliation that are required to manage these campaigns are also very different.

The notion of "double-dipping" betrays a basic misunderstanding of the process: DSPs are not exactly push-button. It is nothing like an in-house production studio – it is strategic work, not simple production. Leveraging the expertise associated with the intricate technical aspects of tags and data sources alone is a significant effort. Also, as a reminder, digital advertising used to be commissionable or marked up like traditional media. Meanwhile, where is the outrage at ad networks double-dipping with advertiser data?


Profit margins. The implication that agencies shouldn't be seeking profitable service offerings is simply outrageous. In the end, it is a service business made up of talented specialists that care about client business. With small-scale site retargeting making up a lot of the business today, the ad volume and associated fees that ATDs are charging suggest that they may be running somewhat in the red; at least, until the business scales up or broadens enough to warrant the resource investment.

Advertisers squeezing too hard here run the risk of running the people (not machines) doing the work into the ground – not good either. ATDs are not charitable organizations, so it is not clear why they should be expected not to earn fees or why they have to justify them ad nauseam. That said, it is in ATDs' best interest to be very transparent with clients about the fees they are charging.


Agency Technology Investment. Holding company ATDs, for the most part, are not building their own buying technologies in-house; the spin-out of Adnetik is an exception, and its success remains to be seen. Instead they are licensing or white-labelling DSP tools and applying their tech-savvy marketing teams to enable a platform for the benefit of clients. While their marketing often uses the term platform, they mean technology and the service layer to support it – not literally hardware and software.


In some cases, agencies may be using their business intelligence tools to support ATD reporting – that makes sense and is nothing new. Agency analytics teams have been using home-grown BI for years. Advertisers really just need to ask their agency questions if they don’t understand how all of it works…this recalls a famous Chinese proverb: He who asks a question is a fool for five minutes; he who does not ask a question remains a fool forever.


Data-hoarding ATDs. Really? The sell-sider rhetoric on this point is very misleading. Most ATDs are essentially service providers: consultants armed with a DSP SLA (service level agreement) and the expertise. Agency BI tools attempt to provide handy storage of performance data (with debatable proficiency), but historical benchmarks and campaign reporting data are not the same as actionable behavioral user-level data, i.e. cookies. No, I'm afraid that data is sitting inside ad networks, ad servers (which, by the way, sometimes turn out to use the same cookie as the ad exchange) or in a Data Management Platform.


Now that said, there is simply no excuse for an ATD or agency to clandestinely re-purpose so-called 4th-party data from ad campaigns for later use. That is a major ethical lapse that sell-siders (publishers) should not tolerate. Ironically, at the same time, far too many major ad networks are happy to re-purpose advertiser and publisher campaign performance data when it can maximize their revenue.



Mandate. Just what are sell-siders so afraid of? Perhaps their advertiser clients getting the most experienced and savvy teams working on their behalf, and more transparency. That is a huge benefit for client-side marketers, who remarkably all too often have few senior digital media natives in-house. As a result, there is a huge learning curve, and time means money in a service business.


The flip side is that an agency holding company not consolidating its technical and negotiating expertise on one team raises management competency questions. With the level of technology change today, a centralized team is exactly what holding companies should be building to effectively manage their resources. A better question, especially for site retargeting, is why ad networks are still being considered. If old-school planner-buyers are concerned, then they ought to put in for a transfer to the ATD.


Conflict of Interest. Wow – look at who is talking. Most advertisers would probably prefer a dedicated, separate team within their ATD (usually closely directed by their agency-of-record) to what naked and supposedly independent sell-siders and technology vendors have to offer to protect their interests, i.e. nothing. It seems no different than a client directly buying from a media vendor, where that really new "big idea" has actually been shopped to several other advertisers (probably not all that new). Plus, if an advertiser decides to pass, there is a 100% probability that "idea" will be offered up to the advertiser's competitor. Hey, clients can certainly pay a premium for category exclusivity – that option is always available.


On the other hand, AORs by definition get the concept of category exclusivity. With ATDs, there is a semblance of brand stewardship and a competitive firewall. Moreover, an ATD's media planning agency partners are very unlikely to put any one client account at risk. That's because in game theory terms, branding is a zero-sum game, i.e. it is about brand X winning, which means that brands A, B and C lose. As such, the ad strategies that succeeded cannot be shared, nor can the ones that failed. The problem is that sell-siders and technology vendors often have the opposite – industry specialists.


The Machine Knows Better. Of course it makes sense to leverage automatic optimization and novel algorithmic approaches to improve results. However, far too many of the sell-siders and arms vendors out there assert that an ATD just can't keep up. That may or may not be true, but consider the source. How many sellers are transparent enough to report on the performance of their supposed machine-learning technologies? Some will do it, but only when asked.

In any case, marketers will always have a need to explain and justify their actions. The client-side CFO does not want to hear about magic or black boxes. They want to understand how to allocate cash to generate ROAS and ROI in a predictable way. People can be held accountable in a way machines cannot. The simple fact is that advertisers need expert brains managing campaigns on their behalf, adjusting to the changing marketplace and resources.


Early-in-session User Performance. One of the more clever rhetorical devices, this pops up when sellers realize they are about to get disintermediated. It questions the competition's inventory quality by suggesting, either directly or indirectly, that only they have access to the special ad inventory. That's right: through first dibs or exclusive relationships, the seller's inventory "performs better" and is therefore more valuable than the others'. It is possible, but it depends on the seller's definition of perform – for their bottom line or for their client's? BTW, still waiting for the data or performance reports that back this up after multiple requests. Ironically, most of these sellers are also getting a portion of their inventory from the same exchange sources as the ATD; the real question is just how much.


Simplistic Wall Street Metaphors. This is an oldie…first of all, day-trading media is a very one-dimensional way of viewing media consumption. Media is not the same as a financial asset that has intrinsic value (stocks, bonds, options)…however it does make for nefarious and ominous metaphors, what with the recent financial crisis and all. Digging past the hackneyed writing, RTB by definition doesn't allow positions to be taken in the same way as financial trading. These are real-time transactions, i.e. a spot market where ATDs aren't owning inventory or taking a position. It seems that there is a fundamental misunderstanding of financial arbitrage.

The amount of technology required to squeeze out any kind of profit by exploiting information inefficiencies across many RTB decisions is more likely to come from a DSP that can hedge across multiple advertisers. ATDs just don't have the financial structure, engineering or research staff to pull this off. In practice, this is nothing more than another red herring. Any ATD that could save clients big money would want that to be known.


Kick-backs. One of the more outrageous charges about kickbacks was refuted in public, and so the matter should be closed. Yet, the meme continues to proliferate. It may also depend on the definition of a kick-back. Is free user training or better support a kick-back? How about box seats to the Cubs game and fancy meals? Without knowing the internal accounting between DSPs, exchanges and ATDs, it may never be known for certain. Clients can always ask for audit rights, but like all memes this one can be difficult to prove or disprove.


Did I miss any or do you have any others to add? Feel free to submit a comment below!

The Moratorium: No, You May Not Place a Tag on the Site…

Digital marketers, the time has come to heed the call and end the rampant chaos and confusion by putting in place page tag moratoriums. Today.

WHY THE MORATORIUM?

A closer look at the confusion and chaos that the industry has come to tolerate clearly illustrates the rationale.

CONFUSION
For too many and for too long, digital marketing has brought with it page tagging needs that must be executed by technical teams in other departments. Moreover, there are often precision measurement implications to retargeting and conversion tags. Although some legacy ad networks are making strategic moves, this confusion has definitely been a money-making opportunity. Adding to the confusion, the ad networks are rapidly right-sizing their staff, diversifying their offerings and/or reinventing themselves as exchanges, DMPs, DSPs, data layers and more.

That said, the real confusion can be split into three distinct aspects of communication within the digital marketing process:

1. Opaque Reporting – With the advent of DSPs, OpenRTB and the IAB's taxonomy, not sharing more performance information is problematic. From an analytics standpoint, not knowing where your high-performing audience segments are coming from and/or where they are in the conversion funnel becomes an opportunity cost. If you are focused on conversion, this makes your campaigns spray-and-pray.

2. Unclear Benefit – All too often, ad networks are quick and aggressive about getting their tags placed on pages…why? More details, in plain English, are needed beyond anecdotal stories and faux studies of performance success. Agencies too have an opportunity here to better steward their clients' brands. Exactly what is the specific benefit of retargeting, optimization or incremental conversion? A simple litmus test: proceed when and only when the level of benefit exceeds the level of effort.

3. Data Leakage Risk – In many cases, it is not clear who owns the cookies and/or the behavioral data vapor trail that is a byproduct of site traffic and ad campaigns. Without clarity on this important intellectual property, it shouldn't be a surprise when you find that competitors are benefiting from the campaigns that you just ran. With the growing calls for privacy and consumer control, this should not be left to chance.

The reality is that digital media is confusing enough, rife with opportunities for swirl. Client-side marketers need to continue leaning in, stepping up and demanding more clarity about those bells and whistles. Those that are not comfortable dealing with the more technical aspects of digital marketing need to get an agency, consultant and/or in-house staff that are experienced and have demonstrated success.

CHAOS
To suggest that the technical aspects of today’s page tagging create chaos would be an understatement. Historically, page tags have fallen between the organizational cracks into the cross-functional abyss. Page tags have created serious problems for digital marketers and IT/engineering teams alike. With neither resourced properly to deal with this fast-moving technology that is growing more complex – mayhem and frayed relationships are an all too common result.

The good news is that technology is now available to help deal with the chaos: universal tag management systems like BrightTag, Tealium, Ensighten and TagMan can help. The technology offers three different kinds of benefits:

1. Tag Management – There is no question that proliferating page tags are the Achilles heel of most digital marketing and site IT/engineering teams. Today page tags are often implemented late due to resource constraints, with seemingly simple requests triggering requirements-level justification. As a result, in order to get any tags in place, the real need is often scaled back to avoid the upfront time – that means a less than ideal deployment and less meaningful measurement. If the tags actually do get implemented, they are certainly at risk of randomly disappearing mid-campaign, further compromising measurement. Last, once they are live, some page tags escape notice and are never decommissioned upon campaign end. Don't expect ad networks to remind you to remove their tags. Phantom cookie pools are probably rampant.

2. Data Sharing – Beyond rendering tags on pages at the right places and the right times, the better tag management systems are being baked into site CMS (content management systems) to enable the routine passing of data attributes. Instead of hot-rodding simplistic 3rd party ad server container tags, the newer platforms are deeply integrated and have Web-based interfaces that marketing, IT and agencies can access. A huge benefit of this is avoiding the software development-QA queue and the subsequent management hassle of dealing with one-off JavaScript code.

3. Tag Latency – Most page tags are "dumb," meaning that they always fire. So-called "smart" tags now offer conditional tag rendering, which provides marketers with even more precise control (see the sketch below). More advanced approaches like BrightTag's take advantage of super-fast asynchronous server-server connections, i.e. while the page is downloading in the user's browser. If your page tag functionality can't be called through their server-side API connection, then latency is unavoidable.
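To make the "dumb" vs. "smart" distinction concrete, here is a minimal sketch of conditional tag firing in TypeScript; the rule structure and vendor URL are hypothetical, not any particular TMS's API:

```typescript
// A hypothetical conditional ("smart") tag rule: fire only where needed.
interface PageContext { path: string; isConversionPage: boolean; }

interface TagRule {
  vendor: string;
  src: string; // vendor tag URL (illustrative)
  shouldFire: (page: PageContext) => boolean;
}

const rules: TagRule[] = [
  {
    vendor: "retargeter-x", // hypothetical vendor
    src: "https://tags.example.com/retarget.js",
    // A "dumb" tag would fire on every page; this one fires only on product pages.
    shouldFire: (page) => page.path.startsWith("/products/"),
  },
];

function renderTags(page: PageContext): void {
  for (const rule of rules) {
    if (!rule.shouldFire(page)) continue;
    // Inject the vendor script asynchronously so it won't block page rendering.
    const script = document.createElement("script");
    script.src = rule.src;
    script.async = true;
    document.head.appendChild(script);
  }
}

renderTags({ path: "/products/widget-123", isConversionPage: false });
```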

The result of all this is compromised measurement and unnecessary latency, putting digital ad campaigns at risk. It just doesn't have to be this way with universal tag management technologies that make the entire process easier. For the first time ever, agency ad ops, analytics, media planning and engineering teams have the chance to collaboratively and proactively manage burgeoning page tags.

PRETZEL LOGIC
A recent article by Joe Marchese of MediaPost, Putting Lipstick On The Banner, puts it best. While I vehemently disagree with the assessment of display ad efficacy (there's more to display than clicks), Mr. Marchese does make a good point about the apparent pretzel logic of digital media.

Already challenged to explain the value of their existing campaigns, digital marketers are usually not really improving those campaigns by adding more complexity. With more retargeting, research and tracking tags on the horizon (bright, shiny objects), savvy digital marketers and even partners can see why getting their house in order with their own version of The Moratorium makes total sense.

The message of The Moratorium to ad networks, data providers and other meta-media purveyors is a simple one: don't bother asking for page tags unless you're also bringing solutions to the chaos and confusion you create. Behind it is a more sustainable business relationship built on transparency and success.

Digital marketers will continue to get the results that they deserve until they demand better from media partners and even digital agencies.

Until then the answer should be: No, you may not place a tag on the site.

History of Web Analytics…Annotated.

Most histories of Web analytics routinely cover Analog by Dr. Stephen Turner, a rather simplistic log file analyzer. Others mention i/Pro, netGenesis, Interse, NetCount and WebTrends.

What they often miss is Lilypad, a marketing- and advertising-oriented Web-based analytics application. Lilypad was developed by Streams Online Media Development in 1995 and announced in the Fall of that year. Unlike most of the other technical log file analysis tools, Lilypad was original in that it focused on promotional measurement. Importantly, Lilypad utilized its own database of activity and was coded in Perl, leveraging server-side includes, the predecessor to JavaScript page tags.

Lilypad was programmed by James Allenspach under my direction during downtime in between client projects. Dave Skwarzek and I worked to brand and promote the product in a way that marketers could appreciate. A seminal offering by a scrappy Web boutique start-up to be sure, Lilypad was influential as an early site metrics tracking application.

If you are doing research on the history of site analytics, digital media tracking or online media measurement, you can learn more about Lilypad here.

Response to Who Will Rid Us of this Meddlesome Click?

Great post by Gian Fulgoni of ComScore and very timely!

After over a decade in the digital advertising industry, I don’t find the click emphasis to be a fascination as much as a perpetual crutch. Continuing to nibble at our heels, clickthrough has really just been the path of least resistance. I’ve heard this from many colleagues:

  • Ad sales execs seeking to close business tread uneasily and rarely bring up alternate success measures that require too much thought or set-up.
  • Agencies attempt to educate clients about such matters, but that doesn't always work (relationship/credibility/pick your battles/technical complexity).
  • Client-side marketers are all too often organizationally overwhelmed and only now starting to internalize meaningful digital measurement processes.
  • Technology vendors, who often have the most expertise, ironically tend to be perceived as the most unreliable sources, with their marketing often ahead of actual capabilities.

For some reason, easy-to-measure clickthroughs are meticulously collected, analyzed and reported. It is as if, through a mass willing suspension of disbelief, display clicks might somehow conceivably convert or are implicitly valuable. The ongoing independent reports from ComScore suggest otherwise to anyone listening. While this post rightfully educates all of us about the troublesome reliance on clickthrough, it also presents the opportunity to raise awareness about passive incremental response.

Instead of skipping down the funnel past post-view response and straight on to purchase, consider the in-between, i.e. non-clicker viewthrough. Even in Gian's post, viewthrough is not mentioned explicitly; post-view response is only suggested in the context of conversion – this is part of the problem. Yet, in practice we all know that very few clickers will convert (super promo-oriented messaging aside).

This measurement incongruity routinely happens whenever post-view activity is mentioned. In so doing, the more executionally challenged, but likely valuable viewthrough response remains unmeasured and therefore invisible.

It is analogous to calculating auto mileage by looking at RPM (engine speed) and not MPH (land speed).

The reality is that implementing an alternative quantitative measurement like viewthrough pushes many marketers to the edge, requiring patience, precise technical set-up and methodical execution. To Gian's point, clickthroughs are fast, cheap and easy to measure; they're also potentially misleading for all the reasons ComScore and others have researched. What's more, viewthrough impact accrues over time, which flies in the face of the commonplace action bias to optimize campaigns on something.

So while probably not that final Canterburian cleric's foot-on-the-neck of clickthrough, viewthrough might at least be one of the assassinating knights.

An industry appeal to standardize the definition of viewthrough can be found here: http://goo.gl/LA8Iv

New for 2011! Standardizing the Definition of View-through

It has been over 17 years since the advent of the Netscape Web browser in 1993 and almost as many years since the first AT&T banner ads were served on HotWired.com. Back then, the Internet was heralded as the most accountable medium ever.

Fast-forward to 2010: the digital advertising industry has gone mainstream and will likely generate more than $25 Billion (US only). At the same time, a subject that concerns far too many people is declining or flat click-through rates. Last week’s gushing “news” from a rich media vendor that clickthrough rates have supposedly leveled off after years of decline is a good example. 

Definition of Insanity
In a business that obsesses over such meaningless metrics, the digital advertising industry simply cannot continue worrying about click-through rates. Although most will recognize that the novelty of clicking banner ads has largely worn off, this measure – which still provides almost no insight into the effectiveness of most campaigns – just won't go away, regardless of marketing objectives.

In the no-man's-land somewhere between the ad server and site tracking is an analytics oddity called viewthrough: a useful albeit tortured metric. Testament to this is that not one of the major industry trade groups – IAB, OPA, ARF or the WAA – recognizes viewthrough by including it in their standards glossaries.

Despite early work by DoubleClick, plenty of practitioner interest and ongoing research by ComScore, the digital advertising industry still somehow lacks an official definition of a viewthrough. At times, it seems like we're all too often measuring what is easy or expedient. And, clearly, that is not working for the industry.

Here is a sampling of viewthrough articles over the last 10 years:

  • Lilypad White Paper Response Assessment in the Web Site Promotional Mix (2/1997) A very early attempt by the author to describe the phenomenon in the context of measuring ad response; see the diagram, Online Awareness Model of Banner Advertising, among the promotional models. At that time there was not yet a way to measure such passive behavior.
  • Conversion Measurement: Definitions and Discussions (9/2003) An early article that focused on “people that ultimately convert but did not click”. Technically, viewthroughs are not people and are probably better described as visits. Also, depending on the campaign objective, a conversion event is optional.
  • Neglecting Non-Click Conversions (11/2003) A pretty thorough piece on the subject, although the term “viewthrough” is not used and there is again an emphasis on conversion.
  • Lies, Damn Lies and Viewthroughs (8/2005) Again, the focus is exclusively on viewthrough conversion, which is clearly a trend. However, it is misleading as it misses all the non-converter traffic.
  • The Most Measurable Medium? We Still Have A Lot To Do! (9/2007) David Smith actually made a literal plea for the industry trade groups to define viewthrough. A great idea, unfortunately it fell on deaf ears and several years later not much has changed.
  • Why view-through is a key component of campaign ROI (9/2010) Provides a more balanced look at what viewthrough is but still brings up conversion. Also, the acronym "VTR" is used confusingly, as most people would take it to mean viewthrough rate, similar to how "CTR" means clickthrough rate.
  • Different Views of View-Through Tracking (10/2010) More of the same, although this article actually quotes Wikipedia (scary) and further convolutes the matter by referencing a Google Display Network definition that focuses on viewthrough conversion. Consistent with the theme, the term VTR is used to mean viewthrough conversion rate, not view-through rate – two very different measures. On the upside, the observation about the potential of viewthrough for media planning and optimization was right on.

Curiously absent from the ongoing discussion is what viewthrough inherently represents: measurable incremental value from an affirmative, self-directed post-exposure response. With just syndicated panels and qualitative market research to divine results, traditional electronic media could never quantify this.

At the same time, the advertising industry now has over 10 years of similar “directional” qualitative research focused on the familiar yet ephemeral measurements of post-exposure attitudes and intentions (notoriously unreliable). Many see these brand lift studies as rife with data collection challenges and ultimately of dubious value. Just this year, Professor Paul Lavrakas on behalf of the IAB released a critical assessment of the rampant practice.

Parsing The Metrics
It is bizarre that many digital marketers insist on defining viewthrough rates in conversion terms while clickthrough rates are always measured separately from subsequent conversion rates. Mixing metrics has confused the matter and effectively left viewthrough held to the higher standard of conversion. Ironic, since very few clickthroughs (in volume and rate terms) even result in conversion.

While “clickthrough rate” is always understood to be relative to impressions (# of clicks / # of impressions), “viewthrough rate” seems to have skipped the middle response step and gone all the way to conversion. That doesn’t make sense when there are so many other factors that influence the purchase decision after arriving on a Web site.

To be very specific, viewthrough rate (VTR) should be calculated analogously, i.e. # of (logical) viewthroughs / # of impressions. "Logical" means that the viewthrough is observed where a branded post-exposure visit is most likely to happen, analogous to the target landing page of a click-through; usually this means the brand.com home page. A quick worked comparison follows.
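For example, a minimal sketch with hypothetical campaign counts shows the parallel construction:

```typescript
// Hypothetical campaign counts; both rates are normalized by impressions.
const impressions = 1_000_000;
const clicks = 800;          // measured clicks
const viewthroughs = 3_500;  // logical post-exposure visits (no click)

const ctr = clicks / impressions;        // 0.0008 -> 0.08%
const vtr = viewthroughs / impressions;  // 0.0035 -> 0.35%

console.log(`CTR: ${(ctr * 100).toFixed(2)}%, VTR: ${(vtr * 100).toFixed(2)}%`);
```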

Measurement Details
The real problem underlying the apparent confusion is that viewthrough measurement invokes several raging, simultaneous, inter-related and often technical debates: branding vs. response, optimization, cookie deletion, cookie stuffing, panel recruitment bias, correlation vs. causation and last-click attribution. Any one of these arguments can cause a fight.

Nonetheless, in defining what viewthrough actually means, it is helpful to outline the two basic ways of measuring view-through:

  1. Cookie-based: This is a browser-server technique that relies on cookie synchronization between the ad server and the target brand site. When the user receives the ad, a cookie is set on their browser; it is later recognized upon a visit to the target site and matched via special page tags back to the associated campaign (see the sketch after this list). There are several ways this can be done, e.g. DART for Agencies (DFA)/Atlas/Mediaplex page tags, ad server integrations (Omniture) or ad unit ride-along pixel tracking (Coremetrics). Optionally, PSA campaigns can be run alongside a campaign for a simultaneous test-control comparison of "true" viewthrough lift; essentially you can measure the baseline amount of viewthrough traffic that would end up at the site anyway. Downside: subject to browser cookie limitations.
  2. Panel-based: Alternately, a standing Internet behavioral panel can be utilized, e.g. ComScore and Compete. In this approach, two comparable groups are observed: an exposed test group and an unexposed control group that represents the baseline viewthrough. The difference between the rates by which the test group (exposed to the ad campaign) and the control group (which received PSAs or others' ads) subsequently visit the target site reveals the lift that is explained by the presence of display advertising. This method may also include ad or page tracking, but does not require cookies. Downside: subject to panel bias.
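Here is a minimal sketch of the cookie-based matching logic described in the first method; the names, cookie shape and lookback window are illustrative, since real ad servers (DFA, Atlas, etc.) each have their own mechanics:

```typescript
// Illustrative exposure cookie set by the ad server at impression time.
interface ExposureCookie { campaignId: string; servedAt: number; }

function onAdServed(campaignId: string): ExposureCookie {
  return { campaignId, servedAt: Date.now() };
}

// Hypothetical 30-day lookback window (see the discussion of time below).
const LOOKBACK_MS = 30 * 24 * 60 * 60 * 1000;

// Page-tag side: on a brand-site visit, credit a viewthrough if an
// exposure cookie exists and falls inside the lookback window.
function creditViewthrough(cookie: ExposureCookie | null): string | null {
  if (!cookie) return null; // unexposed visitor: part of the baseline
  const age = Date.now() - cookie.servedAt;
  return age <= LOOKBACK_MS ? cookie.campaignId : null;
}

const cookie = onAdServed("display-campaign-42");
console.log(creditViewthrough(cookie)); // "display-campaign-42" -> viewthrough
```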

The Impact of Time
Next, an additional layer of viewthrough measurement worth mentioning is time, i.e. delayed response. Like traditional advertising media, display ads exhibit an asynchronous response curve where the effect of the advertising decreases over time. In our real-time data collection world, the common-sense realities of human behavior are often overlooked.

Many factors can impact the viewthrough response curve, including messaging, frequency, share of voice and creative execution, to start. And one size does not fit all: a considered purchase could reasonably have a longer shopping cycle than a CPG. Depending on the method of measuring view-through, 30 days or 4 weeks is often used as an initial "lookback window."

Et Cui Bono?
Although that was fairly straightforward, as soon as viewthrough is connected to a site conversion (through deeper page tracking), the thorny issue of attribution arises (and cookie-based measurement is implied). Viewthrough measurement often goes off on a tangent at this point because there are two layers to attribution.

  • Channel attribution is, simply put: which digital channel is assigned credit for the conversion event? Measuring display advertising happens to be more complex, and most site metrics tools have punted on tracking this capability. That means simpler response channels like paid search, natural search, affiliates, CSEs and email receive last credit by default (see the sketch after this list). For many marketers, measuring conversion attribution or participation gets complex, and often political, very quickly.
  • Media attribution gets really contentious, especially for lead generation and ecommerce-oriented marketers. Performance ad networks often insist on having their own special page tag in place where the conversion event occurs; in this way they can independently measure conversions and potentially optimize their ad delivery. The problem is that there are usually multiple ad network vendor tags on the conversion event page and all of them will count the page load as a conversion. Worse, this is an easy way for the ad network to shoehorn themselves a retargeting cookie pool. Unchallenged, media vendors may claim credit for everything, such that marketers end up overpaying for the same conversion. Alternately, some very Byzantine schemes have arisen to guesstimate credit.
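A minimal sketch of the last-touch default versus an even-weight fractional split, using a hypothetical conversion path, shows why the display exposure gets shortchanged:

```typescript
// Hypothetical conversion path: display view first, paid search last.
interface Touch { channel: string; at: number; }

const path: Touch[] = [
  { channel: "display-view", at: 1 },
  { channel: "natural-search", at: 2 },
  { channel: "paid-search", at: 3 },
];

// Last-touch default: 100% of credit goes to the final touchpoint.
const lastTouchWinner = path[path.length - 1].channel; // "paid-search"

// Even-weight fractional attribution: credit split across all touches,
// so the earlier display exposure finally gets some credit.
const fractionalCredit = new Map<string, number>();
for (const t of path) {
  fractionalCredit.set(t.channel, (fractionalCredit.get(t.channel) ?? 0) + 1 / path.length);
}

console.log(lastTouchWinner, fractionalCredit); // each channel gets ~0.333
```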

Despite all of the above, here is a working definition of a viewthrough for 2011:

Definition of View-through
Viewthrough is a measure of the passive but self-directed impact from a particular display ad unit (banner, rich media, video or audio). The viewthrough event follows one or more ad exposures and, when the ad unit is clickable, can be post-click (initial click visit timed out) or post-impression (with no click). Importantly, a viewthrough may or may not be associated with a purchase conversion event but must be associated with a target page load or other high-value action. VTR, or viewthrough rate, is calculated as # of viewthroughs / # of impressions.

Viewthroughs decay over time from ad exposure. In-flight viewthroughs are observed during the live ad campaign, while the post-flight "vapor trail" begins immediately after the associated ad is served.

Don’t like this definition? Come up with a better one or edit the above…and, the sooner the better or the industry might get stuck with this sketchy Wikipedia entry.