How to measure the unmeasurable

Sometimes you have a page that you want to optimize, but there’s no clear metric to determine success. For example, it might be a customer service or support page – no add-to-cart buttons or lead-generation form to give you hard data on whether you’re using the optimal design.

In such situations, if you’re trying to get customers to the information they need, you might use time on page to determine success: the less time on the page, the better the design. You got your customers to the information they needed quickly, and they left happy. But that’s an ambiguous metric on its own; it could be that the page design is not optimal and people are just leaving quickly in frustration because they can’t find what they need. AKA: a high bounce rate.

Another interesting way to measure such pages that have no clear metric of success is to incorporate survey data into your analysis. I have done this with survey providers like ForeSee and OpinionLab. Now imagine that a web page has an optimization test running. For simplicity’s sake, say it’s an A/B test of two page designs: 50% of traffic sees design X and 50% sees design Y. Or, Sarah sees page design X and Jim sees page design Y.

Sarah comes to the page and sees design X. She can’t find the information she needs and she’s frustrated so she answers the survey and gives the site a negative review. Jim, however, sees design Y and gets exactly what he needs in a few seconds. He answers the survey and gives a very positive review. Now multiply Sarah and Jim’s experience by 100. With a larger sample of data you get a better idea of how your page design performs.

By combining time on the page with survey results you get a better sense of which design is optimal. If design Y has a lower time on page metric and has better survey results you have two metrics that confirm each other and you can be more confident that you have a better design.
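To make the combined-metric idea concrete, here’s a minimal sketch in Python. All the visitor data below is invented for illustration; the point is just the shape of the comparison – one metric where lower is better (time on page) and one where higher is better (survey score), with the two confirming each other.

```python
# Sketch: scoring an A/B test with two confirming metrics.
# All visitor data below is invented for illustration.
from statistics import mean

# Hypothetical per-visitor records: seconds on page and a 1-5 survey rating.
design_x = {"time_on_page": [95, 120, 88, 140, 110], "survey": [2, 1, 3, 2, 2]}
design_y = {"time_on_page": [40, 35, 52, 44, 38], "survey": [5, 4, 4, 5, 4]}

x_time, x_survey = mean(design_x["time_on_page"]), mean(design_x["survey"])
y_time, y_survey = mean(design_y["time_on_page"]), mean(design_y["survey"])

print(f"Design X: {x_time:.0f}s on page, survey {x_survey:.1f}/5")
print(f"Design Y: {y_time:.0f}s on page, survey {y_survey:.1f}/5")

# Less time on page AND a better survey score: the metrics confirm each other.
if y_time < x_time and y_survey > x_survey:
    print("Design Y wins on both metrics")
```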

’tis the season

In my game we call it “seasonality”: the term describes user behavior that can differ wildly during certain periods of the year. This should not be a surprise to seasoned digital marketers, but what’s interesting is that it can affect products and services that one would not normally suspect.

I conducted a test for a loyalty product where the value proposition to the user was the ability to accumulate loyalty points for both airline flights and hotel stays. If you link your hotel loyalty account with your airline loyalty account you can double your rewards. The test was launched 10 days before Christmas and allowed to run for an additional 10 days after Christmas. The test content was a series of banner images – very straightforward. Which image would have the most impact on getting users to link their two loyalty programs?

10 days before Christmas: 0% lift in linked accounts. Some of the challengers had a slight positive or negative lift, but none had any statistical significance.

10 days after Christmas: 15% lift in linked accounts with 97% statistical confidence for the best performing banner image.
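For anyone who wants to see how a confidence figure like that is computed, here’s a rough two-proportion z-test sketch in Python. The visitor and conversion counts are invented (they are not the actual test numbers); only the shape of the calculation matters.

```python
# Sketch: two-proportion z-test for lift in a KPI (e.g. linked accounts).
# Visitor and conversion counts below are invented for illustration.
from math import erf, sqrt

def lift_and_confidence(conv_a, n_a, conv_b, n_b):
    """Relative lift of B over A, plus two-sided statistical confidence."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    confidence = erf(abs(z) / sqrt(2))                # two-sided, normal approx.
    return (p_b - p_a) / p_a, confidence

lift, conf = lift_and_confidence(conv_a=500, n_a=25_000, conv_b=575, n_b=25_000)
print(f"lift: {lift:.0%}, confidence: {conf:.0%}")
```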

This is an interesting result, but beyond the test result itself we can infer some best practices for this particular line of business. Since the number of actions – linking the two accounts – was roughly the same in the 10 days before and after Christmas, it wasn’t that people were too busy in the run-up to the holiday to take action; they linked accounts with equal frequency before and after Christmas. What changed was the users’ willingness to be marketed to. Of course, some of this could be because of the busyness of the season: users just want to get the task done and don’t linger on subconscious messaging.

Business lesson: take a look at other campaigns and channels for these two time periods. You might just find that marketing spend is far less effective in the two weeks before Christmas and that budgets should be adjusted accordingly: either curtail the spend before Christmas or shift it to the period after Christmas, when users are more open-minded about what you have to offer.


Showrooming stats

I just wanted to share some mind-blowing stats from the Get Elastic post on showrooming.

71% of US mobile users own smartphones, and 81% of them use their devices in-store.

92% of consumers who showroom have used Amazon to compare prices, compared to 84% who’ve used Google and 77% who’ve used price comparison sites.

50% of male and 42% of female consumers who showroom are members of Amazon Prime.

67% of showroomers will buy from a physical store over Amazon when the store matches Amazon’s price with a rebate.

73% of consumers expect a retailer’s online pricing to be the same in-store, and 61% expect online promotions to be the same in-store — yet only 16% of top global retailers have price parity, and 73% offer the same promotions.

Amazon changes prices every 10 minutes on average.

41% of customers who showroom end up buying elsewhere.

Mobile strategy

A client asked for some help fleshing out their mobile strategy and in my research I found some interesting statistics and cautionary tales I’d like to share.

KPIs: if you’re selling something through a mobile eCommerce store, orders and revenue are clearly the metrics of choice. If you’re looking to collect user data, successful completion of the form is your goal. But for advertising campaigns, it’s a little trickier. The standard metric is click-through rate (CTR), meaning: of those users who saw the ad, what percentage clicked it? Well, guess what? It’s estimated that 50% of mobile ad clicks are accidental. I know that seems like a lot until you think about it. Small screens, big fingers and no physical button to press are perfect conditions for accidental tapping. Then there’s the babysitter factor. Since an iPad or iPhone is now a common plaything for toddlers and children of all ages, how many of them are stabbing their little fingers onto things they don’t even understand? Better mobile ad metrics, if you don’t have a clear-cut sale or form registration to rely on, would be time on site and bounce rate. You also might want to intentionally build a second step into your mobile landing page: step 1 is clicking the ad, step 2 is hitting the landing page where the value proposition is reinforced, and step 3 is tapping through to the meat of the content. That step-3 tap is your success metric for the ad that was seen in step 1.
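Here’s a minimal sketch of that step-3 idea: compare raw CTR against a “qualified” rate that only counts deliberate tap-throughs to the content. All event counts below are hypothetical.

```python
# Sketch: judging a mobile ad by step-3 tap-throughs instead of raw CTR.
# Event counts below are hypothetical.
funnel = {
    "ad_impressions": 100_000,
    "ad_clicks": 2_000,      # step 1: includes accidental taps
    "landing_views": 1_900,  # step 2: landing page reinforces the value prop
    "content_taps": 600,     # step 3: deliberate tap-through to the content
}

raw_ctr = funnel["ad_clicks"] / funnel["ad_impressions"]
qualified_rate = funnel["content_taps"] / funnel["ad_impressions"]

print(f"raw CTR: {raw_ctr:.2%} (inflated by accidental clicks)")
print(f"qualified rate: {qualified_rate:.2%} (step-3 successes per impression)")
```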

Platform segmenting: Android and iOS users are not the same. Not even close. You need to break them out and figure out what works for each audience. There are 18% more users aged 18 to 24 on iOS devices. 39% of Android users make less than $50,000/year vs. 23% for iOS users. iOS users are 35% more likely than Android users to engage in m-commerce. iOS users engage in all activity more on their devices vs Android from the banal like checking weather to the more sophisticated tasks like mobile banking.

Browser segmenting: Safari’s market share is dropping while Chrome’s is rising, and Android browser usage has remained steady in the last year. No other browsers matter in mobile. This implies that iOS users are switching to Chrome. Now who would do that? Safari works fine. The younger, techier and likely higher-earning users, that’s who. I bet if you took a sample of users on Chrome and iOS you’d find an audience much more willing and able to complete a task on a smartphone.

Location segmenting: location matters, yes, but in densely populated areas it matters less. You want to target urban users first because they have the data networks to make mobile a pleasant experience, and a handful of urban areas control a huge amount of GDP.

However, there’s a lot of demographic variety. Imagine, if you will, a Google exec hailing a cab on 14th Street in Manhattan. To the location servers, the cabbie behind the wheel and the Google exec in the back seat look like the same prospect, but clearly they are very different in terms of comfort with technology and earnings. However, if you then took all the people in that high-earning zip code and targeted just the users on Chrome and iOS, you would stand a much greater chance of getting to that Google exec who is your target audience.

Sample size and significance

I recently conducted a test for an electronics retailer whose focus is on selling protection plans and accessories for computer products like iPads and laptops. Having worked in analytics for a major electronics manufacturer, I know the profit margins on hardware (save Apple products) are astonishingly thin. Like paper thin. Like grocery-store thin. The fat margins are on the accessories and protection plans. So we ran a common-sense test and made the area on product detail pages for laptop and tablet accessories and protection plans more prominent.

Now, the take rate for these products – particularly the critical protection plans – is very low. The test, if run according to the rules I learned from my stats professor (Art Vittorito, you were my favorite professor!), should have had a sample of at least 100 actions on the KPI per creative treatment or user experience. That means this test would have had to run for months on end to reach that goal.

This was a multivariate test, so you look at the results at the variable level as opposed to the creative level. The results for one variable were so off-the-charts positive (over 100% lift in desired user action) that even though the sample was lower than desired, you just have to run with it. Yes, it is possible to flip a coin and have it come up heads 20 times in a row. But 40 times in a row? 60 times? And the contributing factor was that it just made sense. The KPI content was part of a series of tabs. We brought that tab to the far left, where natural eye scanning starts (we read left to right, of course). So the insane positive lift numbers + the duh factor = calling the test before a statistically sound sample is in.
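The coin-flip intuition can be checked with an exact binomial tail. The counts below are hypothetical – 30 KPI actions on the treatment vs. 15 on the control with equal traffic, roughly a 100% lift on a small sample – but they show how a lopsided split can be statistically convincing even before the textbook sample size is reached.

```python
# Sketch: exact binomial check of how unlikely a lopsided split is.
# Counts are hypothetical: 30 treatment actions vs. 15 control actions.
from math import comb

def binom_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of a split at least
    this lopsided if both experiences actually performed the same."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Under the null, each of the 45 total actions is equally likely to land
# on either experience; 30+ on one side is the lopsided outcome.
p_value = binom_tail(30, 45)
print(f"chance of a split this lopsided by luck alone: {p_value:.3f}")
```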

Sometimes you don’t have to wait.

Old and in the way? Great!

One day I was analyzing an email campaign weeks after the campaign had dropped and learned something really interesting. I usually grab the results of a campaign a few days after it drops so any insights or learnings can be built into the next campaign. But I had been on vacation for a week, came back to over 800 emails, and by the time I dug out, this campaign was 3 weeks over.

But it wasn’t “over”.

Pulling the results, I noticed that this 3-week-old email was still getting significant clicks. And there were small spikes in clicks on the same day we dropped our weekly emails. It became very apparent that these emails were still hanging around in people’s inboxes, and when recipients get the next weekly email it reminds them to open previous weeks’.

So that old email, just sittin’ there takin’ up space like the octogenarian at the diner who only orders coffee but uses the booth for 3 hours – actually has value.

So I did a little study of three different emails on 3 different web properties in 3 different countries and found (I can’t give you specifics) that between 10 and 20% of clicks came 7-30 days after the initial email drop. Also, 9 to 14% of the total clicks were from unique clickers, meaning the vast majority of these old clicks were from people who had not opened the email before. So this is fresh content to them.
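The analysis itself is simple to sketch: bucket clicks by days since the drop and measure the long tail. The click counts per day below are invented for illustration, not the study’s real numbers.

```python
# Sketch: splitting email clicks by days since the drop to measure the
# "old email" long tail. Click counts per day are invented.
clicks_by_day = {0: 500, 1: 220, 2: 90, 3: 40, 5: 25, 7: 20, 10: 18,
                 14: 22, 17: 15, 21: 19, 24: 12, 28: 9}

total = sum(clicks_by_day.values())
late = sum(c for day, c in clicks_by_day.items() if 7 <= day <= 30)
print(f"clicks arriving 7-30 days after the drop: {late / total:.1%} of {total}")
```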

So what’s the lesson here? This begs for a test. I would do an immediacy vs. mellowness test. If up to 20% of openers are opening after 7 days, then any offer that’s “limited time” would be a message lost on them. Conversely, a long-running offer could inspire a reaction of “this email is OLD. I can’t believe this offer’s still good!”

Here’s the test I would conduct: one email has a very strong message – Limited time offer! It should be an aggressive offer of high value, good for 48 hours only. The other email should have a less aggressive offer that’s good for 30 days. Who knows? Maybe the energy and spryness of a live-hard, die-young marketing message is actually trumped by an email that’s old and in the way.

The Zombies are here!

Today’s post is not about getting good Omniture reporting; it’s about stopping Omniture reporting. Sometimes more data is worse than no data at all.

For most companies getting Omniture code off a site is not a problem. It’s a call to the IT department. But what about businesses with partners and affiliates? What about internationalized sites? Sometimes it’s harder to get the Omniture code off of a site than it is to get it on.

I once worked for a large multinational that did business in 122 countries. Somehow, it became common practice for foreign developers to take the Omniture code from the US site (my bailiwick), copy and paste it onto their site, and add a new reporting suite for their own purposes. This kind of development sends reporting data to both the US Omniture suite and the foreign Omniture suite. So when I’d see an overnight jump in, say, laptop sales, I’d start my analysis of what drove the traffic and find out that a web site in Poland had launched with US Omniture code on it. Not only does it inflate my reporting, but you’re stealing my server calls! After all, Omniture gives you only so many per month.

In a company that does business in 122 countries just try and get the web marketer for Eastern Europe on the phone. Not easy. And once he’s on the phone and speaking his native Hapsburgian tongue how do you explain “dude, get my Omniture suite name off your site”?

The Omniture contract states that if the relationship between the client (you) and Omniture is terminated but the client still has Omniture code on her site or sites and is sending server calls to Omniture’s database, Omniture can still charge you. That makes perfect sense. Those thousands or millions of data points do have a storage cost. So take a long-term view and don’t create zombies.

Yes, zombies. Even dead web sites can live on, and while they may not feed on brains, they do feed on server calls. I worked for a media company that had content platforms that they sold to partners all over the world. They’d put up a partner-branded site, plug in my company’s content distribution system and share the revenue. They had the eyeballs and drove the traffic; we had the goodies to sell. Hundreds and hundreds of partnerships were created over an 8-year period.

Sometimes these business partnerships would die. They would bury the corpse of a site by taking it out of the navigation and stopping all marketing efforts, but the zombie site would live on. Not robust and spry, mind you, but still out there, still sucking on server calls slowly and methodically with its arms stiffly outstretched. But how does the beastly zombie site stay alive if nobody can find it? Bookmarks, search engine caches, old links buried in forums. You’d be surprised how much traffic a site that was “taken down” can generate years later.

Now you think finding a foreign manager in your own company is hard. Try finding the person in Italy who is a new hire, hasn’t heard of this 3-year-old dead partnership, and has the power to make the call to IT to do the work – and IT prioritizes it somewhere just above cleaning the rings out of the coffee pot. Those are really, really strong zombies. And you might as well abandon that plan to find supplies, tools and weapons and get to that well-forested island in the river with the class 4 rapids all around it. Crap, what happens in winter when the river freezes?! Need another plan!

So what’s the solution? If this sounds like your company, go talk to the legal department. If they have affiliate and partner agreements, they probably have a legal boilerplate they use. Get them to include a clause stating that within 30 days of the termination of the business agreement, all Omniture tagging that leads to server calls in your reporting suites must be removed. After 30 days, the partner will be billed for those server calls at whatever your Omniture cost per million server calls is.

Good luck zombie killers!

Careful of complexity

I worked on a site for a major web portal – I’m not going to say which one, but it may or may not have a name like a certain creature in a little book called Gulliver’s Travels – and we had our Omniture analytics code on this site. It got traffic, lots of traffic, page views in the millions per day. Then one day it started getting 20 times the normal amount! Because the company had dozens of sites and only a handful of site managers, nobody noticed this enormous spike in traffic for a week. The fixed number of server calls we had purchased from Omniture for the month was eaten up in under a week. Costs were being incurred. Senior managers were freaking out.

As the analytics guy, I put a debugger on the site and frantically traced the problem to the specific products and areas of the site that were madly sending a fire hose of image requests to Omniture. I sent that off to the programmers so they could quickly identify the problem in the code.

You know what it wound up being? A single extraneous space at the end of a certain JavaScript call. Just one space cost the company thousands of dollars.

Another time an agency built a Flash application for a particular customer acquisition campaign. This piece of Flash had dozens of buttons and screens, video that played inside it, links to downloadable content… it was complicated. But not as complicated as their measurement plan. They wanted to measure everything – and I mean everything – to come up with some hooey “engagement” metric that was supposed to justify the agency’s enormous fees. But the measurement plan and the resulting JavaScript were so complicated that nearly nothing was measured at the end of the day. The campaign cost tens of thousands of dollars and we couldn’t tell how many leads were generated.

What’s the moral here? Complexity breeds costs.

Whenever an analytics package or measurement plan is being designed I like to follow the KISS rule: keep it simple, stupid. On most projects I can limit my plan to three basic questions:

1- What are our “KPIs plus”? “KPIs plus” meaning: what do we use to judge the project a success or failure, and what additional learnings can we take away for future projects?

2- What is actionable? What can the data tell us so we can tweak, test and improve ROI?

3- What is affordable? The more metrics, the more server calls. Watch out, they can come back and bite cha.

Omniture, WebTrends and Google Analytics = Ford, Chevy and Chrysler.

What kind of car did you learn to drive with? For me, growing up in the country, I learned to drive on a Ford F-350 flatbed truck with 4 wheel drive and a manual transmission with 6 forward gears and two reverse. It was big, loud, powerful and dirtier in the cab than it was on the fenders. You might have learned on the family sedan, a Chevy perhaps. But once you learn to drive a Chevy does that mean you can only drive Chevys? Of course not. If you can drive a Chevy you can drive a Ford, a Toyota or even a Bentley.

It’s the same with web analytics packages. But short-sighted managers and HR departments don’t see it that way.

I’ve used Omniture, WebTrends and fooled around with Google Analytics on this site. If I know how to track and analyze an email campaign in Omniture I can do it in WebTrends even if I’ve never seen the package before. It’s the concepts that matter, not the interface.

But I’ve spoken with recruiters, managers and HR reps who absolutely insist you have X years working in analytics package Y. That’s dumb, and it limits the pool of candidates. So, this is a plea to those who are the hiring decision makers (almost used ‘deciders’, couldn’t do it): open your minds. A good analyst is someone who finds insights, pulls learnings from obscure data and deeply understands KPIs. It’s not someone who is familiar with a particular software’s interface.

Start hiring smart.

Is social media worth it?

Those of us in online marketing have been wondering for years whether social media is worth it. Oh, don’t get me wrong, I’m not saying that social media is a fad. No, my friends, the era of sharing is certainly here to stay. But for businesses, is it worth it? Is there a return on spending X amount of man-hours keeping a Twitter account lively or a Facebook page engaging?

I’ve read studies that put the value of a Facebook fan at $12 and others at $0.12. We’ve all heard about using Twitter as an alternative customer support tool and keeping customers happy. But I can tell you from personal experience that Twitter can also be used as a megaphone by an unhappy customer. And bad or scandalous content is going to be shared far more than good content. Which of the following do you think would be more likely to be retweeted?

A: The new Purina Oatmeal flavor Mango Madness tastes great!
B: The new Purina Oatmeal flavor Mango Madness tastes like flakes of cardboard floating in dumpster juice!

Sharing can be daring.

But, there’s one reason why a social media presence must be actively maintained: they give good SERP.

Google me. Do it. Google “Chris Hedick” right now. Where are Twitter and Facebook in the results? Right at the top. Because of Google’s smothering love of social media sites, a good social media strategy is essential.

But Chris, what’s a ‘good’ social media strategy? Don’t I just put up a Facebook fan page and let people rave about my company or product? No.

A good social media strategy has structure. A good social media strategy has a content strategy. Here’s one that I recommend:

1- Do your keyword research. An earlier post on end-user vernacular got into this a little bit, but it’s worth repeating. Find out how your users are searching. This is more art than science, but by combining Google AdWords, internal search analytics and even Twitter itself you can quickly find out how people talk about your product. I consulted for a friend who was helping to clean up some of this housing crisis debacle. Two terms were being used all the time in the media to explain the situation of the people he was trying to help: “underwater mortgage” and “upside down mortgage”. Which to concentrate on? Plug ’em into Twitter and see how frequently each one is tweeted. The more frequently used term is the winner.
2- Now that you know how your users speak, talk to them in that language via social media. Use those terms in posts, YouTube videos and tweets in a subtle and natural way.
3- Have a blog or landing site specifically devoted to receiving people interested in those terms. Then make sure that the social media content links to that blog or other content that is relevant to the terms. And make sure that the blog has a content strategy that emphasizes those terms.
4- Link out as well. Let’s say you make marshmallows. Doing some research revealed that the number one reason people buy marshmallows is to make Rice Krispie treats. Linking your blog post about Rice Krispie treats to a Rice Krispie recipe site isn’t going to hurt you; it’s only going to up your SERP rank for the term “Rice Krispie treats”.
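Step 1 above – picking the winning term by how often users actually use it – can be sketched in a few lines of Python. The sample posts below are invented for illustration; in practice you’d feed in real tweets or search queries.

```python
# Sketch: picking between two candidate terms by counting occurrences in a
# sample of user text (tweets, search queries). The posts are invented.
sample_posts = [
    "Help, my underwater mortgage is killing me",
    "anyone refinance an underwater mortgage lately?",
    "upside down mortgage -- what are my options?",
    "so underwater on my mortgage it's not funny",
]

terms = ["underwater mortgage", "upside down mortgage"]
counts = {t: sum(t in post.lower() for post in sample_posts) for t in terms}
winner = max(counts, key=counts.get)
print(counts, "->", winner)
```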

So to answer the question “is social media worth it?” I’d have to give a qualified and yet emphatic yes.

Social media is a powerful weapon that can target carefully and powerfully, like a sniper rifle. But just throwing stuff up on Twitter without a well-thought-out strategy is like trying to hit your customer at 500 paces with a Nerf gun.