Best practice is often pooled ignorance

Someone told me that the other day – “best practice is often pooled ignorance”. I laughed in tacit agreement. Best practice is what the herd thinks is right. It’s what the thought leaders think is right. And often it is right. But not always. In 2015 many were laughing at the idea of a viable Trump candidacy for President. Now he’s the one laughing.

Here are a few examples of “best practice” that aren’t always best practice.

It’s considered best practice to put call-to-action buttons above the fold on landing pages. You want users to see that big shiny button straight off and know that’s what they need to click before they scroll anywhere. This is HubSpot’s recommendation; I’m HubSpot certified, and that’s what I believed.

However, such a recommendation takes neither the product nor the audience into account. Consider the landing page or lead-generation form for a nursing home for an elderly parent. In such an emotionally charged purchase decision, would it make sense to scream “buy now!” with a bright button above the fold? No, it would run counter to the customer’s frame of mind at that point in the journey. It might even seem gaudy. When a sensitive, emotional purchase decision is being weighed, you have to emphasize trust, build rapport, and appear NOT to be chasing the sale. In such a case you may want a landing page with several paragraphs about the high level of care and the difficulty of making such a decision, followed by a simple text link as the call to action. It seems crazy, but by de-emphasizing conversion you may wind up increasing conversion.

Another example: the 5-second rule of landing and home pages. Best practice says if you don’t communicate the idea or the product’s value in 5 seconds, you lose the customer. Best practice says copy should be short and to the point because people don’t like to read. After all, we have the acronym TL;DR – too long; didn’t read. However, if you look at almost any stock or investing landing page for disreputable penny stocks or get-rich-quick sites, you will find they are incredibly long, incredibly verbose and almost intentionally annoying to navigate. BUT THEY ALL DO IT. If they all do it, it must be working. There’s something about the get-rich-quick mindset that makes it more attractive to bury the value proposition in a mountain of text.

Even the more reputable purveyors of riches-through-stocks advice like the Motley Fool make it difficult to get to the meat of the matter in their marketing. The Motley Fool drives users from email to a landing page with an auto-playing video that has no controls. You can’t pause it, you can’t fast-forward or rewind. It’s several minutes long, and you just have to sit through it to get to the heart of the pitch. Is this a marketing mistake? Possibly, but I bet they tested it and found that this is what works best for conversion.

So remember, best practice isn’t always best. Always consider the audience, the psychology of the purchase decision, and the product – and ABT: always be testing.

Reward framing and offer perception

I used to do a lot of offer testing for a company with 35 e-commerce sites. Offer tests work like this: traffic is split 50/50; one group sees one particular offer to buy, say “buy 2 get 1 free”, while the other group is given a different offer, “33% off your order when you buy 3 items”. Now, if the customer buys exactly 3 identically priced items, the discount to the customer is (almost exactly) the same. The perception of the offer, however, is not. We tested this very offer combo on two different sites during two different times of the year. Each time, “buy 2 get 1 free” outperformed “33% off your order when you buy 3 items”.
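To see why the two offers are economically near-interchangeable, here’s a quick sketch – the $30 unit price is hypothetical:

```python
# Sanity check of the two offers for a basket of exactly three
# identically priced items (the $30 price is hypothetical).
price = 30.00

# Offer A: "buy 2 get 1 free" -- you pay for 2 of every 3 items,
# an effective discount of exactly 1/3.
pay_a = 2 * price                # 60.00

# Offer B: "33% off your order when you buy 3 items" -- the flat 33%
# only approximates offer A's 1/3 discount.
pay_b = 3 * price * (1 - 0.33)   # about 60.30

print(pay_a, pay_b)  # near-identical economics, very different perception
```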

In a test done in the physical realm, car wash loyalty cards were put to the test. Customers at a car wash were given one of two kinds of loyalty card. We’ve all seen these cards: they have little icons on them, and the business owner punches them or adds a sticker to mark each purchase. One card said buy 8 washes, get the 9th free. The other said buy 10, get one free – but two of the purchase icons were already punched out. Both cards therefore have exactly the same value: each requires 8 purchases to earn a free wash. Yet only 19% of customers redeemed the buy-8-get-1-free card for a free car wash, while 34% redeemed the buy-10 card with two purchases pre-punched.

That’s a pretty big difference. So, if you’re going to offer the customer a reward of some type, consider re-framing that offer in another way and see which one converts at a higher rate.


How to measure the unmeasurable

Sometimes you have a page that you want to optimize but there’s no clear metric to determine success. For example, it might be a customer service or support page – no add-to-cart buttons or lead-generation form to give you hard data on whether your design is optimal.

In such situations, if you’re trying to get customers to the information they need, you might use time on page to measure success: the less time on the page, the better the design, because you got customers to their information quickly and they left happy. But that metric is ambiguous on its own – it could be that the design is not optimal and people are leaving quickly in frustration because they can’t find what they need. AKA: a high bounce rate.

Another interesting way to measure pages with no clear metric of success is to incorporate survey data into your analysis. I have done this with survey providers like ForeSee and OpinionLab. Now imagine a web page with an optimization test running. For simplicity’s sake, say it’s an A/B test of two page designs: 50% of traffic sees design X and 50% sees design Y. Or, Sarah sees page design X and Jim sees page design Y.
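For the curious, here’s a minimal sketch of how that 50/50 split is often made deterministic – the visitor IDs are hypothetical, and in practice your testing tool handles this for you:

```python
import hashlib

def assign_variant(user_id: str, test_name: str = "support_page_test") -> str:
    """Deterministically bucket a visitor into design X or Y, so the
    same visitor sees the same design on every visit."""
    digest = hashlib.md5(f"{test_name}:{user_id}".encode()).hexdigest()
    return "X" if int(digest, 16) % 2 == 0 else "Y"

# Hypothetical visitor IDs -- each gets a stable 50/50 assignment.
print(assign_variant("sarah@example.com"))
print(assign_variant("jim@example.com"))
```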

Sarah comes to the page and sees design X. She can’t find the information she needs and she’s frustrated so she answers the survey and gives the site a negative review. Jim, however, sees design Y and gets exactly what he needs in a few seconds. He answers the survey and gives a very positive review. Now multiply Sarah and Jim’s experience by 100. With a larger sample of data you get a better idea of how your page design performs.

By combining time on page with survey results you get a better sense of which design is optimal. If design Y has a lower time-on-page metric and better survey results, you have two metrics that confirm each other, and you can be more confident that you have the better design.
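Here’s a minimal sketch of that roll-up – the visit records and field names are hypothetical:

```python
from statistics import mean

# Hypothetical visit records: which design was seen, seconds spent on
# the page, and an optional survey score (1 = very negative, 5 = very
# positive; None = visitor skipped the survey).
visits = [
    {"variant": "X", "seconds": 210, "score": 2},
    {"variant": "X", "seconds": 185, "score": None},
    {"variant": "Y", "seconds": 40,  "score": 5},
    {"variant": "Y", "seconds": 55,  "score": 4},
]

for variant in ("X", "Y"):
    rows = [v for v in visits if v["variant"] == variant]
    scores = [v["score"] for v in rows if v["score"] is not None]
    avg_time = mean(v["seconds"] for v in rows)
    avg_score = mean(scores) if scores else None
    print(variant, "avg seconds on page:", avg_time, "avg survey score:", avg_score)

# The design with LOWER time on page AND HIGHER survey scores is the
# stronger candidate -- the two metrics confirm each other.
```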

Button psychology

For the vast majority of task-oriented online marketing, it all comes down to the button. The customer journey may have started with a prospect clicking a link in Google’s search results. That took them to a landing page where we expressed the value proposition of our product, which may have led to a showcase page with more information and glossy content. And then we ask them to click a button: an add-to-cart button, a download button, a register button. Please, dear customer, just click this button.

In my years of digital marketing and testing two things have become apparent.

1. When dealing with forms, especially one-page forms, buttons indicating that the process will be over with that click are almost always the most effective at converting the customer.

The marketing psychology of this is pretty obvious. Consider the word “submit” for a button. It’s vague. It communicates sending digital information through the ether, but then what? Now consider the dreaded “continue” button. That communicates that there’s more form to come – that punching in your personal information, a process most people hate, is ongoing. You could be doing this until dinner time!

Now consider the words “finish” and “complete”. They signal that you’re done: this is the end, don’t abandon the form now, you have just a few more boxes to tick and then you get the satisfaction of having completed the task. In my experience, “finish” and “complete” can lead to considerable increases in form completion.

2. When dealing with e-commerce sites, yellows, oranges and reds are the button shades that convert best.

Walmart.com’s add-to-cart buttons are orange, Amazon’s is a dark yellow, and Target’s is of course red – though that’s also in line with their branding. This is no accident. I am sure they test this all the time. Orange is empowering and stimulating and reinforces what I like to call the “warm glow of consumerism” – that feeling of excitement you get when you pull the trigger on buying some cherished item.

Of course, you should always be testing buttons, but if you’re looking for quick wins in form completions or purchases, the two suggestions above are likely winners.

The power of one word, or, why copy matters.

The Facebook story that follows is taken from an excellent episode of RadioLab (they’re all excellent), and I encourage one and all to listen to it.

Facebook had a problem: users were reporting photos as abusive or offensive when they weren’t. Why? People were posting innocuous group photos of family and friends, and it turned out that others in the photos simply weren’t happy the photo had been posted. They may not have liked their pose or their facial expression, or simply didn’t want it public. Having no other way to get a photo taken down, these self-perceived victims were reporting the photos as offensive content.

Facebook created a solution: they asked users to select from a drop-down of options describing how the photo made them feel. The options included things like “embarrassing”, “upsetting”, “bad photo” and “other”, along with a text box to describe “other”. When the feature was released, 50% of users selected one of the named emotions and 34% selected “other”. But among the people who chose “other”, the number one typed-in response was “it’s embarrassing” – a slight variation on one of the drop-down choices.

In light of this data Facebook tweaked the interface. Instead of just “embarrassing” and “upsetting”, the choices became “it’s embarrassing” and “it’s upsetting”. This raised the share of people who selected one of the listed emotions from 50% to 78%. One word – “it’s” – increased interaction with that content by over 50%. By including “it’s”, the blame shifts from a person to an object: I’m not embarrassed, it’s that photo that is embarrassing.

I recently ran a copy test in the global navigation bar of a major electronics manufacturer. The global navigation had 5 options, including two items: “Discover” and “Shop”. Now, if you’re looking for a new widget from this company, which would you choose to learn more about it? You aren’t ready to buy just yet, so you might want to “Discover” more information about it. But when you go to “Discover” you find glossy articles about other products from the company, not the one you want. “Shop” is actually the menu item you want – it’s where all the products are listed by category and where you can drill down to the product detail information. So we changed “Shop” to “Shop Products” and saw a 54% increase in interactions with that menu item and a 35% decrease in interactions with the “Discover” menu item.

A travel business client has a gift card line of business where you can buy someone a card with a dollar amount attached, good for travel anywhere. The default amount shown (you can change it, but the dollar figure is very prominent) is $1000. I suggested a Valentine’s Day promotion with the headline theme “send the ones you love to the places they love”. It’s a message of giving something you know the recipient will enjoy, and the word “love” appears twice – a powerful emotion and a powerful word. The client’s marketing department came back with a different approach: “venture big this Valentine’s Day”. The word “venture”, as in “venture capital”, means to take a dangerous risk. The first definition from dictionary.com is below.

Venture, noun. 1. An undertaking involving uncertainty as to the outcome, especially a risky or dangerous one: a mountain-climbing venture.

So the message is: spend a lot of money ($1000 suggested on the page) on the outside chance that your loved one will appreciate it. The test did indeed fail, and my suspicion is that the word “venture” is primarily responsible.

And so you see, a single word can have a powerful effect. Take care with your copy and test it rigorously.


’tis the season

In my game we call it “seasonality” – user behavior that can differ wildly during certain periods of the year. This should be no surprise to seasoned digital marketers, but what’s interesting is that it can affect products and services one would not normally suspect.

I conducted a test for a loyalty product whose value proposition was the ability to accumulate loyalty points for both airline flights and hotel stays: link your hotel loyalty account with your airline loyalty account and you can double your rewards. The test launched 10 days before Christmas and was allowed to run for 10 days after Christmas. The test content was a series of banner images – very straightforward. Which image would have the most impact on getting users to link their two loyalty programs?

10 days before Christmas: 0% lift in linked accounts. Some challengers showed a slight positive or negative lift, but none reached statistical significance.

10 days after Christmas: 15% lift in linked accounts, with 97% statistical confidence, for the best-performing banner image.
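If you’re curious how confidence figures like that 97% are typically computed, here’s a minimal two-proportion z-test sketch. The counts below are hypothetical, not the actual test data:

```python
from math import erf, sqrt

def confidence_of_lift(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided two-proportion z-test; returns confidence (1 - p-value)
    that the conversion rates of A and B really differ."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # via normal CDF
    return 1 - p_value

# Hypothetical counts: control vs. best banner, a 15% relative lift
# (4.0% -> 4.6% conversion).
print(confidence_of_lift(conv_a=400, n_a=10_000, conv_b=460, n_b=10_000))
# ~0.96 at these sample sizes -- in the neighborhood of the test's 97%.
```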

This is an interesting result, but beyond the test itself we can infer some best practices for this particular line of business. Since the number of actions – accounts linked – was roughly the same in the 10 days before and after Christmas, it wasn’t that people were too busy in the run-up to the holiday to act; they linked accounts at an equal rate in both periods. What changed was users’ willingness to be marketed to. Some of this could be the busyness of the season: users just want to get the task done and don’t linger on the messaging.

Business lesson: take a look at other campaigns and channels for these two time periods. You might find that marketing spend is far less effective in the two weeks before Christmas, and budgets should be aligned accordingly: either curtail spend before Christmas or shift it to the period after, when users are more open-minded about what you have to offer.


Be the man with the plan.

Testing for testing’s sake is all well and good, but to really understand your customer you should have a strategic plan and calendar that create long-lasting lessons you can draw on. Here’s something I put together for a travel industry client that has a business line selling gift cards. This is a pretty simple plan because the client does not get a lot of traffic, and we have to reduce complexity in order to learn as much as possible as quickly as possible.

In December:

Q: Does holiday gift-giving content convert at a higher rate than adventure travel content?

Plan: run a 2×2 image-and-copy multivariate test (MVT): surfing girl image vs. Christmas tree image, crossed with the copy “I’m sending ____ to _____” vs. “Travel is under the tree this year” (a sketch of the recipe layout follows below). Then run a second confirmation test with similar but slightly different content. Because this is an MVT, you can see whether the holiday-themed copy paired with the surfing image converts better than the surfing image paired with the “I’m sending” copy, giving further confirmation of your findings.

Result: You find out if a holiday theme is the right one for this product. You now have a strategic understanding you can carry over to next year.
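To make the 2×2 concrete, here’s the recipe layout that plan implies – the variable names are mine, for illustration:

```python
from itertools import product

images = ["surfing girl", "Christmas tree"]
headlines = ["I'm sending ____ to _____", "Travel is under the tree this year"]

# A 2x2 MVT exposes every image/copy combination, which lets you
# separate the effect of the image from the effect of the copy.
for n, (image, headline) in enumerate(product(images, headlines), start=1):
    print(f"Recipe {n}: image={image!r}, copy={headline!r}")
# 4 recipes in total, with traffic split evenly across them.
```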


In January:

Q: Are cold-weather escape images (beach) better at converting than winter sports images (skiing), and for which regions of the country?

Plan: run a hero image A/B test with generic copy around “getting away” that applies equally to both. Create geolocation personas by region (New England, Southeast, etc.). Run a second confirmation wave with similar but slightly different hero images.

Result: You now know what kind of content appeals to which regional segment. You can carry this learning over to the next year.


In February:

Q: Now that we’ve figured out which kind of vacation appeals to which region, how can we optimize further?

Plan: using the regional segments and the winning hero image for each segment, do copy tests to determine the best approach.

Result: You have leveraged the last test to further refine and understand your customer on a much deeper level. Example: You now know that Midwesterners want to see images of the beach and copy that reflects family fun.

Sample size and significance

I recently conducted a test for an electronics retailer whose focus is selling protection plans and accessories for computer products like iPads and laptops. Having done analytics for a major electronics manufacturer, I know the profit margins on hardware (save Apple products) are astonishingly thin. Like paper thin. Like grocery store thin. The fat margins are on the accessories and protection plans. So we ran a common-sense test and made the area on product detail pages for laptop and tablet accessories and protection plans more prominent.

Now, the take rate for these products – particularly the critical protection plans – is very low. By the rules I learned from my stats professor (Art Vittorito, you were my favorite professor!), the test should have a sample of at least 100 actions on the KPI per creative treatment or user experience. That means this test would have had to run for months on end to reach that goal.

This was a multivariate test, so you look at results at the variable level rather than the creative level. The results for one variable were so off-the-charts positive (over 100% lift in the desired user action) that even though the sample was lower than desired, you just have to run with it. Yes, it is possible to flip a coin and have it come up heads 20 times in a row. But 40 times in a row? 60 times? And the contributing factor was that it just made sense: the KPI content was part of a series of tabs, and we moved that tab to the far left, where natural eye scanning starts (we read left to right, of course). So insane positive lift numbers + the duh factor = calling the test before a statistically sound sample is in.
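For the record, here’s the arithmetic on those coin flips:

```python
# The odds of a fair coin landing heads N times in a row are 0.5 ** N.
for n in (20, 40, 60):
    odds = 1 / 0.5 ** n
    print(f"{n} heads in a row: about 1 in {odds:,.0f}")
# 20 in a row is roughly 1 in a million; 40 is about 1 in a trillion;
# 60 is about 1 in a quintillion. At some point "the sample is small"
# stops being a plausible explanation for a huge, consistent lift.
```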

Sometimes you don’t have to wait.

Tesla segmenting

Mr. Musk’s patent surrender really has me thinking hard about Teslas and marketing segmentation. I can’t think of another auto brand that says as much about a person as Tesla – well, another auto brand with a product for sale under $100,000. We can all imagine the type who drives a $250,000 Lamborghini, and you probably don’t want to sit next to him on a long flight even if it is first class. But a Tesla? That tells me right off the bat you’ve got a lot of disposable income, you’re probably highly educated, interested in technology and environmentally sensitive.

If I had a business in the Hamptons or Greenwich or any playground of the 1%, the first thing I would do is go out and buy a Tesla Supercharger and market the bejeezus out of it. If you don’t know, Tesla is building a network of Supercharger stations that recharge a Tesla’s batteries in under an hour, eliminating the “range anxiety” of running out of charge before you can recharge your car.

What would I stock for these people? Solar cell phone chargers. Maybe some high-end gluten-free snacks. $10 organic artisanal scones. I’m not sure, but as I watch that Wall Street bond trader – who wants the world to know he’s successful but not too full of himself – exit that amazing car, I will most certainly keep track of what he’s buying. ABT: always be testing.

Old and in the way? Great!

One day I was analyzing an email campaign weeks after it had dropped and learned something really interesting. I usually grab the results of a campaign a few days after it drops so any insights or learnings can be built into the next campaign. But I had been on vacation for a week, came back to over 800 emails, and by the time I dug my way out, this campaign was 3 weeks old.

But it wasn’t “over”.

Pulling the results, I noticed that this 3-week-old email was still getting significant clicks, with small spikes in clicks on the same day we drop our weekly emails. It became apparent that these emails were still hanging around in people’s inboxes, and when the next weekly email arrives it reminds recipients to open previous weeks’.

So that old email, just sittin’ there takin’ up space like the octogenarian at the diner who only orders coffee but uses the booth for 3 hours – actually has value.

So I did a little study of three different emails on 3 different web properties in 3 different countries and found (I can’t give you specifics) that between 10 and 20% of clicks came 7-30 days after the initial email drop. Also, 9 to 14% of total unique clickers clicked only in that late window, meaning the vast majority of these old clicks were from people who had not opened the email before – so this was fresh content to them.
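Here’s a minimal sketch of how you might compute that late-click share from a click log – the data below is made up for illustration:

```python
# Hypothetical click log: days elapsed between the email drop and each click.
days_to_click = [0, 0, 0, 1, 1, 2, 2, 3, 4, 5, 9, 21]

late_clicks = [d for d in days_to_click if 7 <= d <= 30]
share = len(late_clicks) / len(days_to_click)
print(f"{share:.0%} of clicks arrived 7-30 days after the drop")
# 17% on this made-up log, in line with the 10-20% range observed.
```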

So what’s the lesson here? This begs for a test. I would run an immediacy vs. mellowness test. If up to 20% of openers are opening after 7 days, then any “limited time” offer is a message lost on them. Conversely, a long-running offer could inspire a reaction of “this email is OLD – I can’t believe this offer’s still good!”

Here’s the test I would conduct: one email carries a very strong message – “Limited time offer!” – with an aggressive, high-value offer good for 48 hours only. The other email carries a less aggressive offer that’s good for 30 days. Who knows? Maybe the energy and spryness of a live hard, die young marketing message is actually trumped by an email that’s old and in the way.
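To sketch what those two cells might look like – the subject lines and discount values are hypothetical, not real campaign data:

```python
# Hypothetical cells for the immediacy vs. mellowness test.
test_cells = {
    "immediacy": {
        "subject": "48 hours only! 30% off everything",  # aggressive, high value
        "discount_pct": 30,
        "offer_valid_days": 2,
    },
    "mellowness": {
        "subject": "Take 15% off, any time this month",  # softer, long-running
        "discount_pct": 15,
        "offer_valid_days": 30,
    },
}

for name, cell in test_cells.items():
    print(f"{name}: {cell['subject']!r}, valid {cell['offer_valid_days']} days")
```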