The presenters for this session were:
- Jay Greenberg, Director of eCommerce, Spencer's
- George Michie, Co-Founder and VP Client Services, The Rimm-Kaufman Group
- Stephanie Pike, Web Strategy, Circuit City Stores, Inc.
(By the time this session rolled around, my laptop battery had died, so I had to take notes the old-fashioned way. Needless to say, this post is a compilation of my handwritten notes, but the speakers shared a lot of great content that I thought was worth posting anyway.)
The purpose of this session was to teach us how to test effectively on our websites. The speakers brought their personal experiences to the table and shared their successes as well as their failures. Funny enough, it seemed to me that we could learn a lot more from the failures they shared than from the successes.
Some of the areas they would be discussing were:
- Expectation management
- What to test
- How to test
- What to do after the test is completed
Conversion Nirvana refers to the idea that conversion is a cyclical experience. The reality is that many of the MVT (multivariate) tests you perform are not going to be conclusive. A merchandising rule that was shared in addition to Bryan's Golden Rule was: "List 70%, Offer 100%, and Creative for 10%". Conversion rates also depend on the quality of the traffic that comes to the site. So how do we work toward "Conversion Nirvana"?
- Improving conversion rates
- Targeting segments to take this farther
- Eliminating massive redesign projects
- Ending subjective arguments about creative
- Developing confidence
- Don't test little things
- Be patient; expect a lot of null results between two versions
- Testing misses lifetime-value issues; a null result might actually be a victory
- To elaborate on the lifetime-value point: a customer may have a poor experience but, because they made it far enough into the checkout process, still complete the sale. That customer will probably remember the poor experience and never return to the site.
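The "be patient, expect a lot of null results" point can be made concrete with a quick significance check. As a rough sketch (the session didn't cover the math, and the visitor and conversion counts below are hypothetical), a standard two-proportion z-test shows how easily a respectable-looking lift fails to reach significance:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for conversion counts of two variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided
    return z, p_value

# Made-up numbers: control converts 2.0%, variant 2.2%, 5,000 visitors each.
z, p = two_proportion_z_test(100, 5000, 110, 5000)
print(f"z = {z:.2f}, p = {p:.3f}")  # a 10% relative lift, well above p = 0.05
```

Even a 10% relative improvement reads as a null result at this traffic level, which is exactly why patience (and enough sample size) matters before declaring a winner.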
How do we decide what to test?
- Scalability of testing:
- How important is the campaign?
- What can I learn and can it be used across other departments or areas?
- How easily can I get actionable data?
- Am I empowered to act on the results of the test, and if so, do I have the resources to take action?
Some key points to remember:
- Test against what you can control in order to improve conversion rate.
- Define the business objectives ahead of time.
Where to start:
- Images, e.g. People v. Product, Solo v. Lifestyle
- Copy Length
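For a starter test like people-vs.-product imagery, one practical detail is making sure each visitor always sees the same variant for the duration of the test. A minimal sketch (my own illustration, not something shown in the session; the test name and visitor IDs are hypothetical) is deterministic hash-based bucketing:

```python
import hashlib

def assign_variant(visitor_id: str, test_name: str, variants: list[str]) -> str:
    """Deterministically bucket a visitor into one variant of a named test.

    Hashing the test name together with the visitor ID means a returning
    visitor always gets the same creative, and different tests split
    independently of each other.
    """
    digest = hashlib.sha256(f"{test_name}:{visitor_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Hypothetical usage for an image test: people shots vs. product shots.
print(assign_variant("visitor-42", "hero-image-test", ["people", "product"]))
```

Because the assignment is a pure function of the inputs, there is no session state to store, and the split stays stable across page views.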
The items I listed above were my key takeaways from the session. One of the more interesting parts was seeing the actual tests that Spencer's and Circuit City performed. Each retailer would show the audience a test and then take a poll to see which version we thought would win. The point they were making was that a test doesn't always turn out the way you thought it would, and you should be careful about what you are testing, since noise can cloud the consistency of the results.