Because social media and analytics software have given entrepreneurs a wealth of options for finding and connecting with customers, running a marketing experiment is one of the fastest ways to test an idea. But where do you start, and, with all the options out there, how do you get clear answers from all that data? Marketing experts Cindy Alvarez, Alistair Croll, and Anita Newton gave us five tips for running successful tests, and a few pitfalls to avoid.
Author of Lean Customer Development: How to Build Products Your Customers Will Buy, Cindy Alvarez has been using customer development techniques for well over a decade, and today heads up product design and user research for Yammer, a Microsoft company. A speaker and author of several books on technology and business, including Lean Analytics, Alistair Croll has launched several companies, run the Year One Labs accelerator in Montreal, and today works on research and strategy at CloudOps. The VP of marketing at Adknowledge, a global ad tech company, Anita Newton also runs marketing at the startup Mighty Green Solutions and teaches marketing through the Kauffman Foundation’s Founders School and FastTrac’s Venture Program.
Tip 1: Before you run a test, get some initial information from your customers. Because you’re going to want to test your biggest risk.
Cindy Alvarez: You want to de-risk your idea. In general, the biggest risk is that no one cares, so you want to just put something in front of people, and see if you can get evidence that they care about it. Putting up a test marketing site, putting a prototype in front of someone, doing a Kickstarter, sending an email and seeing if anyone responds — these things are great ways of experimenting.
Another approach that doesn’t involve creating anything, and so is cheap and easy, is to look for analogs to what you’re doing. They don’t necessarily have to look much like your product; think about the concerns you have and whether someone else’s experience can shed light on them. For example, the experiment we’re running right now with Yammer is trying to get people to participate more. The other day, I read an article about massive open online courses (MOOCs) and how online participation in those courses doesn’t come naturally to most people; the organizers were trying to figure out how to encourage it.
When I saw that, I thought, “What a great analog to what we’re trying to do.” So now I have a researcher who is talking to people who are enrolled in MOOCs, and asking about the experience.
Alistair Croll: When you’re too early in the business to have hard data, you need to base your assumptions on competitive analysis, industry baselines, and customer feedback from interviews. You’re after the riskiest assumption in the business model, because that’s the one you need to prove you can overcome.
Usually, the metric for that is tied to attention. We all think our products are unique, special snowflakes, but the hard truth is that nobody really cares about our products. So I’d begin with metrics around engagement and stickiness first.
Anita Newton: When I’m just starting, I’m not that structured, because you just don’t know what you don’t know. But start getting information through one-on-one conversations, and also through some sort of anonymous approach. For a B2B product, talk to salespeople: if I’m developing a content marketing plan, or a set of campaigns, whether it’s a webinar or a gift, I will walk them through my ideas, and they will rip two-thirds of them apart.
Tip 2: With some info from your customers, come up with a constrained hypothesis.
Alvarez: If you can make a tight hypothesis (through observation and customer interviews), it’s easier to get clear results. If you say something like, “I think these people would benefit from better software,” that’s very vague. Are you making it better in the right way? It could be that you’ve improved something about their experience, but it’s not the most important thing. It’s very hard to get clear results out of a hypothesis that loose. Also, don’t try to test three things at once.
Croll: You should always have a hypothesis that you’re either proving or disproving. We say in Lean Analytics that if a metric doesn’t change your behavior, it’s a bad metric. Know what you’ll do based on the results before you collect the data. People hate this. They want to just start — start building, start collecting, start measuring. But understanding why you’re doing something and how it will affect you is crucial.
Newton: Make sure you really understand what you’re testing and what you’re trying to learn. That sounds so obvious, but most people don’t do it. For instance, if you have two landing pages you’re testing, ask: What do I want to accomplish? Is it traffic, is it engagement, is it conversion? Maybe you could say, I want to change this landing page so I can see if that improves my conversion rate.
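If conversion is the metric you settle on, a quick significance check keeps a small difference between the two landing pages from being mistaken for a real one. Here’s a minimal sketch in Python, assuming a simple two-proportion z-test; the visitor and conversion counts are invented for illustration:

```python
# Minimal sketch: compare two landing pages on conversion rate with a
# two-proportion z-test. All counts below are invented for illustration.
from statistics import NormalDist

def conversion_test(conv_a, total_a, conv_b, total_b):
    """Return each page's rate, the z statistic, and a two-sided p-value."""
    p_a, p_b = conv_a / total_a, conv_b / total_b
    pooled = (conv_a + conv_b) / (total_a + total_b)  # pooled conversion rate
    se = (pooled * (1 - pooled) * (1 / total_a + 1 / total_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return p_a, p_b, z, p_value

p_a, p_b, z, p = conversion_test(conv_a=48, total_a=1000, conv_b=74, total_b=1000)
print(f"Page A: {p_a:.1%}  Page B: {p_b:.1%}  z = {z:.2f}  p = {p:.3f}")
```

If p is small (conventionally under 0.05), the difference is unlikely to be chance; otherwise, keep the test running or treat the result as murky (see Tip 4).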
Tip 3: Other constraints, like time or money, will save you some headaches.
Croll: You should always focus. Remember high school math, when you solved an equation by getting one variable on the left and everything else on the right? You need to isolate one variable so you can change it.
We wrote a long post about scoring customer interviews [tool alert] on our blog. Usually the qualitative information gives you the “unknown unknowns,” exploratory insight you want to investigate. Then you need to find a way to quantify it. Let’s say that customer interviews give you five possible marketing angles. So set up five Google, Facebook, or LinkedIn campaigns and try out the five angles, and see which one works best. You don’t have to be selling a product — just ask people to fill out a survey, and see which angle or tagline gets the best results. As a bonus, you’ll have survey responses (which you can use for content marketing) and respondents (with whom you can set up more interviews).
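Once those five campaigns have run, the comparison itself can be a few lines of code. A rough sketch in Python; the marketing angles and the click and survey counts here are all hypothetical:

```python
# Rough sketch: rank five campaign angles by survey sign-up rate.
# Angle names and numbers are hypothetical.
campaigns = {
    "Save an hour a day":   {"clicks": 410, "surveys": 37},
    "Cut costs 20%":        {"clicks": 380, "surveys": 19},
    "Loved by small teams": {"clicks": 455, "surveys": 41},
    "Set up in 5 minutes":  {"clicks": 390, "surveys": 24},
    "Free to try":          {"clicks": 520, "surveys": 30},
}

ranked = sorted(campaigns.items(),
                key=lambda kv: kv[1]["surveys"] / kv[1]["clicks"],
                reverse=True)
for name, c in ranked:
    print(f"{name:<22} {c['surveys'] / c['clicks']:.1%} survey sign-up rate")
```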
Newton: You need to have a budget. It doesn’t need to be a lot of money, but be clear on what it is. We do a lot of testing on Facebook, and we don’t spend a lot of money, but we quantify that money, even if it’s just $25 or $50.
Tip 4: You’re going to get murky results. So, run your test again, take a harder look at your metrics, and sometimes, go with your gut.
Croll: The first results are always murky. You learn that you’re asking the wrong questions, of the wrong audience. Let’s say, for example, that you run an online survey and get answers that are all over the map. Then you slice and dice the data — by geography, by gender, by browser — and you notice that all the respondents who seemed positive used a recent version of MacOS. This immediately tells you something about their comfort with technology, socio-economic status, and so on. You could then use this to more tightly narrow your research, or to ask different questions, or to decide whether your user interface is too advanced or too simple, or to invest in customer support earlier, or to prioritize iOS versus Android. All from one piece of data you didn’t even know you were looking for.
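The slicing Croll describes takes only a few lines once the responses are exported. A minimal sketch using pandas, where the file name and the column names (region, gender, browser, sentiment) are assumptions about how your survey tool labels its export:

```python
# Minimal sketch: slice survey responses by segment to see where the
# positive answers cluster. File and column names are assumptions.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")

for segment in ["region", "gender", "browser"]:
    positive_rate = (
        responses.groupby(segment)["sentiment"]
        .apply(lambda s: (s == "positive").mean())  # share of positive answers
        .sort_values(ascending=False)
    )
    print(f"\nPositive-response rate by {segment}:")
    print(positive_rate)
```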
Newton: With murky information, you have to go with your gut. Or, when things are murky, a lot of times we’ll just do it again: run an experiment a couple of different times, a couple of different ways. Setting up an experiment a second time is not as hard as setting it up from scratch.
Tip 5: Talking through tests with your team is where you will learn the most.
Alvarez: The greatest tool we use is not a software tool, but the practice of storytelling. It’s figuring out: when we learn something interesting, how do we make it memorable? How do we fit it together with the things we’ve learned in the past? How do we update the oral tradition of what people know within the company? It’s not very high-tech, but it’s incredibly effective, and it’s a way for what is, in our org, a fairly small team to take advantage of the lots and lots of smart people we have.
Newton: Once you get that initial view of your data set — survey or conversation or whatever — the real value initially is coming back to your team and talking through it. And if you don’t have a team, just bring in another person. You’ll get smarter.
It would be a mistake not to sit down and do a postmortem. It’s very uncomfortable, but you have to check your ego at the door and have the hard conversations, or else it’s not going to get better. When we sit down and talk about why we failed, it’s really valuable. One more thing: once you’re bigger, marketing automation really works, but don’t hire an agency. Do it yourself: you’ll learn more and save so much money.
As a bonus, here’s Stephanie Hay with three techniques for testing marketing content, including checks on the language you use, what specific things to ask users, and how to get found by potential customers [5:08]:
This week, the Lean Startup is taking over the Intuit Labs blog with original stories and a fresh perspective. Centered on experimentation and on investigating all parts of a business or product idea, this week’s posts include case studies, tips, Q&As, startup stories, and more. If you want to learn more about Lean Startup and how it’s applied at Intuit, visit the Intuit Innovation Institute. This piece was written by Mercedes Kraus.