
Jan 05

5 Dangers of DIY Research

Photo Credit: Roboppy (Flickr.com)

Like many segments of the business world, the marketing research industry has weathered a tough couple of years as the US economy has sagged and budgets have dried up. One of the by-products of the recent recession has been that many firms have taken at least some of their marketing research internal, forcing marketing managers and analysts to conduct inexpensive research online or through no-frills focus groups.

Naturally, some folks in the marketing research industry feel threatened by this shift, because it means that the work they’ve relied on for so many years is suddenly drying up. But there are others, like myself, who are concerned about “do-it-yourself” (or DIY) research for far more practical reasons. Here are five reasons why DIY research can be a dangerous practice for your company… and five suggestions for how you can turn things around if you’re already committed to a DIY study.

1) DIY Research is rarely done with a proper sample. In the world of marketing research, the method used to sample a population is an enormous concern. If you use the wrong sampling method, you can potentially bias your results towards a point of view that is not representative of your entire target market… and that can be dangerous if you’re about to make a decision upon which millions of dollars hinge.

Basically, there are two categories of samples: probability and non-probability. You could also call these random samples (where every member of the population has a known chance of selection, so sampling error can be estimated) and convenience samples (where it can’t). If you are conducting an online survey, making phone calls from a list you purchased, or sending mail surveys out to predefined ZIP codes, chances are good that you’re using a convenience sample.

That’s not to say it’s a bad thing to use a convenience sample. In fact, a lot of marketing research is done with this sampling method because it’s easy and quick. The problem is that a convenience sample has many limitations, chief among them that you cannot apply inferential statistical tests to your findings without a sizable mea culpa admitting that the results aren’t statistically valid or reliable for anyone outside your sample. All you can really do with a convenience sample is examine what your sample has to say about something. That can be good and useful if you’re just trying to get a general read on things or diagnose problems. But it can be very bad if you’re trying to gain sophisticated information about a target market so you can develop a better product or service.

So, how do you deal with a convenience sample? The simple answer is to treat your findings like blips on a radar rather than as detailed, photographic images taken from a satellite. You can use your findings to make general decisions and, of course, to commission more detailed research. But what you can’t do is make decisions based solely on the differences between the blips, nor will you be able to tell whether other blips like them will appear in the future. You just don’t have enough of the right information to do that.
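To make the risk concrete, here is a minimal simulation sketch in Python. All of the numbers are invented for illustration: it assumes a population where “enthusiasts” are both more satisfied and more likely to answer an online pop-up survey, which is exactly the kind of self-selection a convenience sample invites.

```python
import random

random.seed(42)

# Hypothetical population of 100,000 customers. Assume (for illustration)
# that 30% are "enthusiasts" who are both more satisfied and far more
# likely to respond to an online pop-up survey.
population = []
for _ in range(100_000):
    enthusiast = random.random() < 0.30
    satisfied = random.random() < (0.80 if enthusiast else 0.40)
    population.append((enthusiast, satisfied))

true_rate = sum(s for _, s in population) / len(population)

# Probability (random) sample: every customer equally likely to be chosen.
random_sample = random.sample(population, 500)
random_rate = sum(s for _, s in random_sample) / len(random_sample)

# Convenience sample: enthusiasts are five times as likely to respond.
respondents = [(e, s) for e, s in population
               if random.random() < (0.025 if e else 0.005)]
conv_rate = sum(s for _, s in respondents) / len(respondents)

print(f"True satisfaction rate:      {true_rate:.1%}")
print(f"Random-sample estimate:      {random_rate:.1%}")
print(f"Convenience-sample estimate: {conv_rate:.1%}")  # biased upward
```

Under these made-up assumptions, the convenience sample overstates satisfaction by roughly fifteen percentage points, and no amount of statistical polish applied afterward can undo that bias.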

2) DIY Research is rarely done with appropriate scales. Quick, on a scale of 1 to 7, with 1 being lowest and 7 being highest, tell me how you feel about the performance of the last set of tires you bought for your car. Ready? Go.

Are you confused by the question? I know I’d be. Who can really sit down and distinguish seven different levels of feeling about their car tires? The tires either work or they don’t. I can either be satisfied or dissatisfied, or perhaps neither. At most, that question requires a three-point scale.

On the other hand, let’s say I want to ask you to evaluate your overall satisfaction with the last car you purchased, and I give you three choices: dissatisfied, neutral, or satisfied. Can you really, effectively rate your satisfaction with your car on a three-point scale? Chances are good you’d prefer to have those seven points of differentiation I just gave you for the tires.

These are some of the many questions that must be asked when developing rating scales. And the really tricky part is that much of scale development has less to do with the comfort of the respondent and more to do with the aims of the study. For example, if I were a tire manufacturer and the study were with professional drivers, I might need to offer a more sensitive scale than I would with the average consumer. If I were a luxury car manufacturer, I might be more interested in measuring minute differences in satisfaction than I would be if I were selling mass-market vehicles.

And then there’s the question of what the scale is actually measuring. Satisfaction scales are common, but are they always appropriate? Is it really telling you anything to ask a respondent how satisfied they are with their tires? Maybe it would be better to ask them how effective their tires are in icy weather, or how safe they feel with the tires they have. Those are far more pertinent questions.

So, how do you ensure your scale is appropriate? I would recommend looking critically at what the resulting data would actually tell you. You should also test your scale before you release your survey. If you find that your sensitive scale is getting too many top or bottom choices, you can probably reduce its sensitivity. If your three-point scale is getting a lot of neutral answers, you should probably increase its sensitivity.
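Here is a sketch of what that pilot-test check might look like in Python. The cutoffs are arbitrary assumptions for illustration, not validated industry thresholds:

```python
from collections import Counter

def check_scale_sensitivity(responses, scale_max,
                            extreme_cutoff=0.6, neutral_cutoff=0.5):
    """Flag a pilot scale whose answers pile up at the ends or the middle.

    responses: integer ratings from 1 to scale_max.
    The cutoffs are illustrative guesses, not validated standards.
    """
    counts = Counter(responses)
    n = len(responses)
    extremes = (counts[1] + counts[scale_max]) / n
    # Odd-length scales have a true midpoint; even-length ones don't.
    middle = counts[(scale_max + 1) // 2] / n if scale_max % 2 else 0.0

    if extremes > extreme_cutoff:
        return "Mostly top/bottom answers: consider fewer scale points."
    if middle > neutral_cutoff:
        return "Mostly neutral answers: consider more scale points."
    return "Distribution looks usable as-is."

# Made-up pilot data from a 7-point tire-satisfaction question.
pilot = [1, 7, 7, 1, 7, 1, 7, 7, 1, 1, 4, 7]
print(check_scale_sensitivity(pilot, scale_max=7))
```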

But all of these measures are worthless if you overlook the next concern:

3) DIY Research is rarely tied to established objectives. When a marketing research professional begins working with a client, he or she will often insist on hammering out a specific list of research questions that need to be answered, along with accompanying, bullet-pointed research objectives that will help to answer those questions. These questions and objectives guide the development of the methodology and the research instrument (survey, moderator’s guide, and so forth). Ultimately, the purpose of the research is to provide the information needed to make one or more decisions.

But DIY research rarely seems to be developed through such a systematic structure. Rather, DIY surveys often include everything the marketing team can think to ask. There is no plan for analysis, no thought given to what really needs to be known, and no consideration for the effect of one set of questions on another. Demographic questions are often far too personal, and screening questions are typically absent.

It is impossible for good, solid marketing research to take place without objectives, because the objectives serve as the backbone for the entire study. What you tend to find with DIY surveys is a lot of questions that yield “nice to know” information but don’t help marketing managers make good decisions. Some marketing professionals will try to cloak the lack of relevance by spouting off lots of useless statistics generated from cross-tabs. That’s just a waste of everyone’s time and money.

The solution to this problem is to sit down and systematically identify the purpose of the research (called a “problem statement”), the questions that need to be answered to solve the problem, and the information needed to answer the questions. Use that framework to develop your questions, and don’t go beyond it without first adjusting the structure. (Screening questions and demographic questions are the only things you don’t have to tie to objectives, but use them sparingly.)

You should also develop a set of action standards constructed in the “If X then Y” format. For example, “If more than 50% of our customers aren’t satisfied with our new logo, we’ll begin the process of changing it.”
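Action standards are concrete enough to encode directly against your topline results. A minimal sketch in Python, with invented standards and numbers purely for illustration:

```python
# Hypothetical "If X then Y" action standards, evaluated against
# made-up topline survey results.
action_standards = [
    ("more than 50% dissatisfied with the new logo",
     lambda r: r["logo_dissatisfied_pct"] > 50,
     "Begin the process of changing the logo."),
    ("purchase intent below 30%",
     lambda r: r["purchase_intent_pct"] < 30,
     "Revisit the product concept before launch."),
]

results = {"logo_dissatisfied_pct": 56.0, "purchase_intent_pct": 41.0}

for condition, test, action in action_standards:
    if test(results):
        print(f"TRIGGERED ({condition}): {action}")
    else:
        print(f"not triggered ({condition})")
```

Writing the standards down before fielding is what matters; the code merely makes the “If X then Y” commitments explicit and checkable.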

4) DIY Research often results in respondent fatigue. Imagine that you’re in a focus group where the moderator is so unsure of himself that he keeps asking the same questions over and over to make sure he didn’t miss anything, even though you’ve been going for two hours. Or imagine that you’re taking an online survey that should take only 5 minutes, but which is only halfway done by the time you reach the 15-minute mark.

Both of these scenarios are breeding grounds for “respondent fatigue,” where participants in the research become too weary of the process to be of any use. It’s a common outcome when the research team has no real experience with research.

The best way to put a stop to respondent fatigue is to limit the scope of your research. When you’re developing your objectives, stick to what’s relevant, and don’t go into too much detail. In a focus group, you can probe further when the answers your respondents offer aren’t sufficient. In a survey, you can include one or more open-ended questions (or optional probes) to shed light on any attitudes or ideas that need further explanation.
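One crude way to keep a survey inside its promised length is to budget completion time per question before fielding. A sketch in Python; the per-question timings are rough assumptions, not measured values:

```python
# Rough per-question completion times in seconds (assumed, not measured).
SECONDS_PER_TYPE = {
    "single_choice": 10,
    "rating_scale": 12,
    "grid": 45,
    "open_ended": 60,
}

def estimate_minutes(questionnaire):
    """Estimate completion time for a list of (label, question_type) pairs."""
    return sum(SECONDS_PER_TYPE[qtype] for _, qtype in questionnaire) / 60

survey = [
    ("Overall satisfaction", "rating_scale"),
    ("Feature importance grid", "grid"),
    ("What would you improve?", "open_ended"),
] * 4  # 12 questions

budget_minutes = 5
estimate = estimate_minutes(survey)
print(f"Estimated length: {estimate:.1f} min (budget: {budget_minutes} min)")
if estimate > budget_minutes:
    print("Over budget: cut questions or trim objectives before fielding.")
```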

5) DIY Research often has terrible questions. Ask any marketing research professional, and he or she will tell you that writing questions is tough, because very complex ideas have to be asked in the simplest, most straightforward manner possible. What’s more, a question that seems perfectly obvious to the research team might baffle less sophisticated respondents, or even be read as asking something else entirely.

Probably the most common variety of bad question is one that is unclear or vague. “Describe your feelings about buying our brand of peanut butter,” a question might ask, and then provide a set of canned responses with the instruction, “pick 3.” Every respondent will interpret the question differently, and the canned responses only degrade the data further: some respondents, frustrated at not seeing choices that suit their own interpretation, will simply tick off the first three boxes they see so they can move on.

Consider this focus group prompt: “I’m going to ask you all to imagine that you are a bag of chips. Now, on the pads in front of you, I want you to write down what you like about yourselves and what you wish was different.” This could be seen as a creative exercise with some interesting projective qualities, but consider for a moment the sort of junk data that is going to come from the creatively-frustrated members of the group. Who can really relate that well to a bag of salty snacks?

One of my favorite offenders is the double-barreled question: “Do you think that the President is honest and trustworthy?” I might feel that he’s one, but not the other. “Would you say that your bank is small and helpful, or large and impersonal?” I might feel any combination of those adjectives. It’s very important to ensure that each question asks about one characteristic, not two or more.
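You can even catch the crudest double-barreled questions mechanically before a human review. A toy sketch in Python; a keyword check like this is only a rough heuristic, so treat every flag as a prompt for judgment, not a verdict:

```python
import re

# Conjunctions that often (not always) signal a double-barreled question.
# Crude heuristic: "quick and easy checkout" is one concept, so every
# flag still needs a human judgment call.
SUSPECT = re.compile(r"\b(and|or|as well as)\b", re.IGNORECASE)

questions = [
    "Do you think that the President is honest and trustworthy?",
    "Would you say that your bank is small and helpful, or large and impersonal?",
    "How satisfied are you with your checking account?",
]

for q in questions:
    verdict = "CHECK: possibly double-barreled" if SUSPECT.search(q) else "ok"
    print(f"[{verdict}] {q}")
```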

So, how do you improve question design? As it happens, there is something of a science to asking questions, and there are numerous books available that offer academically validated scales and measurement tools. Another way to improve question quality is to field-test the survey or moderator’s guide with a few potential respondents. This will help to ensure that questions make sense, that responses match up, and that respondents aren’t confused.

Final Thoughts: While this has been a lengthy piece, I hope it has been helpful in showing you why marketing research is often best left to the professionals. Without the training to spot these problems (as well as others I have not listed), you can waste a lot of time and resources developing research that is essentially worthless.

Of course, I’m not opposed to firms doing DIY research, but I do recommend that you at least consult with a professional prior to running the study on your own. And on that note, feel free to pose any questions you might have in the comments below, or e-mail me directly at sjordanATresearchplanDOTcom.

-SJJ