May 25, 2007

Us v/s Him - Treatment of Outliers in Qualitative Research


Are qualitative researchers an inclusive lot when it comes to the treatment of 'outliers'? Or do we banish the stray thought we hear in research as a maverick of sorts... 'non-representative' of the audience at large?

I read a post on Helen's blog some days ago that set me thinking. Is there a subconscious bias prevalent within the discipline of qual research - a bias that favors the majority, such that we place greater emphasis on the thoughts expressed by 'many' vis-a-vis those expressed by a 'few'?

There are two reasons I could think of for why this could happen:

1. An external bias (from the client)

Even though people have begun, in the recent past, to appreciate the kind of information that qualitative research brings to the fore, it is still often judged by the same yardstick used for quant research. How many times have we heard clients challenge findings by asking questions like... can you tell me how many people felt that way? And the researcher diffidently admitting... well, there were just a few. To which one would hear... well then, let's not give it too much importance and move on... Why does this emphasis on numbers infiltrate qualitative studies as well? Is that the only metric by which the worth of something can be judged? What makes people so readily believe that the viewpoint expressed by 'one' or a 'few' is not a thought worth pursuing?

I suppose repeated exposure to this way of thinking could lead researchers to start questioning their own beliefs, such that the next time they are faced with a stray thought like that, rather than risk sounding shaky in front of a client, they may nip it in the bud.

2. An internal bias (the researcher's own)

Qual researchers have long relied on the technique of 'storytelling' to analyse and present findings. It helps the researcher see inter-connections in the data set, so that she can look at the data holistically and understand the consumer's mindset, i.e. where the consumer is coming from. It is also an effective way of communicating findings to the client - narrating the findings in a sequence that ultimately builds up to the climax, i.e. the insight!

But how does one treat a piece of data that does not fit in with one's beautifully crafted story? For a long time I would set that piece of data aside and treat it as an exception - that is what I had initially learnt: filter it out if it does not fit in with your story. It was much later that I learnt that... if it does not fit in with your story... there must be a bloody good reason for it... so look at the data again, tap it gently from either side, sleep on it, and you will eventually find out WHY!

I do not believe that every stray thought one hears necessarily hides something significant under its skin. However, don't discard or belittle it before giving it the attention it deserves. As Helen rightly puts it...

"An inconvenient little wobble in a research context, could mean something more significant in real life. I think we have a duty to report them. Carefully."

P.S. - If you are used to thinking 'the more the merrier', think again: more does not have to mean more people with a single thought; it could also mean a few people with more thoughts.


4 comments:

Anonymous said...

Excellent! After a long time, I am seeing something posted here on Quali Research that is thought-provoking!

After sitting through countless presentations made by well-meaning research firms, and reading reams of info on research findings over the years, I must confess I have developed a healthy skepticism about the value of Research and its insights. One of the biggest contributors to that cynicism has certainly been this uncanny ability of researchers to simply dismiss findings that are not substantiated by a large percentage of data points.

I believe in the Power of One. I know that nothing worthwhile was achieved by a committee. I recognize the importance of going against the grain... In fact, my entire blog - UncommonWisdom - is dedicated to that perspective!

Unknown said...

Hi Reshma
I thought I'd left a comment here (I certainly wrote one last week) but since I can't see it I'll try again!

I don't feel a lot of pressure from clients on this issue, though I think I may be lucky with the clients I have. On the rare occasions when a client (usually one less familiar with qual) does pipe up with "and how many people said something like that?", I politely explain that 'numbers' are irrelevant; the fact that a view was expressed at all means it is potentially significant.

From experience, I know that my understanding of many issues has been 'unlocked' by something a single person said (so often the quiet person who looked bored through most of the session - but who was actually just thinking). The task then is to understand why other people didn't express this view.

The pressure of the 'story' is something I feel a lot more. The desire for a clear story is quite strong. Clients pay us to help them understand; the last thing we want to do is make things muddier. Reporting inconsistencies always runs the risk of making you look inconsistent - even though this is unfair.

But I think you're right - if something doesn't fit, you need to understand why. Once you've got to the bottom of it you'll find your story.

Reshma Bachwani said...

Helen, thanks for taking the trouble to post your thoughts again. I think there was some problem with Blogger - I wasn't getting any comment alerts either.

I empathize with your point about 'making things look inconsistent', and I'm sure many researchers, including me, have felt the need to put forth a crisp, clear story - probably because that is the knowledge handed down to us. One of the reasons for following this practice would have been to eliminate data inconsistencies. I feel that can partly be taken care of at the data-gathering stage itself - if respondents are probed enough on mutually contradictory / inconsistent viewpoints they may have expressed at different points in the conversation.

The other thing that comes to mind on this issue is that if the data is scattered all over the place, then sometimes that in itself could be a reality that needs to be communicated, e.g. where a brand's identity is diffused in reality, the learning could be a need to consolidate.

If you (or anyone reading this post) can identify any more reasons why this bias creeps in, it would help a great deal.

Naveen - I am in the process of compiling information on factors that cause research buyers to become skeptics and expectations they have from research. I will need more inputs from you on that.

Thanks!

Dina Mehta said...

Hi Reshma and Helen .. your posts and comments made me think and took me back many, many years - to the late 80's. Sometimes it pays to have been a qualitative researcher for almost 20 years :).

When Nirma, an indigenous brand of low-cost washing powder, was launched, it opened up a whole new market at the bottom of the pyramid. Hindustan Unilever Limited, which was then reigning supreme in the overall washing powder and detergents category, launched Wheel to carve out its own market and eat into Nirma's share. Research was required to find chinks in Nirma's armour.

Priya Tandan, my first boss, tells this story... she heard, and reported, one comment among so many in the numerous focus groups she conducted on the project. And it was the only negative murmur against Nirma - that it sometimes led to a burning sensation on the hands when it was being used.

This little 'wobble' led to one of the most successful advertising campaigns of the time - one of those with a real Big Idea - in which a woman is shown dramatically rejecting the yellow (Nirma) powder, saying "door ho jao meri nazron se - maine maangi thi safaii, tune di haathon ki jalan" (translation: get away from my sight - I had asked for something to help me clean, instead you gave me a product that burns my hands). This then became one of the biggest barriers to Nirma usage and a strong negative association with the brand.

I often recall this example when doing my analysis - and remind myself to look for the inconsistencies and not just the consistencies. And I often tell the story and play it back to Clients who question me about “how MANY said that?”!

Couldn't resist blogging this too :)