Customer Service Research Falls Short Of Real Science

Radical Customer Service – Research Issues

How We Know Things Using The Scientific Method And How Most Customer Service Research Is Unreliable And Misleading


In this article, we’ll look at how scientific research is supposed to work, and why most customer service research falls far short in terms of methods and reporting. The result? Companies that depend on this research WILL make poor decisions about customer service.

Ok. Don’t cringe. This shouldn’t hurt much, and reading it shouldn’t give you a headache either.

You’re probably familiar with the scientific method. In its simplest form, the researcher does an experiment to explore the relationship between “one thing” and “another”. Or we might compare two different “things” to see how each affects some outcome.

So, let’s say we want to compare two weight loss regimens to see which one is more effective. We recruit a set of people and randomly assign them to one of two groups: group A goes on a high-protein, low-carb diet, and group B on a high-carb, low-protein diet. Then we follow them for, say, six months, and have them weigh in. We take that DATA (the weights) and use some statistical techniques to determine whether our result is likely to be a fluke (by chance) or a result of true differences in the effectiveness of the diets.
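To make that concrete, here’s a minimal sketch in Python of the kind of statistical test a study like this might use. The numbers are invented for illustration, and the t-test and the 0.05 convention are just one common choice, not details from any actual study:

```python
# A minimal sketch of testing whether two diet groups differ.
# All numbers are invented for illustration.
from scipy import stats

# Hypothetical pounds lost after six months, one value per participant
group_a = [12.1, 9.4, 15.0, 8.2, 11.7, 13.3, 10.5, 9.9]  # high protein, low carb
group_b = [7.8, 6.1, 9.0, 5.4, 8.2, 6.7, 7.5, 8.9]       # high carb, low protein

# An independent-samples t-test asks: how likely is a difference this
# large if the two diets were actually equally effective?
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# By convention, a p-value below 0.05 suggests the difference is
# unlikely to be a fluke. As this article argues, that still doesn't
# make it a final "truth".
```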

Let’s assume that the data tells us that group A loses more weight than group B. The difference between the two groups is large enough that it’s not a chance occurrence, a fluke. Does this mean that we have established a “truth”? Does this mean we can predict that people who embark on a weight loss program will always lose more weight on Program A than on Program B?

The answer is no. Well, why not?

There are a lot of reasons why scientists don’t rely on any one research study. Mistakes can happen. Maybe the two groups had some hidden differences to start. Maybe the data wasn’t analyzed or recorded properly. If the average weight loss between groups was different, did everyone in group A lose more weight than everyone in group B? Or did some people in group A actually lose less than some in group B?
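As a tiny illustration of that last question, group averages can differ even when individuals overlap. Again, the numbers here are invented:

```python
# A minimal sketch (with made-up numbers) of how a difference in group
# averages can hide overlap between individuals.
group_a = [14.0, 12.5, 3.1, 11.0, 13.4]  # pounds lost, diet A
group_b = [7.2, 6.8, 9.5, 8.0, 7.1]      # pounds lost, diet B

mean_a = sum(group_a) / len(group_a)
mean_b = sum(group_b) / len(group_b)
print(f"mean A = {mean_a:.1f}, mean B = {mean_b:.1f}")  # A's average is higher...

# ...yet at least one person in group A lost less than everyone in group B.
print(min(group_a) < min(group_b))  # True: averages don't tell the whole story
```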

Or, perhaps there was some bias operating on the part of the researchers. What if it turned out the research was actually sponsored by the company that sells the products used in group A? Wouldn’t that be a concern?

So, the first point is this: no one study is ever taken to be indicative of a final “truth” in science. Research needs to be REPLICATED by other researchers using similar methodologies. If you end up with a number of similar studies drawing the same conclusions, you can be much more confident in those conclusions.

Catching Bias, Research Mistakes, And Faulty Interpretations

In science, it’s important that “wrong” information doesn’t go unchallenged. Mistakes get made, interpretations can be faulty, and researchers make errors in interpreting their own data. Research can be contaminated by researcher bias, so it’s important to try to remove from the equation how the researcher might “want” the results to come out. In the example above, would you find the results as credible if you knew who the sponsor was? Probably not. You shouldn’t, but it’s not just because the sponsoring company might have tried to “rig the results”.

The reasons are much more subtle. At the risk of stating the obvious, we are all human beings, and we all filter information based on our opinions, knowledge, and a host of other factors. As an example, we know that humans have a tendency to seek out information that is consistent with, and confirms, their own opinions, rather than look for evidence that falsifies those opinions. We aren’t even aware of doing it. It’s just part of how we work. There are many other established “cognitive distortions” in the psychological literature.

With scientific research we want to do our best to “protect” the research from contamination from the very human tendencies we all have. It’s not so much the obvious risk of doing fake research as it is that we can accidentally impose our own perceptions, hopes and biases onto the research. Biases are contaminants.

Mechanisms To Fight Bias And Errors That Contaminate The Knowledge Pool

In science there are a number of mechanisms and processes in place to fight bias and identify research errors.

Replication By Other Researchers: Replication means doing similar research a number of times, since a single study on its own proves little. The “by others” part is important here. Repeating the same research by the same researcher isn’t nearly as valuable as having other researchers from the research community redo it. The reason is obvious: if the original researcher’s biases contaminated the first results, it’s probable that those biases will still operate when he or she repeats the work. If OTHERS, in other places, locations and contexts, try to replicate, and they get the same results, THEN you have something. You can be more confident in the conclusions, though not perfectly confident.

Oversight: Peer Review and Journal Publication: There’s a fundamental principle in science: when one does research, there’s an expectation that the research is shared with the scientific community in the particular field. However, it needs to be shared in a particular way. For knowledge to advance, research has to INVITE criticism. Science is one of the rare human endeavors that only works when people, expert peers, pick apart the research. Without other experts actively hunting for errors in the methods the researcher used, the data collection, and the interpretation of the findings, a field can’t move forward.

To that end, research is published in professional journals. Thousands exist, on just about every topic, and they don’t make for scintillating reading. Circulation is small, journals are expensive, and you can’t stroll into your neighborhood vendor and purchase, for example, The Journal Of Experimental Psychology. In fact, unless you’re a researcher yourself, it’s unlikely you’ve ever heard of any of them.

There’s a process in place at the best research journals to ensure that, at each step in the publication process, each submission is scrutinized. Typically, a researcher will submit his or her research to a specific, and hopefully relevant, journal. An editorial board reviews the article, essentially to screen out the obvious junk and things that aren’t appropriate for that particular journal. If the editorial board sees potential merit in the article, it assigns the article to a number of “peer reviewers”. Peer reviewers are experts on the particular topic and do not get paid for their contributions. Those peer reviewers go through the submission with a specific goal of hunting down errors the original researcher might have made. They make comments and recommendations regarding the “value” of the article to the community, and whether they feel it is of sufficient quality to be published in that particular journal. Those comments go back to the editorial board, which then decides whether to forge ahead with publication of the article “as is”, to return it to the researcher with a request for modification, or to say “sorry, not good enough”.

Once the article is published, the process of criticism continues. The research community chips in with comments and concerns. Other researchers, stimulated by questions about the article, may do their own similar research. The article may be submitted to conferences and presented there, once again to be cut to ribbons by people in the field.

To make a long story short, there is considerable OVERSIGHT. That oversight is, at least in theory, “crowdsourced”, but not just anybody participates: only people who are knowledgeable enough.

Before we move on, there’s one more thing to cover. For colleagues and peers to properly evaluate any piece of research, they need access to as many of the details of the research as possible. They need to know how the data was collected. If it’s social science research, how were participants selected? What questions were they asked? What instruments were used to collect the data? If the research community doesn’t have access to those details, it can’t catch errors of logic, interpretation, and so on. To that end, journal articles and conference presentations provide as much detail as is feasible without the article becoming a book-length tome. Even if the actual article doesn’t include everything, the researcher will, and certainly should, make everything available, right down to the raw data, if it’s requested. It’s openness and transparency personified.

Takeaway: The overwhelming majority of customer service research published by the major research firms lacks the oversight necessary to ensure that what’s published is methodologically sound. Most of the research falls far short of the standards we use in regular “scientific research”.

The Null Hypothesis: In our weight loss example, most non-scientists would think that the research study set out to prove that the two diets were differentially effective, that one is better and one worse. In science, that’s not how it works. In fact, scientists start out with a “null hypothesis”, which states that there is NO difference between the two regimens. That’s the default state, and it must be shown to be false for that particular research. If the evidence supporting the “experimental hypothesis” (that one diet is more effective than the other) is weak, partial, and so on, the conclusion MUST be that the null hypothesis holds.
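To make that default-state logic concrete, here’s a minimal sketch of the decision rule researchers typically apply. The 0.05 threshold is a widely used convention, chosen here purely for illustration:

```python
# A minimal sketch of null-hypothesis logic. The 0.05 threshold is a
# common convention, not a detail from the diet example above.
ALPHA = 0.05

def evaluate(p_value: float) -> str:
    """Retain the null hypothesis unless the evidence against it is strong."""
    if p_value < ALPHA:
        return "Reject the null: the diets appear to differ."
    return "Retain the null: no demonstrated difference between the diets."

print(evaluate(0.03))  # strong evidence, so the null is rejected
print(evaluate(0.20))  # weak evidence, so the null holds by default
```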

This is a little arcane, but the overriding question to ask about the customer service research we see published on the Internet by the major research firms is this: is there a vested interest for research companies to start from the premise that they want to prove the “experimental hypothesis”?

How Customer Service Research Findings Get Skewed By The Time You See Them

So, let’s take this opportunity to look at the typical “cycle” of information flow, from research through to the acceptance of conclusions as fact, to identify not only what happens with this particular myth, but also what happens with the much more important ones we’re going to cover.

Before we follow the trail of how information and research from the customer service industry is disseminated, we need to understand three central issues of human psychology that affect what we accept or reject as fact.

The first has to do with our sense that if something sounds scientific, it’s much more easily believed and accepted, particularly if it’s spread widely and repeated. Note that something doesn’t actually have to BE scientific, or, to be more accurate, apply a rigorous scientific method, to be believable. It simply has to appear scientific. It has to sound objective. It has to sound like scientists and “researchers” are involved and that they know what they are doing. It has to involve collecting some sort of data, rather than just presenting an opinion. Information and “research” that comes from the customer service industry sounds credible and fits these criteria. And, after all, very few of us have access to the details of the research, and very few of us have the skills to take that information and evaluate whether there are flaws in how the data was collected, or whether the conclusions are contaminated by bias or other errors. So, in effect, it’s a natural tendency to embrace findings that sound scientific. We have faith. Science wouldn’t steer us wrong. Science is our friend.

The second thing to remember is that when conclusions are backed up with numbers, it’s far more likely the conclusions will be believed. It’s pretty simple. If you read a report that said “A lot of customers switch from one company to another based on poor customer service”, would you have the same reaction as to a report that says “Sixty-six percent of customers have switched companies due to poor customer service, resulting in an annual loss of $50.6 billion”?

No. Probably not. Numbers cause us to believe. Generalities don’t. There are some exceptions, of course. If a research report stated that companies lose $50,600,564,726.83 (note the down-to-the-penny detail), you might wonder how anyone could calculate down to the penny in any meaningful way. Be that as it may, numbers are generally convincing, and we attend to them. The bigger the numbers, at least when it comes to money, the more attention they generate.

The third psychological aspect is that human beings tend to seek out information that supports their existing opinions, beliefs, and desires. Countless studies in psychology, going back decades and looking at actual behavior rather than self-reports, have replicated this finding. People don’t look for information that will conflict with, or falsify, what they already believe. So, people who believe customer service is important to the economic health of a company will tend to look for and read material that says what they already believe. They don’t look for information to disconfirm it, even though, as we saw in chapter xx (Business of customer service), there IS evidence that the relationship between perceived customer service quality and profit is not as direct as most think.

Now we have some mental tools to look at how and why incorrect or partially correct information about customer service is created and spread.


Author: Robert Bacal
