Debunking 5 Misconceptions about Net Promoter

Since the day it was first introduced, the Net Promoter System has been a hotly debated topic.

Whether you’re reading the Wikipedia page or random tweets from NPS (Net Promoter Score) critics, you’re bound to find several arguments trying to debunk the effectiveness of this metric.

In a recent article, a notable speaker on the topic of Customer Experience (CX) shared some fairly strong opinions, even going so far as to say that measuring NPS is harmful to your business.

With two-thirds of all Fortune 1000 companies utilizing the metric, along with hundreds of thousands of smaller businesses, there are bound to be those who latch on to its popularity by playing the contrarian.

Generally speaking, the (sometimes valid) arguments of the critics are based either on largely misinformed reasoning or simply on improper execution of the overall system. Very rarely do these critiques hold up in practice.

That’s not to say that NPS is without fault, and it certainly doesn’t mean that those who oppose the metric don’t have strong points.

However, before you decide which side of the NPS debate you fall on, it may be helpful to understand some of the more common misconceptions and what the naysayers are getting wrong.

People don’t recommend toilet paper to friends and colleagues

Who can argue with that? I certainly can’t.

While toilet paper is just an example, the overall belief is that the primary Net Promoter question doesn’t apply to all products and industries.

You can replace ‘toilet paper’ with a company that makes sprockets for spaceships or oversized fans for industrial warehouses. The point is, it’s a product or service that is unlikely to come up in a conversation or as a recommendation with friends or colleagues.

The basis for this argument is that, taking the question at face value, a customer is likely to indicate that they “would not recommend” the product or service to a friend or colleague. Not because they are an unhappy customer, but because they don’t know anyone to whom the recommendation would seem relevant.

That seems to make sense on the surface, but that’s really as deep as the argument goes.

Every company needs some semblance of organic (or word of mouth) growth to succeed in the long term. The good news is that, regardless of what you sell, EVERY company is capable of it.

To give you an example, I’m not a big shoe shopper, and I have never purchased a pair that I haven’t tried on first. As a result, I’ve never bought shoes on Zappos.

However, if I were ever looking to purchase shoes online, Zappos is the first place that would come to mind.

This isn’t because they’ve targeted me heavily (or at all for that matter), and it’s not because a friend told me about their amazing shoes. It’s because I’ve heard countless stories of how amazing their customer service is. In fact, I’ve even shared the stories myself secondhand.


The fact that online shoe shopping isn’t relevant to me hasn’t prevented their organic message from reaching me.

There are countless examples where companies have built word-of-mouth growth by creating narratives that extend beyond their immediate product or service — even toilet paper.

While your customers may tell you that they wouldn’t recommend you for reasons other than being unhappy, you may want to look at their answers instead of blaming the question.

NPS scoring doesn’t allow for incremental improvements

Some argue that the NPS score itself is rather useless based on the formula (% of Promoters – % of Detractors) alone.


In their example, if every customer gives you a 6 rather than a 0, your score is identical (-100 in both scenarios), which the critics argue it shouldn’t be. The argument goes on to say that if every individual score then increased by just one point, from 6 to 7, your overall score would jump from -100 to 0, which they claim overstates a minor improvement.
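To make the critics’ arithmetic concrete, here is a minimal sketch of the calculation (the function name is ours for illustration; the thresholds of 9–10 for promoters and 0–6 for detractors follow the standard Net Promoter definitions):

```python
def compute_nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# The critics' edge case: every customer answers identically.
print(compute_nps([0] * 100))  # -100
print(compute_nps([6] * 100))  # -100, identical to the all-zeros case
print(compute_nps([7] * 100))  # 0, a one-point shift moves the score by 100
```

This is exactly the edge case the critics describe; whether it matters in practice is the question the rest of this section addresses.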

Setting aside the extremely unrealistic example of all customers providing the same score, there is actually strong reasoning, backed by in-depth research, behind the scoring range and formula.

In other words, it’s not random and wasn’t created without consideration.

The logic for the calculation was created by first looking at the real-world behaviors of customers based on their score. After careful examination across several companies and industries, each number was assigned to a profile based on their likelihood to share your brand. Again, this was based on observing actual behaviors.

As a metric that’s designed to communicate how likely it is that customers will refer you to others, all detractors (regardless of whether they score a 0 or a 6) should be considered equal. The same is true of promoters.


The reason the scale is more nuanced is that it’s important to understand the severity of someone’s sentiment. For example, even though respondents who answer 0 and 6 are both detractors, the timeline for predicting their future behavior differs. A 6 is often a detractor who is planning to leave a brand and would not recommend it, but who isn’t likely to churn in the same timeframe as a 0. Someone who scored a 0, by contrast, is often already lining up another vendor and posting negative reviews about your brand even as they respond to your survey.

Prioritization is critical in this step, and the additional scale helps you take action in the right manner.

The formula was created specifically to align with what is most likely to occur based on each customer response in aggregate; it was never intended to be an averaging metric.

In reality, customer scores vary across the board, which is why you don’t see companies with scores of -100 or 100. While the calculation isn’t based on a median value, the sentiment variance from customers generally provides room for incremental score increases and decreases.
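To illustrate, here is a hypothetical sketch (the distribution below is invented for the example) showing that with a realistic mix of scores, a one-point improvement from a subset of customers moves the score incrementally rather than in a cliff:

```python
def compute_nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# A hypothetical mixed distribution: 40 promoters, 35 passives, 25 detractors.
before = [10] * 25 + [9] * 15 + [8] * 20 + [7] * 15 + [6] * 10 + [5] * 10 + [2] * 5
print(compute_nps(before))  # 15 (40% promoters - 25% detractors)

# Nudge the ten 6s up to 7s: those customers stop counting as detractors.
after = [10] * 25 + [9] * 15 + [8] * 20 + [7] * 25 + [5] * 10 + [2] * 5
print(compute_nps(after))  # 25, an incremental ten-point gain
```

With varied responses, each customer who crosses a band boundary shifts the aggregate by a small amount, which is why real-world scores move gradually rather than swinging between -100 and 100.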

Previous customer behavior is more relevant than future intent

There’s no arguing that looking at the past behaviors of customers can be valuable. That’s especially true when looking at how customers have historically navigated through your product or when exploring past purchasing habits.

However, Net Promoter isn’t a product usage metric, it’s a loyalty metric. Or, more specifically, an indicator of a customer’s propensity to talk about you (either positively or negatively).

The trouble is, when it comes to predicting future word-of-mouth, past behaviors tell you very little.

The reason is that customer sentiment changes rapidly.

This is where NPS comes into play.

Here’s a personal example:

Many moons ago, I was a fan of Pepsi. Sorry Coke fans, I just thought that their line of products tasted better overall — all the way from Pepsi to Mountain Dew.


At that time, Pepsi could have looked at my consumption habits and almost predicted my future weekly purchases to a T. And, if social media had been around then, they undoubtedly would have seen a virtual love fest for their products online.

That lasted probably into my mid-twenties, at which point, sugared beverages started to catch up with me. Like many others, I decided it was time to transition over to diet sodas.

And just like that, I went from a die-hard Pepsi fanatic to a devoted fan of Diet Coke. Pepsi’s diet brands just didn’t taste as good as Coke’s, so my sentiment changed.

If you were to look at that transition on an NPS scale, you may have seen 9s and 10s early on, 4s to 6s during the transition, and 1s or 2s at the end.

By looking at my past behaviors, Pepsi certainly would have been able to see my purchases going down. They would even have been able to see that I was switching to diet products. What they couldn’t see is WHY my behavior had changed.

My likelihood to positively recommend Pepsi as a brand dropped because my sentiment had changed.

Had Pepsi taken the time to align my purchase history with my NPS scores, they would have been able to see that my propensity to recommend their products was decreasing along with my behavior as a consumer. More importantly, they would have known why.

Since NPS can be gamed, the data is unreliable

This is indeed partially true. Your Net Promoter Score can be gamed and I won’t even argue with that.

So can a thousand other metrics, but that’s a weak defense and doesn’t address the question of reliability.

So, let’s dig into how it can be gamed and why.

There are probably a ton of different approaches to artificially increasing your Net Promoter Score, but the more common approaches include:

  1. Changing/deleting scores: If you have control of your Net Promoter program, it’s not difficult to change or delete individual detractor scores.
  2. Asking a customer to leave a positive score: It isn’t uncommon to hear a customer representative ask for a positive review following an interaction.
  3. Incentivizing the customer to complete the survey: Offering cash compensation or some other form of payment introduces a bias.

Why would someone want to game their NPS score? After all, it’s not a competition.

The reason is quite simple … their job depends on it.

Some companies have used NPS as a KPI tied to job performance and compensation. For the record, this is not something that we endorse or recommend at Promoter.

Whether it’s the executive team, the customer success department or an individual employee, it’s never a good idea as a company to tie any sort of bonus or employee performance to a Net Promoter Score.

Doing so will potentially jeopardize the validity of your customer data, which defeats the point of implementing NPS to begin with.

With that said, this isn’t an issue with Net Promoter as a system, rather it’s an issue with how it’s being used within the organization.

And the same can be said with using incentives to boost your response rate. Incentivizing your customers to complete your survey shifts their motivation and introduces a bias in your data. If you’re interested to learn more about this, we wrote an entire post on why using incentives is a bad idea.

At the end of the day, your results are as reliable as you make them. If you follow best practice guidelines and avoid some of the more common mistakes, your NPS results will be amongst the most valuable data you receive.

Just avoid the games. The core of Net Promoter is a system (hence the name), not a framework.

NPS is nothing more than a vanity metric

Oftentimes, those who criticize Net Promoter tend to focus their attention specifically on just the score. What they don’t realize, or at least fail to acknowledge, is that NPS is more than just a number — it’s an entire system.

We’ve stated numerous times that, without additional context, the overall NPS score can be largely meaningless.

Sure, it’s a useful benchmark and has some practical applications when it comes to high-level organizational assessment, but the real value of NPS is the entire system.

What that entails is a combination of individual scores matched with verbatim responses. It’s individual customer sentiment combined with text analysis to surface trends. It’s engagement of 30 to 40% of your customer base in meaningful conversations. It’s identifying at-risk customer profiles to reduce and prevent churn. It’s activating a base of advocates to drive growth.

And the list goes on.

To say that NPS is nothing more than a vanity metric is to minimize it to its lowest common denominator.  


NPS isn’t just a score, it’s a system. The score is just the very first step in the process, and unfortunately where a lot of organizations stop.


While these are some of the more common criticisms of Net Promoter, there are a few others that we may touch on in a future post.

In the meantime however, what we have generally found is that most arguments are based on either a lack of first-hand knowledge or bad experiences based on faulty execution.

If you still have your doubts about the effectiveness of Net Promoter as a system, I’d encourage you to try it for yourself. If you follow our guidance and don’t see results within 60 days, we’ll give you your money back.

