The Three Deadly Pitfalls in New Product Validation and How to Avoid Them

Creating a new product that people will want to buy is risky; however, there are specific things you can do to mitigate that risk.

Dr. Ari Zelmanow
9 min read · Jan 1, 2019


Humans are naturally a risk-averse species. In fact, we hate losing TWICE AS MUCH as we like gaining. This means we go out of our way to mitigate losses, even if it means forgoing gains.

This also manifests in developing new products, where the risk of loss is significant.

When we build something that nobody uses, likes, or buys, we have “wasted” time and money. In addition, it is a blow to our ego.

The problem is, people and businesses offering solutions or “teaching” in product development or validation are either woefully unaware of the concepts of “risk,” “loss,” and “uncertainty,” or are too interested in peddling their products or services to care.

We live in a world of probabilities. However, we can make better decisions in the face of uncertainty, especially as it relates to developing a new product.

There is a cold, hard truth when it comes to new product development or starting a business: the majority of ideas are destined to fail. Anyone telling you that they can GUARANTEE your idea will succeed is full of shit. Period. That said, there are things you CAN do to increase your odds of success.

It is called validation.

But first, we must address the arguments against validation.

The Anti-Validation Camp

I was reading an article by Alex Hillman, who is arguably representative of the “anti-validation” camp. In this article, he attacks Eric Ries of The Lean Startup while he (unsuccessfully) makes an argument against validation.

First, he says, “Validation is as much of a system as throwing spaghetti against a wall to see what sticks.” From this statement alone, I am not sure he knows what validation is. According to Business Dictionary, validation is defined as:

Assessment of an action, decision, plan, or transaction to establish that it is (1) correct, (2) complete, (3) being implemented (and/or recorded) as intended, and (4) delivering the intended outcome.

By definition, validation is the opposite of throwing spaghetti at a wall to see what sticks.

He argues that “uncertainty isn’t a requirement, or even a default, to starting a business.” I would like to know where he got this “fact.” I am guessing this applies to businesses that follow his “system or process.” However, I’m not buying. The data don’t lie. If there wasn’t uncertainty, new products and businesses wouldn’t fail the majority of the time.

Then he goes on to tout his system of “observational research” that “teaches you a data-collection process [ethnography] used by real researchers in various fields of science.” Is he saying that interviews aren’t used by “real researchers in various fields of science?”

He uses the following example to illustrate:

“Imagine going to see the lions on display in the zoo. Now imagine seeing the same species of lion in the wild on an African safari. Technically, you’re looking at the same animal both times. But they behave differently in the wild than they do in captivity.”

Makes sense so far. He is talking about the importance of environment and context. He continues,

“You wouldn’t make a judgment call about what MOST lions do based on a lion in a zoo, because MOST lions aren’t in zoos. If Validation is like going to see lions in the zoo, our process is like seeing the animals in the wild, on Safari. Which is why we call our process Sales Safari.”

This is where he loses me. If you are trying to learn about what lions have done in the wild vs. what lions have done in the zoo (PAST or PRESENT TENSE), then observation and ethnography are well-suited research methods. If you are trying to see how lions will perform in a new environment (FUTURE), i.e. a wildlife reserve, going to either the zoo or the savanna won’t help you answer that question.

What Alex seems to ignore is that technique and skill are as important, if not more important, than the method itself. He is comparing apples and oranges, i.e. good ethnography and bad interviews.

While debating the merits of ethnography is outside the scope of this article, I would argue that ethnography alone is not well-suited to exploring or validating new product opportunities. In fact, I would argue that interviews, done correctly, are far better suited to exploring new product opportunities.

This leads to the deadly pitfalls of new product validation.

The Three Deadly Pitfalls

Nothing EVER goes as planned.

Product development is like life, and as the saying goes, “the best-laid plans…often go awry.” You can meticulously plan, but you can’t account for every possible variable. This is precisely what happened on June 6, 1944…

D-Day…

The invasion of France was underway. British, Canadian, and American airborne forces had planned and rehearsed for months a precise series of glider and parachute landings designed to secure key terrain and enable the ground invasion forces to advance rapidly inland.

The airborne invasion forces took off from England and months of planning appeared to vanish instantly. Parachute forces dropped into unmarked landing zones, gliders landed in the wrong areas, and thousands of soldiers from different units were mixed together in the night.

It had the makings of an epic military disaster…

But, hours later, the original military objectives were being accomplished by ad-hoc units that faced much fiercer German resistance than anticipated.

Why?

The Allies had a plan, but when things went sideways, they were able to adapt, improvise, and overcome.

World Heavyweight Champion Mike Tyson has been credited with saying, “Everybody has a plan until they get punched in the mouth.” What I take away from this is that you should have a plan, but be ready to pivot and adapt to changes, i.e. getting punched in the mouth. Most importantly, it is up to you, the product developer, to figure it out.

Solution: It is your job as a product developer and entrepreneur to figure it out.

Risk is a part of the equation.

In any given applied business decision, there are simply too many variables to properly control for; no amount of research in the world can address every variable.

Big data can’t solve this problem.

Small data can’t solve this problem.

No data can solve this problem.

In an applied setting, i.e. the “real world,” you will never hit 100% certainty. Never.

There will always be some inherent risk. When you are developing a new product you are going to have to accept some level of risk. It is your job to leverage research to help mitigate this risk by providing the right information to help you make a more informed decision.

The only time there isn’t risk is when the decision you are making doesn’t matter. For example, Erika Hall, from Mule Design, states in her book Just Enough Research,

“If you are working on a technical proof of concept that really, truly doesn’t have to satisfy any real-world users — then research shouldn’t spend time investigating it.”

If we can agree that there is risk in developing a new product, we must then turn to the implicit and explicit bias of different data sources and methods of inquiry. In simpler terms, the key issue is looking at probabilities and data appropriately.

For example, if you flip a fair (balanced) coin 15 times and it comes up heads each time, what is the probability it will come up tails on the next flip?

50%.

The past outcomes have nothing to do with the current flip. The coin has no memory of the prior flips. Each flip is 50/50.

If you thought it would come up tails, you are not alone. It is called the Gambler’s Fallacy. There are literally dozens of heuristics, biases, and fallacies that can impact how we analyze data and make decisions when developing products. Lucky for us, we can systematically control for these by using mental models or frameworks.
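The coin-flip claim is easy to check empirically. The sketch below is a simulation (using a streak of 5 heads rather than 15, simply so that streaks occur often enough to measure): it estimates the probability of tails on the flip immediately following a run of heads. Because the flips are independent, conditioning on the streak should not move the estimate away from 50%.

```python
import random

def tails_after_heads_streak(streak=5, flips=1_000_000, seed=1):
    """Estimate P(tails | the previous `streak` flips were all heads).

    If flips are independent, conditioning on a prior heads streak
    should not move the estimate away from 0.5.
    """
    rng = random.Random(seed)
    run = 0          # length of the current run of consecutive heads
    tails_after = 0  # tails seen immediately after a full heads streak
    total_after = 0  # all flips seen immediately after a full heads streak
    for _ in range(flips):
        heads = rng.random() < 0.5
        if run >= streak:  # this flip follows `streak` heads in a row
            total_after += 1
            if not heads:
                tails_after += 1
        run = run + 1 if heads else 0
    return tails_after / total_after

print(tails_after_heads_streak())  # hovers around 0.5, streak or no streak
```

If the Gambler’s Fallacy were true, the printed value would dip well below 0.5; it doesn’t.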

When validating a product, let Bayesian Thinking be your guide. In simple terms, Bayesian Thinking is the use of probabilities to represent degrees of belief, and the updating of those probabilities as new information becomes available. There are four reasons you should integrate Bayes into product development and validation:

  • It provides a framework that helps quantify how much we know, based on the data at hand;
  • It encourages us to update our beliefs as new information becomes available;
  • It is logically sound;
  • It yields optimal decisions.
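As a minimal sketch of what this updating looks like, consider a hypothetical example (all numbers below are invented purely for illustration): most ideas fail, so you start with a 10% prior that your idea is viable, and a positive validation interview is more likely if the idea is viable (80%) than if it is not (30%).

```python
def bayes_update(prior, p_signal_if_viable, p_signal_if_not):
    """Return P(viable | positive signal) via Bayes' rule."""
    numerator = p_signal_if_viable * prior
    evidence = numerator + p_signal_if_not * (1 - prior)
    return numerator / evidence

# Hypothetical numbers for illustration: a pessimistic 10% prior,
# updated after each of five positive validation interviews.
belief = 0.10
for _ in range(5):
    belief = bayes_update(belief, p_signal_if_viable=0.8, p_signal_if_not=0.3)
print(f"posterior after 5 positive interviews: {belief:.2f}")
```

Each positive interview multiplies the odds of viability by the likelihood ratio (0.8/0.3), so five in a row move a pessimistic 10% prior above 90%. This is the list above in action: the posterior quantifies how much you know, and each new interview updates it.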

Ultimately, accepting some risk is part of the equation. Understanding how to make better decisions in the face of risk is the key to developing new products.

Solution: First, be aware of bias. Second, use data correctly to help make decisions. I am a proponent of leveraging Bayesian Thinking in product development. Third, you can develop better systems and mental models to mitigate the risk from these variables. The Farnam Street Blog does a great job of breaking this down.

The problem of induction.

In 17th-century London, the phrase “black swan” was equated with impossibility; the idea of anything other than a white swan was preposterous. In 1697, Willem de Vlamingh, a Dutch sea captain, discovered the black swan while exploring the coastline of Western Australia. A century and a half later, the English philosopher John Stuart Mill used the bird to demonstrate what is known as the problem of induction.

“No amount of observations of white swans,” he said, “can allow the inference that all swans are white, but the observation of a single black swan is sufficient to refute that conclusion.”

Using data about the past to predict the future is a fallacy because you are measuring a dynamic system with high variance. Consider the example of the Thanksgiving turkey, presented by author Nassim Taleb.

“Consider a turkey that is fed every day,” Taleb writes. “Every single feeding will firm up the bird’s belief that it is the general rule of life to be fed every day by friendly members of the human race ‘looking out for its best interests,’ as a politician would say.

“On the afternoon of the Wednesday before Thanksgiving, something unexpected will happen to the turkey. It will incur a revision of belief.”

“Consider that [the turkey’s] feeling of safety reached its maximum when the risk was at the highest!” Taleb writes.

“But the problem is even more general than that; it strikes at the nature of empirical knowledge itself. Something has worked in the past, until — well, it unexpectedly no longer does, and what we have learned from the past turns out to be at best irrelevant or false, at worst viciously misleading.”

And this is really what the problem of Black Swans is all about.

This also works in the opposite direction. Consider the iPhone: there was *zero* data that would have led Apple to believe it would sell billions of iPhones, i.e. no existing market segmentation strategy would have made the case for it. However, we all know what happened there.

Determining what will happen in the future requires an appointment with the Oracle, a fortune teller, or abduction (educated guesses and speculation).

Solution: Conduct validation research using a theoretical framework, i.e. Jobs to be Done, and rapid, rigorous, and relevant interviews.

The Solution

To overcome the three deadly pitfalls:

  1. Nothing ever goes as planned;
  2. Risk is part of the equation;
  3. The problem of induction.

We must validate our idea, product, or concept before spending a ton of time developing it. This is best accomplished through validation interviews. It is important to understand that there is a clear distinction between customer interviews and customer conversations.

When you validate your idea before you build it, you will:

  • launch your product or service to happy customers and predictable sales on day one;
  • save countless hours of “testing and pivoting” to get it right; and
  • uncover the hidden motivations, aspirations, and desires of your target audience — before spending your time and money developing your concept, product, or idea.

After all…

Why BUILD it, if they won’t BUY it?

Described as a modern-day, consumer-focused Sherlock Holmes, Dr. Ari Zelmanow is a consumer psychologist (and retired police detective) who empowers product developers and engineering teams (creators and makers) to conduct rapid, rigorous, and relevant interviews to solve the right problems for their customers. He currently leads a research team at Twitter and has worked with some of the most iconic brands across the globe to create breakthrough products, brands, and retail experiences. He believes in truth, justice, and the American way, excellent sushi, and NY style pizza. When he is not chasing his four children, he… well, he is always chasing his four children. Do you want a better understanding of your customers? Visit his blog at AriZelmanow.com, or follow him on Twitter, LinkedIn, or Facebook.

This story was originally published on www.AriZelmanow.com.
