At TMRE 2024, Ben Harknett, CEO at Cambri, presented the session, “Are KPIs & Benchmarks Really Doing a Good Job of Predicting Product Concept Strength?”
Harknett asks, “Are benchmarks and KPIs as good as we would like to think they are at predicting product success when we think about concept testing?” He notes that Cambri’s service delivers more accurate predictions of how strong a concept will be in the market.
He says, “We’re a platform at our core, that allows you to test iteratively along your innovation process. What are we trying to do when evaluating concepts? Is this concept strong? Maybe you’re looking at a bunch of different concepts and you’re trying to pick the strongest or the top 20%. And then you’re trying to figure out how can we improve those concepts, which levers to pull to have the biggest impact on those concepts moving forward.”
“Traditionally, that’s been done looking at traditional KPIs, the likes of willingness to buy and uniqueness, looking at them versus category benchmarks, and then looking through a bunch of verbatims and open ends, and you as the researcher trying to join those dots and trying to figure it out: Which one’s the strongest and how do we improve them?” Harknett observes.
He continues, “How do we actually define the notion of a strong concept? Today that is largely defined as how did this concept test versus other concepts? But that doesn’t necessarily mean it’s going to perform well. The second problem that we observe is there are often contradicting KPIs. What if the KPIs are really strong, but the open ends are pretty weak? The third piece is, how do we actually improve those concepts? Which levers do we pull? All of this requires a ton of manual analysis and intuition, and certainly the time element is something that as researchers we have none of.”
Leveraging Artificial Intelligence
“At Cambri, we run concept testing with real respondents. We are interpreting that data to help make sense of it and help you make better decisions,” says Harknett. “Where we start to go off script a little bit is our ability to make sense of the open ends. This is something that we’ve been developing for many years—our proprietary natural language processing model. We’re able to make sense of and understand all of the open-ended answers that come with a survey response. We’re able to understand those conditions. Then, of course, we need to train that AI model. We need to tell this model, what does good look like? The output of that is a score that tells you if this product is going to perform well in the market or not.”
With its Launch AI performance tool, Cambri can identify the specific strengths and weaknesses of a product concept during testing.
He adds, “Importantly then, a set of drivers also tells you what is driving that score up or down. Is this product performing well because of the design of the packaging, or is it performing well because of the brand strength? Or what is perhaps pulling down that score, so that you have clear ideas of which levers to pull? Launch AI gives you a clear answer: this is something we should work on. We’re close, but we just need to iterate on these next few things. Or actually, we should just scrap it altogether.”
Watch the video of the full session from TMRE 2024, including a related case study, as Cambri’s Harknett discusses the company’s product concept testing service.
Contributor
Matthew Kramer is the Digital Editor for All Things Insights & All Things Innovation. He has over 20 years of experience working in publishing and media companies, on a variety of business-to-business publications, websites and trade shows.