Top 5 insights on quantitative research methods and landscape

Written on:
December 23, 2023
Antarika Sen

As we set our goals and intentions for 2024, I wanted to reflect on some learnings from the quantitative research and insights industry to take stock before leaping forward!

As a researcher at Milieu Insight, I was grateful for the opportunity to attend and present at the Quant UX Conference 2023 and the European Survey Research Association Conference this year.

What I loved the most about the conferences was that despite involving researchers and professionals from a wide range of industries and functions, the talks provided practical and universally applicable takeaways. It was also fulfilling to present findings from the research experiments carried out at Milieu that advocate for better research design.

In this piece I highlight key themes and lessons from these events that can be applied by anyone in the quantitative research and consumer insights space.

1. Surveys are the backbone of quantitative research in most organizations

Surveys and quantitative research are inextricably linked. Maria Cipollone, a senior user experience researcher at Spotify, performed a thematic analysis of job openings in quantitative and user experience research. Her findings revealed that "Survey" was the most frequently used word in the job descriptions, signalling that surveys are the predominant research method for deriving insights across different organizations.

Across the board, quant researchers are required to use survey instrumentation to collect data from users and consumers and perform a host of different analyses.

This means that, as a group, elevating our collective understanding of survey design is crucial. Survey research is a mix of art and science, one that aims to minimize bias, ask the right questions, reach the intended audience, ensure data quality, and generate actionable insights. Each step in this process is dynamic, constantly evolving with technology and our knowledge.

There were quite a few talks that touched upon the different stages of surveys as a research method. I will highlight some of them below.

2. Question where your survey data comes from

Research buyers often partner with market research firms to source participants for their surveys. However, an important point to note is that not all market research firms directly recruit and manage their own respondent panels (i.e. a pool of survey takers). Instead, they often partner with third-party panel providers that have large databases of participants who are available to take surveys on demand. This means that market research firms can reach their target audience quickly and easily.

In his talk, Ben Leff, CEO of Verasight, highlighted the importance of asking the right questions when you choose to work with market research firms and/or third-party panel providers.

Some of the questions one needs to ask before working with a vendor are:

1. Are all respondents recruited directly by the market research firm, or is data collection outsourced to other third-party vendors?

Why does it matter: Companies that recruit and manage their own panels tend to have greater control over data quality, because they can identify and remove duplicates, bots, and poor-quality respondents, ensuring data integrity. When research firms recruit from multiple vendors, they lose this control.

2. How many surveys do respondents take per week?

Why does it matter: Frequent survey participation can negatively affect respondents' behavior. Issues like survey fatigue, satisficing, and potential priming effects (due to unknown prior survey engagements) come into play.

3. How often are respondents screened out of surveys, and are they incentivised only if they qualify?

Why does it matter: Repeated exclusion from surveys without receiving incentives can cause frustration among respondents, which might lead them to provide inaccurate answers in order to qualify for more surveys.

The talk was a good reminder of how to engage with and better understand the inner workings of market research firms and panel providers so that one can manage the associated risks more effectively.

3. Open-ends can be used to weed out bad respondents

Bad survey respondents come in many shapes and forms. Some are clear-cut cases of fraud: bots, duplicates, and dishonest respondents who are not your target audience. And then there are well-intentioned respondents who start out good but turn into bad actors because of a poor survey experience, frustration, or fatigue.

To catch these bad actors, researchers may run a wide range of checks, including (but not limited to) straight-lining, speed checks, attention checks, conflicting responses, and over-selection. A minimal sketch of what the first two checks might look like follows.
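The checks above are usually scripted against respondent-level data. Below is a minimal, illustrative sketch (not Milieu's or any vendor's actual pipeline) of how straight-lining and speeding might be flagged with pandas; the column names and thresholds are assumptions.

```python
import pandas as pd

# Hypothetical respondent-level data: ratings q1..q5 from one grid question,
# plus total completion time. Values are made up for illustration.
df = pd.DataFrame({
    "respondent_id": [1, 2, 3],
    "q1": [4, 5, 2], "q2": [4, 5, 3], "q3": [4, 1, 4],
    "q4": [4, 2, 3], "q5": [4, 4, 2],
    "completion_seconds": [610, 95, 540],
})
grid_cols = ["q1", "q2", "q3", "q4", "q5"]

# Straight-lining: identical answers across every item in the grid.
df["flag_straightline"] = df[grid_cols].nunique(axis=1).eq(1)

# Speeding: finishing far below a plausible minimum; here, under a third
# of the median completion time (the cutoff is an arbitrary assumption).
df["flag_speeder"] = df["completion_seconds"] < df["completion_seconds"].median() / 3

# Respondents tripping any check get reviewed before analysis.
print(df.loc[df["flag_straightline"] | df["flag_speeder"],
             ["respondent_id", "flag_straightline", "flag_speeder"]])
```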

However, Tom Wells, former UX and quant researcher at Uber and Facebook, shed light on how open-end responses can be used as an additional strategy for identifying suspicious and fraudulent responses. The reason: some seasoned survey-takers know how to provide favorable responses that go undetected by routine checks. Open-ends, however, are often harder to fake.

Drawing on strategies highlighted in a journal article published in Public Opinion Quarterly and on case studies from his time at Uber, Tom shared the following examples of open-ends to be wary of.

  1. Open-end questions can be good validation checks. Do people really know what they are talking about? The below example is a validation question that was run as an open-end question. True Uber drivers would be acutely aware of the status level they have been awarded, so legitimate responses can be expected to fall into one of the four categories: diamond, platinum, gold, or blue. However, suspicious respondents who may have sneaked into the survey provided either very generic responses (e.g., “top tier”, “lowest”, “basic”, without actually naming the status) or completely irrelevant responses (e.g., “Love it”, “Good Citizens”). A very strict cleaning criterion would eliminate both the generic and the irrelevant responses, though some may give the former the benefit of the doubt and keep them in (a minimal screening sketch follows this list).
  2. One should be wary of answers that are extremely short, don’t elaborate in detail, and are overwhelmingly positive, especially for questions like product or service reviews from a group of dedicated users. In the example below, verified Uber drivers tended to provide detailed, thoughtful responses, often including negative feedback. However, suspicious responses from opt-in self-report surveys (where drivers can’t be verified) were often short, positive, and irrelevant, or did not make much sense; some answers were also recycled from previous questions. These would be flagged by a researcher and typically excluded from the dataset.
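Here is a minimal sketch of how answers to a validation open-end like the one above might be bucketed automatically. The valid tiers come from the example; the helper name, the list of generic phrases, and the matching rules are my own assumptions, and real cleaning would be far more nuanced (and usually paired with human review).

```python
import re

# Valid status tiers from the driver-status validation question above.
VALID_TIERS = {"diamond", "platinum", "gold", "blue"}
# Vague answers that a lenient researcher might keep (assumed list).
GENERIC_HINTS = {"top tier", "lowest", "basic"}

def classify_open_end(answer: str) -> str:
    """Bucket an open-end answer as valid, generic, or suspicious."""
    text = re.sub(r"[^a-z ]", "", answer.lower()).strip()
    if any(tier in text for tier in VALID_TIERS):
        return "valid"
    if any(hint in text for hint in GENERIC_HINTS):
        return "generic"     # benefit of the doubt, or drop under strict cleaning
    return "suspicious"      # e.g. "Love it", "Good Citizens"

for ans in ["Platinum", "top tier", "Love it"]:
    print(ans, "->", classify_open_end(ans))
```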

No survey is 100% perfect: there are limitations both to internal surveys and to surveys run via opt-in panels. As researchers, we can strive to improve our survey design and analysis methods to make the most of each.

4. Tips on designing better cross-country research

I presented findings from research experiments at Milieu on how to design better cross-country surveys, keeping cultural biases in mind, with a focus on Southeast Asia.

Cross-country surveys are often used by brands to derive meaningful insights on how customers are similar or different across countries so that they can make better business decisions. However, one aspect that is often overlooked is that respondents from different countries may have inherent differences in the way they interpret and use survey scales. This was the core theme of my talk.

My favorite bit was seeing so many people write in to share their delight at seeing data and research from Southeast Asian countries like Singapore, Indonesia, and the Philippines. Research findings from this part of the world tend to be underrepresented, which means learnings from elsewhere are generalized and applied without appreciation of cultural nuances. We were grateful to find a platform where we could share learnings from Southeast Asia.

A couple of points that we highlighted were:

  1. Take into account that some countries tend to display top-choice bias. For instance, in our experiments we found that, irrespective of the question content and response scale length, respondents in some countries, like Vietnam and the Philippines, are twice as likely to use the highest rating on a scale as respondents in countries like Singapore. It is good to get a sense of the response patterns in the markets you are surveying, and whether there are consistently high raters, so that you can account for it while analyzing and interpreting the results (see the sketch after this list).
  2. Response scale design matters. Choosing the right response scale format can minimize response bias and yield more accurate responses, enabling reliable cross-country comparisons. Our research found that a spinner design (shown on the right below) reduced top-choice bias by a significant margin for the countries that exhibited the strongest top-choice bias (in our study, Vietnam and the Philippines) when compared to a standard single-select rating scale (shown on the left below).
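Before comparing countries, it helps to quantify the bias itself. Below is a minimal sketch of computing the top-box rate (the share of respondents choosing the highest scale point) per country with pandas; the data frame, column names, and values are illustrative assumptions, not the study's data.

```python
import pandas as pd

# Illustrative long-format responses on a 5-point scale (made-up values).
responses = pd.DataFrame({
    "country": ["Vietnam", "Vietnam", "Philippines", "Philippines", "Singapore", "Singapore"],
    "rating":  [5, 5, 5, 4, 3, 4],
})
scale_max = 5

# Share of respondents choosing the top point of the scale, by country.
top_box = (
    responses.assign(top_choice=responses["rating"].eq(scale_max))
             .groupby("country")["top_choice"]
             .mean()
             .sort_values(ascending=False)
)
print(top_box)  # consistently high-rating markets may need adjustment at analysis time
```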

5. The need to move away from NPS was the focus of many talks

The Net Promoter Score (NPS) is a single-question metric that asks customers how likely they are to recommend a product or service to others on a scale of 0 to 10. The score is then calculated by subtracting the percentage of detractors (scores 0 to 6) from the percentage of promoters (scores 9 and 10).
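That definition translates directly into a few lines of code. The sketch below assumes a simple list of 0-10 ratings; the second call also foreshadows one of the criticisms discussed next, since a rating moving from 7 to 8 leaves the score unchanged.

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

print(nps([10, 9, 7, 6, 3]))  # (2 - 2) / 5 -> 0.0
print(nps([10, 9, 8, 6, 3]))  # still 0.0, despite one customer rating higher
```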

There is a shared acknowledgement within the research community that NPS has several limitations and has fallen short as a north-star metric.

In his talk, Noam Segal, a Senior Research Manager at Upwork (also known as the author of the website NPSistheworst.com), laid out the reasons why NPS just doesn't work as a metric. The issues he highlighted were:

  1. Future behavior: NPS isn’t anchored in actual behavior; it captures hypothetical future behavior, which is often viewed unfavorably in survey science. Positive scores might come from good experiences with the brand, but other factors can impact future customer actions too.
  2. Wonky calculation: NPS segments customers into three groups: promoters, detractors, and passives. However, the arbitrary segmentation can mask small improvements, such that even with a higher rating you can end up with the same NPS score (a customer who moves from 7 to 8 is still a passive, so the score does not budge).
  3. Noisy scale: An 11-point scale provides a lot of options, which can be confusing for respondents and makes it difficult to compare across cultures and sectors. What if a rating of 8 means something different to different people?
  4. One-dimensional: NPS is a single-item measure, meaning it captures only one aspect of consumer sentiment. Other feedback metrics, such as customer satisfaction surveys, can provide a more comprehensive view of the customer experience.
  5. Easy to game: Companies can artificially inflate their NPS by timing the survey and asking the question in a way that makes it easier for customers to give high ratings.

Everyone knows there are problems with NPS; how do we go about finding an alternative and executing it?

There's no fun in just bashing NPS. The conference was also a great platform to listen to different groups of researchers who have, in their own ways, championed the effort of finding alternative solutions and optimizing ways of implementation.

During their stint at Goldman Sachs, Alyssa Nitz (Manager & Staff UX Researcher) and Steven Snell (Survey Methodologist) set out to find alternatives to NPS.

While reviewing organizational-level OKRs within product management, they noticed that NPS was being used by leadership and heads of product across a suite of products.

However, the various products were at different levels of maturity, with disparate user groups and varied utility.

They needed a more robust north-star metric than NPS.

After several sessions of speaking to various stakeholders and understanding the products and their needs, they developed a set of questions that they termed Goldman Sachs Attitudinal Tracking (GSAT). This was meant to be a lean, flexible framework consisting of four core KPIs, with questions revolving around satisfaction, ease of use, market comparison, and predicted future use (refer to image below). To allow for some variability across different products, they also allowed for customisation of up to four close-ended questions and one open-ended question.
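To picture the structure, here is a hypothetical sketch of how such a lean framework could be represented in code. Only the four KPI themes and the customisation limits come from the talk as described above; the question wording, field names, and helper function are my own assumptions, not the actual GSAT instrument.

```python
# Hypothetical representation of a lean tracking framework in the spirit of GSAT.
# Question wording is assumed; only the four KPI themes and limits come from the talk.
CORE_KPIS = {
    "satisfaction":      "How satisfied are you with <product>?",
    "ease_of_use":       "How easy is <product> to use?",
    "market_comparison": "How does <product> compare with alternatives you have used?",
    "future_use":        "How likely are you to use <product> over the next few months?",
}
MAX_CUSTOM_CLOSED = 4  # per-product close-ended add-ons
MAX_CUSTOM_OPEN = 1    # per-product open-ended add-on

def build_questionnaire(product: str, custom_closed=(), custom_open=()):
    """Assemble the core KPIs plus a bounded set of product-specific questions."""
    if len(custom_closed) > MAX_CUSTOM_CLOSED or len(custom_open) > MAX_CUSTOM_OPEN:
        raise ValueError("Customisation exceeds the framework's limits")
    core = [q.replace("<product>", product) for q in CORE_KPIS.values()]
    return core + list(custom_closed) + list(custom_open)

print(build_questionnaire("Product X", custom_closed=["How useful is the reporting module?"]))
```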

Considerations when choosing NPS alternatives

Anamika Suresh, a mixed-methods researcher, shared learnings from her experience of setting up alternatives to NPS in the online grocery and travel spaces during her stints with FairPrice Group and Skyscanner, respectively.

In her experience, NPS did not serve as a great north-star metric because it was extremely volatile and not comparable across different businesses and orgs. It also did not help her identify and dig deeper into specific areas of improvement. Furthermore, from an execution perspective, there is a complicated sense of ownership, because there is often a disconnect between those who track it (e.g., researchers) and those who act on it (product owners).

Here are some guiding principles that were highlighted across both talks:

  1. Have clear, transparent communication with stakeholders. It is often not easy to convince leadership teams, with their business mindset, to pivot away from NPS. As researchers, we often have to educate other stakeholders about the limitations of the metric and why it is important to find new alternatives.
  2. Be clear on the objectives of the new metric: are you measuring retention? Acquisition? Adoption? Engagement? Ease of doing a task? Your metric should reflect the behavior you want to measure.
  3. Have clear alignment with stakeholders at every step of the process. It’s important to have working sessions where researchers gain a clear understanding of the product and what the stakeholders wish to achieve. The HEART framework and the FURPS framework are two guides that can be used for alignment.

Final thoughts

Having the opportunity to interact with and learn from some of the great minds in the quant and survey research space was a humbling experience. I felt like a student all over again! I hope you found this article useful. Here's to advancing our research endeavours in the upcoming year!
