
Recruiting Participants: Finding the Right People

The quality of your research is directly tied to the quality of your participants. Recruiting is not an administrative task; it is a methodological decision that determines whether your findings will generalize.

Marc Busch
Updated March 25, 2024
10 min read

Summary

Effective recruiting starts with clear criteria derived from your research plan. Key considerations include incidence rate (how rare your target is), the trade-off between speed and quality, and whether to use panels, customers, or intercept recruiting. Always verify through screening questions that participants match your target profile. The single biggest recruiting mistake is accepting anyone who shows up.

The quality of your research is directly tied to the quality of your participants. If you recruit the wrong people, even a flawlessly executed study will produce misleading findings.

Recruiting is not an administrative task you delegate and forget. It is a methodological decision that directly determines whether your findings will generalize to your actual users.

Defining Your Target

Before you can recruit anyone, you must know exactly who you are looking for. This definition comes directly from your research plan and your segmentation work.

Your target definition should include:

  • Behavioral criteria: What actions have they taken? (frequency of use, purchase history, feature adoption)
  • Attitudinal criteria: What do they believe or value? (early adopter mindset, price sensitivity)
  • Contextual criteria: What is their situation? (job role, household composition, device usage)

The Incidence Rate Problem

Incidence rate refers to how common your target population is within the general population. This is one of the most practical constraints in recruiting.

If you need "users who have made a purchase in the last 7 days," your incidence rate might be 30%. But if you need "users who abandoned checkout in the last 7 days after adding a subscription product to their cart," the incidence rate might drop to 3%.

Why this matters: A lower incidence rate means:

  • Longer recruiting timelines
  • Higher costs (you pay to screen many people to find a few matches)
  • More pressure to relax criteria (which compromises quality)

The solution is not to accept unqualified participants. It is to plan for realistic timelines and budgets based on your actual incidence rate.

Calculating Your Funnel

Recruiting is a sales funnel, and it leaks at every stage. Expect a 1-5% response rate to cold outreach. If 3% of the people you contact respond, and your incidence rate (IR, the percentage of respondents who qualify) is 10%, then reaching 1,000 people yields only about 3 qualified participants.

Here is the math:

Stage | Example
People Contacted | 1,000
Response Rate (3%) | 30 respond
Incidence Rate (10%) | 3 qualify
No-Show Rate (20%) | ~2-3 actually show

This is why recruiting timelines slip. Most teams drastically underestimate the top of the funnel.
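The funnel above can also be run backwards: start from the sessions you need to fill and divide out each leak. A minimal sketch, using the illustrative rates from the table (3% response, 10% incidence, 20% no-shows), not benchmarks for any particular study:

```python
import math

def required_outreach(target_sessions, response_rate, incidence_rate, show_rate):
    """Estimate how many people to contact to fill target_sessions slots.

    Works backwards through the funnel:
    shows -> qualified -> respondents -> contacts.
    """
    qualified_needed = target_sessions / show_rate
    respondents_needed = qualified_needed / incidence_rate
    contacts_needed = respondents_needed / response_rate
    return math.ceil(contacts_needed)

# Illustrative rates from the table: 3% response, 10% incidence,
# 20% no-shows (i.e. an 80% show rate).
print(required_outreach(target_sessions=10, response_rate=0.03,
                        incidence_rate=0.10, show_rate=0.80))  # → 4167
```

Ten completed sessions at these rates means contacting over four thousand people, which is why teams that plan outreach from the bottom of the funnel always come up short.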

Recruiting Sources

Where you find participants influences who you find. Each source has trade-offs:

Your Own Customers

Pros: Already use your product, easy to segment using your own data, often motivated to participate.

Cons: May have strong opinions (positive or negative) that differ from the general market. Not useful for testing with non-users or competitive research.

Research Panels

Third-party panels (like UserTesting, Respondent, or Prolific) provide access to pre-screened participants.

Pros: Speed, geographic diversity, access to hard-to-reach segments.

Cons: "Professional respondents" who participate in many studies may behave differently than real users. Quality varies significantly between panel providers.

Intercept Recruiting

Recruiting people in the moment, on your website, in your app, or at a physical location.

Pros: Catches users in context, reduces recall bias, participants are genuinely engaged with your product.

Cons: Limited control over who responds, potential for self-selection bias, may interrupt the user experience.

Snowball Recruiting

Ask current participants to refer others with similar characteristics.

Pros: Useful for niche or hard-to-reach populations.

Cons: Can create homogeneity bias: people tend to refer others similar to themselves, narrowing your sample.

The B2B Complexity: The Buying Center

Consumer research has a simplifying assumption: the person who uses the product is the person who buys it. In B2B, this assumption falls apart.

The employee who uses the software every day is often not the person who decided to purchase it. A project manager might live in your tool for eight hours a day, but the VP of Operations signed the contract. These two people have entirely different needs, concerns, and decision criteria. Research with only one of them gives you half the picture.

The Buying Center Framework

B2B purchasing decisions rarely involve a single person. Instead, they involve a network of roles that collectively influence the decision. This network is called the buying center. Each role has distinct priorities, and your research might need to address several of them.

Role | Who They Are | What They Care About
User | The employee who interacts with the product daily to do their job. | Efficiency, usability, how the tool fits into their workflow.
Buyer | The person with budget authority who signs the contract (often a department head or executive). | ROI, total cost of ownership, security, compliance, vendor reputation.
Champion | An internal advocate, often a power user, who believes in the product and pushes for adoption. | Evidence to convince skeptics, success stories, implementation support.
Influencer | People whose expertise shapes requirements (senior colleagues, external consultants). | Technical credibility, industry best practices, peer validation.
Evaluator | Individuals who assess the product against formal criteria (IT, security, procurement). | Technical compatibility, security certifications, integration requirements.

The Recruiting Implication

This framework has direct consequences for who you recruit.

Usability research requires Users. You need the people who will actually click the buttons and navigate the screens. Testing checkout flows with a CFO who will never use the product tells you nothing about day-to-day usability.

Value proposition research requires Buyers and Champions. If you want to understand what makes someone choose your product over a competitor (or choose to buy at all), you need the people involved in that decision. Users can tell you if the product works; Buyers can tell you if it is worth paying for.

Pricing research requires Buyers and Evaluators. The person who uses a tool rarely knows what their company pays for it, or what the budget constraints are. Asking a User about pricing acceptance is asking them to guess.

Implementation research might require all five roles. Rolling out enterprise software involves Users learning new workflows, IT Evaluators managing technical integration, Champions driving adoption, and Buyers tracking whether the investment paid off.

Practical Guidance

When designing B2B studies, ask yourself: Which buying center role is relevant to my research question?

If you are testing usability, recruit Users. If you are exploring purchase drivers, recruit Buyers and Champions. If you are investigating why deals stall, you might need Influencers and Evaluators.

Sometimes you need multiple roles in the same study. A pricing study might require separate sessions with Users (to understand perceived value) and Buyers (to understand willingness to pay). Do not try to cover both in a single participant profile. Recruit each role deliberately.

The Screener

A screener is a short questionnaire used to verify that potential participants match your target criteria before admitting them to the study.

Designing Effective Screeners

Use verification questions, not leading questions:

  • Bad: "Do you use fitness apps regularly?" (People say yes to qualify)
  • Good: "In the past 7 days, which of the following have you done?" [List of activities including fitness app use mixed with distractors]

Avoid obvious qualification criteria: If your study is clearly about "people who shop online," anyone can figure out they need to answer "yes" to shopping questions. Bury the real criteria among plausible alternatives.

Check for consistency: Ask related questions at different points. If someone claims to use your product daily but later cannot name a single feature, flag the inconsistency.

Red Flags

Watch for:

  • Participants who give the "right" answer to every screening question
  • Responses that are too fast (not reading carefully)
  • Inconsistent answers across related questions
  • Overly enthusiastic participants who seem desperate to qualify
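When screeners arrive in bulk from a panel, these red flags can be checked programmatically before manual review. A hypothetical sketch; the field names (completion_seconds, claimed_usage, and so on) are assumptions for illustration, not the schema of any real screening tool:

```python
def flag_screener_response(resp, min_seconds=60):
    """Flag suspicious screener submissions for manual review.

    `resp` is a dict of illustrative fields; thresholds and
    field names are assumptions, not a real tool's schema.
    """
    flags = []
    # Too fast: the respondent likely did not read the questions.
    if resp["completion_seconds"] < min_seconds:
        flags.append("too fast")
    # Consistency check: a claimed daily user should name at least one feature.
    if resp["claimed_usage"] == "daily" and not resp["features_named"]:
        flags.append("inconsistent: daily user who names no features")
    # Hitting every single criterion is itself a red flag.
    if all(resp["criteria_hits"].values()):
        flags.append("perfect qualification: possible professional respondent")
    return flags

suspicious = {
    "completion_seconds": 35,
    "claimed_usage": "daily",
    "features_named": [],
    "criteria_hits": {"uses_product": True, "target_segment": True},
}
print(flag_screener_response(suspicious))
```

Flagged responses still need a human look; the point is to triage, not to auto-reject.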

How to Design a Screener (Without Tipping Your Hand)

The Problem: If you ask "Do you play golf?", participants will lie to get the incentive. Professional panel respondents are sophisticated; they know which answers qualify them.

The Solution: Disguise your criteria. Embed the target answer in a list of decoys.

  • Transparent (bad): "Do you play golf regularly?"
  • Disguised (good): "Which of the following sports do you play weekly? [Tennis, Swimming, Golf, Cycling, None of the above]"

The second version does not signal which answer you want. A participant cannot game a question when they do not know what you are looking for.

The Verification: Always conduct a 5-minute "Tech Check" video call before the main session. Frame it as testing their camera and microphone, but use it to verify:

  1. Identity: Is this the same person who filled out the screener?
  2. Articulation: Can they string together coherent sentences about the topic?
  3. Engagement: Are they present and focused, or distracted and rushed?

No-Shows and Backup Plans

Expect some participants not to show up. No-show rates vary but typically range from 10% to 30%, depending on the study type and incentive.

Mitigation strategies:

  • Send reminder emails/texts 24 hours and 2 hours before the session
  • Overbook slightly (recruit 12 to fill 10 slots)
  • Have backup participants on standby for critical studies
  • Make rescheduling easy for participants who give advance notice
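The overbooking rule of thumb follows directly from your expected no-show rate: divide the slots you need by the share of participants who actually show. A minimal sketch; rounding up makes it slightly more conservative than the 12-for-10 example above:

```python
import math

def slots_to_book(slots_needed, no_show_rate):
    """How many sessions to schedule so that, on average,
    enough participants show up after no-shows."""
    return math.ceil(slots_needed / (1.0 - no_show_rate))

# At a 20% no-show rate, filling 10 slots means scheduling ~13 sessions.
print(slots_to_book(10, 0.20))  # → 13
```

For critical studies, pair this with standby participants rather than relying on averages: overbooking protects the expected value, backups protect the worst case.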

Incentives

Incentives are not bribes; they are compensation for participants' time and effort. The right incentive:

  • Respects their time: Compensation should reflect the study length and any inconvenience
  • Matches the audience: A $50 gift card means different things to a college student versus a C-level executive
  • Does not bias responses: Avoid incentives that depend on giving certain answers

Common incentive types:

  • Cash or cash equivalents (gift cards)
  • Product credits or discounts
  • Early access to new features
  • Charitable donations in their name

Working with Personas

If your organization has defined personas [1], use them as a starting point for recruiting criteria. But remember:

Personas are composite representations: they combine characteristics of many users into an archetype. Real participants will not perfectly match any persona.

Your job is to recruit people who share the key characteristics of your target persona, not to find someone who matches every attribute.

The Single Biggest Mistake

The single biggest recruiting mistake is accepting anyone who shows up.

When timelines slip or panels underperform, there is pressure to fill slots with whoever is available. This is precisely when discipline matters most.

A study with the wrong participants does not just produce less insight; it produces misleading insight. You will find patterns, make recommendations, and influence decisions based on data that does not represent your actual users.

It is better to delay a study, reduce scope, or adjust expectations than to proceed with the wrong people.

What This Means for Practice

Recruiting deserves the same rigor as any other methodological decision. Define your target precisely. Screen carefully. Verify that who shows up matches who you intended to study.

The participants you recruit determine the population your findings represent. Choose them deliberately.

References

  1. John Pruitt & Tamara Adlin (2010). "The Persona Lifecycle: Keeping People in Mind Throughout Product Design". Morgan Kaufmann.
