Summary
Most panels optimize for speed and volume. We optimize for data you can trust. Our four-pillar approach — identity verification, participant experience, pre-launch study review, and continuous quality monitoring — addresses the structural problems that produce bad data in the first place.
Most panels optimize for speed and volume. The result: fake profiles, disengaged respondents, and data you cannot trust. In B2B research, incentives are so low and survey spam so high that real CEOs and decision-makers are unlikely to spend their evenings filling out surveys for €1.75. The problem is not that participants are lazy; it is that the system gives them no reason to care.
Our approach addresses the structural causes of bad data through four pillars: verified people, participant experience, study quality control, and continuous quality monitoring.
Verified People
We verify who our participants are — not just that they exist.
Every participant goes through multi-step identity verification before they are invited to any study:
- Email verification — required before any incentive payout
- Bank transfer with IBAN name matching — the name on the bank account must match the profile. Exceptions are handled manually with a documented explanation
- Phone verification — where chosen as a contact method
- External profile review — LinkedIn, Steam, or other external profiles are linked and manually reviewed by staff
- Video verification — the highest level, achieved through qualitative study participation where a researcher has personally interacted with the participant
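The IBAN name-matching step can be illustrated with a minimal sketch. The helper names (`normalize_name`, `iban_name_matches`) and the token-subset rule are illustrative assumptions, not the actual matching logic; as the list above notes, mismatches are resolved manually.

```python
import re
import unicodedata

def normalize_name(name: str) -> str:
    """Lowercase, strip accents and punctuation, collapse whitespace."""
    name = unicodedata.normalize("NFKD", name)
    name = "".join(c for c in name if not unicodedata.combining(c))
    name = re.sub(r"[^a-z ]", " ", name.lower())
    return " ".join(name.split())

def iban_name_matches(profile_name: str, account_holder: str) -> bool:
    """True when every token of the profile name appears in the bank
    account holder name (order-, case-, and accent-insensitive).
    Anything that fails this check would go to manual review."""
    profile = set(normalize_name(profile_name).split())
    account = set(normalize_name(account_holder).split())
    return profile <= account

print(iban_name_matches("José Müller", "Jose MULLER"))       # True
print(iban_name_matches("Anna Schmidt", "A. Schmidt GmbH"))  # False
```

An automated first pass like this keeps payouts fast for the common case while routing edge cases (maiden names, joint accounts, business accounts) to a human with a documented explanation.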
Identity Levels
Each participant carries an identity status that is visible to clients when building their sample.
| Level | Label | Requirements |
|---|---|---|
| 1 | Verified | Email confirmed + IBAN name match |
| 2 | Confirmed | Verified + external profile linked and manually checked |
| 3 | Known | Confirmed + participated in a qualitative video study — a researcher has personally interacted with this person |
These levels are cumulative. A "Known" participant has passed through every prior verification step.
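Because the levels are cumulative and ordered, a client-side sample filter reduces to a single comparison. A minimal sketch (the enum and function names are illustrative, not a real API):

```python
from enum import IntEnum

class IdentityLevel(IntEnum):
    """Cumulative verification levels; a higher level implies
    every lower level's checks have already been passed."""
    VERIFIED = 1   # email confirmed + IBAN name match
    CONFIRMED = 2  # + external profile linked and manually checked
    KNOWN = 3      # + qualitative video study with a researcher

def meets_requirement(participant: IdentityLevel,
                      required: IdentityLevel) -> bool:
    """Cumulative levels mean a simple ordering check suffices when a
    client sets a minimum verification level for their sample."""
    return participant >= required

print(meets_requirement(IdentityLevel.KNOWN, IdentityLevel.CONFIRMED))  # True
print(meets_requirement(IdentityLevel.VERIFIED, IdentityLevel.KNOWN))   # False
```

The ordering property is what makes the levels useful at sampling time: "at least Confirmed" is a well-defined filter rather than a combination of independent flags.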
Participant Experience
Quality data comes from people who are treated well. This is the causal core of our approach: participants who are respected answer honestly and with engagement. Participants who are annoyed click through mindlessly.
We structure the entire participant experience around this principle:
Fair, above-industry incentives. Incentives are standardized by participant rank. B2B Decision Makers receive significantly more than B2C General consumers — because real decision-makers do not fill out surveys for €1.75. This is not generosity; it is the only way to attract the professionals whose responses actually matter. For a detailed discussion of incentive budgeting and payment methods, see Research Incentives: Budgeting & Compliance.
Participant-controlled frequency. Participants choose how often they want to be invited during onboarding. We enforce an upper bound — no more than once per month for consumer studies, once per quarter for professional studies — but many participants set their own preferences below that ceiling. Every invitation is hand-picked for the participant's profile.
| Segment | Upper Bound | Note |
|---|---|---|
| B2C (General + Gamer) | Max 1x per month | Participant may choose less |
| B2B (Professional + Decision Maker) | Max 1x per quarter | Participant may choose less |
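The two-sided rule above (a hard segment ceiling plus a possibly stricter participant preference) can be sketched as follows. The day counts map "1x per month" to 30 days and "1x per quarter" to 90 days for illustration; the function and its parameters are assumptions, not the production scheduler.

```python
from datetime import date
from typing import Optional

# Segment ceilings from the table above, expressed as minimum gaps in days
SEGMENT_CEILING_DAYS = {
    "B2C": 30,  # max 1x per month
    "B2B": 90,  # max 1x per quarter
}

def may_invite(segment: str, preferred_gap_days: int,
               last_invited: Optional[date], today: date) -> bool:
    """A participant is eligible only when BOTH the segment ceiling and
    their own (possibly stricter) preference have elapsed."""
    gap = max(SEGMENT_CEILING_DAYS[segment], preferred_gap_days)
    if last_invited is None:
        return True  # never invited before
    return (today - last_invited).days >= gap

today = date(2024, 6, 1)
print(may_invite("B2C", 30, date(2024, 4, 20), today))  # True  (42 days >= 30)
print(may_invite("B2B", 90, date(2024, 4, 1), today))   # False (61 days < 90)
```

Taking the maximum of the ceiling and the preference is the key design point: the segment cap is an upper bound on contact frequency, never a target.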
Study duration limits. Time windows are capped so that individual studies do not overwhelm participants.
Expert panel positioning. Participants sign up with Busch Labs because they are treated as experts, not as data points. They receive selective, targeted invitations instead of spam.
Study Quality Control
We review every study before a single participant is invited.
Every study is personally reviewed by Busch Labs staff — discussion guide, incentive level, duration, methodology. Studies that do not meet our standards are rejected. This has happened, and it will happen again.
Clients receive feedback and improvement suggestions, not just a "no." The goal is to get the study to a standard where participants will take it seriously and produce useful data.
Participant experience is protected, even if that means declining a project. A study that wastes participants' time damages the relationship that makes future data possible. We treat that relationship as a long-term asset.
Continuous Quality Monitoring
We monitor response quality without disrupting the participant experience.
- Open-end quality scoring — evaluating response length, relevance, and plausibility
- Completion time analysis — flagging responses that are too fast (speeders) or too slow (possible multitaskers)
- Plausibility checks — spot-checked or full review depending on the study
- Environment consistency checking — comparing technical snapshots (device, location region, timezone) across registrations and study participations to detect anomalies
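The completion-time check can be sketched as a median-relative flag. The ratio thresholds (40% and 300% of the median) and function name are illustrative assumptions, not the actual monitoring parameters.

```python
import statistics

def flag_completion_times(seconds: list,
                          fast_ratio: float = 0.4,
                          slow_ratio: float = 3.0) -> list:
    """Flag responses far from the study's median completion time:
    'speeder' below fast_ratio * median, 'slow' above slow_ratio * median
    (possible multitasking), 'ok' otherwise. Thresholds are illustrative."""
    median = statistics.median(seconds)
    flags = []
    for t in seconds:
        if t < fast_ratio * median:
            flags.append("speeder")
        elif t > slow_ratio * median:
            flags.append("slow")
        else:
            flags.append("ok")
    return flags

times = [300, 280, 95, 310, 1200, 290]       # seconds per complete
print(flag_completion_times(times))
# ['ok', 'ok', 'speeder', 'ok', 'slow', 'ok']
```

Anchoring the thresholds to the median of the actual study, rather than a fixed absolute time, keeps the check meaningful across surveys of very different lengths.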
We do not use classic trap questions. They annoy participants and damage the experience we have built. The cost of a trap question is higher than the cost of monitoring quality through other means.
The Bottom Line
We would rather tell you we cannot deliver your sample than deliver bad data. If the participants you need are not in our panel at the verification level your study requires, we will say so. That is a more useful answer than a dataset full of professional survey-takers.