What user research actually is
User research is the practice of learning directly from the people who use (or might use) your product, in order to make better product decisions. That's it. It doesn't require a research team, a lab, a budget, or a PhD. What it requires is a structured approach to gathering insight from real people instead of guessing.
Most founders avoid user research because they think it's slow, expensive, or only relevant once they've built something. All three of these are wrong. User research is fastest and cheapest before you build, because the cost of changing a Figma file is zero and the cost of changing shipped code is high.
Founders who do user research well don't build perfect products on the first try — they build things that are wrong in smaller and smaller ways, faster and faster, because they're learning from real people between every iteration.
Generative vs Evaluative Research
There are two fundamentally different types of user research, and mixing them up is the most common mistake founders make.
Generative research
Generative research explores unknown territory. You use it when you don't yet know what to build, or when you want to understand a problem space before committing to a solution. The goal is to discover: what problems do these people have, how do they think about them, what do they currently do instead?
Methods: open-ended interviews, diary studies, contextual observation. The output is themes, insights, and hypotheses — not metrics.
Evaluative research
Evaluative research tests something you've already designed or built. You use it when you want to know: does this work? Can users complete this task? Is this message clear? Does this design cause confusion?
Methods: usability testing, structured feedback, A/B tests, first-click tests. The output is specific issues, completion rates, and conversion data.
Method 1: User Interviews
A 30-minute conversation with someone who represents your target user generates more useful insight than any survey or analytics tool at the early stage. Interviews give you the language users use to describe their problems, the emotional weight of those problems, and the context around current solutions — all of which are invisible in quantitative data.
How to structure a discovery interview
Don't create a list of questions. Create a list of topics you want to explore, and let the conversation flow naturally. Discovery interviews are not surveys — you follow interesting threads, ask "why?" repeatedly, and listen more than you talk.
A basic structure that works:
- Warm-up (5 min): Ask about their role, day-to-day work, and context. Get them talking.
- The problem area (15 min): Ask about their experience with the specific area you're exploring. "Tell me about the last time you [experienced the problem]." "What do you do about it?" "What have you tried that didn't work?"
- Current solutions (8 min): "What tools or methods do you currently use?" "What do you like about them?" "What's frustrating?"
- Closing (2 min): "Is there anything I should have asked that I didn't?"
How many interviews do you need?
For early-stage generative research: 5–8 interviews with people who match your target user profile. After 5 sessions, you'll start hearing the same themes repeat. After 8, you have diminishing returns unless you're testing a distinct second segment. Don't interview 30 people to feel more certain — interview 8 people who are a tight fit and do something with what you learn.
For recruiting: see our dedicated guide on how to recruit user research participants for free. Or generate a set of interview questions instantly with our free User Interview Questions Generator.
Method 2: Structured Feedback Platforms
Structured feedback platforms like HelpMarq match your project with reviewers who have relevant experience and guide them through a template covering the dimensions that matter: clarity, value proposition, UX, credibility, and conversion. This produces coverage you can't get from an open-ended "what do you think?" request.
Unlike user interviews, structured feedback scales — you can get multiple independent perspectives within 48 hours, without scheduling calls, writing screeners, or coordinating sessions. It fills the gap between "talk to users individually" and "run a full usability study."
Structured feedback is most useful for: landing pages before launch, MVPs before wider beta, pitch decks before investor meetings, and any point where you want a reality check from people who have no social incentive to be kind. Submit your project to HelpMarq to get started.
Method 3: Surveys
Surveys work best when you already know what questions to ask — which means they're more useful evaluatively than generatively. A survey with the wrong questions gives you confident-looking data that answers the wrong question. Use interviews to discover the questions, then surveys to measure them at scale.
The three most useful survey questions for startups
- Sean Ellis PMF question: "How would you feel if you could no longer use [product]?" — measures product-market fit signal.
- Exit intent: "What almost stopped you from signing up today?" — reveals conversion barriers directly.
- Visitor intent: "What brought you to this page today?" — reveals the job-to-be-done that visitors are hiring your product for.
Free tools: Tally (unlimited responses on free plan), Google Forms (simple), Hotjar (for in-site surveys with exit intent triggers).
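Scoring the Sean Ellis question is simple arithmetic: the share of respondents answering "very disappointed," with 40% as the commonly cited benchmark for a product-market fit signal. A minimal sketch, using made-up responses for illustration:

```python
from collections import Counter

# Hypothetical responses to "How would you feel if you could
# no longer use [product]?" (not real survey data)
responses = [
    "very disappointed", "somewhat disappointed", "very disappointed",
    "not disappointed", "very disappointed", "somewhat disappointed",
    "very disappointed", "not disappointed", "very disappointed",
    "somewhat disappointed",
]

counts = Counter(responses)
pmf_score = counts["very disappointed"] / len(responses)

print(f"{pmf_score:.0%} answered 'very disappointed'")  # 50% here
# Sean Ellis's benchmark: 40%+ is a product-market fit signal
print("PMF signal" if pmf_score >= 0.40 else "Keep iterating")
```

The point isn't the code — it's that this question only works if you segment: run it on active users, not everyone who ever signed up.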
Method 4: Usability Testing
Usability testing involves giving users specific tasks to complete on your product or prototype, then observing what happens. You're measuring task completion, time-on-task, confusion points, and error rates — not opinions. Watching someone try and fail to complete a task you thought was obvious is one of the highest-ROI research activities available.
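Because usability testing produces behavioral measures rather than opinions, even a handful of sessions can be summarized numerically. A small sketch with invented session results (the metric definitions — completion rate, median time-on-task over successful attempts, errors per session — are standard; the data is hypothetical):

```python
from statistics import median

# Hypothetical sessions: (participant, completed_task, seconds_on_task, errors)
sessions = [
    ("p1", True, 42, 0),
    ("p2", False, 95, 3),
    ("p3", True, 61, 1),
    ("p4", True, 38, 0),
    ("p5", False, 120, 4),
]

completion_rate = sum(1 for _, done, _, _ in sessions if done) / len(sessions)
# Time-on-task is conventionally reported for successful attempts only
time_on_task = median(t for _, done, t, _ in sessions if done)
error_rate = sum(e for *_, e in sessions) / len(sessions)

print(f"Completion: {completion_rate:.0%}, "
      f"median time: {time_on_task}s, errors/session: {error_rate:.1f}")
# Completion: 60%, median time: 42s, errors/session: 1.6
```

With five participants these numbers are directional, not statistical — their job is to flag which task to watch more closely, not to prove anything.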
Moderated vs unmoderated
Moderated testing: you're in the session (in person or on a call), guiding the participant, asking follow-up questions in real time. Slower but richer.
Unmoderated testing: participants complete tasks independently, you review recordings afterward. Faster and easier to scale. See our comparison: Unmoderated vs moderated usability testing: which to use when.
Free usability testing tools
- Maze (free tier): 1 study/month, 10 responses — task-based prototype testing
- Loom: ask participants to screen-record while narrating (DIY think-aloud)
- Google Meet + screen share: moderated sessions at zero cost
- Useberry (free tier): first-click tests and prototype walkthroughs
Method 5: Behavioral Analytics
Behavioral analytics tools record what users actually do on your site or app — where they click, how far they scroll, where they drop off, what they rage-click. This is passive research: you install a script and data accumulates automatically. It tells you what is happening without telling you why — which is why it pairs well with qualitative methods.
Best free options: Microsoft Clarity (fully free, unlimited session recordings and heatmaps), Hotjar free tier (limited recordings + basic surveys), PostHog free tier (event-based analytics and funnel analysis).
How to Recruit User Research Participants
The biggest obstacle most founders cite for user research is "I don't know who to talk to." This is solvable. The full guide covers every method: How to recruit user research participants (free methods).
Quick methods that work:
- Twitter/X and LinkedIn searches: Find people who have publicly complained about the problem you're solving, then reach out directly.
- Community postings: Post a short recruiting message in Slack groups, Discord servers, or forums where your target users congregate. Offer a small gift card or public credit.
- Your existing network: You probably know 2–3 people who fit your ICP. Ask each of them for one referral.
- Job postings: If companies are hiring for roles that involve your problem area, the people currently in those roles are your ICP. LinkedIn gives you direct access to them.
How to Analyze What You Learn
Raw research data is not insight. You need to process it. A basic analysis workflow for qualitative research:
- Capture everything: Record all sessions (with permission), take notes during, transcribe key quotes.
- Extract observations: Go through your notes and pull out every discrete observation — one per sticky note or row in a spreadsheet. Don't interpret yet, just observe.
- Cluster by theme: Group observations that seem related. Label the clusters.
- Identify patterns: A pattern is something at least 3 of your participants mentioned or did. Single observations are interesting; patterns are actionable.
- Generate insights: An insight is an interpretation of a pattern. "5 of 8 participants didn't understand the pricing" is an observation. "The pricing structure is creating conversion hesitation, probably because we're using technical plan names instead of outcome-based ones" is an insight.
- Define actions: For each insight, decide: build, change, test further, or discard.
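The clustering and pattern steps above can be sketched in a few lines. This is a toy illustration of the ≥3-participants rule, with invented observation tags — in practice the rows come from your notes, one observation each:

```python
from collections import defaultdict

# Hypothetical observations, one per row: (participant, theme)
observations = [
    ("p1", "pricing unclear"), ("p2", "pricing unclear"), ("p4", "pricing unclear"),
    ("p1", "wants integrations"), ("p3", "wants integrations"),
    ("p5", "onboarding too long"), ("p5", "pricing unclear"),
]

# Cluster: which distinct participants raised each theme?
theme_participants = defaultdict(set)
for participant, theme in observations:
    theme_participants[theme].add(participant)

# A pattern = a theme at least 3 different participants mentioned
patterns = [t for t, ps in theme_participants.items() if len(ps) >= 3]
print(patterns)  # ['pricing unclear'] — raised by p1, p2, p4, p5
```

Note the deduplication by participant: one person mentioning a theme three times is still a single observation, not a pattern.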
Free User Research Toolkit for Startups
| Research type | Best free tool | When to use it |
|---|---|---|
| Discovery interviews | Google Meet + Otter.ai (transcription) | Before building — understand the problem |
| Structured feedback | HelpMarq | Any stage — get evaluative feedback in 48h |
| Surveys | Tally (unlimited free) | Measuring patterns across users at scale |
| Usability testing (unmoderated) | Maze free tier | Testing prototypes and click flows |
| Session recordings | Microsoft Clarity | Post-launch — understand real visitor behavior |
| Exit intent surveys | Hotjar free tier | Find out why visitors leave without converting |
| Interview questions | HelpMarq Generator | Generate tailored interview question sets |
Get structured user feedback on your product today
HelpMarq matches your project with reviewers who have relevant expertise. Get comprehensive, structured feedback in 48 hours — free, no credit card required.
Submit your product →