What User Research Means — and What It Does Not

User research is the systematic study of the people for whom a product or service is being built. Its purpose is to understand how those people think, what they do, what they need, what frustrates them, and how they make decisions — not how we assume they do these things. This distinction is the entire point. In practice, products are built with alarming frequency on the basis of internal opinions, experiences imported from other markets, or — most often — the instincts of whoever holds the most budget or speaks most confidently in the room.

User research is not a one-time event at the start of a project. It is a practice — a continuous engagement with the reality of the people who use a product. The Nielsen Norman Group, a leading UX research organisation, estimates that user research can increase the ROI of design projects by a factor of 10 to 100, depending on project complexity and the extent of bad decisions it prevents. Even that estimate understates the case when a team builds the wrong product with great efficiency.

Qualitative and Quantitative Methods — Both Are Necessary

User research divides roughly into two methodological camps. Qualitative methods — depth interviews, ethnographic observations, usability tests — provide the why. They reveal motivations, mental models, workarounds, and frustrations that numbers alone cannot surface. Five well-conducted user interviews can generate insights that no A/B test can replicate — because they make visible the context behind the behaviour, not just the behaviour itself.

Quantitative methods — surveys, clickstream analysis, analytics review, heatmaps — provide the how many. They let you quantify how widespread a problem is, how many people take a particular path, and precisely where drop-offs occur in a funnel. The two are complementary, not interchangeable. With only quantitative data, you know something is not working but not why. With only qualitative data, you understand the why — but not whether it applies to one user or ten thousand. The strongest research programmes use both and triangulate between them.
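To make the funnel point concrete, here is a minimal sketch of a drop-off calculation. The step names and counts are invented for illustration; in practice they would come from your analytics tool.

```python
# Hypothetical funnel: number of users reaching each step (illustrative data only).
funnel = [
    ("landing", 10_000),
    ("signup_form", 4_200),
    ("account_created", 2_900),
    ("first_purchase", 870),
]

def step_conversion(funnel):
    """Return per-step conversion rates as (step label, rate) pairs."""
    rates = []
    for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
        rates.append((f"{prev_name} -> {name}", n / prev_n))
    return rates

rates = step_conversion(funnel)
worst = min(rates, key=lambda r: r[1])  # the step where the most users are lost

for step, rate in rates:
    print(f"{step}: {rate:.1%}")
print("Largest drop-off:", worst[0])
```

The numbers tell you where the biggest drop-off is; only qualitative follow-up (interviews, usability tests at that step) can tell you why it happens.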

Interviews, Surveys, and Usability Tests Compared

User interviews are the most powerful qualitative method — and the one most commonly misapplied in practice. A good interview is not a questionnaire delivered verbally. It is an open, exploratory conversation designed to understand the thinking of one person — not to collect confirmation for existing hypotheses. Jakob Nielsen demonstrated in early NNG research that five participants in qualitative usability tests are sufficient to identify approximately 85 percent of the significant usability problems in a product. More participants yield diminishing returns; the right question is usually which problems you are testing for, not how many people you are testing with.
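The five-participant figure comes from the Nielsen/Landauer model, in which the expected share of usability problems found by n testers is 1 − (1 − L)^n, where L is the average proportion of problems a single tester uncovers (about 0.31 in their data; it varies by study). A quick sketch reproduces the roughly 85 percent figure:

```python
def problems_found(n_users, L=0.31):
    """Expected share of usability problems uncovered by n_users testers,
    per the Nielsen/Landauer model: 1 - (1 - L)**n."""
    return 1 - (1 - L) ** n_users

for n in (1, 3, 5, 10):
    # roughly 31%, 67%, 84%, 98%
    print(f"{n:2d} users: {problems_found(n):.0%}")
```

The diminishing returns are visible directly: going from five to ten participants adds far less than the first five did, which is why running several small tests is usually more valuable than one large one.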

Surveys are scalable and cost-efficient — but difficult to interpret well. Properly designed surveys have clear hypotheses, avoid leading questions, and are treated as a starting point for investigation rather than a source of final answers. Usability tests are the most direct method: users attempt a specific task with a product while researchers observe where they fail, pause, or improvise. This method delivers immediately actionable insights — often in a single session — because it removes assumptions from the equation and replaces them with observed behaviour.

When User Research Should Happen in a Project

The answer to when user research should take place is: earlier than most projects do it. Development is the most expensive phase in which to make corrections. Errors that a single hour of user research would have identified in the concept phase cost many times more to fix once implementation has begun. IBM has documented in its own research that fixing a defect discovered after launch costs up to one hundred times more than fixing the same defect during the design phase. That ratio alone justifies early-stage research for almost any project of meaningful scale.

The right moment for generative research — interviews, contextual inquiry, diary studies — is before the concept phase. It helps to establish which problems actually need to be solved, and for whom. Evaluative research — usability tests, expert reviews, first-click testing — ideally runs throughout the design and development process to verify that the solutions being built actually work. No moment in a project is the wrong time for research; earlier is structurally more economical because the cost of change decreases as you move toward production.

What Happens When User Research Is Missing

Skipping user research is rarely an explicit decision. It is usually the sum of small compromises: the budget is tight, the timeline is compressed, and everyone in the room believes they already know what users want. IDEO — the design consultancy that helped establish the discipline of human-centred design — estimates that the majority of failed digital products fail not because of poor technology, but because they solve a problem the target audience either does not have or prioritises differently than assumed. Research would have revealed this before the code was written.

The consequence of absent research is not always complete failure. More often it is a product that functions but never reaches its potential — because core features miss actual needs, the onboarding experience loses people who would have converted, or the language of the product does not match the language of its users. User research makes these problems visible when they are still solvable — before the budget is spent, before the launch date passes, and before the window for correction closes. The question is not whether the insights have value; it is whether you want them before or after you build the wrong thing.