What is the difference between quantitative and qualitative testing, and why are both important in usability testing?
Collecting data to inform and improve user experience design is a never-ending need for companies of every size, from tiny start-ups to large enterprises. There’s much healthy debate about which UX research and usability testing methods should be used, and when.
When it comes to quantitative versus qualitative research, best practice is to mix both approaches to capture a holistic perspective. The challenge, however, is how to combine them in a cost-effective, productive way to obtain data that drives user-centered design.
What’s the difference?
The difference between quantitative and qualitative approaches is often explained using contrasting terminology like hard vs. soft, numeric vs. descriptive, statistics vs. insights, measure vs. explore, what vs. why, UX vs. GUI. These quick contrasts are useful for highlighting the strengths and limitations of each approach when it is used alone.
Qualitative tells stories.
At the core of user experience is the subjective, emotion-based response of the individual user – the way a website makes visitors feel. These feelings can range from delighted, impressed, or hooked, to confused, frustrated, and angry. All these welling emotions, and the ones in between, have one thing in common: they won’t show up in the numbers.
Quantitative data can tell you which pages people visited, how long they stayed, where they came from, and where they went next, but the story itself is missing; the feelings aren’t there.
The user who clicked through to your registration page and then left without signing up – what kept them from continuing? The users who stayed on the homepage so long – was their interest captured by great content, or were they fruitlessly searching for an About section? When someone clicks to a new page, did they move a step closer to their goal, or did they discover they had mistaken the page for something it didn’t provide?
Listening to a user narrate their journey, hearing their reactions as they navigate a site, fills in those blanks. The ups and downs, the irritations and confusions, the aha moments, the satisfaction of a task completed, all come together to tell a story, and the best and worst things about your design stand out like warm bodies through infrared goggles. Qualitative feedback tells “why” at a level that quantitative data cannot reveal.
Quantitative gives context.
Quantitative feedback allows you to understand your site’s usability in the context of a much bigger picture.
Unlike qualitative information, it can be used to make easy, reliable comparisons. Usability metrics like the SUS (System Usability Scale), the SEQ (Single Ease Question), and task durations and completion rates can quantify the individual’s user experience, chart usability increases and decreases over time, and show how your website performs compared to other sites.
For this reason, quantitative data has a unique capacity to persuade. It shows stakeholders and decision-makers what’s working and what’s not, and demonstrates, with numbers, undeniable disparities in performance. That gives it the power to strengthen and justify qualitative findings, and it can then be used to set measurable objectives for a new design sprint.
Combining qualitative and quantitative in your research
The challenge with getting quantitative data is that it requires scale for the numbers to mean anything. But user testing is a research method that relies on video data, and video analysis is hard to scale up.
The key is to read between the lines of your quantitative data. Video may not scale well, but if you know how to use all the usability testing data you’ve collected, the footage becomes much easier to handle and can be dissected efficiently even at large scale.
Data like task duration and task completion are more than just statistics or benchmarks to show higher-ups. Which task took the longest to complete? Which could not be completed? Did a task take longer than you expected? Look for users whose times or completion results stand out from the rest; an unusually long task duration can indicate a user who struggled with the task.
Both of these statistics also act as shortcuts: they give you a timestamp for launching a user’s video directly at the relevant task, so researchers can quickly get to the bottom of what went wrong.
Now you have a starting point for tackling any mass of video data: instead of randomly picking one of your user videos and watching it from beginning to end, pick the user who struggled most and skip to the task they had trouble with.
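To make this concrete, here is a minimal sketch of how you might flag the outliers worth watching first. It assumes you’ve exported task results into simple records; the field names and the 2-standard-deviation threshold are illustrative choices, not any particular tool’s schema or rule:

```python
from statistics import mean, stdev

# Hypothetical export of task results; field names are illustrative only.
results = [
    {"user": "A", "task": "register", "duration_s": 48,  "completed": True},
    {"user": "B", "task": "register", "duration_s": 61,  "completed": True},
    {"user": "C", "task": "register", "duration_s": 212, "completed": False},
    {"user": "D", "task": "register", "duration_s": 55,  "completed": True},
]

durations = [r["duration_s"] for r in results]
mu, sigma = mean(durations), stdev(durations)

# Flag anyone who failed the task, plus anyone whose time sits far above
# the group average -- these are the videos to open first.
watchlist = [
    r for r in results
    if not r["completed"] or r["duration_s"] > mu + 2 * sigma
]

for r in watchlist:
    print(f"Watch {r['user']} on '{r['task']}': "
          f"{r['duration_s']}s, completed={r['completed']}")
```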
As you start combining other forms of data, you can compile a shortlist of video clips to watch.
Task completion rates and task usability measures like the Single Ease Question (SEQ) help identify the most difficult portions of the experience, and which users found certain tasks difficult or even impossible to finish.
System usability scores like the SUS or PSSUQ indicate which users had the worst (and best) overall experience on the site. If you’re going to watch any videos all the way through, these will probably be the best return on your time investment.
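Scoring these measures is straightforward arithmetic: completion rate is completions divided by attempts, SEQ is the mean of a single 1–7 rating per task, and SUS follows a fixed formula (odd-numbered items contribute their score minus 1, even-numbered items contribute 5 minus their score, and the sum is multiplied by 2.5). Here’s a minimal sketch of SUS scoring used to rank users by overall experience; the usernames and ratings are made up:

```python
def sus_score(item_scores):
    """Standard SUS scoring: 10 items, each rated 1-5.
    Odd-numbered items are positively worded (contribute score - 1);
    even-numbered items are negatively worded (contribute 5 - score).
    The summed total (0-40) is multiplied by 2.5 for a 0-100 scale."""
    assert len(item_scores) == 10
    total = sum(
        (s - 1) if i % 2 == 0 else (5 - s)  # 0-based index: even index = odd-numbered item
        for i, s in enumerate(item_scores)
    )
    return total * 2.5

# Hypothetical post-test questionnaire responses (made-up users and ratings)
responses = {
    "A": [4, 2, 5, 1, 4, 2, 5, 2, 4, 1],
    "C": [2, 4, 2, 4, 3, 4, 2, 4, 2, 4],
}

# Rank users from worst to best overall experience; watch the extremes first
for user in sorted(responses, key=lambda u: sus_score(responses[u])):
    print(f"{user}: SUS = {sus_score(responses[user])}")
```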
UXCrowd, a crowdsourced usability tool, uses voting to show what users liked and disliked about the site. These results can help you identify which parts of the test videos to focus on; you can also compare UXCrowd responses to written survey responses to see what else users had to say about those issues.
Now, having used your quantitative data to create a preliminary framework for analyzing your research, you have a pared-down but highly targeted list of video clips to watch, which might look something like this:
- Users with the best and worst experiences
- Hardest 1-2 tasks from the users with the worst task ratings
- Tasks with unusually long durations from some users
- Tasks containing issues found in UXCrowd
- Users with interesting feedback in the written survey
After working through this much of your data, you can use what you’ve seen so far to make decisions about any additional videos to watch.
Sign up for a free trial to launch your first qualitative & quantitative usability study!
Trymata and Loop11
To help you tackle your usability research, the partnership between Trymata and Loop11 can come in handy. When Trymata and Loop11 are used in tandem, designers, researchers, and marketers have a powerful set of tools to optimize site and user experience.
Quantitative research with Loop11
Broadly, quantitative research captures the “what” of user behavior, providing path and performance analyses. Loop11 is a remote usability testing tool that quantifies real users’ behavior through clickstream analysis, delivering metrics such as task completion rates, number of clicks, time on task, and detailed path analysis. To get started, researchers use Loop11 to create tasks or questions that are then presented to real users. Loop11 supports unlimited tasks and questions, and as many as 1,000 users can participate. Data generated from users’ clickstreams is automatically presented in real-time reports.
The quantitative measures from Loop11 are especially useful if researchers want to:
- Identify any usability problems
- Measure task efficiency and success
- Compare against competitor usability metrics
Qualitative research with Trymata
In contrast, the goal of qualitative research is to gain insight into the thought processes behind the user’s actions or clickstream. Trymata is a remote usability testing platform that captures the “voice of the customer” via a video recording and a written summary documented by the user. As users navigate a website, they “think aloud,” verbalizing their thoughts and reactions as they complete the tasks and answer the questions posed by the researcher.
For example, a user may express surprise to find the Register button in a particular location on the site. Quantitative research tracks that the user clicked “Register” and what the user clicked on before and after. For researchers, the additional insight that the user was “surprised” augments the analysis of the user’s behavior, and the researcher observes this first-hand by viewing the narrated video delivered by Trymata.
What you can learn from Trymata’s qualitative feedback:
- How does the user experience compare to the user’s expectations about how things will work?
- Why is the user experiencing the website or product this way (with in-context feedback)?
What’s the linkage?
Loop11 provides the quantitative measures through metrics and reports, and Trymata delivers the qualitative insights via narrated videos and written answers to survey questions. With these two easy-to-use, on-demand testing tools, any researcher or marketer can simultaneously gain insight into both the “what” and the “why” of user behavior.
Although no single usability tool provides this powerful combination, the combined functionality can be replicated by initiating a project in Loop11 and then augmenting the testing process in Trymata. To get started, open an account on Trymata.com and Loop11.com. Both sites offer a free trial for the first project, to help you get started and become familiar with the process and results. Once the accounts are created, the process is straightforward:
- At Loop11, create the test scenarios, invite user participants, and analyze metrics and reports.
- At Trymata, create a new test and paste the project URL generated when you launched your Loop11 project into the “URL” field. There’s no need to re-enter the test scenario and tasks in the “Scenario” and “Task” sections, as that information is already captured in the Loop11 project URL.
- Use the default written survey questions or customize as needed. Select the demographics and number of testers.
- Trymata handles the rest, and will deliver videos of each test user navigating the website with “think aloud” narration. Users will also provide written answers to your survey questions; for example, “What did you like about the website?”
That’s it! Let us know how this works for you.