Usability testing is a crucial process in software development, helping designers and developers create user-friendly products. By observing real users interact with interfaces, teams can identify issues, gather insights, and make improvements that enhance the overall user experience.

There are various approaches to usability testing, including formative vs. summative, moderated vs. unmoderated, and in-person vs. remote. Each method has its strengths, allowing teams to gather different types of data and insights to inform their design decisions and optimize product usability.

Goals of usability testing

  • Usability testing evaluates how well users can interact with a product or system to achieve their goals, providing valuable insights for design improvements
  • Identifies usability issues, confusing elements, and areas for enhancement by observing real users as they attempt tasks within the interface
  • Ensures the product meets user needs, is intuitive to use, and provides a positive user experience, ultimately increasing user satisfaction and adoption

Formative vs summative testing

Formative testing

  • Conducted during the design and development process to identify usability issues early and inform iterative improvements
  • Focuses on identifying specific usability problems, gathering qualitative feedback, and refining the design based on user insights
  • Typically involves smaller sample sizes and may use low-fidelity prototypes or wireframes to test concepts and interactions
  • Helps shape the product's design direction and ensures usability is considered throughout the development lifecycle

Summative testing

  • Performed after the product is developed or near completion to assess its overall usability and validate that it meets user requirements
  • Evaluates the effectiveness, efficiency, and satisfaction of the final product using quantitative metrics and benchmarks
  • Involves larger sample sizes and uses high-fidelity prototypes or fully functional systems to simulate real-world usage scenarios
  • Provides a comprehensive assessment of the product's usability, helping to make final design decisions and ensure readiness for release

Moderated vs unmoderated testing

Moderated testing

  • Involves a facilitator guiding participants through the testing process, providing instructions, answering questions, and observing their interactions
  • Allows for real-time observation of user behavior, body language, and emotional responses, providing rich qualitative insights
  • Enables the facilitator to probe for deeper understanding, clarify confusion, and adapt the test based on user feedback (Wizard of Oz technique)
  • Suitable for complex or specialized products that require guidance or when detailed qualitative feedback is needed (usability lab testing)

Unmoderated testing

  • Participants complete the usability test independently, without the presence of a facilitator, using online tools or platforms
  • Enables testing with a larger and more diverse sample of users, as participants can complete the test at their own convenience
  • Provides quantitative data on task completion rates, time on task, and user paths, allowing for statistical analysis and comparison
  • Suitable for evaluating specific tasks or gathering feedback on a larger scale, particularly for web-based or mobile applications (online usability testing)

In-person vs remote testing

In-person testing

  • Conducted in a physical location, such as a usability lab or office, with the participant and facilitator present in the same room
  • Allows for direct observation of user interactions, body language, and facial expressions, providing rich qualitative insights
  • Enables the facilitator to build rapport with participants, probe for deeper understanding, and adapt the test based on user feedback
  • Suitable for testing physical products, specialized equipment, or when detailed qualitative feedback and observation are required

Remote testing

  • Conducted online, with participants and facilitators in different locations, using video conferencing, screen sharing, or specialized usability testing tools
  • Enables testing with a geographically diverse sample of users, reducing travel costs and logistical constraints
  • Provides a more natural testing environment, as participants use their own devices and settings, increasing ecological validity
  • Suitable for testing web-based or mobile applications, gathering feedback from a larger sample, or when in-person testing is not feasible (remote usability testing)

Usability testing methods

Think-aloud protocol

  • Participants verbalize their thoughts, feelings, and decision-making processes while interacting with the product or system
  • Provides insights into users' mental models, expectations, and challenges, helping to identify areas of confusion or frustration
  • Requires participants to be comfortable with verbalization and may influence their natural behavior, so facilitators should provide clear instructions and practice sessions

Cognitive walkthrough

  • Evaluators or experts step through a series of tasks, simulating the user's problem-solving process and assessing the learnability of the interface
  • Focuses on evaluating the ease of learning and identifying potential barriers for new users, particularly in goal-oriented tasks
  • Helps identify gaps in the user's understanding and opportunities for improving the onboarding experience or user guidance

Heuristic evaluation

  • Experts assess the interface against a set of established usability principles or heuristics, identifying potential usability issues and areas for improvement
  • Commonly used heuristics include Nielsen's 10 usability heuristics, which cover aspects such as consistency, error prevention, and user control
  • Provides a structured approach to identifying usability problems, but may not capture issues that arise from real user interactions

Eye tracking studies

  • Specialized equipment tracks participants' eye movements and gaze patterns while interacting with the interface, providing insights into visual attention and information processing
  • Helps identify which elements capture users' attention, how they scan the interface, and potential areas of confusion or distraction
  • Requires specialized hardware and software, and data analysis can be complex, but provides valuable quantitative insights into user behavior

A/B testing

  • Compares two or more design variations to determine which performs better in terms of usability, engagement, or conversion rates
  • Randomly assigns participants to different design variations and measures key metrics, such as task success rate or time on task
  • Provides quantitative data to support design decisions and optimize the user experience, particularly for web-based or mobile applications
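The comparison step can be sketched as a two-proportion z-test on task completion counts for two variants. This is a minimal stdlib-only sketch; all counts are hypothetical, and in practice a statistics library would replace the hand-rolled normal CDF.

```python
import math

# Minimal sketch of an A/B comparison: a two-proportion z-test on task
# completion counts for two design variants. All counts are hypothetical.

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int):
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Variant B's completion rate (78/100) vs variant A's (62/100)
z, p = two_proportion_z(success_a=62, n_a=100, success_b=78, n_b=100)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these hypothetical counts the difference clears the conventional 0.05 significance level; a real study would also plan the sample size and check test assumptions up front.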

Usability testing process

Defining test objectives

  • Clearly articulate the goals and research questions the usability test aims to address, ensuring alignment with the product's overall objectives
  • Identify the key tasks, user flows, or features to be evaluated, focusing on critical paths and areas of potential usability concern
  • Define success criteria and metrics for assessing usability, such as task completion rates, time on task, or user satisfaction scores

Recruiting representative users

  • Identify the target user profile, considering demographics, skills, and experience levels relevant to the product or system being tested
  • Determine the appropriate sample size based on the test objectives, available resources, and desired level of confidence in the results
  • Recruit participants who match the target user profile, ensuring a diverse and representative sample to capture a range of perspectives and behaviors
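One common way to reason about sample size for qualitative studies is Nielsen's problem-discovery model, where each participant has some probability p of surfacing any given problem. A minimal sketch, assuming p = 0.31 (the average Nielsen reported; any real study's p will vary):

```python
# Expected share of usability problems found with n participants,
# assuming each problem is observed independently with probability p
# per participant. p = 0.31 is an assumed average, not a constant.

def problems_found(n: int, p: float = 0.31) -> float:
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 10):
    print(f"{n:2d} participants -> {problems_found(n):.0%} of problems")
```

Under this assumption five participants surface roughly 84% of problems, which is why several small iterative rounds are often preferred over one large study.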

Preparing test scenarios

  • Develop realistic test scenarios and tasks that align with the test objectives and represent typical user goals and workflows
  • Ensure tasks are clearly defined, achievable within the test timeframe, and cover the key features or areas of interest
  • Create task instructions and data sets that provide necessary context and guidance without leading or biasing participant behavior

Conducting the test sessions

  • Provide a welcoming and comfortable environment for participants, ensuring they feel at ease and understand the purpose of the test
  • Obtain informed consent and communicate that the focus is on evaluating the product, not the participant's abilities or performance
  • Follow the test protocol consistently across sessions, providing clear instructions, observing participant behavior, and collecting relevant data

Analyzing and reporting results

  • Compile and analyze the data collected during the test sessions, identifying patterns, usability issues, and areas for improvement
  • Prioritize usability findings based on severity, frequency, and impact on user experience, using a structured classification system
  • Prepare a clear and concise report summarizing the key findings, recommendations, and actionable insights for stakeholders and the design team

Usability metrics

Task success rate

  • Measures the percentage of participants who successfully complete a given task, providing an indicator of the usability and effectiveness of the interface
  • Helps identify tasks that may be challenging or confusing for users, and can be used to track improvements over time
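As a formula this is simply successful completions divided by attempts. A small sketch with hypothetical session results:

```python
# Task success rate: share of participants who completed the task.
# The checkout_task observations are hypothetical sample data.

def task_success_rate(results: list[bool]) -> float:
    if not results:
        raise ValueError("no observations")
    return 100 * sum(results) / len(results)

checkout_task = [True, True, False, True, True, False, True, True]
print(f"Success rate: {task_success_rate(checkout_task):.0f}%")  # 6 of 8 -> 75%
```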

Time on task

  • Measures the amount of time participants take to complete a specific task, providing insights into the efficiency and learnability of the interface
  • Helps identify tasks that may be overly complex or time-consuming, and can be used to benchmark performance against industry standards or competitor products
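Completion times are usually right-skewed (a few participants take much longer than the rest), so the median or geometric mean is often a better summary than the plain mean. A sketch with hypothetical timings in seconds:

```python
import statistics

# Summarizing time-on-task data (seconds); one slow outlier included.
times = [34, 41, 38, 52, 45, 39, 120, 44]

mean = statistics.mean(times)            # pulled upward by the outlier
median = statistics.median(times)
geo = statistics.geometric_mean(times)   # less outlier-sensitive alternative

print(f"mean={mean:.1f}s  median={median:.1f}s  geometric mean={geo:.1f}s")
```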

Error rate

  • Measures the frequency and severity of errors participants encounter while completing tasks, providing insights into the error tolerance and recoverability of the interface
  • Helps identify potential usability issues, such as unclear instructions, confusing layouts, or inadequate feedback, and can inform error prevention and handling strategies
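Error rate can be reported as errors per task attempt, optionally with a severity weighting. A minimal sketch; the observations, error categories, and weights are all hypothetical:

```python
# Errors per attempt plus a simple severity-weighted total.
SEVERITY_WEIGHT = {"slip": 1, "recoverable": 2, "blocking": 5}  # assumed scale

# (participant, errors observed during one task attempt)
attempts = [
    ("p1", []),
    ("p2", ["slip"]),
    ("p3", ["recoverable", "slip"]),
    ("p4", []),
    ("p5", ["blocking"]),
]

total_errors = sum(len(errs) for _, errs in attempts)
errors_per_attempt = total_errors / len(attempts)
weighted_total = sum(SEVERITY_WEIGHT[e] for _, errs in attempts for e in errs)

print(f"errors/attempt = {errors_per_attempt:.2f}, weighted severity = {weighted_total}")
```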

User satisfaction ratings

  • Measures participants' subjective perceptions of the usability, usefulness, and overall experience of the product or system, often using standardized questionnaires (System Usability Scale)
  • Provides a quantitative measure of user attitudes and preferences, helping to gauge the emotional response and perceived value of the product
  • Can be used to track changes in user satisfaction over time, compare against benchmarks, or assess the impact of design improvements
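The System Usability Scale mentioned above has a fixed scoring recipe: ten items rated 1-5, odd-numbered (positively worded) items contribute response minus 1, even-numbered (negatively worded) items contribute 5 minus response, and the sum is multiplied by 2.5 to yield a 0-100 score. A sketch with one hypothetical participant's responses:

```python
# System Usability Scale (SUS) scoring, 0-100. Responses are hypothetical.

def sus_score(responses: list[int]) -> float:
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses on a 1-5 scale")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # items 1,3,5,7,9 positively worded
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # -> 85.0
```

A score around 68 is commonly cited as the average benchmark, so scores well above that suggest above-average perceived usability.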

Best practices for usability testing

Defining clear test goals

  • Articulate specific, measurable, and actionable test objectives that align with the product's overall goals and user requirements
  • Prioritize the most critical aspects of the user experience to be evaluated, focusing on areas of potential usability concern or strategic importance

Ensuring representative users

  • Recruit participants who closely match the target user profile in terms of demographics, skills, and experience levels relevant to the product or system
  • Strive for a diverse and inclusive sample that captures a range of perspectives, behaviors, and accessibility needs

Providing realistic test scenarios

  • Develop test scenarios and tasks that reflect real-world user goals, workflows, and context of use, ensuring ecological validity
  • Avoid leading or biasing participants by providing clear instructions and context without overly prescriptive guidance

Maintaining a neutral tone

  • Conduct usability testing with a neutral and objective tone, avoiding leading questions or biasing participant behavior
  • Encourage participants to think aloud and share their honest thoughts and experiences, emphasizing that there are no right or wrong answers

Observing without interference

  • Observe participant behavior and interactions unobtrusively, allowing them to navigate the interface naturally and independently
  • Avoid providing assistance or guidance unless necessary for the participant to proceed, as this may influence their behavior and skew the results

Common usability issues

Navigation problems

  • Unclear or inconsistent navigation labels, hierarchies, or groupings that make it difficult for users to find desired content or features
  • Inadequate visual cues or feedback to indicate the current location or path within the interface (breadcrumbs, highlighted menu items)

Confusing terminology

  • Use of technical jargon, unfamiliar acronyms, or inconsistent language that may be unclear or ambiguous to the target users
  • Lack of clear definitions, tooltips, or contextual help to explain complex or specialized terms

Inconsistent design patterns

  • Inconsistent use of colors, typography, icons, or layout patterns across the interface, leading to confusion and cognitive overhead
  • Deviation from established design conventions or user expectations, such as unconventional placement of common UI elements (search bar, menu)

Accessibility barriers

  • Insufficient color contrast, small font sizes, or cluttered layouts that may be difficult to read or navigate for users with visual impairments
  • Lack of keyboard accessibility, alternative text for images, or proper heading structures that may hinder users relying on assistive technologies (screen readers)
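The color-contrast point can be checked mechanically with the WCAG 2.x contrast-ratio formula (normal body text needs at least 4.5:1 for level AA). A sketch of that calculation:

```python
# WCAG 2.x contrast ratio between two sRGB colors (ranges from 1:1 to 21:1).

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    def linearize(c: int) -> float:
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # black on white -> 21.0
print(contrast_ratio((119, 119, 119), (255, 255, 255)))      # mid grey on white
```

Mid grey (#777777) on white lands just below the 4.5:1 threshold, exactly the kind of subtle failure this check catches.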

Incorporating usability test results

Prioritizing usability issues

  • Assess the severity, frequency, and impact of identified usability issues, considering their effect on user experience, task completion, and overall product goals
  • Prioritize issues based on a structured classification system (critical, high, medium, low) and align with the product roadmap and development resources
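A severity-times-frequency score is one simple way to implement such a classification. A sketch; the findings, scales, and thresholds are all hypothetical, and teams calibrate their own:

```python
# Ranking usability findings by severity (1-4) x frequency (0-1 share of users).
findings = [
    {"issue": "checkout button hidden on small screens", "severity": 4, "frequency": 0.7},
    {"issue": "ambiguous icon in settings", "severity": 2, "frequency": 0.4},
    {"issue": "typo in confirmation email", "severity": 1, "frequency": 0.9},
]

def priority(finding: dict) -> str:
    score = finding["severity"] * finding["frequency"]
    if score >= 2.0:
        return "critical"
    if score >= 1.0:
        return "high"
    if score >= 0.5:
        return "medium"
    return "low"

for f in sorted(findings, key=lambda f: f["severity"] * f["frequency"], reverse=True):
    print(f"[{priority(f)}] {f['issue']}")
```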

Iterative design improvements

  • Develop targeted design solutions and recommendations to address the prioritized usability issues, considering user feedback and best practices
  • Implement changes, starting with the most critical issues and progressively refining the interface based on ongoing user feedback and testing

Retesting after changes

  • Conduct follow-up usability testing sessions after implementing design improvements to validate their effectiveness and identify any new or unintended usability issues
  • Compare usability metrics and user feedback before and after the design changes to measure the impact and success of the improvements
  • Continuously monitor and assess the product's usability over time, making iterative refinements based on evolving user needs and feedback
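The before-and-after comparison can be as simple as a delta report that also knows which direction counts as an improvement for each metric. A sketch with hypothetical baseline and retest numbers:

```python
# Comparing usability metrics before and after a design change.
# HIGHER_IS_BETTER marks the improvement direction for each metric.
HIGHER_IS_BETTER = {"task success (%)": True, "median time (s)": False, "errors/attempt": False}

baseline = {"task success (%)": 68.0, "median time (s)": 74.0, "errors/attempt": 1.4}
retest = {"task success (%)": 84.0, "median time (s)": 58.0, "errors/attempt": 0.6}

def improved(metric: str) -> bool:
    delta = retest[metric] - baseline[metric]
    return delta > 0 if HIGHER_IS_BETTER[metric] else delta < 0

for metric in baseline:
    mark = "improved" if improved(metric) else "not improved"
    print(f"{metric}: {baseline[metric]:g} -> {retest[metric]:g} ({mark})")
```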

Key Terms to Review (29)

A/B Testing: A/B testing is a method of comparing two versions of a webpage, app, or other digital asset to determine which one performs better based on user interactions. This technique helps in making data-driven design decisions by analyzing user behavior and feedback to optimize user experience and improve engagement.
Affordance: Affordance refers to the properties of an object or environment that suggest how it can be used, essentially communicating its functionality to users. This concept is crucial in design as it influences how users interact with a product or system, guiding their actions and decisions through visual cues and usability.
Card Sorting: Card sorting is a user-centered design method used to help organize information by having participants group and label items based on their understanding. This technique is particularly valuable for shaping the structure of websites and applications, enhancing usability, improving navigation, and ensuring that the labeling aligns with users' mental models of the content.
Cognitive Load: Cognitive load refers to the amount of mental effort being used in the working memory. It emphasizes the limitations of human cognitive processing, which can impact how effectively users interact with information and systems. When cognitive load is high, it can hinder usability and learning, affecting how users comprehend and navigate interfaces, as well as how they retain information.
Cognitive Walkthrough: A cognitive walkthrough is a usability inspection method used to evaluate the ease of use of a user interface by simulating a user's problem-solving process. This technique focuses on understanding how users interact with a system, particularly when they are unfamiliar with it, allowing evaluators to identify potential usability issues. By breaking down tasks step-by-step, cognitive walkthroughs help assess whether users can accomplish their goals efficiently and effectively.
Don Norman: Don Norman is a renowned cognitive scientist and usability engineer known for his work on user-centered design and the principles of effective design. His insights emphasize the importance of understanding users' needs and behaviors to create products that are not only functional but also enjoyable and intuitive to use.
Error rate: Error rate refers to the frequency of errors encountered by users while interacting with a system, often expressed as a percentage of total interactions. This measurement is crucial in evaluating the usability and efficiency of a design, helping identify areas needing improvement. A high error rate may indicate design flaws that disrupt user experience, while a low error rate typically signifies a well-designed interface that meets user needs effectively.
Eye Tracking Studies: Eye tracking studies are research methods used to measure eye movement patterns and gaze behavior as individuals interact with visual stimuli, such as websites or applications. These studies provide insights into how users visually process information, helping to identify areas of interest and potential usability issues in design. By analyzing where users look and for how long, designers can optimize interfaces to enhance user experience and engagement.
Formative Testing: Formative testing is a method used to evaluate a product or design during its development phase, allowing for real-time feedback and improvements before the final version is completed. This type of testing focuses on understanding user needs and behaviors to guide design decisions, ensuring that the end product is more user-friendly and effective. By incorporating user input early in the process, formative testing helps identify issues that can be addressed immediately, ultimately enhancing usability and satisfaction.
Heuristic Evaluation: Heuristic evaluation is a usability inspection method where a small group of evaluators examine an interface and judge its compliance with recognized usability principles, known as heuristics. This technique allows for quick identification of usability problems in a design without needing extensive user testing. It connects to various aspects of user experience, such as understanding task flow, measuring design impact, and facilitating design critiques.
In-person testing: In-person testing refers to the process of evaluating a product or system by directly observing users as they interact with it in a controlled environment. This method allows researchers and designers to gather real-time feedback, identify usability issues, and gain insights into user behaviors and preferences. In-person testing is crucial for understanding user experience in a tangible way, enabling improvements based on direct observations.
ISO 9241: ISO 9241 is an international standard that provides guidelines and requirements for the ergonomic design of human-computer interfaces. It aims to improve usability and user experience by addressing factors such as user satisfaction, effectiveness, and efficiency. This standard is essential for ensuring that products are designed with the end-user in mind, which relates closely to usability testing, inclusive design, and various usability testing methods.
Iterative Design: Iterative design is a continuous process of creating, testing, and refining a product based on user feedback and testing outcomes. This approach allows designers to make incremental improvements, ensuring the final product aligns closely with user needs and preferences while adapting to any constraints or challenges that arise throughout development.
Jakob Nielsen: Jakob Nielsen is a prominent usability expert known for his work on user-centered design and web usability principles. He co-founded the Nielsen Norman Group, which focuses on improving user experiences across digital platforms. His principles, particularly the heuristics for interface design, have become foundational in guiding usability testing, navigation design, task analysis, heuristic evaluation, cognitive walkthroughs, and keyboard navigation.
Moderated Testing: Moderated testing is a usability evaluation method where a facilitator guides participants through tasks while observing and interacting with them in real-time. This approach allows for direct feedback, clarification of tasks, and in-depth discussions about user experiences, enhancing the understanding of user behavior and preferences. It combines structured task completion with spontaneous dialogue, offering insights that can inform design improvements.
Nielsen's Heuristics: Nielsen's Heuristics are a set of ten general principles for interaction design that serve as guidelines to improve user experience and usability. These heuristics help identify usability problems in a user interface and are often applied during evaluations, testing, and design processes to enhance the overall effectiveness of a product. They provide a foundation for assessing how well users can interact with software or systems, ensuring they are intuitive and accessible.
Remote usability testing: Remote usability testing is a method of evaluating a product's user experience by observing real users as they interact with it from their own locations, typically through online tools and software. This approach allows researchers to gather insights from a diverse group of users without the constraints of physical space, leading to more inclusive and varied feedback. By leveraging technology, remote usability testing enhances collaboration and enables teams to make informed design decisions based on user interactions.
Summative Testing: Summative testing is a method of evaluating the effectiveness and quality of a product, system, or program at the end of a development cycle. This type of testing focuses on measuring outcomes and overall performance against predefined criteria, often providing insights into how well the design meets user needs and requirements. Summative testing is essential for making final decisions about product release or future improvements.
Surveys: Surveys are systematic methods of gathering information from individuals, often through questionnaires or interviews, to understand their opinions, behaviors, or experiences. They serve as a critical tool for collecting data during various phases of design and research processes, enabling teams to make informed decisions based on user insights.
Task Success Rate: Task success rate is a key metric used to evaluate how effectively users can complete a specific task within a design or system. It is often expressed as the percentage of users who successfully complete the task compared to the total number of users attempting it. This measure connects to various elements, such as how prototypes are tested, how usability is assessed, and how cognitive processes are analyzed during user interactions.
Think-aloud protocol: Think-aloud protocol is a qualitative research method where participants verbalize their thoughts, feelings, and decision-making processes while performing a task. This technique provides insights into user behavior and cognitive processes, helping designers understand how users interact with systems, which is crucial for improving usability and aligning products with user expectations.
Time on Task: Time on task refers to the amount of time a user spends engaged in a specific task or activity within a system or interface. It’s a critical measure in understanding how effectively users can complete tasks, as it indicates both efficiency and usability. Analyzing time on task helps identify areas for improvement in design, whether through usability testing, heuristic evaluation, or remote testing, allowing designers to enhance user experience and performance.
Unmoderated Testing: Unmoderated testing is a usability evaluation method where participants complete tasks independently without real-time guidance or observation from a moderator. This type of testing is typically conducted remotely, allowing users to interact with a product or service in their natural environment, which can lead to more genuine feedback and insights into user behavior. It helps in identifying usability issues without the influence of a facilitator, offering valuable data on user interactions and experiences.
Usability Heuristics: Usability heuristics are general principles or guidelines used to evaluate and improve the usability of a product or interface. They serve as rules of thumb that help designers identify potential usability issues and ensure that the user experience is intuitive and efficient. These heuristics are critical during various phases of design, from initial testing to evaluating user interactions, and play a key role in understanding user constraints and mental models.
Usability Testing: Usability testing is a method used to evaluate a product or service by testing it with real users to see how easily they can interact with it. This approach helps identify any usability issues, understand user behavior, and gather feedback to improve the design, ensuring that the final product meets user needs effectively.
User Experience (UX): User experience (UX) refers to the overall satisfaction and interaction a user has with a product, system, or service, particularly in the digital space. It encompasses every aspect of the user's interaction, including usability, accessibility, and the emotional response elicited during use. Creating a positive UX involves understanding users' needs and behaviors, leading to designs that provide meaningful and relevant experiences. This focus on users directly influences processes such as usability testing, the creation of wireframes, and the development of interactive prototypes.
User interviews: User interviews are a qualitative research method used to gather insights and feedback directly from users about their experiences, needs, and motivations. This method is crucial for understanding user perspectives and helps inform design decisions during various phases of product development, such as understanding user needs during the empathize phase, validating design choices through usability testing, identifying emerging preferences in design trends, and revealing users' mental models that shape their interactions with a product.
User Satisfaction Ratings: User satisfaction ratings are metrics used to gauge how pleased users are with a product or service. These ratings help in understanding user experiences and identifying areas for improvement, making them essential for enhancing overall usability and effectiveness. By collecting feedback through surveys or direct observation during usability testing, designers can make informed decisions that lead to better user-centered designs.
User-Centered Design: User-centered design is an approach that prioritizes the needs, preferences, and limitations of end-users at every stage of the design process. This methodology emphasizes understanding user behaviors and experiences to create products that are both effective and enjoyable to use.
© 2024 Fiveable Inc. All rights reserved.