Background

ASI distributors rely on our platform to generate quotes and place orders for products they present to clients for events. These distributors dedicate significant time to configuring products, handling multiple customer orders simultaneously, and overseeing product deliveries. Our dedicated management tool streamlines the entire order process, empowering distributors to efficiently manage their operations from start to finish. Our goal was to reassess the order management process by identifying pain points and enhancing efficiency and usability, ensuring a faster and more seamless ordering experience for our users.

Challenge

We aimed to refine our designs through user testing, but time constraints made it challenging to involve our distributors directly. To ensure meaningful feedback, we needed test participants who closely resembled our distributor user base. Screening users on UserTesting.com was a crucial step in our study, allowing us to gather insights that aligned with the perspectives our distributors would provide. This process extended the timeline but was necessary to obtain relevant data.

My Role and Responsibility

Methodology

The goal of our study was to identify which distributor workflows to improve for the highest impact on order performance. We outlined the following research objectives to guide us:

  1. Identify key workflows that distributors complete within the order management system.

  2. Establish a baseline by evaluating the current user experience for each workflow.

  3. Refine the design through iterative testing and compare results to measure improvements.

Before planning our usability testing, I conducted a heuristic evaluation of the order management system within our platform, using Jakob Nielsen’s 10 usability heuristics as a framework. We chose heuristic evaluation to identify the areas of the ordering process with the highest number of usability violations, and I used the findings to shape the tasks for the quantitative usability studies. Our designers also used the evaluation to identify and fix quick-win issues in the current experience ahead of the first improvement test. We analyzed the full process of creating and completing an order, as a distributor would, to understand the holistic experience of our management system. The heuristics we used are listed below:

  1. Visibility of System Status

  2. Match Between the System and the Real World

  3. User Control and Freedom

  4. Consistency and Standards

  5. Error Prevention

  6. Recognition Rather than Recall

  7. Flexibility and Efficiency of Use

  8. Aesthetic and Minimalist Design

  9. Help Users Recognize, Diagnose, and Recover from Errors

  10. Help and Documentation

Impact of Heuristic Evaluation

  • Identified four order processes with the most severe violations to test

  • Our designers updated 25 navigational elements ahead of our benchmark test

  • Results confirmed the distributor pain points our support team had already reported to us

  • Recommended a new workflow that would resolve many of the violations

Participant Screening

Our primary challenge was determining how to conduct quantitative usability testing quickly and efficiently. While we initially aimed to test with our actual user base, time constraints required us to source participants externally. We opted to use UserTesting’s participant pool and developed a detailed screener to closely match our existing distributor user base. The screener included multiple questions assessing participants’ job titles, professional responsibilities, familiarity with competitors, experience selling branded products or services, and knowledge of the promotional products industry. Participants had to select “Helping clients who want to market their brands on various products or materials,” since that is the core purpose of a distributor at ASI. Based on these criteria, we screened over 300 candidates and successfully recruited 150 qualified participants, allowing us to run faster testing iterations with a representative sample.
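As a rough illustration of how that qualifying criterion might be applied, here is a minimal sketch; the candidate records and field names are hypothetical, not UserTesting’s actual screener data.

```python
# The answer a candidate had to choose to qualify as a distributor-like participant.
QUALIFYING_ANSWER = (
    "Helping clients who want to market their brands on various products or materials"
)

# Hypothetical candidate responses; the real screener also covered job title,
# competitor familiarity, and promotional-products industry knowledge.
candidates = [
    {"id": "c-001", "primary_responsibility": QUALIFYING_ANSWER},
    {"id": "c-002", "primary_responsibility": "Managing internal IT support tickets"},
]

# Only candidates who chose the qualifying answer advance to the study.
qualified = [c for c in candidates if c["primary_responsibility"] == QUALIFYING_ANSWER]
print([c["id"] for c in qualified])  # -> ['c-001']
```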

Quantitative Usability Testing

Baseline tests were conducted on UserTesting.com using a Figma prototype. We selected four key order management processes—common distributor tasks with the most severe heuristic violations. Each process was tested individually with 10 participants per process, totaling 40 participants in the baseline test. To ensure fresh perspectives, we used different participants for each round of testing.

Each test included multiple tasks designed to be efficient and unbiased, ensuring participants could navigate the system naturally without being guided toward a specific outcome.

For each task, we recorded four key measurements:

  • First Click – Did the participant successfully click on the correct location first?

  • Success – Did the participant successfully complete the task?

  • Time on Task – How long did it take to complete the task?

  • Perceived Ease Score – On a scale of 1 (Very Difficult) to 5 (Very Easy), how would the participant rate this task?

These metrics were selected to capture both the participant's experience and performance, providing a clear picture of the usability of our order management platform. For each test, we calculated the average of each of the four measurements per task and for the test overall, then recorded them in an Excel document for later comparison. For the purpose of this case study, we will review how we tested the process of sending a purchase order.
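To make the aggregation step concrete, here is a minimal sketch of how the per-task averages could be computed; the field names and sample values are hypothetical and do not reflect our actual spreadsheet.

```python
from statistics import mean

# Hypothetical results for one task; each dict is one participant's session.
# first_click and success are recorded as 1 (yes) or 0 (no),
# time_on_task is in seconds, ease is the 1-5 perceived ease rating.
sessions = [
    {"first_click": 1, "success": 1, "time_on_task": 104, "ease": 4},
    {"first_click": 0, "success": 1, "time_on_task": 131, "ease": 3},
    {"first_click": 0, "success": 0, "time_on_task": 162, "ease": 3},
]

def summarize(sessions):
    """Average each of the four measurements across all participants for a task."""
    return {
        "first_click_rate": mean(s["first_click"] for s in sessions),
        "success_rate": mean(s["success"] for s in sessions),
        "avg_time_on_task_s": mean(s["time_on_task"] for s in sessions),
        "avg_ease_score": mean(s["ease"] for s in sessions),
    }

print(summarize(sessions))
```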


Benchmark test results:

  • First Click: 45% successfully clicked the correct location first

  • Success: 75% of participants successfully completed the task

  • Time on Task: 2 minutes

  • Perceived Ease Score: Participants rated the task’s ease 3.5 / 5


Our benchmark usability test revealed that participants were initially unsure where to begin when attempting to send a purchase order. Even after locating the starting point, many struggled to complete the subsequent steps, encountering significant usability challenges. In response, our designer spent two weeks refining the interface based on participant feedback. To measure progress consistently, we used the same tasks and metrics in each round of testing, evaluating 10 new participants per iteration via UserTesting.com.

Improvement test 1 results:

  • First Click: 75% successfully clicked the correct location first

  • Success: 100% of participants successfully completed the task

  • Time on Task: 57 seconds

  • Perceived Ease Score: Participants rated the task’s ease 4.5 / 5

The second round of testing showed a 25-percentage-point improvement in task completion, indicating clearer user guidance. First-click accuracy also improved by 30 percentage points, though it remained below our target threshold. Based on these insights, our designer further refined the process flow, and we conducted a third round of testing with an additional 40 participants.

Improvement test 2 results:

  • First Click: 90% successfully clicked the correct location first

  • Success: 95% of participants successfully completed the task

  • Time on Task: 1 minute 35 seconds

  • Perceived Ease Score: Participants rated the task’s ease 4.4 / 5

In the third round of testing, we observed an improvement in first-click accuracy, with a slight decrease in overall task success rate. The most notable change was a 37-second increase in average time on task. Given our time constraints, we prioritized improvements in first-click accuracy and success rate, deferring optimization of task efficiency for a future iteration. Across all three rounds—including the baseline and two iterations—we tested with a total of 120 participants, which directly informed our final design and development decisions.
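For illustration, here is a minimal sketch of how results for the purchase-order task could be compared across rounds; the numbers come from the results above, and the structure is hypothetical rather than our actual comparison spreadsheet.

```python
# Purchase-order task results per round, taken from the sections above.
rounds = {
    "baseline":      {"first_click": 0.45, "success": 0.75, "ease": 3.5},
    "improvement_1": {"first_click": 0.75, "success": 1.00, "ease": 4.5},
    "improvement_2": {"first_click": 0.90, "success": 0.95, "ease": 4.4},
}

def delta_pp(metric, start, end):
    """Change in a rate metric between two rounds, in percentage points."""
    return round((rounds[end][metric] - rounds[start][metric]) * 100)

print(delta_pp("success", "baseline", "improvement_1"))           # -> 25
print(delta_pp("first_click", "baseline", "improvement_1"))       # -> 30
print(delta_pp("first_click", "improvement_1", "improvement_2"))  # -> 15
```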

Impact of Analysis

  • All metrics for each process were above our 90% acceptance threshold

  • Reduced the number of negative support tickets about the ordering process

  • Stakeholders were satisfied with the improved metrics and wanted to test with our own users

  • Our project manager reallocated roadmap resources to focus on delivering the improved experience


Reflection and Takeaways

Iterative quantitative usability testing allowed us to identify issues in the current experience and guided improvements in each subsequent test. However, the testing could have been more robust with a larger pool of participants per iteration; due to time and budget constraints, we were limited to 10 participants per test. Another area for improvement was incorporating our own users into the testing process. Since they spend most of their workday on our platform, their feedback would have been more relevant and insightful. As we move forward with implementing our design, we plan to conduct usability testing with our actual users to gather final feedback.
