Evaluative Research of Conference Management Software

As part of a team of five (a mix of researchers and designers), I conducted a heuristic review and ran several usability tests to assess task completion success and failure rates for high-priority user tasks.

Note: the company name has been redacted to comply with a non-disclosure agreement.

Where do you start with a client who has never done UX testing before?

Make sure you understand their goals

Our client had two business goals they wanted to address through our research:

  1. Reduce the high volume of support requests they were receiving

  2. Implement feedback from real users immediately in order to make their product more usable

At first, they were hesitant to put us in touch with anyone from their current user base. This posed a big challenge: the software was highly specialized, and recruiting participants who closely matched the target audience would have been difficult under a tight timeline with no incentive offered.

Gain their trust

We knew that in order to do our work effectively, we needed to put the client at ease and gain their trust so that we could work with some of their real users.

As a first step, we completed a heuristic review using Nielsen’s 10 Usability Heuristics. The review highlighted several areas that were working well and identified a series of opportunities to bring the current interface more in line with UX best practices.

Through the process of the heuristic evaluation, and presenting the findings to stakeholders, the client became more comfortable working with us and opened the door to further collaboration. Laying the groundwork for effective collaboration and conversation paid dividends when we moved on to the interview and testing phase of the project.

Process

Heuristic Evaluation

As a team of five, we had strength in numbers. We each conducted our own independent review before coming together to discuss our findings.

While we identified many of the same errors and issues, each person brought something new to the table that we incorporated into the final report, and our different perspectives meant each of us caught issues another person had missed. As a group, we identified 185 separate heuristic violations.

Summative Testing

Following our heuristic evaluation, we conducted all summative testing over Zoom. Each test took approximately one hour and included four sample tasks for us to walk through with the user. We randomized the task order for each participant.
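As a side note, randomizing task order is simple to script ahead of each session. The sketch below (Python, with hypothetical task names rather than the client's actual tasks) seeds the shuffle per participant so the order can be reproduced later; it is an illustration, not the tooling we actually used.

```python
import random

# Hypothetical task names; the real study used four tasks specific to the client's platform.
tasks = ["Task A", "Task B", "Task C", "Task D"]

def task_order_for(participant_id: str) -> list[str]:
    """Return a shuffled copy of the task list, seeded per participant for reproducibility."""
    rng = random.Random(participant_id)
    order = tasks.copy()
    rng.shuffle(order)
    return order

print(task_order_for("P1"))  # e.g. ['Task C', 'Task A', 'Task D', 'Task B']
```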

In addition to consulting Nielsen’s Heuristics, we incorporated Dumas & Redish’s Severity Scale. This scale allowed us to be more specific with our insights and recommendations.

Instead of simply saying, “this feature is not usable,” we could state, “none of our participants were able to successfully complete this task in the time allotted. Since task completion was blocked, this received a rating of 1 (the most severe rating).”

In addition to objective scoring of whether a task was completed, we also wanted insight into user perception of each task. To capture this, we asked the Single Ease Question (SEQ) following each task. Somewhat surprisingly, even participants who were not able to complete a task often rated it as easy. The average rating was 5.2, with 7 being the easiest possible rating.
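To make the relationship between the two measures concrete, here is a minimal sketch (in Python, using entirely hypothetical results rather than our actual study data) of how per-task completion and SEQ ratings can be tallied side by side; comparing the two numbers is what surfaces the kind of mismatch we observed between objective success and perceived ease.

```python
from statistics import mean

# Hypothetical per-participant results for a single task (not the real study data).
# "completed" is the objective measure; "seq" is the 1-7 rating, 7 = easiest.
results = [
    {"participant": "P1", "completed": True,  "seq": 6},
    {"participant": "P2", "completed": False, "seq": 5},
    {"participant": "P3", "completed": True,  "seq": 4},
    {"participant": "P4", "completed": False, "seq": 6},
]

completion_rate = sum(r["completed"] for r in results) / len(results)
avg_seq = mean(r["seq"] for r in results)

print(f"Completion rate: {completion_rate:.0%}")  # objective task success
print(f"Average SEQ:     {avg_seq:.1f} / 7")      # perceived ease
```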

Key Findings

This was an incredibly rich interface designed for users with expert niche knowledge. The software was primarily used by event planning professionals working for medical, scientific, or higher education institutions.

Given the complex and detailed interface, we had a wealth of feedback to offer, both from our own team review and from insights synthesized from testing participants.

Know your audience: we were selective about which recommendations we highlighted in our final presentation.

Our findings touched on four main themes:

  1. Layout and Design

  2. Information Architecture and Wayfinding

  3. System Feedback

  4. Functionality

We had ten total recommendations, but chose to present the five that would yield the highest impact.

Our stakeholders included C-level leaders within the company. We tailored our presentation to this audience and dove into the most egregious areas for improvement. For stakeholders who wanted more detail, we included the full report and list of recommendations in our appendix and made it clear that we were available for follow-up questions.

Our Five Key Findings

  1. The perceived action of buttons and clickable features did not always match the participant’s desired action.

  2. Overwhelming information density within the platform led to cognitive overload.

  3. Participants were not clear where they were in the system or how to move quickly back and forth between pages.

  4. System warnings were not always clear or present when needed.

  5. Participants wanted more guidance for completing tasks.

Uncovering a double-edged sword: feature customization available to all users.

One of the platform’s main selling points was its ability to create custom features that a specific group or organization needed for their conference management. Perhaps they needed special speaker metadata, or a unique scheduling tool. In practice, however, this created chaos. Once a customization was made, it was often visible as an option to ALL users, not just to the user who had initiated the change. This presented an alarming number of choices to users, making it a challenge to navigate the system and stay focused on specific tasks.

Creating Actionable Next Steps Informed by User Feedback

We advised changing terminology and color in several areas across the application.

We recommended updating the system styling to remove red as a button color, especially when it was also used to signify ‘closed’ and ‘incomplete’ in other system controls.

We strongly encouraged using specific terminology on CTA buttons and navigation areas.

For example, instead of labeling a button “Download”, change the label to read “Run Search” to describe the task that is being accomplished.

To combat information density, we recommended:

  • Removing redundant links

  • Creating a few short self-paced video tutorials that would answer users’ most common support questions and walk them through the dense UI.
