How might we facilitate digital communication between technicians and engineers during an alteration?
How might we provide technicians with a seamless workflow that enhances productivity?
How might we help technicians leverage past experiences?
Gathering qualitative data during prototype testing is fairly easy, but gathering quantitative data can be much more difficult. Google’s HEART framework, its companion Goals-Signals-Metrics process, and NASA’s Task Load Index offer useful guidance for creating quantifiable metrics. Our high-level goal is to “beat the Rolodex-style, old-school way of transferring knowledge by talking to knowledgeable people.” We broke that goal down into the following signals, framed as questions, each with its own enabling metrics:
Is it fast and easy to find relevant info?
Does the product provide the trust necessary to act on the information?
Do the media formats support actionable information?
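To make the "quantifiable metrics" idea concrete, the sketch below computes a weighted NASA Task Load Index score, one way to turn subjective workload into a number comparable across prototype iterations. The six subscales and the 15-pairwise-comparison weighting scheme are standard NASA-TLX; the function name and all sample values are illustrative, not data from our sessions.

```python
# Illustrative sketch: weighted NASA-TLX workload score.
# Subscale names and weighting scheme are standard NASA-TLX;
# the ratings and weights below are made-up sample values.

def weighted_tlx(ratings, weights):
    """Overall workload = weighted average of 0-100 subscale ratings.

    Weights come from 15 pairwise comparisons, so they sum to 15.
    """
    assert set(ratings) == set(weights)
    assert sum(weights.values()) == 15
    return sum(ratings[d] * weights[d] for d in ratings) / 15

sample_ratings = {  # 0-100; higher means more workload
    "Mental Demand": 70, "Physical Demand": 20, "Temporal Demand": 55,
    "Performance": 30, "Effort": 60, "Frustration": 40,
}
sample_weights = {  # tallies from the pairwise comparisons; sum to 15
    "Mental Demand": 5, "Physical Demand": 1, "Temporal Demand": 3,
    "Performance": 2, "Effort": 3, "Frustration": 1,
}
print(round(weighted_tlx(sample_ratings, sample_weights), 1))  # → 54.3
```

A lower score on the same task across iterations is one signal that the workflow is getting easier, complementing the qualitative think-aloud findings.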
Our first iteration was an InVision prototype, a web-based application focused on surfacing tribal knowledge before work execution. We aimed to understand how effectively the UI’s affordances support gathering supplemental information before beginning a task. Additionally, we wanted to test what effectively signals trustworthiness in the information provided, increasing the likelihood that technicians act on it.
The methods used during this phase included Task-Based Think-Aloud Usability Testing with Probing, plus interviews. During the session, we gave users a scenario in which they were technicians who had to solder a specific circuit board but needed to find more information about the task before starting. We had testers explore the homepage and then find more information about their tasks using the interface. At the end of the session, we discussed what stood out in their experience, and many testers offered suggestions for improvement.
Our next iteration was a mobile interface for surfacing tribal knowledge before work execution by embedding annotations within the WAD that technicians are already instructed to read before doing work. We aimed to understand whether our UI could make finding relevant supplemental information faster and easier than the current practice of asking a buddy. We also wanted to learn how best to foster trust in user-generated annotations and how to connect analogous information in a meaningful way.
The methods used during this phase included Task-Based Think-Aloud Usability Testing with Probing and Modified Speed Dating. During the session, users identified which operations were assigned to them and looked for supplemental information within an assigned operation through browsing or search. After reaching the final screen of the last task, users ranked three versions of that screen for trustworthiness and explained the rationale behind their rankings.
The last iteration we tested was a responsive web interface, viewed on a tablet, for surfacing tribal knowledge by embedding annotations within the WAD, for use before or during the pre-task briefing that precedes executing a WAD. We aimed to understand whether our newer UI improved the utility of technician-sourced supplemental information and whether our newer trust signals improved feelings of trust and actionability.
The methods used during this phase included Task-Based Think-Aloud Usability Testing with Probing. During the session, users identified which operations were assigned to them, looked for supplemental information within an assigned operation, and marked that supplemental information for review in the pre-task briefing meeting. At the end of the session, we discussed what stood out in their experience, and many testers offered suggestions for improvement.