Envisioning the Future


Our team looked at insights and breakdowns to understand potential areas for exploration.

We reviewed our design opportunities and challenged assumptions. 

ROUND ROBIN IDEATION

With the research still fresh in our minds, we each picked a unique breakdown and wrote the beginning of a solution on a piece of paper. We rotated our solutions every two minutes, allowing a different team member to build off of previous ideas.
YES-AND STORYTELLING

As a group, we picked a user, scenario, and problem. We went around the room narrating a story about how a technician might attempt to solve an issue when he's missing parts for his current operational step.
ASSUMPTIONS REVERSAL

We envisioned an alternate world where technicians had all the power in the world and could fix all their issues without supervision. We ideated on what those solutions might look like.

We storyboarded over 100 concepts.

We generated ideas through storyboards,
and then clustered our ideas into high-level groups.


(Here's a link to the Prezi document.)

We then narrowed these down to three main ideas:

SUPPORT LIVE COLLABORATION


How might we facilitate digital communication between technicians and engineers during an alteration? 


REDUCE COGNITIVE LOAD


How might we provide technicians with a seamless workflow that enhances productivity?


TRANSFER PAST EXPERIENCES


How might we help technicians leverage past experiences?  


Our co-design feedback helped guide us to our final idea.

We kicked off our summer at NASA Ames with a presentation and a co-design session with our clients, using their observations as a proxy for our users.

 

FINDINGS
  • NASA employees use historical data as a source of authority (e.g., "It was okay last time" is considered a valid argument)

  • Be smart about how you capture data; careless capture can easily lead to data bloat

  • Recording structured data is easier to manage and reference, but less flexible to user needs

To help guide our development, we focused on testing specific product metrics.

We wanted to ensure that relevant information was being provided, that the information carried the signals technicians needed to judge whether they could trust it, and that the information could be acted on.

Gathering qualitative data during prototype testing is fairly easy, but gathering quantitative data can be much more difficult. Google's HEART framework, its Goals-Signals-Metrics framework, and NASA's Task Load Index provide some hints toward creating quantifiable metrics. Our high-level goal was to "beat the Rolodex-style, old-school way of transferring knowledge by talking to knowledgeable people." We broke that goal down into the following questions (signals) and their enabling metrics:

 

TASK COMPLETION



Is it fast and easy to find relevant info?

USER RATINGS



Does the product provide the trust necessary to act on the information?

USER FEEDBACK



Do the media formats support actionable information?
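The goal, signals, and metrics above can be sketched as a simple data structure. This is purely illustrative; the names and wording are paraphrased from this writeup, not from any actual analytics or tracking setup:

```python
# Illustrative sketch of the Goals-Signals-Metrics breakdown described above.
# All names are hypothetical; this is not a real instrumentation config.
gsm = {
    "goal": "Beat the Rolodex-style way of transferring knowledge",
    "signals": [
        {
            "question": "Is it fast and easy to find relevant info?",
            "metric": "task completion (time on task, success rate)",
        },
        {
            "question": "Does the product provide the trust necessary to act?",
            "metric": "user ratings of trustworthiness",
        },
        {
            "question": "Do the media formats support actionable information?",
            "metric": "qualitative user feedback",
        },
    ],
}


def summarize(framework):
    """Return one 'signal -> metric' line per signal, for a quick overview."""
    return [f"{s['question']} -> {s['metric']}" for s in framework["signals"]]


for line in summarize(gsm):
    print(line)
```

Keeping the goal-to-metric mapping explicit like this makes it easy to check, for each usability session, that every signal has at least one measurable proxy.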

We conducted usability tests to refine our prototype's interface and form factor.

Iteration 1: Reddit/Quora Inspired

Our first iteration was an InVision prototype: a web-based application focused on surfacing tribal knowledge before work execution. We aimed to understand how effectively the UI's affordances supported the task of gathering supporting information before beginning work. Additionally, we wanted to test what effectively signals trustworthiness in the information provided, thus increasing the likelihood that the information would be used.

The methods used during this phase included Task-Based Think-Aloud Usability Testing with Probing and interviews. During the session, we gave users a scenario in which they were technicians who had to solder a specific circuit board but needed to find more information about the task before starting. We had testers explore the homepage, then find more information about their tasks using the interface. At the end of the session, we discussed what stood out in their experience, and many testers voiced opinions on how to improve it.

FINDINGS
  • All users wanted to look at the WAD instructions while using our prototype

  • “NASA has a Rolodex culture”

  • Years can be a positive and negative signal for trustworthiness of information

  • Users wanted a truncated version of the information for later use

Iteration 2 & 3: Rap Genius Inspired 

Our next iteration was a mobile interface for surfacing tribal knowledge before work execution through embedding annotations within the WAD that technicians are already instructed to read before doing work. We aimed to understand if our UI could make it faster and easier for users to find relevant supplemental information than the current process of asking a buddy. We also wanted to see how best to foster trust for user-generated annotations and how to connect analogous info in a meaningful way.

The methods used during this phase included: Task-Based Think-Aloud Usability Testing with Probing and Modified Speed Dating. During the session, users identified which operations were assigned to them and looked for supplemental information within an assigned operation through browsing or search. After the last screen in the last task was encountered, users ranked three versions of that screen for trustworthiness and explained their rationale for their ranking.

FINDINGS
  • Users still wanted to talk to superiors for more information on the WAD

  • Not being clear about what our UI elements mean decreases trustworthiness, and thus actionability. What do 10 thumbs-ups mean? That people approve of this, have done this, or think it is a good idea?

  • Using annotations in a document that is already integrated into a technician’s workflow seems to be an effective way to surface contextually relevant technician-sourced information

  • Not all users noticed the difference between clicking on the underlined text and highlighted text. There might be more effective ways to link analogous information.

Iteration 4: Tablet Genius

The last iteration we tested was a responsive web interface viewed on a tablet for surfacing tribal knowledge through embedding annotations within the WAD before or during the pre-task briefing that occurs before executing a WAD. We aimed to understand if our newer UI improved the utility of technician-sourced supplemental information and if our newer trust signals improved feelings of trust and actionability.

The methods used during this phase included Task-Based Think-Aloud Usability Testing with Probing. During the session, users identified which operations were assigned to them, looked for supplemental information within an assigned operation, and marked that supplemental information for review in the pre-task briefing meeting. At the end of the session, we discussed what stood out in their experience, and many testers voiced opinions on how to improve it.

FINDINGS
  • UI elements intended to establish trust, like upvotes, require clear meanings. Unclear signal meaning made users less likely to trust them and take action

  • Users couldn’t locate the “Mark for briefing” function and were unclear about its purpose

  • Color coding for supplemental information on the overview page was mistaken for Google Docs-style viewer indicators

  • Styled call-outs in the WAD were mistaken for technician-supplied annotations