
PROBLEM BACKGROUND AND RESEARCH

Past Research

When I joined Lutron, the team had already completed research such as a heuristic evaluation of the GUI, user interviews, Jobs To Be Done, an as-is journey, and an object view map. I started by looking through all of this research to understand the pain points and the target user. My role was to get familiar with the past research, create wireframes based on the …, conduct user testing, and iterate.

 

I specifically owned the “Create a Project” flow, where users create their first project step by step before they start designing. I was also involved in the “Add an Area” flow, which is the starting point of users’ design process.

Analysis

Key Findings

After conducting interviews, we created an affinity map to analyze our findings, which led us to 3 main insights.

 01.

Uncertainty Surrounding Food

Users were uncertain about some ingredients, nutritional values, and potential consequences, regardless of their educational background.

“I don’t understand the meanings of many items in the ingredient lists.”

02.

Health Choices are Self-Motivated

Users were self-motivated, and their motivation was influenced by genetic factors, documentaries, and the desire for an active, healthy lifestyle.

“I just want to eat healthy and not deal with health issues in the future.”

03.

Different Definitions of Health

Users had different definitions of a healthy lifestyle. Certain products were considered unhealthy by some, while others were okay with them.

“For me, cereals are unhealthy no matter what; I always prefer oats.”

Low-Fidelity Usability Testing

Before moving forward to higher fidelity prototypes, we wanted to validate our design with user testing. We conducted task-based usability tests with 5 users. Based on the usability test results, we found 5 major insights.

01.

There is a lack of visual feedback.

02.

Users couldn't find the shopping list.

03.

The recipe page was overwhelming.

04.

Users liked the minimalistic design.

05.

Removing products from the list was hard.

USABILITY TESTING

 Usability Testing of the GUI

We wanted to evaluate the Neurorobot GUI’s usability in an environment with minimal teacher guidance. We conducted usability testing with 5 participants from non-STEM backgrounds to approximate the target user.

 

Users were introduced to a slide deck teaching the fundamentals of neuroscience required to complete the usability tasks. We also asked users to complete a 10-question System Usability Scale at the end of the test. This served as a benchmark for future design solutions.

Research Questions 

  • What user flow patterns lead to confusion?

  • Can users explain how each interaction with the interface contributes to the larger task?

  • Can users understand the robot’s inputs and outputs?

  • Can users understand the brain’s UI structure?

WellNourish

Encouraging Healthy Grocery Shopping by Providing Accessible Information

Team

Sara Augioli

Bahar Shahmammadova

Timeline

8 weeks

Tools Used

Adobe XD

My Role

Project Lead

Lutron

Transforming desktop experience into mobile for less advanced users

Team

2 Interaction Designers

Development Team 

UX Consultant

Bahar Shahmammadova

Timeline

10 weeks

Skills

Interaction Design

UX Design

Prototyping

UX Research

My Role

UX Co-Op

PROBLEM SPACE

Success Benchmark

Users can complete a neuroscience module during a regular class time block, and demonstrate that they understand the learning objectives.

Stakeholder Interview

After using the GUI, we conducted interviews with Backyard Brains staff members and a high-school teacher who had experience using the software.

 

The teacher requires one full class period just to demo the robot and explain the GUI before students can start designing their own robotic brains.

It takes 55 minutes just to demo the robot.

ANALYSIS

Key Findings

"Create a Project"

 01.

The process is clear and provides the right amount of information.

 02.

Installer contact information cards are useful.

 03.

Sharing phone numbers is necessary for installers.

 04.

Users typically share one contact, but it is possible to share more than one.

 05.

Some users are not familiar with certain terms used on the screen.

"Add an Area"

 01.

5/6 interviewees prefer starting with an empty list with some kind of recommendations.

 02.

Users don’t want to create more than 3 layers of areas.

 03.

The “add sub-area” icon is not clear to users.

 04.

Installers name areas very differently. 

 05.

Jumpstart doesn’t give enough context for what is being added.

Backyard Brains

Making computational neuroscience accessible and engaging to students.

Team

Emma West

Fan Xu

Milly Dai

Tianchi Fu

My Role

UX Design and Research

Timeline

12 Weeks

Skills

UX Research

UX Design

Interaction Design

Prototyping


PROBLEM

There is a low adoption rate for the new Lutron system due to the barrier to entry for entry-level users.

Entry-level installers are not comfortable using a complicated GUI (graphical user interface), and this affects adoption of the new Lutron system. Previous research showed that the GUI can be intimidating for these users.

How might we grow the new system by decreasing the barrier to entry for less advanced installers?

SOLUTION

A mobile experience that simplifies steps of the commissioning process for entry-level installers.

Design-As-You-Go simplifies the commissioning process of the new system into a few easy steps with a mobile application, guiding users through their first commissioning experience without the need to use the GUI.


RESEARCH

Understanding Neurorobot and Its GUI

The project required an understanding of neuroscience principles and the goals of each lab. Terminology in the GUI was not standardized, and as the neuron connections became more complex, the interface became harder to navigate.

The curriculum consists of several labs, each meant for one class period, but it took our tech-savvy team 5 hours to complete a single lab.

Key Features

Simple Onboarding

Students can easily access saved brains from labs or quickly create new brains with descriptions. 

Flexible User Flow

The improved user flow allows flexibility and increases the visibility of the system. 

Intuitive Layers Panel

The layers panel allows the editing of neurons or axons, and organizing neural networks for complicated brain designs.

Improved Layout

The new Play Mode allows students to toggle between graphs and maximizes the observation space. 

Wireframes

Based on the research, the user flow, and the top ideas from ideation, we started wireframing and exploring different ways to redesign the Neurorobot GUI.

Major Design Changes

Onboarding

A visual guide and descriptions for saved brains eliminate the confusion between creating and choosing a brain.

Toolbar

Confusing buttons were replaced with an intuitive toolbar to streamline students’ workflow. Icons are easy to recognize and use.

Editing Axons and Neurons

The redesigned way to edit axons helps users recover from mistakes quickly and easily, and modify axons without taking up much screen real estate.

Combined Graphs

Graphs are combined into one area, letting users focus on one graph at a time. The redesigned graphs are also less confusing and less overwhelming.

Button Hierarchy

We separated buttons by importance: the dopamine button now sits under the graph it belongs to, alongside an intuitive play button and a menu that better communicates function.

Single Ease Question (SEQ) Analysis

We asked users to rate the difficulty of each task from 1 to 7, 1 being very difficult and 7 being very easy. To analyze this data, we listed each user’s responses, computed an average score for each task, and computed an overall average to understand how difficult users found the usability test. The same questions were repeated in our second usability test to evaluate our design against the old GUI.

ANALYSIS

Data Synthesis

We prioritized the usability test results by listing every insight we received from users in a spreadsheet and, next to each insight, recording the frequency of the issue, its impact, and the criticality of the task. Multiplying the three variables gave each insight a severity score.

 

Severity = Task criticality x Impact x Frequency

 

The severity score was used to identify the most important issues users face while using the current Neurorobot GUI. It also set a benchmark for our future design and usability tests.
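To illustrate, the prioritization above can be sketched in a few lines of code. The insight names and the 1–5 ratings here are hypothetical placeholders, not our actual study data:

```python
# Severity = task criticality x impact x frequency, each rated 1-5.
insights = [
    # (insight, criticality, impact, frequency) -- hypothetical values
    ("No visual feedback after saving a brain", 5, 4, 5),
    ("Graph labels use system-oriented terms", 3, 3, 4),
    ("Undo is missing when editing axons", 4, 5, 3),
]

# Compute the severity score for each insight.
scored = [(name, crit * impact * freq) for name, crit, impact, freq in insights]

# Sort so the most severe issues come first.
scored.sort(key=lambda pair: pair[1], reverse=True)

for name, severity in scored:
    print(f"{severity:3d}  {name}")
```

Ranking by the product of the three factors surfaces issues that are both common and consequential, rather than merely frequent.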

Heuristic Evaluation 

We followed Nielsen’s 10 Usability Heuristics to evaluate the current Neurorobot GUI. Each team member went through the 10 heuristics independently, and then we met as a team to assess the top usability issues.

 

Our team also created a presentation with key findings from the heuristic evaluation so that Backyard Brains can apply these principles when developing new products.

 01.

Unclear System Status

Users are often not informed about the system status.

02.

Lack of User Control

There are no “undo” or “unselect” functions in the system.

03.

No Error Messages

The system doesn’t help users discover and recover from errors.

04.

Confusing Terminology

The system uses system-oriented terminology, making the interaction feel unnatural.

Main Challenges

The previous design was complex and required teachers to learn how to use the interface. Students spent substantial time understanding and learning the interface before they could start learning neuroscience.

Unforgiving

It was easy for students to make mistakes while using the interface, and once made, mistakes were hard to undo.

Lack of Organization

There was no way to meaningfully organize the increasingly complex neuron network designs.

Limited Feedback

Limited visual feedback confuses students and leaves users guessing whether their inputs were registered.

 01.

Lack of User Feedback

There is a lack of user feedback, and some system feedback is confusing to users.

02.

Hard to Recover from Mistakes

It is easy for users to make mistakes in the process but very hard to recover from mistakes.

03.

Actions are not Editable

Users' actions are not editable, which is frustrating for users and unnecessarily prolongs the time it takes to finish tasks. 

04.

Unnecessary Steps

The system requires users to take more steps than necessary to complete tasks.

05.

Reliance on Recall

The system relies on recall rather than recognition.

Key Findings

After our research and data synthesis, we asked new questions that guided our brainstorming sessions. These “How might we” statements were overarching summaries of the many smaller questions we were trying to solve.

Design Sprint with Stakeholders

With all of our findings in mind, we conducted a design sprint with our stakeholders as the next step in our exploration. We wanted to include stakeholders and ideate solutions together to add a different perspective to our findings and to make sure solutions are executable for Backyard Brains.

 

We ideated using the Crazy 8s method during the workshop and voted for the top ideas. Afterward, we categorized our top sketches and ideas into clusters.

User Flow

Based on our research, we created a new user flow modeling an ideal class session with the robot and the GUI for high-school teachers and students. This flow guided us through our brainstorming and ideation.

Second User Test Findings

Our first usability test surfaced 13 severe issues ranked higher than 5; the new prototype surfaced only 7 on the exact same usability test. The average task difficulty, measured with the Single Ease Question (SEQ), went down from 6.4 to 5.6.

Severity Issues

13

Before

7

After

Task Difficulty

6.4

Before

5.6

After
