April 2024 - Present
Solo designer (conversation design, user testing)
An accessible, multimodal AI voice cueing system
Figma
CUE-D is a semi-finalist project in the Longitude Prize on Dementia. It is a real-time voice assistant designed specifically for people living with dementia: it interacts socially with the individual, using ML/AI to learn the individual's habits, interests, and hobbies.
This case study will explore the conversation design process of CUE-D, including the initial research, sample dialogues, multimodal design, and testing iterations.
Intuitive and Accessible Interfaces: How might we design intuitive and accessible interfaces to ensure people with dementia can easily learn and use CUE-D?
Effective Cues and Timing: How might we deliver prompts at the right time so that they effectively support task completion?
Although I was not directly involved in this research, understanding its methods was crucial to creating a more effective and supportive tool.
We also analyzed DementiaBank to understand the communication patterns, vocabulary usage, and linguistic challenges associated with dementia, which guided the conversational tone and language in CUE-D.
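To give a flavor of this analysis, here is a minimal sketch of the kind of vocabulary profiling one could run over exported transcripts. The plain-text file layout, folder name, and metric choices are my own illustrative assumptions, not the actual analysis pipeline.

```python
# Minimal sketch: simple lexical measures over a folder of transcripts.
# Assumes transcripts were exported as plain-text files; the folder name
# and metric choices are illustrative.
from collections import Counter
from pathlib import Path
import re

def vocabulary_profile(transcript_dir: str, top_n: int = 20) -> dict:
    """Compute basic vocabulary statistics across all transcripts."""
    words = []
    for path in Path(transcript_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8").lower()
        words.extend(re.findall(r"[a-z']+", text))
    counts = Counter(words)
    # Type-token ratio: unique words / total words, a rough measure of
    # lexical diversity often reported to be lower in dementia speech.
    ttr = len(counts) / max(len(words), 1)
    return {
        "total_words": len(words),
        "unique_words": len(counts),
        "type_token_ratio": round(ttr, 3),
        "most_common": counts.most_common(top_n),
    }

profile = vocabulary_profile("transcripts/")  # hypothetical folder
print(profile["type_token_ratio"], profile["most_common"][:5])
```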
People with dementia face significant difficulties in verbal communication, including:
The goal is to understand users and technical capabilities to define the core functionalities.
To better understand how people with dementia complete a daily task, I broke down the journey into four key activities: Planning, Executing the Task, Problem-Solving, and Completion, each with corresponding user needs and stories.
Taking technical limitations, level of effort, and timeline into consideration, I decided to focus on the use cases with the most impact on the daily lives of people with dementia.
To accommodate users like Sarah, who may have speaking difficulties or require more time to articulate thoughts, CUE-D should be designed to adapt to the user's unique speaking habits and preferences.
After defining the key use cases, I started writing sample dialogues for each one. The goal was to get a quick, low-fidelity sense of the "sound-and-feel" of the interaction I was designing. These dialogues were then tested using text-to-speech (TTS) rendering to ensure they sounded natural and aligned with the intended conversational tone.
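As an illustration of this step, the sketch below renders a short exchange with the open-source pyttsx3 TTS engine. The dialogue lines and the slowed speaking rate are illustrative choices; any TTS engine would work for this kind of quick check.

```python
# Quick check of how a sample dialogue "sounds" when rendered with TTS.
# Uses the pyttsx3 library (pip install pyttsx3); the dialogue lines and
# the slowed speaking rate are illustrative.
import pyttsx3

sample_dialogue = [
    ("CUE-D", "Good morning, Sarah. Would you like to review today's plan?"),
    ("Sarah", "Yes, please."),
    ("CUE-D", "You have brunch with Anna at 2 p.m. Shall I remind you at 1:30?"),
]

engine = pyttsx3.init()
engine.setProperty("rate", 150)  # slower pace for easier comprehension

for speaker, line in sample_dialogue:
    print(f"{speaker}: {line}")
    engine.say(line)
    engine.runAndWait()  # block until the line finishes playing
```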
👇Click on each tab to see sample dialogue:
CUE-D remembers Sarah’s previous behaviors and incorporates them into its suggestions.
The assistant actively updates its memory to improve future interactions.
People with dementia tend to drift away from their current task and forget the next step. Therefore, CUE-D should include reminders like "keep an eye on the boiling water" and "turn off the stove" to ensure safe cooking practices.
CUE-D detects a potential mistake from the user (2 a.m. for brunch instead of 2 p.m.). It seeks confirmation in a supportive tone and offers a correction; a sketch of this kind of plausibility check follows these dialogue notes.
CUE-D accurately confirms and schedules Sarah’s request, reducing cognitive load.
CUE-D keeps the interaction supportive by offering further assistance, such as helping the user find the vitamin D.
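The "2 a.m. brunch" correction above reduces to a simple plausibility rule. Here is a minimal sketch of how such a check could work; the meal-time windows and function names are hypothetical, not CUE-D's actual implementation.

```python
# Sketch of a plausibility rule behind the "2 a.m. brunch" correction.
# The plausible-hour windows below are assumptions for illustration.
from datetime import time
from typing import Optional

PLAUSIBLE_HOURS = {  # hypothetical 24-hour windows per event type
    "breakfast": range(6, 11),
    "brunch": range(10, 15),
    "dinner": range(17, 22),
}

def check_event_time(event: str, requested: time) -> Optional[str]:
    """Return a gentle confirmation prompt if the time looks unlikely."""
    window = PLAUSIBLE_HOURS.get(event)
    if window is None or requested.hour in window:
        return None  # time looks plausible; schedule as requested
    # Suggest the 12-hour "mirror" of the requested time (2:00 -> 14:00).
    suggested = (requested.hour + 12) % 24
    return (f"Just to double-check: did you mean {suggested}:00 for {event}? "
            f"{requested.hour}:00 seems unusual.")

print(check_event_time("brunch", time(2, 0)))
# -> Just to double-check: did you mean 14:00 for brunch? 2:00 seems unusual.
```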
Given timeline and resource constraints, and the fact that CUE-D is a voice-focused assistant, testing concentrated solely on voice functionality. A very simple, familiar interface was used during this phase, with tablets serving as the testing devices to facilitate user interaction.
The participants were all older adults, either living with early-stage dementia or partners of someone who is. We employed a task-based approach to evaluate CUE-D's core functionalities. Due to time and environmental limitations, we focused on three key tasks designed to test CUE-D's primary features:
After each session, participants were asked post-interview questions to gather insights on their impressions of CUE-D, thoughts on its usability, and any suggestions for improvement. This feedback helped us refine CUE-D’s conversational design and core functionality further.
Click to listen to how CUE-D helps the user set up a reminder:
We also found some technical issues during user testing: CUE-D was not able to correct obvious errors from users, sometimes failed to respond, and interrupted users while they were talking.
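The interruption issue in particular has a common mitigation in voice interfaces: silence-based endpointing, where the assistant waits for a sustained pause before it responds. A minimal sketch, assuming a hypothetical voice-activity callback and an illustrative 1.5-second threshold:

```python
# Silence-based endpointing: only treat an utterance as finished after a
# sustained pause, so the assistant does not talk over the user. The
# 1.5-second threshold is an illustrative assumption; real systems tune
# it, often upward for users who need more time to articulate thoughts.
import time

SILENCE_THRESHOLD_SEC = 1.5

def wait_for_end_of_utterance(is_user_speaking, poll_interval=0.1):
    """Block until the user has been silent for SILENCE_THRESHOLD_SEC.

    `is_user_speaking` is a hypothetical callable backed by a voice
    activity detector; it returns True while speech is detected.
    """
    silent_since = None
    while True:
        if is_user_speaking():
            silent_since = None  # speech resumed; reset the pause timer
        elif silent_since is None:
            silent_since = time.monotonic()  # pause just started
        elif time.monotonic() - silent_since >= SILENCE_THRESHOLD_SEC:
            return  # pause long enough: safe for CUE-D to respond
        time.sleep(poll_interval)
```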
After analyzing user feedback, a new use case was identified for CUE-D: proactively assisting users in reviewing their daily routines and upcoming events without requiring them to initiate the interaction:
During the first iteration, users highlighted the difficulty in distinguishing whether CUE-D was actively listening or inactive. To address this, I explored different interface designs to provide clear visual and auditory cues that communicate CUE-D's listening state.
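One way to keep those cues consistent is to model the listening behavior as a small state machine, pairing every state with a visual cue and, where useful, an auditory one. The sketch below is illustrative; the state names and cue pairings are my assumptions, not CUE-D's production code.

```python
# Illustrative state model behind the listening-state cues. State names
# and cue pairings are assumptions, not CUE-D's actual implementation.
from enum import Enum, auto

class AssistantState(Enum):
    IDLE = auto()       # not listening
    LISTENING = auto()  # actively listening for the user
    THINKING = auto()   # processing a request
    SPEAKING = auto()   # delivering a response

# Pair each state with a visual cue and an optional auditory cue, so
# users always have two channels confirming what CUE-D is doing.
STATE_CUES = {
    AssistantState.IDLE: ("muted microphone icon", None),
    AssistantState.LISTENING: ("moving wave animation", "soft chime"),
    AssistantState.THINKING: ("pulsing dots", None),
    AssistantState.SPEAKING: ("smiling face with on-screen text", None),
}

def on_state_change(state: AssistantState) -> None:
    visual, audio = STATE_CUES[state]
    print(f"show: {visual}" + (f" | play: {audio}" if audio else ""))

on_state_change(AssistantState.LISTENING)
# -> show: moving wave animation | play: soft chime
```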
Emotional design was a major consideration throughout the process: elements such as a "smiling face" and a "moving wave" were included to make CUE-D more relatable and engaging.
Five variations of user interfaces were designed:
"That one with a smiley face (Option 4) is very very welcoming 👍”
"Having texts on the screen would be very helpful" 👍
The “Siri-like” interface (Option 2) reminds me of the ECG diagram 😩"
"It would be helpful if CUE-D had a more obvious icon changed when it’s actively listening. That way, I’d know when to start speaking.😌"
The final screens integrate emotional design and explicit visual indicators to create a better user experience.
Awareness: Throughout the accessibility improvement process, I've learned to recognize the importance of creating inclusive experiences that accommodate diverse needs, particularly for individuals living with dementia.
Continuous Learning: Accessibility is a complex and evolving field, and there's always more to learn. This project has inspired me to continue educating myself about best practices, guidelines, and emerging technologies in accessibility design. By staying informed and proactive, I can contribute to creating more inclusive digital experiences in the future.