How might we use object detection to help people living with visual disabilities?
Thiea Vision is a start-up that aims to help people living with visual impairments navigate the world. It is a phone app that uses a neural network to process a video feed from a Wi-Fi-enabled camera and convert it into audio feedback.
Living with a visual disability can be tough. This is especially true for people with severe visual impairment, because our built environments are not designed for somebody with poor vision. Everything from wayfinding to navigating roads to something as simple as finding a coffee mug can be a real challenge for someone with a visual disability.
With the youngest of the baby boomers hitting 65 by 2029, the number of people with visual impairment or blindness in the United States is expected to double to more than 8 million by 2050.
As of now, people living with visual impairments have to almost “hack” their way through life, because society as a whole relies primarily on vision for communication. Everything from wayfinding to identifying the correct buttons on a microwave is complicated by vision loss.
The idea was to use a neural network that runs real-time object detection to identify objects and convert visual information into audio feedback for people living with visual disabilities. The idea started out over a cup of coffee with a friend who was working on TensorFlow.
WHAT I READ
WHAT I HEARD
WHAT I FELT
I spent a weekend with a blindfold on to gain some empathy for what it feels like to live with a visual disability. This led to some interesting insights.
WHAT PEOPLE SAID
We sent out surveys asking people living with visual disabilities about some of the challenges they felt and what their biggest problems were. We also asked them how they navigated these challenges and what their own personal “hacks” were. (n=21)
I charted out the various stakeholders involved in the ecosystem that surrounds a person living with severe visual disabilities to understand the different intervention points that come into play.
Instead of creating user personas and journey maps, I decided to create an empathy map for my target audience. This helped me verbalize the biggest pain and gain points and effectively communicate my findings. I felt that a journey map or a persona would be too limiting as the target audience is very broad in terms of likes, interests and other personality traits. The only thing they have in common is their visual disability.
Total loss of vision in both eyes is considered to be 100% visual impairment and 85% impairment of the whole person. It is the 21st century and technology has finally reached the point where we can make the world more accessible for everyone.
The solution: a camera paired with a smartphone that runs real-time object detection on a live video feed. The smartphone processes the video and provides audio feedback to the user.
The biggest challenge with this project was translating raw data (bounding boxes and probability percentages) from the neural network into a user-friendly format. Since the primary sense for someone with a visual disability is their hearing, the audio feedback must be extremely selective. Based on user feedback, four modes were chosen: Navigation, Object Detection, Currency Identification, and Text to Speech.
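The selectivity described above can be sketched in code: drop low-confidence detections, prioritize the largest (likely closest) objects, and suppress labels that were just announced so the user is not flooded with audio. The function names, thresholds, and phrasing below are illustrative assumptions, not the app's actual implementation.

```python
# Minimal sketch of turning raw detector output (label, confidence, bounding
# box) into a short list of phrases for a text-to-speech engine to speak.
# All names and threshold values here are assumptions for illustration.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str          # e.g. "door", "person"
    confidence: float   # 0.0-1.0 score from the neural network
    box_area: float     # bounding-box area as a fraction of the frame


def detections_to_phrases(detections, min_confidence=0.6,
                          max_announcements=2, recently_spoken=None):
    """Keep only confident, not-recently-announced detections, prioritize
    larger (closer) objects, and cap the number of announcements per frame."""
    recently_spoken = recently_spoken or set()
    confident = [d for d in detections
                 if d.confidence >= min_confidence
                 and d.label not in recently_spoken]
    # Larger bounding boxes usually mean closer objects, so announce them first.
    confident.sort(key=lambda d: d.box_area, reverse=True)
    return [f"{d.label} ahead" for d in confident[:max_announcements]]


detections = [
    Detection("door", 0.91, 0.30),
    Detection("person", 0.84, 0.12),
    Detection("chair", 0.40, 0.05),   # below confidence threshold, dropped
    Detection("table", 0.75, 0.02),   # recently spoken, suppressed
]
phrases = detections_to_phrases(detections, recently_spoken={"table"})
# phrases -> ["door ahead", "person ahead"]; a TTS engine would speak each one.
```

In a real pipeline this filter would run on every processed frame, with the `recently_spoken` set decaying over a few seconds so important objects are eventually re-announced.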
RATIONALE BEHIND SCREENS
I consciously decided to create as simple an interface as possible. Most users interact with apps using their phone's talk-back feature. However, a small subset of users with severe visual disabilities can still see objects up close. To meet their needs, the screens use big, bold, high-contrast text. Each mode also has a distinct color to make it easier to differentiate between screens.
CHECKING ACCESSIBILITY FOR COLOR BLINDNESS
Based on user feedback, I modified the screens to include tabs indicating the current mode. I also updated the visual design to simplify the screens and remove unnecessary elements.
The moment we decided to design a product mounted onto glasses, we knew it would be impossible to make the product invisible. We decided to make it a fashion accessory instead, something users would be proud to wear.
The app was designed to be used by people with visual disabilities. To make it as easy to use as possible, it features minimal, large text along with a distinct bright color for each screen.
Micro-animations were designed with the intent of optimizing the user experience for people with low vision.