Thiea Vision is a start-up that aims to help people living with visual impairments navigate the world. Its phone app uses a neural network to process the video feed from a Wi-Fi-enabled camera and convert it into audio feedback.
Total loss of vision in both eyes is considered to be 100% visual impairment and 85% impairment of the whole person.
We used TensorFlow, a framework for designing deep neural networks, to identify objects.
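A TensorFlow detection model typically returns arrays of bounding boxes, class IDs, and confidence scores. As a minimal sketch of the first processing step, discarding low-confidence detections, the label map and threshold below are hypothetical, not the app's actual values:

```python
# Illustrative sketch: filter raw detector output by confidence.
# The output convention (normalized [ymin, xmin, ymax, xmax] boxes,
# integer class ids, float scores) mirrors TensorFlow detection models;
# the label map and threshold are placeholder assumptions.

CLASS_NAMES = {1: "person", 2: "car", 3: "door"}  # hypothetical label map

def filter_detections(boxes, class_ids, scores, threshold=0.5):
    """Keep only detections whose confidence meets the threshold."""
    kept = []
    for box, cid, score in zip(boxes, class_ids, scores):
        if score >= threshold:
            kept.append({
                "label": CLASS_NAMES.get(cid, "unknown"),
                "box": box,
                "score": score,
            })
    return kept

detections = filter_detections(
    boxes=[[0.1, 0.2, 0.5, 0.6], [0.0, 0.0, 0.9, 0.9]],
    class_ids=[1, 3],
    scores=[0.92, 0.31],
)
# Only the 0.92-confidence "person" detection survives the 0.5 cutoff.
```

Thresholding like this matters for an audio interface: every false positive becomes a spoken interruption, so it is usually better to stay silent than to announce a low-confidence guess.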
Surveys, interviews, literature reviews, and empathic modelling were used to develop the system. I personally spent two days wearing a blindfold to better understand the problems that people with visual disabilities face.
So what did people have to say?
We used the insights gained from the surveys and interviews to decide which features the app would have.
The biggest challenge in this project was translating raw data (bounding boxes and confidence scores) from the neural network into a user-friendly format. Based on user feedback, four modes were chosen: Navigation, Object Detection, Currency Identification, and Text to Speech.
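To illustrate the kind of translation involved, here is one possible way (not necessarily the app's actual logic) to turn a bounding box into a spoken phrase: map the box's horizontal center onto simple direction words before handing the text to a text-to-speech engine. The direction bands are an illustrative assumption:

```python
def describe_detection(label, box):
    """Turn a normalized [ymin, xmin, ymax, xmax] box into a spoken phrase.

    The three equal-width direction bands (left / ahead / right) are an
    illustrative choice for this sketch, not the app's actual mapping.
    """
    ymin, xmin, ymax, xmax = box
    center_x = (xmin + xmax) / 2  # horizontal center in [0, 1]
    if center_x < 1 / 3:
        direction = "to your left"
    elif center_x > 2 / 3:
        direction = "to your right"
    else:
        direction = "ahead"
    return f"{label} {direction}"

print(describe_detection("door", [0.1, 0.7, 0.8, 0.95]))
```

A phrase like "door to your right" is short enough to speak before the scene changes, which is why a coarse three-way split can work better for audio than reading out precise coordinates.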
The app was designed for people with visual disabilities. To make it as easy as possible to use, each screen pairs minimal, large text with its own bright color.
Micro-animations were designed to optimize the user experience for people with low vision.
We decided to design a product that mounts onto the user's glasses, as this gives the camera the best field of view.
Various CAD models and physical prototypes were developed before the design was finalized.
The body is made of machined aluminum with copper rings to hold the lens in place.
Hiding in plain sight
The moment we decided to design a product mounted onto glasses, we knew it would be impossible to make it invisible.
Instead, we decided to make the product a fashion accessory: something users would be proud to wear.