ABSTRACT
Despite rapid progress in software engineering, relatively few of its advances have been put to work for visually impaired people. With the development of artificial intelligence and machine learning techniques, engineers can infuse "intelligence" into ordinary computers, and the ubiquity of smartphones lets us extend that intelligence to help visually impaired people understand their surroundings and receive an AI-powered helping hand in their day-to-day activities.
Our mobile application bridges the gap between visually impaired people and the visual world by leveraging the power of Deep Learning, made accessible even on low-end devices through a clear, simple user interface that helps them better understand the world around them.
Our primary purpose is to study how Deep Learning architectures and rapid prototyping tools can help us develop applications that run responsively even on low-end devices. With this application, we aim to offer a one-stop solution that allows blind or partially sighted people to understand their surroundings better and cope with the dynamic world ahead of them.
Our mobile application allows users to leverage an image-captioning architecture to generate real-time descriptions of their surroundings, then uses natural language processing to speak those descriptions aloud clearly. The cornerstone of the application is its user interface, designed for ease of handling and use so that the experience remains simple and intuitive.
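The caption-then-speak pipeline described above can be sketched as follows. This is a minimal illustration, not the project's actual implementation: `caption_image` and `speak` are hypothetical stand-ins for a trained image-captioning model (typically a CNN encoder with a language-model decoder) and the device's text-to-speech service.

```python
# Sketch of the app's core loop: camera frame -> caption -> spoken output.
# The captioner and speech engine below are stubs; in a real app they would
# wrap an on-device captioning model and the platform's text-to-speech API.

def caption_image(image_bytes: bytes) -> str:
    """Stub for the image-captioning model: maps an image to a sentence."""
    # A real implementation would run encoder-decoder inference here.
    return "a person crossing the street at a crosswalk"

def speak(text: str) -> str:
    """Stub for the on-device text-to-speech engine."""
    # A real implementation would hand `text` to the platform TTS service.
    return f"[speaking] {text}"

def describe_surroundings(image_bytes: bytes) -> str:
    """One pass of the pipeline: caption the frame, then voice the caption."""
    caption = caption_image(image_bytes)
    return speak(caption)

print(describe_surroundings(b"fake-camera-frame"))
```

Keeping the captioner and the speech engine behind separate functions mirrors the design goal stated above: either component can be swapped (for example, for a smaller model on low-end devices) without changing the rest of the loop.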
INTRODUCTION
Vision is one of the most important human senses, and it underpins much of our perception of the environment around us. For visually impaired people this information is not available through conventional means, and they must depend on external intervention to obtain it. This creates serious problems for their safety and mobility, forcing them to rely on outside help for everyday activities. Meanwhile, the rise of artificial intelligence and mobile application development has transformed the experience offered to the general public: applications like TikTok, Instagram, Amazon, and Spotify use real-time machine learning algorithms to accelerate their data-driven insights and deliver the best possible user experience. However, comparatively little research and implementation effort has gone into helping visually impaired people understand their visual scenes. Existing work has mainly focused on identifying individual components and characteristics of the environment, an approach that does not fully scale. This project proposes a solution that helps visually impaired people better understand their environment, with specific features implemented to let them navigate and interact with it more easily.