Abstract
This project aims to develop an AI-powered mobile application designed to assist visually impaired people by enhancing their daily navigation and interaction with the environment. Leveraging AI technologies such as object recognition and real-time navigation assistance, delivered through audio feedback, the app seeks to improve the quality of life of visually impaired users by giving them greater independence and safety.
Introduction
Visually impaired individuals often face significant challenges in navigating and interacting with their surroundings. Traditional aids such as canes or guide dogs have limitations, particularly in unfamiliar or dynamic environments. This project proposes an innovative solution that uses AI to provide real-time, context-aware assistance through a user-friendly mobile application.
Existing System
Existing solutions for assisting visually impaired people include tactile paving, audio signals at pedestrian crossings, and standard GPS-based applications. However, these systems provide neither detailed information about obstacles nor real-time object recognition, and they lack comprehensive indoor navigation features.
Proposed System
The proposed mobile app will integrate AI-driven technologies to detect and interpret the environment, providing auditory feedback to the user. It will feature object and text recognition, facial recognition for identifying acquaintances, and detailed navigation assistance both indoors and outdoors. The app will be designed to work seamlessly with existing accessibility tools on smartphones.
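To illustrate the kind of pipeline the app would rely on, the sketch below runs a pretrained SSD MobileNet detector from TensorFlow Hub over a single frame and announces detected objects through the pyttsx3 text-to-speech library. The model URL, score threshold, image file name, and the partial label map are illustrative assumptions, not final design decisions.

```python
# Minimal sketch: detect objects in a camera frame and speak their labels.
# Assumes tensorflow, tensorflow_hub, pillow, numpy, and pyttsx3 are installed.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import pyttsx3
from PIL import Image

# Pretrained SSD MobileNet v2 detector (COCO classes) from TensorFlow Hub.
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

# Partial COCO label map for illustration; a real app would load the full map.
LABELS = {1: "person", 2: "bicycle", 3: "car"}

def announce_objects(image_path: str, threshold: float = 0.5) -> None:
    # The detector expects a uint8 tensor of shape [1, height, width, 3].
    frame = np.array(Image.open(image_path).convert("RGB"))
    result = detector(tf.convert_to_tensor(frame)[tf.newaxis, ...])

    classes = result["detection_classes"][0].numpy().astype(int)
    scores = result["detection_scores"][0].numpy()

    spoken = [LABELS.get(c, f"object {c}")
              for c, s in zip(classes, scores) if s >= threshold]
    if spoken:
        engine = pyttsx3.init()
        engine.say("I can see " + ", ".join(spoken))
        engine.runAndWait()

announce_objects("street_scene.jpg")  # "street_scene.jpg" is a placeholder
```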
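Text recognition could follow the same detect-then-speak pattern. As one possible approach (not necessarily the OCR engine the final app would adopt), the snippet below reads printed text with the open-source Tesseract engine via pytesseract and speaks it aloud; the image file name is a placeholder.

```python
# Minimal sketch: read printed text from an image and speak it aloud.
# Assumes the Tesseract OCR engine plus pytesseract and pyttsx3 are installed.
import pytesseract
import pyttsx3
from PIL import Image

def read_text_aloud(image_path: str) -> None:
    text = pytesseract.image_to_string(Image.open(image_path)).strip()
    if text:
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()

read_text_aloud("sign.jpg")  # "sign.jpg" is a placeholder
```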
Methodology
- Requirement Gathering: Collaborate with visually impaired individuals and accessibility experts to identify key features and usability requirements.
- Data Collection: Assemble datasets for training AI models, including images, videos, and audio from various environments.
- Model Development: Develop and train AI models for object detection, facial recognition, and text recognition using deep learning techniques (see the model-development sketch after this list).
- App Development: Create the mobile application with a focus on accessibility features, such as voice commands and audio feedback.
- Integration: Integrate the AI models with the mobile app and ensure real-time performance (see the TensorFlow Lite conversion sketch after this list).
- Testing and Feedback: Conduct usability testing with visually impaired users to refine features and improve the interface.
- Deployment and Maintenance: Launch the app on both iOS and Android platforms and provide continuous updates based on user feedback and technological advancements.
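As a concrete example of the model-development step, the sketch below fine-tunes a pretrained MobileNetV2 backbone with tf.keras for a custom image-classification task. The dataset directory, class count, and training settings are placeholder assumptions standing in for the project's own collected data.

```python
# Minimal transfer-learning sketch for the model-development step.
# "data/train" and NUM_CLASSES are placeholders for the project's own dataset.
import tensorflow as tf

NUM_CLASSES = 10
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32)

# Frozen ImageNet backbone with a small trainable classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
model.save("classifier.keras")
```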
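For the integration step, one common route to real-time on-device performance is converting the trained Keras model to TensorFlow Lite with default post-training quantization, so inference runs locally without a network round-trip. The file names continue from the sketch above and are likewise assumptions.

```python
# Minimal sketch: convert the trained model to TensorFlow Lite for on-device use.
import tensorflow as tf

model = tf.keras.models.load_model("classifier.keras")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
tflite_model = converter.convert()

# The .tflite file can then be bundled with the iOS and Android apps and run
# through the TensorFlow Lite runtime on each platform.
with open("classifier.tflite", "wb") as f:
    f.write(tflite_model)
```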
Technologies Used
- Python and TensorFlow: For AI model development.
- Swift and Kotlin: For iOS and Android app development, respectively.
- Google Cloud APIs: For additional AI capabilities such as enhanced speech recognition (see the transcription sketch after this list).
- Docker and Kubernetes: For deploying and managing backend services.
- Firebase: For user data management and analytics.
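As an illustration of how the Google Cloud speech capability might be called from the backend, the sketch below transcribes a short recorded voice command with the google-cloud-speech client library. The audio file name, sample rate, and language code are assumptions, and application credentials must be configured separately.

```python
# Minimal sketch: transcribe a recorded voice command with Google Cloud
# Speech-to-Text. Assumes google-cloud-speech is installed and application
# credentials are configured; file name and audio settings are placeholders.
from google.cloud import speech

def transcribe_command(path: str = "command.wav") -> str:
    client = speech.SpeechClient()
    with open(path, "rb") as f:
        audio = speech.RecognitionAudio(content=f.read())
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    response = client.recognize(config=config, audio=audio)
    # Take the top alternative of each recognized segment.
    return " ".join(r.alternatives[0].transcript for r in response.results)

print(transcribe_command())
```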