Blind People, Real-time Application, Raspberry Pi, Alexa, Python Programming Language, Artificial Intelligence, Deep Learning, Image Capture, Audio Output, Identification of Visual Relationships.
Abstract
Worldwide, one billion people live with a vision impairment that could have been prevented or has yet to be addressed. In terms of regional differences, low- and middle-income countries are estimated to have four times the prevalence of distance vision impairment of high-income regions. For near vision, rates of untreated impairment are estimated to exceed 80% in western, eastern, and central sub-Saharan Africa, while rates in high-income regions such as Asia-Pacific, Australasia, Western Europe, and North America are below 10%. The number of people with vision impairment is expected to rise as the population grows and ages. To assist the blind, we created an imaging device: a blind person carries an audio-enabled device that describes the surroundings, helping them live more safely while increasing awareness of their environment. This was accomplished by applying image-captioning techniques built on the EfficientNet-B3 architecture and tokenization methods, through which the model learned scenes paired with different captions. When a picture is taken with the camera, the processor recognizes the scene and predicts a caption. The prediction is then passed to the Alexa device, which produces an audio output, allowing the user to recognize the scene unfolding around them. Through this work, we are able to deliver synthetic eyesight to the blind, allowing them to gain confidence when travelling alone.
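As a rough illustration of the tokenization step mentioned above, the sketch below builds a word-level vocabulary from training captions and encodes a caption as the integer-id sequence a captioning decoder would be trained on. The captions, function names, and special tokens here are illustrative assumptions, not the paper's actual dataset or code.

```python
# Minimal sketch of caption tokenization for an image-captioning model.
# The captions and special-token ids below are placeholders, not the
# dataset or vocabulary used in the paper.

def build_vocab(captions):
    """Map every word in the training captions to an integer id.
    Ids 0-2 are reserved for padding and the start/end markers."""
    vocab = {"<pad>": 0, "<start>": 1, "<end>": 2}
    for caption in captions:
        for word in caption.lower().split():
            if word not in vocab:
                vocab[word] = len(vocab)
    return vocab

def encode(caption, vocab):
    """Turn one caption into the id sequence the decoder is trained on,
    framed by the start and end markers."""
    ids = [vocab["<start>"]]
    ids += [vocab[w] for w in caption.lower().split()]
    ids.append(vocab["<end>"])
    return ids

captions = ["a man crossing the street", "a dog on the street"]
vocab = build_vocab(captions)
print(encode("a man crossing the street", vocab))
```

At inference time the decoder would emit such an id sequence for a captured image, and the ids would be mapped back to words before being sent to the audio device.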
Article Details
Unique Paper ID: 155706
Publication Volume & Issue: Volume 9, Issue 1
Page(s): 1462 - 1467