Hello,
I'm trying to build a light, portable device designed for blind or visually impaired people. The device recognizes text in front of the wearer using a camera, converts it to speech, and plays the synthesized voice through earphones.
The camera is mounted on special glasses with attached earphones. I think this is a good location for the camera because humans are capable of precise head-hand coordination. Even without seeing, a person can point their head in the direction of their hand. This means that if a blind person is holding a book, that person can point the glasses straight at the book.
Here are my thoughts so far:
The image sensor will be small, similar to this:
http://www.raspberrypi.org/camera
Sensors from Omnivision are small and I think they are great for this application.
I think ARM running Linux is a good option, because many image processing applications and speech synthesizers exist for this platform.
Overall, I want this device to look similar to this (of course it has other functionality):
http://forum.allaboutcircuits.com/attachment.php?attachmentid=60717&d=1382902520
I need help building an embedded system capable of running Linux, doing image processing, and performing text-to-speech. The electronics involved will be quite complex, so any thoughts on which microcontroller and memory to use are welcome. The software is, perhaps, even more complex.
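On the software side, one way to prototype the pipeline is to chain existing open-source tools before writing anything custom. Below is a minimal sketch (not a finished implementation) assuming the tesseract OCR engine and the espeak synthesizer are installed; both are available in common ARM Linux distributions, including Raspbian:

```python
import subprocess

def ocr_command(image_path, lang="eng"):
    # Tesseract with "stdout" as the output name prints recognized text
    # to standard output instead of writing a file.
    return ["tesseract", image_path, "stdout", "-l", lang]

def tts_command(text):
    # espeak speaks the given text through the default audio device
    # (which would be the earphones on the glasses).
    return ["espeak", text]

def read_aloud(image_path):
    # Run OCR on a captured frame, then pipe any recognized text
    # to the speech synthesizer.
    result = subprocess.run(ocr_command(image_path),
                            capture_output=True, text=True)
    text = result.stdout
    if text.strip():
        subprocess.run(tts_command(text))
    return text
```

On a Raspberry Pi, the input frame could come from the camera module (e.g. via `raspistill -o frame.jpg`), after which `read_aloud("frame.jpg")` runs the whole chain. This won't be fast enough or robust enough for the final device, but it gives a working baseline to measure against.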
Anybody who is interested in this project and wants to help, please post here, or write me an email: xkyve1 [at] gmail.com
I will update this post with my progress. So far I've started on a PC and a Raspberry Pi.
Thank you