Sign Language Translator

Thread Starter

bestproj

Joined Jul 19, 2012
9
Hello Everyone,

I'm an undergraduate in Electronic Engineering. I have an idea to develop a Sign Language Translator as my final-year project.

I plan to capture the sign images with a webcam and process them using OpenCV, then convert the processed images into text or voice using Java or another language. So…

1) Where can I find more information about the technology behind this kind of project?
2) Has anyone developed a sign language translator as a product (any company)?
3) Is the process I described above reliable?


I would be thankful if you would kindly reply.

Thank you,
bestelec
 

blah2222

Joined May 3, 2010
582
Took me two seconds to google this...

I've seen a lot of projects using the glove technique and some using the webcam, but I think the glove is proving to be the best way due to its mobility.
 

DMahalko

Joined Oct 5, 2008
189
A webcam vision system for direct sign language scanning and recognition from hand shapes is hardly going to be simple to design.

But to recognize hand shapes, you first have to recognize what a hand IS.

The first step is to develop "hand recognition" -- find hands in the visual scene and track them. Doing this across a wide range of light levels and lighting conditions is going to be hard, because shadows, skin tones, jewelry, and tattoos will result in confusing images.
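As a rough illustration, one classical starting point is skin-color segmentation. Here is a minimal sketch using OpenCV's Python bindings (the same calls exist in the C++ API); the Cr/Cb thresholds are a commonly quoted starting range, not tuned values, and this is exactly where the skin-tone and lighting problems show up:

```python
import cv2
import numpy as np

def find_hand_candidates(frame_bgr):
    """Return contours of skin-colored blobs large enough to be hands."""
    # YCrCb separates luma from chroma, so the skin range is somewhat
    # more stable across brightness changes than in raw BGR.
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb,
                       np.array([0, 133, 77], dtype=np.uint8),
                       np.array([255, 173, 127], dtype=np.uint8))
    # Morphological open/close to suppress speckle from shadows and noise.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Two-value return signature is OpenCV 4.x.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Discard blobs too small to plausibly be a hand (area in pixels).
    return [c for c in contours if cv2.contourArea(c) > 3000]
```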

3D stereo/binocular camera vision may be more useful than a single camera since you gain depth information to determine that some fingers are behind or in front of others, or obscured by objects near the hands such as tree branches or other people. This would also help to deal with contrast and shadows that lead to detection confusion with only one camera.
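For the stereo route, OpenCV ships a basic block matcher. A minimal sketch, assuming a calibrated and already-rectified camera pair (the file names are placeholders):

```python
import cv2

# Rectified grayscale frames from a calibrated stereo pair; calibration
# and rectification are assumed to have been done already.
left_img = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right_img = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching disparity: nearer objects (a hand held in front of the
# body) get larger disparity, which is the depth cue described above.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left_img, right_img)   # fixed-point output
disparity_px = disparity.astype("float32") / 16.0  # 4 fractional bits
```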

Some sort of temporal buffering and tracking would be good. Say you know a hand is present in the pixel field at frame 531, but you lose the track for a few tens of frames and get it back at frame 682. The intervening frames with no hand detected may still contain partial information that the program can be adapted to recognize.
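One simple way to coast through such gaps is a constant-velocity Kalman filter on the hand centroid, using OpenCV's cv2.KalmanFilter. A sketch, with the noise covariances as illustrative guesses:

```python
import cv2
import numpy as np

# State is (x, y, vx, vy); the measurement is the detected centroid (x, y).
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def track_step(detection):
    """detection is (x, y), or None on frames where the hand is lost."""
    predicted = kf.predict()  # keeps extrapolating through dropout frames
    if detection is not None:
        measurement = np.array([[detection[0]], [detection[1]]],
                               dtype=np.float32)
        kf.correct(measurement)  # snap back when detection returns
    return float(predicted[0, 0]), float(predicted[1, 0])
```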

It might be useful to generate a 3D physics simulation of the detected information, a virtual hand / wrist / palm / knuckle / finger model, using detected visual hand positions to move the model.

The model itself may provide useful details since you can reject data that causes the model to either move too fast or to move to physically impossible positions. And for frames with missed data you can interpolate the simulated hand movement across the gaps.
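A sketch of both ideas, with the speed and joint-angle limits as illustrative placeholders rather than anatomical data:

```python
# Illustrative limits only -- real values need tuning against real hands.
MAX_STEP_PX = 40.0            # max plausible centroid motion per frame
FLEXION_RANGE = (0.0, 100.0)  # allowed finger flexion, degrees

def update_is_plausible(prev_pos, new_pos, joint_angles):
    """Reject updates that move too fast or bend joints impossibly."""
    step = ((new_pos[0] - prev_pos[0]) ** 2 +
            (new_pos[1] - prev_pos[1]) ** 2) ** 0.5
    if step > MAX_STEP_PX:
        return False
    lo, hi = FLEXION_RANGE
    return all(lo <= a <= hi for a in joint_angles)

def fill_gap(pos_a, pos_b, n_missing):
    """Linearly interpolate the simulated hand across a detection gap."""
    return [(pos_a[0] + (pos_b[0] - pos_a[0]) * k / (n_missing + 1),
             pos_a[1] + (pos_b[1] - pos_a[1]) * k / (n_missing + 1))
            for k in range(1, n_missing + 1)]
```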

Detection is likely going to have to be fuzzy and forgiving, since people may not always form the distinct, perfect textbook shapes. They may "smear" the hand shapes and not form them fully. This can also happen for physical reasons such as hand pain or partial loss of fingers due to amputation.

A physics simulation of a hand is also useful since you can do the recognition from the 3D model. In real life, signing hands will not always be viewed in a full frontal position with the wrist down, as shown in textbooks. In actuality the camera may be viewing the hand off to the side, or from behind, and signing can be performed at any angle including with hands down, signing to someone behind the signer.

If you can find hand shapes of any kind and pass them to the 3D model, then the model can be read for characters, in the full frontal textbook position.
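At that point, classification can start as simply as nearest-template matching on the normalized model. A sketch with a hypothetical feature layout (one flexion angle per joint) and placeholder templates; real templates would come from recorded "textbook" hand shapes:

```python
import numpy as np

TEMPLATES = {
    # letter -> reference joint-angle vector; placeholder values here.
    "A": np.zeros(15, dtype=np.float32),
    "B": np.full(15, 10.0, dtype=np.float32),
}

def classify_shape(joint_angles):
    """Nearest-template match in joint-angle space."""
    best_letter, best_dist = None, float("inf")
    for letter, template in TEMPLATES.items():
        d = float(np.linalg.norm(joint_angles - template))
        if d < best_dist:
            best_letter, best_dist = letter, d
    return best_letter

print(classify_shape(np.zeros(15, dtype=np.float32)))  # -> "A"
```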



A raw visual-camera hand scanner and hand position playback simulator, even without sign language interpretation, would likely be highly useful if it could be developed.

I am all in favor of helping people with disabilities so I have tried to make this discussion as complete as possible, to also assist others who may be attempting to do this and find this discussion with Google.

- Dale Mahalko, Gilman, WI
 

Thread Starter

bestproj

Joined Jul 19, 2012
9
I know it takes less than a second to google anything, but the problem is that it won't give us everything we're looking for. If Google gave us everything, there would be no need for forums like this.

I don't think "the glove is proving to be the best way due to its mobility", Because it will difficult to wear always when they go hear and there.If we can install Sign Language Translation Software to Mobile 4n with Camera;I think it will really help for def people to work as they wish like normal human.So to come up with that we need 2 develop that in to mobile 4n.

Anyway, thanks for your reply,
bestproj.
 

Thread Starter

bestproj

Joined Jul 19, 2012
9
Dear Sir,

First, many thanks for your well-explained reply.
I didn't expect such a helpful answer.

I also think that if we can bring this kind of Sign Language Translator to mobile phones, it will really help deaf people to work as they wish, like anyone else. That's why I started this kind of project in my third year.

But I am still at the beginning. These days I'm trying to implement a vision-based hand gesture recognition system using a simple webcam (OpenCV and Visual Studio 2008). It would really help me to get some code help regarding hand gesture recognition.
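As a concrete starting point for that, a common first experiment is counting extended fingers from a segmented hand contour using convex hull and convexity defects. A sketch with OpenCV's Python bindings (the same functions exist in the C++ API); the angle and depth thresholds are illustrative:

```python
import cv2
import numpy as np

def count_fingers(contour):
    """Count extended fingers on a hand contour via convexity defects."""
    hull_idx = cv2.convexHull(contour, returnPoints=False)
    if hull_idx is None or len(hull_idx) < 4:
        return 0
    defects = cv2.convexityDefects(contour, hull_idx)
    if defects is None:
        return 0
    fingers = 0
    for i in range(defects.shape[0]):
        s, e, f, depth = defects[i, 0]
        start, end, far = contour[s][0], contour[e][0], contour[f][0]
        # Law of cosines gives the angle at the defect point; deep,
        # acute valleys are the gaps between extended fingers.
        a = np.linalg.norm(end - start)
        b = np.linalg.norm(far - start)
        c = np.linalg.norm(end - far)
        angle = np.arccos((b ** 2 + c ** 2 - a ** 2) / (2 * b * c + 1e-9))
        if angle < np.pi / 2 and depth > 256 * 20:  # depth is 1/256-px units
            fingers += 1
    return fingers + 1 if fingers else 0
```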

So I think I can ask for that help via this forum. Again, I am thankful for your kind help.

Thank you,
bestproj.
 