Remote sensing/imaging?

Thread Starter

Mathematics!

Joined Jul 21, 2008
1,036
I am curious: how can electromagnetic waves give you an image of your surroundings?

I know with visible light the color is determined by the wavelength/frequency.

How would they create the image with microwaves, or infrared, etc.?

Even with mechanical waves like sonar, how do they recreate the image?
Do they use the distance travelled before the waves bounce back off objects? Even then, I don't get how they can work out what the image looks like, rather than just the depth of objects at particular points.

But nowadays baby sonograms are awesome quality.
So there must be something I am missing about how images are created.

Anybody care to elaborate?

I know how to do remote control, transmitter/receiver stuff, but remote sensing/imaging is a whole other ball game.

I still don't really get how a light source gives you a complete image. If the image is the reflection of light back to your eyes then the entire image is the sum of all the light rays reflecting back to your eyes.
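As a toy sketch of the sonar bounce-back idea: the receiver measures the round-trip echo time at each beam angle, and range follows from r = c·t/2, giving one depth reading per direction. The speed of sound and the echo delays below are invented numbers, just to show the arithmetic.

```python
SPEED_OF_SOUND = 1500.0  # m/s, roughly the speed of sound in water

def echo_range(round_trip_s):
    """Range to the reflector: the ping travels out and back, so halve it."""
    return SPEED_OF_SOUND * round_trip_s / 2

# Round-trip echo delays measured at five beam angles (degrees -> seconds):
delays = {-30: 0.040, -15: 0.020, 0: 0.010, 15: 0.020, 30: 0.040}

# One range reading per direction: a crude 'image' of depth vs angle.
depth_map = {angle: echo_range(t) for angle, t in delays.items()}
print(round(depth_map[0], 3))  # -> 7.5
```

Real sonar imaging adds beam width, echo strength and many more angles, but each 'pixel' still starts from this time-of-flight calculation.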
 

studiot

Joined Nov 9, 2007
4,998
I hope this is not going to turn out to be one of your marathon threads, but here is the beginnings of an answer.

Your body/brain uses a mathematical process known as triangulation, using the distance between your eyes as a baseline and solving the triangle to points of light in front of you. Each eye sees a slightly different picture and the brain makes a composite from both.

Your eyes actually scan the whole area of vision and build up a pattern of information representing the distances and colours of the outlines of the objects in front of you.

This scanning is not done in a regular fashion like a TV or computer screen, but in a pseudo-random manner that covers all points eventually.
Because your memory stores the received information the brain can build up the pattern and make 3 dimensional sense of the information. It can also direct the scan (unconsciously) to a particular area of interest.

Knowledge of this mechanism allows one to fool the eyes with false pictures.
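A minimal numeric sketch of that triangulation, using the classic pinhole-stereo relation Z = f·B/d (the baseline, focal length and disparity below are invented numbers, not real eye data):

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Pinhole-stereo triangulation: depth Z = focal * baseline / disparity."""
    return focal_px * baseline_m / disparity_px

# Eyes roughly 6.5 cm apart; pretend the 'retina' has an 800 px focal length.
# A point that shifts 20 px between the two views is about 2.6 m away.
print(round(depth_from_disparity(0.065, 800, 20), 2))  # -> 2.6
```

The key idea is that nearer points shift more between the two views, so the shift (disparity) encodes depth.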
 

Thread Starter

Mathematics!

Joined Jul 21, 2008
1,036
Your body/brain uses a mathematical process known as triangulation, using the distance between your eyes as a baseline and solving the triangle to points of light in front of you. Each eye sees a slightly different picture and the brain makes a composite from both.

Your eyes actually scan the whole area of vision and build up a pattern of information representing the distances and colours of the outlines of the objects in front of you.

This scanning is not done in a regular fashion like a TV or computer screen, but in a pseudo-random manner that covers all points eventually.
Because your memory stores the received information the brain can build up the pattern and make 3 dimensional sense of the information. It can also direct the scan (unconsciously) to a particular area of interest.

Knowledge of this mechanism allows one to fool the eyes with false pictures.
I am still not fully understanding how the image is created.
Take visible light:
The transmitter is a light source and the receiver is our eyes and brain.
But what I don't get is that the transmission is just rays of light with different frequencies bouncing off different objects back to the eye. If the eye is looking at one place, shouldn't you only see that one ray, i.e. only one pixel?

If in fact the eye is, like you said, constantly scanning even though it appears to be looking at a fixed point, then there still must be a way that the brain interprets the information and stores the colour and depth that distinguish each mental pixel. Plus there must be a frame rate to the human receiver, or else how would you know when an image is fully interpreted? Not to mention, what determines the resolution?
 

Wendy

Joined Mar 24, 2008
23,408
Actually the rods (for black and white) and cones are kind of like pixels, but there isn't an organized pattern that we would think of as such. The densest area is directly at the back of the eyeball, and it thins out more and more towards the sides. As studiot mentioned, a bunch of processing power is involved for the total effect, though we're not aware of it. It's like the blind spot in our vision: everyone has it, but you never notice it.

Google, Wikipedia, and yes, even the Discovery Channels are your friend.
 

studiot

Joined Nov 9, 2007
4,998
Plus there must be a frame rate to the human receiver, or else how would you know when an image is fully interpreted? Not to mention, what determines the resolution?
There is no frame rate or set resolution. Both may be regarded as continuously variable.

Perhaps saying the eye scans is a bit misleading. Darting about under voluntary control would be a better description.

Our understanding of the 'technology' is a long way from complete and our machine versions (computers/cameras etc) are a long way inferior. In particular a machine cannot update only a part of a frame, but must declare a new frame. Animals are not constrained in these ways.

I think you are interested in the pattern juggling (data processing) rather than the mechanics of vision so I will only say the interesting bit.

All animals, with one known sea-creature exception, use the same system of vision, although individual species are not developed to the same extent. The photoreceptor is the same chemical in all animals except that one. The same system of triads of colour receptors is used as in a colour camera, except that some animals can see only one or two colours. So the RGB system is common to machines and nature.

As you aptly put it, the photoreceptors are identified with a mental pixel. When you open your eye, the eye does not look at a fixed point but rapidly does an area sampling. This establishes a coarse picture; no pixel is discarded. Based on this, the brain decides which areas to increase the resolution on next. This is presumably an evolutionary trait, as it allows threats to be identified and monitored rapidly. Alternatively you may choose to look at a fixed object; other objects go 'out of focus'. This process of enhancing sections of the overall picture can continue over several cycles, while all the time random peripheral excursions allow threat monitoring. These excursions will also pick up movement and allow the brain to redirect focus if necessary.

This whole process is predicated upon a sophisticated system of pattern recognition, and much current research is directed at this important subject.

You would be surprised how few points a human brain needs to decide that an object represents a 'face', and to deduce quite a lot about that face. It's only a handful.
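The coarse-then-refine sampling described above can be sketched roughly like this. The 'scene' is an invented brightness function, not a model of real vision; it just shows the strategy of a quick low-resolution pass followed by a dense pass on the most interesting spot.

```python
def scene(x, y):
    """Toy scene: dim background with one bright 'object' near (6, 6)."""
    return 255 if 5 <= x <= 7 and 5 <= y <= 7 else 20

def sample_grid(x0, y0, size, n):
    """Sample an n x n grid of brightness values over a square region."""
    step = size / n
    return [[scene(x0 + i * step, y0 + j * step) for i in range(n)]
            for j in range(n)]

# Coarse pass over the whole 10 x 10 field of view; no sample is discarded.
coarse = sample_grid(0, 0, 10, 5)

# Pick the brightest coarse sample as the 'area of interest'...
value, bi, bj = max((coarse[j][i], i, j) for j in range(5) for i in range(5))

# ...and spend extra resolution only there: a dense pass on a small window.
fine = sample_grid(bi * 2 - 1, bj * 2 - 1, 2, 8)
print(value, fine[0][0])  # -> 255 255
```

The eye's version is far more sophisticated (and adds the random peripheral excursions), but the budget-your-samples idea is the same.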
 

Thread Starter

Mathematics!

Joined Jul 21, 2008
1,036
OK, I kind of figured that this subject isn't fully understood and that current research is going on to understand it better.

But how does a camera capture an image and what makes the resolution better?

If a photoreceptor gives one pixel of a camera image, then wouldn't you need millions just to get a good-quality picture?

And photoreceptors are electronic components that convert different frequencies of light into different voltages, so you can distinguish the colour?

I am just curious how the camera builds the image?

Also, since we are mostly talking about light as the transmitter source:
How would you do it with infrared, RF, microwave, X-ray, etc. as the transmitter source?
I guess what I'm asking is, is there something equivalent to photoreceptors for these frequencies on the receiver end?
 

studiot

Joined Nov 9, 2007
4,998
And photoreceptors are electronic components that convert different frequencies of light into different voltages, so you can distinguish the colour?
No, they are all the same.

Effectively there are three B&W receptors side by side with colour filters in front so they each only receive the R, G or B component.

It's the same in nature.

At one time they actually used three separate cameras in studios (with filters) and overlaid the results.
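That three-filtered-cameras trick can be sketched in a few lines: split a colour image into three B&W planes (what each filtered receptor would see) and overlay them again. The 2x2 'image' and its values are invented.

```python
# Toy 2 x 2 colour image, each pixel an (R, G, B) triple (values invented).
image = [[(200, 120, 40), (10, 10, 10)],
         [(0, 255, 0), (255, 255, 255)]]

def channel_plane(img, idx):
    """One B&W plane, as seen by a receptor behind a single colour filter."""
    return [[px[idx] for px in row] for row in img]

# Three filtered 'cameras': an R plane, a G plane and a B plane.
planes = [channel_plane(image, i) for i in range(3)]

def overlay(r, g, b):
    """Overlay the three B&W planes back into one colour image."""
    return [[(r[y][x], g[y][x], b[y][x]) for x in range(len(r[0]))]
            for y in range(len(r))]

print(overlay(*planes) == image)  # -> True
```

Each plane on its own is just a brightness map; the colour only appears when the three are recombined, which is exactly the studio-overlay idea.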
 

Thread Starter

Mathematics!

Joined Jul 21, 2008
1,036
Effectively there are three B&W receptors side by side with colour filters in front so they each only receive the R, G or B component
So then is one pixel the combination of all three B&W receptors? And is the colour of that pixel determined by how much red, green and blue you have, which is in turn determined by how much light got reflected back to that particular receptor?

Also, I would assume that a camera's resolution is determined by how many B&W receptors you have and their density? If this is true, is there any correspondence between a camera that says 1 megapixel and its associated number of B&W receptors?

The only other question I have is: why does our brain interpret 3 B&W receptors as 1 pixel, and not as 3 pixels that are just green, red and blue?

As well, we have been talking about light as the source for imaging, but how would they do it with RF, microwave, infrared, X-rays, etc.? Are there different types of the 3 B&W receptors for these frequencies? Or do they do it in a totally different way?
 

Wendy

Joined Mar 24, 2008
23,408
Getting away from animals (humans included), resolution can be improved mechanically (and electronically) by averaging. A lot of modern test equipment uses this. Basically you take many samples and average each pixel. This assumes a static picture, but the average is much clearer and higher-resolution than a single frame. If you can process the stream with a computer and compensate for movement, it gets better still, since different pixels see different areas and overlap differently. You are able to extract more information that way.
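The averaging idea can be sketched like this: many noisy readings of one pixel in a static scene, averaged together. The 'true' brightness and the noise are simulated with a fixed seed; nothing here is real sensor data.

```python
import random

random.seed(42)          # fixed seed so the simulated readings are repeatable
TRUE_PIXEL = 100.0       # the real brightness of one pixel in a static scene

# 1000 noisy readings of the same pixel, one per frame (noise std dev = 10).
frames = [TRUE_PIXEL + random.gauss(0, 10) for _ in range(1000)]

averaged = sum(frames) / len(frames)   # pixel-wise average over all frames

# Averaging N frames shrinks random noise by about sqrt(N), so the averaged
# value lands much closer to the true brightness than a typical single frame.
print(abs(averaged - TRUE_PIXEL) < 2.0)  # -> True
```

With 1000 frames the residual noise is roughly 10/sqrt(1000) ≈ 0.3, versus 10 for a single frame, which is why the averaged picture looks so much cleaner.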
 

radiohead

Joined May 28, 2009
514
You can make your own thermal camera. Take a standard CCTV camera and obtain one of those Peltier junctions (really hot on one side, really cold on the other), and put the cold side against the video processor. The cold against the processor will trick it into sensing warm objects instead of just objects... For my experiment I used an Everfocus camera and an old TV set as the monitor. It worked fine for me as long as the temperature differences were at least 10-15 degrees Fahrenheit. I don't have any schematics for this; it was one of those "gee whiz, what if" experiments...
 